CN107402819A - Management method and system for a client cache - Google Patents

Management method and system for a client cache

Info

Publication number
CN107402819A
Authority
CN
China
Prior art keywords
data
cache
client
SSD disk
invalidation processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710661560.5A
Other languages
Chinese (zh)
Inventor
魏盟 (Wei Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201710661560.5A
Publication of CN107402819A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Abstract

This application discloses a management method for a client cache, comprising: adding a timestamp to data fetched into the client cache, the timestamp being the time at which the data was acquired; performing a first invalidation pass on the data to obtain recycled data, and moving the recycled data to an SSD disk; and performing a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, then deleting the released data from the SSD disk so as to free the storage space it occupies. The method offers a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files; it can markedly improve how quickly cache space is reclaimed and reused, lets the hardware deliver its full performance, and improves the user experience. The application also discloses a client-cache management system with the same beneficial effects.

Description

Management method and system for a client cache
Technical field
This application relates to the technical field of cache management, and in particular to a management method and system for a client cache.
Background art
To use memory space more efficiently, it must be managed accordingly, so that data waiting to be written retains enough space to be merged and consolidated, while data being read hits in memory as often as possible, thereby improving read/write performance.
At present the industry generally manages the cache (the portion of memory space used for caching) in a custom, uniform way. This leaves read and write data mixed together, which makes the reclamation and reuse of the cache inefficient, lets the bandwidth of concurrent file streams interfere with one another, and ultimately leaves read/write performance short of what the hardware itself can deliver. This is especially true for scenarios with constant reads and writes, such as broadcast media assets and video surveillance, which place higher requirements on read/write stability and therefore need the cache to be used efficiently and handled in a timely manner.
How to provide a client-cache management mechanism that offers a larger cache space, more efficient use and more timely reclamation is therefore a problem that those skilled in the art urgently need to solve.
Summary of the invention
The purpose of this application is to provide a management method and system for a client cache that offer a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files, so that cache space is reclaimed and reused more efficiently, the hardware delivers its full performance, and the user experience improves.
To solve the above technical problem, this application provides a management method for a client cache, the management method comprising:
adding a timestamp to the data fetched into the client cache; wherein the timestamp is the time at which the data was acquired;
performing a first invalidation pass on the data to obtain recycled data, and moving the recycled data to an SSD disk;
performing a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, and deleting the released data from the SSD disk so as to free the storage space the released data occupies.
Optionally, before adding the timestamp to the data fetched into the client cache, the method further comprises:
the client cache initiating a data fetch request to a back-end storage server;
the back-end storage server returning hot data and read-ahead data to the client cache according to the fetch request; wherein the read-ahead data is sorted by its sequential characteristics.
Optionally, after adding the timestamp to the data fetched into the client cache, the method further comprises:
sorting the data in the chronological order of the timestamps to obtain a data sorting table.
Optionally, performing the first invalidation pass on the data to obtain recycled data comprises:
using the timestamp added to the data, judging by means of the LRU algorithm whether the acquisition time of the data is earlier than a preset time;
if the acquisition time is earlier than the preset time, judging the data to be recycled data.
Optionally, moving the recycled data to the SSD disk comprises:
writing onto the SSD disk the recycled data transferred from the client cache;
generating corresponding metadata from the description information of the recycled data;
writing the metadata onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
Optionally, performing the second invalidation pass on the recycled data moved to the SSD disk to obtain released data comprises:
judging whether the recycled data written to the SSD disk was written sequentially;
if it was written sequentially, merging the recycled data into units of a preset length to obtain full stripes of that length;
performing the second invalidation pass on the full stripes to obtain the released data.
This application also provides a management system for a client cache, the management system comprising:
a timestamp adding unit, configured to add a timestamp to data fetched into the client cache; wherein the timestamp is the time at which the data was acquired;
a first invalidation unit, configured to perform a first invalidation pass on the data to obtain recycled data and to move the recycled data to an SSD disk;
a second invalidation unit, configured to perform a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, and to delete the released data from the SSD disk so as to free the storage space the released data occupies.
Optionally, the management system further comprises:
a fetch request initiating unit, configured for the client cache to initiate a data fetch request to a back-end storage server;
a high-hit-rate data returning unit, configured for the back-end storage server to return hot data and read-ahead data to the client cache according to the fetch request; wherein the read-ahead data is sorted by its sequential characteristics.
Optionally, the first invalidation unit comprises:
a judging subunit, configured to use the timestamp added to the data to judge, by means of the LRU algorithm, whether the acquisition time of the data is earlier than a preset time;
a recycled-data judging subunit, configured to judge the data to be recycled data if its acquisition time is earlier than the preset time;
a recycled-data writing subunit, configured to write onto the SSD disk the recycled data transferred from the client cache;
a metadata generating subunit, configured to generate corresponding metadata from the description information of the recycled data;
a metadata writing subunit, configured to write the metadata onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
Optionally, the second invalidation unit comprises:
a sequential-write judging subunit, configured to judge whether the recycled data written to the SSD disk was written sequentially;
a merging subunit, configured to merge the recycled data into units of a preset length to obtain full stripes of that length;
a released-data obtaining subunit, configured to perform the second invalidation pass on the full stripes to obtain the released data.
In the client-cache management method provided by this application, a timestamp is added to the data fetched into the client cache; a first invalidation pass is performed on the data to obtain recycled data, which is moved to an SSD disk; a second invalidation pass is then performed on the recycled data moved to the SSD disk to obtain released data, which is deleted from the SSD disk so as to free the storage space it occupies.
Obviously, in the technical solution provided by this application, the SSD disk serves as a second-level cache behind the limited memory space, which greatly expands the cache space, while the two layers of invalidation processing promptly move or release the data awaiting treatment and thus keep the hit rate of the data held in the cache high. The management method offers a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files; it markedly improves how quickly cache space is reclaimed and reused, lets the hardware deliver its full performance, and improves the user experience. This application also provides a client-cache management system with the same beneficial effects, which will not be described again here.
Brief description of the drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are introduced briefly below. Obviously, the drawings described below are only embodiments of this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a client-cache management method provided by an embodiment of this application;
Fig. 2 is a flow chart of another client-cache management method provided by an embodiment of this application;
Fig. 3 is a flow chart of another client-cache management method provided by an embodiment of this application;
Fig. 4 is a flow chart of another client-cache management method provided by an embodiment of this application;
Fig. 5 is a structural block diagram of a client-cache management system provided by an embodiment of this application;
Fig. 6 is a structural block diagram of another client-cache management system provided by an embodiment of this application.
Detailed description of the embodiments
The core of this application is to provide a management method and system for a client cache that offer a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files, so that cache space is reclaimed and reused more efficiently, the hardware delivers its full performance, and the user experience improves.
To make the purpose, technical solution and advantages of the embodiments of this application clearer, the technical solution in the embodiments is described below completely and clearly with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Referring to Fig. 1, which is a flow chart of a client-cache management method provided by an embodiment of this application.
The method specifically comprises the following steps:
S101: add a timestamp to the data fetched into the client cache; wherein the timestamp is the time at which the data was acquired.
This step adds a timestamp to the data fetched into the client cache, the timestamp being the time at which that data was acquired. In other words, each piece of data obtained by the client cache is tagged with the time it arrived in the cache, much as the resource manager in a Windows system tags every file with a creation time and a last-modified time, so that a later step can decide from this timestamp whether to perform invalidation processing.
There are many ways to add the timestamp to the fetched data: for example, it may exist as an extra tag attached to the data file, or the data file may be repackaged with the timestamp added during packaging, and so on. No specific limitation is imposed here; it is only necessary that later steps can sort the data files by time using the timestamp and then make the invalidation decision.
Further, which data the back-end storage server places into the client cache may be decided by a given policy, for example a heat-based or access-priority policy that stores hot data, i.e. the data accessed most per unit time, in the client cache. Accesses to the storage system may also be tracked in real time so that data files about to be accessed are prepared in advance, thereby speeding up their access.
Further, a data sorting table may be generated for the data files fetched into the client cache according to the chronological order of their timestamps, which makes it easier for the invalidation pass in later steps to look files up by time: once a data file's timestamp is found to satisfy the invalidation condition, the files that precede it in time need not be checked one by one, which saves detection steps and time.
S102: perform a first invalidation pass on the data to obtain recycled data, and move the recycled data to the SSD disk.
On the basis of S101, this step performs the first invalidation pass on the data fetched into the client cache to obtain recycled data, and moves that recycled data to the SSD.
The reason an invalidation pass is run over the data fetched into the client cache is that the client cache occupies the system's memory space, and memory space differs from storage space: the latter benefits from easy expansion and can be made almost arbitrarily large, while the former is far smaller. The two also serve different purposes: memory mainly caches data, generally holding only what currently running programs need, whereas storage can be understood as the database. In other words, caching means choosing a part of the latter to place in the former in order to satisfy high-speed read requirements, since the read/write performance of memory is far higher than that of a hard disk.
Therefore, to keep memory in a good running state, data that does not belong to currently running programs must be cleaned out regularly, so that enough free space remains to support other applications that may start. Without such invalidation processing, once memory usage grows too high, most operations can no longer be carried out.
When the cache needs to be cleaned (for example, when its space usage approaches a critical threshold), an algorithm decides which data to evict, that is, to invalidate. The common eviction algorithms are the following:
FIFO (First In, First Out): judged by the time the data was stored; the data stored longest ago is evicted first;
LRU (Least Recently Used): judged by the time of most recent use; the data unused for the longest time is evicted first;
LFU (Least Frequently Used): judged by access frequency; the data accessed least often within a period is evicted first.
The LRU algorithm is a page-replacement algorithm used in memory management: the block that is in memory but has gone unused for the longest time is the LRU block, and the operating system evicts it to make room for other data. How to provide the most process resources with the least memory has always been an important research direction, and virtual memory management is the most widespread and most successful approach: with limited physical memory, part of external storage is extended as virtual memory, and real memory holds only the information needed by the current run. This greatly extends the capability of memory and dramatically improves a computer's concurrency. Virtual paged memory management divides the space a process needs into pages, keeps only the currently required pages in memory, and leaves the rest on external storage.
Every advantage has its cost: virtual paged memory management enlarges the memory available to a process but lengthens its running time, because during execution some information inevitably has to be swapped between external storage and memory, and the slowness of external storage makes the time spent on this step impossible to ignore. Choosing an algorithm that reduces the number of reads from external storage as much as possible is therefore quite significant.
To address the slowness of external storage, this application uses an SSD disk (solid-state drive). Unlike a traditional mechanical hard disk, an SSD uses all-chip storage and reads and writes quickly. This step uses the SSD as an intermediate medium, exploiting its property of not losing data on power loss and its higher read/write performance, and treats it as a second-level cache, which effectively solves the problem of limited memory space while also bringing markedly improved write performance and reliability.
S103: perform a second invalidation pass on the recycled data to obtain released data, and delete the released data from the SSD disk so as to free the storage space it occupies.
On the basis of S102, this step takes the recycled data that was obtained from the client cache by the first invalidation pass and moved to the SSD disk and, after some time has passed, performs a second invalidation pass on the SSD disk to obtain released data, then deletes the released data from the SSD disk, that is, frees the storage space those released data occupy.
The specific algorithm used in the second invalidation pass may be the same as that of the first pass, although, given that the executing entity differs, a more suitable eviction algorithm may also be chosen. The biggest difference between the two passes is how the expiration time is set: because memory space is orders of magnitude smaller than an SSD disk or a mechanical hard disk, the expiration time of the first pass must be kept relatively short to keep the limited memory space in good running condition, and is typically set to around 5 seconds; the storage space of the SSD disk is greatly expanded compared with memory, so, considering the difference in magnitude, the expiration time of the second pass can be relatively long and is typically set to around 1 minute.
Of course, the above values are only typical parameters and are not universal settings for every device model; no specific limitation is imposed here, and they should be set by weighing the actual requirements, the device model, the memory size, the capacity and read/write speed of the SSD disk, and other factors.
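The second invalidation pass, together with the longer SSD-side window, can be sketched the same way; the 60-second value below is only the illustrative "1 minute" figure above, and real deployments would tune both windows as the previous paragraph cautions.

```python
import os
import time

SSD_WINDOW_SECONDS = 60.0  # illustrative SSD-side expiration window

def second_invalidation_pass(ssd_dir, now=None):
    """Delete files on the SSD tier that are older than the SSD-side window, freeing their space."""
    now = time.time() if now is None else now
    released = []
    for name in os.listdir(ssd_dir):
        path = os.path.join(ssd_dir, name)
        if now - os.path.getmtime(path) > SSD_WINDOW_SECONDS:
            os.remove(path)        # empty the storage space the released data occupied
            released.append(name)
    return released
```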
Based on the above technical solution, the client-cache management method provided by this embodiment of the application uses the SSD disk as a second-level cache behind the limited memory space, greatly expanding the cache space, while the two layers of invalidation processing promptly move or release the data awaiting treatment and thus keep the hit rate of the data held in the cache high. The method offers a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files; it markedly improves how quickly cache space is reclaimed and reused, lets the hardware deliver its full performance, and improves the user experience.
Referring to Fig. 2, which is a flow chart of another client-cache management method provided by an embodiment of this application.
This embodiment adds, before S101 of the previous embodiment, a description of how the data files in the back-end storage server are fetched into the client cache. The other steps are substantially the same as in the previous embodiment; for the common parts, refer to the relevant portions of that embodiment, which are not repeated here.
The method specifically comprises the following steps:
S201: the client cache initiates a data fetch request to the back-end storage server;
S202: the back-end storage server returns hot data and read-ahead data to the client cache according to the fetch request; wherein the read-ahead data is sorted by its sequential characteristics;
S203: add a timestamp to the data fetched into the client cache, sort the data in the chronological order of the timestamps, and obtain a data sorting table.
In this embodiment, the client cache first issues a data fetch request to the back-end storage server; the back-end storage server then, based on the received request, sends hot data selected by a heat policy together with read-ahead data selected by read-ahead priority to the client cache. After the client cache has successfully obtained this data, a timestamp recording the acquisition time is added to each item and, for the convenience of subsequent invalidation processing, the data files are sorted in the chronological order of their timestamps to obtain a data sorting table.
The forms the data fetch request and the timestamp may take are varied and are not specifically limited here; they were discussed in detail under S101, to whose relevant portion reference may be made. It is only necessary that the required follow-up functions can be realized, so they are not described again here.
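The exchange in S201 and S202 can be pictured with the toy function below; the flat dictionary standing in for the back-end store, the key ordering used as the "sequential characteristic", and the read-ahead depth are all assumptions made for the example.

```python
def serve_fetch_request(backend_store, requested_key, readahead=3):
    """Return the requested (hot) item plus a few sequentially following items, already in order."""
    keys = sorted(backend_store)                 # assume key order reflects the on-disk sequence
    start = keys.index(requested_key)
    window = keys[start:start + 1 + readahead]   # the requested item plus its read-ahead window
    return [(k, backend_store[k]) for k in window]

# e.g. serve_fetch_request({"seg001": b"a", "seg002": b"b", "seg003": b"c"}, "seg001")
# returns the hot segment followed by its read-ahead neighbours, sorted by sequence.
```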
Referring to Fig. 3, which is a flow chart of another client-cache management method provided by an embodiment of this application.
This embodiment specifically defines how, in S102 of the earlier embodiment, the first invalidation pass is carried out to obtain recycled data, and what kind of data is moved to the SSD disk. The other steps are substantially the same as in that embodiment; for the common parts, refer to its relevant portions, which are not repeated here.
The method specifically comprises the following steps:
S301: add a timestamp to the data fetched into the client cache;
S302: using the timestamp, judge by means of the LRU algorithm whether the acquisition time of the data is earlier than a preset time;
This step uses the LRU algorithm to judge whether the acquisition time represented by the timestamp added to a data file held in the client cache is earlier than a preset time. The preset time is computed for the LRU algorithm; since the data sits in the limited memory space, the expiration window can be relatively short.
A reasonable example can be constructed here. Suppose the client cache holds three data files obtained from the back-end storage server whose timestamps, in order of arrival, are 13:20, 13:25 and 13:30 of the same day, and suppose an LRU pass executed at 13:32 uses an expiration window of five minutes (longer than the few seconds suggested earlier, chosen here so the arithmetic is easy to follow). The cutoff is then 13:32 − 0:05 = 13:27, so every data file in the client cache whose timestamp is earlier than 13:27 undergoes invalidation processing; that is, the first and second data files, stored earliest in time, are both processed because they precede the preset time.
S303: judge that the data is recycled data;
On the basis of S302, it can be judged that the first and second data files, stored earliest in time, are both subject to invalidation because they precede the preset time; in other words, the files with timestamps 13:20 and 13:25 are judged to be recycled data.
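The arithmetic in this example can be checked directly; the snippet below reproduces it with datetime objects (the calendar date is arbitrary, and the five-minute window matches the calculation above).

```python
from datetime import datetime, timedelta

stamps = {
    "file1": datetime(2017, 8, 4, 13, 20),
    "file2": datetime(2017, 8, 4, 13, 25),
    "file3": datetime(2017, 8, 4, 13, 30),
}
now = datetime(2017, 8, 4, 13, 32)
cutoff = now - timedelta(minutes=5)    # 13:32 - 0:05 = 13:27

recycled = sorted(name for name, t in stamps.items() if t < cutoff)
print(recycled)  # ['file1', 'file2'] -- only the files acquired before 13:27 are recycled
```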
S304: write the recycled data to the SSD disk, and obtain the corresponding metadata from the recycled data;
On the basis of S303, this step recycles the first and second data files onto the SSD disk and analyses them to obtain their metadata.
Metadata is defined as data that describes data: descriptive information about data and information resources. In other words, metadata is data that describes other data, or structured information about a given resource. Its purposes are to identify resources, evaluate resources, track changes to resources in use, make large volumes of networked data simple and efficient to manage, and enable the effective discovery, lookup, integrated organization and management of information resources.
Because metadata is itself data, it can be stored and retrieved in a database with the same methods as ordinary data. If the organization providing the data elements also provides metadata describing them, the use of those data elements becomes accurate and efficient: when using the data, a user can first consult its metadata to obtain the information needed.
S305: write the metadata to the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
On the basis of S304, this step also writes the generated metadata describing the first and second data files onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata and the recycled data obtained again. Replay can take many forms; a vivid example: suppose a data file has been successfully downloaded through a download link, and both the link and the data file are stored on the SSD disk; if the file is then lost because of an abnormality, it can be downloaded again through the original link.
Of course, other ways of preserving metadata also exist, the reliability of the data file being guaranteed through the properties of that metadata; no specific limitation is imposed here, and the choice should depend on the specific situation.
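One simple way to picture the metadata and the replay is a small JSON record written next to each recycled file, as in the sketch below; the record's fields and the re-fetch callback are assumptions for illustration, not the patent's actual metadata format.

```python
import json
import os

def write_with_metadata(ssd_dir, key, payload, source_url):
    """Write the recycled data plus a small metadata record describing where it came from."""
    os.makedirs(ssd_dir, exist_ok=True)
    with open(os.path.join(ssd_dir, key), "wb") as f:
        f.write(payload)
    meta = {"key": key, "length": len(payload), "source": source_url}
    with open(os.path.join(ssd_dir, key + ".meta.json"), "w") as f:
        json.dump(meta, f)

def replay_from_metadata(ssd_dir, key, fetch):
    """If the data file is missing, use its metadata to obtain the data again."""
    data_path = os.path.join(ssd_dir, key)
    if os.path.exists(data_path):
        with open(data_path, "rb") as f:
            return f.read()
    with open(os.path.join(ssd_dir, key + ".meta.json")) as f:
        meta = json.load(f)
    return fetch(meta["source"])   # re-download through the recorded link
```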
Referring to Fig. 4, which is a flow chart of another client-cache management method provided by an embodiment of this application.
This embodiment specifically defines how, in S103 of the earlier embodiment, the second invalidation pass is carried out to obtain released data. The other steps are substantially the same as in that embodiment; for the common parts, refer to its relevant portions, which are not repeated here.
The method specifically comprises the following steps:
S401: move the recycled data to the SSD disk;
S402: judge whether the recycled data written to the SSD disk was written sequentially;
This step judges whether the recycled data written to the SSD disk has a sequential relationship, i.e. whether it was written in order. There are many ways to make this judgement, and no specific limitation is imposed here; a suitable method should be chosen according to the actual situation.
S403: merge the recycled data into units of a preset length to obtain full stripes of that length;
S404: perform the second invalidation pass on the full stripes to obtain the released data.
S403 and S404 build on the judgement in S402 that the recycled data written to the SSD disk was written sequentially: the sequentially written data files are merged, according to their relatedness, into units of a preset length to obtain full stripes of that length. The reason for the merge lies in the storage characteristics of the SSD disk: a large number of small data files drags down its read/write speed, and the preset length is likewise chosen to suit the configuration of the specific SSD.
S404 then performs the second invalidation pass on the full stripes stored on the SSD disk to obtain the released data. The difference between the first and second invalidation passes was described in detail under S103 and lies mainly in the judgement of the expiration time; it is not specifically limited here and should be decided by weighing the capacity, model, read/write performance and other characteristics of the specific SSD disk in the actual situation.
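A minimal sketch of the merge in S403 follows, with the stripe length as a parameter (the patent leaves both the length and the grouping policy to be chosen per device).

```python
def merge_into_stripes(chunks, stripe_len):
    """Concatenate sequentially written chunks and cut them into full stripes of stripe_len bytes.

    Returns (stripes, remainder): the remainder is held back until enough data
    arrives to fill the next full stripe.
    """
    buffer = b"".join(chunks)
    full = len(buffer) // stripe_len
    stripes = [buffer[i * stripe_len:(i + 1) * stripe_len] for i in range(full)]
    return stripes, buffer[full * stripe_len:]

# e.g. merge_into_stripes([b"ab", b"cd", b"ef"], 4) -> ([b"abcd"], b"ef")
```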
Based on the above technical solution, the client-cache management method provided by this embodiment of the application uses the SSD disk as a second-level cache behind the limited memory space, greatly expanding the cache space, while the two layers of invalidation processing promptly move or release the data awaiting treatment and thus keep the hit rate of the data held in the cache high. The method offers a larger cache space, more efficient data reads and writes, and more timely reclamation and invalidation of files; it markedly improves how quickly cache space is reclaimed and reused, lets the hardware deliver its full performance, and improves the user experience.
Since the possible situations are complex and cannot all be enumerated and illustrated, those skilled in the art will appreciate that combining the basic method principles provided by this application with actual conditions yields many further examples which, as long as they require no sufficient creative effort, fall within the protection scope of this application.
Referring to Fig. 5, which is a structural block diagram of a client-cache management system provided by an embodiment of this application.
The system may comprise:
a timestamp adding unit 100, configured to add a timestamp to data fetched into the client cache; wherein the timestamp is the time at which the data was acquired;
a first invalidation unit 200, configured to perform a first invalidation pass on the data to obtain recycled data and to move the recycled data to the SSD disk;
a second invalidation unit 300, configured to perform a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, and to delete the released data from the SSD disk so as to free the storage space the released data occupies.
The first invalidation unit 200 comprises:
a judging subunit, configured to use the timestamp of the data to judge, by means of the LRU algorithm, whether the acquisition time of the data is earlier than a preset time;
a recycled-data judging subunit, configured to judge the data to be recycled data if its acquisition time is earlier than the preset time;
a recycled-data writing subunit, configured to write onto the SSD disk the recycled data transferred from the client cache;
a metadata generating subunit, configured to generate corresponding metadata from the description information of the recycled data;
a metadata writing subunit, configured to write the metadata onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
The second invalidation unit 300 comprises:
a sequential-write judging subunit, configured to judge whether the recycled data written to the SSD disk was written sequentially;
a merging subunit, configured to merge the recycled data into units of a preset length to obtain full stripes of that length;
a released-data obtaining subunit, configured to perform the second invalidation pass on the full stripes to obtain the released data.
Further, the management system may also comprise:
a fetch request initiating unit, configured for the client cache to initiate a data fetch request to the back-end storage server;
a high-hit-rate data returning unit, configured for the back-end storage server to return hot data and read-ahead data to the client cache according to the fetch request; wherein the read-ahead data is sorted by its sequential characteristics.
In conjunction with Fig. 6, which is a structural block diagram of another client-cache management system provided by an embodiment of this application, the units above can be applied in the following concrete example.
The concrete practice is divided into a read part and a write part. For read operations:
(1) the client-side memory temporarily holds the data read from the back-end storage server, including hot data and data read ahead according to its sequential characteristics; this data will greatly raise the hit rate of the user's future read operations and speed up the front-end response;
(2) a timestamp T0 is added, stripe by stripe, to the data read in this way, and the data is sorted by time;
(3) using the LRU algorithm, after a configurable interval △T the current buffer region is traversed, the data blocks earlier than the current time minus △T are flushed to the region reserved on the SSD disk, and the reclaimed memory resources are given to the data about to be read;
(4) the newly read data occupies the reclaimed memory resources; the data hit by a front-end application may be in the cache or may be on the SSD disk, and in both cases it can be extracted far more efficiently than directly from back-end storage; if the hit is on the SSD disk, the data is extracted into the cache and held there temporarily;
(5) the data on the SSD disk is also invalidated by the LRU algorithm; at that point the data is released completely, and future reads of it will have to fetch it again from back-end storage. The read path as a whole is sketched below.
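Taken together, the read flow amounts to a three-level lookup; the sketch below is only a schematic of that flow, with dictionaries standing in for the memory cache, the SSD tier and the back-end store.

```python
def cached_read(key, memory_cache, ssd_tier, backend_store):
    """Read path: try the memory cache, then the SSD tier, then the back-end store."""
    if key in memory_cache:
        return memory_cache[key]          # fastest case: hit in RAM
    if key in ssd_tier:
        data = ssd_tier[key]
        memory_cache[key] = data          # promote the SSD hit back into the memory cache
        return data
    data = backend_store[key]             # miss everywhere: fetch from back-end storage
    memory_cache[key] = data              # keep it (with a fresh timestamp) for future reads
    return data
```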
For write operations:
(1) in the cache, the write data issued by a front-end application is flushed directly onto the SSD disk, and the associated metadata is recorded on the SSD disk for any power-loss recovery that may be needed, exploiting the SSD's property of not losing data on power loss; the write request then returns and the front end regards the I/O as complete. Because the SSD disk has no seek time and the data lands on it quickly, write latency is reduced effectively;
(2) the data on the SSD disk is first pre-processed: for sequentially written files, the data is merged into full stripes of a certain length len (the stripe size is configurable);
(3) after a configurable interval △T′ (distinct from the interval above), the data on the SSD disk is flushed to back-end storage; the write is confirmed complete once the ACK returned after the data lands on disk is received, and the corresponding data on the SSD disk is then deleted;
(4) if a power loss occurs or a failure forces a restart, after storage recovery the system can extract the metadata from the SSD disk, replay the flow and complete the write. The write path is sketched below.
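The write flow can be sketched the same way: the data and its metadata land on the SSD tier, the caller is acknowledged at once, and a delayed flush later pushes the data to the back end and deletes the SSD copy; all structures here are placeholders chosen for illustration.

```python
def cached_write(key, data, ssd_tier, ssd_meta, pending):
    """Write path: persist the data and its metadata on the SSD tier, then acknowledge immediately."""
    ssd_tier[key] = data
    ssd_meta[key] = {"key": key, "length": len(data)}  # enough to replay after a power loss
    pending.append(key)                                # queue the key for the delayed flush
    return "ack"                                       # the front end treats the I/O as complete

def flush_to_backend(ssd_tier, ssd_meta, backend_store, pending):
    """Delayed flush: move pending writes to the back end, then delete the SSD-side copies."""
    for key in list(pending):
        backend_store[key] = ssd_tier[key]   # confirmed once the back end has the data
        del ssd_tier[key], ssd_meta[key]     # delete the corresponding data on the SSD
        pending.remove(key)
```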
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts they have in common the embodiments may be referred to one another. The device disclosed in the embodiments is described relatively briefly because it corresponds to the method disclosed in the embodiments; for the relevant parts, see the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of this application.
Specific examples have been used herein to explain the principles and implementations of this application; the description of the above embodiments is only intended to help in understanding the method of this application and its core idea. It should be pointed out that those of ordinary skill in the art can make various improvements and modifications to this application without departing from its principles, and these improvements and modifications also fall within the scope of the claims of this application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises it.

Claims (10)

  1. A management method for a client cache, characterized by comprising:
    adding a timestamp to data fetched into the client cache; wherein the timestamp is the time at which the data was acquired;
    performing a first invalidation pass on the data to obtain recycled data, and moving the recycled data to an SSD disk;
    performing a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, and deleting the released data from the SSD disk so as to free the storage space the released data occupies.
  2. The management method according to claim 1, characterized in that, before adding the timestamp to the data fetched into the client cache, the method further comprises:
    the client cache initiating a data fetch request to a back-end storage server;
    the back-end storage server returning hot data and read-ahead data to the client cache according to the data fetch request; wherein the read-ahead data is sorted by its sequential characteristics.
  3. The management method according to claim 2, characterized in that, after adding the timestamp to the data fetched into the client cache, the method further comprises:
    sorting the data in the chronological order of the timestamps to obtain a data sorting table.
  4. The management method according to any one of claims 1 to 3, characterized in that performing the first invalidation pass on the data to obtain recycled data comprises:
    using the timestamp added to the data, judging by means of the LRU algorithm whether the acquisition time of the data is earlier than a preset time;
    if the acquisition time is earlier than the preset time, judging the data to be the recycled data.
  5. The management method according to claim 4, characterized in that moving the recycled data to the SSD disk comprises:
    writing onto the SSD disk the recycled data transferred from the client cache;
    generating corresponding metadata from the description information of the recycled data;
    writing the metadata onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
  6. The management method according to claim 5, characterized in that performing the second invalidation pass on the recycled data moved to the SSD disk to obtain released data comprises:
    judging whether the recycled data written to the SSD disk was written sequentially;
    if the recycled data was written sequentially, merging the recycled data into units of a preset length to obtain full stripes of that length;
    performing the second invalidation pass on the full stripes to obtain the released data.
  7. A management system for a client cache, characterized by comprising:
    a timestamp adding unit, configured to add a timestamp to data fetched into the client cache; wherein the timestamp is the time at which the data was acquired;
    a first invalidation unit, configured to perform a first invalidation pass on the data to obtain recycled data and to move the recycled data to an SSD disk;
    a second invalidation unit, configured to perform a second invalidation pass on the recycled data moved to the SSD disk to obtain released data, and to delete the released data from the SSD disk so as to free the storage space the released data occupies.
  8. The management system according to claim 7, characterized by further comprising:
    a fetch request initiating unit, configured for the client cache to initiate a data fetch request to a back-end storage server;
    a high-hit-rate data returning unit, configured for the back-end storage server to return hot data and read-ahead data to the client cache according to the data fetch request; wherein the read-ahead data is sorted by its sequential characteristics.
  9. The management system according to claim 7 or 8, characterized in that the first invalidation unit comprises:
    a judging subunit, configured to use the timestamp added to the data to judge, by means of the LRU algorithm, whether the acquisition time of the data is earlier than a preset time;
    a recycled-data judging subunit, configured to judge the data to be the recycled data if the acquisition time is earlier than the preset time;
    a recycled-data writing subunit, configured to write onto the SSD disk the recycled data transferred from the client cache;
    a metadata generating subunit, configured to generate corresponding metadata from the description information of the recycled data;
    a metadata writing subunit, configured to write the metadata onto the SSD disk, so that in abnormal situations the flow can be replayed according to the metadata to obtain the recycled data again.
  10. The management system according to claim 9, characterized in that the second invalidation unit comprises:
    a sequential-write judging subunit, configured to judge whether the recycled data written to the SSD disk was written sequentially;
    a merging subunit, configured to merge the recycled data into units of a preset length to obtain full stripes of that length;
    a released-data obtaining subunit, configured to perform the second invalidation pass on the full stripes to obtain the released data.
CN201710661560.5A 2017-08-04 2017-08-04 The management method and system of a kind of client-cache Pending CN107402819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710661560.5A CN107402819A (en) 2017-08-04 2017-08-04 The management method and system of a kind of client-cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710661560.5A CN107402819A (en) 2017-08-04 2017-08-04 The management method and system of a kind of client-cache

Publications (1)

Publication Number Publication Date
CN107402819A true CN107402819A (en) 2017-11-28

Family

ID=60401938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710661560.5A Pending CN107402819A (en) 2017-08-04 2017-08-04 The management method and system of a kind of client-cache

Country Status (1)

Country Link
CN (1) CN107402819A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100100664A1 (en) * 2008-10-21 2010-04-22 Hitachi, Ltd. Storage system
CN101794259A (en) * 2010-03-26 2010-08-04 成都市华为赛门铁克科技有限公司 Data storage method and device
CN102521147A (en) * 2011-11-17 2012-06-27 曙光信息产业(北京)有限公司 Management method by using rapid non-volatile medium as cache
CN103279562A (en) * 2013-06-09 2013-09-04 网易(杭州)网络有限公司 Method and device for second-level cache of database and database storage system
CN105573669A (en) * 2015-12-11 2016-05-11 上海爱数信息技术股份有限公司 IO read speeding cache method and system of storage system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109032505A (en) * 2018-06-26 2018-12-18 深圳忆联信息系统有限公司 Data read-write method, device, computer equipment and storage medium with timeliness
CN110704174A (en) * 2018-07-09 2020-01-17 中国移动通信有限公司研究院 Memory release method and device, electronic equipment and storage medium
CN113672562A (en) * 2020-05-14 2021-11-19 北京字节跳动网络技术有限公司 Data deleting method, device, equipment and storage medium
CN113672562B (en) * 2020-05-14 2024-01-16 抖音视界有限公司 Data deleting method, device, equipment and storage medium
CN112764690A (en) * 2021-02-03 2021-05-07 北京同有飞骥科技股份有限公司 Distributed storage system

Similar Documents

Publication Publication Date Title
CN102782683B (en) Buffer pool extension for database server
CN103049397B (en) A kind of solid state hard disc inner buffer management method based on phase transition storage and system
CN104850358B (en) A kind of magneto-optic electricity mixing storage system and its data acquisition and storage method
CN107402819A (en) The management method and system of a kind of client-cache
CN103336849B (en) A kind of database retrieval system improves the method and device of retrieval rate
CN106662981A (en) Storage device, program, and information processing method
CN103631536B (en) A kind of method utilizing the invalid data of SSD to optimize RAID5/6 write performance
CN107643880A (en) The method and device of file data migration based on distributed file system
CN109947363A (en) A kind of data cache method of distributed memory system
CN105242871A (en) Data writing method and apparatus
CN110427158B (en) Writing method of solid state disk and solid state disk
CN103440207A (en) Caching method and caching device
CN102981971B (en) A kind of phase transition storage loss equalizing method of quick response
Lee et al. Eliminating periodic flush overhead of file I/O with non-volatile buffer cache
CN113568582B (en) Data management method, device and storage equipment
CN103544110A (en) Block-level continuous data protection method based on solid-state disc
CN109086141B (en) Memory management method and device and computer readable storage medium
KR20070074836A (en) System and method for managing log information for transaction
CN103049393B (en) Memory headroom management method and device
CN106469123A (en) A kind of write buffer distribution based on NVDIMM, method for releasing and its device
CN109558456A (en) A kind of file migration method, apparatus, equipment and readable storage medium storing program for executing
RU2525752C2 (en) Method and apparatus for storing, reading and writing compound document
CN113377292A (en) Single machine storage engine
CN103019956B (en) A kind of to data cached method of operating and device
CN105446848B (en) The test method and device of the data processing performance of electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171128)