Detailed Description of the Invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of a cached-data processing method provided by an embodiment of the present invention, the method may include the following steps:
S101: detecting a disk space utilization rate and an inode utilization rate;
Specifically, the disk space utilization rate and the index node (inode) utilization rate are checked periodically; for example, by default they are checked once every 10 seconds. After the disk space utilization rate and the inode utilization rate are obtained, it may be determined whether the disk space utilization rate exceeds a first utilization threshold, and whether the inode utilization rate exceeds a second utilization threshold. Both the first utilization threshold and the second utilization threshold are configured in advance.
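The periodic check described above can be sketched in Python using the POSIX `statvfs` call; the function names and the 90% default thresholds are illustrative assumptions, not part of the embodiment:

```python
import os

def disk_usage_ratios(path="/"):
    """Return (space_utilization, inode_utilization) for the filesystem at path."""
    st = os.statvfs(path)
    # f_blocks/f_bfree count data blocks; f_files/f_ffree count inodes.
    space_used = 1.0 - (st.f_bfree / st.f_blocks) if st.f_blocks else 0.0
    inode_used = 1.0 - (st.f_ffree / st.f_files) if st.f_files else 0.0
    return space_used, inode_used

def over_threshold(path="/", space_threshold=0.90, inode_threshold=0.90):
    """Cleanup is triggered when either ratio exceeds its threshold."""
    space_used, inode_used = disk_usage_ratios(path)
    return space_used > space_threshold or inode_used > inode_threshold
```

A scheduler (e.g. a timer firing every 10 seconds) would call `over_threshold` and start cleanup when it returns true.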
S102: when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, obtaining a preset file management list, wherein the file use information respectively corresponding to each target file information item in the file management list is updated in real time;
Specifically, when it is determined that the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, the preset file management list may be obtained. Before step S101, the directory list to be monitored and the cache cleanup policy are first configured; the directory list is then traversed to obtain all file information in it, all of which is identified as target file information, and the target file information items are sorted according to their respective file use information. If the cache cleanup policy is LRU, the file use information includes a file use time, i.e. the time at which the file was last used; if the cache cleanup policy is LFU, the file use information includes a file use count, i.e. the number of times the file was used within a recent period. A file being "used" covers events such as the file being created, written, or accessed.
For example, if the file use information includes a file use time, the target file information corresponding to the file with the earliest use time, or to a file that has never been used at all, is placed at the front, and the target file information items are sorted from the earliest to the most recent file use time, so that the target file information corresponding to the most recently used file is placed at the back. As another example, if the file use information includes a file use count, the target file information with the smallest use count is placed at the front and the items are sorted from fewest to most uses, so that the target file information with the largest use count is placed at the back.
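The two sorting orders can be illustrated with a small sketch; the record layout (`path`, `last_use`, `use_count`) is a hypothetical representation of the target file information, not a structure defined by the embodiment:

```python
def sort_targets(records, policy="LRU"):
    """Order target file records so eviction candidates come first.

    records: list of dicts with keys "path", "last_use", "use_count".
    """
    if policy == "LRU":
        # Earliest last-use time first; most recently used ends up last.
        return sorted(records, key=lambda r: r["last_use"])
    if policy == "LFU":
        # Fewest uses first; most frequently used ends up last.
        return sorted(records, key=lambda r: r["use_count"])
    raise ValueError(f"unknown cache cleanup policy: {policy}")
```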
After the target file information items are sorted, they are added in order to the file management list, so that the target file information items in the file management list are already sorted. Taking LRU as the cache cleanup policy as an example, if the target file information items are sorted from the earliest to the most recent file use time, then the target file information items in the file management list are likewise in that order: the file corresponding to the frontmost target file information has the earliest use time, and the file corresponding to the rearmost target file information has the most recent use time.
Preferably, when it is detected that a file corresponding to a target file information item is used, i.e. the file is created, accessed, or written, the target file information of the used file is determined as file information to be updated; the file use information corresponding to that file information in the file management list is updated, and the file information to be updated is re-sorted within the file management list according to the updated file use information, so that the file use information respectively corresponding to each target file information item in the file management list is updated in real time. For example, if the target file information items in the file management list are sorted according to LRU, from the earliest to the most recent file use time, then when it is detected that the file corresponding to one of the target file information items is used, the file use time of that item may be updated to the most recent time and the item moved to the end of the list. By updating the file use information of the target file information in real time, accuracy during cache cleanup is ensured, avoiding cleaning out recently or frequently used cached data.
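One plausible way to keep the list sorted under such real-time updates, assuming an LRU policy, is an insertion-ordered map whose entries are moved to the tail on each use; the class and method names below are illustrative, not part of the embodiment:

```python
from collections import OrderedDict
import time

class FileManagementList:
    """LRU-ordered sketch: oldest entry first, most recently used last."""

    def __init__(self):
        self._entries = OrderedDict()  # path -> last-use timestamp

    def add(self, path, last_use):
        """Append an entry during the initial (already sorted) build."""
        self._entries[path] = last_use

    def touch(self, path, now=None):
        """Called when a file is created / accessed / written:
        refresh its use time and move its entry to the tail."""
        self._entries[path] = now if now is not None else time.time()
        self._entries.move_to_end(path)

    def oldest(self, n):
        """The first n paths, i.e. the current eviction candidates."""
        return list(self._entries)[:n]
```

With this structure, a "use" event is an O(1) `touch`, so the list never needs a full re-sort.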
S103: obtaining, in order, at least one target file information item from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and cleaning the cached data respectively corresponding to the at least one file information item to be cleaned;
Specifically, after the preset file management list is obtained, at least one target file information item may be obtained in order from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and the cached data respectively corresponding to those items is cleaned. The number of target file information items obtained may be a preset file cleanup quantity. For example, if the target file information items in the file management list are sorted according to LRU from the earliest to the most recent file use time, the list holds 1000 items, and the preset file cleanup quantity is 100, then after the file management list is obtained, the cached data corresponding to the first 100 target file information items in the list may be deleted. In the embodiment of the present invention, the directory list need only be traversed once, in the presetting stage; subsequent cache cleanup does not traverse the directory list again, but merely obtains the specified number of target file information items from the file management list in order, thereby avoiding the heavy overhead of traversing the whole directory list on every cache cleanup.
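Taking a batch from the front of the sorted list and deleting the corresponding cached files might look like the following sketch; `clean_batch` and its stale-entry handling are assumptions for illustration:

```python
import os

def clean_batch(ordered_paths, batch_size=100):
    """Delete the cached files for the first `batch_size` entries of the
    sorted file-management list; returns the paths actually removed."""
    removed = []
    for path in ordered_paths[:batch_size]:
        try:
            os.remove(path)
            removed.append(path)
        except FileNotFoundError:
            # The entry may be stale (file already gone); skip it.
            pass
    return removed
```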
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold; the file use information respectively corresponding to each target file information item in the file management list is updated in real time; then at least one target file information item is obtained in order from the items sorted according to the file use information, as at least one file information item to be cleaned, and the cached data respectively corresponding to those items is cleaned. Since cache cleanup only requires obtaining a specified number of target file information items from the file management list in order, without traversing the specified directory again, the system burden during cache cleanup can be effectively reduced.
Referring now to Fig. 2, which is a schematic flowchart of another cached-data processing method provided by an embodiment of the present invention, the method may include the following steps:
S201: presetting a first utilization threshold, a second utilization threshold, and a file cleanup quantity;
Specifically, the first utilization threshold, the second utilization threshold, and the file cleanup quantity are preset; the directory list to be monitored and the cache cleanup policy may further be preconfigured. The first utilization threshold is the critical point for detecting whether the disk space utilization rate has reached the level at which cache cleanup is needed; the second utilization threshold is the critical point for detecting whether the inode utilization rate has reached that level; and the file cleanup quantity is the number of files to be cleaned on each cache cleanup. The cache cleanup policy includes LRU or LFU.
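The parameters preset in S201 could be grouped into a single configuration object; every field name and default value in this sketch is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class CleanupConfig:
    space_threshold: float = 0.90          # first utilization threshold (disk space)
    inode_threshold: float = 0.90          # second utilization threshold (inodes)
    batch_size: int = 100                  # file cleanup quantity per pass
    watch_dirs: tuple = ("/var/cache/app",)  # directory list to monitor (hypothetical path)
    policy: str = "LRU"                    # cache cleanup policy: "LRU" or "LFU"
```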
S202: traversing the directory list to obtain each target file information item in the directory list;
Specifically, the directory list is traversed in an asynchronous manner to obtain each target file information item in the directory list.
S203: sorting the target file information items according to the file use information respectively corresponding to each target file information item;
Specifically, the target file information items are sorted according to their respective file use information. If the cache cleanup policy is LRU, the file use information includes a file use time, i.e. the time at which the file was last used; if the cache cleanup policy is LFU, the file use information includes a file use count, i.e. the number of times the file was used within a recent period. A file being "used" covers events such as the file being created, written, or accessed.
For example, if the file use information includes a file use time, the target file information corresponding to the file with the earliest use time, or to a file that has never been used at all, is placed at the front, and the target file information items are sorted from the earliest to the most recent file use time, so that the target file information corresponding to the most recently used file is placed at the back. As another example, if the file use information includes a file use count, the target file information with the smallest use count is placed at the front and the items are sorted from fewest to most uses, so that the target file information with the largest use count is placed at the back.
S204: adding the sorted target file information items in order to a file management list;
Specifically, after the target file information items are sorted, they are added in order to the file management list, so that the target file information items in the file management list are already sorted. Taking LRU as the cache cleanup policy as an example, if the target file information items are sorted from the earliest to the most recent file use time, then the target file information items in the file management list are likewise in that order: the file corresponding to the frontmost target file information has the earliest use time, and the file corresponding to the rearmost target file information has the most recent use time.
Preferably, after the file management list including the target file information items is generated, the reads and writes of the files respectively corresponding to the target file information items may be monitored in real time through the Linux file-monitoring interface inotify (inotify is a Linux feature that monitors file system operations such as reading, writing, and creation). When it is detected that a file corresponding to a target file information item is used, i.e. the inotify interface observes the file being created, accessed, or written, the target file information of the used file may be determined as file information to be updated; the file use information corresponding to that file information in the file management list is updated, and the file information to be updated is re-sorted within the file management list according to the updated file use information, so that the file use information respectively corresponding to each target file information item in the file management list is updated in real time. The events monitored by the inotify interface include: IN_ACCESS, IN_MODIFY, IN_CREATE, IN_DELETE, IN_DELETE_SELF, IN_MOVE_SELF, IN_MOVED_FROM, and IN_MOVED_TO. For example, if the target file information items in the file management list are sorted from the earliest to the most recent file use time, then when the inotify interface detects that the file corresponding to one of the items is accessed, the file use time of that item may be updated to the most recent use time and the item moved to the end of the list. The step of updating the file use information corresponding to the target file information may be carried out at any moment during the following steps S205 to S207. By updating the file use information of the target file information in real time, the embodiment of the present invention ensures accuracy during cache cleanup and avoids cleaning out recently or frequently used cached data.
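Assuming events are delivered from the inotify interface, the list update can be sketched as a dispatch on the event mask. The mask values below are the standard `<sys/inotify.h>` constants; the grouping into "use" and "remove" events, and the `handle_event` helper, are assumptions of this sketch:

```python
# inotify event masks from <sys/inotify.h>
IN_ACCESS      = 0x0001
IN_MODIFY      = 0x0002
IN_MOVED_FROM  = 0x0040
IN_MOVED_TO    = 0x0080
IN_CREATE      = 0x0100
IN_DELETE      = 0x0200
IN_DELETE_SELF = 0x0400
IN_MOVE_SELF   = 0x0800

# Events that count as the file being "used" vs. the file going away.
USE_EVENTS    = IN_ACCESS | IN_MODIFY | IN_CREATE | IN_MOVED_TO
REMOVE_EVENTS = IN_DELETE | IN_DELETE_SELF | IN_MOVED_FROM | IN_MOVE_SELF

def handle_event(mgmt_list, path, mask, now):
    """Update the file-management list from one inotify event.
    `mgmt_list` is an OrderedDict-like map of path -> last-use time."""
    if mask & USE_EVENTS:
        mgmt_list[path] = now        # refresh the use time
        mgmt_list.move_to_end(path)  # most recently used goes last
    elif mask & REMOVE_EVENTS:
        mgmt_list.pop(path, None)    # file is gone: drop its entry
```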
S205: detecting the disk space utilization rate and the inode utilization rate;
Specifically, the disk space utilization rate and the inode utilization rate are checked periodically; for example, by default they are checked once every 10 seconds. After the disk space utilization rate and the inode utilization rate are obtained, it may be determined whether the disk space utilization rate exceeds the first utilization threshold, and whether the inode utilization rate exceeds the second utilization threshold.
S206: when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, obtaining the preset file management list, wherein the file use information respectively corresponding to each target file information item in the file management list is updated in real time;
Specifically, when it is determined that the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, the preset file management list may be obtained, i.e. the cached-data cleanup function is started. The target file information items in the obtained file management list have already been sorted according to the file use information, and the file use information respectively corresponding to each item is updated in real time, i.e. the file use time or file use count respectively corresponding to each item is in its latest state.
S207: obtaining, in order, at least one target file information item from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and cleaning the cached data respectively corresponding to the at least one file information item to be cleaned;
Specifically, after the preset file management list is obtained, at least one target file information item may be obtained in order from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and the cached data respectively corresponding to those items is cleaned. The number of target file information items obtained may be the preset file cleanup quantity.
Optionally, after cleaning the current batch of cached data, it may further be determined whether the disk space utilization rate is below the first utilization threshold and whether the inode utilization rate is below the second utilization threshold. If both determinations are affirmative, cleaning of cached data is stopped; if not, at least one new target file information item is obtained in order as at least one new file information item to be cleaned, and the cached data respectively corresponding to those new items is cleaned. It is then detected again whether the disk space utilization rate and the inode utilization rate are both within their limits; if so, cleaning is stopped, otherwise cleaning of cached data continues until neither the disk space utilization rate nor the inode utilization rate exceeds its limit. On each round of cleaning, the number of files cleaned may be the preset file cleanup quantity.
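This optional clean-check-clean loop can be sketched as follows; `usage_fn` and `delete_fn` stand in for the utilization check of S205 and the actual cache deletion, and are assumptions of this sketch:

```python
def clean_until_ok(mgmt_list, delete_fn, usage_fn,
                   space_threshold=0.90, inode_threshold=0.90, batch_size=100):
    """Repeatedly clean `batch_size` entries until both utilization
    ratios drop below their thresholds or the list is exhausted.

    usage_fn() returns (space_used, inode_used);
    delete_fn(path) removes one cached file.
    """
    while mgmt_list:
        space_used, inode_used = usage_fn()
        if space_used < space_threshold and inode_used < inode_threshold:
            break  # both ratios are back under their thresholds
        for path in mgmt_list[:batch_size]:
            delete_fn(path)
        del mgmt_list[:batch_size]  # drop the cleaned entries from the list
    return mgmt_list
```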
For example, if the target file information items in the file management list are sorted according to LRU from the earliest to the most recent file use time, the list holds 1000 items, and the preset file cleanup quantity is 100, then after the file management list is obtained, the cached data corresponding to the first 100 target file information items in the list may be deleted. It is then determined whether the disk space utilization rate is below the first utilization threshold and whether the inode utilization rate is below the second utilization threshold; if both are, cleaning of cached data is stopped, otherwise the cached data corresponding to the 101st to 200th target file information items in the file management list is deleted, and so on, deleting the cached data corresponding to 100 target file information items in order on each round, until neither the disk space utilization rate nor the inode utilization rate exceeds its limit. In the embodiment of the present invention, the directory list need only be traversed once, in the presetting stage; subsequent cache cleanup does not traverse the directory list again, but merely obtains the specified number of target file information items from the file management list in order, thereby avoiding the heavy overhead of traversing the whole directory list on every cache cleanup.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold; the file use information respectively corresponding to each target file information item in the file management list is updated in real time; then at least one target file information item is obtained in order from the items sorted according to the file use information, as at least one file information item to be cleaned, and the cached data respectively corresponding to those items is cleaned. Since cache cleanup only requires obtaining a specified number of target file information items from the file management list in order, without traversing the specified directory again, the system burden during cache cleanup can be effectively reduced.
Referring to Fig. 3, which is a schematic structural diagram of a cached-data processing apparatus provided by an embodiment of the present invention, the cached-data processing apparatus 1 may include: a detection module 10, an acquisition module 20, and a cache cleanup module 30.
The detection module 10 is configured to detect the disk space utilization rate and the inode utilization rate.
Specifically, the detection module 10 may check the disk space utilization rate and the inode utilization rate periodically; for example, the detection module 10 may check them once every 10 seconds. After obtaining the disk space utilization rate and the inode utilization rate, the detection module 10 may determine whether the disk space utilization rate exceeds the first utilization threshold, and whether the inode utilization rate exceeds the second utilization threshold. Both the first utilization threshold and the second utilization threshold are configured in advance.
The acquisition module 20 is configured to obtain the preset file management list when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, wherein the file use information respectively corresponding to each target file information item in the file management list is updated in real time.
Specifically, when the detection module 10 determines that the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold, the acquisition module 20 may obtain the preset file management list. The target file information items in the file management list were obtained by traversing the directory list in advance and have been sorted according to the file use information. The file use information may be determined by the preset cache cleanup policy; for example, if the cache cleanup policy is LRU, the file use information includes a file use time, i.e. the time at which the file was last used; if the cache cleanup policy is LFU, the file use information includes a file use count, i.e. the number of times the file was used within a recent period. A file being "used" covers events such as the file being created, written, or accessed. If the file use information includes a file use time, the target file information items in the file management list are sorted from the earliest to the most recent file use time; if the file use information includes a file use count, they are sorted from fewest to most uses.
The file use information respectively corresponding to each target file information item in the file management list is updated in real time; accordingly, the sorting position of each item in the file management list is also updated in real time, ensuring accuracy during cache cleanup and avoiding cleaning out recently or frequently used cached data.
The cache cleanup module 30 is configured to obtain, in order, at least one target file information item from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and to clean the cached data respectively corresponding to the at least one file information item to be cleaned.
Specifically, after the acquisition module 20 obtains the preset file management list, the cache cleanup module 30 may obtain at least one target file information item in order from the target file information items sorted according to the file use information, as at least one file information item to be cleaned, and clean the cached data respectively corresponding to those items. The number of target file information items obtained may be the preset file cleanup quantity. For example, if the target file information items in the file management list are sorted according to LRU from the earliest to the most recent file use time, the list holds 1000 items, and the preset file cleanup quantity is 100, then after the acquisition module 20 obtains the file management list, the cache cleanup module 30 may delete the cached data corresponding to the first 100 target file information items in the list. In the embodiment of the present invention, the directory list need only be traversed once, in the presetting stage; subsequent cache cleanup does not traverse the directory list again, but merely obtains the specified number of target file information items from the file management list in order, thereby avoiding the heavy overhead of traversing the whole directory list on every cache cleanup.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization threshold and/or the inode utilization rate exceeds the second utilization threshold; the file use information respectively corresponding to each target file information item in the file management list is updated in real time; then at least one target file information item is obtained in order from the items sorted according to the file use information, as at least one file information item to be cleaned, and the cached data respectively corresponding to those items is cleaned. Since cache cleanup only requires obtaining a specified number of target file information items from the file management list in order, without traversing the specified directory again, the system burden during cache cleanup can be effectively reduced.
Referring now to Fig. 4, which is a schematic structural diagram of another cached-data processing apparatus provided by an embodiment of the present invention, the cached-data processing apparatus 1 may include the detection module 10, acquisition module 20, and cache cleanup module 30 of the embodiment corresponding to Fig. 3 above. Further, the cached-data processing apparatus 1 may also include: a presetting module 40, a traversal module 50, a sorting module 60, an adding module 70, a determining module 80, an updating module 90, a judging module 100, and a stopping module 110.
The presetting module 40 is configured to preset the first utilization threshold, the second utilization threshold, and the file cleanup quantity.
Specifically, the presetting module 40 may preset the first utilization threshold, the second utilization threshold, and the file cleanup quantity, and may also preconfigure the directory list to be monitored and the cache cleanup policy. The first utilization threshold is the critical point for detecting whether the disk space utilization rate has reached the level at which cache cleanup is needed; the second utilization threshold is the critical point for detecting whether the inode utilization rate has reached that level; and the file cleanup quantity is the number of files to be cleaned on each cache cleanup. The cache cleanup policy includes LRU or LFU.
The traversal module 50 is configured to traverse the directory list to obtain each target file information item in the directory list.
Specifically, after the presetting module 40 sets the relevant parameters, the traversal module 50 may traverse the directory list in an asynchronous manner to obtain each target file information item in the directory list.
The sorting module 60 is configured to sort the target file information items according to the file use information respectively corresponding to each target file information item.
Specifically, after the traversal module 50 traverses the directory list, the sorting module 60 may sort the target file information items according to their respective file use information. If the cache cleanup policy is LRU, the file use information includes a file use time, i.e. the time at which the file was last used; if the cache cleanup policy is LFU, the file use information includes a file use count, i.e. the number of times the file was used within a recent period. A file being "used" covers events such as the file being created, written, or accessed.
For example, if the file usage information includes the file use time, the sorting module 60 may place the target file information item with the earliest use time at the front, i.e., the item corresponding to the file unused for the longest time comes first, and sort the items in order from the earliest to the most recent use time, so that the item corresponding to the most recently used file comes last. Similarly, if the file usage information includes the file use count, the sorting module 60 may place the item with the smallest use count at the front and sort the items in ascending order of use count, so that the item with the largest use count comes last.
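The two orderings described above can be sketched as a single sort keyed on the relevant field of the file usage information. The record layout and field names below are illustrative assumptions, not from the source.

```python
# Illustrative target file information records; "last_used" and "use_count"
# are hypothetical field names standing in for the file usage information.
files = [
    {"path": "a.dat", "last_used": 100.0, "use_count": 7},
    {"path": "b.dat", "last_used": 300.0, "use_count": 1},
    {"path": "c.dat", "last_used": 200.0, "use_count": 3},
]

def sort_for_cleanup(entries, policy):
    """Order entries so that the best cleanup candidates come first."""
    if policy == "LRU":
        # Earliest last-use time first: the least recently used file leads.
        return sorted(entries, key=lambda e: e["last_used"])
    # LFU: smallest use count first: the least frequently used file leads.
    return sorted(entries, key=lambda e: e["use_count"])

lru_order = [e["path"] for e in sort_for_cleanup(files, "LRU")]
lfu_order = [e["path"] for e in sort_for_cleanup(files, "LFU")]
```

Under LRU the list begins with "a.dat" (earliest use time); under LFU it begins with "b.dat" (fewest uses), matching the orderings in the example above.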
The adding module 70 is configured to add the sorted target file information items, in order, to the file management list.
Specifically, after the sorting module 60 has sorted the target file information items, the adding module 70 adds the sorted items, in order, to the file management list; that is, the target file information items in the file management list are already sorted. Taking the LRU cache cleanup policy as an example, if the target file information items are sorted in order from the earliest to the most recent file use time, then the items in the file management list are likewise arranged in that order: the item at the front of the file management list corresponds to the file with the earliest use time, and the item at the back corresponds to the file with the most recent use time.
The determining module 80 is configured to, when it is detected that a file corresponding to a target file information item is used, determine the target file information item of the used file as the file information to be updated.
Specifically, after the adding module 70 has added the sorted target file information items to the file management list, the determining module 80 may monitor reads and writes of the files corresponding to the target file information items in real time through the inotify file-monitoring interface of Linux. When it is detected that the file corresponding to a target file information item is used, i.e., the inotify interface observes that the file is created, accessed, or written, the determining module 80 determines that target file information item as the file information to be updated. The events that the inotify interface monitors on a file include: IN_ACCESS, IN_MODIFY, IN_CREATE, IN_DELETE, IN_DELETE_SELF, IN_MOVE_SELF, IN_MOVED_FROM, and IN_MOVED_TO.
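An inotify event arrives with a bit mask combining the event types listed above. The bit values below are the ones defined in the Linux `<sys/inotify.h>` header; the decoding helper itself is an illustrative sketch, not part of the described apparatus.

```python
# inotify event bit values as defined in Linux <sys/inotify.h>.
INOTIFY_EVENTS = {
    0x00000001: "IN_ACCESS",
    0x00000002: "IN_MODIFY",
    0x00000040: "IN_MOVED_FROM",
    0x00000080: "IN_MOVED_TO",
    0x00000100: "IN_CREATE",
    0x00000200: "IN_DELETE",
    0x00000400: "IN_DELETE_SELF",
    0x00000800: "IN_MOVE_SELF",
}

def decode_mask(mask: int):
    """Return the names of the event bits set in an inotify event mask."""
    return [name for bit, name in INOTIFY_EVENTS.items() if mask & bit]
```

For instance, a mask of `0x102` decodes to `IN_MODIFY` and `IN_CREATE`, either of which would cause the determining module 80 to mark the corresponding target file information item for update.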
The updating module 90 is configured to update the file usage information corresponding to the file information to be updated in the file management list.
The updating module 90 is further configured to re-sort the file information to be updated in the file management list according to the updated file usage information.
Specifically, after the determining module 80 has determined the target file information item of the used file as the file information to be updated, the updating module 90 may update the file usage information corresponding to that item in the file management list, and re-sort the item according to the updated file usage information, so that the file usage information corresponding to each target file information item in the file management list is updated in real time. For example, if the target file information items in the file management list are sorted in order from the earliest to the most recent file use time, then when the inotify interface detects that the file corresponding to one of the items is accessed, the updating module 90 updates that item's file use time to the most recent use time and moves the item to the end of the list. Updating the file usage information of each target file information item in real time ensures accuracy when cleaning cached data, avoiding cleaning out recently or frequently used cached data.
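Under an LRU policy, the real-time re-sorting performed by the updating module 90 amounts to moving the touched entry to the back of an ordered list. A minimal sketch, with illustrative file names and timestamps:

```python
from collections import OrderedDict

# File management list under LRU: earliest-used entry first.
# Keys are file paths, values are last-use times (both illustrative).
file_list = OrderedDict([
    ("a.dat", 100.0),
    ("b.dat", 200.0),
    ("c.dat", 300.0),
])

def on_file_used(path: str, now: float) -> None:
    """Update the entry's use time and move it to the back of the list."""
    if path in file_list:
        file_list[path] = now
        file_list.move_to_end(path)  # most recently used goes last

on_file_used("a.dat", 400.0)
# "a.dat" is now the last (most recently used) entry.
```

Because the list always keeps the least recently used entries at the front, the cleanup module can simply take entries from the head of the list without re-sorting.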
The judging module 100 is configured to, after the cache cleanup module 30 has cleaned cached data, further judge whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold.
The stopping module 110 is configured to stop the cleanup of cached data if the judging module 100 judges yes.
The cache cleanup module 30 is further configured to, if the judging module 100 judges no, further obtain at least one new file information item to be cleaned, and clean the cached data corresponding to each new file information item to be cleaned.
Specifically, after the cache cleanup module 30 has cleaned the cached data corresponding to the at least one new file information item to be cleaned, the judging module 100 continues to check whether both the disk space utilization rate and the inode utilization rate are back within their limits. If both are within limits, the stopping module 110 stops the cleanup of cached data; otherwise, the cache cleanup module 30 continues to clean cached data until both the disk space utilization rate and the inode utilization rate are within limits. In each pass of cached-data cleanup, the number of files cleaned may be the preset file cleanup quantity.
For example, suppose the target file information items in the file management list are sorted according to LRU, i.e., in order from the earliest to the most recent file use time, the number of target file information items is 1000, and the preset file cleanup quantity is 100. After the obtaining module 20 gets the file management list, the cache cleanup module 30 may delete the cached data corresponding to the first 100 target file information items in the file management list, and then the judging module 100 further judges whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold. If both judgments are yes, the stopping module 110 stops cleaning cached data; otherwise, the cache cleanup module 30 continues by deleting the cached data corresponding to the 101st to 200th target file information items in the list, and so on, deleting the cached data corresponding to 100 target file information items in order in each cleanup pass, until both the disk space utilization rate and the inode utilization rate are within limits. In this embodiment of the present invention, the directory list only needs to be traversed once, in the presetting stage; subsequent cached-data cleanup does not require traversing the directory list again, but only obtaining the specified number of target file information items from the file management list in order, thereby avoiding the huge overhead of traversing the whole directory list on every cache cleanup.
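The batched cleanup loop described in the example above can be sketched as follows. The utilization model here is a toy assumption for illustration (each deleted entry frees a fixed fraction of capacity); a real implementation would re-read the actual disk and inode utilization between batches.

```python
def cleanup(entries, batch, usage, threshold, usage_per_entry):
    """Remove entries in batches until simulated usage drops below threshold."""
    removed = 0
    while usage >= threshold and entries:
        victims, entries = entries[:batch], entries[batch:]
        removed += len(victims)          # delete cached data for this batch
        usage -= len(victims) * usage_per_entry  # freeing space lowers usage
    return removed, usage

# 1000 sorted entries, cleanup quantity 100 per pass, as in the example.
entries = [f"f{i}" for i in range(1000)]
removed, usage = cleanup(entries, batch=100, usage=0.95,
                         threshold=0.90, usage_per_entry=0.0004)
```

With these illustrative numbers, two batches of 100 bring the simulated usage from 0.95 to roughly 0.87, below the 0.90 threshold, and the loop stops without touching the remaining 800 entries.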
In this embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the file usage information corresponding to each target file information item in the file management list being updated in real time. Then, from the target file information items sorted according to their file usage information, at least one target file information item is obtained in order as at least one file information item to be cleaned, and the cached data corresponding to each such item is cleaned. Since the cache cleanup only needs to obtain the specified number of target file information items from the file management list in order, without traversing the specified directory again, the system burden during cache cleanup can be effectively reduced.
Referring to Fig. 5, which is a schematic structural diagram of another cached-data processing apparatus provided by an embodiment of the present invention, the cached-data processing apparatus 1000 may include: at least one processor 1001, such as a CPU, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to implement the connections and communication among these components. The user interface 1003 may include a display and a keyboard, and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or may be a non-volatile memory, for example at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Fig. 5, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the cached-data processing apparatus 1000 shown in Fig. 5, the user interface 1003 is mainly used to provide an input interface for a user and obtain data output by the user; and the processor 1001 may be used to call the device control application program stored in the memory 1005 and specifically perform the following steps:
detecting the disk space utilization rate and the inode utilization rate;
when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, obtaining the preset file management list, wherein the file usage information corresponding to each target file information item in the file management list is updated in real time;
from the target file information items sorted according to the file usage information, obtaining at least one target file information item in order as at least one file information item to be cleaned, and cleaning the cached data corresponding to each file information item to be cleaned.
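The detection step above can be sketched on POSIX systems with `os.statvfs`, which reports both block counts (disk space) and inode counts in one call. This is an illustrative sketch, assuming the root filesystem as the monitored path:

```python
import os

def disk_and_inode_usage(path: str = "/"):
    """Return (disk space utilization, inode utilization) as fractions 0..1."""
    st = os.statvfs(path)
    # f_blocks/f_bavail count data blocks; f_files/f_favail count inodes.
    disk_used = 1.0 - (st.f_bavail / st.f_blocks) if st.f_blocks else 0.0
    inode_used = 1.0 - (st.f_favail / st.f_files) if st.f_files else 0.0
    return disk_used, inode_used

disk_u, inode_u = disk_and_inode_usage("/")
```

A periodic checker would call this, for example, every 10 seconds and compare the two fractions against the first and second utilization rate thresholds.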
In one embodiment, the processor 1001 further performs the following steps:
when it is detected that a file corresponding to a target file information item is used, determining the target file information item of the used file as the file information to be updated;
updating the file usage information corresponding to the file information to be updated in the file management list;
re-sorting the file information to be updated in the file management list according to the updated file usage information.
In one embodiment, before performing the detection of the disk space utilization rate and the inode utilization rate, the processor 1001 further performs the following steps:
traversing the directory list to obtain each target file information item in the directory list;
sorting the target file information items according to the file usage information corresponding to each target file information item;
adding the sorted target file information items, in order, to the file management list;
wherein the file usage information includes the file use time or the file use count.
In one embodiment, before performing the traversal of the directory list to obtain each target file information item in the directory list, the processor 1001 further performs the following step:
presetting the first utilization rate threshold, the second utilization rate threshold, and the file cleanup quantity;
wherein the quantity of the at least one file information item to be cleaned is identical to the file cleanup quantity.
In one embodiment, after performing the steps of obtaining, in order, at least one target file information item from the target file information items sorted according to the file usage information as at least one file information item to be cleaned, and cleaning the cached data corresponding to each such item, the processor 1001 further performs the following steps:
judging whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold;
if both judgments are yes, stopping the cleanup of cached data;
if no, further obtaining at least one new file information item to be cleaned, and cleaning the cached data corresponding to each new file information item to be cleaned.
In this embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the file usage information corresponding to each target file information item in the file management list being updated in real time. Then, from the target file information items sorted according to their file usage information, at least one target file information item is obtained in order as at least one file information item to be cleaned, and the cached data corresponding to each such item is cleaned. Since the cache cleanup only needs to obtain the specified number of target file information items from the file management list in order, without traversing the specified directory again, the system burden during cache cleanup can be effectively reduced.
A person of ordinary skill in the art will appreciate that all or part of the flows in the methods of the above embodiments may be completed by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed may include the flows of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure is merely a preferred embodiment of the present invention, and certainly cannot be used to limit the scope of the rights of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.