CN106294206A - Cache data processing method and device - Google Patents

Cache data processing method and device

Info

Publication number
CN106294206A (application CN201510262136.4A; granted as CN106294206B)
Authority
CN
China
Prior art keywords
file, utilization rate, target file information, file use information
Prior art date
Legal status: Granted
Application number
CN201510262136.4A
Other languages
Chinese (zh)
Other versions
CN106294206B (en)
Inventor
黄伟
Current Assignee
Guangzhou Huaduo Network Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201510262136.4A priority Critical patent/CN106294206B/en
Publication of CN106294206A publication Critical patent/CN106294206A/en
Application granted granted Critical
Publication of CN106294206B publication Critical patent/CN106294206B/en
Legal status: Active
Anticipated expiration

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the invention discloses a cache data processing method and device. The method includes: detecting the disk space utilization rate and the inode utilization rate; when the disk space utilization rate exceeds a first utilization rate threshold and/or the inode utilization rate exceeds a second utilization rate threshold, obtaining a preset file management list, in which the file use information corresponding to each piece of target file information is updated in real time; and, among the target file information sorted according to the file use information, obtaining at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and cleaning up the cached data corresponding to each piece of to-be-cleaned file information. With the present invention, the system burden incurred during cache cleaning can be reduced.

Description

Cache data processing method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a cache data processing method and device.
Background
Linux is a family of Unix-like operating systems that are free to use and freely distributed. It is a multi-user, multi-task operating system based on POSIX (Portable Operating System Interface) and Unix that supports multithreading and multiple CPUs (Central Processing Units).
Applications based on the Linux operating system often use the file system as a data cache. When an application uses a large number of files as its data cache, it faces a cache management problem: the files must be cleaned up periodically to avoid running out of disk space. Common cache cleaning algorithms include LFU (Least Frequently Used) and LRU (Least Recently Used).
The most common file cleaning method is to traverse a specified directory with a shell script and clean up files according to information such as their access time and modification time. However, when there are too many files, the overhead of traversing the specified directory becomes very large, so every cache cleaning pass increases the system burden.
Summary of the invention
The embodiments of the present invention provide a cache data processing method and device that can reduce the system burden during cache cleaning.
An embodiment of the present invention provides a cache data processing method, including:
detecting the disk space utilization rate and the inode utilization rate;
when the disk space utilization rate exceeds a first utilization rate threshold and/or the inode utilization rate exceeds a second utilization rate threshold, obtaining a preset file management list, in which the file use information corresponding to each piece of target file information is updated in real time;
among the target file information sorted according to the file use information, obtaining at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and cleaning up the cached data corresponding to each piece of to-be-cleaned file information.
Correspondingly, an embodiment of the present invention further provides a cache data processing device, including:
a detection module, configured to detect the disk space utilization rate and the inode utilization rate;
an acquisition module, configured to obtain the preset file management list when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the file use information corresponding to each piece of target file information in the file management list being updated in real time;
a cache cleaning module, configured to obtain, among the target file information sorted according to the file use information, at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and to clean up the cached data corresponding to each piece of to-be-cleaned file information.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a cache data processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another cache data processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a cache data processing device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another cache data processing device provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of still another cache data processing device provided by an embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of a cache data processing method provided by an embodiment of the present invention, the method may include:
S101: detect the disk space utilization rate and the inode utilization rate;
Specifically, the disk space utilization rate and the inode (index node) utilization rate are checked periodically; by default, for example, they are checked every 10 seconds. After the disk space utilization rate and the inode utilization rate are obtained, it can be determined whether the disk space utilization rate exceeds the first utilization rate threshold and whether the inode utilization rate exceeds the second utilization rate threshold. Both thresholds are configured in advance.
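As a concrete illustration of this check, the following is a minimal Python sketch; the language, the cache path and the threshold values are assumptions made for illustration and are not prescribed by the patent. It derives both utilization rates from the filesystem's statvfs counters:

```python
# Minimal sketch of the utilization check in S101.
# Threshold values and the cache path are hypothetical examples.
import os

FIRST_THRESHOLD = 0.90   # disk space utilization rate threshold (assumed)
SECOND_THRESHOLD = 0.90  # inode utilization rate threshold (assumed)

def utilization(path="/data/cache"):
    """Return (disk_space_utilization, inode_utilization) for the filesystem holding `path`."""
    st = os.statvfs(path)
    space_used = 1 - st.f_bavail / st.f_blocks   # fraction of data blocks in use
    inode_used = 1 - st.f_favail / st.f_files    # fraction of inodes in use
    return space_used, inode_used

def over_threshold(path="/data/cache"):
    """True when either rate exceeds its threshold, i.e. cleaning should start."""
    space_used, inode_used = utilization(path)
    return space_used > FIRST_THRESHOLD or inode_used > SECOND_THRESHOLD
```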
S102: when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, obtain the preset file management list, in which the file use information corresponding to each piece of target file information is updated in real time;
Specifically, when it is determined that the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the preset file management list can be obtained. Before step S101, the directory list to be monitored and the cache cleaning strategy are configured, the directory list is traversed to obtain all file information in it, all of that file information is identified as target file information, and the target file information is sorted according to the file use information corresponding to each piece of it. If the cache cleaning strategy is LRU, the file use information is the file use time, i.e. the time the file was last used; if the cache cleaning strategy is LFU, the file use information is the file use count, i.e. the number of times the file was used within a recent period. A file counts as used when it is created, written to, accessed, and so on.
For example, if the file use information is the file use time, the target file information corresponding to the earliest-used file can be placed at the front, the target file information corresponding to files that have never been used can also be placed at the front, and the target file information is sorted from the earliest to the most recent use time, so that the target file information corresponding to the most recently used file ends up at the back. As another example, if the file use information is the file use count, the target file information with the smallest use count can be placed at the front and the target file information sorted from the smallest to the largest count, so that the target file information with the largest use count ends up at the back.
After the target file information is sorted, it is added to the file management list in order, so the target file information in the file management list is already sorted. Taking the LRU cache cleaning strategy as an example, if the target file information is sorted from the earliest to the most recent file use time, the target file information in the file management list is also sorted from the earliest to the most recent use time: the file corresponding to the target file information at the front of the file management list has the earliest use time, and the file corresponding to the target file information at the back has the most recent use time.
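As one possible realization of this preset stage, here is a hedged Python sketch that traverses the directory list once and produces a file management list ordered for the LRU strategy; the monitored directory and the use of the file's last access time (st_atime) as the "file use time" are assumptions made for illustration:

```python
# Sketch of building the preset file management list for the LRU strategy.
import os
from collections import OrderedDict

MONITORED_DIRS = ["/data/cache"]  # hypothetical directory list to monitor

def build_file_management_list(dirs=MONITORED_DIRS):
    """Traverse the directory list once and return entries ordered from
    earliest-used (front) to most-recently-used (back)."""
    entries = []
    for root_dir in dirs:
        for root, _, files in os.walk(root_dir):
            for name in files:
                path = os.path.join(root, name)
                try:
                    use_time = os.stat(path).st_atime  # last access time as the "file use time"
                except OSError:
                    continue                           # file disappeared during traversal
                entries.append((path, use_time))
    entries.sort(key=lambda e: e[1])   # earliest use time first
    return OrderedDict(entries)        # ordered list: front = earliest, back = most recent
```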
Preferably, when it is detected that the file corresponding to a piece of target file information is used, i.e. the file is created, accessed or written to, that target file information is determined to be the to-be-updated file information; the file use information corresponding to the to-be-updated file information in the file management list is updated, and the to-be-updated file information is re-sorted within the file management list according to the updated file use information, so that the file use information corresponding to each piece of target file information in the file management list is updated in real time. For example, if the target file information in the file management list is sorted according to LRU, i.e. from the earliest to the most recent file use time, then when it is detected that the file corresponding to one piece of target file information is used, the file use time of that target file information can be updated to the most recent time and the target file information moved to the back of the list. Updating the file use information of the target file information in real time ensures accuracy during cache cleaning, so that recently or frequently used cached data is not cleaned out.
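For the LRU case, the real-time update step amounts to refreshing the timestamp and moving the entry to the back of the ordered list. A minimal sketch, assuming the OrderedDict-based list built in the previous sketch:

```python
# Sketch of the real-time update step for LRU: when a file is used,
# refresh its use time and move its entry to the back of the file management list.
import time

def on_file_used(file_list, path):
    """Called when a monitored file is created, accessed or written to."""
    file_list[path] = time.time()   # update the file use time
    file_list.move_to_end(path)     # most recently used entries go to the back
```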
S103: among the target file information sorted according to the file use information, obtain at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and clean up the cached data corresponding to each piece of to-be-cleaned file information;
Specifically, after the preset file management list is obtained, at least one piece of target file information can be obtained in order from the target file information sorted according to the file use information, as the to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. The amount of target file information obtained can be the preset file cleaning quantity. For example, if the target file information in the file management list is sorted according to LRU, i.e. from the earliest to the most recent file use time, the list contains 1000 pieces of target file information, and the preset file cleaning quantity is 100, then after the file management list is obtained, the cached data corresponding to the first 100 pieces of target file information in the file management list can be deleted. The embodiment of the present invention only needs to traverse the directory list once, in the preset stage; subsequent cache cleaning does not traverse the directory list again but only obtains a specified amount of target file information from the file management list in order, thereby avoiding the large overhead of traversing the whole directory list on every cache cleaning pass.
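The cleaning step itself then reduces to deleting the cached files behind the first entries of the ordered list. A sketch under the same assumptions as the earlier sketches, with the file cleaning quantity of 100 taken from the example in the description:

```python
# Sketch of the cleaning step in S103 for the LRU ordering: delete the cached
# files behind the first `clean_quantity` entries of the file management list.
import os

def clean_front_entries(file_list, clean_quantity=100):
    """Remove the earliest-used entries and their cached files."""
    victims = list(file_list)[:clean_quantity]   # front of the list = least recently used
    for path in victims:
        try:
            os.remove(path)                      # clear the cached data
        except FileNotFoundError:
            pass                                 # already gone; nothing to clean
        del file_list[path]
```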
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Referring again to Fig. 2, which is a schematic flowchart of another cache data processing method provided by an embodiment of the present invention, the method may include:
S201: preset the first utilization rate threshold, the second utilization rate threshold and the file cleaning quantity;
Specifically, the first utilization rate threshold, the second utilization rate threshold and the file cleaning quantity are preset, and the directory list to be monitored and the cache cleaning strategy may also be pre-configured. The first utilization rate threshold is the critical point used to detect whether the disk space utilization rate has reached the level at which cache cleaning is required; the second utilization rate threshold is the critical point used to detect whether the inode utilization rate has reached that level; the file cleaning quantity is the number of files to be cleaned on each cache cleaning pass. The cache cleaning strategy is LRU or LFU.
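Gathering these preset parameters in one place could look like the following sketch; every concrete value is a hypothetical example, not something the patent prescribes:

```python
# Sketch of the preset configuration in S201 (all values are assumed examples).
from dataclasses import dataclass, field

@dataclass
class CacheCleanConfig:
    first_threshold: float = 0.90        # disk space utilization rate threshold
    second_threshold: float = 0.90       # inode utilization rate threshold
    clean_quantity: int = 100            # files cleaned per cleaning pass
    strategy: str = "LRU"                # cache cleaning strategy: "LRU" or "LFU"
    monitored_dirs: list = field(default_factory=lambda: ["/data/cache"])
```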
S202: traverse the directory list to obtain each piece of target file information in the directory list;
Specifically, the directory list is traversed in an asynchronous manner to obtain each piece of target file information in the directory list.
S203: sort the target file information according to the file use information corresponding to each piece of target file information;
Specifically, the target file information is sorted according to the file use information corresponding to each piece of it. If the cache cleaning strategy is LRU, the file use information is the file use time, i.e. the time the file was last used; if the cache cleaning strategy is LFU, the file use information is the file use count, i.e. the number of times the file was used within a recent period. A file counts as used when it is created, written to, accessed, and so on.
For example, if the file use information is the file use time, the target file information corresponding to the earliest-used file can be placed at the front, the target file information corresponding to files that have never been used can also be placed at the front, and the target file information is sorted from the earliest to the most recent use time, so that the target file information corresponding to the most recently used file ends up at the back. As another example, if the file use information is the file use count, the target file information with the smallest use count can be placed at the front and the target file information sorted from the smallest to the largest count, so that the target file information with the largest use count ends up at the back.
S204: add the sorted target file information to the file management list in order;
Specifically, after the target file information is sorted, it is added to the file management list in order, so the target file information in the file management list is already sorted. Taking the LRU cache cleaning strategy as an example, if the target file information is sorted from the earliest to the most recent file use time, the target file information in the file management list is also sorted from the earliest to the most recent use time: the file corresponding to the target file information at the front of the file management list has the earliest use time, and the file corresponding to the target file information at the back has the most recent use time.
Preferably, after the file management list containing the target file information is generated, the reads and writes of the file corresponding to each piece of target file information can be monitored in real time through the interface of the Linux file-monitoring facility inotify (inotify is a Linux feature that monitors file system operations such as reading, writing and creation). When it is detected that the file corresponding to a piece of target file information is used, the inotify interface has observed that the file was created, accessed or written to; at this point, that target file information is determined to be the to-be-updated file information, the file use information corresponding to the to-be-updated file information in the file management list is updated, and the to-be-updated file information is re-sorted within the file management list according to the updated file use information, so that the file use information corresponding to each piece of target file information in the file management list is updated in real time. The events monitored on a file through the inotify interface include: IN_ACCESS, IN_MODIFY, IN_CREATE, IN_DELETE, IN_DELETE_SELF, IN_MOVE_SELF, IN_MOVED_FROM and IN_MOVED_TO. For example, if the target file information in the file management list is sorted from the earliest to the most recent file use time, then when the inotify interface detects that the file corresponding to one piece of target file information is accessed, the file use time of that target file information can be updated to the most recent time and the target file information moved to the back of the list. The step of updating the file use information corresponding to a piece of target file information can be performed at any moment during steps S205 to S207 below. Updating the file use information of the target file information in real time ensures accuracy during cache cleaning, so that recently or frequently used cached data is not cleaned out.
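The following sketch wires the inotify events named above to the list update. It uses the third-party Python package inotify_simple as one possible binding to the Linux inotify interface; that choice, the watched directory and the reuse of on_file_used from the earlier sketch are assumptions, since the patent only refers to the inotify interface itself:

```python
# Sketch: monitor a cache directory with inotify and refresh the file
# management list in real time (inotify_simple is an assumed binding).
import os
from inotify_simple import INotify, flags

WATCH_MASK = (flags.ACCESS | flags.MODIFY | flags.CREATE | flags.DELETE |
              flags.DELETE_SELF | flags.MOVE_SELF | flags.MOVED_FROM | flags.MOVED_TO)

def watch_and_update(file_list, directory="/data/cache"):
    """Block on inotify events and keep the file management list up to date."""
    inotify = INotify()
    inotify.add_watch(directory, WATCH_MASK)
    while True:
        for event in inotify.read():                       # blocks until events arrive
            path = os.path.join(directory, event.name)
            if event.mask & (flags.ACCESS | flags.MODIFY | flags.CREATE):
                on_file_used(file_list, path)              # see the LRU update sketch above
            elif event.mask & (flags.DELETE | flags.MOVED_FROM):
                file_list.pop(path, None)                  # file is gone; drop its entry
```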
S205: detect the disk space utilization rate and the inode utilization rate;
Specifically, the disk space utilization rate and the inode utilization rate are checked periodically; by default, for example, they are checked every 10 seconds. After the disk space utilization rate and the inode utilization rate are obtained, it can be determined whether the disk space utilization rate exceeds the first utilization rate threshold and whether the inode utilization rate exceeds the second utilization rate threshold.
S206: when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, obtain the preset file management list, in which the file use information corresponding to each piece of target file information is updated in real time;
Specifically, when it is determined that the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the preset file management list can be obtained, i.e. the cached data cleaning function is started. The target file information in the obtained file management list has already been sorted according to the file use information, and the file use information corresponding to each piece of target file information is updated in real time, i.e. the file use time or file use count corresponding to each piece of target file information is in its latest state.
S207: among the target file information sorted according to the file use information, obtain at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and clean up the cached data corresponding to each piece of to-be-cleaned file information;
Specifically, after the preset file management list is obtained, at least one piece of target file information can be obtained in order from the target file information sorted according to the file use information, as the to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. The amount of target file information obtained can be the preset file cleaning quantity.
Optionally, after this round of cached data cleaning, it can be further determined whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold. If both are, the cached data cleaning stops. If not, at least one new piece of to-be-cleaned file information is obtained, i.e. at least one new piece of target file information is obtained in order as the new to-be-cleaned file information, the cached data corresponding to each new piece of to-be-cleaned file information is cleaned up, and the disk space utilization rate and the inode utilization rate are checked again; if neither exceeds its threshold, the cached data cleaning stops, otherwise the cached data continues to be cleaned until neither the disk space utilization rate nor the inode utilization rate exceeds its threshold. On each cached data cleaning pass, the number of files cleaned can be the preset file cleaning quantity.
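A compact sketch of this optional loop, reusing the helper functions and thresholds assumed in the earlier sketches:

```python
# Sketch of the optional loop after S207: keep deleting batches of
# `clean_quantity` entries until both utilization rates drop back under
# their thresholds (utilization, FIRST_THRESHOLD, SECOND_THRESHOLD and
# clean_front_entries come from the earlier sketches).
def clean_until_under_threshold(file_list, path="/data/cache", clean_quantity=100):
    while file_list:
        space_used, inode_used = utilization(path)
        if space_used < FIRST_THRESHOLD and inode_used < SECOND_THRESHOLD:
            break                                        # both rates back under threshold: stop
        clean_front_entries(file_list, clean_quantity)   # clean the next batch in order
```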
For example, if the target file information in the file management list is sorted according to LRU, i.e. from the earliest to the most recent file use time, the list contains 1000 pieces of target file information, and the preset file cleaning quantity is 100, then after the file management list is obtained, the cached data corresponding to the first 100 pieces of target file information in the file management list can be deleted. It is then further determined whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold. If both are, the cached data cleaning stops; otherwise, the cached data corresponding to the 101st to 200th pieces of target file information in the file management list is deleted, and so on, deleting the cached data of 100 pieces of target file information in order on each cleaning pass, until neither the disk space utilization rate nor the inode utilization rate exceeds its threshold. The embodiment of the present invention only needs to traverse the directory list once, in the preset stage; subsequent cache cleaning does not traverse the directory list again but only obtains a specified amount of target file information from the file management list in order, thereby avoiding the large overhead of traversing the whole directory list on every cache cleaning pass.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Referring to Fig. 3, which is a schematic structural diagram of a cache data processing device provided by an embodiment of the present invention, the cache data processing device 1 may include: a detection module 10, an acquisition module 20 and a cache cleaning module 30.
The detection module 10 is configured to detect the disk space utilization rate and the inode utilization rate.
Specifically, the detection module 10 can check the disk space utilization rate and the inode utilization rate periodically; for example, the detection module 10 can check them every 10 seconds. After obtaining the disk space utilization rate and the inode utilization rate, the detection module 10 can determine whether the disk space utilization rate exceeds the first utilization rate threshold and whether the inode utilization rate exceeds the second utilization rate threshold. Both thresholds are configured in advance.
The acquisition module 20 is configured to obtain the preset file management list when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the file use information corresponding to each piece of target file information in the file management list being updated in real time.
Specifically, when the detection module 10 determines that the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, the acquisition module 20 can obtain the preset file management list. The target file information in the file management list was obtained by traversing the directory list in advance, and is sorted according to the file use information. The file use information can be determined by the preset cache cleaning strategy: if the cache cleaning strategy is LRU, the file use information is the file use time, i.e. the time the file was last used; if the cache cleaning strategy is LFU, the file use information is the file use count, i.e. the number of times the file was used within a recent period. A file counts as used when it is created, written to, accessed, and so on. If the file use information is the file use time, the target file information in the file management list is sorted from the earliest to the most recent file use time; if the file use information is the file use count, the target file information in the file management list is sorted from the smallest to the largest file use count.
The file use information corresponding to each piece of target file information in the file management list is updated in real time, so the sorting position of each piece of target file information in the file management list is also updated in real time, which ensures accuracy during cache cleaning and prevents recently or frequently used cached data from being cleaned out.
The cache cleaning module 30 is configured to obtain, among the target file information sorted according to the file use information, at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and to clean up the cached data corresponding to each piece of to-be-cleaned file information.
Specifically, after the acquisition module 20 obtains the preset file management list, the cache cleaning module 30 can obtain at least one piece of target file information in order from the target file information sorted according to the file use information, as the to-be-cleaned file information, and clean up the cached data corresponding to each piece of to-be-cleaned file information. The amount of target file information obtained can be the preset file cleaning quantity. For example, if the target file information in the file management list is sorted according to LRU, i.e. from the earliest to the most recent file use time, the list contains 1000 pieces of target file information, and the preset file cleaning quantity is 100, then after the acquisition module 20 obtains the file management list, the cache cleaning module 30 can delete the cached data corresponding to the first 100 pieces of target file information in the file management list. The embodiment of the present invention only needs to traverse the directory list once, in the preset stage; subsequent cache cleaning does not traverse the directory list again but only obtains a specified amount of target file information from the file management list in order, thereby avoiding the large overhead of traversing the whole directory list on every cache cleaning pass.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Referring again to Fig. 4, which is a schematic structural diagram of another cache data processing device provided by an embodiment of the present invention, the cache data processing device 1 may include the detection module 10, acquisition module 20 and cache cleaning module 30 of the embodiment corresponding to Fig. 3 above, and may further include: a presetting module 40, a traversal module 50, a sorting module 60, an adding module 70, a determination module 80, an update module 90, a judging module 100 and a stopping module 110.
The presetting module 40 is configured to preset the first utilization rate threshold, the second utilization rate threshold and the file cleaning quantity.
Specifically, the presetting module 40 can preset the first utilization rate threshold, the second utilization rate threshold and the file cleaning quantity, and can also pre-configure the directory list to be monitored and the cache cleaning strategy. The first utilization rate threshold is the critical point used to detect whether the disk space utilization rate has reached the level at which cache cleaning is required; the second utilization rate threshold is the critical point used to detect whether the inode utilization rate has reached that level; the file cleaning quantity is the number of files to be cleaned on each cache cleaning pass. The cache cleaning strategy is LRU or LFU.
The traversal module 50 is configured to traverse the directory list to obtain each piece of target file information in the directory list.
Specifically, after the presetting module 40 sets the relevant parameters, the traversal module 50 can traverse the directory list in an asynchronous manner to obtain each piece of target file information in the directory list.
The sorting module 60 is configured to sort the target file information according to the file use information corresponding to each piece of target file information.
Specifically, after the traversal module 50 traverses the directory list, the sorting module 60 can sort the target file information according to the file use information corresponding to each piece of it. If the cache cleaning strategy is LRU, the file use information is the file use time, i.e. the time the file was last used; if the cache cleaning strategy is LFU, the file use information is the file use count, i.e. the number of times the file was used within a recent period. A file counts as used when it is created, written to, accessed, and so on.
For example, if the file use information is the file use time, the sorting module 60 can place the target file information corresponding to the earliest-used file at the front, also place the target file information corresponding to files that have never been used at the front, and sort the target file information from the earliest to the most recent use time, so that the target file information corresponding to the most recently used file ends up at the back. As another example, if the file use information is the file use count, the sorting module 60 can place the target file information with the smallest use count at the front and sort the target file information from the smallest to the largest count, so that the target file information with the largest use count ends up at the back.
The adding module 70 is configured to add the sorted target file information to the file management list in order.
Specifically, after the sorting module 60 sorts the target file information, the adding module 70 adds the sorted target file information to the file management list in order, so the target file information in the file management list is already sorted. Taking the LRU cache cleaning strategy as an example, if the target file information is sorted from the earliest to the most recent file use time, the target file information in the file management list is also sorted from the earliest to the most recent use time: the file corresponding to the target file information at the front of the file management list has the earliest use time, and the file corresponding to the target file information at the back has the most recent use time.
The determination module 80 is configured to, when it is detected that the file corresponding to a piece of target file information is used, determine that target file information to be the to-be-updated file information.
Specifically, after the adding module 70 adds the sorted target file information to the file management list, the determination module 80 can monitor, in real time through the inotify interface of the Linux file-monitoring facility, the reads and writes of the file corresponding to each piece of target file information. When it is detected that the file corresponding to a piece of target file information is used, the inotify interface has observed that the file was created, accessed or written to, and the determination module 80 determines that target file information to be the to-be-updated file information. The events monitored on a file through the inotify interface include: IN_ACCESS, IN_MODIFY, IN_CREATE, IN_DELETE, IN_DELETE_SELF, IN_MOVE_SELF, IN_MOVED_FROM and IN_MOVED_TO.
The update module 90 is configured to update the file use information corresponding to the to-be-updated file information in the file management list.
The update module 90 is further configured to re-sort the to-be-updated file information within the file management list according to the updated file use information.
Specifically, after the determination module 80 determines the target file information of the used file to be the to-be-updated file information, the update module 90 can update the file use information corresponding to the to-be-updated file information in the file management list and re-sort the to-be-updated file information according to the updated file use information, so that the file use information corresponding to each piece of target file information in the file management list is updated in real time. For example, if the target file information in the file management list is sorted from the earliest to the most recent file use time, then when the inotify interface detects that the file corresponding to one piece of target file information is accessed, the update module 90 updates the file use time of that target file information to the most recent time and moves it to the back of the list. Updating the file use information of the target file information in real time ensures accuracy during cache cleaning, so that recently or frequently used cached data is not cleaned out.
The judging module 100 is configured to, after the cache cleaning module 30 cleans the cached data, further determine whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold.
The stopping module 110 is configured to stop the cached data cleaning if the judging module 100 determines that both are below their thresholds.
The cache cleaning module 30 is further configured to, if the judging module 100 determines otherwise, further obtain at least one new piece of to-be-cleaned file information and clean up the cached data corresponding to each new piece of to-be-cleaned file information.
Specifically, after the cache cleaning module 30 cleans up the cached data corresponding to the new to-be-cleaned file information, the judging module 100 continues to check whether the disk space utilization rate and the inode utilization rate both remain below their thresholds. If both do, the stopping module 110 stops the cached data cleaning; otherwise, the cache cleaning module 30 continues to clean the cached data until neither the disk space utilization rate nor the inode utilization rate exceeds its threshold. On each cached data cleaning pass, the number of files cleaned can be the preset file cleaning quantity.
For example, if the target file information in the file management list is sorted according to LRU, i.e. from the earliest to the most recent file use time, the list contains 1000 pieces of target file information, and the preset file cleaning quantity is 100, then after the acquisition module 20 obtains the file management list, the cache cleaning module 30 can delete the cached data corresponding to the first 100 pieces of target file information in the file management list. The judging module 100 then further determines whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold. If both are, the stopping module 110 stops the cached data cleaning; otherwise, the cache cleaning module 30 continues by deleting the cached data corresponding to the 101st to 200th pieces of target file information in the file management list, and so on, deleting the cached data of 100 pieces of target file information in order on each cleaning pass, until neither the disk space utilization rate nor the inode utilization rate exceeds its threshold. The embodiment of the present invention only needs to traverse the directory list once, in the preset stage; subsequent cache cleaning does not traverse the directory list again but only obtains a specified amount of target file information from the file management list in order, thereby avoiding the large overhead of traversing the whole directory list on every cache cleaning pass.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Referring to Fig. 5, which is a schematic structural diagram of still another cache data processing device provided by an embodiment of the present invention, the cache data processing device 1000 may include: at least one processor 1001 such as a CPU, at least one network interface 1004, a user interface 1003, a memory 1005 and at least one communication bus 1002. The communication bus 1002 is used to implement the connection and communication between these components. The user interface 1003 can include a display (Display) and a keyboard (Keyboard), and optionally can also include a standard wired interface and a wireless interface. The network interface 1004 can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 can be a high-speed RAM memory, or a non-volatile memory, for example at least one disk memory. The memory 1005 can optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Fig. 5, the memory 1005, as a computer storage medium, can include an operating system, a network communication module, a user interface module and a device control application program.
In the cache data processing device 1000 shown in Fig. 5, the user interface 1003 is mainly used to provide the user with an input interface and to obtain data output by the user, and the processor 1001 can be used to call the device control application program stored in the memory 1005 and specifically perform the following steps:
detecting the disk space utilization rate and the inode utilization rate;
when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold, obtaining the preset file management list, in which the file use information corresponding to each piece of target file information is updated in real time;
among the target file information sorted according to the file use information, obtaining at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and cleaning up the cached data corresponding to each piece of to-be-cleaned file information.
In one embodiment, the processor 1001 further performs the following steps:
when it is detected that the file corresponding to a piece of target file information is used, determining that target file information to be the to-be-updated file information;
updating the file use information corresponding to the to-be-updated file information in the file management list;
re-sorting the to-be-updated file information within the file management list according to the updated file use information.
In one embodiment, before detecting the disk space utilization rate and the inode utilization rate, the processor 1001 further performs the following steps:
traversing the directory list to obtain each piece of target file information in the directory list;
sorting the target file information according to the file use information corresponding to each piece of target file information;
adding the sorted target file information to the file management list in order;
wherein the file use information includes a file use time or a file use count.
In one embodiment, before traversing the directory list to obtain each piece of target file information in the directory list, the processor 1001 further performs the following steps:
presetting the first utilization rate threshold, the second utilization rate threshold and the file cleaning quantity;
wherein the amount of the at least one piece of to-be-cleaned file information is the same as the file cleaning quantity.
In one embodiment, after obtaining, among the target file information sorted according to the file use information, at least one piece of target file information in order as at least one piece of to-be-cleaned file information, and cleaning up the cached data corresponding to each piece of to-be-cleaned file information, the processor 1001 further performs the following steps:
judging whether the disk space utilization rate is below the first utilization rate threshold and whether the inode utilization rate is below the second utilization rate threshold;
if both are, stopping the cached data cleaning;
if not, further obtaining at least one new piece of to-be-cleaned file information, and cleaning up the cached data corresponding to each new piece of to-be-cleaned file information.
In the embodiment of the present invention, by detecting the disk space utilization rate and the inode utilization rate, the preset file management list can be obtained when the disk space utilization rate exceeds the first utilization rate threshold and/or the inode utilization rate exceeds the second utilization rate threshold. The file use information corresponding to each piece of target file information in the file management list is updated in real time. Then, among the target file information sorted according to the file use information, at least one piece of target file information is obtained in order as at least one piece of to-be-cleaned file information, and the cached data corresponding to each piece of to-be-cleaned file information is cleaned up. Since a cache cleaning pass only needs to obtain a specified amount of target file information from the file management list in order, without traversing the specified directory again, the system burden of cache cleaning is effectively reduced.
Those of ordinary skill in the art will appreciate that all or part of the procedures in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and certainly cannot be used to limit the scope of its claims; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (10)

1. A cached data processing method, characterized by comprising:
detecting a disk space utilization rate and an index node utilization rate;
when the disk space utilization rate exceeds a first utilization rate threshold and/or the index node utilization rate exceeds a second utilization rate threshold, obtaining a preset file management list, wherein file usage information respectively corresponding to each piece of target file information in the file management list is updated in real time;
obtaining, in order, at least one piece of target file information from the target file information sorted according to the file usage information, as at least one piece of to-be-cleaned file information, and clearing cached data respectively corresponding to the at least one piece of to-be-cleaned file information.
2. The method according to claim 1, characterized by further comprising:
when it is detected that a file corresponding to the target file information is used, determining the target file information of the used file as file information to be updated;
updating the file usage information corresponding to the to-be-updated file information in the file management list;
re-sorting the to-be-updated file information in the file management list according to the updated file usage information.
3. The method according to claim 1, characterized in that, before the step of detecting the disk space utilization rate and the index node utilization rate, the method further comprises:
traversing a directory listing to obtain each piece of target file information in the directory listing;
sorting the target file information according to the file usage information respectively corresponding to each piece of target file information;
adding the sorted target file information to the file management list in order;
wherein the file usage information comprises a file use time or a file use count.
4. The method according to claim 3, characterized in that, before the step of traversing the directory listing to obtain each piece of target file information in the directory listing, the method further comprises:
presetting the first utilization rate threshold, the second utilization rate threshold and a file cleanup quantity;
wherein the number of the at least one piece of to-be-cleaned file information is equal to the file cleanup quantity.
5. The method according to claim 1, characterized in that, after the step of obtaining, in order, at least one piece of target file information from the target file information sorted according to the file usage information as at least one piece of to-be-cleaned file information and clearing the cached data respectively corresponding to the at least one piece of to-be-cleaned file information, the method further comprises:
judging whether the disk space utilization rate is below the first utilization rate threshold and whether the index node utilization rate is below the second utilization rate threshold;
if so, stopping the clearing of cached data;
if not, further obtaining at least one piece of new to-be-cleaned file information, and clearing the cached data respectively corresponding to the at least one piece of new to-be-cleaned file information.
6. A cached data processing apparatus, characterized by comprising:
a detection module, configured to detect a disk space utilization rate and an index node utilization rate;
an acquisition module, configured to obtain a preset file management list when the disk space utilization rate exceeds a first utilization rate threshold and/or the index node utilization rate exceeds a second utilization rate threshold, wherein file usage information respectively corresponding to each piece of target file information in the file management list is updated in real time;
a cache clearing module, configured to obtain, in order, at least one piece of target file information from the target file information sorted according to the file usage information, as at least one piece of to-be-cleaned file information, and to clear cached data respectively corresponding to the at least one piece of to-be-cleaned file information.
7. The apparatus according to claim 6, characterized by further comprising:
a determining module, configured to determine, when it is detected that a file corresponding to the target file information is used, the target file information of the used file as file information to be updated;
an updating module, configured to update the file usage information corresponding to the to-be-updated file information in the file management list;
the updating module being further configured to re-sort the to-be-updated file information in the file management list according to the updated file usage information.
8. The apparatus according to claim 6, characterized by further comprising:
a traversal module, configured to traverse a directory listing to obtain each piece of target file information in the directory listing;
a sorting module, configured to sort the target file information according to the file usage information respectively corresponding to each piece of target file information;
an adding module, configured to add the sorted target file information to the file management list in order;
wherein the file usage information comprises a file use time or a file use count.
9. The apparatus according to claim 8, characterized by further comprising:
a presetting module, configured to preset the first utilization rate threshold, the second utilization rate threshold and a file cleanup quantity;
wherein the number of the at least one piece of to-be-cleaned file information is equal to the file cleanup quantity.
10. The apparatus according to claim 6, characterized by further comprising:
a judging module, configured to judge, after the cache clearing module has cleared cached data, whether the disk space utilization rate is below the first utilization rate threshold and whether the index node utilization rate is below the second utilization rate threshold;
a stopping module, configured to stop the clearing of cached data if the judging module judges yes;
the cache clearing module being further configured to, if the judging module judges no, further obtain at least one piece of new to-be-cleaned file information and clear the cached data respectively corresponding to the at least one piece of new to-be-cleaned file information.
CN201510262136.4A 2015-05-21 2015-05-21 Cache data processing method and device Active CN106294206B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510262136.4A CN106294206B (en) 2015-05-21 2015-05-21 Cache data processing method and device

Publications (2)

Publication Number Publication Date
CN106294206A true CN106294206A (en) 2017-01-04
CN106294206B CN106294206B (en) 2022-04-29

Family

ID=57632403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510262136.4A Active CN106294206B (en) 2015-05-21 2015-05-21 Cache data processing method and device

Country Status (1)

Country Link
CN (1) CN106294206B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7200623B2 (en) * 1998-11-24 2007-04-03 Oracle International Corp. Methods to perform disk writes in a distributed shared disk system needing consistency across failures
CN1804831A (en) * 2005-01-13 2006-07-19 陈翌 Network cache management system and method
CN101075241A (en) * 2006-12-26 2007-11-21 腾讯科技(深圳)有限公司 Method and system for processing buffer
CN102333079A (en) * 2011-02-25 2012-01-25 北京兴宇中科科技开发股份有限公司 Method for clearing disk space

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107943718A (en) * 2017-12-07 2018-04-20 网宿科技股份有限公司 A kind of method and apparatus for clearing up cache file
CN110109879B (en) * 2018-01-18 2023-07-18 伊姆西Ip控股有限责任公司 Method, apparatus and computer readable medium for flushing metadata in a multi-core system
CN110109879A (en) * 2018-01-18 2019-08-09 伊姆西Ip控股有限责任公司 The method, equipment and computer program product of metadata are washed away in multiple nucleus system
CN109002485A (en) * 2018-06-25 2018-12-14 郑州云海信息技术有限公司 A kind of management method of core file, device and storage medium
CN109656885B (en) * 2018-12-18 2022-04-29 Oppo广东移动通信有限公司 Storage space monitoring method and device, electronic terminal and storage medium
CN109656885A (en) * 2018-12-18 2019-04-19 Oppo广东移动通信有限公司 Memory space monitoring method and device, electric terminal, storage medium
CN110287160A (en) * 2019-05-31 2019-09-27 广东睿江云计算股份有限公司 A kind of spatial cache method for cleaning and device
CN110287160B (en) * 2019-05-31 2023-09-12 广东睿江云计算股份有限公司 Cache space cleaning method and device
CN110362769A (en) * 2019-06-25 2019-10-22 苏州浪潮智能科技有限公司 A kind of data processing method and device
CN110750411A (en) * 2019-08-26 2020-02-04 上海商米科技集团股份有限公司 Method and device for monitoring, early warning and repairing file index node
CN110750411B (en) * 2019-08-26 2023-05-05 上海商米科技集团股份有限公司 Method and device for monitoring, early warning and repairing file index node
CN111142803A (en) * 2019-12-29 2020-05-12 北京浪潮数据技术有限公司 Metadata disk refreshing method, device, equipment and medium
CN111142803B (en) * 2019-12-29 2022-07-08 北京浪潮数据技术有限公司 Metadata disk refreshing method, device, equipment and medium
CN113342277A (en) * 2021-06-21 2021-09-03 上海哔哩哔哩科技有限公司 Data processing method and device

Also Published As

Publication number Publication date
CN106294206B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN106294206A (en) A kind of caching data processing method and device
CN107943718B (en) Method and device for cleaning cache file
CN103995855B (en) The method and apparatus of data storage
CN107533507A (en) According to the data in log-structured managing storage
US20110276578A1 (en) Obtaining file system view in block-level data storage systems
CN103761159B (en) Method and system for processing incremental snapshot
CN110232049A (en) A kind of metadata cache management method and device
CN109491928A (en) Buffer control method, device, terminal and storage medium
CN108255620A (en) A kind of business logic processing method, apparatus, service server and system
JP2008158993A (en) Storage system
US8635224B2 (en) Clustering streaming graphs
CN108108127A (en) A kind of file reading and system
CN105607986A (en) Acquisition method and device of user behavior log data
CN106682186A (en) File access control list (ACL) management method and related device and system
WO2014183514A1 (en) Method, device, and computer storage medium for hierarchical storage
CN106302595A (en) A kind of method and apparatus that server is carried out physical examination
CN104156321B (en) The method and device of a kind of data pre-fetching
CN109240611A (en) The cold and hot data hierarchy method of small documents, small documents data access method and its device
CN106445730A (en) Method for improving performance of virtual machine, and terminal
CN104657358B (en) Realize the method and system of web page program offline cache
CN105468538B (en) A kind of internal memory migration method and apparatus
US11934660B1 (en) Tiered data storage with ephemeral and persistent tiers
CN108694188A (en) A kind of newer method of index data and relevant apparatus
CN108319634A (en) The directory access method and apparatus of distributed file system
CN111124283A (en) Storage space management method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: 510000, Guangdong Province, Guangzhou, Panyu District Town, Huambo business district, Wanda Plaza, block B1, 28 floor

Applicant after: Guangzhou Huaduo Network Technology Co., Ltd.

Address before: 510655, Guangzhou, Whampoa Avenue, No. 2, creative industrial park, building 3-08,

Applicant before: Guangzhou Huaduo Network Technology Co., Ltd.

SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170104

Assignee: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000031

Denomination of invention: A buffer data processing method and device

License type: Common License

Record date: 20210125

GR01 Patent grant