CN104424120B - Cache management method and device - Google Patents

Cache management method and device

Info

Publication number
CN104424120B
CN104424120B
Authority
CN
China
Prior art keywords
page
cache
fragments
raw data
hit rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310384995.1A
Other languages
Chinese (zh)
Other versions
CN104424120A (en)
Inventor
齐明
李少明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Founder Information Industry Holdings Co Ltd
Peking University Founder Group Co Ltd
Beijing Founder Electronics Co Ltd
Original Assignee
Founder Information Industry Holdings Co Ltd
Peking University Founder Group Co Ltd
Beijing Founder Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Founder Information Industry Holdings Co Ltd, Peking University Founder Group Co Ltd, Beijing Founder Electronics Co Ltd
Priority to CN201310384995.1A
Publication of CN104424120A
Application granted
Publication of CN104424120B
Expired - Fee Related
Anticipated expiration


Abstract

The invention provides a cache management method and device. The method includes: saving complete pages in a first cache, saving the page fragments that make up the pages in a second cache, and saving the raw data from which the page fragments are built in a third cache; calculating, in real time, the cache hit rate of a page in the first cache; if the cache hit rate is higher than a set first threshold, keeping the page stored in the first cache; otherwise deleting the page from the first cache and selecting a caching mode according to the data characteristics of the page. The invention improves the utilization efficiency of caches.

Description

Cache management method and device
Technical field
The present invention relates to the field of web technology, and in particular to a cache management method and device.
Background art
At present, in the field of web technology, most websites temporarily store frequently used data in higher-speed storage and, when that data is accessed, use the cached copy directly instead of accessing the slower device, thereby improving system performance.
Currently, however, each website uses a single cache. Because different data have different access characteristics, a single cache cannot accommodate these differences, and the cache ends up being used inefficiently.
Summary of the invention
The embodiments of the present invention provide a cache management method and device to improve the utilization efficiency of caches.
To this end, the embodiments of the present invention provide the following technical solutions:
A cache management method, including:
saving complete pages in a first cache, saving the page fragments that make up the pages in a second cache, and saving the raw data that makes up the page fragments in a third cache;
calculating, in real time, the cache hit rate of a page in the first cache;
if the cache hit rate is higher than a set first threshold, keeping the page stored in the first cache; otherwise deleting the page from the first cache and selecting a caching mode according to the data characteristics of the page.
Further, the method also includes:
calculating, in real time, the cache hit rate of a page fragment in the second cache;
if the cache hit rate of the page fragment is higher than a set second threshold, keeping the page fragment stored in the second cache; otherwise deleting the page fragment from the second cache.
Further, the method also includes:
calculating, in real time, the cache hit rate of the raw data in the third cache;
if the cache hit rate of the raw data is higher than a set third threshold, keeping the raw data stored in the third cache; otherwise deleting the raw data from the third cache.
Preferably, saving complete pages in the first cache includes:
saving a page with the URL of the page as the key.
Preferably, saving the page fragments that make up the pages in the second cache includes:
saving the text of a page fragment with the URL of the page plus the number of the page fragment as the key.
Preferably, saving the raw data that makes up the page fragments in the third cache includes:
storing the raw data with the object type and the object primary-key ID of the raw data as a combined key.
A cache management device for a website, including:
a first cache, configured to save complete pages;
a second cache, configured to save the page fragments that make up the pages;
a third cache, configured to save the raw data that makes up the page fragments;
a computing module, configured to calculate, in real time, the cache hit rate of a page in the first cache;
an update module, configured to keep the page stored in the first cache when the cache hit rate of the page is higher than a set first threshold, and otherwise to delete the page from the first cache and select a caching mode according to the data characteristics of the page.
Further, the computing module is also configured to calculate, in real time, the cache hit rate of a page fragment in the second cache;
correspondingly, the update module is also configured to keep the page fragment stored in the second cache when the cache hit rate of the page fragment is higher than a set second threshold, and otherwise to delete the page fragment from the second cache.
Further, the computing module is also configured to calculate, in real time, the cache hit rate of the raw data in the third cache;
correspondingly, the update module is also configured to keep the raw data stored in the third cache when the cache hit rate of the raw data is higher than a set third threshold, and otherwise to delete the raw data from the third cache.
Preferably, the first cache saves a page with the URL of the page as the key; the second cache saves the text of a page fragment with the URL of the page plus the number of the page fragment as the key; and the third cache stores the raw data with the object type and the object primary-key ID of the raw data as a combined key.
The cache management method and device provided by the embodiments of the present invention use a layered caching scheme in which a specific cache can be selected according to the characteristics of the data, so the utilization efficiency of the cache is greatly improved. Further, by automatically adjusting how the caches are used, their use can be dynamically adjusted in real time according to the characteristics of the system's data, further improving caching efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of the cache management method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the cache management device according to an embodiment of the present invention;
Fig. 3 is a schematic architecture diagram of the layered caches of a website system in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the process by which a website automatically adjusts its layered caches in an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
To address the prior-art problem that each website uses a single cache, which leads to low cache utilization, the embodiments of the present invention provide a cache management method and device for websites that use a layered caching scheme, in which a specific cache can be selected according to the characteristics of the data, so the utilization efficiency of the cache is greatly improved. Further, by automatically adjusting how the caches are used, their use can be dynamically adjusted in real time according to the characteristics of the system's data, further improving caching efficiency.
It should be noted that, in the embodiments of the present invention, the caches at the different levels can be implemented as follows.
Page cache: the page cache can be implemented with filter technology; a filter intercepts the requested page and caches the text of the page with the URL as the key. In a concrete implementation, the system must not only be able to cache pages, but must also be able to choose dynamically, according to external instructions, whether to use the cache or to generate the page directly.
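As an illustration only, such a filter-based page cache could look roughly like the following sketch for a Java servlet environment; the class and field names (PageCacheFilter, pageCache, cacheEnabled) and the response-capturing details are assumptions, not part of the patent.

```java
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a page-level cache implemented as a servlet Filter: the filter
// intercepts the request and serves the cached page text, keyed by URL.
public class PageCacheFilter implements Filter {
    // Full page text keyed by the request URL.
    private final ConcurrentHashMap<String, String> pageCache = new ConcurrentHashMap<>();
    // External switch: use the cache, or always regenerate the page.
    private volatile boolean cacheEnabled = true;

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        String key = ((HttpServletRequest) req).getRequestURL().toString();
        String cached = cacheEnabled ? pageCache.get(key) : null;
        if (cached != null) {
            resp.getWriter().write(cached);   // cache hit: serve the stored page text
            return;
        }
        chain.doFilter(req, resp);            // cache miss: generate the page normally
        // A full implementation would capture the generated response body
        // (e.g. with a response wrapper) and store it: pageCache.put(key, body);
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}
```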
Page fragment cache: the page fragment cache can use the same technology as the page cache above; page fragments are cached at the time the page is generated, and the text of a page fragment is cached with the URL plus the fragment number as the key. In a concrete implementation, the system must likewise not only be able to cache page fragments, but must also be able to choose dynamically, according to external instructions, whether to use the cached page fragments or to generate them directly.
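A minimal sketch of such a fragment cache follows; the key format (URL plus a "#frag:" suffix) and the class name are illustrative assumptions.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the page fragment cache: the key combines the page URL with the
// fragment number, so one fragment can be reused when different pages are assembled.
public class FragmentCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    private String key(String pageUrl, int fragmentNumber) {
        return pageUrl + "#frag:" + fragmentNumber;     // URL + fragment number as the key
    }

    public void put(String pageUrl, int fragmentNumber, String fragmentText) {
        cache.put(key(pageUrl, fragmentNumber), fragmentText);
    }

    public String get(String pageUrl, int fragmentNumber) {
        return cache.get(key(pageUrl, fragmentNumber)); // null means the fragment must be regenerated
    }
}
```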
Raw data cache: the raw data cache can be built on objects in memory. After the raw data is extracted, it is wrapped in an in-memory data object; subsequent modifications, deletions and queries then operate directly on the in-memory data object, avoiding access to the database. When a data object is stored in memory, the object type and the object primary-key ID can be used together as a combined key; during a data operation the corresponding data object is retrieved by this key and then processed and displayed. In a concrete implementation, the system must not only be able to cache the raw data, but must also be able to choose dynamically, according to external instructions, whether to use the raw data cache or to access the database directly to obtain the data.
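The combined key can be sketched as follows; the class name, the key separator and the plain Object value type are assumptions made for illustration.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the raw data cache: in-memory data objects stored under a combined key
// of object type and primary-key ID, so reads, updates and deletes can skip the database.
public class RawDataCache {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    private String key(String objectType, long id) {
        return objectType + ":" + id;          // combined key: object type + primary-key ID
    }

    public void put(String objectType, long id, Object dataObject) {
        cache.put(key(objectType, id), dataObject);
    }

    public Object get(String objectType, long id) {
        return cache.get(key(objectType, id)); // a miss means falling back to the database
    }

    public void remove(String objectType, long id) {
        cache.remove(key(objectType, id));     // e.g. after the underlying record is deleted
    }
}
```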
Fig. 1 is a flow chart of the cache management method according to an embodiment of the present invention, which includes the following steps:
Step 101: save complete pages in a first cache, save the page fragments that make up the pages in a second cache, and save the raw data that makes up the page fragments in a third cache;
Step 102: calculate, in real time, the cache hit rate of a page in the first cache;
Step 103: if the cache hit rate is higher than a set first threshold, keep the page stored in the first cache; otherwise delete the page from the first cache and select a caching mode according to the data characteristics of the page.
Of course, for the page fragments in the second cache, the cache hit rate of a page fragment in the second cache can also be calculated in real time. If the cache hit rate of the page fragment is higher than a set second threshold, the page fragment is kept in the second cache; otherwise the page fragment is deleted from the second cache.
Likewise, for the raw data in the third cache, the cache hit rate of the raw data in the third cache can also be calculated in real time. If the cache hit rate of the raw data is higher than a set third threshold, the raw data is kept in the third cache; otherwise the raw data is deleted from the third cache.
The cache hit rate of a page fragment is the ratio between the requested page fragments that are found in the cache and those that are not found there (and therefore have to be generated on the fly): that is, the number of lookups that hit the cache as a percentage of the total number of lookups (the sum of the lookups that hit the cache and the lookups that miss it). Similarly, the cache hit rate of the raw data is the ratio between lookups that hit the cache and lookups that miss it (and therefore require a database access); both hit rates are computed in the same way.
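A minimal sketch of this statistic, assuming simple hit and miss counters (the class and method names are illustrative, not from the patent):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the hit-rate statistic defined above: hits as a percentage of all lookups,
// where the total is the sum of cache hits and cache misses.
public class HitRateCounter {
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    public void recordHit()  { hits.incrementAndGet(); }
    public void recordMiss() { misses.incrementAndGet(); }

    // Hit rate as a percentage of all lookups; 0 when nothing has been looked up yet.
    public double hitRatePercent() {
        long h = hits.get(), m = misses.get();
        long total = h + m;
        return total == 0 ? 0.0 : 100.0 * h / total;
    }
}
```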
It should be noted that the first threshold, the second threshold and the third threshold may be the same or different; the embodiments of the present invention impose no restriction on this.
It can be seen that the cache management method provided by the embodiments of the present invention uses a layered caching scheme in which a specific cache can be selected according to the characteristics of the data, so the utilization efficiency of the cache is greatly improved. Further, by automatically adjusting how the caches are used, their use can be dynamically adjusted in real time according to the characteristics of the system's data, further improving caching efficiency.
Correspondingly, an embodiment of the present invention also provides a cache management device; Fig. 2 is a schematic structural diagram of this device.
In this embodiment, the device includes:
a first cache 201, configured to save complete pages;
a second cache 202, configured to save the page fragments that make up the pages;
a third cache 203, configured to save the raw data that makes up the page fragments;
a computing module 204, configured to calculate, in real time, the cache hit rate of a page in the first cache;
an update module 205, configured to keep the page stored in the first cache when the cache hit rate of the page is higher than a set first threshold, and otherwise to delete the page from the first cache and select a caching mode according to the data characteristics of the page.
Of course, in another embodiment of the cache management device of the present invention, the computing module 204 can also calculate, in real time, the cache hit rate of a page fragment in the second cache. Correspondingly, the update module 205 is also configured to keep the page fragment stored in the second cache when the cache hit rate of the page fragment is higher than a set second threshold, and otherwise to delete the page fragment from the second cache.
Likewise, in another embodiment of the cache management device of the present invention, the computing module 204 can also calculate, in real time, the cache hit rate of the raw data in the third cache. Correspondingly, the update module 205 is also configured to keep the raw data stored in the third cache when the cache hit rate of the raw data is higher than a set third threshold, and otherwise to delete the raw data from the third cache.
The cache hit rates of the page fragments and of the raw data are calculated in the same way as the cache hit rate of the page, and are not described in detail here.
The cache management method and device of the embodiments of the present invention can be applied to a website system to implement layered caching. Fig. 3 is a schematic architecture diagram of the layered caches of a website system in an embodiment of the present invention.
In the layered cache architecture of the website, a user visits the website and requests a page; the page is assembled from several page fragments, the page fragments are in turn generated from raw data, and the raw data is obtained from the database.
Based on the architecture shown in Fig. 3, the complete page returned to the user can be cached, individual page fragments within it can be cached, and the raw data can also be cached; together these form the layered cache architecture of the website.
If the full page has been cached, the next user to visit the page obtains it directly from the cache, skipping the subsequent page synthesis, page fragment generation and raw data retrieval, which improves the access performance of the system. Similarly, if page fragments have been cached, the subsequent page fragment generation and raw data retrieval can be skipped, which also improves access performance to some extent.
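The layered lookup implied by Fig. 3 can be sketched as follows; this is an illustrative assumption about how the three caches might be consulted in order, and all class, field and helper names (LayeredPageService, loadFromDatabase, renderFragment) are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the layered lookup: full-page cache first, then cached page fragments,
// then cached raw data, and the database only as a last resort.
public class LayeredPageService {
    private final Map<String, String> pageCache = new ConcurrentHashMap<>();
    private final Map<String, String> fragmentCache = new ConcurrentHashMap<>();
    private final Map<String, Object> rawDataCache = new ConcurrentHashMap<>();

    public String servePage(String url) {
        String page = pageCache.get(url);
        if (page != null) return page;                    // full-page hit: skip everything below

        String fragment = fragmentCache.get(url + "#frag:1");
        if (fragment == null) {
            Object raw = rawDataCache.get("page:" + url);
            if (raw == null) raw = loadFromDatabase(url); // slowest path: go to the database
            fragment = renderFragment(raw);
            fragmentCache.put(url + "#frag:1", fragment);
        }
        page = "<html>" + fragment + "</html>";           // synthesize the page from its fragments
        pageCache.put(url, page);
        return page;
    }

    private Object loadFromDatabase(String url) { return "raw data for " + url; }   // stub
    private String renderFragment(Object raw)  { return "<div>" + raw + "</div>"; } // stub
}
```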
The layered cache architecture of the website uses different caching modes according to the access characteristics of different data: the access pattern of some data suits full-page caching, that of other data suits page fragment caching, and that of yet other data suits raw data caching. By combining the cache levels in this way, the use of the caches can be maximized and the caching efficiency of the data greatly improved.
Specifically, the caching mode can be determined according to whether the cached data changes frequently; for example, some pages stay the same while other pages change often. Take a book-recommendation page, and assume its addresses are:
http://www.book.com/recommend.jsp?Page=1
http://www.book.com/recommend.jsp?Page=2
http://www.book.com/recommend.jsp?Page=3
If these three pages stay the same, then under the page-caching mode the three pages can be cached with their URLs as keys, and users can use the cached pages directly when they view the recommendations. Search result pages, however, are not suitable for page caching, because these pages change frequently and differ widely from one another. For example, the search addresses are:
http://www.search.com/result.jsp?Keyword=starfishes
http://www.search.com/result.jsp?Keyword=thieves
http://www.search.com/result.jsp?Keyword=toilet brush
What one person searches for is unlikely to be searched for by another person, so using page caching in this case essentially wastes memory.
However, if the search result page contains a region that displays popular search terms, that region can be cached as a page fragment: the result pages of different users' searches differ, but this region is identical in every page, so it can be cached once and used directly when the different result pages are assembled.
Therefore, in the embodiments of the present invention, different caching modes can be applied to different situations, and of course the three caching modes above can also be used at the same time.
Fig. 4 is a schematic diagram of the process by which the website automatically adjusts its layered caches in an embodiment of the present invention.
In this process, the computing module gathers real-time statistics on the cache hit rates of the pages, the page fragments and the raw data. For a page, for example, it collects the number of page accesses served from the cache and the number of accesses to the physical page, and calculates the cached accesses as a percentage of the total accesses. When this percentage is lower than the set threshold (the first threshold), it sends the page cache an instruction to remove the cache; when it is higher than the set threshold, it sends an instruction to use the cache. Each cache module automatically adjusts its use of the cache after receiving the instruction.
The specific adjustment process is as follows (a sketch of the step-down logic is given after the steps below):
Step 1: the system first uses the page cache.
Step 2: the computing module calculates the page cache hit rate; if the calculated page cache hit rate is lower than the set threshold (the first threshold), the page cache is removed; if it stays above the set threshold (the first threshold), the page cache remains in use.
Step 3: when the page cache hit rate is lower than the set threshold (the first threshold), the system automatically removes the page cache and switches to page fragment caching.
Step 4: the computing module goes on to calculate the page fragment cache hit rate; if the calculated page fragment cache hit rate is lower than the set threshold (the second threshold), the page fragment cache is removed; if it stays above the set threshold (the second threshold), the page fragment cache remains in use.
Step 5: when the page fragment cache hit rate is lower than the set threshold (the second threshold), the system automatically removes the page fragment cache and switches to the raw data cache.
Step 6: the computing module goes on to calculate the raw data cache hit rate; if the calculated raw data cache hit rate is lower than the set threshold (the third threshold), the raw data cache is removed and the data is obtained directly from the database; if it stays above the set threshold (the third threshold), the raw data cache remains in use.
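Purely as an illustration, the step-down logic of Steps 1-6 might be expressed as follows; the enum, class and parameter names are assumptions, and the hit rates are supplied by whatever statistics component plays the role of the computing module.

```java
// Sketch of the step-down adjustment: the system starts at the page cache level and
// drops to the next level whenever the current level's hit rate falls below its threshold.
enum CacheLevel { PAGE, FRAGMENT, RAW_DATA, NONE }

public class CacheAdjuster {
    private CacheLevel level = CacheLevel.PAGE;   // Step 1: start with the page cache

    public void adjust(double pageHitRate, double fragmentHitRate, double rawDataHitRate,
                       double firstThreshold, double secondThreshold, double thirdThreshold) {
        switch (level) {
            case PAGE:                                 // Steps 2-3
                if (pageHitRate < firstThreshold) level = CacheLevel.FRAGMENT;
                break;
            case FRAGMENT:                             // Steps 4-5
                if (fragmentHitRate < secondThreshold) level = CacheLevel.RAW_DATA;
                break;
            case RAW_DATA:                             // Step 6
                if (rawDataHitRate < thirdThreshold) level = CacheLevel.NONE; // go straight to the database
                break;
            default:
                break;
        }
    }

    public CacheLevel currentLevel() { return level; }
}
```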
It should be noted that the calculations of the different cache hit rates described above can be performed at the same time; in a concrete implementation they can be carried out by different threads.
It can be seen that, with the cache management method and device of the present invention, the website automatically adjusts its layered caches, improving the utilization efficiency of the system's layered caches, with the following notable effects:
1. Improved system performance: through the use of layered caching and of the automatic adjustment function, the system can make maximal use of the caches, improving the response speed of the system and using the caches in an optimal way to improve performance.
2. Improved utilization of system resources: a cache is itself a system resource, and using a cache also consumes underlying system resources; layered caching and the automatic adjustment function raise the utilization efficiency of these resources and let them deliver more of their value.
3. Optimized allocation of system resources: using caches necessarily consumes the system's high-speed storage, such as memory, and using these resources inevitably reduces the resources available to other system functions; when caches are used inappropriately, this amounts to wasting resources. Automatically adjusting the layered caches optimizes the use of resources, avoids the situation in which resources are occupied but poorly utilized, and improves the system's ability to allocate resources optimally.
The cache management method and device provided by the embodiments of the present invention can select a specific caching mode according to the characteristics of the data, so the utilization efficiency of the cache is greatly improved; at the same time, automatically adjusting how the caches are used makes it possible to dynamically adjust their use in real time according to the characteristics of the system's data, further improving caching efficiency.
Obviously, those skilled in the art should understand that the modules and steps of the present invention described above can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; alternatively, they can each be made into individual integrated-circuit modules, or multiple modules or steps among them can be made into a single integrated-circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A cache management method, characterized in that it includes:
saving complete pages in a first cache, saving the page fragments that make up the pages in a second cache, and saving the raw data that makes up the page fragments in a third cache;
calculating, in real time, the cache hit rate of a page in the first cache;
if the cache hit rate is higher than a set first threshold, keeping the page stored in the first cache; otherwise deleting the page from the first cache and selecting a caching mode according to the data characteristics of the page.
2. The method according to claim 1, characterized in that it also includes:
calculating, in real time, the cache hit rate of a page fragment in the second cache;
if the cache hit rate of the page fragment is higher than a set second threshold, keeping the page fragment stored in the second cache; otherwise deleting the page fragment from the second cache.
3. The method according to claim 1 or 2, characterized in that it also includes:
calculating, in real time, the cache hit rate of the raw data in the third cache;
if the cache hit rate of the raw data is higher than a set third threshold, keeping the raw data stored in the third cache; otherwise deleting the raw data from the third cache.
4. The method according to claim 1, characterized in that saving complete pages in the first cache includes:
saving a page with the URL of the page as the key.
5. The method according to claim 1, characterized in that saving the page fragments that make up the pages in the second cache includes:
saving the text of a page fragment with the URL of the page plus the number of the page fragment as the key.
6. The method according to claim 1, characterized in that saving the raw data that makes up the page fragments in the third cache includes:
storing the raw data with the object type and the object primary-key ID of the raw data as a combined key.
7. A cache management device for a website, characterized in that it includes:
a first cache, configured to save complete pages;
a second cache, configured to save the page fragments that make up the pages;
a third cache, configured to save the raw data that makes up the page fragments;
a computing module, configured to calculate, in real time, the cache hit rate of a page in the first cache;
an update module, configured to keep the page stored in the first cache when the cache hit rate of the page is higher than a set first threshold, and otherwise to delete the page from the first cache and select a caching mode according to the data characteristics of the page.
8. The device according to claim 7, characterized in that:
the computing module is also configured to calculate, in real time, the cache hit rate of a page fragment in the second cache;
the update module is also configured to keep the page fragment stored in the second cache when the cache hit rate of the page fragment is higher than a set second threshold, and otherwise to delete the page fragment from the second cache.
9. The device according to claim 7 or 8, characterized in that:
the computing module is also configured to calculate, in real time, the cache hit rate of the raw data in the third cache;
the update module is also configured to keep the raw data stored in the third cache when the cache hit rate of the raw data is higher than a set third threshold, and otherwise to delete the raw data from the third cache.
10. The device according to claim 7, characterized in that:
the first cache saves a page with the URL of the page as the key;
the second cache saves the text of a page fragment with the URL of the page plus the number of the page fragment as the key;
the third cache stores the raw data with the object type and the object primary-key ID of the raw data as a combined key.
CN201310384995.1A 2013-08-29 2013-08-29 Cache management method and device Expired - Fee Related CN104424120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310384995.1A CN104424120B (en) 2013-08-29 2013-08-29 Cache management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310384995.1A CN104424120B (en) 2013-08-29 2013-08-29 Cache management method and device

Publications (2)

Publication Number Publication Date
CN104424120A CN104424120A (en) 2015-03-18
CN104424120B true CN104424120B (en) 2017-10-27

Family

ID=52973154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310384995.1A Expired - Fee Related CN104424120B (en) 2013-08-29 2013-08-29 Cache management method and device

Country Status (1)

Country Link
CN (1) CN104424120B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075236A (en) * 2006-06-12 2007-11-21 腾讯科技(深圳)有限公司 Apparatus and method for accelerating browser webpage display
CN101702173A (en) * 2009-11-11 2010-05-05 中兴通讯股份有限公司 Method and device for increasing access speed of mobile portal dynamic page
CN102955786A (en) * 2011-08-22 2013-03-06 北大方正集团有限公司 Method and system for caching and distributing dynamic webpage data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100509276B1 (en) * 2001-08-20 2005-08-22 엔에이치엔(주) Method for searching web page on popularity of visiting web pages and apparatus thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075236A (en) * 2006-06-12 2007-11-21 腾讯科技(深圳)有限公司 Apparatus and method for accelerating browser webpage display
CN101702173A (en) * 2009-11-11 2010-05-05 中兴通讯股份有限公司 Method and device for increasing access speed of mobile portal dynamic page
CN102955786A (en) * 2011-08-22 2013-03-06 北大方正集团有限公司 Method and system for caching and distributing dynamic webpage data

Also Published As

Publication number Publication date
CN104424120A (en) 2015-03-18

Similar Documents

Publication Publication Date Title
CN104915319B (en) The system and method for cache information
US6651141B2 (en) System and method for populating cache servers with popular media contents
US20170134517A1 (en) Data storage based on content popularity
CN103178989B (en) Access hot statistics method and device
CN106649349A (en) Method, device and system for data caching, applicable to game application
CN107770259A (en) Copy amount dynamic adjusting method based on file temperature and node load
CN103491075A (en) Method and system for dynamically adjusting cached resource records of DNS recursive server
CN106899643A (en) A kind of user journal storage method and equipment
CN104346345A (en) Data storage method and device
CN106815260A (en) A kind of index establishing method and equipment
CN106293953B9 (en) A kind of method and system of the shared display data of access
Geethakumari et al. Single window stream aggregation using reconfigurable hardware
CN113094392A (en) Data caching method and device
CN109951317A (en) A kind of buffer replacing method of the popularity sensor model based on user's driving
CN105512051A (en) Self-learning type intelligent solid-state hard disk cache management method and device
CN104424120B (en) A kind of buffer memory management method and device
CN107707621A (en) A kind of method and device for realizing intelligent buffer
US20230026912A1 (en) Systems and methods for storing content items in secondary storage
CN103442000B (en) WEB caching replacement method and device, http proxy server
US20140325160A1 (en) Caching circuit with predetermined hash table arrangement
CN106331001B (en) A kind of cloud storage method and system of suitable mobile device access
US10686906B2 (en) Methods for managing multi-level flash storage and devices thereof
CN109460293B (en) Computing resource selection method under distributed computing environment in wireless cloud computing system
CN103365897A (en) Fragment caching method supporting Bigtable data model
WO2017049488A1 (en) Cache management method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171027

Termination date: 20190829

CF01 Termination of patent right due to non-payment of annual fee