CN103106153B - Web cache replacement method based on access density - Google Patents

Web cache replacement method based on access density

Info

Publication number
CN103106153B
CN103106153B CN201310054554.5A
Authority
CN
China
Prior art keywords
cache
access
accintvl
density
interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310054554.5A
Other languages
Chinese (zh)
Other versions
CN103106153A (en)
Inventor
何慧
李乔
张伟哲
刘亚维
王健
王冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201310054554.5A priority Critical patent/CN103106153B/en
Publication of CN103106153A publication Critical patent/CN103106153A/en
Application granted granted Critical
Publication of CN103106153B publication Critical patent/CN103106153B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to web cache replacement methods. It addresses the locality problem of the current LRU policy and the cache-pollution and low-hit-rate problems of LFU by providing a web cache replacement method based on access density. The method: check whether the cache object already exists in the cache pool; check whether the cache pool is full; perform initialization; delete the cache object with the lowest density value and add the new cache object to the cache pool; compute the current access interval; check whether this is a repeat access; compute the access density according to the formula and update the average access interval; update the related values; exit. The present invention is applied to the field of Internet storage.

Description

Web cache replacement method based on access density
Technical field
The present invention relates to web cache replacement methods.
Background technology
With the increasing diversity of web data, the distribution of web content has gradually become a key factor affecting web service performance. Mainstream data dissemination adopts content delivery network (CDN) technology, which redirects a user's request to the nearest server, thereby reducing access latency and the load on the origin server. To improve quality of service, CDN providers deploy content proxy servers at many network boundaries; Akamai, for example, operates more than 25,000 content servers inside more than 1,000 networks in over 70 countries and regions. Current CDN work usually focuses on the placement of proxy servers and the routing of content, yet cache efficiency is a key factor in content distribution performance. Addressing the caching mechanism in web content management, a cache replacement policy is proposed here that combines access density with content size, thereby reducing traffic pressure during content distribution and user access latency.
Cache replacement mechanisms rest on two principles: 1) frequently accessed information should be cached; 2) a change in an item's popularity manifests as a change in its access interval. Many existing cache replacement policies are based on re-reference time; current policies mainly use frequency or local locality as the replacement criterion, but none of them consider the whole access history, and the current LRU policy suffers from locality problems while LFU suffers from cache pollution and a low hit rate.
The reference "A novel cache replacement policy for ISP merged CDN" (Qian Li et al., International Conference on Parallel and Distributed Systems, pp. 708-709, December 19, 2009) discloses a new cache replacement policy oriented toward web content distribution.
Summary of the invention
The present invention solves the low cache hit rate of the current LRU and LFU replacement methods and the low byte hit rate of the GDSF replacement method, and provides a web cache replacement method based on access density.
(1) Check whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Check whether the cache pool is full; if full, go to step (3); if not, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(5) The object already exists in the cache pool: compute the current access interval;
(6) Check whether this is a consecutive repeat access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increment the access frequency, compute the access density, and go to step (10);
(8) If this is not a consecutive repeat access, compute the access density according to the formula and update the average access interval;
(9) Update the last access position and increment the cache object's access frequency;
(10) Increment the total cache access count and exit.
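The ten steps above can be sketched in code as follows. This is an illustrative reading of the flow, not the patented implementation: the class names, the `INITIAL_VALUE` constant, the first-revisit baseline, and the running-mean rule for the average interval are all assumptions filled in where the text is silent.

```python
INITIAL_VALUE = 1.0  # assumed initial density for a new object (not fixed by the text)
LAMBDA = 0.8         # the empirical lambda value selected in the experiments

class CacheEntry:
    """Bookkeeping initialized in steps (3)/(4) for each cached object."""
    def __init__(self, last_pos):
        self.ad_value = INITIAL_VALUE  # access density value
        self.last_pos = last_pos       # position of the last access in the global sequence
        self.freq = 1                  # access frequency
        self.avg_accintvl = 0.0        # average access interval (0.0 = not yet observed)

class CachePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}        # object id -> CacheEntry
        self.total_accesses = 0  # global access counter, incremented in step (10)

    def access(self, obj_id):
        """Process one cache access; returns True on a hit, False on a miss."""
        self.total_accesses += 1  # step (10), applied up front for simplicity
        if obj_id not in self.entries:                # step (1): not in the pool
            if len(self.entries) >= self.capacity:    # step (2): pool full?
                # step (3): evict the object with the minimum density value
                victim = min(self.entries, key=lambda k: self.entries[k].ad_value)
                del self.entries[victim]
            # steps (3)/(4): add the new object and initialize its bookkeeping
            self.entries[obj_id] = CacheEntry(self.total_accesses)
            return False
        e = self.entries[obj_id]
        # step (5): object already cached; compute the current access interval
        now_accintvl = self.total_accesses - e.last_pos
        if now_accintvl == 1 or e.avg_accintvl == 0.0:
            # step (7): consecutive repeat access (or first revisit, an assumed
            # baseline): average interval := current interval
            e.avg_accintvl = now_accintvl
        else:
            # step (8): update the density value per formula (1)
            total = e.avg_accintvl + now_accintvl
            if e.avg_accintvl < now_accintvl:    # interval grew: popularity falling
                e.ad_value *= LAMBDA * e.avg_accintvl / total
            elif e.avg_accintvl > now_accintvl:  # interval shrank: popularity rising
                e.ad_value *= 1 + LAMBDA * now_accintvl / total
            # running mean of intervals (one plausible update rule; the text
            # does not spell out how the average is maintained)
            e.avg_accintvl += (now_accintvl - e.avg_accintvl) / e.freq
        # step (9): update last access position and frequency
        e.last_pos = self.total_accesses
        e.freq += 1
        return True
```

In a pool of capacity 2, accessing `a, a, b, a, c` evicts `a` when `c` arrives, because the formula-(1) update has shrunk `a`'s density below `b`'s initial value; the sketch thus exercises both the miss/insert path and the density-based eviction path.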
Inventive principle:
One: Suppose the cache space can hold at most M cache objects; if the cache space is not full, the replacement process is the same as in other cache replacement policies;
Two: When the cache space is full and a new access to cache object i arrives, first compute the current access interval now_accintvl_i of object i:
If the current access interval now_accintvl_i is higher than the average access interval avg_accintvl_i of i, the popularity of cache object i is declining, so the density value ad_value_i of object i is decreased;
If the current access interval now_accintvl_i is lower than the average access interval avg_accintvl_i of i, the popularity of cache object i is rising, so the density value ad_value_i of object i is increased.
Effects of the invention:
By analyzing real web data access scenarios, it is found that the rate of change of the access interval predicts the hit rate with higher accuracy;
First, the present invention extracts the URLs (Uniform Resource Locators) of a real network and analyzes user access behavior at a campus network gateway, finding that in an LRU (least recently used) cache, URLs of high popularity are often evicted by URLs of low popularity; second, to avoid this loss of hits, the rate of change of the access interval is adopted as the object's weight, combined with the space occupied by the object, to perform cache replacement;
In the embodiments this policy is compared with LRU (least recently used), LFU (least frequently used), and GDSF (Greedy Dual-Size Frequency); the results show that the replacement algorithm based purely on access-interval change improves the hit rate by 3%~5%, and the hybrid replacement algorithm improves the byte hit rate by 5%~8% over GDSF.
The CPBAD (cache policy based on access density) replacement algorithm adopted by the present invention sets an expiration time for each cache object, thereby thoroughly avoiding cache pollution. At the same time, to reduce the storage overhead of the counters, the counters are reset during periodic maintenance.
Brief description of the drawings
Fig. 1 is a block diagram of the present invention;
Fig. 2 shows the URL distribution of the data set in the embodiment; (a), (b), and (c) plot, on linear axes, the relationship between URL access frequency and popularity for the 427,936 user requests extracted from three consecutive days of campus gateway network logs, and (d), (e), and (f) plot the same relationship on log-log axes;
Fig. 3 is the access sequence diagram of the popular URLs in the embodiment;
Fig. 4 shows the effect of λ on the algorithm under different Zipf distributions in the embodiment;
Fig. 5 compares the hit rates of the cache replacement algorithms in the embodiment; (a), (b), and (c) show the hit rates of the different algorithms on Dataset1, Dataset2, and Dataset3 respectively, and (d) shows the hit rates on the full data set; the curves distinguish LFU, LRU, and CPBAD;
Fig. 6 compares the byte hit rates of the replacement algorithms in the embodiment; the curves distinguish LRU, LFU, GDSF, and CPBADS.
Embodiment
Embodiment one: the web cache replacement method based on access density of this embodiment is realized according to the following steps:
(1) Check whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Check whether the cache pool is full; if full, go to step (3); if not, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(5) The object already exists in the cache pool: compute the current access interval;
(6) Check whether this is a consecutive repeat access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increment the access frequency, compute the access density, and go to step (10);
(8) If this is not a consecutive repeat access, compute the access density according to the formula and update the average access interval;
(9) Update the last access position and increment the cache object's access frequency;
(10) Increment the total cache access count and exit.
Effects of this embodiment:
By analyzing real web data access scenarios, it is found that the rate of change of the access interval predicts the hit rate with higher accuracy;
First, this embodiment extracts the URLs (Uniform Resource Locators) of a real network and analyzes user access behavior at a campus network gateway, finding that in an LRU (least recently used) cache, URLs of high popularity are often evicted by URLs of low popularity; second, to avoid this loss of hits, the rate of change of the access interval is adopted as the object's weight, combined with the space occupied by the object, to perform cache replacement;
In the embodiments this policy is compared with LRU (least recently used), LFU (least frequently used), and GDSF (Greedy Dual-Size Frequency); the results show that the replacement algorithm based purely on access-interval change improves the hit rate by 3%~5%, and the hybrid replacement algorithm improves the byte hit rate by 5%~8% over GDSF.
The CPBAD (cache policy based on access density) replacement algorithm adopted by this embodiment sets an expiration time for each cache object, thereby thoroughly avoiding cache pollution. At the same time, to reduce the storage overhead of the counters, the counters are reset during periodic maintenance.
Embodiment two: this embodiment differs from embodiment one in that the access interval described in step (3) is the difference in the total cache access count between the current hit of a cache object and the previous hit of the same object. The other steps and parameters are identical to embodiment one.
Embodiment three: this embodiment differs from embodiment one or two in that the access density described in step (3) is the ratio of the number of times a cache object is accessed to the total number of cache accesses within a period of time. The other steps and parameters are identical to embodiment one or two.
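Embodiments two and three define the two quantities relative to the global access counter. A small illustration (on a made-up access trace, not data from the patent):

```python
# Each element of the trace is the object that one cache access hits;
# positions in the global access sequence are 1-based.
trace = ['a', 'b', 'a', 'c', 'a', 'b']

# Access interval (embodiment two): difference in the global access count
# between two successive hits of the same object.
positions = [i + 1 for i, obj in enumerate(trace) if obj == 'a']  # [1, 3, 5]
access_interval = positions[-1] - positions[-2]                    # 5 - 3 = 2

# Access density (embodiment three): accesses of the object divided by the
# total number of cache accesses over the period.
access_density = trace.count('a') / len(trace)                     # 3 / 6 = 0.5

print(access_interval, access_density)
```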
The beneficial effects of the present invention are verified by the following examples:
One: first, the life cycle of cache objects in the cache space is analyzed;
Two: next, web access behavior and the distribution of URLs are analyzed: to study the distribution characteristics of web requests, 427,936 user requests, containing 167,981 distinct URLs, were extracted from three consecutive days of campus gateway network logs;
Three: finally, based on the popularity trend of cache objects, a density-based replacement algorithm is found to have better predictive ability: because of the locality of LRU and the cache-pollution behavior of LFU, this example proposes CPBAD (cache policy based on access density), a replacement policy that observes the trend of each cache object over a longer period and lowers the weight of objects whose popularity is decreasing, effectively avoiding cache pollution and thereby improving the cache hit rate.
The web cache replacement method based on access density is realized according to the following steps:
(1) Check whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Check whether the cache pool is full; if full, go to step (3); if not, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(5) The object already exists in the cache pool: compute the current access interval;
(6) Check whether this is a consecutive repeat access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increment the access frequency, compute the access density, and go to step (10);
(8) If this is not a consecutive repeat access, compute the access density according to the formula and update the average access interval;
(9) Update the last access position and increment the cache object's access frequency;
(10) Increment the total cache access count and exit.
On the life cycle of cache objects in step one of this example: each cache object has its own life cycle in the cache space, and the life cycle of an object A in the cache can be divided into two parts: (1) the active period, from entering the cache until its last access; (2) the ossified period, from its last access until it is evicted from the cache. The longer the active period of cache objects, the higher the cache performance. Although the future behavior of a cache object cannot be predicted exactly, extending its active period as far as possible, or shortening the time that objects with long ossified periods stay in the cache, is an effective way to improve cache efficiency;
Fig. 2 shows the distribution of the URLs in step two of this example. (a), (b), and (c) plot, on linear axes, the relationship between URL access frequency and popularity for the 427,936 user requests extracted from three consecutive days of campus gateway network logs, and (d), (e), and (f) plot the same relationship on log-log axes. The blue straight line in each figure is a Zipf fit to the red-black curve; the data set follows a typical Zipf^α distribution with 0.6 < α < 0.8. Popular objects are accessed many times within relatively short periods, so extending the time these popular objects stay in the cache is an effective way to improve cache efficiency, and the access interval is the key metric for objectively describing an object's popularity;
Fig. 3 shows the access sequences of the 8 hottest URLs in the data set of step two. The y-axis represents the 8 URLs and the x-axis represents the order in which each URL appears over the three days. As can be seen from Fig. 3, popular URLs have extremely short access intervals within certain periods. The access intervals of some popular URLs gradually shrink, as with url-2 and url-5, while those of others are fairly uniform, as with url-8; URLs of the latter kind are generally not evicted under any kind of replacement policy. URLs like url-1, whose access intervals are clearly periodic but whose periods are long, will be evicted repeatedly under an LRU policy, lowering cache performance. URLs like url-7 show some periodicity, but their periods keep growing; under an LFU policy their weights keep rising even though their popularity is declining, causing cache pollution;
From the above analysis, the access behavior of popular URLs can be divided into the following 3 classes:
1) uniform and periodic, e.g. url-8 and url-4;
2) access intervals going from sparse to dense, e.g. url-5;
3) sudden bursts, e.g. url-3 and url-6;
In view of the local locality of the LRU algorithm and the weight monotonicity of the LFU algorithm, this example proposes a cache replacement algorithm based on access-interval change to improve cache performance;
To effectively verify the CPBAD algorithm, the access density value is updated as follows:
$$ad\_value_i^{\,n}=\begin{cases}INITIAL\_VALUE & \text{if } obj_i \text{ is a new object}\\[4pt] ad\_value_i^{\,n-1}\cdot\lambda\cdot\dfrac{avg\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i} & \text{if } avg\_accintvl_i<now\_accintvl_i\\[4pt] ad\_value_i^{\,n-1}\cdot\left(1+\lambda\cdot\dfrac{now\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i}\right) & \text{if } avg\_accintvl_i>now\_accintvl_i\end{cases},\quad 0<\lambda<1 \qquad (1)$$
where C_total denotes the total number of cache accesses within a given period, last_i denotes the position of the previous access of object i in the total access sequence, now_i denotes the position of the current access of object i in the total access sequence, ad_value_i denotes the density value of object i, freq_i denotes the total number of accesses of object i, avg_accintvl_i denotes the average access interval of object i, now_accintvl_i denotes the current access interval of object i, and n is the number of cache objects;
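Formula (1) can be transcribed directly as a function over the symbols just defined. This is an illustrative sketch: `INITIAL_VALUE` is not fixed by the text, and the behavior when the two intervals are exactly equal (a case formula (1) does not cover) is an assumption.

```python
INITIAL_VALUE = 1.0  # placeholder: the text does not fix the initial density value

def ad_value_update(prev_ad_value, avg_accintvl, now_accintvl, lam=0.8, is_new=False):
    """One application of formula (1): update the density value of object i.

    lam is the parameter with 0 < lambda < 1; lam = 0.8 is the empirical
    value selected later in the experiments.
    """
    if is_new:
        return INITIAL_VALUE
    total = avg_accintvl + now_accintvl
    if avg_accintvl < now_accintvl:
        # Current interval grew past the average: popularity is declining,
        # so the density value shrinks (the multiplier is below lambda < 1).
        return prev_ad_value * lam * avg_accintvl / total
    if avg_accintvl > now_accintvl:
        # Current interval is below the average: popularity is rising,
        # so the density value grows (the multiplier is above 1).
        return prev_ad_value * (1 + lam * now_accintvl / total)
    # Formula (1) does not cover avg == now; leaving the value unchanged
    # is an assumption made here.
    return prev_ad_value
```

For example, with avg = 2 and now = 6 the density drops to 0.8 · 2/8 = 0.2 of its previous value, while with avg = 6 and now = 2 it rises by the factor 1 + 0.8 · 2/8 = 1.2.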
The algorithm is compared with LRU and LFU on a real network data set, which is split into three sub-data sets by date, as shown in Table 1:
Table 1: Gateway log data sets
To determine the value of λ in formula (1), its effect on the algorithm must be measured. This example therefore first generates 10,000 URLs and configures the cache to hold 500 URL objects, then observes the algorithm's performance under different α values while varying λ. Fig. 4 shows the effect of λ on the algorithm under different Zipf distributions, with one curve for each of α = 0.6, 0.7, 0.8, 0.9, and 1.0;
Fig. 4 clearly shows that the hit rate is higher when λ lies in the interval [0.6, 0.8]; in the subsequent experiments this example selects λ = 0.8 as the empirical value;
To better compare the replacement policies, this example runs experiments on the 3 sub-data sets and the full data set. Fig. 5 shows the hit rates of the different algorithms: (a) on Dataset1, (b) on Dataset2, (c) on Dataset3, and (d) on the full data set; the curves distinguish LFU, LRU, and CPBAD;
It can be observed that the CPBAD algorithm outperforms the LRU and LFU algorithms. From Fig. 5(c), when the cache size is 500, LFU is clearly better than LRU; and as seen in Fig. 2, the α value of dataset3 is larger than that of the other 2 data sets, which means that the larger the α value, the higher the hit rate of LFU. When the cache size is 2,000, the hit rates of the three algorithms are almost identical: once the cache size grows to a certain point, the number of popular URLs approaches the cache size, and adding cache space cannot raise the hit rate further. In Fig. 5(d), although the cache space rises to 8,000, the LRU algorithm still lags the others, because the cache stores a large number of low-popularity URLs. The advantage of the CPBAD algorithm is that when hot data gradually turns cold, its weight in the cache space is reduced, which raises the hit rate. In Fig. 5(c), when the cache space is 2,000, CPBAD and LFU are close, and when the cache space is 500, LFU is even higher than CPBAD; this is because the number of persistently hot objects is close to the cache size, so the LFU cache always stays in a high-hit state, whereas for CPBAD the change in density value is weaker than LFU at keeping the cache fresh, so some hot objects are evicted and the hit rate drops. In general, however, the CPBAD algorithm is better than LFU and LRU in most cases;
When this replacement algorithm is actually deployed, the cache size cannot simply be measured as a number of URLs but as a storage size, so the byte hit rate is the more valuable metric for a real system. Considering the diversity of web content sizes, this example assumes that URL page sizes are uniformly distributed between (1KB, 1MB), and modifies formula (1) by adding a file-size parameter, as shown in formula (2), which is used as the cache object's weight.
$$size\_ad\_value_i^{\,n}=ad\_value_i^{\,n}+\log\left(obj\_size/cache\_size\right) \qquad (2)$$
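Formula (2) is a one-line adjustment; a sketch follows. The base of the logarithm is not stated in the text, so the natural logarithm is assumed here.

```python
import math

def size_ad_value(ad_value, obj_size, cache_size):
    """Formula (2): bias the density value by relative object size.

    obj_size and cache_size must use the same unit. For any object smaller
    than the cache the log term is negative, and it is closer to zero for
    larger objects, so at equal density a larger object keeps a higher
    weight -- consistent with optimizing the byte hit rate.
    """
    return ad_value + math.log(obj_size / cache_size)
```

For instance, with a 2GB cache a 1MB object gets a smaller penalty than a 1KB object with the same density value, so under minimum-weight eviction the smaller object is evicted first.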
Suppose the cache space limit is 2GB and the URL distribution uses the full data set (Zipf α = 0.645). Fig. 6 shows the byte hit rate of each cache replacement algorithm under different cache sizes, with curves for LRU, LFU, GDSF, and CPBADS. As Fig. 6 shows, when the cache is small, GDSF (Greedy Dual-Size Frequency) outperforms CPBAD, but as the cache space grows, the algorithm proposed in this example is clearly better than GDSF. The reason lies in the object weight computation: when space is sufficient, large files that turn from hot to cold are not evicted quickly under GDSF, whereas CPBAD lowers their ad_value quickly, achieving more effective space utilization.
In general, for web content distribution, the density-based cache replacement algorithm achieves a higher byte hit rate than GDSF, and thus reduces the performance loss of web servers. A higher byte hit rate also means lower synchronization bandwidth consumption within a distributed web cluster. Tests comparing this policy with LRU (least recently used), LFU (least frequently used), and GDSF (Greedy Dual-Size Frequency) show that the replacement algorithm based purely on access-interval change improves the hit rate by 3%~5%, and the hybrid replacement algorithm improves the byte hit rate by 5%~8% over GDSF.

Claims (1)

1. A web cache replacement method based on access density, characterized in that the web cache replacement method based on access density is realized according to the following steps:
(1) Check whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Check whether the cache pool is full; if full, go to step (3); if not, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency, and average access interval, and go to step (10);
(5) The object already exists in the cache pool: compute the current access interval;
(6) Check whether this is a consecutive repeat access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increment the access frequency, compute the access density, and go to step (10);
(8) If this is not a consecutive repeat access, compute the access density according to the formula and update the average access interval; wherein computing the access density according to the formula is specifically:
$$ad\_value_i^{\,n}=\begin{cases}INITIAL\_VALUE & \text{if } obj_i \text{ is a new object}\\[4pt] ad\_value_i^{\,n-1}\cdot\lambda\cdot\dfrac{avg\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i} & \text{if } avg\_accintvl_i<now\_accintvl_i\\[4pt] ad\_value_i^{\,n-1}\cdot\left(1+\lambda\cdot\dfrac{now\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i}\right) & \text{if } avg\_accintvl_i>now\_accintvl_i\end{cases},\quad 0<\lambda<1 \qquad (1)$$
the density value obtained from formula (1) is then used to compute the access density according to formula (2);
$$size\_ad\_value_i^{\,n}=ad\_value_i^{\,n}+\log\left(obj\_size/cache\_size\right) \qquad (2)$$
wherein ad_value_i denotes the density value of object i, avg_accintvl_i denotes the average access interval of object i, now_accintvl_i denotes the current access interval of object i, and n is the number of cache objects;
(9) Update the last access position and increment the cache object's access frequency;
(10) Increment the total cache access count and exit;
wherein the access interval is the difference in the total cache access count between the current hit of a cache object and the previous hit of the same object, and the access density is the ratio of the number of times a cache object is accessed to the total number of cache accesses within a period of time.
CN201310054554.5A 2013-02-20 2013-02-20 Web cache replacement method based on access density Active CN103106153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310054554.5A CN103106153B (en) 2013-02-20 2013-02-20 Web cache replacement method based on access density

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310054554.5A CN103106153B (en) 2013-02-20 2013-02-20 Web cache replacement method based on access density

Publications (2)

Publication Number Publication Date
CN103106153A CN103106153A (en) 2013-05-15
CN103106153B true CN103106153B (en) 2016-04-06

Family

ID=48314026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310054554.5A Active CN103106153B (en) 2013-02-20 2013-02-20 Web cache replacement method based on access density

Country Status (1)

Country Link
CN (1) CN103106153B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440207B (en) * 2013-07-31 2017-02-22 北京智谷睿拓技术服务有限公司 Caching method and caching device
CN103793517B (en) * 2014-02-12 2017-07-28 浪潮电子信息产业股份有限公司 A kind of file system journal dump dynamic compatibilization method based on monitoring mechanism
US10223286B2 (en) 2014-08-05 2019-03-05 International Business Machines Corporation Balanced cache for recently frequently used data
US10095628B2 (en) 2015-09-29 2018-10-09 International Business Machines Corporation Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage
US10241918B2 (en) 2015-09-29 2019-03-26 International Business Machines Corporation Considering a frequency of access to groups of tracks to select groups of tracks to destage
US10120811B2 (en) 2015-09-29 2018-11-06 International Business Machines Corporation Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
CN106681995B (en) * 2015-11-05 2020-08-18 菜鸟智能物流控股有限公司 Data caching method, data query method and device
CN106294216B (en) * 2016-08-11 2019-03-05 电子科技大学 A kind of buffer replacing method for wind power system
CN106383792B (en) * 2016-09-20 2019-07-12 北京工业大学 A kind of heterogeneous polynuclear cache replacement method based on missing perception
CN106909518B (en) * 2017-01-24 2020-06-26 朗坤智慧科技股份有限公司 Real-time data caching mechanism
CN106973088B (en) * 2017-03-16 2019-07-12 中国人民解放军理工大学 A kind of buffering updating method and network of the joint LRU and LFU based on shift in position
CN107291635B (en) * 2017-06-16 2021-06-29 郑州云海信息技术有限公司 Cache replacement method and device
CN107451071A (en) * 2017-08-04 2017-12-08 郑州云海信息技术有限公司 A kind of caching replacement method and system
CN108829344A (en) * 2018-05-24 2018-11-16 北京百度网讯科技有限公司 Date storage method, device and storage medium
CN111258929B (en) * 2018-12-03 2023-09-26 北京京东尚科信息技术有限公司 Cache control method, device and computer readable storage medium
CN111400308B (en) * 2020-02-21 2023-05-26 中国平安财产保险股份有限公司 Processing method of cache data, electronic device and readable storage medium
CN112733060B (en) * 2021-01-13 2023-12-01 中南大学 Cache replacement method and device based on session cluster prediction and computer equipment
CN113676513B (en) * 2021-07-15 2022-07-01 东北大学 Intra-network cache optimization method driven by deep reinforcement learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266742B1 (en) * 1997-10-27 2001-07-24 International Business Machines Corporation Algorithm for cache replacement
US6425057B1 (en) * 1998-08-27 2002-07-23 Hewlett-Packard Company Caching protocol method and system based on request frequency and relative storage duration
CN100395750C (en) * 2005-12-30 2008-06-18 华为技术有限公司 Buffer store management method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Web Cache Optimization Models; Zhang Yan et al.; Computer Engineering (《计算机工程》); April 2009; vol. 35, no. 8; pp. 85-90 *
The Relationship Between Web Cache Hit Rate and Byte Hit Rate; Shi Lei et al.; Computer Engineering (《计算机工程》); July 2007; vol. 37, no. 5; pp. 84-86 *
Research on Web Cache Replacement Policies and Prefetching Techniques; Zhang Wangjun; China Master's Theses Full-text Database, Information Science and Technology; September 2011; pp. I137-27 *

Also Published As

Publication number Publication date
CN103106153A (en) 2013-05-15

Similar Documents

Publication Publication Date Title
CN103106153B (en) Web cache replacement method based on access density
Wang et al. Intra-AS cooperative caching for content-centric networks
Wu et al. Objective-optimal algorithms for long-term web prefetching
Puzhavakath Narayanan et al. Reducing latency through page-aware management of web objects by content delivery networks
CN104572502B (en) Self-adaptive method for cache strategy of storage system
Ma et al. Weighted greedy dual size frequency based caching replacement algorithm
CN106462589A (en) Dynamic cache allocation and network management
Shi et al. An applicative study of Zipf’s law on web cache
Lee et al. Adaptive prefetching scheme using web log mining in Cluster-based web systems
He et al. Edge QoE: Intelligent big data caching via deep reinforcement learning
CN101887400B (en) The method and apparatus of aging caching objects
Zhao et al. GDSF-based low access latency web proxy caching replacement algorithm
Chen et al. Coordinated data prefetching by utilizing reference information at both proxy and web servers
Zhang et al. A dynamic social content caching under user mobility pattern
Wu et al. Web cache replacement strategy based on reference degree
Alkassab et al. Benefits and schemes of prefetching from cloud to fog networks
Zhijun et al. Towards efficient data access in mobile cloud computing using pre-fetching and caching
Fang et al. Mobile Edge Data Cooperative Cache Admission Based on Content Popularity
Rodríguez et al. Improving performance of multiple-level cache systems
Wang et al. Feasibility analysis and self-organizing algorithm for RAN cooperative caching
Lau et al. Optimal pricing for selfish users and prefetching in heterogeneous wireless networks
Pang et al. Understanding performance of edge prefetching
Katsaros et al. Cache management for Web-powered databases
Abdel-Baset et al. Cache Policies for Smartphone in Flexible Learning: A Comparative Study.
Zhao et al. Temperature matrix-based data placement using improved hungarian algorithm in edge computing environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230826

Address after: 100085 4th floor, building 3, yard 1, Shangdi East Road, Haidian District, Beijing

Patentee after: Beijing Topsec Network Security Technology Co.,Ltd.

Patentee after: Topsec Technologies Inc.

Patentee after: BEIJING TOPSEC SOFTWARE Co.,Ltd.

Address before: 150001 No. 92 West straight street, Nangang District, Heilongjiang, Harbin

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY