CN103905545A - Reinforced LRU cache replacement method in content-centric network - Google Patents
Reinforced LRU cache replacement method in content-centric network
- Publication number: CN103905545A
- Application number: CN201410117148.3A
- Authority: CN (China)
- Prior art keywords: node, cache, data, LRU, request
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention relates to a reinforced LRU cache replacement method in a content-centric network. The method comprises the steps of: when a node receives new data that needs to be cached, judging whether the node has enough space to cache the data, caching the data directly if it does, and otherwise performing a cache-invalidation judgment; in the cache-invalidation judgment, determining whether any cache block of the node satisfies the judgment condition, namely that the cache block has already been requested by all neighbors of the node; if such a cache block is found, replacing it with the new data, and otherwise performing content replacement by means of the LRU cache replacement algorithm.
Description
Technical field
The present invention relates to a reinforced LRU cache replacement method in a content-centric network.
Background art
With the rapid development of computer and Internet technology, people increasingly need to obtain large amounts of data from the Internet. A novel network architecture centered on content, the Content-Centric Network (CCN), has therefore emerged. In the past two years, research on the content-centric network architecture has attracted wide attention. Next-generation network architectures developed alongside CCN include NDN (Named Data Networking), MobilityFirst, NEBULA, and XIA (eXpressive Internet Architecture). Among these architectures, CCN has clear advantages and is currently the most actively studied. In-Network Caching is an important component of the CCN architecture: by caching content at CCN nodes, it can relieve server pressure, reduce network load, lower data request delay, and enhance mobility, thereby improving network performance.
The purpose of CCN node caching is to store network data temporarily and improve the efficiency of data requests. At present, a CCN node caches all data passing through it without distinction, a mode in which every passing data item is cached. Because the cache space of a node is much smaller than the amount of data in the network, a cache replacement algorithm must be used to evict historical data once the cache space is full so that new data can be cached. A CCN node performs a function similar to a router in a traditional IP network: it is responsible for routing and forwarding packets, the difference being that a CCN node selects routes according to packet names and also has a data caching function. To ensure that a CCN node processes data efficiently, a cache replacement policy of high complexity should not be adopted. At present, the LRU (Least Recently Used) cache replacement policy is generally adopted in CCN; it is simple, easy to implement and convenient to deploy, and it allows a CCN node to maintain its cached data while forwarding data at line speed.
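For reference, the baseline LRU replacement policy discussed here can be sketched minimally as follows; the LRUCache class and its interface are illustrative assumptions, not the CCN node implementation described in this document.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used item is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # name -> data, ordered from oldest to newest

    def get(self, name):
        if name not in self.items:
            return None
        self.items.move_to_end(name)  # a hit makes the item most recently used
        return self.items[name]

    def put(self, name, data):
        if name in self.items:
            self.items.move_to_end(name)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
        self.items[name] = data
```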
The LRU cache policy performs well when applied in CCN, but in some cases it leaves invalid data in the cache space. A CCN node stores passing content in its local cache so that the next request for the same content arriving at this node can be served directly from the local cache. However, once a cached content item C_i has been requested by all of the node's neighbors, C_i will not be accessed again for a long period. Under the LRU cache replacement policy, after the content item C_i in the cache of node Node_j has been requested by all neighbors, C_i is placed at the top of the LRU queue, and each time Node_j receives a new request the position of C_i in the queue moves down by one. Supposing that Node_j can cache N content items, at least N further data requests are needed before C_i is finally evicted from the cache. In fact, from the moment C_i has been requested by all neighbors until it is evicted, C_i keeps occupying cache space even though it will never be requested by the neighbors again. Such an item C_i is called a "dead block", and dead blocks waste cache space.
Summary of the invention
The object of the present invention is to provide a reinforced LRU cache replacement method for a content-centric network that effectively avoids wasting cache space and improves cache space utilization.
The technical scheme for achieving the object of the invention is as follows:
A reinforced LRU cache replacement method in a content-centric network, characterized in that: when a node receives new data and needs to cache it, the node first judges whether it has enough space to cache the data; if there is enough space, the data is cached directly, otherwise a cache-invalidation judgment is performed. In the cache-invalidation judgment, the node determines whether any of its cache (Cache) blocks satisfies the judgment condition, namely whether the Cache block has already been requested by all neighbors of the node; if a Cache block satisfying the condition is found, it is replaced by the new data block, otherwise content replacement is carried out by the LRU cache replacement algorithm.
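For illustration, the replacement flow described above can be sketched as follows. This is a minimal sketch under assumed interfaces: free_space, cached_blocks, neighbors, store, replace and the lru_evict callback are illustrative names introduced for readability, not part of the patented method.

```python
def cache_insert(node, new_block, lru_evict):
    """Insert new_block into node's cache using the reinforced (A-LRU) policy."""
    # Step 1: if there is enough free space, cache the data directly.
    if node.free_space() >= new_block.size:
        node.store(new_block)
        return

    # Step 2: cache-invalidation judgment - look for a "dead block",
    # i.e. a cached block already requested by all of the node's neighbors.
    for block in node.cached_blocks():
        if node.neighbors() <= block.requested_by:   # set inclusion
            node.replace(block, new_block)           # the dead block is evicted
            return

    # Step 3: no dead block found - fall back to ordinary LRU replacement.
    lru_evict(node)
    node.store(new_block)
```

The set-inclusion test expresses the judgment condition that every neighbor of the node has already requested the block.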
Each node maintains an ECS table and a neighbor table. The cache-invalidation judgment is made by checking the node's ECS table; the ECS table contains a requesting-node attribute, which records the neighbor nodes that have requested each cached content item.
When a node receives a new data request, it looks up the ECS table by the name of the requested data; if a cache entry with the same name is found, the identifier of the requesting neighbor node is appended to the requesting-node attribute of that cache entry.
If the requested content is not in the node's cache, the node requests the data from the data source; when the data is cached and entered into the ECS table, the identifier of the neighbor whose request the data answers is appended to the requesting-node attribute of the cache entry.
The ECS table also contains the name and the data of the queried content.
The neighbor table contains the identification information of each neighbor node, the interface corresponding to the neighbor node, the response time of the neighbor node, and the last update time of the neighbor node.
Beneficial effects of the present invention:
The present invention adds a cache-invalidation judgment on top of the LRU cache replacement policy. It introduces, into the replacement process, the influence of cached content on neighbor nodes and adds a "dead block" judgment algorithm. When a data request arrives, the cache-invalidation judgment is performed first: a content item satisfying the judgment condition is evicted from the cache directly, while content items that do not satisfy the condition are replaced according to the ordinary LRU policy. Thus, once a content item in the cache has been requested by all neighbors, it is evicted as soon as the next data item arrives. The present invention fully considers the relation between cached content and neighbor requests; by identifying content items whose probability of being accessed again has dropped to zero and evicting them from the cache, it avoids wasting cache space and improves cache space utilization.
The present invention extends the data structures of the CCN architecture. Each node maintains an ECS table (Extended Content Store), which extends the Content Store (CS) with a requested-node field recording which neighbors have requested each content item. In addition, the present invention adds a neighbor table at each node to store the node's neighbor relations; together, the ECS table and the neighbor table make it possible to determine whether a content item has been requested by all neighbors. Assuming that user data requests follow a Poisson distribution, simulation experiments show that applying the present invention in CCN effectively improves the cache hit ratio and reduces data request delay.
Brief description of the drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is a schematic diagram of the Cache block judgment process of the present invention;
Fig. 3 is a schematic diagram of the structure of the extended content store (ECS) table of the present invention;
Fig. 4 is a schematic diagram of the structure of the neighbor table of the present invention;
Fig. 5 is a schematic diagram of the node types of the present invention;
Fig. 6 is a schematic diagram of the experimental topology of the present invention;
Fig. 7 is a schematic diagram comparing the hit ratio of the present invention with that of LRU;
Fig. 8 is a schematic diagram comparing the data request delay of the present invention with that of LRU.
Embodiment
The present invention is a reinforced LRU cache replacement method (A-LRU cache replacement method) for a content-centric network. Its operation consists of two parts: the A-LRU cache replacement process and the maintenance of the ECS table. The maintenance of the ECS table provides the basis for the A-LRU validity judgment; the ECS table must be updated in time whenever a data request arrives.
The working process of the A-LRU policy proposed by the present invention is described taking node i as an example. The cache space of node i is K, i.e. its ECS table can hold K content entries, and all cached content entries have the same size. Node i has at least one neighbor node, and the total number of nodes is M. e_ij (1 ≤ j ≤ K) denotes the j-th Cache block of node i, and r_il (1 ≤ l < M) denotes the l-th neighbor of node i.
As shown in Fig. 1 and Fig. 2, when node i receives a new data item e_inew that needs to be cached, it first judges whether it has enough free space to cache the data: with K_free denoting the free cache space of node i, the data is cached directly if e_inew ≤ K_free, otherwise the A-LRU judgment (cache-invalidation judgment) is started.
In the A-LRU judgment, node i traverses its ECS table (Extended Content Store) and judges whether the set {e_ij | 1 ≤ j ≤ K} contains a Cache block that satisfies the A-LRU judgment condition, i.e. a block e_ij that has been requested by all neighbors {r_il | 1 ≤ l ≤ M, l ≠ i} of node i. If such a Cache block e_ij is found, it is replaced by the new data e_inew. If no Cache block e_ij satisfying the A-LRU judgment condition is found, content replacement is carried out by the LRU cache replacement algorithm.
Each node maintains an ECS table and a neighbor table. The ECS table of a node records which neighbors have requested each content item in the node's cache; whether a cached content item has been requested by all neighbor nodes is judged from the requesting-node attribute. If a content item has been requested by all neighbor nodes, a Cache block satisfying the A-LRU judgment condition has been found. As shown in Fig. 3, the ECS table format comprises the name of the queried content (Name), the data (Data) and the requesting-node attribute (RequestedNode); the requesting-node attribute records the neighbor nodes that have requested the cached content item. The neighbor table format, shown in Fig. 4, comprises the identification of the neighbor node (NodeId), the interface corresponding to the neighbor (FaceNumber), the neighbor's response time (Delay) and the neighbor's last update time (UpdateTime). The neighbor table (NNT) records the neighbor relations of the node and is maintained dynamically by the Neighbor Discovery Protocol (NDP).
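A minimal sketch of the two tables and of the judgment made over them is given below. The field names follow Figs. 3 and 4 (Name, Data, RequestedNode; NodeId, FaceNumber, Delay, UpdateTime), while the Python types and the helper find_dead_block are assumptions introduced for illustration only, not the patented data structures themselves.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class EcsEntry:
    """One row of the ECS (Extended Content Store) table, cf. Fig. 3."""
    name: str                                               # Name of the content item
    data: bytes                                             # Data of the content item
    requested_node: Set[str] = field(default_factory=set)   # RequestedNode attribute

@dataclass
class NeighborEntry:
    """One row of the neighbor table (NNT), cf. Fig. 4."""
    node_id: str        # NodeId: identification of the neighbor node
    face_number: int    # FaceNumber: interface towards the neighbor
    delay_ms: float     # Delay: neighbor response time
    update_time: float  # UpdateTime: last update time of the neighbor

def find_dead_block(ecs: List[EcsEntry], nnt: List[NeighborEntry]) -> Optional[EcsEntry]:
    """Return a cache block already requested by all neighbors, or None."""
    all_neighbors = {n.node_id for n in nnt}
    for entry in ecs:
        # Guard against an empty neighbor table, then test the A-LRU condition.
        if all_neighbors and all_neighbors <= entry.requested_node:
            return entry   # this block satisfies the A-LRU judgment condition
    return None
```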
When node i receives a new data request, it looks up the ECS table by the name of the requested data; if a cache entry e_ij with the same name is found, the identifier of the requesting neighbor is appended to the RequestedNode attribute of cache entry e_ij. If the requested content is not in the cache of node i, node i requests the data from the data source; when the data is cached into the ECS table, the identifier of the neighbor whose request the data answers is appended to the RequestedNode attribute of the cache entry e_ij.
Fig. 5 is a schematic diagram of the A-LRU node types in CCN. According to the connection relations of the nodes, A-LRU divides CCN nodes into two classes: nodes directly connected to users, called border nodes (Border Node, BNode), and nodes connected only to other CCN nodes, called cache nodes (Cache Node, CNode). In Fig. 5, squares represent user nodes, i.e. content-requesting nodes, and circles represent CCN nodes. The A-LRU policy applies only to CNodes, i.e. nodes 1, 3, 5 and 9 in the figure.
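The classification rule can be illustrated by the following small sketch, under the assumption that the topology is given as adjacency sets; classify_node and its arguments are illustrative names, not part of the invention.

```python
def classify_node(node_id, neighbors, user_nodes):
    """Classify a CCN node as BNode or CNode (cf. Fig. 5).

    neighbors:  ids of the nodes adjacent to node_id
    user_nodes: ids of user (content-requesting) nodes in the topology
    """
    if any(n in user_nodes for n in neighbors):
        return "BNode"   # directly connected to at least one user: border node
    return "CNode"       # connected only to CCN nodes: A-LRU applies here
```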
The beneficial effects of the present invention are further illustrated below by simulation experiments.
In order to analyze the concrete performance of the A-LRU replacement policy, the present invention was simulated on the ndnSIM platform based on NS-3.
Fig. 6 is a schematic diagram of the experimental topology of this embodiment of the present invention. The simulation configures 72 nodes in total, including one data-serving node responsible for receiving and answering data requests in the network; the response content has a fixed size of 1024 B. Each node is configured with a cache space of 200 content items in its CS table and with the flooding routing (forwarding) policy, and different cache replacement algorithms are configured on the nodes according to the experimental design. 40 nodes in the network are chosen as requesting nodes and configured with data-request applications. A total of 3000 kinds of request content are set; the user data request process follows a Poisson distribution with a request frequency of 2 requests per second. The transmission delay between nodes is 10 ms and the simulation time is 100 s.
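For illustration only, the request workload described above (Poisson arrivals at 2 requests per second over 3000 content items for 100 s) could be generated as in the following sketch; this reflects the assumed workload model, not the actual ndnSIM configuration used in the experiment.

```python
import random

def generate_requests(rate_per_s=2.0, num_contents=3000, sim_time_s=100.0, seed=1):
    """Generate (time, content_name) request events from a Poisson process."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)   # exponential inter-arrival times
        if t > sim_time_s:
            break
        name = "/content/%d" % rng.randrange(num_contents)
        events.append((t, name))
    return events
```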
Fig. 7 compares the hit ratio of LRU with that of the A-LRU of the present invention. To verify the performance of the present invention, LRU and A-LRU were deployed on the CCN nodes of the simulation platform, implementing both replacement policies, and the average cache hit ratio of the nodes was measured for each. The average cache hit ratio of the nodes is the average probability of a hit over all node caches. Fig. 7 shows the test results, comparing the average cache hit ratio of the different replacement policies. As can be seen from Fig. 7, the cache hit ratio of A-LRU is on average 12 percentage points higher than that of the LRU algorithm, and the advantage of the A-LRU policy becomes more pronounced as the running time increases.
Fig. 8 compares the data request delay of LRU with that of the A-LRU of the present invention. LRU and A-LRU were again deployed on the CCN nodes of the simulation platform, implementing both replacement policies, and the average data request delay was measured for each. Fig. 8 shows the test results, comparing the average data request delay of the different replacement policies. As can be seen from Fig. 8, between 40 s and 70 s the A-LRU policy takes effect and evicts data that will not be requested by neighbors in the short term, increasing the proportion of valid data in the cache space and thus reducing the data request delay. The data request delay stabilizes at 14 s; compared with the LRU algorithm, applying the A-LRU cache replacement algorithm reduces the node data request delay by 10 s on average.
The simulation results show that the A-LRU policy of the present invention improves the cache hit ratio of the LRU algorithm and reduces the data request delay. The simulation results also show that, while improving the performance of the LRU algorithm, the A-LRU policy still retains the properties of the LRU cache replacement algorithm.
Claims (6)
1. A reinforced LRU cache replacement method in a content-centric network, characterized in that: when a node receives new data and needs to cache it, the node first judges whether it has enough space to cache the data; if there is enough space, the data is cached directly, otherwise a cache-invalidation judgment is performed; in the cache-invalidation judgment, the node determines whether any of its cache (Cache) blocks satisfies the judgment condition, namely whether the Cache block has already been requested by all neighbors of the node; if a Cache block satisfying the condition is found, it is replaced by the new data block, otherwise content replacement is carried out by the LRU cache replacement algorithm.
2. The reinforced LRU cache replacement method in a content-centric network according to claim 1, characterized in that: each node maintains an ECS table and a neighbor table; the cache-invalidation judgment is made by checking the node's ECS table; the ECS table comprises a requesting-node attribute, which records the neighbor nodes that have requested each cached content item.
3. The reinforced LRU cache replacement method in a content-centric network according to claim 2, characterized in that: when the node receives a new data request, it looks up the ECS table by the name of the requested data; if a cache entry with the same name is found, the identifier of the requesting neighbor node is appended to the requesting-node attribute of that cache entry.
4. The reinforced LRU cache replacement method in a content-centric network according to claim 3, characterized in that: if the requested content is not in the node's cache, the node requests the data from the data source, and when the data is cached into the ECS table, the identifier of the neighbor whose request the data answers is appended to the requesting-node attribute of the cache entry.
5. The reinforced LRU cache replacement method in a content-centric network according to claim 4, characterized in that: the ECS table further comprises the name and the data of the queried content.
6. The reinforced LRU cache replacement method in a content-centric network according to claim 5, characterized in that: the neighbor table comprises the identification information of the neighbor node, the interface corresponding to the neighbor node, the response time of the neighbor node and the last update time of the neighbor node.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410117148.3A CN103905545A (en) | 2014-03-22 | 2014-03-22 | Reinforced LRU cache replacement method in content-centric network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103905545A (en) | 2014-07-02 |
Family
ID=50996699
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410117148.3A (CN103905545A, pending) | 2014-03-22 | 2014-03-22 | Reinforced LRU cache replacement method in content-centric network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103905545A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101551781A (en) * | 2009-05-22 | 2009-10-07 | 中国科学院计算技术研究所 | Method of magnetic disc cache replacement in P2P video on demand system |
CN103365897A (en) * | 2012-04-01 | 2013-10-23 | 华东师范大学 | Fragment caching method supporting Bigtable data model |
CN103294912A (en) * | 2013-05-23 | 2013-09-11 | 南京邮电大学 | Cache optimization method aiming at mobile equipment and based on predication |
Non-Patent Citations (1)
Title |
---|
BIN TANG et al.: "An Advanced LRU cache replacement strategy for content-centric network", Applied Mechanics and Materials * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104253855A (en) * | 2014-08-07 | 2014-12-31 | 哈尔滨工程大学 | Content classification based category popularity cache replacement method in oriented content-centric networking |
CN104253855B (en) * | 2014-08-07 | 2018-04-24 | 哈尔滨工程大学 | Classification popularity buffer replacing method based on classifying content in a kind of content oriented central site network |
CN105022700A (en) * | 2015-07-17 | 2015-11-04 | 哈尔滨工程大学 | Named data network cache management system based on cache space division and content similarity and management method |
CN105022700B (en) * | 2015-07-17 | 2017-12-19 | 哈尔滨工程大学 | A kind of name data network cache management system and management method based on spatial cache division and content similarity |
CN106021126A (en) * | 2016-05-31 | 2016-10-12 | 腾讯科技(深圳)有限公司 | Cache data processing method, server and configuration device |
CN106021126B (en) * | 2016-05-31 | 2021-06-11 | 腾讯科技(深圳)有限公司 | Cache data processing method, server and configuration equipment |
CN106790421A (en) * | 2016-12-01 | 2017-05-31 | 广东技术师范学院 | A kind of step caching methods of ICN bis- based on corporations |
CN106790421B (en) * | 2016-12-01 | 2020-11-24 | 广东技术师范大学 | ICN two-step caching method based on community |
Similar Documents
Publication | Title |
---|---|
CN101431539B | Domain name resolution method, system and apparatus |
CN104009920B | The processing method of data source movement, the method for E-Packeting and its device |
CN103051740B | Domain name analytic method, dns server and domain name analysis system |
CN103347068B | A kind of based on Agent cluster network-caching accelerated method |
CN108881515B | Domain name resolution method, device and network equipment |
CN105450780B | A kind of CDN system and its return source method |
CN101656765B | Address mapping system and data transmission method of identifier/locator separation network |
CN105530324B | The method and system of process resource request |
CN103905538A | Neighbor cooperation cache replacement method in content center network |
CN107645525A | Detection processing, dispatching method and related device, the node of content distributing network |
CN104683485A | C-RAN based internet content caching and preloading method and system |
JP2016506113A | Packet transmission method for content owner and node in content-centric network |
CN103685583A | Method and system for resolving domain names |
CN101662483A | Cache system for cloud computing system and method thereof |
CN103905545A | Reinforced LRU cache replacement method in content-centric network |
CN101014046B | Method for integrating service location with service quality routing in service loading network |
CN105653473B | Cache data access method and device based on binary mark |
CN103139252B | The implementation method that a kind of network proxy cache is accelerated and device thereof |
CN108965479B | Domain collaborative caching method and device based on content-centric network |
CN105681413A | Method and device for cooperative processing of data between CDN (Content Delivery Network) and ISP (Internet Service Provider) |
CN102594885A | Sensor network analyzing intercommunicating platform, sensor network intercommunicating method and system |
CN107070988A | Message processing method and device |
CN110365810A | Domain name caching method, device, equipment and storage medium based on web crawlers |
CN103401953A | End-to-end voice communication node addressing method based on dual-layer structure |
CN103179161B | A kind of content acquisition method, device and network system |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20140702 |