CN113965588A - Autonomous intra-domain collaborative caching method for content-centric networking - Google Patents


Info

Publication number: CN113965588A (application CN202111210937.8A; granted as CN113965588B)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: content, node, cache, hash value, autonomous domain
Inventors: 王天博, 张思嘉, 夏春和, 陈晨
Applicant and assignee: Beihang University
Legal status: Active (granted)

Classifications

    • H04L 67/1097 — Protocols in which an application is distributed across nodes in the network, for distributed storage of data in networks, e.g. network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/1042 — Peer-to-peer [P2P] networks using topology management mechanisms
    • H04L 67/1065 — Discovery involving distributed pre-established resource-based relationships among peers, e.g. based on distributed hash tables [DHT]


Abstract

The invention discloses a cooperative caching method within an autonomous domain for content-centric networks, in which nodes cooperate with the autonomous domain as the cooperation unit. The method comprises: establishing a consistent-hash-based mapping between cached data and the nodes in the autonomous domain; a hash-based routing and forwarding mechanism; and an online popularity prediction and preferential caching strategy based on the request rate. Designed for decentralized autonomous-domain networks, the method maps network nodes to content name prefixes via a consistent hash algorithm, allocates cache contents according to their requested counts and popularity ranking, and then caches preferentially. Compared with along-path cooperative caching methods, it achieves lower cache redundancy among adjacent nodes and higher cache visibility between adjacent nodes. Compared with clustered cooperative caching methods, it imposes lower computational pressure on a central node and lower cache resource allocation delay.

Description

Autonomous intra-domain collaborative caching method for content-centric networking
Technical Field
The present invention relates to the field of network communications, and in particular, to a cooperative caching method in an autonomous domain for a content-centric network.
Background
Content-Centric Networking (CCN) is a future network architecture proposed by Van Jacobson. It aims to solve the problem that the host-centric communication mode of traditional TCP/IP networks struggles to meet the security, mobility, reliability, and flexibility requirements of new-era applications. The core idea of CCN is to break the end-to-end, host-centered communication model of current networks and to center communication on content chunks instead. Network nodes in CCN route and forward based on content names; the content name is the identifier used in CCN to identify data.
Referring to fig. 1, CCN nodes (e.g., routers, gateways) maintain three data structures internally: a Content Store (CS), a Pending Interest Table (PIT), and a Forwarding Information Base (FIB). A key advantage of CCN is pervasive in-network caching, which allows content chunks to be stored at individual nodes in the CCN. When a request message flows through a network node, the node uses the content name to look it up locally in its CS; if a matching content chunk exists in the CS, the node responds with that chunk directly. See Content-Centric Network Core Technology, pages 9-10, authors: Zhang Sha, Lan Julong, Yi Peng, Cheng Guozheng, 1st edition, May 2018.
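The CS-first lookup over the three tables described above can be sketched as follows. This is an illustrative model only (the class and method names, and the one-level prefix match, are assumptions, not from the patent):

```python
class CCNNode:
    """Minimal sketch of a CCN node's three internal tables."""

    def __init__(self):
        self.cs = {}    # Content Store: content name -> content chunk (in-network cache)
        self.pit = {}   # Pending Interest Table: content name -> faces awaiting the chunk
        self.fib = {}   # Forwarding Information Base: name prefix -> outgoing face

    def on_request(self, name, in_face):
        # Cache hit: respond directly from the CS, as described above.
        if name in self.cs:
            return ("data", self.cs[name])
        # Cache miss: record the requesting face in the PIT, forward via the FIB.
        self.pit.setdefault(name, []).append(in_face)
        prefix = name.rsplit("/", 1)[0]   # simplified one-level prefix match
        return ("forward", self.fib.get(prefix))
```

A request whose name is already in the CS is answered locally; otherwise the node remembers who asked (PIT) and forwards by prefix (FIB).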
The caching decision mechanism of the CCN decides which nodes in the network cache which content chunks. The default mechanism is LCE (Leave Copy Everywhere): every content chunk flowing through a node is cached until the CS space is full. The main problems of the LCE caching strategy are excessive redundancy of data packets along the request path, low diversity of cached contents, and low utilization efficiency of the cache space of nodes along the request path. The inter-domain request and response scenario shown in fig. 2 illustrates the high redundancy among multiple autonomous domains (AS-1, AS-2, AS-3) caused by the CCN default caching policy. See Content-Centric Network Core Technology, pages 44-45, authors: Zhang Sha, Lan Julong, Yi Peng, Cheng Guozheng, 1st edition, May 2018.
Existing mainstream content-centric network cooperative caching mechanisms aim at optimizing cache-space efficiency: they are deployed in network nodes, integrate information from each node, and consider cache optimization from the network-wide perspective. The main representatives are LCE, LCD, and MPC.
Leave Copy Everywhere (LCE) is the default caching mechanism in content-centric networks; it is intended to minimize upstream bandwidth demand and downstream latency. In LCE, packets are cached at every node along the response path, making LCE a typical homogeneous, non-cooperative caching mechanism. Since every node holds a copy of each packet, cache redundancy is high.
In the Leave Copy Down (LCD) caching method, a new copy of a requested document is cached only one level below the node where the hit occurs: each request message pushes the content chunk one hop closer to the requester. When the network topology evolves into an arbitrary graph, it is difficult to define hierarchical relationships between nodes. Furthermore, any cache may receive off-path requests, and LCD can neither count nor respond to them.
In Most Popular Content (MPC), each node counts the request messages for each data name and stores (content name : request count) in a "popularity table". Once a content name locally reaches the "popularity threshold", it is marked as "popular"; if the node owns that content, it suggests that its neighboring nodes cache it via a "suggestion cache flag". The disadvantage of MPC is that each node performs popularity counting individually, wasting storage and computation resources during counting. Moreover, because each node only counts the requests that flow through it, the prediction result only reflects the popularity of the data name at that local node. See MPC: Popularity-based Caching Strategy for Content Centric Networks, November 7, 2013.
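The per-node counting that MPC performs can be sketched as follows (an assumed minimal model; the real MPC message formats and the "suggestion cache flag" exchange are not reproduced):

```python
from collections import Counter

class PopularityTable:
    """Sketch of MPC's local popularity table: per-name request counts plus a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = Counter()

    def on_request(self, name):
        # Count one request; report whether the name is now locally "popular".
        self.counts[name] += 1
        return self.counts[name] >= self.threshold
```

Because each node keeps its own table, the same counting work is duplicated at every node, which is exactly the resource waste the paragraph above points out.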
In summary, existing content-centric network cooperative caches mainly cooperate among nodes along the request path, implicitly or explicitly, to eliminate inter-node cache redundancy. These methods take the nodes along the path as the cache cooperation unit and perform inter-node cooperative caching within that unit, eliminating cache redundancy inside it. Such methods have the following disadvantages:
First, neighbor nodes are not cache-aware: if the request path does not pass through a neighbor node, then even if that neighbor holds a cached copy of the requested content, the request cannot obtain it directly under the default mechanism.
Second, the local request rate is hard to perceive. The main function of a node's in-network cache is to provide cached content chunks to neighboring nodes, reduce their average request delay, and reduce the overall network load. To make a reasonable caching decision, a cache node should select content with a high request rate for caching; but because a request can be issued by any user and the forwarding path is not preset, a single cache node can hardly sense the local request rate.
Disclosure of Invention
The invention mainly solves the following technical problem: in a content-centric network (CCN), excessive cache redundancy across cache nodes within an autonomous domain wastes storage resources. The invention adopts a distributed caching mode within the autonomous domain: it reduces cache redundancy among neighbor nodes in the CCN, distributes caching tasks within the autonomous domain, predicts the popularity of data content based on the request rate, and performs distributed caching based on popularity.
The invention provides a distributed caching method within an autonomous domain for content-centric networks. It reduces cache redundancy among adjacent nodes in the content-centric network, distributes caching tasks within the autonomous domain, predicts the popularity of data content based on the request rate, and performs distributed caching based on popularity.
Caching in a content-centric network differs from conventional overlay caching in that a centralized caching decision becomes a decentralized one. The cache capacity of a network node is small relative to the volume of content circulating in the network, so one of the most critical aspects of in-network caching is balancing caching tasks and cached content within the limited cache space. The method of the invention focuses on dispersing data content among adjacent network nodes, real-time popularity prediction, and collaborative content caching. It aims to reduce cache redundancy between adjacent nodes, especially within an autonomous domain, to use the available cache resources of adjacent nodes more effectively, and to reduce the request hop count and network traffic of popular content.
The invention relates to a cooperative caching method in an autonomous domain facing a content center network, which comprises the following steps:
step 1, calculating hash values of all nodes in an autonomous domain in a content center network;
performing hash value calculation on each node in an autonomous domain in a content center network by adopting a consistent hash algorithm to obtain a hash value of each node;
in the present invention, the hash value is 1 to (2)32-1) of any number in between.
The constructed association table of the autonomous domain nodes and the hash values comprises the following contents: node number, node storage capacity and hash value;
the autonomous domain node set is denoted as MNODE, and MNODE ═ node1,node2,…,nodec,…,noded,…,nodeD};
node1Representing a first autonomous domain node; by usingConsistent Hash algorithm for autonomous domain node1After the node number is processed, the obtained node hash value is recorded as
Figure BDA0003308907380000041
node2Representing a second autonomous domain node; autonomous domain node pair adopting consistent Hash algorithm2After the node number is processed, the obtained node hash value is recorded as
Figure BDA0003308907380000042
nodecRepresenting a c-th autonomous domain node; autonomous domain node pair adopting consistent Hash algorithmcAfter the node numbering process, the obtained node hash value is recorded as
Figure BDA0003308907380000043
nodedRepresenting the d-th autonomous domain node; autonomous domain node pair adopting consistent Hash algorithmdAfter the node number is processed, the obtained node hash value is recorded as
Figure BDA0003308907380000044
The lower corner mark d represents the identification number of the autonomous domain node;
nodeDrepresenting the last autonomous domain node; autonomous domain node pair adopting consistent Hash algorithmDAfter the node numbering process, the obtained node hash value is recorded as
Figure BDA0003308907380000045
The lower corner mark D represents the total number of nodes in one autonomous domain;
step 2, sorting the node hash values from small to large to generate a hash ring;
the nodes are linked in ascending order of their hash values to form a closed-loop hash ring network topology;
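Steps 1-2 can be sketched as follows. This is an illustrative sketch: the patent does not fix a concrete hash function, so MD5 truncated into the 1 to 2^32 − 1 range stated above is an assumption:

```python
import hashlib

RING_MAX = 2**32 - 1  # the hash space stated in the text: 1 .. 2^32 - 1

def node_hash(node_id: str) -> int:
    """Step 1: map a node number onto the 1 .. 2^32 - 1 identifier space."""
    h = int(hashlib.md5(node_id.encode()).hexdigest(), 16) % RING_MAX
    return h + 1  # shift from 0-based remainder into 1 .. 2^32 - 1

def build_ring(node_ids):
    """Step 2: sort node hash values ascending to form the closed hash ring."""
    return sorted((node_hash(n), n) for n in node_ids)
```

The sorted list represents the ring; the wrap-around from the largest hash back to the smallest closes the loop.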
step 3, calculating the content name prefix hash value of each cache content;
the Content Store (CS) of the content-centric network holds cached content;
first, the content name prefix is extracted from the cached content; then, the consistent hash algorithm is applied to the content name prefix to obtain the content name prefix hash value;
in the present invention, the hash value is any number between 1 and 2^32 − 1.
The constructed association table of cache contents and hash values contains: cache node number, cache content, content name prefix, and hash value;
the cache contents set is denoted as MCC, and MCC ═ CC1,CC2,…,CCa,…,CCb,…,CCA};
CC1Representing a first piece of cached content; the cache content CC1Prefix of content name of (2), denoted as Prefix _ CC1(ii) a Content name Prefix-CC by adopting consistent hash algorithm1After the processing, the obtained content hash value is recorded as
Figure BDA0003308907380000051
CC2Representing a second piece of cache content; the cache content CC2Prefix of content name of (2), denoted as Prefix _ CC2(ii) a Content name Prefix-CC by adopting consistent hash algorithm2After the processing, the obtained content hash value is recorded as
Figure BDA0003308907380000052
CCaRepresenting the a-th cache content; the cache content CCaPrefix of content name of (2), denoted as Prefix _ CCa(ii) a Content name Prefix-CC by adopting consistent hash algorithmaAfter the processing, the obtained content hash value is recorded as
Figure BDA0003308907380000057
The subscript a indicates the identification number of the cache content;
CCbrepresenting the b-th cache content; the cache content CCbPrefix of content name of (2), denoted as Prefix _ CCb(ii) a Content name Prefix-CC by adopting consistent hash algorithmbAfter the processing, the obtained content hash value is recorded as
Figure BDA0003308907380000053
CCARepresenting the last piece of cache content; the cache content CCAPrefix of content name of (2), denoted as Prefix _ CCA(ii) a Content name Prefix-CC by adopting consistent hash algorithmAAfter processing, the hash value of the obtained content is recorded as
Figure BDA0003308907380000054
The subscript a represents the total number of cached content;
step 4, distributing cache contents on the hash ring;
the cache contents are placed on the hash ring according to their content name prefix hash values, forming a hash ring structure of cache contents and nodes;
step 5, comparing hash values to determine the cache node;
if the hash value of a cache content lies between the hash values of two adjacent nodes, the node with the smaller hash value of the two is selected as its cache node;
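Steps 3-5 — hashing a content name prefix onto the ring and picking the adjacent node with the smaller hash value — can be sketched as follows (function names are illustrative, and MD5 is an assumed hash, as above):

```python
import bisect
import hashlib

RING_MAX = 2**32 - 1

def chash(key: str) -> int:
    """Consistent hash of a node number or content name prefix into 1 .. 2^32 - 1."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_MAX + 1

def select_cache_node(prefix: str, ring):
    """Steps 3-5: hash the content name prefix and, of the two adjacent nodes it
    falls between, pick the one with the smaller hash value (the predecessor on
    the ring), wrapping around at the ring boundary.

    `ring` is a list of (node_hash, node_id) pairs sorted ascending, as produced
    in step 2."""
    node_hashes = [h for h, _ in ring]
    i = bisect.bisect_left(node_hashes, chash(prefix))
    return ring[i - 1][1]  # i == 0 wraps to the last (largest-hash) node
```

The wrap-around case covers content hashes below the smallest or above the largest node hash, where the "smaller adjacent node" is the one closing the ring.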
step 6, aggregating the cache contents assigned to each cache node;
within one cache cycle H, the node that stores cache content CC_a according to popularity is denoted as cache node node(CC_a);
step 7, collecting the requested count of each cache content;
whenever cache content CC_a is requested, its requested count is recorded; the requested count of CC_a is denoted Count(CC_a);
Step 8, ranking and predicting the popularity of the cache content based on the damping trend;
the popularity of the damping trend is expressed as
Figure BDA0003308907380000061
popt+1Representing the popularity ranking of the next caching time t + 1;
popt-1representing the popularity ranking of the last cache time t-1;
poptrepresenting the popularity ranking of the current caching time t;
Lta level estimation value representing a time series of the pre-buffering time t; said LtRepresenting the request rate weighted square of two units of cache time;
Lt-1a level estimation value representing a time series of a last buffering time t-1;
Pta trend estimation value representing a time series of the previous buffering time t;
Pt-1a trend estimation value representing the time sequence of the last cache time t-1;
phi represents a damping coefficient; phi is a2Represents the square of the damping coefficient; phi is ahRepresents the damping coefficient to the power of h;
alpha represents a first horizontal smooth parameter, and the value of alpha is more than or equal to 1 and more than or equal to 0;
beta represents a second horizontal smooth parameter, and the value of beta is more than or equal to 1 and more than or equal to 0; in the present invention, α and β are different values;
step 9, completing the preferential storage of cache contents on each cache node according to the node's storage capacity.
The autonomous intra-domain collaborative caching method for the content-centric network has the advantages that:
the autonomous domain is adopted as a coordination unit to carry out coordination cache, so that a user can obtain the requested content from a neighboring node, and the cache redundancy rate in the autonomous domain is low.
Secondly, a consistent Hash method is adopted to distribute cache tasks, so that the network nodes in the autonomous domain can carry out node matching in the autonomous domain on unknown requests.
And thirdly, performing preferential caching by adopting content popularity factor prediction based on the content request rate, and improving the utilization rate of cache resources.
The mapping of the network nodes and the content name prefixes is realized through a consistent hash algorithm for the decentralized autonomous domain network, and cache contents are distributed in cooperation with the requested times and popularity ranking of the cache contents, so that preferred caching is performed. Compared with a method of cooperative caching along the route, the method has the advantages of low caching redundancy in the adjacent nodes and high caching visibility between the adjacent nodes. Compared with a cluster cooperative caching method, the method has the advantages of low computing pressure of the central node and low time delay of cache resource allocation.
Drawings
Fig. 1 is a data structure diagram of a content-centric network CCN.
Fig. 2 is a topology structure diagram of a content-centric network CCN.
Fig. 3 is a flowchart of the cooperative caching method in the autonomous domain of the content-centric network according to the present invention.
Fig. 4 is a hash ring graph generated by a node of the present invention.
FIG. 5 is a hash ring diagram generated by the node and cache contents of the present invention.
Fig. 6 is a graph showing a relationship between the cache efficiency and the cache capacity in embodiment 1 of the present invention.
Fig. 7 is a graph showing a relationship between the caching efficiency and the Zipf parameter in embodiment 1 of the present invention.
Fig. 8 is a graph of network performance versus cache capacity in embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
In the present invention, the Content Store (CS) of the content-centric network CCN holds multiple cache contents, i.e., the cache content set MCC = {CC_1, CC_2, …, CC_a, …, CC_b, …, CC_A}. Within one cache cycle H, the node that stores cache content CC_a according to popularity is denoted as cache node node(CC_a).
In the present invention, see "Research on Consistent Hashing Algorithms in Distributed Storage Systems", published August 2011, authors: Yang Xuejian, Lin Bo.
Within a cache cycle H, the current caching time is denoted t; the moment before t is denoted t−1 (the last caching time for short); the moment after t is denoted t+1 (the next caching time for short). The period from the last caching time t−1 to the current caching time t is called the time period Δt, i.e., Δt = t − (t−1).
In the invention, the source node through which the request path passes when cache content is forwarded is denoted node_Source; the destination node is denoted node_Dest; a neighbor node is denoted node_Neighbor.
In the present invention, cache content is denoted as CC. Multiple cache contents form the cache content set, denoted MCC, with MCC = {CC_1, CC_2, …, CC_a, …, CC_b, …, CC_A}.
CC_1 denotes the first piece of cached content. Its content name prefix is denoted Prefix_CC_1; processing Prefix_CC_1 with the consistent hash algorithm yields the content hash value, denoted Hash(Prefix_CC_1).
CC_2 denotes the second piece of cached content. Its content name prefix is denoted Prefix_CC_2; processing Prefix_CC_2 with the consistent hash algorithm yields the content hash value, denoted Hash(Prefix_CC_2).
CC_a denotes the a-th piece of cached content. Its content name prefix is denoted Prefix_CC_a; processing Prefix_CC_a with the consistent hash algorithm yields the content hash value, denoted Hash(Prefix_CC_a).
CC_b denotes the b-th piece of cached content. Its content name prefix is denoted Prefix_CC_b; processing Prefix_CC_b with the consistent hash algorithm yields the content hash value, denoted Hash(Prefix_CC_b).
CC_A denotes the last piece of cached content. Its content name prefix is denoted Prefix_CC_A; processing Prefix_CC_A with the consistent hash algorithm yields the content hash value, denoted Hash(Prefix_CC_A).
For convenience of illustration, CC_a is also referred to as an arbitrary piece of cached content. The subscript a of CC_a denotes the identification number of the cached content, and the subscript A of CC_A denotes the total number of cached contents. Cache contents CC_a and CC_b are different contents.
Table 1: association table of cache contents and hash values
Cache node number | Cache content | Content name prefix | Hash value
In the present invention, Table 1 is a table with 4 columns and multiple rows. The hash value is any number between 1 and 2^32 − 1.
In the present invention, the cache content is distributed to the hash ring shown in fig. 4 according to the content hash value of the cache content, so as to obtain the hash ring structure of the cache content and the node shown in fig. 5.
In the invention, a node in an autonomous domain of the content-centric network CCN is denoted as node. Multiple autonomous domain nodes form the autonomous domain node set, denoted MNODE, with MNODE = {node_1, node_2, …, node_c, …, node_d, …, node_D}.
node_1 denotes the first autonomous domain node. Processing the node number of node_1 with the consistent hash algorithm yields its node hash value, denoted Hash(node_1).
node_2 denotes the second autonomous domain node. Processing the node number of node_2 with the consistent hash algorithm yields its node hash value, denoted Hash(node_2).
node_c denotes the c-th autonomous domain node. Processing the node number of node_c with the consistent hash algorithm yields its node hash value, denoted Hash(node_c).
node_d denotes the d-th autonomous domain node. Processing the node number of node_d with the consistent hash algorithm yields its node hash value, denoted Hash(node_d).
node_D denotes the last autonomous domain node. Processing the node number of node_D with the consistent hash algorithm yields its node hash value, denoted Hash(node_D).
For convenience of explanation, node_d is also referred to as an arbitrary autonomous domain node. The subscript d of node_d denotes the identification number of the autonomous domain node, and the subscript D of node_D denotes the total number of nodes within an autonomous domain. Nodes node_d and node_c are different node identities.
In the present invention, the node storage capacity R is the maximum number of cache contents a node can store within a time period Δt. R is typically set between 5 and 20.
Table 2: association table of autonomous domain nodes and hash values
Node number | Node storage capacity | Hash value
In the present invention, Table 2 is a table with 3 columns and multiple rows, recording the nodes in the content-centric network autonomous domain and their corresponding hash values. The hash value is any number between 1 and 2^32 − 1. To distribute the caching task MCC over the hash domain as evenly as possible, the node numbers are mapped into the hash domain, and the hash ring is segmented by the hash values of the node numbers (as shown in fig. 4).
In the present invention, according to the node hash values, the nodes in the autonomous domain node set MNODE = {node_1, node_2, …, node_c, …, node_d, …, node_D} are sorted from small to large and form a closed ring, i.e., the hash ring, as shown in fig. 4.
Cache content popularity ranking prediction method based on damping trend
In the present invention, the requested count of cache content CC is denoted Count(CC). It is the number of times CC is requested within a time period Δt of one cache cycle H. The time periods Δt within a cache cycle H are equal; Δt may be the period from the last caching time t−1 to the current caching time t, i.e., Δt = t − (t−1).
For example, within a time period Δt, cache content CC_1 is requested 5 times, CC_2 20 times, CC_3 5 times, CC_4 15 times, CC_5 8 times, CC_6 3 times, CC_7 12 times, and CC_8 10 times. Sorting from high to low, 20 > 15 > 12 > 10 > 8 > 5 > 5 > 3, gives the cache content order CC_2 > CC_4 > CC_7 > CC_8 > CC_5 > CC_1 > CC_3 > CC_6. The number of cache contents kept on the cache node is then selected according to the node storage capacity R.
For example, when R = 5 within a time period Δt, the cache node selects CC_2, CC_4, CC_7, CC_8 and CC_5 from the cache content order CC_2 > CC_4 > CC_7 > CC_8 > CC_5 > CC_1 > CC_3 > CC_6.
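The worked example above can be reproduced in a few lines (an illustrative sketch; ties in requested count are broken by content number, matching the order given in the text):

```python
# Requested counts over the period Δt, taken from the example in the text.
request_counts = {"CC1": 5, "CC2": 20, "CC3": 5, "CC4": 15,
                  "CC5": 8, "CC6": 3, "CC7": 12, "CC8": 10}

def select_for_cache(counts, R):
    """Rank cache contents by requested count (descending) and keep the top R,
    where R is the node storage capacity. Python's stable sort breaks ties by
    the original (content-number) order."""
    ranked = sorted(counts, key=lambda cc: -counts[cc])
    return ranked[:R]
```

With R = 5 this yields exactly the selection in the example: CC2, CC4, CC7, CC8, CC5.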
In the present invention, popularity ranking is denoted pop. The popularity ranking at the current caching time t is denoted pop_t; at the last caching time t−1, pop_{t−1}; at the next caching time t+1, pop_{t+1}. Popularity ranking refers to the ranking of requested counts, and popularity follows a Zipf distribution. Therefore, the more times a cache content is requested, the higher its popularity is considered to be, and the higher its popularity ranking.
In the invention, the damping trend-based cache content popularity ranking prediction method, referred to as a damping trend popularity formula for short, is a formula (1):
Figure BDA0003308907380000101
popt+1representing the popularity ranking for the next cache time t + 1.
popt-1Representing the popularity ranking of the last cache time t-1.
poptRepresenting the popularity ranking of the current caching time t.
LtA level estimate representing a time series of pre-buffering instants t. Said LtRepresenting the request rate weighted square of two units of buffer time.
Lt-1Representing the level estimate of the time series of the last buffering instant t-1.
PtA trend estimate representing a time series of pre-buffer times t.
Pt-1The trend estimate of the time series of the last buffer instant t-1 is represented.
φ represents the damping coefficient; φ^2 represents the square of the damping coefficient; φ^h represents the damping coefficient raised to the h-th power.
α represents the first (level) smoothing parameter, with 0 ≤ α ≤ 1.
β represents the second (trend) smoothing parameter, with 0 ≤ β ≤ 1. In the present invention, α and β take different values.
In formula (1), three parameters need to be preset. The smaller φ, α and β are, the smoother the prediction curve; conversely, the larger α is, the steeper the prediction curve, and likewise the larger β is, the steeper the curve. The damping coefficient φ makes the predicted value tend toward a plateau when popularity changes quickly. With all three parameters taking values in [0, 1], formula (1) can smoothly predict the popularity of different cached contents.
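Formula (1) can be sketched as a small Python routine. This is an assumed implementation of the standard damped-trend recursion; the function name and the initialization of L and P from the first two observations are our choices, not specified by the patent:

```python
def damped_trend_forecast(series, alpha, beta, phi, h=1):
    """Predict the value h steps ahead of `series` using the damped-trend
    recursions of formula (1): level L_t, trend P_t, damping coefficient phi."""
    # Initialise level and trend from the first two observations (an assumption).
    L, P = series[0], series[1] - series[0]
    for x in series[1:]:
        L_new = alpha * x + (1 - alpha) * (L + phi * P)   # level update
        P = beta * (L_new - L) + (1 - beta) * phi * P     # damped trend update
        L = L_new
    # h-step-ahead forecast: L_t + (phi + phi^2 + ... + phi^h) * P_t
    return L + sum(phi ** i for i in range(1, h + 1)) * P
```

With φ = 1 (no damping) and α = β = 1 the recursion degenerates to a naive linear extrapolation; φ < 1 pulls the forecast toward a plateau, which is the smoothing behaviour described above.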
When making caching decisions, network traffic is usually far larger than the cache capacity within the autonomous domain, and the caching policy decides which nodes cache which content within the limited cache space of the entire autonomous domain. In a CCN network, the CS uses the prefix of the content name to identify requests and packets at the network layer, so the CCN network-layer cache is fine-grained: caching that sits at the application layer in a TCP/IP network can be moved down to the network layer. The data content flowing through the CCN network is much larger than the local cache space, and over time the popularity ranking pop of the cached content may change greatly. Therefore, for a cached content CCa stored at a node in the autonomous domain, in order to use the cache space as efficiently as possible, its popularity ranking pop_{t+1} at the next cache time t+1 must be predicted as named data flows through the nodes. To solve this technical problem, the invention combines the number of times cached content is requested with the Zipf-distribution popularity ranking to perform online prediction and preferential caching.
In the invention, the prediction steps of the cache content popularity ranking prediction method based on the damping trend are as follows:
(A) Collect all cached contents of the Content Store (CS) of the CCN within one cache cycle H, and record the number of cached contents; the total number is denoted K.
(B) Collect the number of times each cached content is requested within one cache cycle H.
(C) Apply the Zipf distribution to the request counts to realize preferential caching.
Table 3: association table of cached contents and request counts, called popularity table for short:

Cached content | Number of times requested | Popularity ranking

In the present invention, Table 3 has 3 columns and multiple rows.
The popularity ranking of cached content in the CS of a CCN network is not constant but changes over time. Therefore, the invention predicts cache content popularity ranking based on the damping trend, which improves network efficiency.
Example: a hash ring formed by 18 nodes
Step 1, calculating hash values of all nodes;
performing hash value calculation on each node in the content center network by adopting a consistent hash algorithm to obtain a hash value of each node;
Correlation table of autonomous domain nodes and hash values (given as an image in the original; it lists each node number, node storage capacity and hash value).
Step 2, the node hash values are sequenced from small to large to generate a hash ring;
All nodes are connected in ascending order of their node hash values to form a closed-loop network topology structure, namely the hash value ring, as shown in fig. 4.
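Steps 1 and 2 can be sketched as follows. This is a hedged illustration: the patent does not fix a concrete hash function, so md5 truncated to the 32-bit space is an assumption, as are the node names:

```python
import hashlib

def hash32(name: str) -> int:
    """Map a node name (or content-name prefix) onto the 32-bit hash space
    used by the consistent-hash ring; md5 is an illustrative choice."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % (2 ** 32)

# Step 1: compute the hash value of each node in the 18-node example.
nodes = [f"node{i}" for i in range(1, 19)]

# Step 2: sort the (hash, node) pairs in ascending hash order; the sorted
# sequence, read circularly, is the closed-loop hash value ring.
ring = sorted((hash32(n), n) for n in nodes)
```

The ring closes implicitly: positions past the largest hash value wrap around to the smallest, which matters for the lookup in step 5.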
Step 3, calculating the content name prefix-hash value of each cache content;
firstly, extracting a content name prefix from cached content; and then, carrying out hash value calculation on the content name prefix by adopting a consistent hash algorithm to obtain a content name prefix-hash value.
Correlation table of cached contents and hash values (given as an image in the original; it lists cache node number, cached content, content name prefix and hash value).
Step 4, distributing cache contents on a hash value ring;
and distributing the cache content on a hash value ring according to the content name prefix-hash value. As shown in fig. 5.
Step 5, comparing the hash values to obtain cache nodes;
In the invention, if the hash value of a cached content lies between the hash values of two adjacent nodes on the ring, the node with the smaller hash value is selected as the cache node.
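The lookup rule of step 5 can be sketched as follows (the toy hash values and the wrap-around behaviour at the smallest ring position are assumptions made for illustration):

```python
from bisect import bisect_right

def cache_node(ring, content_hash):
    """Ring lookup under the rule in step 5: a content hash falling between
    two adjacent node hashes is assigned to the node with the smaller hash,
    i.e. its predecessor on the ring (wrapping past the smallest position)."""
    hashes = [h for h, _ in ring]
    idx = bisect_right(hashes, content_hash) - 1
    return ring[idx][1]          # idx == -1 wraps to the largest-hash node

# Toy ring with invented hash values for three of the example nodes.
ring = [(100, "node1"), (200, "node4"), (300, "node7")]
```

For instance, a content hash of 150 falls between 100 and 200 and is assigned to the smaller-hash node, `node1`; a hash of 50 precedes every node and wraps to `node7`.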
The cache node corresponding to cached content CC1 is node4.
The cache node corresponding to cached content CC2 is node1.
The cache node corresponding to cached content CC3 is node7.
The association table of cached contents and hash values after cache-node assignment is obtained (given as an image in the original).
Step 6, collect all the cached contents identified on each cache node;
node1 holds cached contents CC2, CC11, CC12 and CC13.
node4 holds cached contents CC1, CCa and CCA.
node7 holds cached contents CC3, CC31, CC32, CC33, CC34 and CC35.
The association table of cache nodes, cached contents and hash values is obtained (given as an image in the original).
Step 7, collecting the requested times of the cache content;
Request-count table of cached contents on node1:

Cached content | Number of times requested | Popularity ranking
CC2 | 3 |
CC11 | 11 |
CC12 | 10 |
CC13 | 8 |
Popularity table of cached contents on node4:

Cached content | Number of times requested | Popularity ranking
CC1 | 20 |
CCa | 11 |
CCA | 17 |
Popularity table of cached contents on node7:

Cached content | Number of times requested | Popularity ranking
CC3 | 13 |
CC31 | 11 |
CC32 | 10 |
CC33 | 8 |
CC34 | 17 |
CC35 | 20 |
Step 8, predict the cache content popularity ranking based on the damping trend. Popularity table of cached contents on node1:

Cached content | Number of times requested | Popularity ranking
CC2 | 3 | 4
CC11 | 11 | 1
CC12 | 10 | 2
CC13 | 8 | 3
Popularity table of cached contents on node4:

Cached content | Number of times requested | Popularity ranking
CC1 | 20 | 1
CCa | 11 | 3
CCA | 17 | 2
Popularity table of cached contents on node7:

Cached content | Number of times requested | Popularity ranking
CC3 | 13 | 3
CC31 | 11 | 4
CC32 | 10 | 5
CC33 | 8 | 6
CC34 | 17 | 2
CC35 | 20 | 1
Step 9, complete the preferential storage of cached content on each cache node according to the node storage capacity;
With a node storage capacity of 4, node1 keeps all of CC2, CC11, CC12 and CC13 at the next cache time t+1.
With a node storage capacity of 4, node4 keeps all of CC1, CCa and CCA at the next cache time t+1.
With a node storage capacity of 4, node7 keeps CC3, CC31, CC34 and CC35 at the next cache time t+1.
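Steps 7–9 for node7 can be reproduced with a short sketch (names and counts are taken from the tables above; the ranking here is the plain request-count ranking, i.e. the step-8 damping-trend prediction is not re-run):

```python
# Step 7: request counts collected on node7 in one cache cycle.
counts = {"CC3": 13, "CC31": 11, "CC32": 10, "CC33": 8, "CC34": 17, "CC35": 20}

# Step 8 (ranking): most requested first; rank 1 is the most popular.
order = sorted(counts, key=counts.get, reverse=True)
rank = {cc: i + 1 for i, cc in enumerate(order)}

# Step 9: with storage capacity R = 4, keep only the top-ranked contents.
R = 4
kept = set(order[:R])
```

This reproduces node7's popularity table (CC35 rank 1 … CC33 rank 6) and the set {CC3, CC31, CC34, CC35} retained at the next cache time.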
Performance analysis
Simulation parameter setting
The invention adopts a simulation platform SocialCCN to carry out quantitative evaluation on the cache strategy in the aspects of cache efficiency and network performance improvement.
The simulation scenario in embodiment 1 is a three-layer autonomous domain network, as shown in fig. 6, an autonomous domain is connected to a content server through two layers of gateways, and network service quality provided by caches of other nodes in the autonomous domain is better than that of a content server or other autonomous domain nodes.
Embodiment 1 evaluates the proposed caching strategy from two aspects: cache efficiency, quantified by the cache hit rate, and network performance improvement, quantified by the average hop count.
The specific simulation parameter settings in example 1 are shown in table 4.
Table 4: simulation parameter settings

Parameter name | Parameter value
Network topology | GEANT
Request pattern | Zipf-distributed popularity requests
Number of contents in the network | 2000
Zipf distribution parameter | 1.0 (0.2–1.2)
Number of nodes | 20
Total cache capacity (percentage of total network content) | 20% (5%–35%)
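The request workload of Table 4 can be sketched as follows (a hypothetical generator: the content count 2000 and Zipf parameter 1.0 come from the table; the function name, seed and request volume are assumptions):

```python
import random

def zipf_request_stream(num_contents, alpha, num_requests, seed=0):
    """Generate a stream of content requests whose popularity follows a
    Zipf distribution with parameter alpha, as in the simulation setup."""
    rng = random.Random(seed)
    # Zipf weight of content i is proportional to 1 / i^alpha.
    weights = [1.0 / (i ** alpha) for i in range(1, num_contents + 1)]
    ids = list(range(1, num_contents + 1))
    return rng.choices(ids, weights=weights, k=num_requests)

# 2000 contents, Zipf parameter 1.0, as in Table 4.
stream = zipf_request_stream(2000, 1.0, 10000)
```

Raising alpha concentrates requests on the top-ranked contents, which is exactly the "more concentrated request preference" effect discussed in the cache-efficiency experiment.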
Cache efficiency versus cache capacity:
The proposed method is referred to as Dynamic Intra-AS POP for short. Dynamic Intra-AS POP is first compared with the LCE strategy and the mainstream MPC and LCD strategies under different cache capacities. As shown in fig. 7, the cache hit rate increases with cache capacity, and Dynamic Intra-AS POP performs significantly better than the mainstream LCE, LCD and MPC methods.
Cache efficiency versus Zipf parameter:
Dynamic Intra-AS POP is compared with the LCE strategy and the mainstream MPC and LCD strategies under different Zipf parameters. As shown in fig. 7, the cache hit rate increases with the Zipf parameter: a larger Zipf parameter α indicates that users' content requests are more concentrated, while a smaller α indicates that requests are more dispersed. The more concentrated the request preference, the higher the caching efficiency, and Dynamic Intra-AS POP is clearly superior to the existing mainstream LCE, LCD and MPC methods.
Network performance versus cache capacity:
The average hop count is compared under different cache capacities; the smaller the average hop count, the better the network performance. Dynamic Intra-AS POP is compared with the LCE strategy and the mainstream MPC and LCD strategies. As shown in fig. 8, the average hop count decreases as the cache capacity increases, and the network-performance improvement of Dynamic Intra-AS POP is superior to that of the existing mainstream LCE, LCD and MPC methods.
Hop-count distribution: to further study how the proposed policy improves network performance, the distribution of hop counts under different policies is examined in addition to the statistical mean, as shown in fig. 8. The Dynamic Intra-AS POP policy reduces the request hop count for cached content in the autonomous domain by caching the most popular content; although the hop count for unpopular content increases, the policy still yields an improvement in average network performance compared with the default policy.

Claims (2)

1. A cooperative caching method in an autonomous domain facing a content center network is characterized by comprising the following steps:
step 1, calculating hash values of all nodes in an autonomous domain in a content center network;
performing hash value calculation on each node in an autonomous domain in a content center network by adopting a consistent hash algorithm to obtain a hash value of each node;
the constructed association table of the autonomous domain nodes and the hash values comprises the following contents: node number, node storage capacity and hash value;
the autonomous domain node set is denoted MNODE, and MNODE = {node1, node2, …, nodec, …, noded, …, nodeD};
node1 represents the first autonomous domain node; after node-numbering processing of node1 with the consistent hash algorithm, the obtained node hash value is recorded as node1^hash;
node2 represents the second autonomous domain node; after node-numbering processing of node2 with the consistent hash algorithm, the obtained node hash value is recorded as node2^hash;
nodec represents the c-th autonomous domain node; after node-numbering processing of nodec with the consistent hash algorithm, the obtained node hash value is recorded as nodec^hash;
noded represents the d-th autonomous domain node; after node-numbering processing of noded with the consistent hash algorithm, the obtained node hash value is recorded as noded^hash; the subscript d represents the identification number of the autonomous domain node;
nodeD represents the last autonomous domain node; after node-numbering processing of nodeD with the consistent hash algorithm, the obtained node hash value is recorded as nodeD^hash; the subscript D represents the total number of nodes in one autonomous domain;
step 2, the node hash values are sequenced from small to large to generate a hash ring;
communicating each node according to the size of the hash value on the node to form a closed-loop hash value ring network topology structure;
step 3, calculating the content name prefix-hash value of each cache content;
the content storage bank CS of the content center network has cache content;
firstly, extracting a content name prefix from cached content; then, performing hash value calculation on the content name prefix by adopting a consistent hash algorithm to obtain a content name prefix-hash value;
the contents in the constructed association table of the cache contents and the hash values are as follows: the method comprises the steps of caching node numbers, caching contents, content name prefixes and hash values;
the cached content set is denoted MCC, and MCC = {CC1, CC2, …, CCa, …, CCb, …, CCA};
CC1 represents the first piece of cached content; the content name prefix of CC1 is denoted Prefix_CC1; after processing Prefix_CC1 with the consistent hash algorithm, the obtained content hash value is recorded as Prefix_CC1^hash;
CC2 represents the second piece of cached content; the content name prefix of CC2 is denoted Prefix_CC2; after processing Prefix_CC2 with the consistent hash algorithm, the obtained content hash value is recorded as Prefix_CC2^hash;
CCa represents the a-th piece of cached content; the content name prefix of CCa is denoted Prefix_CCa; after processing Prefix_CCa with the consistent hash algorithm, the obtained content hash value is recorded as Prefix_CCa^hash; the subscript a represents the identification number of the cached content;
CCb represents the b-th piece of cached content; the content name prefix of CCb is denoted Prefix_CCb; after processing Prefix_CCb with the consistent hash algorithm, the obtained content hash value is recorded as Prefix_CCb^hash;
CCA represents the last piece of cached content; the content name prefix of CCA is denoted Prefix_CCA; after processing Prefix_CCA with the consistent hash algorithm, the obtained content hash value is recorded as Prefix_CCA^hash; the subscript A represents the total number of cached contents;
step 4, distributing cache contents on a hash value ring;
distributing the cache content on a hash value ring according to the content name prefix-hash value to form a hash value ring structure of the cache content and the node;
step 5, comparing the hash values to obtain cache nodes;
if the hash value of a cached content lies between the hash values of two adjacent nodes, the node with the smaller hash value is selected as the cache node;
step 6, collecting all the cached contents identified on each cache node;
within one cache cycle H, the node that stores cached content CCa according to popularity is marked as its cache node;
Step 7, collecting the requested times of the cache content;
if the content CC is cachedaRequested, recorded number of times requested, noted
Figure FDA0003308907370000027
Step 8, ranking and predicting the popularity of the cache content based on the damping trend;
the popularity of the damping trend is expressed as
Figure FDA0003308907370000031
popt+1Representing the popularity ranking of the next caching time t + 1;
popt-1representing the popularity ranking of the last cache time t-1;
poptrepresenting the popularity ranking of the current caching time t;
Lta level estimation value representing a time series of the pre-buffering time t; said LtRepresenting the request rate weighted square of two cache time units;
Lt-1a level estimation value representing a time series of a last buffering time t-1;
Pta trend estimation value representing a time series of the previous buffering time t;
Pt-1a trend estimation value representing the time sequence of the last cache time t-1;
phi represents a damping coefficient; phi is a2Represents the square of the damping coefficient; phi is ahRepresents the damping coefficient to the power of h;
alpha represents a first horizontal smooth parameter, and the value of alpha is more than or equal to 1 and more than or equal to 0;
beta represents a second horizontal smooth parameter, and the value of beta is more than or equal to 1 and more than or equal to 0; in the present invention, α and β are different values;
and step 9, completing the preferential storage of the cached content on the cache nodes according to the node storage capacity.
2. The content-centric-network-oriented autonomous intra-domain collaborative caching method according to claim 1, wherein: the hash value is any number between 1 and (2^32 − 1).
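Claim 2's hash-value range can be illustrated with a small sketch (md5 and the offset-by-one mapping into [1, 2^32 − 1] are assumptions; claim 2 only constrains the range of the hash value):

```python
import hashlib

def hash_in_claim_range(name: str) -> int:
    """Illustrative hash that always lands in [1, 2**32 - 1], the range
    stated in claim 2; the modulus-plus-one mapping is an assumption."""
    digest = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return digest % (2 ** 32 - 1) + 1   # 0..2^32-2, shifted to 1..2^32-1
```

Any node name or content-name prefix hashed this way yields a value in the claimed range, so 0 never appears on the ring.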
CN202111210937.8A 2021-10-18 2021-10-18 Content-centric-network-oriented autonomous domain collaborative caching method Active CN113965588B (en)


Publications (2)

Publication Number Publication Date
CN113965588A true CN113965588A (en) 2022-01-21
CN113965588B CN113965588B (en) 2023-08-22

Family

ID=79464303



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160094553A1 (en) * 2014-09-26 2016-03-31 Futurewei Technologies, Inc. Hash-Based Forwarding In Content Centric Networks
US20160241669A1 (en) * 2015-02-16 2016-08-18 Telefonaktiebolaget L M Ericsson (Publ) Temporal caching for icn
CN106533733A (en) * 2016-08-30 2017-03-22 中国科学院信息工程研究所 CCN collaborative cache method and device based on network clustering and Hash routing
CN110958573A (en) * 2019-11-22 2020-04-03 大连理工大学 Mobile perception cooperative caching method based on consistent Hash under vehicle-mounted content center network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI WEI; PAN PEISHENG: "Application of cooperative caching and mathematical modeling in CCN", Computer Technology and Development (计算机技术与发展), no. 07 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant