CN110365801B - Partition-based cooperative caching method in information center network - Google Patents

Partition-based cooperative caching method in information center network

Info

Publication number
CN110365801B
Authority
CN
China
Prior art keywords
content
cache
router
partition
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910787735.6A
Other languages
Chinese (zh)
Other versions
CN110365801A (en)
Inventor
柳寰宇
李黎
王小明
张立臣
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN201910787735.6A priority Critical patent/CN110365801B/en
Publication of CN110365801A publication Critical patent/CN110365801A/en
Application granted granted Critical
Publication of CN110365801B publication Critical patent/CN110365801B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/74 Address processing for routing
    • H04L 45/742 Route cache; Operation thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/74 Address processing for routing
    • H04L 45/745 Address table lookup; Address filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A partition-based cooperative caching method in an information center network (ICN) comprises the following steps: S100: partitioning the cache nodes in the information center network according to how closely the network nodes are connected and to the load balance of the whole network; S200: taking the partitions as units, using a content centrality metric as the basis for selecting cache nodes within each partition, and cooperatively placing cached content in each partition. The method significantly reduces network cache redundancy, balances the network load, and improves the node cache hit rate and the diversity of cached content while also keeping content acquisition delay low.

Description

Partition-based cooperative caching method in information center network
Technical Field
The disclosure belongs to the technical field of information communication and network caching, and particularly relates to a partition-based cooperative caching method in an information center network.
Background
With the rapid development of the Internet, Internet applications are shifting from a sender-driven peer-to-peer communication mode to a receiver-driven mass content acquisition mode. To better accommodate this shift, information-centric networking (ICN) has recently been developed, in which an information-centric communication mode replaces the traditional host-centric communication mode. ICN is a new network architecture that is gaining increasing acceptance; typical instances include CCN/NDN, DONA, and NetInf.
Because users do not care where information/content is stored (the "where") but only about the information/content itself (the "what"), the network identifies content uniformly and performs locating, routing, and transmission based on the content. Meanwhile, ICN advocates in-network caching as a key technology for improving network performance. The core idea of in-network caching is to add a built-in caching function to all routers in the network and to cache content resources on the routers that have cache space, so that content requests issued by users can be answered directly by a nearby node along the route, without having to go back to the content source (server) again; this reduces server load, network traffic, and content acquisition delay. A reasonable and efficient caching method therefore has a crucial influence on the overall performance of an ICN.
ICN adopts the Leave Copy Everywhere (LCE) method by default, which caches the requested content resources on every node along the content delivery path. Although the LCE method is simple to operate and easy to implement, it often wastes cache resources and limits the improvement of overall network performance. Because the cache space of the network is limited, indiscriminately caching the same content on every node causes serious cache redundancy and reduces the diversity of cached content. At the same time, the LCE method leads to an unreasonable spatial distribution of cached content and lowers the utilization of the cache space. Unlike the LCE method, Chai et al. proposed caching content resources only on some of the nodes along the content delivery path and introduced a caching method based on betweenness centrality. The betweenness caching method uses the betweenness of nodes and caches content only on the node with the largest betweenness on the content transmission path. Because nodes with large betweenness have a higher probability of cache hits, the utilization of cache resources and the overall cache hit rate are improved. However, since all content is cached on nodes with large betweenness, the betweenness caching method often suffers from an unreasonable spatial distribution of cached content and an uneven content-request load across nodes.
Disclosure of Invention
In order to solve the above problem, the present disclosure provides a partition-based cooperative caching method in an information centric networking ICN, including the following steps:
S100: partitioning the cache nodes in the information center network according to how closely the network nodes are connected and to the load balance of the whole network;
S200: taking the partitions as units, using a content centrality metric as the basis for selecting cache nodes within each partition, and cooperatively placing cached content in each partition.
In this technical scheme, region division is first introduced: the cache nodes in the network are partitioned while taking both network delay and load balance into account, so that cached content is distributed more reasonably in space. Second, the method takes the partition as the unit, uses the content centrality index as the basis for selecting cache nodes within the partition, and performs cooperative caching inside each partition. The method effectively reduces network cache redundancy, balances the load across the whole network, and improves the diversity of cached content and the efficiency of content caching.
Drawings
Fig. 1 is a schematic flowchart of a partition-based cooperative caching method in an information centric networking ICN according to an embodiment of the present disclosure;
FIG. 2 is a graph comparing the impact of content quantity on cache hit rate in one embodiment of the present disclosure;
FIG. 3 is a graph comparing the effect of content quantity on hop count reduction rate in one embodiment of the present disclosure;
FIG. 4 is a comparison graph of the impact of content quantity on content diversity in one embodiment of the present disclosure;
FIG. 5 is a graph comparing the effect of content quantity on the Gini coefficient of node load in one embodiment of the present disclosure;
FIG. 6 is a comparison graph of the impact of content quantity on cache redundancy in one embodiment of the present disclosure;
FIG. 7 is a graph comparing the effect of node cache capacity on cache hit rate in one embodiment of the present disclosure;
FIG. 8 is a graph comparing the effect of node cache capacity on hop count reduction rate in one embodiment of the present disclosure;
FIG. 9 is a graph comparing the effect of node cache capacity on content diversity in one embodiment of the present disclosure;
FIG. 10 is a graph comparing the effect of node cache capacity on the Gini coefficient of node load in one embodiment of the present disclosure;
FIG. 11 is a graph comparing the effect of node cache capacity on cache redundancy in one embodiment of the present disclosure.
Detailed Description
In one embodiment, as shown in fig. 1, a partition-based cooperative caching method in an information-centric networking ICN is disclosed, the method comprising the steps of:
S100: partitioning the cache nodes in the information center network according to how closely the network nodes are connected and to the load balance of the whole network;
S200: taking the partitions as units, using a content centrality metric as the basis for selecting cache nodes within each partition, and cooperatively placing cached content in each partition.
In this embodiment, the cache nodes in the network are divided into regions so that closely connected cache nodes fall into the same partition, while the load balance of the whole network is taken into account at the same time. On the basis of this regional division, and taking the partition as the unit, the cache node with the largest content centrality value is selected in each partition traversed by the content transmission path to perform intra-partition cooperative caching. As a result, cached content is distributed more reasonably in space, load is better balanced across nodes, cache redundancy is reduced, the diversity of cached content increases, the cache hit rate improves, and content acquisition delay decreases.
In this embodiment, the information-centric network includes a caching node, a content origin server, and a user that sends a content request. The user obtains the needed content object from the network by sending the interest packet, and the service node (server or router with cache function) receiving the interest packet and storing the corresponding content returns the data packet to the user according to the reverse path.
In another embodiment, an information-centric networking (ICN) system is modeled as an undirected network graph G = (V, E), where V denotes the set of cache nodes and E denotes the set of all links. Let S denote the set of content source servers and U denote the set of users. C = {c1, c2, ..., cR} denotes the set of all content blocks, where ci ∈ C is the i-th content block. It is assumed that each content object consists of several content blocks, that all content blocks have the same size, and that each block is stored in exactly one content source server s(ci) ∈ S. The associated symbol definitions are shown in Table 1.
TABLE 1 Symbol definitions
[Table 1 appears as an image in the original document and is not reproduced here]
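To make the model concrete, the sketch below sets up a toy instance of G = (V, E) with a user set, a server set, and content blocks; the use of the networkx library, the toy topology, and all identifier names are assumptions made for illustration and are not part of the patent.

```python
# Illustrative sketch only: a toy instance of the model above.
# networkx, the topology, and every name here are assumptions for the example.
import networkx as nx

G = nx.Graph()                                  # undirected network graph G = (V, E)
G.add_edges_from([(0, 1), (1, 2), (1, 3), (3, 4)])

V = set(G.nodes)                                # set of cache nodes
S = {4}                                         # set of content source servers
U = {0, 2}                                      # set of users (attached at these nodes)
R = 100                                         # number of content blocks
C = [f"c{i}" for i in range(1, R + 1)]          # content blocks c1, ..., cR (equal size)

# Each content block ci is stored in exactly one content source server s(ci) ∈ S.
origin = {c: 4 for c in C}
```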
In another embodiment, step S100 further comprises the steps of:
S101: dividing closely connected cache nodes into the same partition, so that the distances between the cache nodes in a partition are as small as possible;
S102: keeping the numbers of cache nodes contained in the different partitions as close to each other as possible.
In this embodiment, the partitioning stage adds a "community scale detection" step on top of the classic GN community discovery algorithm in order to obtain a reasonable partitioning result.
The GN algorithm is a divisive hierarchical clustering algorithm that splits the network by repeatedly deleting the edge with the maximum edge betweenness. Using the GN algorithm to obtain the partitions ensures that the nodes within a partition are closely connected, and in most cases the partition sizes are also similar. Concretely: when the network has no obvious community structure, the GN algorithm quickly divides it into several relatively uniform disconnected subnets whose internal nodes are closely connected; when the network has an obvious community structure with communities of similar size, the GN algorithm directly extracts that structure, the resulting communities are of similar size, and the nodes within each community are closely connected; when the network has an obvious community structure but some communities differ greatly in size, the GN algorithm cannot guarantee similar community sizes, although the nodes within each community are still closely connected.
To ensure the load balance of the partitions, a community scale detection step therefore has to be added. When the network is divided with the GN algorithm, one or more subnets may turn out to be much larger than the others; the load inside such a subnet then tends to become uneven and congestion can occur. The community scale detection step divides the larger subnets again so that the resulting subnets are as similar in size as possible, which ensures the load balance of the partitions.
The method draws on the GN algorithm and, in line with the region-division goal of interest here, extends it to divide the cache nodes in the network into regions and obtain the partitioning result. The main steps are as follows (a code sketch is given after the list):
(1) calculate the edge betweenness of every edge in the network topology graph;
(2) delete the edge with the maximum edge betweenness;
(3) repeat steps (1) and (2) until the number of disconnected subnets in the network equals the partition count k;
(4) check the size of each subnet; if the subnet sizes differ greatly, go to step (5); if the sizes are similar, go to step (6);
(5) apply the GN algorithm again (i.e., steps (1) and (2)) to the larger subnet(s) in the network, then repeat step (4);
(6) each subnet in the resulting network is one of the final partitions.
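A minimal sketch of steps (1) to (6) follows, assuming the networkx library is available; the imbalance threshold, the helper names, and the way an "oversized" subnet is detected are illustrative choices, not values specified in the patent.

```python
import networkx as nx

def partition_network(G, k, imbalance_ratio=2.0):
    """Split G into partitions by repeatedly deleting the maximum-betweenness
    edge (steps (1)-(3)), then re-split subnets that are much larger than the
    smallest one (steps (4)-(5)). imbalance_ratio is an illustrative threshold,
    not a value taken from the patent."""
    H = G.copy()

    def split_until(graph, target):
        # Steps (1)-(2), repeated per step (3): delete the edge with the
        # largest edge betweenness until `target` disconnected subnets exist.
        while nx.number_connected_components(graph) < target:
            eb = nx.edge_betweenness_centrality(graph)
            u, v = max(eb, key=eb.get)
            graph.remove_edge(u, v)

    split_until(H, k)
    while True:                                   # step (4): community scale detection
        comps = sorted(nx.connected_components(H), key=len)
        if len(comps[-1]) <= imbalance_ratio * len(comps[0]):
            break                                 # sizes are similar -> step (6)
        big = H.subgraph(comps[-1]).copy()        # step (5): re-split the largest subnet
        split_until(big, 2)
        removed = set(H.subgraph(comps[-1]).edges) - set(big.edges)
        H.remove_edges_from(removed)
    return [set(c) for c in nx.connected_components(H)]  # step (6): final partitions
```

On a small topology such as the toy graph sketched earlier, partition_network(G, 2) splits the nodes into {0, 1, 2} and {3, 4}, which matches the intent of keeping partition sizes similar.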
In this embodiment, the closeness of the connections between nodes and the load balance of the network as a whole are both considered under the constraint of limited cache resources. Reasonable region division improves the load-balancing capability of the ICN as well as the diversity and effectiveness of cached content. The method balances network delay and load within each partition and has the following characteristics: first, the cache nodes within a partition are closely connected, i.e., the distances between them are as small as possible, which keeps intra-partition access delay as small as possible; second, the partition sizes are as close as possible, i.e., the numbers of cache nodes contained in the partitions do not differ much, which ensures load balance across the partitions.
In another embodiment, the cache node comprises a router with caching functionality.
In another embodiment, the router with the caching function caches content objects that pass through it according to a caching policy, and at the same time performs routing and forwarding by querying and maintaining a content store CS, a pending interest table PIT, and a forwarding information base FIB.
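For readers unfamiliar with these three tables, the following minimal sketch shows the per-router state they imply; the class and field names are assumptions made for illustration, and the LRU ordering anticipates the replacement strategy used later in this disclosure.

```python
from collections import OrderedDict

class CacheRouter:
    """Illustrative per-router state (names assumed for this sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cs = OrderedDict()   # Content Store: cached blocks, kept in LRU order
        self.pit = {}             # Pending Interest Table: content name -> requesting faces
        self.fib = {}             # Forwarding Information Base: name prefix -> next-hop face

    def cs_lookup(self, name):
        """Return cached data for `name` and refresh its LRU position, else None."""
        if name in self.cs:
            self.cs.move_to_end(name)
            return self.cs[name]
        return None
```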
In another embodiment, step S200 further comprises the steps of:
S201: processing an interest packet;
S202: processing a data packet.
In this embodiment, the ICN uses two packet types: interest packets and data packets. An interest packet carries the name of the content object requested by the user, and a data packet carries the content object requested by the user. Content objects are cached and transferred in blocks, and all content blocks have the same size.
In another embodiment, step S201 further comprises the steps of:
S2010: initializing, for each partition traversed on the shortest path, the maximum content centrality value among the routers with cache space;
S2011: if a copy of the requested content is cached in the partition of the router with cache space accessed by the user, accessing within the partition; if no copy of the requested content is cached in that partition, accessing the server;
S2012: for each router with cache space on the shortest path from the user to the content source server, if the cache is hit, returning the requested content data; if the cache is not hit, obtaining the content centrality value of the traversed router with cache space in its partition;
S2013: if that content centrality value is larger than the recorded maximum, updating the maximum content centrality value for the corresponding partition traversed on the shortest path;
S2014: forwarding the interest packet to the next-hop router.
In this embodiment, interest-packet processing records the maximum content centrality value in each partition traversed on the content delivery path, which provides the basis for the subsequent intra-partition cooperative caching.
In another embodiment, intra-partition access in step S2011 specifically means that the interest packet is forwarded, along the shortest path from the accessed router with cache space, to the node in the partition that holds a copy of the requested content, and the requested content data is obtained from that node; server access in step S2011 specifically means that the interest packet is forwarded along the shortest path from the accessed router with cache space to the content source server.
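A sketch of steps S2010 to S2014 follows, reusing the CacheRouter sketch above; the shortest path is assumed to have been chosen as described for step S2011 (towards an in-partition holder of the copy if one exists, otherwise towards the content source server), and all argument names are assumptions for illustration.

```python
def forward_interest(path, name, partition_of, cc_value):
    """Walk the routers with cache space along the chosen shortest path,
    recording per traversed partition the largest content centrality value
    seen so far (S2010, S2012-S2014). Returns the node that serves the
    request together with the recorded maxima."""
    max_cc = {}                                   # S2010: per-partition maxima
    for router in path:
        if router.cs_lookup(name) is not None:    # S2012: cache hit -> return data here
            return router, max_cc
        p = partition_of[router]                  # S2012: no hit -> read the CC value
        if cc_value[router] > max_cc.get(p, float("-inf")):
            max_cc[p] = cc_value[router]          # S2013: update the maximum
        router.pit.setdefault(name, set()).add("downstream")
        # S2014: the interest packet is then forwarded to the next-hop router.
    return path[-1], max_cc                       # served at the path's last node
```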
In another embodiment, step S202 specifically includes: judging whether the content centrality value of the traversed router with cache space matches the recorded maximum content centrality value for the partition to which that router belongs; if not, forwarding to the next-hop router; and if so, making a caching decision.
Forwarding to the next-hop router is accomplished by looking up the forwarding information base (FIB).
In this embodiment, data-packet processing distributes the cached content more reasonably in space, markedly reduces network cache redundancy, balances the network load, and improves the node cache hit rate and the diversity of cached content while also keeping content acquisition delay in check.
In another embodiment, the caching decision is specifically: judging whether a copy of the requested content is already cached in the partition where the router is located; if so, forwarding to the next-hop router; if not, judging whether the router's cache space is full; if the cache space is not full, directly caching the requested content and forwarding it to the next-hop router; and if the cache space is full, replacing cached content according to the LRU cache replacement strategy and then forwarding to the next-hop router, continuing until the data packet reaches the user who requested the content.
In this embodiment, forwarding to the next-hop router is accomplished by looking up the pending interest table PIT and the forwarding information base FIB. Caching the requested content is accomplished via the router's content store CS.
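The data-packet side of the method can be sketched as follows, again reusing the CacheRouter sketch; partition_has_copy stands in for whatever mechanism the partition uses to know whether a copy already exists, and all names here are assumptions of this sketch.

```python
def forward_data(reverse_path, name, data, partition_of, cc_value, max_cc,
                 partition_has_copy):
    """Return the data packet along the reverse path. A copy is cached only on
    the router whose content centrality value equals the recorded maximum of
    its partition, and only if the partition holds no copy yet; a full cache is
    handled with LRU replacement (step S202 plus the caching decision)."""
    for router in reverse_path:
        p = partition_of[router]
        if cc_value[router] == max_cc.get(p) and not partition_has_copy(p, name):
            if len(router.cs) >= router.capacity:
                router.cs.popitem(last=False)     # cache full: evict the LRU entry
            router.cs[name] = data                # cache the requested content
        router.pit.pop(name, None)                # consume the pending interest entry
        # ...forwarding continues until the data packet reaches the requesting user.
```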
In another embodiment, the content centrality measure in step S200 is defined as:
CC(v) = Σ_{c∈C} Σ_{u∈U} p_c · σ_v(u, c) / σ(u, c)
where σ_v(u, c) denotes the number of shortest paths, passing through cache node v, along which user u requests content block c; σ(u, c) denotes the total number of shortest paths along which user u requests content block c; CC(v) denotes the content centrality of cache node v; C denotes the set of content blocks; U denotes the set of users; and p_c denotes the probability that a user requests content c.
In this embodiment, the content centrality metric jointly accounts for the location centrality of the cache node and the popularity of the content. The probability p_c that a user requests content c is used to distinguish the popularity of different contents, where p_c satisfies Σ_{c∈C} p_c = 1; the greater the probability that a content is requested, the higher its popularity.
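The definition above can be evaluated directly by enumerating shortest paths, as in the sketch below; the networkx calls and the brute-force enumeration (adequate only for small topologies such as OS3E) are choices made for illustration, and how the path endpoints are treated is a modeling detail the patent does not fix.

```python
import networkx as nx

def content_centrality(G, users, contents, origin, popularity):
    """Compute CC(v) = sum over c and u of p_c * sigma_v(u, c) / sigma(u, c)
    by enumerating all shortest paths from each user u to the source of c."""
    cc = {v: 0.0 for v in G.nodes}
    for c in contents:
        for u in users:
            paths = list(nx.all_shortest_paths(G, source=u, target=origin[c]))
            sigma = len(paths)                              # sigma(u, c)
            for v in G.nodes:
                sigma_v = sum(1 for p in paths if v in p)   # sigma_v(u, c)
                cc[v] += popularity[c] * sigma_v / sigma
    return cc
```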
In another embodiment, in order to verify the effectiveness of the partition-based cooperative caching method in the information center network ICN, i.e., the PCCM method, the following embodiment selects for comparison the representative Leave Copy Everywhere method (LCE), the max-benefit probability-based caching method (MBP), and the betweenness-based caching method (Betw). The cache replacement strategy adopts the LRU (least recently used) policy.
A representative OS3E network topology containing 34 nodes and 42 edges was chosen for the experiment. One content source server is deployed in the network. Every router is assumed to have the same cache capacity. User requests arrive according to a Poisson process, and the requested content follows a Zipf distribution. The main parameter settings of the experiment are listed in Table 2.
TABLE 2 Experimental parameters
[Table 2 appears as an image in the original document and is not reproduced here]
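To illustrate the request model (Poisson arrivals, Zipf-distributed content choices), a small generator is sketched below; the Zipf exponent, request rate, and duration are placeholders, since the actual values belong to Table 2 and are not reproduced here.

```python
import random

def zipf_probabilities(num_contents, alpha):
    """Zipf popularity: p_c proportional to 1 / rank^alpha (alpha is a placeholder)."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_requests(num_contents, alpha, rate, duration, seed=0):
    """Poisson arrival process with Zipf-distributed content selection."""
    rng = random.Random(seed)
    probs = zipf_probabilities(num_contents, alpha)
    t, requests = 0.0, []
    while t < duration:
        t += rng.expovariate(rate)                      # exponential inter-arrival time
        content = rng.choices(range(num_contents), weights=probs, k=1)[0]
        requests.append((t, content))
    return requests
```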
In another embodiment, five indexes of cache hit rate, hop count reduction rate, content diversity, Gini coefficient of node load, and cache redundancy are used to evaluate the performance of the PCCM method.
(1) Cache hit rate
Cache Hit Ratio (CHR) can characterize the capacity of a cache method to reduce the load of a server, and reflect the efficiency of a cache system. The higher the cache hit rate, the less the server load is stressed, and the higher the efficiency of the cache system. It is defined as the probability of responding to a user request by a caching node rather than an origin server:
CHR = h / r
wherein h represents the number of content requests responded at the cache node, and r represents the total number of all content requests sent by the user.
(2) Hop count reduction rate
The Hop Reduction Rate (HRR) characterizes how much faster users acquire content. The higher the hop count reduction rate, the smaller the delay the user needs to acquire the content and the faster the response of the cache system. It is defined as:
HRR = 1 − ( Σ_{i=1}^{r} d_i ) / D
where d_i is the number of access hops needed for the i-th content request to be answered at a cache node or at the source server node, D is the total number of access hops that would be needed if all content requests were answered only at the source server node, and r is the total number of content requests sent by users.
(3) Content diversity
Content Diversity (CD) indicators are used to measure the degree of differential caching. The more the types of the cached contents are, the smaller the consumption of the caching space is, the larger the content diversity value is, and the higher the differentiated caching degree is. It is defined as:
CD = (Type_num / Content_num) / (Consumed_num / Total_num)
the Type _ num represents the number of cached Content types, the Content _ num represents the number of all Content types in the Content source server, the Consumed _ num represents the average cache number Consumed in the network, and the Total _ num represents the Total cache number of all nodes in the network.
(4) Gini coefficient of node load
In order to reflect the degree of load balance, a Gini coefficient (Gini coefficient) evaluation index for measuring the degree of probability distribution unevenness is introduced. The larger the value of the Gini coefficient is, the larger the node load difference is, i.e., the larger the degree of load unevenness is. Defining the load of a node as the number of times that the node responds to a user content request within a period of time, and calculating the Gini coefficient (LoadGini) of the load of the node according to the formula:
LoadGini = ( Σ_{i=1}^{n} Σ_{j=1}^{n} |x_i − x_j| ) / ( 2 n² <x> )
where x_i is the load of node i, i.e., the number of times node i responds to user content requests, <x> is the average node load, i.e., the average number of times a node responds to user content requests, and n is the number of cache nodes.
(5) Cache redundancy
Reducing cache redundancy in a network is one of the main objectives to improve ICN cache performance. The redundancy degree of the content in the cache is described by using a Cache Redundancy (CR) index. The lower the value of the cache redundancy is, the fewer the number of times the content is repeatedly stored in the cache is, and the better the cache performance is. It is defined as:
CR = (Cache_num − Type_num) / Cache_num
the Cache _ num represents the number of cached contents, and the Type _ num represents the number of cached content types.
In another embodiment, in order to observe the influence of a given parameter on caching performance, only that single parameter is varied in the following experiments while all other parameters are kept at the values shown in Table 2.
As can be seen from Figs. 2, 3, and 4, as the number of contents in the network increases, the cache hit rate, the hop count reduction rate, and the content diversity of every caching method show an overall decreasing trend. This is because, as the number of requested contents in the network grows, the number of content blocks that need to be cached grows while the cache space of the nodes is limited, which lowers caching performance. Nevertheless, the cache hit rate and content diversity of the PCCM method remain superior to those of the other caching methods, and as the number of contents increases, the hop count reduction rate of the PCCM method is consistently higher than that of the Betw method.
As can be seen from Figs. 5 and 6, the node-load Gini coefficient and the cache redundancy of the PCCM method are consistently and clearly better than those of the other caching methods. This is because the PCCM method takes node load balancing and cooperative caching into account.
This embodiment mainly analyzes the impact of the number of contents on the performance of the different caching methods. As the number of contents varies, the PCCM method shows clear advantages over the compared caching methods on all five evaluation metrics: cache hit rate, hop count reduction rate, content diversity, node-load Gini coefficient, and cache redundancy. Because the PCCM method takes both the regional division of cache nodes and cooperative caching into account, it improves the hit rate and diversity of cached content, reduces cache redundancy, balances the load, and reduces the delay users experience when accessing content.
In another embodiment, as can be seen from Figs. 7, 8, and 9, as the node cache capacity increases, the cache hit rate, the hop count reduction rate, and the content diversity of every caching method improve. This is because, as the cache capacity of the nodes grows, more content copies are cached in the network and the probability that a content request is hit at a cache node rises, which improves overall caching performance. The PCCM method remains superior to the other caching methods in terms of cache hit rate and content diversity, and superior to the Betw and LCE methods in terms of hop count reduction rate.
As can be seen from Figs. 10 and 11, the node-load Gini coefficient and the cache redundancy of the PCCM method remain clearly better than those of the other caching methods.
This embodiment mainly analyzes the impact of the node cache capacity on the performance of the different caching methods. As the node cache capacity varies, the PCCM method shows clear advantages over the compared caching methods on all five evaluation metrics: cache hit rate, hop count reduction rate, content diversity, node-load Gini coefficient, and cache redundancy. Because the PCCM method takes both the regional division of cache nodes and cooperative caching into account, it improves the hit rate and diversity of cached content, reduces cache redundancy, balances the load, and reduces the delay users experience when accessing content.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the above-described embodiments and application fields, and the above-described embodiments are illustrative, instructive, and not restrictive. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto without departing from the scope of the invention as defined by the appended claims.

Claims (8)

1. A cooperative caching method based on partitions in an Information Center Network (ICN) comprises the following steps:
S100: partitioning cache nodes in the information center network according to how closely the network nodes are connected and to the load balance of the whole network, wherein the partitioning is carried out by adding a "community scale detection" step on the basis of a community discovery GN algorithm;
S200: taking the partitions as units, using a content centrality metric as the basis for selecting cache nodes within each partition, and cooperatively placing cached content in each partition;
wherein step S100 further comprises the following steps:
S101: dividing closely connected cache nodes into the same partition;
the content centrality measure in step S200 is defined as:
CC(v) = Σ_{c∈C} Σ_{u∈U} p_c · σ_v(u, c) / σ(u, c)
wherein σ_v(u, c) represents the number of shortest paths, passing through caching node v, along which user u requests content block c; σ(u, c) represents the total number of shortest paths along which user u requests content block c; CC(v) represents the content centrality of caching node v; C represents the set of content blocks; U represents the set of users; and p_c represents the probability of a user requesting content c.
2. The method of claim 1, wherein the caching node comprises a router having caching functionality.
3. The method of claim 2, wherein the router with cache function caches the passed content object through a cache policy, and simultaneously realizes routing and forwarding by inquiring and maintaining a content storage table CS, a pending interest table PIT and a forwarding information table FIB.
4. The method of claim 2, the step S200 further comprising the steps of:
S201: processing an interest packet;
S202: processing a data packet.
5. The method according to claim 4, step S201 further comprising the steps of:
S2010: initializing, for each partition traversed on the shortest path, the maximum content centrality value among the routers with cache space;
S2011: if a copy of the requested content is cached in the partition of the router with cache space accessed by the user, accessing within the partition; if no copy of the requested content is cached in that partition, accessing the server;
S2012: for each router with cache space on the shortest path from the user to the content source server, if the cache is hit, returning the requested content data; if the cache is not hit, obtaining the content centrality value of the traversed router with cache space in its partition;
S2013: if that content centrality value is larger than the recorded maximum, updating the maximum content centrality value for the corresponding partition traversed on the shortest path;
S2014: forwarding the interest packet to the next-hop router.
6. The method of claim 5, wherein intra-partition access in step S2011 specifically means that the interest packet is forwarded, along the shortest path from the accessed router with cache space, to the node in the partition that holds a copy of the requested content, and the requested content data is obtained from that node; and server access in step S2011 specifically means that the interest packet is forwarded along the shortest path from the accessed router with cache space to the content source server.
7. The method according to claim 4, wherein step S202 is specifically: judging whether the content centrality value of the traversed router with cache space matches the recorded maximum content centrality value for the partition to which that router belongs; if not, forwarding to the next-hop router; and if so, making a caching decision.
8. The method of claim 7, wherein the caching decision is specifically: judging whether a copy of the requested content is already cached in the partition where the router is located; if so, forwarding to the next-hop router; if not, judging whether the router's cache space is full; if the cache space is not full, directly caching the requested content and forwarding it to the next-hop router; and if the cache space is full, replacing cached content according to the LRU cache replacement strategy and then forwarding to the next-hop router, continuing until the data packet reaches the user who requested the content.
CN201910787735.6A 2019-08-26 2019-08-26 Partition-based cooperative caching method in information center network Active CN110365801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910787735.6A CN110365801B (en) 2019-08-26 2019-08-26 Partition-based cooperative caching method in information center network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910787735.6A CN110365801B (en) 2019-08-26 2019-08-26 Partition-based cooperative caching method in information center network

Publications (2)

Publication Number Publication Date
CN110365801A CN110365801A (en) 2019-10-22
CN110365801B true CN110365801B (en) 2021-12-17

Family

ID=68225380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910787735.6A Active CN110365801B (en) 2019-08-26 2019-08-26 Partition-based cooperative caching method in information center network

Country Status (1)

Country Link
CN (1) CN110365801B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110839166B (en) * 2019-11-19 2022-01-25 中国联合网络通信集团有限公司 Data sharing method and device
CN112702399B (en) * 2020-12-14 2022-04-19 中山大学 Network community cooperation caching method and device, computer equipment and storage medium
CN112751911B (en) * 2020-12-15 2022-10-21 北京百度网讯科技有限公司 Road network data processing method, device, equipment and storage medium
CN113225380B (en) * 2021-04-02 2022-06-28 中国科学院计算技术研究所 Content distribution network caching method and system based on spectral clustering
CN114710452B (en) * 2021-11-29 2023-09-08 河南科技大学 Multi-node negotiation information center network flow optimization control system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method
CN107835129A (en) * 2017-10-24 2018-03-23 重庆大学 Content center network fringe node potential energy strengthens method for routing
CN108965479A (en) * 2018-09-03 2018-12-07 中国科学院深圳先进技术研究院 A kind of domain collaboration caching method and device based on content center network
CN109905480A (en) * 2019-03-04 2019-06-18 陕西师范大学 Probability cache contents laying method based on content center

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9960999B2 (en) * 2015-08-10 2018-05-01 Futurewei Technologies, Inc. Balanced load execution with locally distributed forwarding information base in information centric networks
US10469373B2 (en) * 2017-05-05 2019-11-05 Futurewei Technologies, Inc. In-network aggregation and distribution of conditional internet of things data subscription in information-centric networking

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634231A (en) * 2013-12-02 2014-03-12 江苏大学 Content popularity-based CCN cache partition and substitution method
CN107835129A (en) * 2017-10-24 2018-03-23 重庆大学 Content center network fringe node potential energy strengthens method for routing
CN108965479A (en) * 2018-09-03 2018-12-07 中国科学院深圳先进技术研究院 A kind of domain collaboration caching method and device based on content center network
CN109905480A (en) * 2019-03-04 2019-06-18 陕西师范大学 Probability cache contents laying method based on content center

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Noor Abani, "Centrality-based Caching for Privacy in Information-Centric Networks", MILCOM 2016 Track 3 - Cyber Security and Trusted Computing, 2016-11-03, full text *

Also Published As

Publication number Publication date
CN110365801A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110365801B (en) Partition-based cooperative caching method in information center network
CN109905480B (en) Probabilistic cache content placement method based on content centrality
Dutta et al. Caching scheme for information‐centric networks with balanced content distribution
CN108366089B (en) CCN caching method based on content popularity and node importance
KR20140067881A (en) Method for transmitting packet of node and content owner in content centric network
Le et al. Social caching and content retrieval in disruption tolerant networks (DTNs)
CN108769252B (en) ICN network pre-caching method based on request content relevance
Wu et al. MBP: A max-benefit probability-based caching strategy in information-centric networking
CN109040163B (en) Named data network privacy protection caching decision method based on k anonymity
CN108965479B (en) Domain collaborative caching method and device based on content-centric network
CN112399485A (en) CCN-based new node value and content popularity caching method in 6G
Lal et al. A popularity based content eviction scheme via betweenness-centrality caching approach for content-centric networking (CCN)
Gui et al. A cache placement strategy based on entropy weighting method and TOPSIS in named data networking
Liu et al. A novel cache replacement scheme against cache pollution attack in content-centric networks
Aloulou et al. Taxonomy and comparative study of NDN forwarding strategies
Nguyen et al. Adaptive caching for beneficial content distribution in information-centric networking
Fan et al. Popularity and gain based caching scheme for information-centric networks
Mahananda et al. Performance of homogeneous and heterogeneous cache policy for named data network
Zhou et al. Popularity and age based cache scheme for content-centric network
Boddu et al. Improving data accessibility and query delay in cluster based cooperative caching (CBCC) in MANET using LFU-MIN
Saucez et al. Minimizing bandwidth on peering links with deflection in named data networking
Kim et al. Comprehensive analysis of caching performance under probabilistic traffic patterns for content centric networking
Chen et al. Gain-aware caching scheme based on popularity monitoring in information-centric networking
CN111917658B (en) Privacy protection cooperative caching method based on grouping under named data network
Gulati et al. AdCaS: Adaptive caching for storage space analysis using content centric networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant