CN113783779B - Hierarchical random caching method in named data network - Google Patents


Info

Publication number
CN113783779B
Authority
CN
China
Prior art keywords
data
cache
data packet
packet
router
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202111061590.5A
Other languages
Chinese (zh)
Other versions
CN113783779A (en)
Inventor
侯睿
沙莫
张成俊
金继欢
Current Assignee (the listed assignees may be inaccurate)
Wuhan Textile University
South Central Minzu University
Original Assignee
Wuhan Textile University
South Central University for Nationalities
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Wuhan Textile University and South Central University for Nationalities
Priority to CN202111061590.5A
Publication of CN113783779A
Application granted
Publication of CN113783779B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/742: Route cache; operation thereof
    • H04L 45/20: Hop count for routing purposes, e.g. TTL
    • H04L 45/22: Alternate routing
    • H04L 45/44: Distributed routing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a hierarchical random caching method in a named data network. An Interest-packet routing hop count field is added to the Interest packet, and a routing hop count field and a cache flag bit field are added to the Data packet. The routers on the transmission path are divided into several levels of cache routers; when the data-requesting node requests the same content at different times, the Data packet is read from a cache router at one level and cached at the next level of router. Data redundancy in the network is thereby reduced, the diversity of data content in the network is increased, and network performance is improved.

Description

Hierarchical random caching method in named data network
Technical Field
The invention relates to the field of distributed caching in computer networks, and in particular to a hierarchical random caching method in a Named Data Network (NDN).
Background
Named Data Networking (NDN) is a network architecture that takes named data as its primary communication object and is an instance of an information-centric network. In NDN, data is named and communication proceeds according to the name information carried in each packet, replacing the IP-address-based communication model of the current Internet; compared with TCP/IP networks, NDN is notably stronger in robustness, scalability, and related properties. Communication in NDN is driven by the content requester: the requester first sends a data request, and the data-publishing node returns the corresponding data according to that request. NDN uses two packet formats, the Interest packet and the Data packet. An Interest packet is a request packet carrying name information that a user sends to request data; a Data packet carries the actual data the user requires and bears the same name information as the corresponding Interest packet. All packets are forwarded hop by hop through routers, and each router maintains three table structures: the Forwarding Information Base (FIB), the Pending Interest Table (PIT), and the Content Store (CS). The CS stores data sent by data-publishing nodes; the PIT stores the name information of Interest packets the current node has already forwarded, together with the corresponding upstream interfaces; the FIB records the set of downstream interfaces to which an Interest packet with matching name information can be forwarded, and is used to forward the Interest packet to the next matching interface. If no interface matches, the Interest packet is discarded.
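The interaction among the three tables can be sketched in code. This is an illustrative sketch only; the class and method names are hypothetical, not from the patent or from any NDN library:

```python
# Sketch of an NDN router's three tables: Content Store (CS),
# Pending Interest Table (PIT), and Forwarding Information Base (FIB).
class NDNRouter:
    def __init__(self):
        self.cs = {}    # content name -> cached Data payload
        self.pit = {}   # content name -> set of upstream faces awaiting Data
        self.fib = {}   # name prefix  -> list of candidate downstream faces

    def on_interest(self, name, in_face):
        if name in self.cs:                  # CS hit: answer from the cache
            return ("data", self.cs[name])
        if name in self.pit:                 # aggregate a duplicate request
            self.pit[name].add(in_face)
            return ("aggregated", None)
        faces = self.fib.get(name)
        if not faces:                        # no matching interface: discard
            return ("drop", None)
        self.pit[name] = {in_face}           # record the upstream face
        return ("forward", faces[0])         # forward toward the publisher
```

A request that misses the CS and the PIT is forwarded via the FIB; a duplicate request is aggregated in the PIT; a cached name is answered directly.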
How Data packets are stored in an NDN is a research hotspot in the field. Currently proposed caching methods include the following. In ubiquitous caching (CEE), every router in the NDN caches the data content of each Data packet it receives; as the amount of content grows, this produces heavy data redundancy in the network and low data diversity, and when new Data packets arrive, frequent cache replacement drives down the network-wide cache hit rate and degrades network performance. In the cache copy (Leave Copy Down, LCD) method, the data content is cached at the immediate next-hop node below the router where the request hit on the return path; this caches frequently requested content at routers closer to the requesting node and improves network performance, but as the amount of requested content grows, data redundancy rises accordingly and cache replacement again lowers the network-wide cache hit rate. In probability-based caching (ProbCache), each router computes a caching probability for the data content and the content is cached at the router with the highest probability value; this reduces data redundancy, but the probabilistic choice is inherently random and poorly targeted at frequently requested content, so routers near the requesting node suffer a high cache-replacement rate and a low cache hit rate.
Although these approaches achieve the goal of caching data content, they fail to deliver a larger improvement in network performance. A caching method is therefore needed that reduces data redundancy in the network, improves data diversity, raises the network-wide cache hit rate, and lowers network delay.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a hierarchical random caching method in a named data network that effectively reduces data redundancy in the network, raises the network-wide cache hit rate, and improves network performance.
This aim is achieved by the following technical scheme:
the hierarchical random caching method in the named data network comprises the following steps:
step 1, when a data request node sends an interest packet, an interest packet routing hop count field IntPassHop is added to the interest packet; when a data publishing node sends a data packet, a data packet routing hop count field DataPassHop and a data packet cache flag bit field CacheTag are added to the data packet;
step 2, the routers on the transmission path are divided into three levels: first-level, second-level, and third-level cache routers; the total number of routers on the path is T, the number of first-level cache routers is denoted C1, the number of second-level cache routers C2, and the number of third-level cache routers C3;
step 3, when the data-requesting node requests a Data packet with content name data1 for the first time, the data-publishing node encapsulates the corresponding data into a Data packet, sends it into the named data network, and randomly assigns a value to the packet's cache flag field CacheTag. Each time the Data packet passes through a router, the CacheTag field value is decreased by one. When CacheTag reaches 0, the Data packet has arrived at a third-level cache router and is cached in the current cache router; at the moment of caching, the field value M of the routing hop count field DataPassHop equals the number of hops the packet has travelled from the data-publishing node. After caching, the Data packet continues to be forwarded to the data-requesting node and is not cached at any other router.
When the node requests the Data packet named data1 for the second time, the packet is sent from the current cache router with a new random CacheTag value. Again the CacheTag value is decreased by one at each router; when it reaches 0, the packet has arrived at a second-level cache router and is cached in the current cache router. At the moment of caching, the field value M of DataPassHop equals the number of hops travelled from the data-publishing node. After caching, the packet continues to the data-requesting node and is not cached at any other router;
When the node requests data1 for the third time, the packet is sent from the current cache router with a new random CacheTag value; the value is decreased by one at each router, and when it reaches 0, the packet has arrived at a first-level cache router and is cached in the current cache router, with M again recording the hops travelled from the data-publishing node. After caching, the packet continues to the data-requesting node and is not cached at any other router;
when the node requests data1 for the fourth time: if the current cache router is the router closest to the data-requesting node, it forwards the packet directly to that node. Otherwise the packet is sent from the current cache router with CacheTag = T - M; the value is decreased by one at each router, and when CacheTag reaches 0 the packet is cached at the router closest to the data-requesting node, with M again recording the hops travelled at caching time. After caching, the packet continues to be forwarded to the data-requesting node.
In step 2 as described above, the total number of routers in the current transmission path is T; the number of first-level cache routers in the current transmission path is C1 = floor(T/3); the number of second-level cache routers is C2 = floor(T/3); and the number of third-level cache routers is C3 = floor(T/3) + T % 3, where floor() rounds down and T % 3 is the remainder of T divided by 3.
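A minimal sketch of this tier-size rule (the function name is hypothetical); the third tier absorbs the remainder, so C1 + C2 + C3 always equals T:

```python
def tier_sizes(T):
    """Split T path routers into the three cache tiers of step 2."""
    c1 = T // 3              # first-level cache routers: floor(T/3)
    c2 = T // 3              # second-level cache routers: floor(T/3)
    c3 = T // 3 + T % 3      # third-level cache routers absorb T % 3
    return c1, c2, c3
```

For example, a 10-router path splits into tiers of 3, 3, and 4 routers.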
In step 3 as described above, the field value of the packet cache flag bit field CacheTag is set to K, chosen according to M as follows:
K∈(0,C3], M=0
K∈[C3-M+1,C2+C3-M], 0<M≤C3
K∈[C2+C3-M+1,T-M],C3<M≤C2+C3
K=T-M,C2+C3<M≤T
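The piecewise rule above can be sketched as a small helper. This is an illustrative sketch; the function and its signature are assumptions, not part of the patent:

```python
import random

def choose_cache_tag(M, C2, C3, T):
    """Pick K (the CacheTag value) from the interval dictated by M,
    the DataPassHop value of the node that resends the Data packet."""
    if M == 0:                        # first request:  K in (0, C3]
        return random.randint(1, C3)
    if M <= C3:                       # second request: K in [C3-M+1, C2+C3-M]
        return random.randint(C3 - M + 1, C2 + C3 - M)
    if M <= C2 + C3:                  # third request:  K in [C2+C3-M+1, T-M]
        return random.randint(C2 + C3 - M + 1, T - M)
    return T - M                      # fourth request: deterministic K = T-M
```

Each successive request pushes the cached copy one tier closer to the data-requesting node.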
Compared with the caching methods currently adopted in NDN, the method of the invention has the following advantages:
by classifying the routers on the transmission path into levels and defining the CacheTag field value in the Data packet, the specific cache position of each Data packet on the path is determined, and the same data content is cached at most three times along the path. Data redundancy in the network is thereby reduced, the diversity of data content in the network is increased, and network performance is improved.
Drawings
Fig. 1 shows the formats of the Interest packet and the Data packet according to the invention.
Fig. 2 is a schematic diagram of the router hierarchy on a transmission path according to the invention (the number of levels can be extended beyond three).
Fig. 3 compares the cache hit ratio of the four methods (the ubiquitous caching method, the probabilistic caching method, the cache copy method, and the hierarchical random caching method of the invention) when the cache capacity ratio R = C/N = 0.01 and the Zipf coefficient α = 0.7.
Fig. 4 compares the average routing hop count of the four methods when R = C/N = 0.01 and α = 0.7.
Fig. 5 compares the average request delay of the four methods when R = C/N = 0.01 and α = 0.7.
Fig. 6 compares the cache hit rate of the four methods when R = C/N varies over (0.01, 0.05) and α = 0.7.
Fig. 7 compares the cache hit rate of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4).
Fig. 8 compares the average routing hop count of the four methods when R = C/N varies over (0.01, 0.05) and α = 0.7.
Fig. 9 compares the average routing hop count of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4).
Fig. 10 compares the average request delay of the four methods when R = C/N varies over (0.01, 0.05) and α = 0.7.
Fig. 11 compares the average request delay of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4).
Detailed Description
To help those of ordinary skill in the art understand and practice the invention, it is described in further detail below with reference to examples; it should be understood that the embodiments are illustrative and are not to be construed as limiting the invention.
The hierarchical random caching method in the named data network comprises the following steps:
Step 1: modify the formats of the Interest packet and the Data packet.
The data-requesting node sends an Interest packet into the named data network; an Interest-packet routing hop count field IntPassHop is added to the Interest packet to record the number of hops the Interest packet has travelled. When the data-publishing node sends a Data packet, a routing hop count field DataPassHop and a cache flag bit field CacheTag are added to the Data packet; CacheTag serves as the cache flag field and its initial value is 0.
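The modified packet formats of step 1 might be modelled as follows. This is a sketch: the Python classes and attribute names are hypothetical, while the field names IntPassHop, DataPassHop, and CacheTag follow the patent:

```python
from dataclasses import dataclass

@dataclass
class InterestPacket:
    name: str
    int_pass_hop: int = 0      # IntPassHop: hops travelled so far

@dataclass
class DataPacket:
    name: str
    content: bytes
    data_pass_hop: int = 0     # DataPassHop: hops from the publisher
    cache_tag: int = 0         # CacheTag: initial value 0; decremented
                               # each hop, cache where it reaches 0
```

Both counters start at 0 and are updated hop by hop as the packets traverse the path.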
Step 2: classify the routers on the transmission path into levels.
the routers on the transmission path are divided into three levels, namely a first-level cache router, a second-level cache router and a third-level cache router.
From the field value L of the Interest packet's routing hop count field IntPassHop and the field value M of the DataPassHop field in the corresponding Data packet stored at the router, the following quantities are computed:
total number of routers in the current transmission path: T = L + M;
number of first-level cache routers in the current transmission path: C1 = floor(T/3);
number of second-level cache routers in the current transmission path: C2 = floor(T/3);
number of third-level cache routers in the current transmission path: C3 = floor(T/3) + T % 3;
where floor() rounds down and T % 3 is the remainder of T divided by 3.
The value of the packet cache tag bit field CacheTag is set to K, and the K value calculation formula is:
K∈(0,C3], M=0
K∈[C3-M+1,C2+C3-M], 0<M≤C3
K∈[C2+C3-M+1,T-M],C3<M≤C2+C3
K=T-M,C2+C3<M≤T
step 3, forwarding the data packet;
After receiving an Interest packet, the data-publishing node encapsulates the corresponding data content into a Data packet and sends it into the named data network. The Data packet contains a Content Name field, the data content Data, the routing hop count field DataPassHop, and the cache flag bit field CacheTag, and it is returned to the data-requesting node along the reverse of the path the Interest packet traversed.
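The per-hop handling of a Data packet in step 3 can be sketched as a single function. This is illustrative; the dict-based packet representation and the function name are assumptions:

```python
def forward_one_hop(pkt):
    """Apply one router hop to a Data packet represented as a dict with
    'cache_tag' and 'data_pass_hop' keys. Returns True if the current
    router should cache the packet (CacheTag just reached 0)."""
    pkt["data_pass_hop"] += 1          # one more hop from the publisher
    if pkt["cache_tag"] > 0:
        pkt["cache_tag"] -= 1          # decrement the cache flag field
        return pkt["cache_tag"] == 0   # cache exactly where it hits 0
    return False                       # already cached upstream: pass through
```

Once CacheTag has reached 0 and the copy is stored, later routers on the path see CacheTag = 0 and do not cache again.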
When the data-requesting node requests a Data packet with content name data1 for the first time, the data-publishing node encapsulates the corresponding data into a Data packet, sends it into the named data network, and randomly assigns a value to the cache flag field CacheTag, with assignment range (0, C3]. Each time the packet passes through a router, the CacheTag value is decreased by one; when CacheTag equals 0, the packet is cached at the router it has just reached, router A (the current cache router). At the moment of caching, the field value M of DataPassHop equals the number of hops the packet has travelled from the data-publishing node. After caching, the packet continues to be forwarded to the data-requesting node and is not cached at any other router.
When the node requests the Data packet named data1 for the second time, the packet is sent from the current cache router, router A, with a new random CacheTag value drawn from [C3 - M + 1, C2 + C3 - M]. The value is decreased by one at each router; when CacheTag equals 0, the packet is cached at router B (the new current cache router), M is updated to the number of hops travelled from the data-publishing node, and the packet continues to the data-requesting node without further caching.
When the node requests data1 for the third time, the packet is sent from router B with a random CacheTag value drawn from [C2 + C3 - M + 1, T - M]. The value is decreased by one at each router; when CacheTag equals 0, the packet is cached at router C (the current cache router), M is updated, and the packet continues to the data-requesting node without further caching.
When the node requests data1 for the fourth time: if the current cache router C is already the router closest to the data-requesting node, router C forwards the packet to that node and refreshes its stored copy of data1. Otherwise the packet is sent from router C with CacheTag = T - M; the value is decreased by one at each router, and when CacheTag equals 0 the packet is cached at router D, the router closest to the data-requesting node, M is updated to the hops travelled from the data-publishing node, and the packet is forwarded on to the data-requesting node.
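The four-request walkthrough above can be traced end to end on a small example path. The sketch below fixes T = 9 (so C1 = C2 = C3 = 3, with routers numbered 1 to 9 from publisher to requester) and uses fixed CacheTag values inside the legal ranges instead of random ones so the trace is deterministic; the `deliver` helper is hypothetical:

```python
def deliver(path_len, start_hop, cache_tag):
    """Forward a Data packet from router index start_hop toward the
    requester; each hop decrements CacheTag, and the packet is cached
    where it reaches 0 (or at the path's end). Returns the caching
    router's index, which equals DataPassHop M at caching time."""
    hop, tag = start_hop, cache_tag
    while tag > 0 and hop < path_len:
        hop += 1
        tag -= 1
    return hop

T = 9
# 1st request: publisher (hop 0) sends; K in (0, C3] = (0, 3] -> tier 3
hop1 = deliver(T, 0, 2)            # K = 2 -> cached at router 2
# 2nd request: M = 2; K in [C3-M+1, C2+C3-M] = [2, 4] -> tier 2
hop2 = deliver(T, hop1, 3)         # K = 3 -> cached at router 5
# 3rd request: M = 5; K in [C2+C3-M+1, T-M] = [2, 4] -> tier 1
hop3 = deliver(T, hop2, 3)         # K = 3 -> cached at router 8
# 4th request: M = 8; K = T - M = 1 -> router 9, next to the requester
hop4 = deliver(T, hop3, T - hop3)
print(hop1, hop2, hop3, hop4)      # 2 5 8 9
```

Each request moves the cached copy one tier closer to the requester, so the same content occupies at most one new cache slot per request.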
Finally, the performance of the method (HRC) is evaluated by simulation. The simulation platform is ndnSIM, an NDN simulation module written in C++ on top of the ns-3 network simulator that implements the CCNx protocol, providing the basic network protocol, routing and forwarding strategies, and per-node Data-packet caching. The network topology comprises 10 data-requesting nodes, one data-publishing node, 489 routers, and 10 links. Each link is set to a bandwidth of 10 Gbps and a delay of 10 seconds; the cache capacity ratio ranges over (0.01, 0.05), and the Zipf distribution parameter is α = 0.7.
Fig. 3, Fig. 4, and Fig. 5 compare the cache hit rate, average routing hop count, and average request delay of the four methods when R = C/N = 0.01 and α = 0.7. Under the same conditions, the proposed method (HRC) has the highest cache hit rate, and its average routing hop count and average request delay are lower than those of the other three caching methods.
Fig. 6 compares the cache hit rates of the four methods (the ubiquitous caching method, the probabilistic caching method, the cache copy method, and the proposed hierarchical random caching method) when R = C/N varies over (0.01, 0.05) and α = 0.7. As the cache capacity ratio R increases, the cache hit rates of all four methods improve; the proposed HRC method has the highest hit rate and the largest gain across the whole range.
Fig. 7 compares the cache hit rates of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4). The hit rates of all four methods increase with the Zipf parameter. The LCD method's hit rate falls below that of ProbCache only at α = 0.5, and the proposed HRC method holds a clear advantage over the other three methods across the whole range of α.
Fig. 8 compares the average routing hop counts of the four methods when R = C/N varies over (0.01, 0.05) and α = 0.7. As R increases, the network-wide average hop count of all four methods trends downward: larger router caches increase data diversity in the network, so the average hop count falls. The proposed HRC method shows the largest reduction and the lowest average hop count across the whole range.
Fig. 9 compares the average routing hop counts of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4). As the Zipf parameter α increases, the average hop counts of the four methods converge, and the HRC method's average hop count remains smaller than those of the other three caching methods throughout.
Fig. 10 compares the average request delays of the four methods when R = C/N varies over (0.01, 0.05) and α = 0.7. As the cache capacity ratio increases, the average request delay of all four methods drops markedly; the proposed HRC method drops the most and remains the lowest across the whole range of R.
Fig. 11 compares the average request delays of the four methods when R = C/N = 0.01 and α varies over (0.4, 1.4). The average request delays of all four methods decrease and converge as α increases. At α = 0.4 the proposed HRC method already has the smallest delay, and its average request delay remains smaller than those of the other three caching methods across the whole range of α.
In summary, the technical scheme of the invention classifies the routers in the NDN into levels, extends the formats of the Interest and Data packets to record the number of hops each has travelled, and then determines, from the CacheTag field value in the Data packet, on which router of the transmission path the data content is cached. The NDN can thus cache more data content with a limited number of routers, improving data diversity in the network and overall network performance.
The foregoing description of the embodiments merely illustrates the spirit of the invention and does not limit its scope; those skilled in the art may make various modifications, additions, and substitutions to the described embodiments without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (2)

1. The hierarchical random caching method in the named data network is characterized by comprising the following steps:
step 1, when a data request node sends an interest packet, adding an interest packet routing hop count field in the interest packet, and when a data publishing node sends a data packet, adding a data packet routing hop count field and a data packet cache flag bit field in the data packet;
step 2, the routers on the transmission path are divided into three levels: first-level, second-level, and third-level cache routers; the total number of routers on the path is T, the number of first-level cache routers is denoted C1, the number of second-level cache routers C2, and the number of third-level cache routers C3;
step 3, when the data request node requests a data packet with the content name of data1 for the first time, the data publishing node packages the corresponding data information into a data packet and sends the data packet to the named data network, randomly assigning values to the packet cache flag field, wherein the random assignment range of the packet cache flag field CacheTag is (0, C3), the field value of the packet cache flag field is reduced by one every time a packet passes through a router during transmission, when the bit field of the data packet buffer flag bit is reduced to 0, the data packet arrives at the third-level buffer router, when the bit field of the data packet buffer flag bit is 0, the data packet is buffered in the current buffer router, when the data packet is cached, the field value M of the data packet routing hop number field is the number of the routes which are already jumped from the data publishing node when the data packet is cached, the data packet is continuously forwarded to the data requesting node after the caching is finished, and the caching operation is not performed on other routers any more;
when the data request node requests the data packet named data1 for the second time, the data packet is sent from the current cache router, and the cache flag bit field CacheTag is randomly assigned a value within the range (C3-M+1, C2+C3-M), where 0 < M ≤ C3; each time the data packet passes through a router during transmission, the cache flag bit field is decreased by one; when the cache flag bit field reaches 0, the data packet has arrived at a second-level cache router and is cached in that router; at the time of caching, the field value M of the routing hop count field equals the number of routers the data packet has already traversed from the data publishing node; after caching is finished, the data packet continues to be forwarded to the data request node, and no caching operation is performed at any other router;
when the data request node requests the data packet named data1 for the third time, the data packet is sent from the current cache router, and the cache flag bit field CacheTag is randomly assigned a value within the range (C2+C3-M+1, T-M), where C3 < M ≤ C2+C3; each time the data packet passes through a router during transmission, the cache flag bit field is decreased by one; when the cache flag bit field reaches 0, the data packet has arrived at a first-level cache router and is cached in that router; at the time of caching, the field value M of the data packet routing hop count field equals the number of routers the data packet has already traversed from the data publishing node; after caching is finished, the data packet continues to be forwarded to the data request node, and no caching operation is performed at any other router;
when the data request node requests the data packet named data1 for the fourth time, if the current cache router is the router closest to the data request node, the current cache router forwards the data packet directly to the data request node; if the current cache router is not the router closest to the data request node, the data packet is sent from the current cache router with the cache flag bit field assigned as CacheTag = T-M, where C2+C3 < M ≤ T; each time the data packet passes through a router during transmission, the cache flag bit field is decreased by one; when the cache flag bit field reaches 0, the data packet is cached in the router closest to the data request node; at the time of caching, the field value M of the data packet routing hop count field equals the number of routers the data packet has already traversed from the data publishing node; after caching is finished, the data packet continues to be forwarded to the data request node.
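The four-request progression of claim 1 can be sketched as a small simulation. This is an illustrative reading of the claim, not an implementation from the patent: the helper name `cache_position`, the concrete values T = 10 and the random seed are assumptions, and each range (a, b) is interpreted as the integers a+1 through b, consistent with the tag reaching 0 at the target tier.

```python
import random

def cache_position(request_count, M, T, C2, C3):
    """CacheTag value chosen when the data packet leaves its current
    source (publisher or caching router), following the per-request
    ranges of claim 1. Hypothetical helper; names follow the claim."""
    if request_count == 1:
        return random.randint(1, C3)                    # range (0, C3]: third tier
    if request_count == 2:
        return random.randint(C3 - M + 1, C2 + C3 - M)  # second tier
    if request_count == 3:
        return random.randint(C2 + C3 - M + 1, T - M)   # first tier
    return T - M  # fourth request: router adjacent to the requester

# Simulate four successive requests for "data1" along a path of T routers.
T = 10
C1 = C2 = T // 3        # 3 first-level and 3 second-level cache routers
C3 = T // 3 + T % 3     # 4 third-level cache routers (remainder absorbed)
M = 0                   # hops already taken from the publisher
random.seed(1)          # reproducible run
for req in range(1, 5):
    tag = cache_position(req, M, T, C2, C3)
    M += tag  # CacheTag decrements once per hop; the copy is cached
              # where it reaches 0, i.e. tag hops further downstream
    print(f"request {req}: CacheTag={tag}, cached copy now at hop {M}/{T}")
```

Each request moves the cached copy one tier closer to the requester; after the fourth request the copy always sits at the last of the T on-path routers (the fourth-request tag may be 0 if it is already there).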
2. The method according to claim 1, characterized in that in step 2, the total number of routers in the current transmission path is T; the number of first-level cache routers in the current transmission path is C1 = floor(T/3); the number of second-level cache routers in the current transmission path is C2 = floor(T/3); the number of third-level cache routers in the current transmission path is C3 = floor(T/3) + T%3, where floor() is the round-down (floor) function and T%3 is the remainder of T divided by 3.
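The tier split of claim 2 can be sketched directly, with Python's integer division standing in for floor(); the function name `tier_sizes` is an assumption for illustration:

```python
def tier_sizes(T):
    """Split the T on-path routers into three cache tiers per claim 2:
    the first and second tiers each get floor(T/3) routers; the third
    tier gets floor(T/3) plus the remainder T % 3, covering all T."""
    C1 = T // 3
    C2 = T // 3
    C3 = T // 3 + T % 3
    return C1, C2, C3

for T in (9, 10, 11):
    C1, C2, C3 = tier_sizes(T)
    assert C1 + C2 + C3 == T  # the split always covers the whole path
    print(T, (C1, C2, C3))
```

Assigning the remainder to the third tier biases spare capacity toward the routers nearest the data publishing node, where the first copy of a content is cached.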
CN202111061590.5A 2021-09-10 2021-09-10 Hierarchical random caching method in named data network Active CN113783779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111061590.5A CN113783779B (en) 2021-09-10 2021-09-10 Hierarchical random caching method in named data network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111061590.5A CN113783779B (en) 2021-09-10 2021-09-10 Hierarchical random caching method in named data network

Publications (2)

Publication Number Publication Date
CN113783779A CN113783779A (en) 2021-12-10
CN113783779B true CN113783779B (en) 2022-06-28

Family

ID=78842360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111061590.5A Active CN113783779B (en) 2021-09-10 2021-09-10 Hierarchical random caching method in named data network

Country Status (1)

Country Link
CN (1) CN113783779B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025020B (en) * 2022-01-06 2022-04-22 South Central Minzu University Named data network caching method based on dichotomy
CN114257654B (en) * 2022-02-28 2022-05-20 South Central Minzu University Named data network sequential caching method based on hierarchical idea
CN114828079B (en) * 2022-03-21 2024-05-24 Central South University Efficient NDN multi-source multi-path congestion control method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101223A (en) * 2016-06-12 2016-11-09 Beijing University of Posts and Telecommunications Caching method based on content popularity and node-rank matching
CN111314224A (en) * 2020-02-13 2020-06-19 Institute of Computing Technology, Chinese Academy of Sciences Network caching method for named data
CN111935031A (en) * 2020-06-22 2020-11-13 Beijing University of Posts and Telecommunications NDN architecture-based traffic optimization method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1839172A2 (en) * 2004-12-08 2007-10-03 B-Obvious Ltd. Bidirectional data transfer optimization and content control for networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101223A (en) * 2016-06-12 2016-11-09 Beijing University of Posts and Telecommunications Caching method based on content popularity and node-rank matching
CN111314224A (en) * 2020-02-13 2020-06-19 Institute of Computing Technology, Chinese Academy of Sciences Network caching method for named data
CN111935031A (en) * 2020-06-22 2020-11-13 Beijing University of Posts and Telecommunications NDN architecture-based traffic optimization method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cooperative caching and routing mechanism based on regional centralized control in content-centric networking; Liu Guicai et al.; Application Research of Computers; 2017-03-15; Vol. 35 (No. 02); full text *

Also Published As

Publication number Publication date
CN113783779A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113783779B (en) Hierarchical random caching method in named data network
CN106101223B (en) Caching method based on content popularity and node-rank matching
JP3591420B2 (en) Cache table management device and program recording medium in router
CN104753797B (en) A kind of content center network dynamic routing method based on selectivity caching
CN109905480B (en) Probabilistic cache content placement method based on content centrality
Li et al. A chunk caching location and searching scheme in content centric networking
CN105049254B (en) Data buffer storage replacement method based on content rating and popularity in a kind of NDN/CCN
CN108900570B (en) Cache replacement method based on content value
KR20140067881A (en) Method for transmitting packet of node and content owner in content centric network
CN108366089B (en) CCN caching method based on content popularity and node importance
CN111107000B (en) Content caching method in named data network based on network coding
CN106911574B (en) Name data network multiple constraint routing algorithm based on population
CN105262833B (en) A kind of the cross-layer caching method and its node of content center network
CN111935031B (en) NDN architecture-based traffic optimization method and system
CN112399485A (en) CCN-based new node value and content popularity caching method in 6G
CN111294394A (en) Adaptive caching strategy based on complex network intersection point
CN111314223A (en) Routing interface ranking-based forwarding method in NDN (named data networking)
CN114025020B (en) Named data network caching method based on dichotomy
CN109818855B (en) Method for obtaining content by supporting pipeline mode in NDN (named data networking)
CN111262785B (en) Multi-attribute probability caching method in named data network
CN107135271B (en) Energy-efficient content-centric network caching method
CN113382053B (en) Content active pushing method based on node semi-local centrality and content popularity
Alahmadi A New Efficient Cache Replacement Strategy for Named Data Networking
CN112822275B (en) Lightweight caching strategy based on TOPSIS entropy weight method
Gulati et al. AdCaS: Adaptive caching for storage space analysis using content centric networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant