WO2017077363A1 - Selective caching for information-centric network based content delivery - Google Patents
- Publication number
- WO2017077363A1 (PCT/IB2015/058505)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- metric
- influence
- interest
- nodes
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Definitions
- the present disclosure relates to communication networks, and in particular, to a method, node and system for selective caching of objects at one or more nodes in an information-centric network (ICN).
- IP networks are based on mapping from what a user requests to where the requested content is located. In general, an IP packet contains two identifiers, namely a source address and a destination address. These addresses represent the network's "where": nodes along the IP network path forward the packet based on these addresses.
- These existing IP networks are host-centric, as connectivity is based on end-to-end principles that name hosts, i.e., addresses.
- These IP networks are not without issues. For example, fast, reliable content access requires awkward, pre-planned, application-specific mechanisms such as Content Distribution Networks (CDNs) and Peer-to-Peer (P2P) networks. There is also a location dependency: content is mapped to host locations, which complicates network configuration.
- In a Content-Centric Network (CCN), there are two types of packets: interest packets and data packets.
- An interest packet indicates that a consumer is interested in certain content; any CCN node that receives the interest packet and holds the requested content can respond with a data packet carrying that content.
- interest packets request content while data packets are the response to the request.
- CCN routing may be performed hop-by-hop using longest-prefix matching on the content name.
- the content/data follows the reverse path of the corresponding interest packet.
- A CCN name could, for example, denote the third page of the "Real-Time cloud" whitepaper released by a company.
- A prefix in the CCN name is called the routable name, and should be known to enough routers so as to be routable in the network. In this case, the routable name would likely be "/company", and the remainder of the name would be interpreted on-site.
- Data satisfies the interest packet (interest) if the content name in the interest packet is a prefix of the content name in the data packet.
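As a sketch, the prefix-satisfaction rule above can be expressed as a component-wise comparison of hierarchical names (illustrative Python; the function name and name encoding are assumptions, not part of the disclosure):

```python
def satisfies(interest_name: str, data_name: str) -> bool:
    """An interest is satisfied if its name is a prefix of the data name.

    Names are hierarchical, '/'-delimited components; the comparison is
    component-wise, not a raw string prefix check.
    """
    interest_parts = [p for p in interest_name.split("/") if p]
    data_parts = [p for p in data_name.split("/") if p]
    return data_parts[:len(interest_parts)] == interest_parts

print(satisfies("/company/whitepapers", "/company/whitepapers/page3"))  # True
print(satisfies("/company/whitepapers", "/company/white"))              # False
```

Component-wise matching matters here: "/company/white" would wrongly satisfy "/company/whitepapers" under a plain string-prefix test.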
- CCN nodes such as forwarders and routers hold more state information than IP routers. These CCN nodes hold a Forwarding Information Base (FIB), which they use for forwarding, e.g., to match {CCN name (or prefix), ingress port(s) -> egress port(s)}.
- In addition to the FIB, the CCN node must be able to perform reverse-path routing for data packets, i.e., content object packets. This means that the CCN node must track which requests have been forwarded, and from which interface(s) to which interface(s). This state is kept in a table called the Pending Interest Table (PIT).
- the CCN node checks the CCN name against the PIT in order to determine on which interface(s) to route the data packet.
- The CCN nodes are allowed to cache data packets locally in a table called a content store, subject to local policy.
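The interplay of content store, PIT and FIB described above might be sketched as follows (a simplified, hypothetical model; real CCN forwarders are considerably more involved, and all names here are illustrative):

```python
def longest_prefix(name, fib):
    """Return the longest FIB prefix matching the name, or None."""
    parts = [p for p in name.split("/") if p]
    for i in range(len(parts), 0, -1):
        candidate = "/" + "/".join(parts[:i])
        if candidate in fib:
            return candidate
    return None

class CCNNode:
    def __init__(self):
        self.content_store = {}  # name -> data (local cache, policy-driven)
        self.pit = {}            # name -> set of ingress faces awaiting data
        self.fib = {}            # name prefix -> set of egress faces

    def on_interest(self, name, in_face):
        if name in self.content_store:          # cache hit: answer directly
            return ("data", in_face, self.content_store[name])
        self.pit.setdefault(name, set()).add(in_face)  # record reverse path
        out_faces = self.fib.get(longest_prefix(name, self.fib))
        return ("forward", out_faces, name)

    def on_data(self, name, data, should_cache):
        faces = self.pit.pop(name, set())       # reverse-path routing via PIT
        if should_cache:                        # local caching policy decision
            self.content_store[name] = data
        return faces                            # faces to send the data on
```

The `should_cache` flag is where a selective caching policy, such as the influence process of this disclosure, would plug in.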
- Items or content are typically cached based on a geographical and physical-connectivity measure of closeness, such that content is cached close to the edge.
- However, this geographical notion of topology for deciding to cache content at edge nodes does not optimize resource usage, because caching based on geography alone may still be inefficient and wasteful.
- a node is provided.
- the node is configured to communicate with a plurality of other nodes.
- the node includes processing circuitry including a processor and a memory.
- the memory contains instructions that, when executed by the processor, configure the processor to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric.
- the influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
- the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes.
- the interest metric of a respective node is based on a number of faces of the respective node that have received a request for an object.
- the interest metric of a respective node is further based on a total number of faces of the respective node.
- the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
- the memory contains further instructions that, when executed by the processor, configure the processor to compare an interest metric of the node with an interest metric of each of the plurality of other nodes, and determine the influence metric of the node based on the comparison.
- the interest metric is further based on a similarity metric of a respective node, the similarity metric indicating a measure of similarity between the object and at least one other object cached in memory.
- the memory contains further instructions that, when executed by the processor, cause the processor to classify the object and the at least one other object in an expander graph.
- the similarity metric is based on a Euclidean distance in the expander graph between the object and the at least one other object.
- the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph.
- the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data.
- the memory contains further instructions that, when executed by the processor, configure the processor to compare the influence metric of the node with a predefined threshold. The determination to cache the object in memory is based on the influence metric meeting the predefined threshold.
- the node is a Content-Centric Network (CCN) node.
- the object is one of a content object and service instance.
- a method for a node configured to communicate with a plurality of other nodes is provided.
- An influence metric for the node is determined.
- the influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
- determination is made to cache the object at the node based at least in part on the determined influence metric.
- the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes, the interest metric of a respective node being based on a number of faces of the respective node that have received a request for an object.
- the interest metric of a respective node is further based on a total number of faces of the respective node.
- the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
- an interest metric of the node is compared with an interest metric of each of the plurality of other nodes, the influence metric of the node is determined based on the comparison.
- the interest metric is further based on a similarity metric of a respective node.
- the similarity metric indicates a measure of similarity between the object and at least one other object.
- the object and the at least one other object are classified in an expander graph.
- the similarity metric is based on a Euclidean distance in the expander graph between the object and the at least one other object.
- the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph.
- the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data.
- the influence metric of the node is compared with a predefined threshold. The determination to cache the object at the node is based on the influence metric meeting the predefined threshold.
- the node is a Content-Centric Network (CCN) node.
- the object is one of a content object and service instance.
- a node includes a face module configured to communicate with a plurality of other nodes.
- the node includes a processing module configured to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric.
- the influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
- a computer readable storage medium stores executable instructions, which when executed by a processor, cause the processor to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric.
- the influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
- FIG. 1 is a block diagram of an exemplary system for caching objects at one or more nodes in accordance with the principles of the disclosure
- FIG. 2 is a block diagram of an exemplary node for determining whether to cache objects in accordance with the principles of the disclosure
- FIG. 3 is a flow diagram of an exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure
- FIG. 4 is a flow diagram of another exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure
- FIG. 5 is a flow diagram of yet another exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure
- FIG. 6 is a block diagram of an exemplary expander graph for determining the influence of caching objects in accordance with the principles of the disclosure
- FIG. 7 is an exemplary block diagram of one embodiment of the system for caching objects in accordance with the principles of the disclosure.
- FIG. 8 is an exemplary block diagram of another embodiment of a node for determining whether to cache objects in accordance with the principles of the disclosure
- FIG. 9 is an exemplary block diagram of a request drop policy procedure for forwarding interest packets in accordance with the disclosure.
- FIG. 10 is an exemplary block diagram of a request drop policy procedure for forwarding data packets in accordance with the disclosure.
- relational terms such as “first,” “second,” “top” and “bottom,” and the like, may be used solely to distinguish one entity or element from another entity or element without necessarily requiring or implying any physical or logical relationship or order between such entities or elements.
- the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts described herein.
- the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
- the joining term, "in communication with” and the like may be used to indicate electrical or data communication, which may be accomplished by physical contact, induction, electromagnetic radiation, radio signaling, infrared signaling or optical signaling, for example.
- the method(s), node(s) and system(s) described herein advantageously provide for selective caching of objects at one or more nodes based on an information topology.
- Information topology represents a higher-level connectivity of objects, such as physical, logical and/or semantic relationships between objects, so that the caching decision can identify objects that have a potential chance of being requested by other nodes.
- The instant disclosure is able to determine whether caching objects at one or more nodes is an efficient use of node resources. This approach seeks optimal placement of objects in order to increase their availability to potential requesters while using minimal resources.
- the caching decision is based on an influence of a node in a certain context that is related to the type of object and centrality of the node.
- the influence of a node indicates a level of diversity of nodes that are interested in an object as well as the extent of the interest.
- the influence may indicate an influence or impact that caching an object at the node will have on that node and/or the plurality of other nodes.
- the influence (i.e., influence metric) that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of a graph such as a level of connectivity between nodes, centrality and/or other expansion properties of the graph as described herein.
- the influence (i.e., influence metric) that caching the object at the node will have on the plurality of other nodes is a relative measure of the likelihood that caching a particular object will satisfy a current or future interest of other nodes in the network.
- the influence of a node on the other nodes can be estimated/calculated/inferred with respect to an object such that a larger influence metric value indicates a high probability that caching the object will anticipate forthcoming requests for similar content.
- System 10 includes one or more nodes 12a-12n (collectively referred to as “node 12") that are in communication with each other via one or more networks (not shown).
- nodes 12 are Content-Centric Nodes (CCNs) and/or Information-Centric Nodes (ICNs). While the disclosure is described with respect to CCN/ICN technology, the disclosure is equally applicable to other network types that use routing tables.
- Each node 12 receives respective interest packets 14a-14n (collectively referred to as interest 14) from one or more devices, nodes and/or other elements of system 10 that are configured to transmit an interest packet to node 12 that requests one or more objects.
- "Object" refers to content chunk(s), i.e., content object(s) or content data, and/or a service instance with active functionality, such as text and multimedia files, software, and streaming audio and video, among other types of content.
- Node 12 includes one or more faces 16a-16n (collectively referred to as face 16).
- Node 12 includes memory 18 for storing various data, objects and programmatic code, as described herein and with respect to FIG. 2.
- memory 18 includes influence code 20 for performing the influence process that determines whether to cache one or more objects at node 12, as described in detail with respect to FIGS. 3-5.
- FIG. 2 is a block diagram of an exemplary configuration of node 12.
- Node 12 includes one or more faces 16 for receiving and transmitting packets such as interest packets and data packets. Faces 16 are not limited to solely receiving and transmitting interest and data packets. Other packets can be sent and received via faces 16.
- Node 12 includes one or more processors 22 for performing node 12 functions described herein.
- Node 12 includes memory 18 that is configured to store data, code and/or other information as described herein. Memory 18 includes FIB log 24 that tracks interest in one or more objects.
- FIB log 24 tracks interest based on received interest packets.
- FIB log 24 is discussed in detail with respect to FIG. 7.
- memory 18 includes a FIB for forwarding interest packets as is well known in the art.
- Memory 18 also includes Pending Interest Table (PIT) 26 that stores interests and/or interest packets forwarded towards one or more content sources, i.e., nodes 12, as is known in the art.
- Memory 18 also includes content store 28 for at least temporarily storing arriving data packets containing requested object(s).
- Memory 18 is also configured to store programmatic code such as influence code 20.
- Influence code 20 includes instructions that, when executed by processor 22, cause processor 22 to perform the influence process for determining whether to cache one or more objects, as discussed in detail with respect to FIG. 3.
- the influence process advantageously considers a higher level of connectivity of objects, i.e., an information topology, in order to determine the effect or influence on other nodes of caching one or more objects at node 12.
- the influence process anticipates potential interest in objects at node 12 and determines whether to cache the objects at node 12 based on a measure of the potential interest in objects from other nodes 12.
- This connectivity of objects may be interpreted as physical, logical or semantic relationships among entities in system 10.
- influence code 20 includes instructions which when executed by processor 22, cause processor 22 to perform another influence process for classifying one or more objects and determining whether to cache the one or more objects based on an influence metric as discussed in detail with respect to FIG. 4.
- influence code 20 includes instructions which when executed by processor 22, cause processor 22 to perform yet another influence process for classifying one or more objects and determining whether to cache one or more objects based on a number of forwarding faces 16 at node 12 as discussed in detail with respect to FIG. 5.
- processor 22 and memory 18 form processing circuitry 30.
- Memory 18 contains instructions that, when executed by processor 22, configure processor 22 to perform one or more of the functions described herein.
- processing circuitry 30 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Array) and/or ASICs (Application Specific Integrated Circuitry).
- Processor 22 may be configured to access (e.g., write to and/or read from) memory 18, which may comprise any kind of volatile and/or non-volatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory).
- Such memory 18 may be configured to store code executable by processor 22 and/or other data, e.g., data pertaining to communication, e.g., configuration and/or address data of nodes, etc.
- Processing circuitry 30 may be configured to control any of the methods and/or processes described herein and/or to cause such methods and/or processes to be performed, e.g., by node 12. Corresponding instructions may be stored in the memory 18, which may be readable and/or readably connected to processor 22.
- processing circuitry 30 may comprise a microprocessor and/or microcontroller and/or FPGA (Field-Programmable Gate Array) device and/or ASIC (Application Specific Integrated Circuit) device.
- FIG. 3 is a flow diagram of an exemplary influence process for determining whether to cache an object at node 12 based on a measure of anticipated interest in the object, i.e., the influence metric (φ), as discussed below.
- the influence process is embodied in influence code 20.
- Processing circuitry 30 determines an influence metric for node 12 with respect to one or more other nodes 12 (Block S100).
- The influence metric (φ) is defined as:
  φ(x) = Σ_{x_i ∈ N} ΔI_c(x, x_i)
- N is the set of physical and/or logical neighbor nodes of node x.
- The differential interest metric is defined as follows:
  ΔI_c(x, x_i) = 0, if deg_c(x) − deg_c(x_i) ≤ 0
  ΔI_c(x, x_i) = I_c(x) − I_c(x_i), otherwise
- the interest metric is based on a number of faces of respective node 12 that have received a request for the object.
- The interest metric (I_c(x)) for any node x may be defined as:
  I_c(x) = deg_{G_i}(x) / deg_G(x)
- deg_c(x) represents the degree of node x in graph G (discussed in detail in FIG. 6) with respect to a requested content object.
- deg_{G_i}(x) is the degree of node x in subgraph G_i of graph G with respect to the object which, in one or more embodiments, is quantified as a count of the number of faces 16 of node x receiving interest in the object.
- G1-Gn are subgraphs of graph G as discussed in detail with respect to FIG. 6.
- deg_G(x) is the degree of node x in the union of all subgraphs G_i which, in one or more embodiments, is quantified as the total number of faces 16 of node x.
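The interest and influence metrics above could be computed along these lines (a sketch under stated assumptions: the positive-difference rule for ΔI_c and the threshold value are reconstructions, not values prescribed by the disclosure):

```python
def interest_metric(interested_faces: int, total_faces: int) -> float:
    """I_c(x): fraction of a node's faces that received interest in the object."""
    return interested_faces / total_faces if total_faces else 0.0

def influence_metric(node_interest: float, neighbor_interests) -> float:
    """phi(x): sum of differential interest over neighbor nodes.

    Assumes the reconstructed definition above: a neighbor x_i contributes
    I_c(x) - I_c(x_i) only when the difference is positive.
    """
    return sum(max(node_interest - i, 0.0) for i in neighbor_interests)

# Example: node with 3 of 4 faces interested, neighbors with varying interest.
i_x = interest_metric(3, 4)                      # 0.75
phi = influence_metric(i_x, [0.25, 0.5, 1.0])    # 0.5 + 0.25 + 0 = 0.75
should_cache = phi >= 0.5                        # compare against a threshold
```

A larger φ indicates the node's interest in the object exceeds that of many neighbors, i.e., caching there is likely to satisfy forthcoming requests.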
- Processing circuitry 30 determines to cache the object in memory 18 at node 12 based at least in part on the determined influence metric (Block S102). In one or more embodiments, processing circuitry 30 compares the influence metric φ with a predefined threshold to determine whether to cache the object in memory 18 at node 12. For example, if the influence metric φ is greater than or equal to the predefined threshold, the determination is made to cache the object at node 12. If the influence metric φ is less than the predefined threshold, the determination is made not to cache the object at node 12. In either case, the object may still be forwarded to the requesting node 12, such as when the influence process is performed after receiving the data packet but before forwarding it.
- the decision whether to cache an object at node 12 is based at least in part on the determined interest metric of one or more other nodes 12. While the influence process is described with respect to determining whether to cache an object at node 12, the influence process is equally applicable to one or more embodiments where a group of nodes, i.e., a cluster, performs collaborative or distributed caching.
- FIG. 4 illustrates a flow diagram of another exemplary influence process for classifying an object and determining whether to cache the object at node 12 based on a measure of anticipated interest in the object, i.e., the influence metric (φ), as discussed below.
- Processing circuitry 30 classifies an object (Block S104).
- processing circuitry 30 classifies an object that was received in a data packet.
- the object is classified in a graph (G) along with other objects cached at node 12a and/or that have been requested from node 12a.
- Processing circuitry 30 determines interest metrics. In this example, processing circuitry 30 determines respective interest metrics of nodes 12a-12e as discussed above with respect to Block S100 (Block S106).
- Processing circuitry 30 determines an influence metric as discussed above with respect to Block S102 (Block S108).
- Processing circuitry 30 determines whether the influence metric meets a predefined threshold as discussed above with respect to Block S102 (Block S110).
- The predefined threshold may be different for different types of objects and attributes. If processing circuitry 30 determines the influence metric meets the predefined threshold, processing circuitry 30 causes the object to be cached in content store 28 of memory 18 (Block S112). If processing circuitry 30 determines the influence metric does not meet the predefined threshold, processing circuitry 30 does not cache the object (Block S114).
- FIG. 5 illustrates a flow diagram of yet another exemplary influence process for determining whether to cache an object at node 12 based on a number of forwarding faces 16 in accordance with the principles of the disclosure.
- Processing circuitry 30 classifies object as discussed above with respect to FIG. 4 (Block S104).
- Processing circuitry 30 determines whether a FIB log entry for the object exists in FIB log 24 (Block S116). If processing circuitry 30 determines a FIB log entry does not exist in FIB log 24, processing circuitry 30 drops and does not cache the object (Block S118).
- processing circuitry 30 determines a number (#) of forwarding faces 16 at node 12 that have expressed interest in the object via one or more interest packets (Block S120).
- the number of forwarding faces 16 which have expressed interest in the object is determined based on entries in FIB log 24 that tracks faces 16 of node 12 that have received an interest in the object and a count/number of received interests in the object. The number of forwarding faces that have expressed interest corresponds to the degree of node 12 with respect to the requested object.
- Processing circuitry 30 determines whether the number of forwarding faces 16 that have expressed interest in the object is greater than a predefined threshold (Block S122).
- the threshold value is determined as a function of one or more parameters such as popularity of the object, relevance of the object to other objects, number of FIB log entries for items similar to the object, available local resources such as cache storage, etc.
- the one or more parameters may be assigned one or more predefined weights denoting the difference of importance between objects.
- one or more prefixes may be given more weight than other prefixes such that certain prefixes may be more likely to be cached at node 12.
- If processing circuitry 30 determines that the number of forwarding faces 16 is greater than the predefined threshold, processing circuitry 30 caches the object in content store 28 (Block S124). If processing circuitry 30 determines that the number of forwarding faces 16 is not greater than the predefined threshold, processing circuitry 30 does not cache the object (Block S126).
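The FIG. 5 decision path above, drop when no FIB log entry exists, otherwise cache when enough forwarding faces have expressed interest, can be sketched as follows (the FIB log layout and names are assumptions for illustration):

```python
def decide_cache(fib_log: dict, region: str, threshold: int) -> bool:
    """Cache an object when its region's FIB log entry shows more distinct
    interested forwarding faces than the predefined threshold."""
    entry = fib_log.get(region)
    if entry is None:               # no FIB log entry: drop, do not cache
        return False
    interested_faces = len(entry)   # distinct faces that received interests
    return interested_faces > threshold

fib_log = {"R1": {"f1": 4, "f2": 1, "f3": 2}}  # region -> {face: interest count}
print(decide_cache(fib_log, "R1", 2))  # True: 3 interested faces > threshold 2
print(decide_cache(fib_log, "R2", 2))  # False: no FIB log entry for region R2
```

In practice the threshold would itself be computed from the weighted parameters described above (popularity, relevance, available cache storage, etc.).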
- FIG. 6 illustrates a block diagram of an exemplary graph 32 for determining the influence of node 12 on other nodes 12 if node 12 were to cache one or more objects in accordance with the principles of the disclosure.
- The instant disclosure advantageously models the information topology of the network, used to determine whether to cache one or more objects at one or more nodes 12, at least in part through the use of expander graphs. While the object can be classified into regions of the graph using known methods, the influence of caching the object at node 12 is determined based on the characteristics of the graph.
- V represents a finite set of objects, each characterized by one or more respective attributes such as name, type, etc.
- E represents the edges of the graph in which the edges may be assigned one or more weights based on relationships between objects as inferred by the name prefixes of the objects.
- one or more prefixes may be given more weight than other prefixes such that certain prefixes may be more likely to be cached at node 12.
- Prioritized prefixes may be in a deeper hierarchy, e.g., "/a/b/c", than one or more FIB log entries such as "/a/b".
- the measure of edge expansion or isoperimetric number of graph 32 for an object provides a deterministic approach for determining the influence metric of node 12 with respect to the object.
- each object may be classified to fall within a region spanned by k-dimensional space, i.e., based on k number of features/attributes.
- An object may be represented by a set of k attributes as follows:
- obj_c = (a_1^c, a_2^c, ..., a_k^c), where a_n^c is the n-th attribute of obj_c.
- Object 34a is classified in region R1 while object 34b is classified in region R2.
- The distance or semantic similarity of two objects can be measured by the Euclidean distance, assuming that the features belong to the same k-dimensional space.
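The Euclidean similarity measure can be illustrated as follows (assuming the attributes have already been encoded numerically; the encoding itself is outside the scope of this sketch):

```python
import math

def similarity_distance(obj_a, obj_b):
    """Euclidean distance between two objects' k-dimensional attribute vectors.

    A smaller distance indicates greater semantic similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(obj_a, obj_b)))

print(similarity_distance((1.0, 0.0, 2.0), (1.0, 3.0, 2.0)))  # 3.0
```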
- The number of attributes used for classification of objects may be reduced by dimension reduction techniques such as Principal Component Analysis (PCA).
- A smaller set of the most significant attributes for classification may be derived. For example, assume information about m attributes of an object is available, but classifying based on all m attributes is computationally expensive with no significant added value.
- The n most significant features/attributes that affect the decision can be used instead of all m attributes, where n < m. This advantageously reduces the amount of computing resources needed for classification of one or more objects.
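As an illustration of the dimension-reduction step, a minimal PCA via SVD of the centered attribute matrix (a stand-in sketch; the disclosure does not prescribe a specific PCA implementation):

```python
import numpy as np

def top_n_components(attribute_matrix: np.ndarray, n: int) -> np.ndarray:
    """Project objects (rows) from m attributes down to n principal components."""
    centered = attribute_matrix - attribute_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n].T   # coordinates along the top-n components

# Three objects described by m = 3 attributes, reduced to n = 1 feature each.
objs = np.array([[1.0, 2.0, 2.1], [2.0, 4.1, 4.0], [3.0, 6.0, 6.2]])
reduced = top_n_components(objs, 1)
print(reduced.shape)  # (3, 1)
```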
- FIG. 7 is an exemplary block diagram of one embodiment of system 10.
- System 10 includes nodes 12a-12f, in which node 12a includes faces 16a-16d, and nodes 12b-12f include at least one respective face 16 (referred to as face "f1" in FIG. 7).
- Node 12b receives interest 14b in content object 34a (via an interest packet) from node 12e that received interest 14b from one or more devices (not shown) that are in communication with node 12e.
- Interest 14b is forwarded to face 16a of node 12a.
- Node 12c receives interest 14c in content objects 34a and 34b from node 12f, which received interest 14c from one or more devices that are in communication with node 12f.
- Node 12d receives interest 14d in content object 34a from node 12g, which received interest 14d from one or more devices that are in communication with node 12g. Interest 14d is forwarded to face 16c of node 12a.
- Content object 34a is classified in region R1 of graph G and content object 34b is classified in region R2, as illustrated in FIG. 6.
- Node 12a includes FIB log 24.
- the content or data included in FIB log 24 is separated by regions R or subgraphs.
- Section R1 of FIB log 24 includes information regarding interest received at faces 16a-16c for content objects 34 classified in region R1, such as content object 34a.
- The information includes the face 16, e.g., face 16a, where the interest was received and a count of the number of times a content object belonging to region R1 has been requested and satisfied.
- In one or more embodiments, the count in FIB log 24 corresponds to a number of requests for one or more content objects, irrespective of whether the requests were satisfied.
- In this embodiment of FIG. 7, section R2 of FIB log 24 includes information regarding interest received at faces 16a-16c for content objects 34 classified in region R2, such as content object 34b. Similar to section R1 of FIB log 24, the information of section R2 includes the face 16, e.g., face 16b, where the interest was received and a count of the number of times a content object belonging to region R2 has been requested and satisfied.
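The FIB log layout described for FIG. 7 might look like this in memory (a hypothetical encoding of regions, faces and counts; the region/face contents follow the example above):

```python
# FIB log 24 of node 12a, keyed by graph region: each region maps the faces
# where interests arrived to a count of requests for objects in that region.
fib_log_24 = {
    "R1": {"16a": 1, "16b": 1, "16c": 1},  # interests in object 34a (region R1)
    "R2": {"16b": 1},                      # interest in object 34b (region R2)
}

# Degree of node 12a with respect to a region = number of faces with entries.
deg_r1 = len(fib_log_24["R1"])   # 3
deg_r2 = len(fib_log_24["R2"])   # 1
```

These per-region degrees are exactly the deg_{G_i}(x) counts that feed the interest metric described with respect to FIG. 3.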
- node 12e has content objects 34a and 34b cached in memory 18, and node 12a forwards interest 14b-14d to node 12e. In response to forwarded interest 14b-14d, node 12a receives data packets including content objects 34a and 34b from node 12e.
- nodes 12b, 12c and 12d include respective FIB logs 24. Using the example illustrated in FIG. 7, respective FIB logs 24 are illustrated below.
- the degree of node 12c with respect to objects in R1 and R2 is one.
- the degree of nodes 12b and 12d with respect to objects in R1 is one while the degree of nodes 12b and 12d with respect to objects in R2 is null.
- node 12a will forward content objects 34a and 34b to respective nodes 12 that expressed interest in respective content objects 34, and nodes 12b and 12d will cache object 34a, but node 12c will not cache objects 34a and 34b based on the influence process described above.
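The per-region degree comparison behind this caching outcome can be sketched with the differential interest metric (ΔI_c) defined later in the detailed description. The numbers below come from the FIG. 7 example; note the full decision also involves the interest metric and a threshold, so this fragment is illustrative only:

```python
def differential_interest(deg_x: int, deg_xi: int) -> int:
    """Differential interest metric: zero when node x's degree for the
    object's region does not exceed its neighbor's, else the difference."""
    diff = deg_x - deg_xi
    return diff if diff > 0 else 0

# FIG. 7, region R2: node 12c saw interest on one face, node 12b on none.
print(differential_interest(1, 0))  # 1
print(differential_interest(0, 1))  # 0
```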
- FIG. 8 is an exemplary block diagram of another embodiment of node 12.
- node 12 includes one or more face modules 36 for receiving and transmitting interest and data packets, among other packets and/or data, and processing module 38 for performing node 12 functions as described herein.
- node 12 includes FIB log module 40, PIT module 42 and content store module 44 that store respective data as described above with respect to modified FIB log 24, PIT 26 and content store 28.
- Node 12 further includes influence module 46 for performing the influence processes for determining whether to cache an object as described herein.
- FIG. 9 is an exemplary block diagram of a request drop policy procedure for forwarding interest packets in accordance with the disclosure.
- Node 12b receives interest packet 14 and creates a PIT entry corresponding to the interest that tracks face 16 on which interest packet 14 was received.
- node 12b transmits interest packet 14 according to FIB information such that interest packet 14 is transmitted to nodes 12a and 12c.
- node 12b updates information/data contained in interest packet 14.
- node 12a tracks face 16, i.e., face_id, of node 12a on which the interest packet arrived by creating a PIT entry and/or updating FIB log 24.
- node 12c updates its FIB log 24 to reflect the interest.
- node 12b may update its FIB log 24 to reflect face 16 on which the interest packet was received and the count.
- interest packet 14 received from node 12c will arrive after interest packet 14 has already been satisfied, such that node 12a will respond with a NULL identifier or other identifier indicating that the request with respect to node 12c was not satisfied, as discussed in detail with respect to FIG. 10.
- FIG. 10 is an exemplary block diagram of a request drop policy procedure for forwarding data packets in accordance with the disclosure.
- node 12a responds to interest packet 14 received from node 12b by satisfying the request.
- node 12a transmits a data packet to node 12b including the requested object.
- the data packet may include a response identification (response_id) containing a request identification (request_id) and face identification (face_id), i.e.,
- response_id = {request_id}-{face_id}.
- node 12b receives the response, i.e., data packet, from node 12a and transmits the data packet to node 12d according to its PIT entry.
- node 12b updates the response_id to indicate that node 12b transmitted the data packet to node 12d.
- node 12a transmits a response to interest packet 14 received from node 12c.
- response_id is set to NULL, or to another identifier indicating that the specific request has not been satisfied, instead of the usual response_id = {request_id}-{face_id}.
- node 12a will drop the PIT entry for the request identification.
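The response identifier handling above can be sketched as follows; this assumes the "{request_id}-{face_id}" string encoding is literal, which the disclosure leaves open, so the encoding here is illustrative:

```python
from typing import Optional

def make_response_id(request_id: str, face_id: Optional[str]) -> str:
    """Build a data packet's response_id: '{request_id}-{face_id}' for a
    satisfied request, or 'NULL' when the request was already satisfied
    and the PIT entry is being dropped (illustrative encoding)."""
    if face_id is None:
        return "NULL"
    return f"{request_id}-{face_id}"

print(make_response_id("req42", "16a"))  # req42-16a
print(make_response_id("req42", None))   # NULL
```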
- the concepts described herein may be embodied as a method, data processing system, and/or computer program product. Accordingly, the concepts described herein may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects, all generally referred to herein as a "circuit" or "module." Furthermore, the disclosure may take the form of a computer program product on a tangible computer usable storage medium having computer program code embodied in the medium that can be executed by a computer. Any suitable tangible computer readable medium may be utilized including hard disks, CD-ROMs, electronic storage devices, optical storage devices, or magnetic storage devices.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Computer program code for carrying out operations of the concepts described herein may be written in an object oriented programming language such as Java® or C++.
- the computer program code for carrying out operations of the disclosure may also be written in conventional procedural programming languages, such as the "C" programming language.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer.
- the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Abstract
A node for a communication network is provided and is adapted to selectively cache objects. The node is configured to communicate with a plurality of other nodes. The node includes processing circuitry including a processor and a memory. The memory contains instructions that, when executed by the processor, configure the processor to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric. The influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes and is a measure of an anticipated interest in the object, i.e., of the object's popularity. The node may be an information-centric node or a content-centric node.
Description
SELECTIVE CACHING FOR INFORMATION-CENTRIC NETWORK BASED CONTENT DELIVERY
TECHNICAL FIELD
The present disclosure relates to communication networks, and in particular, to a method, node and system for selective caching of objects at one or more nodes in an information-centric network (ICN).
BACKGROUND
Existing Internet Protocol (IP) networks are based on mapping from what a user requests to where the requested content is located. For example, in general, IP packets contain two identifiers, namely, a source address and a destination address. These addresses represent the network's "where," in which the nodes in the IP network path forward the IP packet based on the addresses. In other words, these existing IP networks are host-centric, as connectivity is based on end-to-end principles that name hosts, i.e., address(es). However, these IP networks are not without issues. For example, fast, reliable content access requires awkward, pre-planned, application-specific mechanisms like Content Distribution Networks (CDNs) and peer-to-peer (P2P) networks. Also, there exists a location dependency in that content is mapped to host locations, which complicates network configurations and implementation of network services.
In order to help alleviate these problems with existing IP networks while at the same time keeping up with the increase in content distribution and retrieval in these IP networks, a new paradigm has been introduced in which content retrieval is based on the content itself rather than on where in the network the content is located. This new paradigm is referred to as Information-Centric Networking (ICN), where a Content-Centric Network (CCN) is a particular implementation of ICN. ICN and CCN are communication architectures built on named data or information.
In a CCN there are two types of packets, interest packets and data packets. An interest packet indicates that a consumer has an interest in certain content, in which case any CCN node receiving the interest packet and having the requested content can respond with a data packet including the requested content/data. In other words, interest packets request content while data packets are the response to the request. CCN routing may be performed hop-by-hop using longest-prefix matching on the content name. The content/data follows the reverse path of the corresponding interest packet. As a fictional example, a CCN Name could look like:
/company/whitepapers/real-time-cloud/pages/3
The CCN name would denote the third page of the "Real-Time cloud" whitepaper released by the company. A prefix in the CCN name is called the routable name and should be known to enough routers to be routable in the network. In this case, the routable name would likely be "/company", in which case the remainder of the name would be interpreted on-site. Data satisfies the interest packet (interest) if the content name in the interest packet is a prefix of the content name in the data packet. In general, CCN nodes such as forwarders and routers hold more state information than IP routers. These CCN nodes hold a Forwarding Information Base (FIB), which the CCN nodes use for forwarding, e.g., to match {CCN Name (or prefix), ingress port(s) -> egress port(s)}.
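The prefix rule above — data satisfies the interest if the interest's content name is a prefix of the data's content name — can be sketched as a component-wise comparison. A minimal illustrative check, not a full CCN matcher:

```python
def satisfies(interest_name: str, data_name: str) -> bool:
    """True when the interest's content name is a prefix, on whole name
    components, of the data packet's content name."""
    want = interest_name.strip("/").split("/")
    have = data_name.strip("/").split("/")
    return have[:len(want)] == want

print(satisfies("/company/whitepapers/real-time-cloud",
                "/company/whitepapers/real-time-cloud/pages/3"))  # True
print(satisfies("/company/videos",
                "/company/whitepapers/real-time-cloud"))          # False
```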
In addition to the FIB, the CCN node must be able to perform reverse-path routing for data packets, i.e., content object packets. This means that the CCN node must track which requests have been forwarded, and from which interface(s) to which interface(s). This state is kept in a table called the Pending Interest Table (PIT). When a response is received at the CCN node, the CCN node checks the CCN name against the PIT in order to determine on which interface(s) to route the data packet. In one embodiment, the CCN nodes are allowed to cache data packets locally in a table called a content store which is based on local policy.
Based on existing CCN implementations, content will be cached on all nodes that the content traverses on the reverse path back to the requester. However, this leads to unnecessary usage of storage resources globally throughout the network. In another example, in content distribution network (CDN) based solutions, items or content are cached based on a geographical and physical-connectivity measure of closeness, such that content is cached close to the edge. However, this geographical notion of topology for deciding to cache content at edge nodes does not optimize usage of resources because caching based on geography may still lead to inefficient and wasteful caching.
SUMMARY
The present disclosure advantageously provides a method, node and system for selective caching of objects at one or more nodes in an information-centric network (ICN). According to one embodiment of the disclosure, a node is provided. The node is configured to communicate with a plurality of other nodes. The node includes processing circuitry including a processor and a memory. The memory contains instructions that, when executed by the processor, configure the processor to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric. The influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
According to one embodiment of this aspect, the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes. The interest metric of a respective node is based on a number of faces of the respective node that have received a request for an object. According to another embodiment of this aspect, the interest metric of a respective node is further based on a total number of faces of the respective node. According to another embodiment of this aspect, the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
According to another embodiment of this aspect, the memory contains further instructions that, when executed by the processor, configure the processor to compare an interest metric of the node with an interest metric of each of the plurality of other nodes, and determine the influence metric of the node based on the comparison. According to another embodiment of this aspect, the interest metric is further based on a similarity metric of a respective node, the similarity metric indicating a measure of similarity between the object and at least one other object cached in memory. According to another embodiment of this aspect, the memory contains further instructions that, when executed by the processor, cause the processor to classify the object and the at least one other object in an expander graph. The similarity metric is based on a Euclidean distance in the expander graph between the object and the at least one other object.
According to another embodiment of this aspect, the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph. According to another embodiment of this aspect, the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data. According to another embodiment of this aspect, the memory contains further instructions that, when executed by the processor, configure the processor to compare the influence metric of the node with a predefined threshold. The determination to cache the object in memory is based on the influence metric meeting the predefined threshold. According to another embodiment of this aspect, the node is a Content-Centric Network (CCN) node. According to another embodiment of this aspect, the object is one of a content object and service instance.
According to another embodiment of the disclosure, a method for a node configured to communicate with a plurality of other nodes is provided. An influence metric for the node is determined. The influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes. A determination is made to cache the object at the node based at least in part on the determined influence metric.
According to another embodiment of this aspect, the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes, the interest metric of a respective node being based on a number of faces of the respective node that have received a request for an object. According to another embodiment of this aspect, the interest metric of a respective node is further based on a total number of faces of the respective node. According to another embodiment of this aspect, the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
According to another embodiment of this aspect, an interest metric of the node is compared with an interest metric of each of the plurality of other nodes, and the influence metric of the node is determined based on the comparison. According to another embodiment of this aspect, the interest metric is further based on a similarity metric of a respective node. The similarity metric indicates a measure of similarity between the object and at least one other object. According to another embodiment of this aspect, the object and the at least one other object are classified in an expander graph. The similarity metric is based on a Euclidean distance in the expander graph between the object and the at least one other object.
According to another embodiment of this aspect, the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph. According to another embodiment of this aspect, the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data. According to another embodiment of this aspect, the influence metric of the node is compared with a predefined threshold. The determination to cache the object at the node is based on the influence metric meeting the predefined threshold. According to another embodiment of this aspect, the node is a Content-Centric Network (CCN) node. According to another embodiment of this aspect, the object is one of a content object and service instance.
According to another embodiment of the disclosure, a node is provided. The node includes a face module configured to communicate with a plurality of other nodes. The node includes a processing module configured to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric. The influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
According to another embodiment of the disclosure, a computer readable storage medium is provided. The computer readable storage medium stores executable instructions, which when executed by a processor, cause the processor to determine an influence metric for the node, and determine to cache the object in the memory based at least in part on the determined influence metric. The influence metric indicates an influence that caching an object at the node will have on the plurality of other nodes.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present disclosure, and the attendant advantages and features thereof, will be more readily understood by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein:
FIG. 1 is a block diagram of an exemplary system for caching objects at one or more nodes in accordance with the principles of the disclosure;
FIG. 2 is a block diagram of an exemplary node for determining whether to cache objects in accordance with the principles of the disclosure;
FIG. 3 is flow diagram of an exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure;
FIG. 4 is a flow diagram of another exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure;
FIG. 5 is a flow diagram of yet another exemplary influence process for determining whether to cache an object in accordance with the principles of the disclosure;
FIG. 6 is a block diagram of an exemplary expander graph for determining the influence of caching objects in accordance with the principles of the disclosure;
FIG. 7 is an exemplary block diagram of one embodiment of the system for caching objects in accordance with the principles of the disclosure;
FIG. 8 is an exemplary block diagram of another embodiment of a node for determining whether to cache objects in accordance with the principles of the disclosure;
FIG. 9 is an exemplary block diagram of a request drop policy procedure for forwarding interest packets in accordance with the disclosure; and
FIG. 10 is an exemplary block diagram of a request drop policy procedure for forwarding data packets in accordance with the disclosure.
DETAILED DESCRIPTION
Before describing in detail exemplary embodiments that are in accordance with the disclosure, it is noted that the embodiments reside primarily in combinations of apparatus/node components and processing steps related to determining whether to cache one or more objects at one or more nodes. Accordingly, components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
As used herein, relational terms, such as "first," "second," "top" and "bottom," and the like, may be used solely to distinguish one entity or element from another entity or element without necessarily requiring or implying any physical or logical relationship or order between such entities or elements. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts described herein. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including" when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In embodiments described herein, the joining term, "in communication with" and the like, may be used to indicate electrical or data communication, which may be accomplished by physical contact, induction, electromagnetic radiation, radio signaling, infrared signaling or optical signaling, for example. One having ordinary skill in the art will appreciate that multiple components may interoperate and modifications and variations are possible of achieving the electrical and data communication.
The method(s), node(s) and system(s) described herein advantageously provide for selective caching of objects at one or more nodes based on an information topology. In particular, information topology represents a higher level connectivity of objects such as physical, logical and/or semantic relationships between objects, such that the caching decision is able to identify objects having a potential chance of being requested by other nodes. By anticipating future interest in one or more objects, the instant disclosure is able to determine whether caching objects at one or more nodes is an efficient use of node resources. This approach achieves optimal placement of objects in order to increase access of objects to potential object requesters while using minimum resources. In other words, there are a limited number of cache resources for objects in which the caching decision is based on an influence of a node in a certain context that is related to the type of object and centrality of the node. The influence of a node, in one or more embodiments, indicates a level of diversity of nodes that are interested in an object as well as the extent of the interest. In some embodiments, the influence may indicate an influence or impact that caching an object at the node will have on that node and/or the plurality of other nodes. In one or more embodiments, the influence (i.e., influence metric) that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of a graph such as a level of connectivity between nodes, centrality and/or other expansion properties of the graph as described herein. In one or more embodiments, the influence (i.e., influence metric) that caching the object at the node will have on the plurality of other nodes is a relative measure of the likelihood that caching a particular object will satisfy a current or future interest of other nodes in the network.
The influence of a node on the other nodes can be estimated/calculated/inferred with respect to an object such that a larger influence metric value indicates a high probability that caching the object will anticipate forthcoming requests for similar content.
Referring now to drawing figures in which like reference designators refer to like elements there is shown in FIG. 1 an exemplary system for information topology aware caching of objects in accordance with the principles of the disclosure and designated generally as "10." System 10 includes one or more nodes 12a-12n (collectively referred to as "node 12") that are in communication with each other via one or more networks (not shown). In one or more embodiments, one or more nodes 12 are Content-Centric Nodes (CCNs) and/or Information-Centric Nodes (ICNs).
While the disclosure is described with respect to CCN/ICN technology, the disclosure is equally applicable to other network types that use routing tables. Each node 12 receives respective interest packets 14a-14n (collectively referred to as interest 14) from one or more devices, nodes and/or other elements of system 10 that are configured to transmit an interest packet to node 12 that requests one or more objects. As used in this disclosure, object refers to content chunk(s), i.e., content object(s) or content data, and/or a service instance with active functionality such as text and multimedia files, software, streaming audio and video and other types of
content/service instance(s) known in the art.
Node 12 includes one or more faces 16a-16n (collectively referred to as face 16) for receiving and transmitting packets such as interest packets and data packets. As used herein, the term "face" refers to a physical interface such as a hardware network interface, a logical portion of the physical interface and/or a logical interface between application processes within a machine or node 12. Node 12 includes memory 18 for storing various data, objects and programmatic code, as described herein and with respect to FIG. 2. In one or more embodiments, memory 18 includes influence code 20 for performing the influence process that determines whether to cache one or more objects at node 12, as described in detail with respect to FIGS. 3-5.
FIG. 2 is a block diagram of an exemplary configuration of node 12. Node 12 includes one or more faces 16 for receiving and transmitting packets such as interest packets and data packets. Faces 16 are not limited to solely receiving and transmitting interest and data packets; other packets can be sent and received via faces 16. Node 12 includes one or more processors 22 for performing node 12 functions described herein. Node 12 includes memory 18 that is configured to store data, code and/or other information as described herein. Memory 18 includes Forwarding Information Base (FIB) log 24 that tracks interest in one or more objects. In one or more embodiments, FIB log 24 tracks interest based on received interest packets. FIB log 24 is discussed in detail with respect to FIG. 7. In one or more embodiments, memory 18 includes a FIB for forwarding interest packets as is well known in the art. Memory 18 also includes Pending Interest Table (PIT) 26 that stores interests and/or interest packets forwarded towards one or more content sources, i.e., nodes 12, as is known in the art. Memory 18 also includes content store 28 for at least temporarily storing arriving data packets containing requested object(s).
Memory 18 is also configured to store programmatic code such as influence code 20. For example, influence code 20 includes instructions that, when executed by processor 22, cause processor 22 to perform the influence process for determining whether to cache one or more objects as discussed in detail with respect to FIG. 3. In particular, the influence process advantageously considers a higher level of connectivity of objects, i.e., an information topology, in order to determine the effect or influence on other nodes of caching one or more objects at node 12. In other words, the influence process anticipates potential interest in objects at node 12 and determines whether to cache the objects at node 12 based on a measure of the potential interest in objects from other nodes 12. This connectivity of objects may be interpreted as physical, logical or semantic relationships among entities in system 10. In another example, influence code 20 includes instructions which when executed by processor 22, cause processor 22 to perform another influence process for classifying one or more objects and determining whether to cache the one or more objects based on an influence metric as discussed in detail with respect to FIG. 4. In yet another example, influence code 20 includes instructions which when executed by processor 22, cause processor 22 to perform yet another influence process for classifying one or more objects and determining whether to cache one or more objects based on a number of forwarding faces 16 at node 12 as discussed in detail with respect to FIG. 5.
In one or more embodiments, processor 22 and memory 18 form processing circuitry 30. Memory 18 contains instructions that, when executed by processor 22, configure processor 22 to perform one or more functions described with respect to FIGS. 3-5. In addition to a traditional processor and memory, processing circuitry 30 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Arrays) and/or ASICs (Application Specific Integrated Circuits). Processor 22 may be configured to access (e.g., write to and/or read from) memory 18, which may comprise any kind of volatile and/or non-volatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory). Such memory 18 may be configured to store code executable by processor 22 and/or other data, e.g., data pertaining to communication, e.g., configuration and/or address data of nodes, etc.
Processing circuitry 30 may be configured to control any of the methods and/or processes described herein and/or to cause such methods and/or processes to be performed, e.g., by node 12. Corresponding instructions may be stored in the memory 18, which may be readable and/or readably connected to processor 22. In other words, processing circuitry 30 may comprise a microprocessor and/or microcontroller and/or FPGA (Field-Programmable Gate Array) device and/or ASIC (Application Specific Integrated Circuit) device.
FIG. 3 is a flow diagram of an exemplary influence process for determining whether to cache an object at node 12 based on a measure of an anticipated interest in the object, i.e., the influence metric (ψ), as discussed below. In one or more embodiments, the influence process is embodied in influence code 20. Processing circuitry 30 determines an influence metric for node 12 with respect to one or more other nodes 12 (Block S100). The influence metric (ψ) is defined as:
ψ = Σ_{i∈N} ΔI_c(x, x_i)   (1)
which is the summation of differential interest metrics (ΔI_c(x, x_i)). N is a set of physical and/or logical neighbor nodes. The differential interest metric is defined as follows:
ΔI_c(x, x_i) = 0 if deg_c(x) - deg_c(x_i) ≤ 0, and |deg_c(x) - deg_c(x_i)| otherwise    (2)

where deg_c(x) is the degree of node x that may cache content or object c ("object") and deg_c(x_i) is the degree of other node x_i, such as a neighbor node or node in logical communication with node x. In one or more embodiments, the interest metric is based on a number of faces of respective node 12 that have received a request for the object. For example, in general, the interest metric (I_c(x)) for any node x may be defined as
I_c(x) = (deg_Gi(CCNR) / deg_G(CCNR)) · η    (3)
where η is a similarity metric determined by a Euclidean measure between objects as discussed in detail with respect to FIG. 6. Further, deg_Gi(CCNR) represents a degree of node x in Graph G (discussed in detail in FIG. 6) with respect to a requested content object. In other words, deg_Gi(CCNR) is the degree of subgraph Gi of Graph G with respect to the object which, in one or more embodiments, is quantified as a count of the number of faces 16 of node x receiving interest in the object. The term deg_G(CCNR) represents the degree of node x in Graph G where G = {G1 ∪ G2 ∪ ... ∪ Gn}. G1-Gn are subgraphs of Graph G as discussed in detail with respect to FIG. 6. In other words, deg_G(CCNR) is the degree of the union of all subgraphs Gi which, in one or more embodiments, is quantified as the total number of faces 16 of node x.
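The relationship among equations (1)-(3) can be sketched in a few lines of Python. The function names and the degree inputs below are illustrative assumptions, not identifiers from the disclosure:

```python
# Minimal sketch of equations (1)-(3); names and inputs are illustrative.

def interest_metric(faces_with_interest, total_faces, eta=1.0):
    """Equation (3): I_c(x) = (deg_Gi(CCNR) / deg_G(CCNR)) * eta."""
    return (faces_with_interest / total_faces) * eta

def differential_interest(deg_x, deg_xi):
    """Equation (2): zero unless node x's degree exceeds that of node x_i."""
    diff = deg_x - deg_xi
    return abs(diff) if diff > 0 else 0

def influence_metric(deg_x, neighbor_degrees):
    """Equation (1): sum of differential interest metrics over neighbors N."""
    return sum(differential_interest(deg_x, d) for d in neighbor_degrees)

# Node x received interest for object c on 3 faces; its neighbors on 1, 0 and 1.
print(influence_metric(3, [1, 0, 1]))   # (3-1) + (3-0) + (3-1) = 7
print(interest_metric(3, 4))            # 0.75
```

Per equation (2), a neighbor whose degree is at least as large as the node's own contributes nothing to ψ.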
Processing circuitry 30 determines to cache the object in memory 18 at node 12 based at least in part on the determined influence metric (Block S102). In one or more embodiments, processing circuitry 30 compares the influence metric ψ with a predefined threshold (τ) to determine whether to cache the object in memory 18 at node 12. For example, if the influence metric ψ is greater than or equal to the predefined threshold, the determination is made to cache the object at node 12. If the influence metric ψ is less than the predefined threshold, the determination is made not to cache the object at node 12. In either case, the object may still be forwarded to requesting node 12, such as if the influence process is performed after receiving the data packet but before forwarding of the data packet. Therefore, the decision whether to cache an object at node 12 is based at least in part on the determined interest metrics of one or more other nodes 12. While the influence process is described with respect to determining whether to cache an object at node 12, the influence process is equally applicable to one or more embodiments where a group of nodes, i.e., a cluster, performs collaborative or distributed caching.
FIG. 4 illustrates a flow diagram of another exemplary influence process for classifying an object and determining whether to cache the object at node 12 based on a measure of an anticipated interest in the object, i.e., influence metric (ψ), as discussed below. In this embodiment it is assumed that the determination is being made as to whether to cache an object at node 12a, and nodes 12b-e are other nodes in system 10 that are in logical communication with node 12a. Processing circuitry 30
classifies an object (Block S104). In one or more embodiments, processing circuitry 30 classifies an object that was received in a data packet. In particular, the object is classified in a graph (G) along with other objects cached at node 12a and/or that have been requested from node 12a. The classification of objects allows node 12a to determine relationships between the object that is the subject of the caching decision and other objects cached at and/or known to node 12a. The classification is discussed in detail with respect to FIG. 6. Processing circuitry 30 determines interest metrics (Block S106). In this example, processing circuitry 30 determines respective interest metrics of nodes 12a-12e as discussed above with respect to Block S100 (Block S106).
Processing circuitry 30 determines an influence metric as discussed above with respect to Block S102 (Block S108). In one or more embodiments, the influence (i.e., influence metric) that caching the object at the node will have on the plurality of other nodes is a relative measure of the likelihood that caching a particular object will satisfy a current or future interest of other nodes in the network. The influence of a node on the other nodes can be estimated/calculated/inferred with respect to an object such that a larger influence metric value indicates a high probability that caching the object will anticipate forthcoming requests for similar content. Processing circuitry 30 determines whether the influence metric meets a predefined threshold as discussed above with respect to Block S102 (Block S110). In one or more embodiments, the predefined threshold may be different for different types of objects and attributes. If processing circuitry 30 determines the influence metric meets the predefined threshold, processing circuitry 30 causes the object to be cached in content store 28 of memory 18 (Block S112). If processing circuitry 30 determines the influence metric does not meet the predefined threshold, processing circuitry 30 does not cache the object (Block S114).
FIG. 5 illustrates a flow diagram of yet another exemplary influence process for determining whether to cache an object at node 12 based on a number of forwarding faces 16 in accordance with the principles of the disclosure. Processing circuitry 30 classifies the object as discussed above with respect to FIG. 4 (Block S104). Processing circuitry 30 determines whether a FIB log entry for the object exists in FIB log 24 (Block S116). If processing circuitry 30 determines a FIB log entry does not exist in FIB log 24, processing circuitry 30 drops and does not cache the object (Block S118). In other words, if a FIB log entry for the object does not exist in FIB log 24, then the object has not been previously requested on face 16 such that caching the object with little to no demand would be an inefficient use of resources. Hence, the object is dropped and not cached.
If processing circuitry 30 determines a FIB log entry for the object exists in FIB log 24, processing circuitry 30 determines a number (#) of forwarding faces 16 at node 12 that have expressed interest in the object via one or more interest packets (Block S120). In one or more embodiments, the number of forwarding faces 16 which have expressed interest in the object is determined based on entries in FIB log 24 that tracks faces 16 of node 12 that have received an interest in the object and a count/number of received interests in the object. The number of forwarding faces that have expressed interest corresponds to the degree of node 12 with respect to the requested object.
Processing circuitry 30 determines whether the number of forwarding faces 16 that have expressed interest in the object is greater than a predefined threshold (Block S122). In one or more embodiments, the threshold value is determined as a function of one or more parameters such as popularity of the object, relevance of the object to other objects, number of FIB log entries for items similar to the object, available local resources such as cache storage, etc. In one or more embodiments, the one or more parameters may be assigned one or more predefined weights denoting differences in importance between objects. In one or more embodiments, one or more prefixes may be given more weight than other prefixes such that certain prefixes may be more likely to be cached at node 12. These prioritized prefixes may be in a deeper hierarchy, e.g., "/a/b/c", than one or more FIB log entries such as "/a/b". In one or more embodiments, these prioritized prefixes may not be present in FIB log 24. If processing circuitry 30 determines that the number of forwarding faces 16 is greater than the predefined threshold, processing circuitry 30 caches the object in content store 28 (Block S124). If processing circuitry 30 determines that the number of forwarding faces 16 is less than the predefined threshold, processing circuitry 30 does not cache the object (Block S126).
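The FIG. 5 decision flow (Blocks S116-S126) can be sketched as follows, assuming a FIB log keyed by object region with per-face interest counts; the data layout and names are illustrative, not the disclosure's implementation:

```python
# Sketch of the FIG. 5 flow; fib_log maps region -> {face: interest count}.

def fib_log_decision(fib_log, region, threshold):
    entries = fib_log.get(region)        # Block S116: does a FIB log entry exist?
    if not entries:
        return "drop"                    # Block S118: never requested, so drop
    forwarding_faces = len(entries)      # Block S120: faces that expressed interest
    if forwarding_faces > threshold:     # Block S122: compare against threshold
        return "cache"                   # Block S124
    return "forward-only"                # Block S126: forward but do not cache

fib_log = {"R1": {"f1": 4, "f2": 1, "f3": 2}, "R2": {"f2": 1}}
print(fib_log_decision(fib_log, "R1", 2))  # cache
print(fib_log_decision(fib_log, "R3", 2))  # drop
```

As in the text, an object in a region with no FIB log entry is dropped outright, since caching content with little to no demand would waste resources.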
FIG. 6 illustrates a block diagram of an exemplary graph 32 for determining the influence of node 12 on other nodes 12 if node 12 were to cache one or more objects in accordance with the principles of the disclosure. In particular, the instant disclosure advantageously models the information topology of the network that is used to determine whether to cache one or more objects at one or more nodes 12, at least in part, through the use of expander graphs. While the object can be classified into regions of the graph using known methods, the influence of caching the object at node 12 is determined based on the characteristics of the graph.
Interest in content, i.e., received interest packets, is represented as a graph G=(V, E) of vertices V and edges E. V represents a finite set of objects characterized by one or more respective attributes such as name, type, etc. E represents the edges of the graph in which the edges may be assigned one or more weights based on relationships between objects as inferred by the name prefixes of the objects. In one or more embodiments, one or more prefixes may be given more weight than other prefixes such that certain prefixes may be more likely to be cached at node 12. These prioritized prefixes may be in a deeper hierarchy, e.g., "/a/b/c", than one or more FIB log entries such as "/a/b". The measure of edge expansion or isoperimetric number of graph 32 for an object provides a deterministic approach for determining the influence metric of node 12 with respect to the object.
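As one hedged illustration of how edge weights might be inferred from name prefixes (the disclosure does not fix a particular weighting rule), a shared-prefix-depth measure could weight edges so that objects under a deeper common hierarchy are tied more strongly:

```python
# Illustrative edge weighting: weight = depth of the shared name prefix.

def shared_prefix_depth(name_a, name_b):
    depth = 0
    for a, b in zip(name_a.strip("/").split("/"), name_b.strip("/").split("/")):
        if a != b:
            break
        depth += 1
    return depth

objects = ["/a/b/c", "/a/b/d", "/a/x"]
edges = {}
for i, u in enumerate(objects):
    for v in objects[i + 1:]:
        w = shared_prefix_depth(u, v)
        if w:
            edges[(u, v)] = w   # deeper shared hierarchy -> heavier edge

print(edges)  # {('/a/b/c', '/a/b/d'): 2, ('/a/b/c', '/a/x'): 1, ('/a/b/d', '/a/x'): 1}
```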
One embodiment illustrated in FIG. 6 shows an example for classifying objects into four regions based on two features, i.e., two axes, in which each region R will see a different topology view as G=(V,E). In general, each object may be classified to fall within a region spanned by k-dimensional space, i.e., based on k number of features/attributes. Specifically, an object may be represented by a set of k attributes as follows:
obj_c = (a_c^1, a_c^2, ..., a_c^k)

where a_c^n is the nth attribute of obj_c. As illustrated in FIG. 6, object 34a is classified in region R1 while object 34b is classified in region R2. The distance or semantic similarity of two objects can be measured by the Euclidean distance, assuming that the features belong to perpendicular axes.
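A minimal sketch of this Euclidean similarity measure, using invented two-dimensional feature encodings (the actual attributes and their numeric encodings are not specified by the disclosure):

```python
import math

# Objects as k-dimensional attribute vectors obj_c = (a_c^1, ..., a_c^k).
def euclidean_distance(obj_a, obj_b):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(obj_a, obj_b)))

video_news = (0.9, 0.1)    # hypothetical (media-type, topic) features
video_sport = (0.9, 0.8)
text_news = (0.1, 0.1)

# The two video objects are closer to each other than video news is to text news.
print(euclidean_distance(video_news, video_sport) <
      euclidean_distance(video_news, text_news))  # True
```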
In one or more embodiments, the number of attributes used for classification of objects may be reduced by dimension reduction techniques such as Principal Component Analysis (PCA). Using dimension reduction techniques, a smaller set of the most significant attributes for classification may be derived. For example, assume object information about m attributes is available but classifying based on all m attributes is computationally expensive with no significant added value. Using a dimension reduction technique, the n most significant features/attributes that affect the decision can be used instead of all m attributes, where n < m. This advantageously reduces the amount of computer resources needed for classification of one or more objects.
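As a simplified stand-in for PCA (which would project objects onto principal components), the following sketch ranks m=4 hypothetical attributes by variance and keeps the n=2 most significant, illustrating the same m-to-n reduction; the data and the variance-ranking rule are assumptions for illustration only:

```python
# Variance-based attribute ranking: a simpler proxy for the PCA reduction
# described in the text (keep the n of m attributes that vary the most).

def top_variance_attributes(objects, n):
    m = len(objects[0])
    means = [sum(o[j] for o in objects) / len(objects) for j in range(m)]
    variances = [sum((o[j] - means[j]) ** 2 for o in objects) / len(objects)
                 for j in range(m)]
    # Indices of the n highest-variance attributes, in original order.
    keep = sorted(range(m), key=lambda j: variances[j], reverse=True)[:n]
    return sorted(keep)

objects = [(0.1, 5.0, 0.11, 3.0),
           (0.1, 1.0, 0.10, 7.0),
           (0.1, 9.0, 0.12, 1.0)]
print(top_variance_attributes(objects, 2))  # [1, 3]: the 2nd and 4th attributes vary most
```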
FIG. 7 is an exemplary block diagram of one embodiment of system 10. In particular, system 10 includes nodes 12a-12f in which node 12a includes faces 16a-16d, and nodes 12b-12f include at least one respective face 16 (referred to as face "f1" in FIG. 7). Node 12b receives interest 14b in content object 34a (via an interest packet) from node 12e that received interest 14b from one or more devices (not shown) that are in communication with node 12e. Interest 14b is forwarded to face 16a of node 12a. Node 12c receives interest 14c in content objects 34a and 34b from node 12f that received interest 14c from one or more devices that are in
communication with node 12f, in which the interest is forwarded to face 16b of node 12a. Further, node 12d receives interest 14d in content object 34a from node 12g that received interest 14d from one or more devices that are in communication with node 12g. Interest 14d is forwarded to face 16c of node 12a. In this example, it is assumed that content object 34a is classified in region R1 of graph G and content object 34b is classified in region R2, as illustrated in FIG. 6.
Node 12a includes FIB log 24. In one or more embodiments, the content or data included in FIB log 24 is separated by regions R or subgraphs. For example, as illustrated in FIG. 7, section R1 of FIB log 24 includes information regarding interest received at faces 16a-16c for content objects 34 classified in region R1, such as content object 34a. In one or more embodiments, the information includes face 16, e.g., face 16a, where the interest was received and a count of the number of times a content object belonging to region R1 has been requested and satisfied. In one or more embodiments, the count in FIB log 24 corresponds to a number of requests for one or more content objects irrespective of whether the requests for the objects were satisfied. In this embodiment of FIG. 7, the degree of node 12a with respect to objects in R1 is three, i.e., degree = 3, and with respect to objects in R2 is one, i.e., degree = 1.
Also, section R2 of FIB log 24 includes information regarding interest received at faces 16a-16c for content objects 34 classified in region R2, such as content object 34b. Similar to section R1 of FIB log 24, the information of section R2 includes face 16, e.g., face 16b, where the interest was received and a count of the number of times a content object belonging to region R2 has been requested and satisfied. In this embodiment, node 12e has content objects 34a and 34b cached in memory 18, and node 12a forwards interests 14b-14d to node 12e. In response to forwarded interests 14b-14d, node 12a receives data packets including content objects 34a and 34b from node 12e.
Similarly, nodes 12b, 12c and 12d include respective FIB logs 24. Using the example illustrated in FIG. 7, the respective FIB logs 24 are illustrated below.
[FIB log 24 table of node 12c: (face, count) entries for regions R1 and R2.]
[FIB log 24 table of nodes 12b and 12d: (face, count) entries for regions R1 and R2.]
The degree of node 12c with respect to objects in R1 and R2 is one. The degree of nodes 12b and 12d with respect to objects in R1 is one, while the degree of nodes 12b and 12d with respect to objects in R2 is null.
When node 12a receives content objects 34a and 34b, node 12a applies the influence process of influence code 20 to determine whether to cache content objects 34a and 34b. In one or more embodiments, node 12a determines it has good edge expansion with respect to content object 34a but not with respect to content object 34b. In another example, applying the term Σ_{i∈N} ΔI_c(x, x_i) of equation (1) to data in FIB log 24 of node 12c leads to (1-0) + (1-1) + (1-0) or ψ=2. In this embodiment, the threshold (τ) = μ*n where μ ∈ (0,1] and n is the number of faces 16. Choosing μ=1 leads to a threshold of 3 for node 12c, assuming node 12c has three faces. Therefore, for R2, ψ=2 does not meet threshold (τ) = 3, so the object belonging to R2 is not cached at node 12c. In this embodiment, ψ must be greater than or equal to τ in order for τ to be met.
In yet another example, applying the term Σ_{i∈N} ΔI_c(x, x_i) of equation (1) to data in FIB log 24 of node 12d leads to ψ=7. In this embodiment, the threshold (τ) = μ*n where μ ∈ (0,1] and n is the number of faces 16. Choosing μ=1 leads to a threshold of 3 for node 12d, assuming node 12d has three faces. Therefore, for R1, ψ=7 meets threshold (τ) = 3, i.e., ψ is greater than or equal to τ, so the object belonging to R1 is cached at node 12d. Therefore, node 12a will forward content objects 34a and 34b to respective nodes 12 that expressed interest in respective content objects 34, and nodes 12b and 12d will cache object 34a, but node 12c will not cache objects 34a and 34b based on the influence process described above.
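The node 12c calculation above can be reproduced in a few lines, assuming the degrees implied by the text (node 12c's own degree for R2 is 1, and the three per-face neighbor degrees are 0, 1 and 0):

```python
# Reproducing the worked R2 threshold check for node 12c; the neighbor
# degrees [0, 1, 0] are inferred from the text, not stated explicitly.

def delta_interest(deg_x, deg_xi):
    diff = deg_x - deg_xi
    return abs(diff) if diff > 0 else 0   # equation (2)

psi = sum(delta_interest(1, d) for d in [0, 1, 0])  # (1-0) + 0 + (1-0) = 2
mu, n_faces = 1.0, 3
tau = mu * n_faces                                  # threshold tau = mu * n = 3
print(psi >= tau)  # False, so the object belonging to R2 is not cached at 12c
```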
FIG. 8 is an exemplary block diagram of another embodiment of node 12. In particular, node 12 includes one or more face modules 36 for receiving and transmitting interest and data packets, among other packets and/or data. Node 12 includes processing module 38 for performing node 12 functions as described herein. Further, node 12 includes FIB log module 40, PIT module 42 and content store module 44 that store respective data as described above with respect to modified FIB log 24, PIT 26 and content store 28. Node 12 further includes influence module 46 for performing the influence processes for determining whether to cache an object as described herein.
FIG. 9 is an exemplary block diagram of a request drop policy procedure for forwarding interest packets in accordance with the disclosure. In particular, node 12d transmits interest packet 14 at time t=1 in which the interest packet includes a namePrefix indicating the requested object and a request identification (request_id) corresponding to the node transmitting interest packet 14. In this case, node 12d transmits interest packet 14 such that the request identification is request_id = 12d, as illustrated in FIG. 9. Node 12b receives interest packet 14 and creates a PIT entry corresponding to the interest that tracks face 16 on which interest packet 14 was
received. At time t=2, node 12b transmits interest packet 14 according to FIB information such that interest packet 14 is transmitted to nodes 12a and 12c.
Further, node 12b updates information/data contained in interest packet 14. For example, the request_id of interest packet 14 is updated to include node 12b, i.e., request_id = {12d, 12b} in this example. Node 12a receives the interest packet with request_id = {12d, 12b} and satisfies the interest packet at time t=3 as discussed in detail with respect to FIG. 10. In particular, node 12a tracks face 16, i.e., face_id, of node 12a on which the interest packet arrived by creating a PIT entry and/or updating FIB log 24. In this example, interest packet 14 arrives on face_id = {3} of node 12a.
Also, node 12c receives interest packet 14 with request_id = {12d, 12b} and creates a PIT entry noting face 16 of node 12c on which interest packet 14 arrived. In one or more embodiments, node 12c updates its FIB log 24 to reflect the interest. Subsequently, node 12c transmits an updated interest packet 14 to node 12a at time t=3. In particular, node 12c updates the request_id of the interest packet to indicate that interest packet 14 was transmitted from node 12c, i.e., request_id is updated to request_id = {12d, 12b, 12c}. Further, node 12b may update its FIB log 24 to reflect face 16 on which the interest packet was received and the count. However, interest packet 14 received from node 12c will be received after interest packet 14 has already been satisfied, such that node 12a will respond with a NULL identifier or other identifier that indicates the request with respect to node 12c was not satisfied as discussed in detail with respect to FIG. 10.
FIG. 10 is an exemplary block diagram of a request drop policy procedure for forwarding data packets in accordance with the disclosure. At time t=3, node 12a responds to interest packet 14 received from node 12b by satisfying the request. In this case, node 12a transmits a data packet to node 12b including the requested object. The data packet may include a response identification (response_id) containing a request identification (request_id) and face identification (face_id), i.e., response_id = {{request_id}-{face_id}}. At time t=4, node 12b receives the response, i.e., data packet, from node 12a and transmits the data packet to node 12d according to its PIT entry. In one or more embodiments, node 12b updates the response_id to indicate that node 12b transmitted the data packet to node 12d. Also, at time t=4, node 12a transmits a response to interest packet 14 received from node 12c. In particular, since the interest from node 12d has already been satisfied at time t=3, node 12a responds to node 12c with a data packet including a response identification set to NULL, i.e., response_id=NULL or other identifier indicating a specific request has not been satisfied, and node 12a will drop the PIT entry for the request identification. In general, at node 12, if interest with request_id = {{request_id}-{face_id}} is already satisfied, node 12 will drop the PIT entry and respond with response_id=NULL. Else, node 12 responds to the request/interest with response_id = {request_id}-{face_id}. The processes of FIGS. 9 and 10 thereby help ensure no duplication of data packets in cases where no duplication detection is employed at nodes 12.
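The drop policy of FIGS. 9-10 can be sketched as below; the Node class, the set-based PIT, and the exact duplicate-matching rule are simplifying assumptions made for illustration:

```python
# Sketch of the request-drop policy: satisfy a pending interest once, drop
# its PIT entry, and answer any later duplicate with a NULL response_id.

class Node:
    def __init__(self, name):
        self.name = name
        self.pit = set()  # pending-interest entries: (request_id, face_id)

    def receive_interest(self, request_id, face_id):
        self.pit.add((request_id, face_id))

    def respond(self, request_id, face_id):
        key = (request_id, face_id)
        if key in self.pit:
            self.pit.discard(key)  # satisfy once, then drop the PIT entry
            return {"response_id": (request_id, face_id), "data": "object"}
        # Already satisfied (no PIT entry): respond with NULL and no data.
        return {"response_id": None, "data": None}

node_12a = Node("12a")
node_12a.receive_interest(("12d", "12b"), 3)   # t=2: interest arrives via 12b
first = node_12a.respond(("12d", "12b"), 3)    # t=3: satisfied with the object
second = node_12a.respond(("12d", "12b"), 3)   # t=4: duplicate, answered NULL
print(first["data"], second["response_id"])  # object None
```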
As will be appreciated by one of skill in the art, the concepts described herein may be embodied as a method, data processing system, and/or computer program product. Accordingly, the concepts described herein may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects all generally referred to herein as a "circuit" or "module." Furthermore, the disclosure may take the form of a computer program product on a tangible computer usable storage medium having computer program code embodied in the medium that can be executed by a computer. Any suitable tangible computer readable medium may be utilized including hard disks, CD-ROMs, electronic storage devices, optical storage devices, or magnetic storage devices.
Some embodiments are described herein with reference to flowchart illustrations and/or block diagrams of methods, systems and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable memory or storage medium that can direct a computer or other
programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It is to be understood that the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that
communication may occur in the opposite direction to the depicted arrows.
Computer program code for carrying out operations of the concepts described herein may be written in an object oriented programming language such as Java® or C++. However, the computer program code for carrying out operations of the disclosure may also be written in conventional procedural programming languages, such as the "C" programming language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly
repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, all embodiments can be combined in any way and/or combination, and the present specification, including the drawings, shall be construed to constitute a complete written description of all combinations and subcombinations of the embodiments described herein, and of the manner and process of making and using them, and shall support claims to any such combination or subcombination.
It will be appreciated by persons skilled in the art that the disclosure is not limited to what has been particularly shown and described herein above. In addition, unless mention was made above to the contrary, it should be noted that all of the accompanying drawings are not to scale. A variety of modifications and variations are possible in light of the above teachings, and the scope of the disclosure is limited only by the following claims.
Claims
1. A node (12), the node (12) configured to communicate with a plurality of other nodes, the node (12) comprising:
processing circuitry (30), the processing circuitry (30) including a processor (22) and a memory (18), the memory (18) containing instructions that, when executed by the processor (22), configure the processor (22) to:
determine an influence metric for the node (S100), the influence metric indicating an influence that caching an object at the node will have on the plurality of other nodes; and
determine to cache the object in the memory based at least in part on the determined influence metric (S102).
2. The node (12) of Claim 1, wherein the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes, the interest metric of a respective node being based on a number of faces of the respective node that have received a request for an object.
3. The node (12) of Claim 2, wherein the interest metric of a respective node is further based on a total number of faces of the respective node.
4. The node (12) of Claim 1, wherein the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
5. The node (12) of Claim 2, wherein the memory (18) contains further instructions that, when executed by the processor (22), configure the processor (22) to:
compare an interest metric of the node with an interest metric of each of the plurality of other nodes; and
determine the influence metric of the node based on the comparison.
6. The node (12) of Claim 2, wherein the interest metric is further based on a similarity metric of a respective node, the similarity metric indicating a measure of similarity between the object and at least one other object.
7. The node (12) of Claim 6, wherein the memory (18) contains further instructions that, when executed by the processor (22), cause the processor (22) to classify the object and the at least one other object in an expander graph; and
the similarity metric being based on a Euclidean distance in the expander graph between the object and the at least one other object.
8. The node (12) of Claim 7, wherein the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph.
9. The node (12) of Claim 2, wherein the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data.
10. The node (12) of Claim 1, wherein the memory (18) contains further instructions that, when executed by the processor (22), configure the processor (22) to:
compare the influence metric of the node (12) with a predefined threshold; and the determination to cache the object in memory being based on the influence metric meeting the predefined threshold.
11. The node (12) of Claim 1, wherein the node (12) is a Content-Centric
Network (CCN) node.
12. The node of Claim 1, wherein the object is one of a content object and service instance.
13. A method for a node (12) that is configured to communicate with a plurality of other nodes, the method comprising:
determining, by the node, an influence metric for the node, the influence metric indicating an influence that caching an object at the node will have on the plurality of other nodes (S100); and
determining, by the node, to cache the object at the node based at least in part on the determined influence metric (S102).
14. The method of Claim 13, wherein the influence metric is determined based at least in part on an interest metric for each of the plurality of other nodes, the interest metric of a respective node being based on a number of faces of the respective node that have received a request for an object.
15. The method of Claim 14, wherein the interest metric of a respective node is further based on a total number of faces of the respective node.
16. The method of Claim 13, wherein the interest metric of a respective node is further based on an information topology of the respective node with respect to the object and a total number of faces of the respective node.
17. The method of Claim 14, further comprising:
comparing, by the node, an interest metric of the node with an interest metric of each of the plurality of other nodes; and
determining, by the node, the influence metric of the node based on the comparison.
18. The method of Claim 14, wherein the interest metric is further based on a similarity metric of a respective node, the similarity metric indicating a measure of similarity between the object and at least one other object.
19. The method of Claim 18, further comprising:
classifying, by the node, the object and the at least one other object in an expander graph; and
the similarity metric being based on a Euclidean distance in the expander graph between the object and the at least one other object.
20. The method of Claim 19, wherein the influence that caching the object at the node will have on the plurality of other nodes corresponds to at least one expansion property of the graph.
21. The method of Claim 14, wherein the interest metric for each of the plurality of other nodes is based on Forwarding Information Base (FIB) data.
22. The method of Claim 13, further comprising:
comparing, by the node, the influence metric of the node (12) with a predefined threshold (S110); and
the determination to cache the object at the node (12) being based on the influence metric meeting the predefined threshold.
23. The method of Claim 13, wherein the node (12) is a Content-Centric Network (CCN) node.
24. The method of Claim 13, wherein the object is one of a content object and service instance.
25. A node (12), comprising:
a face module (36) configured to communicate with a plurality of other nodes; and
a processing module (38) configured to:
determine an influence metric for the node, the influence metric indicating an influence that caching an object at the node will have on the plurality of other nodes (S100); and
determine to cache the object in the memory based at least in part on the determined influence metric (S102).
26. A computer readable storage medium storing executable instructions, which when executed by a processor, cause the processor to:
determine an influence metric for the node (S100), the influence metric indicating an influence that caching an object at the node will have on the plurality of other nodes; and
determine to cache the object in the memory based at least in part on the determined influence metric (SI 02).
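The caching decision recited in claims 13 and 22 (compute an influence metric, compare it with a predefined threshold, cache only if the threshold is met) can be illustrated with a short sketch. The aggregation by summation, the function names, and the numeric values below are assumptions for illustration; the claims leave the exact computation of the influence metric open:

```python
def influence_metric(interest_metrics):
    """Aggregate per-neighbor interest metrics into a single influence score.

    A plain sum is used here as one possible aggregation; the claims do not
    prescribe a specific formula.
    """
    return sum(interest_metrics.values())

def should_cache(interest_metrics, threshold):
    """Per claim 22: cache the object only if the influence metric
    meets the predefined threshold."""
    return influence_metric(interest_metrics) >= threshold

# Hypothetical per-neighbor interest values, e.g. derived from FIB data
# as in claim 21.
neighbors = {"node_a": 0.4, "node_b": 0.35, "node_c": 0.1}
should_cache(neighbors, threshold=0.8)  # True: influence 0.85 meets 0.8
```

A node applying this rule would cache the object, since the aggregated influence of 0.85 meets the 0.8 threshold; with sparser neighbor interest the same object would be forwarded without caching.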
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2015/058505 WO2017077363A1 (en) | 2015-11-03 | 2015-11-03 | Selective caching for information-centric network based content delivery |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017077363A1 (en) | 2017-05-11 |
Family
ID=54771162
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2015/058505 WO2017077363A1 (en) | 2015-11-03 | 2015-11-03 | Selective caching for information-centric network based content delivery |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017077363A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107302571A (en) * | 2017-06-14 | 2017-10-27 | 北京信息科技大学 | Information-centric network routing and cache management method based on the fruit fly optimization algorithm |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130198351A1 (en) * | 2012-01-27 | 2013-08-01 | Alcatel-Lucent Usa Inc | Flexible Caching in a Content Centric Network |
WO2015066313A1 (en) * | 2013-10-30 | 2015-05-07 | Interdigital Patent Holdings, Inc. | Enabling information centric networks specialization |
Non-Patent Citations (2)
Title |
---|
ABDALLAH ESLAM G ET AL: "Detection and Prevention of Malicious Requests in ICN Routing and Caching", 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY; UBIQUITOUS COMPUTING AND COMMUNICATIONS; DEPENDABLE, AUTONOMIC AND SECURE COMPUTING; PERVASIVE INTELLIGENCE AND COMPUTING, IEEE, 26 October 2015 (2015-10-26), pages 1741 - 1748, XP032836200, DOI: 10.1109/CIT/IUCC/DASC/PICOM.2015.262 * |
JASON MIN WANG ET AL: "Progressive caching in CCN", GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2012 IEEE, IEEE, 3 December 2012 (2012-12-03), pages 2727 - 2732, XP032375086, ISBN: 978-1-4673-0920-2, DOI: 10.1109/GLOCOM.2012.6503529 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11601522B2 (en) | Data management in an edge network | |
CN105376211B (en) | Probabilistic delayed-forwarding technique without validation in content-centric networks | |
EP3320670B1 (en) | Method and apparatus for pushing data in a content-centric networking (ccn) network | |
CN107078960B (en) | Optimization method of network function link based on flow | |
KR20200040722A (en) | Method for transmitting packet of node and content owner in content centric network | |
US10484271B2 (en) | Data universal forwarding plane for information exchange | |
US9609014B2 (en) | Method and apparatus for preventing insertion of malicious content at a named data network router | |
US10129368B2 (en) | Adjusting entries in a forwarding information base in a content centric network | |
JP6601784B2 (en) | Method, network component, and program for supporting context-aware content requests in an information-oriented network | |
EP2983340B1 (en) | Explicit strategy feedback in name-based forwarding | |
US20160072715A1 (en) | Interest keep alives at intermediate routers in a ccn | |
US10069729B2 (en) | System and method for throttling traffic based on a forwarding information base in a content centric network | |
US9954795B2 (en) | Resource allocation using CCN manifests | |
KR20170064996A (en) | Explicit content deletion commands in a content centric network | |
EP3482558B1 (en) | Systems and methods for transmitting and receiving interest messages | |
KR20150113844A (en) | Multi-object interest using network names | |
WO2017077363A1 (en) | Selective caching for information-centric network based content delivery | |
US10033642B2 (en) | System and method for making optimal routing decisions based on device-specific parameters in a content centric network | |
US10264099B2 (en) | Method and system for content closures in a content centric network | |
JP2016005271A (en) | Assignment of consumer status by interest in content-centric networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15804231 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15804231 Country of ref document: EP Kind code of ref document: A1 |