WO2018182467A1 - Techniques for congestion control in information-centric networks - Google Patents


Info

Publication number
WO2018182467A1
WO2018182467A1 (application PCT/SE2017/050290)
Authority
WO
WIPO (PCT)
Prior art keywords
value
popularity
router
node
message
Prior art date
Application number
PCT/SE2017/050290
Other languages
English (en)
Inventor
Zhang FU
Adeel Mohammad MALIK
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/SE2017/050290
Publication of WO2018182467A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 Routing a service request depending on the request content or context

Definitions

  • Embodiments relate to the field of computer networking; and more specifically, to techniques for congestion control in information-centric networks.
  • ICN Information-Centric Networking
  • IP Internet Protocol
  • a core belief underlying the concept of ICN is that the Internet is commonly used by users who, in most cases, are interested in content and not the location of content. Accordingly, ICN uses content as the primitive and decouples content from its location. Entities can communicate by providing or requesting named data without regard to its location. This facilitates several features such as request aggregation, caching, multi-path forwarding, etc.
  • Request aggregation is a key advantage of the ICN approach, which allows requests for a same content item to be aggregated along the path from the requesters to the data source, forming a data distribution tree. Accordingly, data can be distributed in a multicast fashion, resulting in bandwidth savings and as a consequence, improved scalability as each requester does not require a separate unicast data stream to be originated from the source. Rather, a stream for a particular data object can be unique between two consecutive ICN hops.
  • ICN has two approaches to route requests towards the content source.
  • The first approach is name-based routing (also referred to herein as "coupled routing").
  • This approach utilizes a global name-based routing protocol to share routing information of content in the network.
  • The second approach, decoupled routing, is where names of content objects are decoupled from the routing identifiers.
  • This second approach utilizes a global Name Resolution System (NRS) that routes requests using a two-step process. In the first step, a requested content object name is resolved into a routing identifier. In the second step, the routing identifier is used to route the request towards the content source.
  • the decoupled routing approach can also be executed in a hop-by-hop fashion where every intermediate node between a requester and a source sends a separate name resolution query to the NRS, and the NRS constructs the request path in a stepwise manner.
  • This process, which can be referred to as "non-transparent caching", is somewhat similar to how most Content Delivery Network (CDN) providers currently function.
  • IoT Internet-of-Things
  • an object can also represent a file segment.
  • each video chunk can be packaged in an ICN object.
  • each video chunk can be arbitrarily long.
  • In some embodiments, a method in a first node in an information centric network serving a plurality of clients, wherein the first node is to store popularity values for different content objects, includes receiving, from a client or a second node in the information centric network, a first message comprising the number of interests xj for a content object j. The method also includes updating a first popularity value Pj for content object j by adding the number of interests xj to Pj (i.e., Pj = Pj + xj).
  • The method also includes determining a first condition value Qj, where Qj is a function of Pj.
  • The method also includes determining a second condition value Q'j, where Q'j is a function of P'j, P'j being a second popularity value for object j which has been determined before receiving the first message comprising the number of interests xj. When a predetermined condition is satisfied, the method also includes calculating a value yj (e.g., as the difference Pj - P'j, analogous to the update value described further below).
  • The method also includes sending a second message comprising the value yj to a third node in the information centric network (a sketch of this method is given below).
  • In some embodiments, the predetermined condition is that the first condition value Qj is within a first predefined value range and the second condition value Q'j is within a second predefined value range, wherein the first and second value ranges do not overlap each other.
  • In some embodiments, the first condition value Qj is set as a ratio between the first popularity value Pj and the sum of all stored first popularity values for different objects, and the second condition value Q'j is set as a ratio between the second popularity value P'j and the sum of all stored second popularity values for different objects.
  • the first condition value Qj is set as a ratio between the first popularity value Pj and the sum of the first popularity value Pj and a third popularity value Pk for content object k and wherein the second condition value Q'j is set as a ratio between the second popularity value P'j and the sum of the second popularity value P'j and a corresponding fourth popularity value P'k for content object k.
  • the first and the third node are routers, the third node being a next hop router.
  • In some embodiments, the second node is a router and the first message, received from the second node, is a popularity update message PUj for the content object j comprising the value xj.
  • In some embodiments, the second node is a name resolution server and the first message, received from the name resolution server, is a popularity update message PUj for the content object j comprising the value xj.
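  • The following is a minimal, hypothetical sketch of the summarized method. The choice yj = Pj - P'j and the condition function are assumptions drawn from the "update value" and "condition example" passages later in this description; all names are illustrative.

```python
def process_first_message(state, j, xj, condition, send):
    """Update Pj with the received xj; send yj downstream when warranted.

    state holds "P" (current popularity Pj per object j) and "P_prev"
    (the popularity P'j recorded before this message arrived).
    condition maps a popularity value to a condition value (Qj / Q'j).
    """
    state["P"][j] = state["P"].get(j, 0) + xj            # Pj = Pj + xj
    qj = condition(state["P"][j])                        # first condition value Qj
    qj_prev = condition(state["P_prev"].get(j, 0))       # second condition value Q'j
    if qj != qj_prev:                                    # non-overlapping value ranges
        yj = state["P"][j] - state["P_prev"].get(j, 0)   # assumed: yj = Pj - P'j
        state["P_prev"][j] = state["P"][j]
        send(j, yj)                                      # second message with yj
```

  • For example, calling process_first_message with the level function of "condition example 1" below illustrates when the second message would be emitted.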
  • a first node in an information centric network is to serve a plurality of clients and is further to store popularity values for different content objects.
  • the node comprises a processor coupled to a non-transitory memory storing computer program instructions and the processor is further coupled to a communication interface.
  • When the processor executes the instructions, the first node is caused to receive, from a client or a second node in the information centric network, a first message comprising the number of interests xj for a content object j, and to update a first popularity value Pj for content object j by adding xj to Pj.
  • the first node is also caused to determine a first condition value Qj, where Qj is a function of Pj.
  • The first node is also caused to determine a second condition value Q'j, where Q'j is a function of P'j, P'j being a second popularity value for object j which has been determined before receiving the first message comprising the number of interests xj, and, when a predetermined condition is satisfied, to calculate a value yj (e.g., as the difference Pj - P'j).
  • the first node is also caused to send a second message comprising the value yj to a third node in the information centric network.
  • In some embodiments, the predetermined condition is that the first condition value Qj is within a first predefined value range and the second condition value Q'j is within a second predefined value range, wherein the first and second value ranges do not overlap each other.
  • the first condition value Qj is set as a ratio between the first popularity value Pj and the sum of all stored first popularity values for different objects and wherein the second condition value Q'j is set as a ratio between the second popularity value P'j and the sum of all stored second popularity values for different objects.
  • the first condition value Qj is set as a ratio between the first popularity value Pj and the sum of the first popularity value Pj and a third popularity value Pk for content object k and wherein the second condition value Q'j is set as a ratio between the second popularity value P'j and the sum of the second popularity value P'j and a corresponding fourth popularity value P'k for content object k.
  • the first and the third node are routers, the third node being a next hop router.
  • the second node is a router.
  • the second node is a resolution server.
  • a non-transitory computer-readable storage medium stores instructions which, when executed by a processor of an electronic device, cause the electronic device to perform any of the above methods.
  • a device comprises one or more processors and the non-transitory computer-readable storage medium.
  • a device is to implement a first node in an information centric network comprising a plurality of nodes that serve a plurality of clients.
  • the first node is to store popularity values for different content objects.
  • The device comprises a reception module to receive, from a client or a second node in the information centric network, a first message comprising the number of interests xj for a content object j, and a popularity tracking module to update a first popularity value Pj for content object j based upon xj.
  • the device also comprises a first condition module to determine a first condition value Qj, where Qj is a function of Pj.
  • The device also comprises a second condition module to determine a second condition value Q'j, where Q'j is a function of P'j, P'j being a second popularity value for object j which has been determined before receiving the first message comprising the number of interests xj, and a calculation module to calculate the value yj when a predetermined condition is satisfied.
  • the device also comprises a transmission module to send a second message comprising the value yj to a third node in the information centric network.
  • a system comprises the device of the preceding paragraph and the plurality of nodes of the preceding paragraph.
  • a method in a resolution server implemented by one or more electronic devices is for providing popularity-based routing in an information centric network (ICN) comprising a plurality of routers.
  • the method includes receiving, at the resolution server from a client, an interest request message identifying a first object that the client seeks to acquire.
  • the method also includes updating, by the resolution server, a popularity value corresponding to the first object.
  • the popularity value indicates how many clients are currently interested in the first object.
  • the method also includes determining, by the resolution server, a next hop router that the client is to use for acquiring the first object.
  • the method also includes transmitting, by the resolution server, a message comprising an identifier of the next hop router.
  • the method also includes determining, by the resolution server based at least in part upon the updated popularity value, whether to transmit one or more popularity update (PU) messages, each including the updated popularity value, to one or more routers of the plurality of routers.
  • The method further includes, responsive to determining to transmit the one or more PU messages, transmitting the one or more PU messages to the one or more routers.
  • the one or more routers includes at least the determined next hop router.
  • the one or more routers further includes a second router, wherein the second router and the next hop router are on a data path to be used to provide the first object to the client.
  • said determining whether to transmit the one or more PU messages comprises: determining whether a current condition value, that is based upon the updated popularity value, together with a previous condition value, satisfy one or more defined logical statements.
  • In some embodiments, the method further includes, responsive to determining, by the resolution server, that a path through the ICN for the first object is to be changed: transmitting, to a second router on the path, a message indicating that the second router is to unsubscribe from a third router for the first object and further indicating that the second router is to subscribe to a fourth router for the first object; and transmitting, to the third router, a message indicating that the third router is to unsubscribe from the first object.
  • the method further comprises: receiving, from the fourth router, a message indicating a request for a next hop to be used by the fourth router to reach the first object; and transmitting, to the fourth router, a message comprising an identifier of another router of the plurality of routers or an identifier of a source node that provides the first object.
  • a non-transitory computer readable storage medium has instructions which, when executed by one or more processors of an electronic device, cause the electronic device to implement a resolution server to perform any of the methods described in the preceding two paragraphs.
  • an electronic device comprises one or more processors and the non-transitory computer readable storage medium of the preceding paragraph.
  • an electronic device is to implement a resolution server to provide popularity-based routing in an information centric network (ICN) comprising a plurality of routers.
  • the electronic device includes a reception module to receive, from a client, an interest request message identifying a first object that the client seeks to acquire.
  • the electronic device also includes a popularity tracking module to update a popularity value corresponding to the first object. The popularity value indicates how many clients are currently interested in the first object.
  • the electronic device also includes a next hop determination module to determine a next hop router that the client is to use for acquiring the first object.
  • the electronic device also includes a transmission module to transmit a message comprising an identifier of the next hop router.
  • the electronic device also includes a PU message determination module to determine, based at least in part upon the updated popularity value, whether to transmit one or more popularity update (PU) messages, each including the updated popularity value, to one or more routers of the plurality of routers.
  • PU popularity update
  • A computer program product has computer program logic adapted to put into effect any of the preceding methods.
  • Embodiments described herein can enable efficient congestion control to be employed in ICN networks via the use of tracked popularity values for requested objects.
  • Upon congestion at an ICN router, the ICN router can have visibility into the popularity of the objects whose data packets it handles and can make routing/forwarding decisions (e.g., prioritization of processing) based upon this popularity, which can improve the overall quality of experience for the largest number of users.
  • An NRS can track and use object popularity information to intelligently design paths for object traffic, and/or, upon detection of a possible or actual congestion issue, efficiently re-route individual paths with full knowledge of how many users would be affected by such a change. Accordingly, some embodiments can provide superior quality of experience for the largest possible number of the clients utilizing an ICN network.
  • Figure 1 is a high-level block diagram illustrating some exemplary routing problems in an ICN-adherent system.
  • Figure 2 is a flow diagram illustrating exemplary operations for processing an ICN interest message to enable popularity tracking according to some embodiments.
  • Figure 3 is a flow diagram illustrating exemplary operations for processing a PU update message to enable popularity tracking according to some embodiments.
  • Figure 4 is a high-level block diagram illustrating a weighted fair queuing mechanism that can be used for object popularity-based forwarding according to some embodiments.
  • Figure 5A is a high-level block diagram illustrating a decoupled routing ICN system architecture providing priority-based ICN routing according to some embodiments.
  • Figure 5B is a high-level block diagram illustrating an example of priority-based ICN routing with decoupled routing according to some embodiments.
  • Figure 6 is a flow diagram illustrating operations for handling interest requests in a decoupled routing configuration according to some embodiments.
  • Figure 7 is a sequence diagram illustrating operations for data path modification according to some embodiments.
  • Figure 8 is a flow diagram illustrating operations for utilizing and propagating object popularity information according to some embodiments.
  • Figure 9 is a flow diagram illustrating operations for centralized object popularity tracking according to some embodiments.
  • Figure 10A illustrates a non-limiting example functional block diagram of a server computing device in accordance with some embodiments.
  • Figure 10B illustrates a non-limiting example functional block diagram of a network device in accordance with some embodiments.
  • Figure 11A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments.
  • Figure 11B illustrates an exemplary way to implement a special-purpose network device according to some embodiments.
  • Figure 11C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments.
  • Figure 11D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments.
  • Figure 11E illustrates the simple case where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments.
  • Figure 11F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments.
  • Figure 12 illustrates a general purpose control plane device with centralized control plane (CCP) software according to some embodiments.
  • References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • a request message for an object can be referred to as an "interest message,” and also the corresponding requestor may be referred to as “subscribing” to the object.
  • the terms “request message”, “interest,” and/or “subscribe message” may be used synonymously unless indicated otherwise or unless it is apparent based upon the context of use.
  • Request aggregation is one aspect of ICN that can provide a key advantage over traditional IP-based networks, as it can provide substantial savings in bandwidth and network forwarding processing.
  • request aggregation also introduces additional challenges such as congestion control and Quality of Experience (QoE).
  • When requests are aggregated by an ICN node, the number of requesters behind a particular request is not visible to the next-hop ICN node. This hinders the next-hop ICN node in determining how to prioritize between incoming requests based on their popularity. Accordingly, in a congestion situation this can result in a severely degraded QoE for users.
  • In Content-Centric Networking (CCN), multiple object interests for a same object are aggregated into one interest at their first interest forwarding routers.
  • As a result, intermediate routers along the path have no idea how many users (clients) are interested in the object, and may treat different objects equally by allocating equal bandwidth to them.
  • the existing solution utilized in CCN will likely result in non-optimal global quality of experience (or total throughput).
  • Figure 1 is a high-level block diagram illustrating some exemplary routing problems in an ICN-adherent routing system 100.
  • the four circles on the left-hand side of the Figure having thin borders and diagonal stripes represent clients 111 that are interested in a first object (or "object 1" or "Obj 1"), whereas the single circle with a thick border and horizontal stripes represents a client 112 interested in a second object (or "object 2" or "Obj2").
  • a client can be an electronic device (or a software module or application executed by an electronic device) such as a server device or end user device - e.g., workstations or personal computers (PCs), laptops, tablets, smartphones, etc.
  • all of the clients 111 seeking object 1 may connect to a first router "Rl" 110 as the "first hop" ICN router to indicate their interest in object 1.
  • this connection may not be a direct physical connection between the two, and may or may not pass through other network nodes (e.g., switches, routers) and/or networks.
  • Due to request aggregation, router R1 110 will only send one interest message to router R2 120 for object 1, even though four clients 111 are actively seeking this object. Additionally, router R1 110 will also send one interest message to router R2 120 for object 2 (based upon client 112 sending its interest request to router R1 110 for object 2).
  • However, router R2 120 has no idea how many clients are actually interested in each of these objects. Accordingly, when router R2 120 receives object 1 and object 2 from source 1 130 and source 2 140, respectively, it will forward these objects to router R1 110.
  • Under congestion, router R2 120 may treat these objects - object 1 and object 2 - equally and may thus drop the packets of either (or both) objects somewhat indiscriminately, perhaps according to some configured scheme that is not based upon the importance of the objects. Thus, upon suffering congestion, router R2 120 may decide to drop packets of object 1 while allowing packets of object 2 to be forwarded. This results in the overall quality of experience for the four clients 111 being degraded substantially, while the single client 112 may not suffer a degraded experience.
  • A better outcome results if router R2 120 prioritizes the processing/forwarding of packets of object 1 over object 2 due to the substantially larger number of clients seeking that object, which would result in the four clients 111 interested in object 1 having a sufficient quality of experience while only one client 112 would experience a degraded quality of experience.
  • In this manner, the overall QoE of the routing system 100 can be increased, which can be especially important for live streaming applications, where an ICN router should be able to give preferential treatment to objects belonging to a more popular live stream over objects belonging to a less popular live stream.
  • embodiments disclosed herein can enable improved routing in ICN systems by allowing preferential routing treatment to be given to objects based upon the popularity of these objects.
  • ICN routing can be decoupled from the name resolution and the popularity information can be propagated in the name resolution system.
  • name resolution servers can send the popularity information to the corresponding ICN router(s) along the data path.
  • the name resolution system also controls the data path searching, and can also use the popularity of the objects to find an optimal data path for each object.
  • every hop along the path from the requester to the data source can keep track of the number of users behind a particular request - i.e., the request popularity. In a congestion situation, this information can be useful in prioritizing between different requests. Prioritizing between requests, in this context, could mean allocating bandwidth proportional to the request popularity when serving the request.
  • Some embodiments utilizing these disclosed techniques can be implemented very simply and can easily be adopted by current ICN implementations using small extensions. Some embodiments can also provide, to end users, information describing how popular a particular request (or object) is. For example, in some embodiments providing live video streaming, a user can be provided an indication of how many people are watching the stream at a particular moment, which can be achieved by propagating the popularity rating downstream towards the user along with the content.
  • ICN routers propagate and maintain an indication of the number of clients that are interested in a specific object or flow/live stream. The information can then be used, for example, to implement a traffic prioritization scheme when network congestion exists.
  • Embodiments can utilize coupled routing techniques or decoupled routing techniques, as described in turn below.
  • ICN clients can send interests (i.e., interest messages) for objects directly to the ICN routers. These ICN routers will then forward the interests according to their respective forwarding tables until an interest message reaches a router that has the object in its cache (or reaches a node that is the source of the object). Then, the object is sent back along the same data path to the client(s).
  • FIG. 2 is a flow diagram illustrating exemplary operations 200 for processing an ICN interest message to enable popularity tracking according to some embodiments.
  • the operations in the flow diagrams will be described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagrams can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams.
  • the operations 200 may be performed by an ICN router 110/120 of Figure 1.
  • A particular ICN router may receive an interest message (e.g., interest message 210) either from an ICN client or a neighbor ICN router.
  • the ICN router will first determine whether the interest already exists in its Pending Interest Table (PIT) at block 220.
  • A PIT includes one or more entries, each of which identifies one or more incoming interfaces of a particular interest message, along with an identifier of the object identified by the interest message, so that resulting data packets of the object can be delivered back on the same path via those interfaces toward the requesting client(s).
  • If so, the ICN router can place the sender's identifier (or "ID", such as an interface upon which the interest message was received, etc.) into the entry of that interest in the PIT at block 225.
  • the ICN router maintains an additional "popularity" number in each entry of the PIT. This number indicates how many clients are interested in (e.g., have a current interest in) each object.
  • the corresponding popularity value (or "Pj") in the PIT is increased (e.g., by 1), which may be propagated to the corresponding next hop router as described herein.
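  • As an illustration of the foregoing, a PIT entry extended with such a popularity counter might look as follows; this is a hypothetical sketch, and the field and function names are not part of the original description.

```python
from dataclasses import dataclass, field

@dataclass
class PitEntry:
    """Illustrative PIT entry extended with the popularity counter Pj."""
    object_id: str
    incoming_ifaces: set = field(default_factory=set)  # for returning data packets
    popularity: int = 0       # Pj: how many clients are currently interested
    prev_popularity: int = 0  # P'j: value of Pj when the last PU message was sent

def on_interest(pit, object_id, sender_id):
    """Record a new interest: add the sender and increase Pj by 1."""
    entry = pit.setdefault(object_id, PitEntry(object_id))
    entry.incoming_ifaces.add(sender_id)
    entry.popularity += 1
    return entry
```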
  • an ICN router may have to update the popularity numbers in its own PIT as well as propagate the updates to the appropriate neighbors. Accordingly, in some embodiments a new type of message for ICN can be utilized, which is referred to herein as a Popularity Update (PU) message, though other names can be used as well.
  • While a PU message could be sent upon receipt, by an ICN router, of every interest message or unsubscribe message, this approach is not optimal and may violate the interest aggregation principle of ICN. Accordingly, in some embodiments, a PU message is sent when the corresponding popularity satisfies some set of one or more conditions (or put another way, when the popularity - or a derivation thereof - causes one or more logical statements to evaluate to true), examples of which will be discussed further herein.
  • In some embodiments, the ICN router maintains a condition value that is based upon the popularity value (Pj), and when this condition value (perhaps with other values) satisfies one or more defined logical statements, the PU message will be sent. When the condition value does not satisfy the logical statement(s), the PU message will not be sent, despite the ICN router having received a new interest message or unsubscribe message and having updated the popularity value Pj.
  • A previous condition value (or "Q'j") can be tracked that indicates a previous value of the condition value Qj.
  • The popularity value Pj can be adjusted (e.g., increased by one) and the condition value Qj and previous condition value Q'j can be updated.
  • When the condition is satisfied, the operations 200 further include block 240; otherwise, the flow continues to block 260 and ends.
  • A previous popularity value (or "P'j") can also be tracked, which indicates the value of the popularity value Pj at the time when a last PU message for that PIT entry was transmitted.
  • When a PU message is sent, the previous popularity value P'j can be set to the current popularity value Pj.
  • An Update Value (here denoted as xj) can be included in the PU message.
  • The Update Value can be the difference between the current popularity value Pj and the popularity value at the time when the previous PU message was sent (i.e., the previous popularity value P'j).
  • Thus, the ICN router also maintains P'j and the Update Value xj, which can be determined as equal to Pj - P'j, which is reflected in block 240. Then, at block 245, the PU message can be transmitted to the next hop router for the object, which includes the Update Value xj.
  • In some cases, the update value is a negative number, such as when clients unsubscribe from an object.
  • Upon receiving a PU message carrying an update value xj, the popularity value is updated as Pj = Pj + xj.
  • the current popularity value Pj can be adjusted based upon (e.g., by adding) the received update value xj, and the current and previous condition values Qj and Q'j can similarly be updated based upon the values of the current popularity value Pj and the previous popularity value P'j, respectively.
  • When the current condition value Qj does not satisfy the "condition" (e.g., defined logical statement(s)), the operations 300 may terminate at the finish block 335. However, when the current condition value Qj does satisfy the condition, the operations 300 may continue to block 325, where another update value xj (tracked by this ICN router) can be updated, and the previous popularity value P'j can also be updated. At block 330, a PU message that includes the update value xj can be sent on to the next hop router, and at block 335 the operations can complete.
  • For each PIT entry, the ICN router can thus track a set of popularity-related values (e.g., a current popularity value Pj, previous popularity value P'j, current condition value Qj, previous condition value Q'j).
  • One logical statement ("condition example 1") could be as follows. Multiple levels can be defined based upon a current popularity value - e.g., level 1 is [1, 100), level 2 is [100, 1000), level 3 is [1000, 10000), level 4 is [10000, infinity). Again, the number of levels used and the particular bounds of these levels are merely exemplary, and other numbers of levels and bounds are used in different embodiments.
  • The current condition value Qj can be defined to be the level corresponding to the current popularity value Pj. Thus, if the current popularity value Pj is 50, then the current condition value Qj can be "1" (for level one); likewise, if the current popularity value Pj is 1160, then the current condition value Qj can be "3" (for level three).
  • The previous condition value Q'j can likewise represent the level corresponding to the previous popularity value P'j.
  • The ICN router can determine that it is to send a PU message when Pj and P'j belong to different levels - i.e., when the current condition value Qj is not equal to the previous condition value Q'j.
  • This test can be part of block 235 of Figure 2, block 320 of Figure 3, etc.; a sketch of it is given below.
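  • A minimal sketch of condition example 1, using the exemplary level bounds above; treating a popularity of zero as a separate "level 0" is an added assumption so that the very first interest triggers a PU message.

```python
# Levels [1, 100), [100, 1000), [1000, 10000), [10000, inf); a PU message
# is warranted when Pj and P'j fall into different levels (Qj != Q'j).

def level(p):
    if p < 1:
        return 0  # no recorded interest yet (assumption)
    for lvl, upper in ((1, 100), (2, 1000), (3, 10000)):
        if p < upper:
            return lvl
    return 4

def should_send_pu(pj, pj_prev):
    return level(pj) != level(pj_prev)

assert level(50) == 1 and level(1160) == 3  # the examples given above
assert should_send_pu(105, 99)              # crossed the level 1 / level 2 boundary
```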
  • Another logical statement ("condition example 2") can be as follows: when there are interests in the PIT for multiple objects, the "levels" can also be defined over the ratio between the popularity number of the specific object and the sum of the popularities of all the objects in the PIT.
  • level 1 can be [0, 0.1)
  • level 2 can be [0.1, 0.2)
  • level 3 can be [0.2, 0.3)
  • level 4 can be [0.3, 0.8)
  • level 5 can be [0.8, 1.0].
  • The particular level number can be represented as the current condition value Qj and the previous condition value Q'j, based upon the current popularity value Pj and the previous popularity value P'j, respectively.
  • Thus, the ICN router may send a PU message for object i when the ratio between Pi and the sum of all the popularity values in the PIT moves into a different level, where Pi denotes its current popularity value.
  • In some embodiments, the ratio can also be calculated based on individual (interest) outgoing interfaces (i.e., links), which also include the incoming interface of the corresponding object packets. For example, if interests of object 1 and object 2 are sent to interface 1 (e.g., "if1"), while interests of object 3 and object 4 are sent to interface 2 (e.g., "if2"), then for object 1, the ICN router can send a PU message when the ratio P1 / (P1 + P2) moves into a different level (see the sketch below).
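  • A corresponding sketch of condition example 2; computing the previous ratio from the previous popularity values is an assumption, and the level bounds mirror the exemplary list above.

```python
# Level the ratio between one object's popularity and the sum over the
# objects sharing an outgoing interface (here objects 1 and 2 on "if1").

LEVELS = ((1, 0.0, 0.1), (2, 0.1, 0.2), (3, 0.2, 0.3), (4, 0.3, 0.8), (5, 0.8, 1.01))

def ratio_level(p_obj, p_total):
    ratio = p_obj / p_total if p_total else 0.0
    for lvl, lo, hi in LEVELS:
        if lo <= ratio < hi:
            return lvl
    return 0

def should_send_pu_for_obj1(p1, p2, p1_prev, p2_prev):
    """PU for object 1 when P1/(P1+P2) moves to a different level."""
    return ratio_level(p1, p1 + p2) != ratio_level(p1_prev, p1_prev + p2_prev)
```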
  • Some embodiments can utilize multiple logical statements to determine when a PU message is to be transmitted. For example, in some embodiments the above-presented conditions could be combined, e.g., the PU message is sent when either condition example 1 or condition example 2 is met; or the PU message is sent when both condition example 1 and condition example 2 are met, etc.

Forwarding/Receiving Objects According to Popularity
  • a weighted fair queuing (WFQ) mechanism can be defined as shown in Figure 4, which is a high-level block diagram illustrating a WFQ mechanism of an ICN router 400 that can be used for object popularity-based forwarding according to some embodiments.
  • The illustrated embodiment assumes that a number of "levels" are utilized, such as those described above with regard to the logical statements tested to determine whether a PU message is to be sent by an ICN router - e.g., "level 1", "level 2", and so on.
  • A queue (e.g., queue 425A) is created for each level being utilized in the system.
  • an object "belongs to" level i if Pz belongs to level i (or, if Qz equals z).
  • A classifier unit 420 (e.g., hardware circuitry and/or software logic) classifies arriving packets into the queues according to the level of their object, and a WFQ scheduler unit (e.g., hardware circuitry and/or software logic) selects the packets to be placed into the NIC outgoing queue.
  • Weights are assigned to the queues 425A-425D of different levels.
  • the weight for a specific queue affects the chances that the packets in that queue will be placed into the NIC outgoing queue.
  • the weight for level 1 could be 1; the weight for level 2 could be 2, etc., though in other embodiments the weights can be of different values that are selected based upon the particular usage scenario.
  • Accordingly, the packets in queue 2 will have twice the chance of being placed into the outgoing queue as the packets in queue 1.
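  • The following hypothetical sketch illustrates the idea with a randomized weighted scheduler (a simplification standing in for true WFQ); the weight of level i's queue is i, mirroring the example weights above.

```python
import random
from collections import deque

class PopularityWfq:
    """Illustrative weighted scheduling by popularity level: a level-2
    packet has twice the chance of a level-1 packet of being placed
    into the NIC outgoing queue."""

    def __init__(self, num_levels=4):
        self.queues = {lvl: deque() for lvl in range(1, num_levels + 1)}

    def classify(self, packet, lvl):
        """Classifier: enqueue a packet according to its object's level."""
        self.queues[lvl].append(packet)

    def schedule(self):
        """Scheduler: choose the next packet for the outgoing queue."""
        backlogged = [lvl for lvl, q in self.queues.items() if q]
        if not backlogged:
            return None
        lvl = random.choices(backlogged, weights=backlogged)[0]  # weight = level
        return self.queues[lvl].popleft()
```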
  • the ICN client when an ICN client seeks to acquire some object or subscribe to some live stream, the ICN client first sends an interest request to a Name Resolution System (NRS).
  • the NRS will either provide (to the ICN client) the address of an entity (either a router or the object source) that has the object (to provide to the client), or an address of a first hop ICN router to which the ICN client should send the interest.
  • the NRS does not keep the popularity number of the objects or live streams.
  • In some embodiments, the techniques described above regarding "coupled routing" can be utilized to provide popularity information and allow for popularity-based routing, with the difference being that the ICN client may first get the address of the object source or the first hop ICN router from the NRS.
  • the NRS itself may track object popularities.
  • the popularities of objects can be tracked by the NRS, which can use these popularity values to establish "optimal" (e.g., regarding QoE) paths between the object sources and the ICN clients.
  • "optimal" e.g., regarding QoE
  • Figure 5A is a high-level block diagram illustrating a decoupled routing ICN system architecture providing priority-based ICN routing according to some embodiments.
  • the NRS 510 can communicate with ICN routers 520A-520D.
  • NRS is shown as a central node; however, it can be implemented in a distributed manner (i.e., with different subcomponents executed at different places, with multiple versions of the NRS 510 at different locations, etc.) and/or in different locations than as shown.
  • the NRS 510 is not external to the routing network (shown as a cloud), though in other embodiments the NRS 510 is external to the routing network.
  • an ICN client 530 when an ICN client 530 wants to receive an object, it sends a request to NRS 510, and in response the NRS 510 will update the popularity of that object (similar to as described above as being performed by an ICN router) and send back the address of the first hop ICN router (e.g., router 520A) to the client 530.
  • the client 530 may then transmit an interest message to that ICN router 520A.
  • the ICN router 520A may itself send a request to the NRS 510, and the NRS 510 can send back the address of the next hop router.
  • the router 520A may send the interest message to router 520D, which itself may then send a request to the NRS 510 for a next hop, and so on, until the source 540 is reached. In this way, a path from a client 530 to the source 540 can be established.
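  • A hypothetical sketch of this hop-by-hop path establishment; the nrs.next_hop() and nrs.is_source() helpers are illustrative, not a defined API.

```python
def establish_path(client, object_id, nrs, send_interest):
    """Return the sequence of hops the interest traverses for object_id."""
    path = [client]
    while not nrs.is_source(path[-1], object_id):       # stop at the source
        hop = nrs.next_hop(requester=path[-1], object_id=object_id)
        send_interest(path[-1], hop, object_id)         # forward the interest
        path.append(hop)
    return path
```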
  • Figure 5B is a high-level block diagram illustrating an example of priority-based ICN routing with decoupled routing according to some embodiments.
  • Figure 5B focuses upon ICN routers 570A-570E, as well as a source node 575.
  • a client wants object 1 and sends a request to NRS (not shown).
  • The NRS returns to the client the address of router R1.
  • The client can send an interest message to router R1 for object 1, and in response router R1 will send a request to the NRS, which will send back the address of router R3 as the next hop.
  • Then, router R1 sends the interest message to router R3, and upon receipt, router R3 similarly sends a request to the NRS, which returns the address of router R4 as the next hop. Then, router R3 sends the interest message to router R4, triggering router R4 to send a request to the NRS to get back the address of the source (shown as S1,2).
  • the NRS 510 may proactively configure the routers on that path by sending messages to each router on the path (and not, as described above, only in response to each router's request). This approach can eliminate the time needed for each router to send a request upon its receipt of an interest message, and the time needed for the NRS to send responses back to each requesting ICN router, before the path is established.
  • The NRS can send PU messages to the ICN routers, and the messages can be sent according to some conditions, e.g., "condition example 1" and/or "condition example 2" presented above.
  • the ICN routers can use the popularities to deal with congestion as described above.
  • One exemplary set of operations 600 for an NRS to handle interest requests is shown in Figure 6.
  • an interest request message for object "j" is received.
  • At decision block 610, a determination is made as to whether the interest request message is from a client. If not - e.g., it is from an ICN router - then at block 640 the operations 600 include determining a next hop router (from the perspective of the requesting router) and sending an identifier of the next hop (e.g., a network address) to the requesting router, finishing at block 645.
  • If the message is from a client, the flow can continue to block 615, where the current popularity value of the object is adjusted (e.g., incremented by one), and optionally the current condition value Qj and previous condition value Q'j are also determined.
  • a next hop is determined (e.g., as part of a path computation process), and an identifier of the next hop router (e.g., a network address, name, or other identifier) is sent to the client (allowing the client to then send an interest message using that identifier toward that router).
  • When warranted, the flow includes sending a PU message (with the updated Update Value xj) to one or more of the routers in the system - e.g., one or more of the routers along the path, all routers, etc. - before the flow finishes at block 645; a sketch of this handling is given below.
  • a nearest router (to the client, in terms of hops, geography, etc.) can be selected.
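  • A hypothetical sketch of this handling, roughly following Figure 6; the fan-out of PU messages and the condition function are assumptions (see the condition examples above), and the nrs fields and helper calls are illustrative.

```python
def handle_interest(nrs, object_id, requester, from_client, condition):
    if not from_client:
        # Block 640: request from a router; just return the next hop.
        return nrs.next_hop(requester, object_id)

    # Block 615: adjust the popularity and compare the condition values.
    nrs.P[object_id] = nrs.P.get(object_id, 0) + 1
    changed = condition(nrs.P[object_id]) != condition(nrs.P_prev.get(object_id, 0))

    next_hop = nrs.next_hop(requester, object_id)  # e.g., a nearest router

    if changed:  # decide whether PU messages are warranted
        xj = nrs.P[object_id] - nrs.P_prev.get(object_id, 0)
        nrs.P_prev[object_id] = nrs.P[object_id]
        for router in nrs.routers_on_path(object_id):  # e.g., routers along the path
            nrs.send_pu(router, object_id, xj)
    return next_hop
```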
  • In some embodiments, the NRS can cause ICN routers to change the data path for an object. For example, turning back to Figure 5B, suppose there are 100 clients asking for object 1 that connect to router R1, but only 1 client asking for object 2 that connects to router R2. We also assume that the NRS sets up the path for object 1 as <R1, R3, R4, S1,2> (meaning that the path is from router R1 to router R3 to router R4 to the source S1,2), and sets up the path for object 2 as <R2, R3, R5, S1,2>. Then, suppose at a later point in time, one thousand (1000) clients seeking object 3 connect to router R4.
  • object 1 has to "compete" with object 3 at router R4, and the traffic of object 1 may receive less bandwidth than that of object 3, since it has less number of clients.
  • The NRS can provide a better solution by changing the path of object 1 to <R1, R3, R5, S1,2> in order to remove object 1 from transiting through router R4 and thus eliminate the resource contention arising due to the traffic of objects 1 and 3.
  • To effect this change, the NRS can instruct router R3 to "unsubscribe" from router R4 for object 1 and to "subscribe" to router R5. Similarly, the NRS can instruct router R5 to subscribe to S1,2.
  • An exemplary sequence diagram for such a data path modification is shown in Figure 7.
  • Figure 7 shows the NRS 510, router R3 570C, router R4 570D, router R5 570E, and source S1,2 575.
  • In this example, the NRS 510 changes the path of object 1 from <R1, R3, R4, S1,2> to <R1, R3, R5, S1,2> in order to remove the data of object 1 from transiting through router R4 570D.
  • the NRS 510 can send a message 705 instructing R3 570C to unsubscribe from router R4 570D and subscribe to router R5 570E.
  • Router R3 570C can then send a message 710 to router R4 570D seeking to unsubscribe from object 1, and send a message 715 to router R5 570E to subscribe to object 1.
  • The NRS 510 can also send a message 720 to router R4 570D, instructing it to unsubscribe from object 1.
  • In turn, router R4 570D can send a message 725 to source S1,2 575 indicating that it is unsubscribing from object 1.
  • the router R5 570E can then send a request message 730 for object 1 to the NRS 510 seeking next hop information.
  • The NRS 510 can send a message 735 including an identifier of the next hop (e.g., an address of source S1,2 575).
  • The router R5 570E can then send a subscribe message 740 for object 1 to source S1,2 575. At this point, the change has been effected and the new path is operational.
  • messages 720 and 725 can be sent substantially simultaneously with (or even before) messages 710 and 715.
  • messages 730, 735, and 740 might be skipped in cases where router R5 570E may have already subscribed to object 1 with source Sl,2 575 (e.g., there could be some interest from other routers).
  • the particular messages and ordering is intended to be merely exemplary of one particular set of operations in one particular setting, and other operations and settings can be utilized in various embodiments.
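  • Rendered as plain function calls, the signalling above might look as follows; this is a hypothetical sketch, and the nrs.send() and nrs.next_hop() helpers are illustrative only.

```python
def change_path_for_object1(nrs):
    nrs.send("R3", ("unsubscribe_from", "R4", "object1"))  # part of message 705 -> 710
    nrs.send("R3", ("subscribe_to", "R5", "object1"))      # part of message 705 -> 715
    nrs.send("R4", ("unsubscribe", "object1"))             # message 720 -> 725

def on_next_hop_request(nrs, requester="R5", object_id="object1"):
    # Messages 730/735: R5 asks for its next hop; the NRS answers with the
    # source S1,2, to which R5 then subscribes (message 740).
    return nrs.next_hop(requester, object_id)
```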
  • Computing data paths through a network may involve heavy computation, as it can involve solving a non-linear optimization problem. Therefore, in some embodiments a data path is not computed every single time an interest request is received by the NRS. Instead, a path may be computed periodically, or when some other condition is met, e.g., condition example 1 as disclosed above.
  • a simple (or “approximate”) algorithm can be utilized.
  • The following greedy algorithm can be of use in such scenarios: for a specific object, the next hop router with the smallest aggregated client number is chosen as the next hop router in the data path. For instance, with regard to router R3 in Figure 5B, before choosing the next hop router for object 1, the aggregated client number in router R4 is 1000, whereas the aggregated client number of router R5 is 1. Therefore, using the greedy algorithm, router R5 can be chosen as the next hop router of router R3 for object 1 (as sketched below).
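  • A minimal sketch of this greedy choice, using the numbers from the example; the candidates mapping is illustrative.

```python
# Among the candidate next-hop routers for an object, pick the one with the
# smallest aggregated client number. candidates maps router id -> count.

def greedy_next_hop(candidates):
    return min(candidates, key=candidates.get)

# Numbers from the example: R4 aggregates 1000 clients, R5 aggregates 1,
# so R5 is chosen as router R3's next hop for object 1.
assert greedy_next_hop({"R4": 1000, "R5": 1}) == "R5"
```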
  • Figure 8 is a flow diagram illustrating operations 800 for utilizing and propagating object popularity information according to some embodiments.
  • the operations 800 of this flow can be performed, for example, by an ICN router as described herein.
  • the operations 800 include, at block 805, receiving, from a client or a second node in an information centric network, a first message comprising the number of interests xj for a content object j.
  • The operations 800 further include, at block 815, determining a first condition value Qj, where Qj is a function of Pj.
  • The operations 800 further include, at block 820, determining a second condition value Q'j, where Q'j is a function of P'j, P'j being a second popularity value for object j which has been determined before receiving the first message comprising the number of interests xj.
  • the operations 800 further include, at block 830, sending a second message comprising the value yj to a third node in the information centric network.
  • Figure 9 is a flow diagram illustrating operations 900 for centralized object popularity tracking according to some embodiments.
  • The operations 900 of this flow can be performed, for example, by an NRS 510 (or "resolution server") as described herein.
  • the operations 900 include, at block 905, receiving, from a client, an interest request message identifying a first object that the client seeks to acquire.
  • the operations 900 further include, at block 910, updating a popularity value corresponding to the first object, wherein the popularity value indicates how many clients are currently interested in the first object.
  • the operations 900 further include, at block 915, determining a next hop router that the client is to use for acquiring the first object.
  • the operations 900 further include, at block 920, transmitting a message comprising an identifier of the next hop router.
  • the operations 900 further include, at block 925, determining, based at least in part upon the updated popularity value, whether to transmit one or more popularity update (PU) messages, each including the updated popularity value, to one or more routers of the plurality of routers.
  • Figure 10A illustrates a non-limiting example functional block diagram of a server computing device in accordance with some embodiments.
  • Figure 10B illustrates a non-limiting example functional block diagram of a network device in accordance with some embodiments.
  • It is not necessary that each of these modules be implemented as a physically separate unit. Some or all modules may be combined in a physical unit. Also, the modules need not be implemented strictly in hardware. It is envisioned that the units may be implemented through a combination of hardware and software.
  • For example, either of the electronic devices 1000 and 1050 may include one or more central processing units executing program instructions stored in a non-transitory storage medium or in firmware to perform the functions of the modules.
  • the electronic device 1000 can implement a resolution server module 1045, which can include a reception module 1005, a popularity tracking module 1010, a next hop determination module 1015, a transmission module 1020, and/or a PU message determination module 1025.
  • the reception module 1005 can be adapted for receiving, from a client, an interest request message identifying a first object that the client seeks to acquire.
  • the popularity tracking module 1010 can be adapted for updating a popularity value corresponding to the first object, wherein the popularity value indicates how many clients are currently interested in the first object.
  • the next hop determination module 1015 can be adapted for determining a next hop router that the client is to use for acquiring the first object.
  • the transmission module 1020 can be adapted for transmitting a message comprising an identifier of the next hop router.
  • the PU message determination module 1025 can be adapted for determining, based at least in part upon the updated popularity value, whether to transmit one or more popularity update (PU) messages, each including the updated popularity value, to one or more routers of the plurality of routers.
  • the electronic device 1050 can implement a router module 1085, which can include a reception module 1055, a popularity tracking module 1060, a first condition module 1065, a second condition module 1070, a calculation module 1075, and/or a transmission module 1080.
  • The reception module 1055 can be adapted for receiving, from a client or a second node in an information centric network, a first message comprising the number of interests xj for a content object j, and the popularity tracking module 1060 can be adapted for updating a first popularity value Pj for content object j based upon xj.
  • the first condition module 1065 can be adapted for determining a first condition value Qj, where Qj is a function of Pj.
  • the second condition module 1070 can be adapted for determining a second condition value Q'j, where Q'j is a function of P'j, P'j being a second popularity value for object j which has been determined before receiving the first message comprising the number of interests xj.
  • The transmission module 1080 can be adapted for sending a second message comprising the value yj to a third node in the information centric network, and the calculation module 1075 can be adapted for calculating the value yj.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • An electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • For example, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • Figure 11A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments.
  • Figure 11A shows NDs 1100A-1100H, and their connectivity by way of lines between 1100A-1100B, 1100B-1100C, 1100C-1100D, 1100D-1100E, 1100E-1100F, 1100F-1100G, and 1100A-1100G, as well as between 1100H and each of 1100A, 1100C, 1100D, and 1100G.
  • These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 1100A, 1100E, and 1100F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs, while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 11A are: 1) a special-purpose network device 1102 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 1104 that uses common off-the-shelf (COTS) processors and a standard OS.
  • the special-purpose network device 1102 includes networking hardware 1110 comprising compute resource(s) 1112 (which typically include a set of one or more processors), forwarding resource(s) 1114 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 1116 (sometimes called physical ports), as well as non-transitory machine readable storage media 1118 having stored therein networking software 1120 and router code 1190A that can be used to implement an ICN router as described herein.
  • a physical NI is hardware in a ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs 1100A-1100H.
  • Each of the networking software instance(s) 1122, and that part of the networking hardware 1110 that executes that networking software instance, form a separate virtual network element 1130A-1130R.
  • Each of the virtual network element(s) (VNEs) 1130A-1130R includes a control communication and configuration module 1132A-1132R (sometimes referred to as a local control module or control communication module) and a set of one or more forwarding table(s) 1134A-1134R, such that a given virtual network element (e.g., 1130A) includes the control communication and configuration module (e.g., 1132A), a set of one or more forwarding table(s) (e.g., 1134A), and that portion of the networking hardware 1110 that executes the virtual network element (e.g., 1130A).
  • the special-purpose network device 1102 is often physically and/or logically considered to include: 1) a ND control plane 1124 (sometimes referred to as a control plane) comprising the compute resource(s) 1112 that execute the control communication and configuration module(s) 1132A-1132R; and 2) a ND forwarding plane 1126 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 1114 that utilize the forwarding table(s) 1134A-1134R and the physical NIs 1116.
  • the ND control plane 1124 (the compute resource(s) 1112 executing the control communication and configuration module(s) 1132A-1132R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 1134A-1134R, and the ND forwarding plane 1126 is responsible for receiving that data on the physical NIs 1116 and forwarding that data out the appropriate ones of the physical NIs 1116 based on the forwarding table(s) 1134A-1134R.
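As a purely hypothetical illustration of this division of labor, the sketch below lets a control-plane routine program routes into a shared forwarding table that a forwarding-plane routine then consults by longest-prefix match; the function names and table layout are invented here and do not appear in the figures.

    import ipaddress

    forwarding_table = {}  # programmed by the control plane, read by the forwarding plane

    def program_route(prefix: str, next_hop: str, out_ni: str):
        """Control-plane side: store a routing decision in the forwarding table."""
        forwarding_table[ipaddress.ip_network(prefix)] = (next_hop, out_ni)

    def forward(dst_ip: str):
        """Forwarding-plane side: longest-prefix match against the programmed table."""
        addr = ipaddress.ip_address(dst_ip)
        matches = [net for net in forwarding_table if addr in net]
        if not matches:
            return None  # no route; the packet would be dropped
        best = max(matches, key=lambda net: net.prefixlen)
        return forwarding_table[best]  # (next hop, outgoing physical NI)

    program_route("10.0.0.0/8", "10.1.1.1", "NI-0")
    program_route("10.2.0.0/16", "10.2.0.1", "NI-1")
    print(forward("10.2.3.4"))  # ('10.2.0.1', 'NI-1'): the longer prefix wins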
  • Figure 11B illustrates an exemplary way to implement the special-purpose network device 1102 according to some embodiments.
  • Figure 11B shows a special-purpose network device including cards 1138 (typically hot pluggable). While in some embodiments the cards 1138 are of two types (one or more that operate as the ND forwarding plane 1126 (sometimes called line cards), and one or more that operate to implement the ND control plane 1124 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card).
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • the general purpose network device 1104 includes hardware 1140 comprising a set of one or more processor(s) 1142 (which are often COTS processors) and network interface controller(s) 1144 (NICs; also known as network interface cards) (which include physical NIs 1146), as well as non-transitory machine readable storage media 1148 having stored therein software 1150 including router code 1190B that can be used to implement an ICN router as described herein.
  • the processor(s) 1142 execute the software 1150 to instantiate one or more sets of one or more applications 1164A-1164R.
  • the virtualization layer 1154 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 1162A-1162R called software containers that may each be used to execute one (or more) of the sets of applications 1164A-1164R, where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run, and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the virtualization layer 1154 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 1164A-1164R is run on top of a guest operating system within an instance 1162A-1162R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • as a unikernel can be implemented to run directly on hardware 1140, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 1154, unikernels running within software containers represented by instances 1162A-1162R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
  • each set of applications 1164A-1164R, the corresponding virtualization construct (e.g., instance 1162A-1162R) if implemented, and that part of the hardware 1140 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 1160A-1160R.
  • the virtual network element(s) 1160A-1160R perform similar functionality to the virtual network element(s) 1130A-1130R - e.g., similar to the control communication and configuration module(s) 1132A and forwarding table(s) 1134A (this virtualization of the hardware 1140 is sometimes referred to as network function virtualization (NFV)).
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE).
  • while embodiments are described with each instance 1162A-1162R corresponding to one VNE 1160A-1160R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 1162A-1162R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the virtualization layer 1154 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 1162A-1162R and the NIC(s) 1144, as well as optionally between the instances 1162A-1162R; in addition, this virtual switch may enforce network isolation between the VNEs 1160A-1160R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
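The isolation behavior of such a virtual switch can be pictured with the following minimal sketch; the VNE names and VLAN assignments are hypothetical, and the policy check is reduced to a single membership comparison.

    # Frames are forwarded between instances only when both endpoints share a VLAN,
    # emulating the policy isolation a physical Ethernet switch would enforce.
    vlan_of = {"VNE-A": 100, "VNE-B": 100, "VNE-C": 200}  # hypothetical assignments

    def vswitch_may_forward(src: str, dst: str) -> bool:
        return vlan_of.get(src) == vlan_of.get(dst)

    assert vswitch_may_forward("VNE-A", "VNE-B")        # same VLAN: allowed
    assert not vswitch_may_forward("VNE-A", "VNE-C")    # isolated by policy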
  • the third exemplary ND implementation in Figure 11A is a hybrid network device 1106, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 1102) could provide for para-virtualization to the networking hardware present in the hybrid network device 1106.
  • each of the VNEs (e.g., VNE(s) 1130A-1130R, VNEs 1160A-1160R) receives data on the physical NIs (e.g., 1116, 1146) and forwards that data out the appropriate ones of the physical NIs (e.g., 1116, 1146).
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet, where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
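For illustration, the header fields listed above can be gathered into a single key; the tuple layout and the parsed-packet representation below are assumptions of this sketch, not a definition used by the embodiments.

    from collections import namedtuple

    FlowKey = namedtuple("FlowKey",
                         ["src_ip", "dst_ip", "src_port", "dst_port", "proto", "dscp"])

    def flow_key(pkt: dict) -> FlowKey:
        """Extract the IP header information named above from a parsed packet."""
        return FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
                       pkt["dst_port"], pkt["proto"], pkt.get("dscp", 0))

    key = flow_key({"src_ip": "192.0.2.1", "dst_ip": "198.51.100.2",
                    "src_port": 49152, "dst_port": 80, "proto": "TCP"})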
  • Figure 11C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments.
  • Figure 11C shows VNEs 1170A.1-1170A.P (and optionally VNEs 1170A.Q-1170A.R) implemented in ND 1100A and VNE 1170H.1 in ND 1100H.
  • VNEs 1170A.1-P are separate from each other in the sense that they can receive packets from outside ND 1100A and forward packets outside of ND 1100A; VNE 1170A.1 is coupled with VNE 1170H.1, and thus they communicate packets between their respective NDs; VNEs 1170A.2-1170A.3 may optionally forward packets between themselves without forwarding them outside of the ND 1100A; and VNE 1170A.P may optionally be the first in a chain of VNEs that includes VNE 1170A.Q followed by VNE 1170A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 11C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
  • the NDs of Figure 11A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including Voice Over Internet Protocol (VOIP) phones, Global Positioning System (GPS) units, wearable devices, gaming systems, set-top boxes, and Internet-enabled household appliances) may be coupled to the network to communicate with each other (directly or through servers) and/or to access content and/or services, for example over the Internet or over virtual private networks (VPNs) overlaid on the Internet.
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs.
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 11A may also host one or more such servers (e.g., in the case of the general purpose network device 1104, one or more of the software instances 1162A-1162R may operate as servers; the same would be true for the hybrid network device 1106; in the case of the special-purpose network device 1102, one or more such servers could also be run on a virtualization layer executed by the compute resource(s) 1112), in which case the servers are said to be co-located with the VNEs of that ND.
  • a virtual network is a logical abstraction of a physical network (such as that in Figure 11 A) that provides network services (e.g., L2 and/or L3 services).
  • a virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
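The overlay idea can be sketched as a simple encapsulate/decapsulate pair; the dictionary "header" below is a deliberate simplification for illustration and is not an actual GRE, L2TP, or IPSec implementation.

    def encapsulate(frame: bytes, src_nve: str, dst_nve: str) -> dict:
        """Network-facing side of an NVE: wrap an L2 frame for transit over the L3 underlay."""
        return {"outer_src": src_nve, "outer_dst": dst_nve, "payload": frame}

    def decapsulate(tunneled: dict) -> bytes:
        """Remote NVE: strip the outer header and recover the original frame."""
        return tunneled["payload"]

    pkt = encapsulate(b"...ethernet frame...", "10.0.0.1", "10.0.0.2")
    assert decapsulate(pkt) == b"...ethernet frame..."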
  • a network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network.
  • a virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, or a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND).
  • a virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
  • Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IPVPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network).
  • Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing).
  • Figure 11D illustrates a network with a single network element on each of the NDs of Figure 11A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments.
  • Figure 11D illustrates network elements (NEs) 1170A-1170H with the same connectivity as the NDs 1100A-1100H of Figure 11A.
  • Figure 11D illustrates that the distributed approach 1172 distributes responsibility for generating the reachability and forwarding information across the NEs 1170A-1170H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 1132A-1132R of the ND control plane 1124 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP)), Label Distribution Protocol (LDP), and Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics.
  • thus, the NEs 1170A-1170H (e.g., the compute resource(s) 1112 executing the control communication and configuration module(s) 1132A-1132R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information.
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 1124.
  • the ND control plane 1124 programs the ND forwarding plane 1126 with information (e.g., adjacency and route information) based on the routing structure(s); for example, the ND control plane 1124 programs the adjacency and route information into one or more forwarding table(s) 1134A-1134R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 1126. For layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 1102, the same distributed approach 1172 can be implemented on the general purpose network device 1104 and the hybrid network device 1106.
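One way to picture this programming step is as a resolution pass from the routing structures to the forwarding table(s): per prefix, pick the best route and install only the result. The routes, metrics, and single-metric selection rule below are invented for illustration and deliberately ignore details such as administrative distance.

    # Hypothetical RIB: prefix -> list of (protocol, metric, next hop) candidates.
    rib = {
        "10.0.0.0/8": [("OSPF", 20, "10.1.1.1"), ("BGP", 200, "10.9.9.9")],
        "192.168.0.0/16": [("IS-IS", 15, "10.1.1.2")],
    }

    def build_fib(rib: dict) -> dict:
        """Resolve the RIB into a FIB by keeping the lowest-metric route per prefix."""
        fib = {}
        for prefix, routes in rib.items():
            _proto, _metric, next_hop = min(routes, key=lambda r: r[1])
            fib[prefix] = next_hop  # this is what gets programmed into the forwarding plane
        return fib

    print(build_fib(rib))  # {'10.0.0.0/8': '10.1.1.1', '192.168.0.0/16': '10.1.1.2'}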
  • Figure 11D illustrates a centralized approach 1174 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination.
  • the illustrated centralized approach 1174 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 1176 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • the centralized control plane 1176 has a south bound interface 1182 with a data plane 1180 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 1170A-1170H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 1176 includes a network controller 1178, which includes a centralized reachability and forwarding information module 1179 that determines the reachability within the network and distributes the forwarding information to the NEs 1170A-1170H of the data plane 1180 over the south bound interface 1182 (which may use the OpenFlow protocol).
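A toy model of this centralized distribution, with invented class and message names rather than actual OpenFlow encodings, might look as follows.

    class NE:
        """Data-plane element: holds only the entries pushed to it."""
        def __init__(self, name: str):
            self.name, self.table = name, {}
        def install(self, prefix: str, next_hop: str):
            self.table[prefix] = next_hop

    class Controller:
        """Centralized control plane: computes once, pushes to every NE southbound."""
        def __init__(self, nes):
            self.nes = nes
        def distribute(self, routes: dict):
            for ne in self.nes:
                for prefix, next_hop in routes.items():
                    ne.install(prefix, next_hop)  # models the south bound interface

    nes = [NE("NE-A"), NE("NE-B")]
    Controller(nes).distribute({"10.0.0.0/8": "10.1.1.1"})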
  • the network intelligence is centralized in the centralized control plane 1176 executing on electronic devices that are typically separate from the NDs.
  • each of the control communication and configuration module(s) 1132A-1132R of the ND control plane 1124 typically includes a control agent that provides the VNE side of the south bound interface 1182.
  • the ND control plane 1124 (the compute resource(s) 1112 executing the control communication and configuration module(s) 1132A-1132R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 1176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 1179 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 1132A-1132R, in addition to communicating with the centralized control plane 1176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 1174, but may also be considered a hybrid approach).
  • while the above example uses the special-purpose network device 1102, the same centralized approach 1174 can be implemented with the general purpose network device 1104 (e.g., each of the VNEs 1160A-1160R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 1176 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 1179; it should be understood that in some embodiments of the invention, the VNEs 1160A-1160R, in addition to communicating with the centralized control plane 1176, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 1106.
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 11D also shows that the centralized control plane 1176 has a north bound interface 1184 to an application layer 1186, in which resides application(s) 1188.
  • the centralized control plane 1176 has the ability to form virtual networks 1192 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 1170A-1170H of the data plane 1180 being the underlay network)) for the application(s) 1188.
  • thus, the centralized control plane 1176 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • while Figure 11D shows the distributed approach 1172 separate from the centralized approach 1174, the effort of network control may be distributed differently, or the two combined, in certain embodiments of the invention.
  • for example: 1) embodiments may generally use the centralized approach 1174 (e.g., SDN), but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 1174, but may also be considered a hybrid approach.
  • while Figure 11D illustrates the simple case where each of the NDs 1100A-1100H implements a single NE 1170A-1170H, the network control approaches described with reference to Figure 11D also work for networks where one or more of the NDs 1100A-1100H implement multiple VNEs (e.g., VNEs 1130A-1130R, VNEs 1160A-1160R, those in the hybrid network device 1106).
  • the network controller 1178 may also emulate the implementation of multiple VNEs in a single ND.
  • the network controller 1178 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 1192 (all in the same one of the virtual network(s) 1192, each in different ones of the virtual network(s) 1192, or some combination).
  • the network controller 1178 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 1176 to present different VNEs in the virtual network(s) 1192 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • Figures 11E and 11F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 1178 may present as part of different ones of the virtual networks 1192.
  • Figure 11E illustrates the simple case of where each of the NDs 1100A-1100H implements a single NE 1170A-1170H (see Figure 11D), but the centralized control plane 1176 has abstracted multiple of the NEs in different NDs (the NEs 1170A-1170C and 1170G-1170H) into (to represent) a single NE 1170I in one of the virtual network(s) 1192 of Figure 11D, according to some embodiments.
  • Figure 11E shows that in this virtual network, the NE 1170I is coupled to NE 1170D and 1170F, which are both still coupled to NE 1170E.
  • Figure 11F illustrates a case where multiple VNEs (VNE 1170A.1 and VNE 1170H.1) are implemented on different NDs (ND 1100A and ND 1100H) and are coupled to each other, and where the centralized control plane 1176 has abstracted these multiple VNEs such that they appear as a single VNE 1170T within one of the virtual networks 1192 of Figure 11D, according to some embodiments.
  • the abstraction of a NE or VNE can span multiple NDs.
  • the electronic device(s) running the centralized control plane 1176, and thus the network controller 1178 including the centralized reachability and forwarding information module 1179, may be implemented in a variety of ways (e.g., as a special purpose device, a general-purpose (e.g., COTS) device, or a hybrid device). These electronic device(s) would similarly include compute resource(s), a set of one or more physical NICs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software.
  • Figure 12 illustrates a general purpose control plane device 1204 including hardware 1240 comprising a set of one or more processor(s) 1242 (which are often COTS processors) and network interface controller(s) 1244 (NICs; also known as network interface cards) (which include physical NIs 1246), as well as non-transitory machine readable storage media 1248 having stored therein centralized control plane (CCP) software 1250 and NRS software 1251 which, when executed, can implement the NRS module 1299 that performs operations of a NRS 510 disclosed herein.
  • the processor(s) 1242 typically execute software to instantiate a virtualization layer 1254; e.g., in one embodiment the virtualization layer 1254 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 1262A-1262R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 1254 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 1262A-1262R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application, and the unikernel can run directly on hardware 1240, directly on a hypervisor represented by virtualization layer 1254 (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container represented by one of instances 1262A-1262R.
  • in some embodiments, an instance of the CCP software 1250 (illustrated as CCP instance 1276A) is executed (e.g., within the instance 1262A) on the virtualization layer 1254.
  • in embodiments where compute virtualization is not used, the CCP instance 1276A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general purpose control plane device 1204.
  • the CCP instance 1276A includes a network controller instance 1278, which can implement an NRS module 1299 to perform operations of an NRS 510 as disclosed herein.
  • the network controller instance 1278 includes a centralized reachability and forwarding information module instance 1279 (which is a middleware layer providing the context of the network controller 1178 to the operating system and communicating with the various NEs), and a CCP application layer 1280 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces).
  • this CCP application layer 1280 within the centralized control plane 1176 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
  • the centralized control plane 1176 transmits relevant messages to the data plane 1180 based on CCP application layer 1280 calculations and middleware layer mapping for each flow.
  • a flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers.
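The notion of a flow as a header bit-pattern can be sketched with a wildcard match, where None stands for a don't-care field; the field names below are illustrative.

    def matches(pattern: dict, headers: dict) -> bool:
        """True if every non-wildcard field of the pattern equals the packet header."""
        return all(headers.get(f) == v for f, v in pattern.items() if v is not None)

    # Traditional IP forwarding as a degenerate flow: only the destination IP is fixed.
    ip_flow = {"dst_ip": "198.51.100.2", "src_ip": None, "dst_port": None}
    assert matches(ip_flow, {"dst_ip": "198.51.100.2", "src_ip": "192.0.2.1"})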
  • Different NDs/NEs/VNEs of the data plane 1180 may receive different messages, and thus different forwarding information.
  • the data plane 1180 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
  • Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets.
  • the model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
  • Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched).
  • Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet.
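A match-plus-actions table of this kind can be sketched as an ordered list of entries, with the first matching entry selected as discussed above; the entries themselves are invented for illustration.

    # (match criteria with None as wildcard, actions to take on a matching packet)
    entries = [
        ({"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 100}, ["output:port2"]),
        ({"dst_mac": None, "vlan": 100}, ["flood"]),  # wildcarded destination MAC
        ({}, ["drop"]),                               # catch-all entry
    ]

    def classify(headers: dict):
        for criteria, actions in entries:
            if all(headers.get(f) == v for f, v in criteria.items() if v is not None):
                return actions  # e.g., push a header, forward on a port, flood, or drop
        return None

    print(classify({"dst_mac": "11:22:33:44:55:66", "vlan": 100}))  # ['flood']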
  • when an unknown packet (for example, a "missed packet" or a "match-miss" as used in OpenFlow parlance) arrives at the data plane 1180, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 1176.
  • the centralized control plane 1176 will then program forwarding table entries into the data plane 1180 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 1180 by the centralized control plane 1176, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
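This miss-then-program cycle can be sketched as follows; the flow key, action, and function names are placeholders rather than actual OpenFlow messages.

    flow_table = {}

    def controller_handle_miss(key):
        """Centralized control plane: decide on an action and program the entry."""
        actions = ["output:port1"]   # the actual decision logic is elided here
        flow_table[key] = actions    # programmed into the data plane
        return actions

    def data_plane_process(key):
        if key in flow_table:
            return flow_table[key]           # fast path: entry already programmed
        return controller_handle_miss(key)   # slow path: match-miss punted upward

    data_plane_process(("198.51.100.2", 80))  # first packet: miss, entry installed
    data_plane_process(("198.51.100.2", 80))  # next packet: handled in the data plane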
  • a network interface may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address).
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes, where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Each VNE (e.g., a virtual router or a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable.
  • each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s).
  • Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
  • interfaces that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing).
  • the subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND.
  • a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.


Abstract

A first node in an information centric network stores popularity values for different content objects. The first node receives, from a client or from a second node in the information centric network, a first message comprising the number of interests for a content object, and updates a first popularity value for the content object. The first node determines a first condition value that depends on the first popularity value, and a second condition value that depends on a previous popularity value for the object. When the relation between the first condition value and the second condition value satisfies a predefined condition, the first node calculates the difference between the first and second popularity values, and also sets the second popularity value equal to the first popularity value. The first node sends a second message comprising the difference value to a third node in the information centric network.
PCT/SE2017/050290 2017-03-27 2017-03-27 Techniques for congestion control in information-centric networks WO2018182467A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050290 WO2018182467A1 (fr) 2017-03-27 2017-03-27 Techniques for congestion control in information-centric networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2017/050290 WO2018182467A1 (fr) 2017-03-27 2017-03-27 Techniques for congestion control in information-centric networks

Publications (1)

Publication Number Publication Date
WO2018182467A1 (fr)

Family

ID=58609943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2017/050290 WO2018182467A1 (fr) 2017-03-27 2017-03-27 Techniques for congestion control in information-centric networks

Country Status (1)

Country Link
WO (1) WO2018182467A1 (fr)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8862814B2 (en) * 2011-08-10 2014-10-14 International Business Machines Corporation Video object placement for cooperative caching
US20130227048A1 (en) * 2012-02-28 2013-08-29 Futurewei Technologies, Inc. Method for Collaborative Caching for Content-Oriented Networks
US20150254249A1 (en) * 2014-03-10 2015-09-10 Palo Alto Research Center Incorporated System and method for ranking content popularity in a content-centric network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11444996B2 (en) 2020-04-20 2022-09-13 Cisco Technology, Inc. Two-level cache architecture for live video streaming through hybrid ICN
US11843650B2 (en) 2020-04-20 2023-12-12 Cisco Technology, Inc. Two-level cache architecture for live video streaming through hybrid ICN


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17719024

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17719024

Country of ref document: EP

Kind code of ref document: A1