EP4331200A2 - System, classifier and method for network policy-based traffic management of data flows - Google Patents

System, classifier and method for network policy-based traffic management of data flows

Info

Publication number
EP4331200A2
Authority
EP
European Patent Office
Prior art keywords
data flow
network
incoming data
cloud
classification identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22796762.7A
Other languages
English (en)
French (fr)
Inventor
Romain Lenglet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aviatrix Systems Inc
Original Assignee
Aviatrix Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aviatrix Systems Inc filed Critical Aviatrix Systems Inc
Publication of EP4331200A2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]

Definitions

  • Embodiments of the disclosure relate to the field of networking. More specifically, one embodiment of the disclosure relates to a cloud network infrastructure that reliably associates applications pertaining to a cloud instance with data flows propagating over the cloud network.
  • IaaS Infrastructure as a Service
  • VPCs virtual private cloud networks
  • one software platform features a controller and a group of gateways, which are deployed as software components of a VPC and are communicatively coupled to each other.
  • the controller and gateways may be configured to support the transmission of a data flow (e.g., a routing of data packets) over a cloud network, where the packets associated with the data flow are routed from a source (e.g., a first application) to a destination (e.g., a second application).
  • IP Internet Protocol
  • As IP addresses become increasingly ephemeral, their use in identifying an application as the source of a data flow is becoming less reliable. Stated differently, due to the exponential growth of resources identified by an IP address within the cloud network, these IP addresses will need to become more ephemeral, and thus reliance on IP addresses for source identification will become less reliable over time.
  • FIG. 1 is a first exemplary embodiment of a cloud network infrastructure that performs policy-based data flow classification.
  • FIG. 2 is a more detailed representation of the cloud network infrastructure of FIG. 1.
  • FIG. 3 is an exemplary decision tree structure illustrative of a determination of a network policy or network policies associated with a data flow, conducted by the ingress gateway within the cloud network infrastructure of FIG. 1.
  • FIG. 4A is a first exemplary embodiment of a logical architecture of the ingress gateway of FIG. 2.
  • FIG. 4B is a second exemplary embodiment of a logical architecture of the ingress gateway of FIG. 2.
  • FIG. 5 is an exemplary embodiment of the general logical operations of the ingress gateway of FIG. 2.
  • FIG. 6 is a second exemplary embodiment of a cloud network infrastructure including a second type of classifier that performs policy-based data flow classification.
  • FIG. 7 is a third exemplary embodiment of a cloud network infrastructure including a third type of classifier that performs policy-based data flow classification.
  • FIGS. 8A-8E are exemplary embodiments of the logical structure of messages associated with classified data flows transmitted from the ingress gateways of FIGS. 2 and 6-7.

DETAILED DESCRIPTION
  • Embodiments of a system and method directed to an improved cloud network infrastructure based on a policy-based data traffic management scheme are described.
  • the cloud network infrastructure supports policy-based routing of a data flow (e.g., a message or a series of messages), which may be achieved through assignment of a classification identifier to each data flow propagating over a cloud network infrastructure.
  • the classification identifier (hereinafter, “ClassID”) identifies the type of data flow, where such identification is predicated on which user-defined network policy (or which group of two or more network policies) includes requirements regarding the forwarding of data flows that are satisfied by certain attributes associated with the source and/or destination of the data flow and attributes of the flow itself.
  • the ClassID may correspond to a determined network policy (e.g., one-to-one mapping between each ClassID and a corresponding network policy) or the ClassID may correspond to a certain group (combination) of network policies.
  • the use of the ClassID would provide a more reliable association between applications and their data flows propagating over the cloud network or multiple (different) cloud networks operating as a collective cloud network (i.e., multi-cloud network) as well as the context of the data flow itself.
  • One embodiment of the cloud network infrastructure may pertain to a load-balanced, full-mesh network within a public cloud network, which has been configured to mitigate disruption of communications directed to or from virtual private cloud networks (VPCs) due to communication link failures.
  • the full-mesh network may be accomplished by establishing (i) cloud-based networking infrastructures that operate as virtual private cloud networks at the edge of the cloud network (hereinafter, “edge VPCs”) and (ii) a cloud-based networking infrastructure operating as a virtual private cloud network that supports the propagation of data traffic from one VPC to another (hereinafter, “transit VPC”).
  • a first edge VPC may include at least one gateway (hereinafter, “ingress gateway”), which is communicatively coupled to one or more cloud instances (e.g., each cloud instance may support one or more applications).
  • a second edge VPC may include at least one gateway (hereinafter, “egress gateway”), which is communicatively coupled to one or more cloud instances as well.
  • the ingress gateway and the egress gateway may be communicatively coupled to a set of (e.g., two or more) gateways deployed within the transit VPC (hereinafter, “transit gateways”) via one or more peer-to-peer communication links operating in accordance with a secure network protocol such as Internet Protocol Security (IPSec) tunnels for example.
  • Each of these gateways may be accessed in accordance with a unique Classless Inter-Domain Routing (CIDR) routing address to propagate messages over the network.
  • each ingress gateway is configured to assign a ClassID to an incoming data flow based on attributes associated with the data flow being in compliance with, and thereby satisfying, certain requirements of one or more of the network policies defined for the cloud network infrastructure by an administrator for a particular user (e.g., company, consortium, etc.).
  • a network policy generally specifies a desired state, which may be represented by a collection of requirements that govern the forwarding of data flows (messages) between network devices such as the gateways.
  • These network devices may be physical network devices (e.g., electronic devices with circuitry such as a hardware router, hardware controller, endpoint devices such as computers, smartphones, tablets, etc.) or virtual network devices (e.g., software constructs operating as a particular network device).
  • the ClassID may be represented as a 24-bit or 32-bit value, which may be assigned with “local” granularity (e.g., ClassID only pertains to a segment of a data flow between neighboring network devices for that communication session) or may be assigned with “global” granularity (e.g., ClassID is unique and pertains to a particular data flow for any communications throughout the private cloud network).
  • the “global” ClassID reduces complexity in flow analytics (e.g., sampling of the propagation of particular messages) and improves overall network efficiency (as the rate of change of ClassIDs is diminished, reducing the frequency of gateway configuration changes being made by the controller to address ClassID changes), and shall be discussed hereinafter.
  • the attributes associated with the data flow may be based, at least in part, on static attributes and dynamic attributes.
  • the static attributes associated with the data flow may be ascertained from information associated with the ingress gateway, given that the ingress gateway is co-located with an application of a cloud instance that is the source of the data flow.
  • static attributes may include, but are not limited or restricted to, location-based attributes (e.g., same cloud region, same cloud zone, same geo-location such as country, state, city, community or other geographic area, same cloud provider, etc.).
  • the dynamic attributes may be obtained from content of the data flow, such as through the use of the source address of the data flow as an index to an address-to-attribute mapped data store, as described below.
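  • As an illustration only (attribute names and values below are hypothetical and not part of the disclosure), an ingress gateway might combine static attributes known from its own properties with dynamic attributes looked up from a controller-provided address-to-attribute mapping roughly as follows:

      # Hypothetical sketch: gather static and dynamic attributes for a data flow.
      GATEWAY_PROPERTIES = {            # static attributes, known from gateway co-location
          "cloud_provider": "provider-a",
          "cloud_region": "us-west-1",
          "cloud_zone": "us-west-1b",
          "geo_location": "US/California",
      }

      ADDRESS_TO_ATTRIBUTES = {         # dynamic attributes, pushed by the controller
          "10.1.2.3": {"application": "backup-agent", "owner": "team-finance"},
          "10.1.2.4": {"application": "web-browser", "owner": "team-sales"},
      }

      def gather_attributes(src_ip: str) -> dict:
          """Merge static (gateway) and dynamic (per-source) attributes."""
          attributes = dict(GATEWAY_PROPERTIES)
          attributes.update(ADDRESS_TO_ATTRIBUTES.get(src_ip, {}))
          return attributes

      print(gather_attributes("10.1.2.3"))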
  • the ClassID may be determined through a decision tree structure, which may assign the resultant ClassID based on which network policy or combination of network policies is most closely correlated to certain attributes associated with the data flow.
  • the ClassID may be determined at the controller level, in which data flows associated with each application are classified and an IP address-to-ClassID mapping table is provided to each ingress gateway by the controller.
  • the number of ClassIDs may correspond to the number of network policies so that ClassIDs change only when requirements associated with a particular network policy change.
  • the ClassID may be determined through use of an Application Programming Interface (API).
  • the ingress gateway, operating as an egress gateway of a Kubernetes cluster that is part of the first edge VPC, accesses the API to retrieve attributes associated with the data flow. These attributes may include attributes associated with the source application, for example. Based on these attributes, along with attributes acquired from the data flow itself, the ClassID value may be determined in accordance with a decision tree or other type of deterministic scheme.
  • the ClassID may be obtained based on information included as part of a certificate exchanged between the source application and the ingress gateway, operating as an egress gateway with the service mesh deployment, as described below.
  • Instance Subnets: Multiple instance subnets may be supported by an edge VPC so that data flows from a cloud instance of a particular instance subnet are forwarded to a selected ingress gateway.
  • Cloud Instance: A collection of software components that are configured to receive incoming data flows (one or more messages) and/or transmit outgoing data flows within a cloud (or multi-cloud) network.
  • the cloud instance may be comprised of a virtual web server, a plurality of applications being processed by the virtual web server, and a database maintained by the virtual web server.
  • the cloud instance may generate (and transmit) different types of data flows that are classified differently depending on the attributes of the data flows. For example, data flows initiated by a backup agent being a first application of the applications operating on the web server would be classified differently than a browser application being one of the plurality of applications associated with the same cloud instance.
  • Gateways: Multiple gateways may be deployed in one or more VPCs to control the routing of data flows from a cloud instance, including a source application, to a cloud instance inclusive of a destination application. Having similar architectures, the gateways may be identified differently based on their location/operability within a cloud (or multi-cloud) network.
  • the “ingress” gateways are configured to interact with cloud instances including applications while “transit” gateways are configured to further assist in the propagation of data flows (e.g., one or more messages) directed to an ingress gateway within another edge VPC.
  • IPSec tunnels: Secure peer-to-peer communication links established between gateways, where the gateways may be located within the same VPC or located within different, neighboring VPCs.
  • the peer-to-peer communication links are secured through a secure network protocol suite referred to as “Internet Protocol Security” (IPSec).
  • where an edge VPC may include “M” gateways (e.g., M>1) and a neighboring (transit) VPC has “N” gateways (N>1), M x N IPSec tunnels may be created between the edge VPC and the transit VPC.
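  • As a simple illustration (gateway names are hypothetical), the full mesh of M x N IPSec tunnels between an edge VPC and a transit VPC amounts to one tunnel per (edge gateway, transit gateway) pair:

      # Hypothetical sketch: enumerate the M x N IPSec tunnel pairs of a full mesh.
      edge_gateways = ["edge-gw-1", "edge-gw-2"]                            # M = 2
      transit_gateways = ["transit-gw-1", "transit-gw-2", "transit-gw-3"]   # N = 3

      tunnels = [(e, t) for e in edge_gateways for t in transit_gateways]
      assert len(tunnels) == len(edge_gateways) * len(transit_gateways)     # M x N = 6
      for edge_gw, transit_gw in tunnels:
          print(f"establish IPSec tunnel {edge_gw} <-> {transit_gw}")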
  • Gateway routing: In a gateway routing table, routing paths between a gateway and an IP addressable destination at which the tunnel terminates (e.g., another gateway, on-prem computing device, etc.), identified by a virtual tunnel interface (VTI) for example, may be governed, at least in part, by the ClassID generated at the ingress gateway. The routing paths may be further governed, at least in part, by analytics conducted on certain information associated with data traffic (e.g., 5-tuple - Source IP address, Destination IP address, Source port, Destination port, selected transmission protocol). If the state of any of the IPSec tunnels is changed or disabled (or re-activated), the corresponding VTI may be removed from (or added to) consideration as a termination point for the selected routing path.
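  • A minimal sketch of such a ClassID-aware routing table is shown below; the structure and names are hypothetical, since the disclosure does not prescribe an implementation:

      # Hypothetical sketch: ClassID-aware route selection over virtual tunnel interfaces (VTIs).
      from typing import Optional

      routes = {
          # ClassID -> ordered list of candidate VTIs terminating the IPSec tunnels
          0x0101: ["vti-1", "vti-2"],
          0x0202: ["vti-3"],
      }
      tunnel_up = {"vti-1": True, "vti-2": True, "vti-3": False}

      def select_vti(class_id: int) -> Optional[str]:
          """Pick the first active VTI for the flow's ClassID, if any."""
          for vti in routes.get(class_id, []):
              if tunnel_up.get(vti, False):
                  return vti
          return None

      def on_tunnel_state_change(vti: str, is_up: bool) -> None:
          """Reflect IPSec tunnel state changes in route selection."""
          tunnel_up[vti] = is_up

      on_tunnel_state_change("vti-2", False)
      print(select_vti(0x0101))   # -> "vti-1"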
  • the terms “logic” and “device” are representative of hardware, software or a combination thereof, which is configured to perform one or more functions.
  • the logic may constitute control logic, which may include circuitry having data processing or storage functionality. Examples of such control circuitry may include, but are not limited or restricted to a processor (e.g., a microprocessor, one or more processor cores, a microcontroller, controller, programmable gate array, an application specific integrated circuit, etc.), wireless receiver, transmitter and/or transceiver, semiconductor memory, or combinatorial logic.
  • the logic may be software in the form of one or more software modules.
  • the software module(s) may include an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions.
  • the software module(s) may be coded as a processor, namely a virtual processor.
  • the software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals).
  • non-transitory storage medium may include, but is not limited or restricted to, a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device.
  • the logic may operate as firmware stored in persistent storage.
  • the term “gateway” may be construed as a virtual or physical logic.
  • the gateway may correspond to virtual logic in the form of a software component, such as a virtual machine (VM)-based data routing component that is assigned a Private IP address within an IP address range associated with a VPC including the gateway.
  • the gateway allows Cloud Service Providers (CSPs) and enterprises to enable datacenter and cloud network traffic routing between virtual and physical networks, including a public network (e.g., Internet).
  • the gateway may correspond to physical logic, such as an electronic device that is communicatively coupled to the network and assigned the hardware (MAC) address and IP address.
  • cloud network infrastructure generally refers to a combination of software components (e.g., instances) generated based on execution of certain software by hardware associated with the public cloud network or may be deployed within a multi-cloud network.
  • Each software component (or combination of software components) may constitute a virtual network resource associated with the public cloud (or multi-cloud) network, such as a virtual switch, virtual gateway, or the like.
  • the term “message” generally refers to information in a prescribed format and transmitted in accordance with a suitable delivery protocol. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format.
  • a “data flow” generally refers to one or more messages transmitted from a source (e.g., a first application instance or other software component) to a destination (e.g., a second application instance or other software component).
  • the term “communication link” may be construed as a physical or logical communication path between two or more network devices.
  • wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), may be used.
  • the communication link may be an Application Programming Interface (API) or other software construct that provides for a transfer of information between two software components that may constitute two network devices with logical representations.
  • Referring to FIG. 1, a first exemplary embodiment of a cloud network infrastructure 110, which is deployed within a public cloud network 100 and is accessible to users associated with a particular enterprise, is shown.
  • the cloud network infrastructure 110 includes a collection of virtual private cloud networks (VPCs), which support reliable communications between one or more cloud instances residing in different VPCs.
  • the cloud network infrastructure 110 may be configured to operate as a load-balanced, full-mesh network as described in U.S. Patent Application No. 17/079,399 filed October 23, 2020 entitled “Active Mesh Network System and Method,” the entire contents of which are incorporated by reference herein.
  • the cloud network infrastructure 110 may be configured with multiple VPCs managed by a controller 115.
  • the controller 115 is communicatively coupled to provide information to one or more virtual network devices within these VPCs to perform data flow classification through user-defined network policies and control data flow routing relying, at least in part, on the classification identifier (hereinafter, “ClassID”) of the data flow.
  • the VPCs include a first VPC (hereinafter, “first edge VPC”) 120, a second edge VPC 130 and a third VPC (hereinafter, “transit VPC”) 140.
  • the transit VPC 140 enables communications between the first edge VPC 120 and the second edge VPC 130.
  • While two edge VPCs 120 and 130 are illustrated in FIG. 1 for clarity's sake, it is contemplated that the cloud network infrastructure 110 may deploy additional edge VPCs and multiple transit VPCs.
  • the first edge VPC 120 is configured with one or more instance subnetworks 150 (hereinafter, “subnets”), where each of these instance subnets 150 may include one or more cloud instances.
  • an application 157 within a cloud instance (e.g., cloud instance 155) of an instance subnet 150 may be configured to exchange data flows with class allocation routing logic 160.
  • the class allocation routing logic 160 may be configured to (i) analyze content (e.g., header information, meta-information, etc.) associated with each message of an incoming data flow 165 from the source application 157, (ii) assign a ClassID 170 to the data flow 165, and (iii) encapsulate the ClassID 170 into a message (or each of the messages) associated with the data flow 165.
  • the content of the data flow 165 may be analyzed to identify certain attributes 167 associated with the data flow 165.
  • These attributes 167 may be identified by accessing an attribute lookup data store (not shown) provided from the controller 115, where a portion of a 5-tuple (e.g., a value based on one or more elements of the 5-tuple - Source IP address, Destination IP address, Source port, Destination port, transport protocol) may be used to access certain attributes associated with the source application 157 and/or the destination application.
  • the class allocation routing logic 160 may determine a user-defined network policy 180 that is directed to this type of data flow 165.
  • the ClassID 170 is predicated on which network policy 180 (and its requirements) is correlated with (and satisfied by) the identified attributes 167 of the data flow 165. Thereafter, the encapsulation scheme for placement of the ClassID 170 into the message(s) associated with the data flow 165, which produces a classified data flow 175, may be dependent on the transmission protocol supported by the cloud network infrastructure 110, as illustrated in FIGS. 8A-8E. In general, the ClassID 170 may be encapsulated into a tunneling header for each of the message(s) to form the classified data flow 175.
  • the transit VPC 140 forwards the classified data flow 175 through different gateways, where the forwarding may be influenced by the ClassID 170.
  • Re-routing logic 185 being a component of the second edge VPC 130, may be configured to remove the ClassID 170 from the classified data flow 175 and direct contents of the originally transmitted data flow 165 to a targeted destination cloud instance 190 being part of an instance subnet 195 supported by the second edge VPC 130.
  • Referring to FIG. 2, a more detailed representation of the exemplary embodiment of the cloud network infrastructure 110 of FIG. 1, which includes the first edge VPC 120 and the second edge VPC 130 communicatively coupled via the transit VPC 140, is shown.
  • the first edge VPC 120 is configured with the instance subnet(s) 150, where the cloud instance 155 within the instance subnet 150 is configured to exchange data flows with the class allocation routing logic 160, namely a gateway of a set of (e.g., two or more) gateways 2001-200M (M≥2) maintained in the first edge VPC 120.
  • these gateways 2001-200M are referred to as “ingress gateways” 2001-200M.
  • the controller 115 for the cloud network infrastructure 110 is configured to manage communications between the instance subnet(s) 150 and the set of ingress gateways 2001-200M through use of a VPC routing table 210, which is initially configured to identify which ingress gateway 2001 ... or 200M is responsible for interacting with which instance subnets 150 or cloud instances.
  • each of the cloud instances 155 may be comprised of multiple software components operating collectively as a virtual resource.
  • the cloud instance 155 may correspond to a virtual web server configured to execute a plurality of applications 205, where these applications 205 may generate and output different types of data flows 165.
  • the cloud network infrastructure 110 may be accomplished by peering the set of ingress gateways 2001-200M deployed within the edge VPC 120 to a set of gateways 2201-220N (N≥2) deployed within the transit VPC 140, which may be referred to as “transit gateways” 2201-220N.
  • the set of ingress gateways 2001-200M is represented as a first ingress gateway 2001 and a second ingress gateway 2002, although three or more ingress gateways may be deployed within the edge VPC 120.
  • the set of transit gateways 2201-220N is represented by a first transit gateway 2201 and a second transit gateway 2202, although three or more transit gateways may be deployed within the transit VPC 140.
  • the ingress gateway 2001 is configured for communications with transit gateways 2201-2202 via peer-to-peer communication links 230.
  • the transit gateways 2203-2204 may be communicatively coupled to other transit gateways (e.g., transit gateways 2201-2202) via peer-to-peer communication links 232 as well as to a set of gateways 2401-240P (P≥2) maintained in the second edge VPC 130 via peer-to-peer communication links 234.
  • these gateways 2401-240P are referred to as “egress gateways” 2401-240P.
  • the peer-to-peer communication links 230, 232 and/or 234 may constitute cryptographically secure tunnels, such as IPSec tunnels. The management of the IPSec tunnels 230, 232 and 234 may be accomplished through gateway routing tables (not shown) maintained by each of the respective gateways 2001-2002, 2201-2204 and 2401-2402.
  • the first edge VPC 120 is configured with one or more instance subnets 150, which include a plurality of cloud instances inclusive of cloud instance 155.
  • Cloud instance 155 is configured to provide the data flow 165 to the ingress gateway 2001.
  • the ingress gateway 2001 is configured to analyze content of the data flow 165 and assign the ClassID 170 thereto.
  • the ClassID 170 is predicated on which network policy from a group of network policies 250 includes requirements having a high degree of correlation to attributes of the incoming data flow 165.
  • the ClassID 170 may be based, at least in part, on which network policy 180 from the group of user-defined network policies 250 is composed of requirements that correlate to attributes of the data flow 165.
  • the ingress gateway 2001 is configured to analyze content of the data flow 165 by determining its attributes 167.
  • These attributes 167 may include static attributes 260 and dynamic attributes 265.
  • the static attributes 260 may be available from properties associated with the ingress gateway 2001 based on the co-location of both the ingress gateway 2001 and the cloud instance 155.
  • Examples of the static attributes 260 may include information associated with the location of the cloud instance 155 including a source application for the data flow 165, which would be the same location as the ingress gateway 2001 (e.g., cloud provider, cloud region, cloud zone, geo-location such as country, state, city, community or other sub-areas).
  • the dynamic attributes 265 may be available to the ingress gateway 2001 through an IP-address-to-attribute mapping 270 provided by the controller 115.
  • the mapping 270 identifies attributes that may be applicable to the source application. These attributes may include, but are not limited or restricted to the following attributes set forth in Table A:
  • the ClassID 170 may be determined, at least in part, based on the values of some or all of these attributes 260 and 265. According to other embodiments of the disclosure, the ClassID 170 may be determined, at least in part, through a decision tree analysis that associates values for particular attributes with decisions that would represent a correlation with requirements of a network policy.
  • a decision tree structure 300 for use in determining a network policy or network policies associated with the data flow 165 is shown in FIG. 3.
  • the decision tree structure 300 may feature decisions 310 based on a presence (or absence) of particular attributes and/or the value of these attributes.
  • a result of a first decision 320 may identify that the data flow 165 is associated with a first network policy 330 or is subject to a second decision 340.
  • a result 345 is produced that identifies the data flow 165 is associated with a second network policy 350 or is subject to a third decision 360.
  • These decision-tree analyses are conducted until the network policy 180 is determined.
  • the ingress gateway 2001 may assign a ClassID corresponding to that network policy or group of network policies to which the attributes of the data flow 165 are highly correlated.
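  • A minimal sketch of such a decision-tree determination follows; the attributes, policies and ClassID values are hypothetical, and the actual tree is defined by the user's network policies:

      # Hypothetical sketch: walk a small decision tree from flow attributes to a network policy.
      def determine_policy(attrs: dict) -> str:
          # First decision: location of the source relative to the destination.
          if attrs.get("cloud_region") == attrs.get("dest_region"):
              return "policy-intra-region"            # first network policy
          # Second decision: sensitivity of the source application.
          if attrs.get("application") in {"payments", "billing"}:
              return "policy-pci"                     # second network policy
          # Third decision: everything else falls through to a default policy.
          return "policy-default"

      POLICY_TO_CLASSID = {"policy-intra-region": 0x0101, "policy-pci": 0x0202, "policy-default": 0x0F00}

      attrs = {"cloud_region": "us-west-1", "dest_region": "us-east-1", "application": "payments"}
      policy = determine_policy(attrs)
      class_id = POLICY_TO_CLASSID[policy]            # ClassID assigned from the determined policy
      print(policy, hex(class_id))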
  • the manner of encapsulation of the ClassID 170 into the data flow 165, which produces the classified data flow 175, may be dependent on the transmission protocol supported by the cloud network infrastructure 110.
  • the ClassID 170 may be implemented within an encrypted body segment (e.g., after the ESP header, after the WireGuard header, etc.) as shown in FIGS. 8A-8E and described below.
  • the transit VPC 140 forwards the classified data flow 175 through different transit gateways 2201-2204, where the forwarding may be influenced by the ClassID 170.
  • the ClassID 170 may be used to determine which of the communication links 232 to use in routing the classified data flow to the egress gateway 2401.
  • each of the transit gateways 2201-2204 may be configured to conduct filtering operations based, at least in part, on the ClassID 170 in lieu of conventional firewall techniques of relying on source or destination IP addresses.
  • a transit gateway may conduct traffic limiting operations by eliminating data flows exceeding a certain size (in bytes), exceeding a certain burst size or burst length, exceeding a bandwidth threshold, constituting a particular type of data flow that is precluded from transmission at all (or to a particular application or to a particular edge VPC), or the like.
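  • The sketch below illustrates this kind of ClassID-based traffic limiting; the thresholds and ClassID values are hypothetical:

      # Hypothetical sketch: drop classified flows that violate per-ClassID limits.
      LIMITS = {
          0x0101: {"max_bytes": 10_000_000, "max_burst_bytes": 500_000, "max_bps": 50_000_000},
          0x0202: {"max_bytes": 1_000_000, "max_burst_bytes": 100_000, "max_bps": 5_000_000},
      }
      BLOCKED_CLASSIDS = {0x0F0F}          # flow types precluded from transmission entirely

      def admit(class_id: int, flow_bytes: int, burst_bytes: int, observed_bps: float) -> bool:
          if class_id in BLOCKED_CLASSIDS:
              return False
          limits = LIMITS.get(class_id)
          if limits is None:
              return True                  # no limit configured for this class
          return (flow_bytes <= limits["max_bytes"]
                  and burst_bytes <= limits["max_burst_bytes"]
                  and observed_bps <= limits["max_bps"])

      print(admit(0x0202, flow_bytes=2_000_000, burst_bytes=50_000, observed_bps=1e6))  # False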
  • Egress gateway 2401, being a component of the second edge VPC 130, is responsible for removing the ClassID 170 from the classified data flow 175 and directing contents of the data flow 165 to a targeted destination cloud instance 190 being part of the subnet 195 supported by the second edge VPC 130.
  • the ingress gateway 2001 includes an interface 400, control logic 410, queues 420 and non-transitory storage medium (e.g., data store) 430.
  • the data store 430 features queue monitoring and selection logic 440, ClassID analytic logic 450, message reconfiguration logic 460 and the network policies 250.
  • the ingress gateway 2001 is configured to receive the data flow 165 (e.g., one or more messages) via the interface 400 and to generate the ClassID 170 associated with the data flow 165 for transmission, as part of the data flow 165, from the interface 400.
  • the queues 420 may be incoming queues 422 and/or outgoing queues 424.
  • the outgoing queues 424 may also be used as temporary storage for the classified data flows 175 awaiting transmission from the ingress gateway 2001.
  • the outgoing queues 424 may be structured in accordance with a classification priority in which transmission of the classified data flows 175 may be prioritized based on the assigned ClassID.
  • the queuing policy may be based, at least in part, on the ClassID assigned to the data flow 165.
  • the queue monitoring and selection logic 440 executed by the control logic 410 (e.g., one or more processors) may detect storage of content associated with the data flow 165 within the incoming queues 422 and signal the ClassID analytic logic 450 accordingly.
  • the ClassID analytic logic 450 is configured to (i) determine which of the network policies 250 is applicable to the data flow 165 and (ii) assign the ClassID 170 in accordance with the determined network policy.
  • the ClassID 170 may be selected by determining, based on the attributes 167 of the data flow 165, which requirements of the network policies 250 correlate to these attributes 167.
  • the ClassID 170 may correspond to the network policy or group of network policies with requirements that best correlate to the attributes of the data flow 165.
  • the message reconfiguration logic 460 is adapted to encapsulate the ClassID 170 appropriately into the data flow 165 to generate the classified data flow 175 for transmission directed to a targeted cloud instance. Additionally, the message reconfiguration logic 460 may include route prediction logic to select the particular transit gateway and communication link to receive the classified data flow. Such selection may be based, at least in part, on the ClassID 170 encapsulated into the classified data flow 175. For example, the classified data flow 175 may be routed to a particular transit gateway 2202, which is configured with a certain security policy that is needed for the particular data flow (e.g., transit gateway 2202 supports Payment Card Industry Data Security Standard “PCI DSS” in the event that the classified data flow 175 is credit card information).
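  • A minimal sketch of such ClassID-driven route prediction follows; the gateway capabilities and ClassID values are hypothetical, and the disclosure does not prescribe a selection algorithm:

      # Hypothetical sketch: pick a transit gateway whose capabilities satisfy the flow's ClassID.
      CLASSID_REQUIREMENTS = {
          0x0202: {"pci-dss"},             # e.g., credit-card traffic must traverse a PCI DSS gateway
          0x0101: set(),                   # no special requirement
      }
      TRANSIT_GATEWAYS = {
          "transit-gw-1": {"capabilities": set(), "link": "tunnel-231"},
          "transit-gw-2": {"capabilities": {"pci-dss"}, "link": "tunnel-232"},
      }

      def predict_route(class_id: int) -> tuple:
          required = CLASSID_REQUIREMENTS.get(class_id, set())
          for name, gw in TRANSIT_GATEWAYS.items():
              if required <= gw["capabilities"]:
                  return name, gw["link"]
          raise LookupError("no transit gateway satisfies the ClassID requirements")

      print(predict_route(0x0202))         # -> ("transit-gw-2", "tunnel-232")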
  • Concurrently (e.g., at least partially overlapping in time), the queue monitoring and selection logic 440, executed by the control logic 410, may select one of the outgoing queues 424 based on the ClassID 170 associated with the data flow 165 and encapsulated into the classified data flow 175.
  • the outgoing queues 424 may be assigned certain priorities so that classified data flows 175 associated with a particular ClassID may be transmitted in advance of classified data flows 175 associated with another ClassID.
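  • One way such priority-ordered transmission might look is sketched below; the priorities and ClassID values are hypothetical:

      # Hypothetical sketch: place classified flows on priority-ordered outgoing queues.
      import heapq

      CLASSID_PRIORITY = {0x0202: 0, 0x0101: 1}    # lower value = transmitted first
      outgoing = []                                 # single heap standing in for the outgoing queues

      def enqueue(class_id: int, seq: int, message: bytes) -> None:
          priority = CLASSID_PRIORITY.get(class_id, 10)
          heapq.heappush(outgoing, (priority, seq, message))

      def dequeue() -> bytes:
          return heapq.heappop(outgoing)[2]

      enqueue(0x0101, 0, b"bulk backup chunk")
      enqueue(0x0202, 1, b"payment transaction")
      print(dequeue())                              # b"payment transaction" leaves first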
  • the ingress gateway 2001 includes the interface 400, the control logic 410, the queues 420 and the non-transitory storage medium (e.g., data store) 430 as illustrated in FIG. 4A.
  • the data store 430 includes ClassID assignment logic 480 that is configured to operate in combination with an attributes-to-network-policy data store 485, a gateway properties data store (for static attributes) 490, and a Network Policy-to-ClassID data store 495.
  • the ClassID assignment logic 480 is configured to determine the network policy 180 from the network policies 250 that is applicable to the data flow 165 by at least accessing static attributes from the gateway properties data store 490 and dynamic attributes from the content of the data flow 165. Collectively, certain attributes (e.g., static, dynamic or a combination of static and dynamic attributes) may be used to determine which of the network policies 250 are applicable to the data flow 165. Thereafter, the ClassID assignment logic 480 accesses the Network Policy-to-ClassID data store 495 to determine the ClassID 170 associated with the data flow 165 originating from the cloud instance 155. Of course, as an alternative embodiment (not shown), the ClassID assignment logic 480 may simply access a prescribed table based on the attributes-to-ClassID relationship.
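  • A minimal sketch of this two-step lookup (attributes to network policy, then network policy to ClassID) follows; the data store contents are hypothetical:

      # Hypothetical sketch: two-step lookup from flow attributes to a ClassID.
      ATTRIBUTES_TO_POLICY = [
          # (required attribute values, policy name)
          ({"application": "backup-agent"}, "policy-bulk"),
          ({"owner": "team-finance", "cloud_region": "us-west-1"}, "policy-finance-west"),
      ]
      POLICY_TO_CLASSID = {"policy-bulk": 0x0303, "policy-finance-west": 0x0404, "policy-default": 0x0F00}

      def assign_classid(attributes: dict) -> int:
          for required, policy in ATTRIBUTES_TO_POLICY:
              if all(attributes.get(k) == v for k, v in required.items()):
                  return POLICY_TO_CLASSID[policy]
          return POLICY_TO_CLASSID["policy-default"]

      print(hex(assign_classid({"application": "backup-agent", "cloud_region": "us-west-1"})))  # 0x303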
  • the ingress gateway 2001 includes ClassID assignment logic 500, route prediction logic 520, traffic limiter logic 540, and queue selection logic 560.
  • the incoming data flow 165 is received by the ClassID assignment logic 500, which assigns a ClassID to the data flow 165 based on which network policy (or policies) are applicable to the data flow 165.
  • the ClassID 170 is encapsulated within the data flow 165 to generate the classified data flow 175.
  • the classified data flow 175 is provided to the route prediction logic 520.
  • the route prediction logic 520 is configured to determine the particular transit gateway and corresponding communication link to receive the classified data flow 175 for routing to a targeted application. This determination may be based, at least in part, on the selected ClassID 170.
  • the traffic limiter logic 540 is configured to receive the classified data flow 175 and to “shape” the traffic by controlling propagation of the classified data flow through filtering.
  • the queue selection logic 560 determines which outgoing queues 424 to receive the classified data flows 175, especially when different outgoing queues 424 are assigned different priorities.
  • a Kubernetes cluster 610 may be deployed as part of the first edge VPC 120.
  • Kubernetes is open-source orchestration software for deploying, managing, and scaling containers.
  • the Kubernetes cluster 610 features a plurality of nodes 620, including a master node 630 and one or more worker nodes 650.
  • the nodes 620 can be either physical network devices or virtual network devices (e.g., virtual machines) as shown.
  • the master node 630 controls the state of the Kubernetes cluster 610 while the worker node(s) 650 are the components that perform tasks (e.g., running applications, etc.) assigned by the master node 630.
  • the master node 630 may feature an API server 640 that exposes a Representational State Transfer (RESTful) API interface 645 to all Kubernetes resources and provides for communications with controller logic 648 having certain functionality associated with the controller 115 of FIG. 2.
  • the controller logic 648 may be provided access to local storage populated by the controller 115 of FIG. 2 with an IP/Attribute mapping, where the “IP” may be the IP address of the source application, for example. It is contemplated that the mapping may involve 5-tuple characteristics associated with the messages of the data flow 165.
  • each of the worker nodes 650 may be configured with one or more containers, namely logical devices that run an application.
  • a first worker node 652 may include one or more containers operating as the ingress gateway 2001 of FIG. 2, hereinafter “ingress node” 652.
  • Upon receipt of the data flow 165 from another worker node 654 (e.g., a virtual machine operating as the cloud instance 155 of FIG. 1 and running a source application such as a web browser application), the ingress node 652 is configured to access the API server 640 via the API interface 645 to obtain attributes 660 associated with the data flow 165 from the controller logic 648. These attributes 660 may be obtained based on an IP address 665 of the source application 654. These attributes 660 may be combined with attributes associated with the source application 656.
  • the ingress node 652 is configured to determine the network policy 180 (or group of network policies) comporting with the data flow 165 (e.g., via a decision-tree analysis or other type of deterministic scheme) based on an attribute-policy mapping 670 provided from the controller 115 of FIG. 2. Based on a mapping 675 between the network policy 180 applicable to the data flow 165 and ClassIDs, the ClassID 170 may be determined and encapsulated into the data flow 165 to form the classified data flow 175 prior to transmission from the Kubernetes cluster 610 toward a targeted destination application.
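  • Purely as an illustration of the kind of lookup an ingress node could perform (the disclosure's controller logic 648 and its API are its own components; this sketch instead uses the standard Kubernetes API to map a source IP address to pod attributes such as namespace, service account and labels):

      # Hypothetical sketch: resolve a source IP to pod attributes via the Kubernetes API.
      from kubernetes import client, config

      def attributes_for_source_ip(source_ip: str) -> dict:
          config.load_incluster_config()                       # running inside the cluster
          core = client.CoreV1Api()
          pods = core.list_pod_for_all_namespaces(field_selector=f"status.podIP={source_ip}")
          if not pods.items:
              return {}
          pod = pods.items[0]
          return {
              "namespace": pod.metadata.namespace,
              "service_account": pod.spec.service_account_name,
              "labels": dict(pod.metadata.labels or {}),
          }

      # attrs = attributes_for_source_ip("10.1.2.3")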
  • a Kubernetes cluster 710 may be deployed as part of the first edge VPC 120.
  • the Kubernetes cluster 710 features a plurality of nodes 720, including a master node 730 and one or more worker nodes 750.
  • the master node 730 controls the state of the Kubernetes cluster 710 while the worker node(s) 750 are the components that perform tasks (e.g., running applications, etc.) assigned by the master node 730.
  • this classifier obtains the attributes from a digital (TLS) certificate exchanged between two containers within the same or different worker nodes 750.
  • a first container 760 establishes a secure communication link 780 (e.g., Transport Layer Security “TLS” link) that terminates at a second container 770 operating as the ingress gateway 2001 of FIG. 2, hereinafter “ingress container” 770.
  • the first container 760 may operate as a cloud instance running the source application, including a namespace 762 being a virtual cluster that overlays a physical cluster and includes attributes 765 associated with the source application and/or a service account 764 being a data store to maintain attributes 765 for the source application running in the first container 760.
  • the attributes 765 may be included in and obtained from a TLS certificate 790 exchanged between the first container 760 and the ingress container 770.
  • upon accessing the attributes 765 along with attributes included as part of the data flow 165, the ingress container 770 is configured to determine the network policy 180 (or group of network policies) comporting with the data flow 165 (e.g., via a decision-tree analysis or other type of deterministic scheme) based on an attribute-policy mapping 795 provided from the controller 115 of FIG. 2. Based on another mapping 797 between the network policy 180 applicable to the data flow 165 and the ClassID 170, the ClassID 170 may be determined and encapsulated by encapsulate logic 798 into the data flow 165 to form the classified data flow 175 prior to transmission from the Kubernetes cluster 710 toward a targeted destination application.
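  • Purely as an illustration (the disclosure does not specify certificate fields or a parsing library), attributes such as a workload identity could be read from a peer TLS certificate roughly as follows:

      # Hypothetical sketch: pull source-application attributes out of a peer TLS certificate.
      from cryptography import x509
      from cryptography.x509.oid import NameOID

      def attributes_from_certificate(pem_bytes: bytes) -> dict:
          cert = x509.load_pem_x509_certificate(pem_bytes)
          cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)
          attrs = {"common_name": cn[0].value if cn else None}
          try:
              san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
              uris = san.get_values_for_type(x509.UniformResourceIdentifier)
              # e.g., a SPIFFE-style URI such as spiffe://cluster/ns/<namespace>/sa/<service-account>
              if uris:
                  attrs["uri_san"] = uris[0]
          except x509.ExtensionNotFound:
              pass
          return attrs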
  • Referring to FIGS. 8A-8E, exemplary embodiments of the logical structure of messages associated with classified data flows transmitted from the ingress gateways of FIGS. 2 and 6-7 are shown.
  • In accordance with a first communication protocol (ESP), the incoming message (e.g., IP packet) 800 associated with the data flow 175 is received and encapsulated for transmission over communication links (e.g., tunnels).
  • the encapsulated message 805 includes a tunneling header 810, which may include an optional User Datagram Protocol (UDP) header 812, an Encapsulating Security Protocol (ESP) header 814 and the determined ClassID 170.
  • the ClassID 170 is part of the encapsulated message to prevent tampering during transmission by an interloper or any malicious application or entity.
  • the encapsulated message 805 is included as part of an IP message, thereby having an IP header 820, for transmission from the ingress gateway over an IP-based communication link.
  • the incoming message (e.g., IP packet) 800 associated with the data flow 175 is received and encapsulated for transmission over communication links (e.g., tunnels).
  • the encapsulated message 825 includes a tunneling header 830, which may include an optional User Datagram Protocol (UDP) header 832, a WireGuard header 834 and the determined ClassID 170.
  • the encapsulated message 825 is included as part of an IP message, thereby having an IP header 840, for transmission from the ingress gateway over an IP-based communication link.
  • the incoming message (e.g., IP packet) 800 associated with the data flow 175 is received and encapsulated for transmission over communication links (e.g., tunnels).
  • the encapsulated message 850 includes a Generic Routing Encapsulation (GRE) header 860, which may include available fields to include the determined ClassID 170.
  • the encapsulated message 850 is included as part of an IP message, thereby having an IP header 865, for transmission from the ingress gateway over an IP-based communication link.
  • the incoming message (e.g., IP packet) 800 associated with the data flow 175 is received and encapsulated for transmission over communication links (e.g., tunnels).
  • the encapsulated message 870 includes a VXLAN header 875, which may include the determined ClassID 170 (e.g., placed in a 24-bit VNI field); in this manner, the ClassID 170 may be included as part of the encapsulated message 870.
  • the encapsulated message 870 is included as part of an IP message with an IP header 880 for routing a transmission from the ingress gateway over an IP-based communication link.
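  • The following simplified sketch shows how a 24-bit ClassID could be carried in the VNI field of a VXLAN header (per the RFC 7348 layout); real VXLAN encapsulation also involves outer UDP/IP headers:

      # Hypothetical sketch: carry a 24-bit ClassID in the VXLAN VNI field (RFC 7348 layout).
      import struct

      def vxlan_header(class_id: int) -> bytes:
          if not 0 <= class_id < (1 << 24):
              raise ValueError("ClassID must fit in the 24-bit VNI field")
          flags = 0x08 << 24                      # "I" flag set: VNI field is valid
          vni = class_id << 8                     # VNI occupies the upper 24 bits of the second word
          return struct.pack("!II", flags, vni)   # 8-byte VXLAN header

      def classid_from_header(header: bytes) -> int:
          _, second_word = struct.unpack("!II", header)
          return second_word >> 8

      hdr = vxlan_header(0x0101)
      assert classid_from_header(hdr) == 0x0101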
  • the incoming message (e.g., IP packet) 800 associated with the data flow 175 is received and encapsulated for transmission over communication links (e.g., tunnels).
  • the encapsulated message 885 includes a Geneve header 890, which may include the determined ClassID 170.
  • the encapsulated message 885 is included as part of an IP message with an IP header 895 for routing a transmission from the ingress gateway over an IP-based communication link.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
EP22796762.7A 2021-04-30 2022-04-28 System, classifier and method for network policy-based traffic management of data flows Pending EP4331200A2 (de)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163182691P 2021-04-30 2021-04-30
US202217727891A 2022-04-25 2022-04-25
US202217727899A 2022-04-25 2022-04-25
PCT/US2022/026808 WO2022232445A2 (en) 2021-04-30 2022-04-28 System, classifier and method for network policy-based traffic management of data flows

Publications (1)

Publication Number Publication Date
EP4331200A2 true EP4331200A2 (de) 2024-03-06

Family

ID=83848859

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22796762.7A Pending EP4331200A2 (de) 2021-04-30 2022-04-28 System, klassifizierer und verfahren zur netzwerkrichtlinienbasierten verkehrsverwaltung von datenströmen

Country Status (2)

Country Link
EP (1) EP4331200A2 (de)
WO (1) WO2022232445A2 (de)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11381380B2 (en) * 2018-04-03 2022-07-05 Veniam, Inc. Systems and methods to improve end-to-end control and management in a network of moving things that may include, for example, autonomous vehicles
US10855588B2 (en) * 2018-12-21 2020-12-01 Juniper Networks, Inc. Facilitating flow symmetry for service chains in a computer network

Also Published As

Publication number Publication date
WO2022232445A2 (en) 2022-11-03
WO2022232445A3 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
US10972437B2 (en) Applications and integrated firewall design in an adaptive private network (APN)
CN108293004B (zh) System and method for network slice management
US10498765B2 (en) Virtual infrastructure perimeter regulator
US20190132251A1 (en) Method and system for supporting multiple qos flows for unstructured pdu sessions
US10897475B2 (en) DNS metadata-based signaling for network policy control
US20220239701A1 (en) Control access to domains, servers, and content
US10708146B2 (en) Data driven intent based networking approach using a light weight distributed SDN controller for delivering intelligent consumer experience
EP3449600B1 (de) Data-driven intent-based networking with a lightweight distributed SDN controller for intelligent consumer experiences
US7738457B2 (en) Method and system for virtual routing using containers
EP3243304B1 (de) Selective routing of network traffic for remote inspection in computer networks
US20200322181A1 (en) Scalable cloud switch for integration of on premises networking infrastructure with networking services in the cloud
CN115843429A (zh) Method and apparatus for isolation support in network slicing
US20230319635A1 (en) Apparatus and method for providing n6-lan using service function chaining in wireless communication system
US11943223B1 (en) System and method for restricting communications between virtual private cloud networks through security domains
CN114175583B (zh) System resource management in self-healing networks
EP4331200A2 (de) System, classifier and method for network policy-based traffic management of data flows
CN117201574A (zh) Communication method between VPCs based on a public cloud, and related products
US10623279B2 (en) Method and network entity for control of value added service (VAS)
WO2022232441A1 (en) Ingress gateway with data flow classification functionality
CN115150312A (zh) Routing method and device
CN117693932A (zh) System, classifier and method for network policy-based traffic management of data flows
CN117652133A (zh) Ingress gateway with data flow classification functionality
US11916883B1 (en) System and method for segmenting transit capabilities within a multi-cloud architecture
US11258720B2 (en) Flow-based isolation in a service network implemented over a software-defined network
EP4221155A1 (de) Service function chaining improvement

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231130

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR