US20130308439A1 - Highly scalable modular system with high reliability and low latency - Google Patents
- Publication number
- US20130308439A1 (U.S. application Ser. No. 13/897,028)
- Authority
- US
- United States
- Prior art keywords
- processing
- processing blades
- blades
- blade
- network traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/28—Routing or path finding of packets in data switching networks using route fault recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/58—Association of routers
- H04L45/583—Stackable routers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0882—Utilisation of link capacity
Definitions
- the subject matter disclosed in this application generally relates to computing and communication systems and, more specifically, to highly scalable modular systems that can provide high service availability/reliability and low latency in gateways.
- Gateway elements can perform a variety of tasks including subscriber management, billing and charging, authentication, security (e.g., firewall, malware detection, etc.), tunnel management, session management, and mobility management.
- Such architecture is commonly referred to as modular computing systems or blade servers.
- a typical blade server can include a metal chassis, which can contain one or more slots, into which computing or communications processing blades can be inserted.
- blade servers typically contain one or more switch fabric cards that can provide inter-slot communications in the chassis using, for example, Ethernet or some other packet formats.
- External network communications are typically supported through network input-output (NIO) ports.
- An NIO port can either be integrated into a processing blade or reside on a separate module that is plugged into the rear of a given blade via a connector.
- FIG. 1 illustrates a block diagram of a conventional modular computing and communication system 100 .
- the system 100 can include ports 110 (e.g., P 1 , P 2 , . . . Pn), processing blades 120 (e.g., B 1 , B 2 , . . . Bn), and an inter-slot packet switch fabric 130 .
- network traffic can ingress into and egress from the ports 110 .
- the processing blades 120 can be integrated with the ports 110 or paired with them.
- the processing blades 120 can be run individually as independent network elements or collectively as a pooled resource.
- the ports 110 can typically be configured in such a way that they can be assigned to specific processing blades 120 .
- FIG. 2 demonstrates a sample network traffic path in the conventional computing and communication system 100 in FIG. 1 .
- network traffic ingresses at a port 110 (e.g., P 1 ) and is usually bound to a specific processor blade 120 (e.g., B 1 ) for, e.g., the management and routing of subscriber sessions.
- network traffic sometimes can be routed via the switch 130 to a different processing blade 120 (e.g., B 2 ).
- latency increases due to the multiple hops into and out of the system 100 . Depending on the number of hops, this latency can be significant and can result in degraded (suboptimal) performance.
- FIG. 3 illustrates a block diagram of another conventional modular computing and communication system 300 .
- the system 300 can include ports 310 (e.g., P 1 . . . Pn), processing blades 320 (e.g., B 1 . . . Bn), an inter-slot packet switch fabric 330 , a standby port 340 , and a standby processing blade (SPB) 350 .
- the system 300 can provide some degree of service availability through, for example, the use of the SPB 350 .
- the SPB 350 can provide the same functions as the processing blades 320 it backs up. In some implementations, the SPB 350 can maintain a global table/database of sessions of each active processing blade 320 .
- the SPB 350 can back up as few as one processing blade 320 , in which case this is known as 1:1 redundancy, or it can back up an arbitrary number (N) of processing blades 320 , which is referred to as 1:N redundancy.
- the SPB 350 can be switched from the standby mode to the active mode and can use its session database to re-establish sessions that were hosted on the failed processing blade 320 .
- depending on the number of active sessions and the complexity of the services being delivered, complete session recovery can take as much as several minutes.
- the resource information includes at least one of utilization, load, and health status of a processing blade.
- Disclosed subject matter includes, in another aspect, a computerized method of processing network traffic, which includes receiving at a system controller resource information from a plurality of processing blades, updating a router by the system controller with the resource information of the plurality of processing blades, receiving network traffic at a network port, and forwarding the network traffic by the router to one or more of the plurality of processing blades based on the resource information of the plurality of processing blades, wherein the network port is not directly coupled with the plurality of processing blades.
- the resource information includes at least one of utilization, load, and health status of a processing blade.
- the computerized method further includes receiving at the system controller the resource information from the plurality of processing blades via a software-based messaging mechanism.
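The resource-information flow in this aspect (blades report to a system controller, which keeps the router's forwarding decisions current) can be sketched as follows. All class and method names below are illustrative assumptions, not terms from the application:

```python
# Minimal sketch of resource-information-based forwarding (names assumed):
# each blade reports utilization and health to the system controller,
# which updates the router; the router then forwards new traffic to the
# least-utilized healthy blade.

class Router:
    def __init__(self):
        self.resource_info = {}  # blade_id -> {"utilization": ..., "health": ...}

    def update(self, resource_info):
        # The system controller pushes fresh resource information here.
        self.resource_info = dict(resource_info)

    def forward(self, packet):
        # Choose the healthy blade with the lowest reported utilization.
        healthy = {b: info for b, info in self.resource_info.items()
                   if info["health"] == "UP"}
        return min(healthy, key=lambda b: healthy[b]["utilization"])

class SystemController:
    def __init__(self, router):
        self.router = router
        self.state_table = {}

    def receive_report(self, blade_id, utilization, health):
        self.state_table[blade_id] = {"utilization": utilization, "health": health}
        self.router.update(self.state_table)  # keep the router current

router = Router()
sc = SystemController(router)
sc.receive_report("B1", 50, "UP")
sc.receive_report("B2", 30, "UP")
chosen = router.forward({"src": "10.0.0.1"})  # "B2": lowest utilization among UP blades
```

The key property the claim describes is that the network port never binds to a specific blade; the forwarding decision is recomputed from the latest resource information.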
- Disclosed subject matter includes, in yet another aspect, a computing system for processing network traffic, which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, and a content-aware router coupled with the switch and the plurality of network ports, the content-aware router configured to classify and tag the network traffic and forward the network traffic, based on content information of the network traffic, to one of the plurality of processing blades without going through another of the plurality of processing blades.
- the content information of the network traffic includes at least one of a source address, a destination address, an application type, a protocol type, and a key word of the network traffic.
- the content-aware router includes a dynamic forwarding table containing rules for classifying, tagging, and forwarding the network traffic.
- the rules are based on the content information of the network traffic.
- the computing system further includes a system controller coupled to the content-aware router and the plurality of processing blades, the system controller configured to receive and maintain state information from the plurality of the processing blades and further configured to update the content-aware router with the state information of the plurality of the processing blades.
- the state information includes at least one of utilization, load, and health status of a processing blade.
- each of the plurality of processing blades contains a resource manager configured to gather the state information of the each of the plurality of processing blades and send the state information to the system controller.
- the system controller includes a state table containing the state information received from the plurality of processing blades.
- the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- the content-aware router is further configured to concatenate different types of services in the network traffic.
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, and classifying and tagging the network traffic and forwarding the network traffic by a content-aware router, based on the content information of the network traffic, to one of a plurality of processing blades without going through another of the plurality of processing blades, wherein the network port is not directly coupled with the plurality of processing blades.
- the content information of the network traffic includes at least one of a source address, a destination address, an application type, a protocol type, and a key word of the network traffic.
- the computerized method further includes receiving at a system controller state information from the plurality of processing blades, and updating the content-aware router by the system controller with the state information of the plurality of processing blades.
- the state information includes at least one of utilization, load, and health status of a processing blade.
- the computerized method further includes receiving at the system controller the state information from the plurality of processing blades via a software-based messaging mechanism.
- the computerized method further includes concatenating by the content-aware router different types of services in the network traffic.
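The content-aware classification described in this aspect (match on attributes such as source address, destination address, application type, protocol type, or key word; then tag and forward) can be sketched as a first-match rule table. The rule format and field names below are assumptions for illustration:

```python
# Illustrative sketch of a dynamic forwarding table (DFT): each rule matches
# a subset of content attributes, and the first matching rule supplies the
# tag and target blade. Rules and fields are hypothetical.

def classify(flow, rules):
    """Return (tag, blade) from the first rule whose fields all match."""
    for rule in rules:
        if all(flow.get(k) == v for k, v in rule["match"].items()):
            return rule["tag"], rule["blade"]
    return None  # no rule matched

rules = [
    {"match": {"protocol": "HTTP", "app": "video"}, "tag": "video", "blade": "PB2"},
    {"match": {"protocol": "SIP"}, "tag": "voice", "blade": "PB1"},
    {"match": {}, "tag": "default", "blade": "PB3"},  # empty match = catch-all
]

flow = {"src": "10.0.0.5", "dst": "192.0.2.1", "protocol": "SIP"}
tag, blade = classify(flow, rules)  # ("voice", "PB1")
```

Because a rule names the target blade directly, traffic reaches that blade without first transiting another blade, which is the latency property the claim emphasizes.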
- a computing system for processing network traffic which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on forwarding rules, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to detect a fault of one of the plurality of processing blades and further configured to update the forwarding rules of the router, upon detecting the fault, to divert the network traffic from the faulted processing blade to at least one different processing blade.
- the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- the system controller includes a state table containing session information received from the plurality of processing blades.
- each of the plurality of processing blades contains a resource manager configured to gather the session information of the each of the plurality of processing blades and send the session information to the system controller.
- the system controller is configured to send the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
- the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- an average load per processing blade is less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, detecting by a system controller a fault of one of a plurality of processing blades, updating by the system controller forwarding rules of a router, and forwarding the network traffic by the router based on the updated forwarding rules to divert the network traffic from the faulted processing blade to at least one different processing blade, wherein the network port is not directly coupled with the plurality of processing blades.
- the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- the computerized method further includes receiving at the system controller session information from the plurality of processing blades.
- the computerized method further includes sending the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
- the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- the computerized method further includes keeping an average load per processing blade (Lb) less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
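The constraint Lb < Cb*(N−1)/N keeps enough headroom that one blade's load can be absorbed by the remaining N−1 blades without exceeding their capacity. A minimal check (function name assumed):

```python
# If the average per-blade load Lb stays below Cb*(N-1)/N, the failure of
# one blade can be absorbed: total load N*Lb redistributed over N-1 blades
# stays at or below the blade capacity Cb.

def has_failover_headroom(avg_load, blade_capacity, n_blades):
    """True if Lb < Cb*(N-1)/N."""
    return avg_load < blade_capacity * (n_blades - 1) / n_blades

# Example: 4 blades of capacity 100 -> average load must stay below 75.
# If one blade fails, 4*75 units of load spread over 3 blades is exactly
# 100 per blade, i.e. full capacity; any higher average would overload them.
```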
- a computing system for processing network traffic which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, each of the plurality of processing blades belonging to one or more session pairs of processing blades, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on forwarding rules, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to detect a fault of one of the plurality of processing blades and further configured to update the forwarding rules of the router, upon detecting the fault, to divert the network traffic from the faulted processing blade to at least one different processing blade.
- the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- the system controller includes a state table containing session information received from the plurality of processing blades.
- each of the plurality of processing blades contains a resource manager configured to gather the session information of the each of the plurality of processing blades and send the session information to the system controller.
- each processing blade within a session pair contains session information of the other processing blade in the same session pair.
- a healthy processing blade in a session pair to which the faulted processing blade belongs is configured to, upon detecting the fault, send the session information of the faulted processing blade to the system controller, and the system controller is further configured to send the session information of the faulted processing blade to the at least one different processing blade.
- the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, detecting by a system controller a fault of one of a plurality of processing blades, wherein the faulted processing blade belongs to a session pair along with another processing blade, updating by the system controller forwarding rules of a router, and forwarding the network traffic by the router based on the updated forwarding rules to divert the network traffic from the faulted processing blade to at least one different processing blade, wherein the network port is not directly coupled with the plurality of processing blades.
- the computerized method further includes receiving at the system controller session information from the plurality of processing blades.
- the computerized method further includes sending the session information of the faulted processing blade, by a healthy processing blade in a session pair to which the faulted processing blade belongs, to the system controller, and sending the session information of the faulted processing blade, by the system controller, to the at least one different processing blade.
- Systems and methods disclosed herein can increase system utilization, reduce system latency, improve system reliability and service continuity, and enhance system availability.
- FIG. 1 illustrates a block diagram of a conventional modular computing and communication system.
- FIG. 2 illustrates a sample network traffic path in the conventional computing and communication system in FIG. 1 .
- FIG. 3 illustrates a block diagram of another conventional modular computing and communication system.
- FIG. 7 shows one exemplary list of processing blade utilizations according to certain embodiments of the disclosed subject matter.
- FIG. 8 illustrates another exemplary operation of processing network traffic according to certain embodiments of the disclosed subject matter.
- FIG. 9 shows one exemplary list of processing blade statuses according to certain embodiments of the disclosed subject matter.
- FIG. 10 illustrates yet another exemplary operation of processing network traffic according to certain embodiments of the disclosed subject matter.
- FIG. 11 illustrates a sample network traffic path in a highly scalable modular system according to certain embodiments of the disclosed subject matter.
- FIG. 12 illustrates a perspective schematic view of an exemplary computing device according to certain embodiments of the disclosed subject matter.
- the switch 420 can be implemented in hardware, software, or a combination of both.
- the processing blades 410 can be connected to each other by creating a cross-bar style switching bus between the processing blades 410 .
- the flow of data from a processing blade 410 to any other processing blade 410 can be controlled by the SC 450 that controls the cross-bar and hence the communication paths.
- the switch 420 can also contain a resource manager (RM) 460 .
- the CSR 430 can classify and uniquely tag the traffic flows (e.g., by the unique IDs of the processing blades) and then optimally assign processing blade(s) 410 to a given traffic flow based on classification rules and system health.
- the rules in the DFT 470 can allow for optimal classification, tagging, and forwarding of network traffic in the system 400 .
- the rules in the DFT 470 can also be affected by real-time utilization, load, and status in the system 400 based on information collected by the SC 450 and the RMs 460 on processing blades 410 of the system 400 .
- the CSR 430 can also contain a resource manager (RM) 460 .
- the CSR 430 can look up the rules stored in the DFT 470 .
- a processing blade can be selected, e.g., by the CSR 430 , based on load and/or utilization.
- the network traffic can be classified and tagged.
- processing blade types can be determined based on rules (e.g., as the rules 500 in FIG. 5 ).
- a processing blade can be determined based on load and/or utilization.
- FIG. 7 shows one exemplary list of processing blade utilizations 700 according to certain embodiments of the disclosed subject matter.
- the processing blade 1 has a utilization of 50%; the processing blade 2 has a utilization of 60%; the processing blade 3 has a utilization of 75%; and the processing blade n has a utilization of 80%.
- the list of utilization 700 can be maintained in ST 480 of the SC 450 in the system 400 .
- SC 450 can gather the load status information of all processing blades 410 and create a table (e.g., as illustrated in FIG. 7 ).
- the ST 480 on the SC 450 can be updated based on the load status information of the processing blades.
- the SC 450 can store the load status information table in the ST 480 and update the ST 480 accordingly.
- the DFT 470 on the CSR 430 can be updated.
- the SC 450 can update the DFT 470 on the CSR 430 based on the most recent load status information maintained at the ST 480 .
- the processing blade for incoming network traffic can be chosen based on the updated DFT 470 .
- the CSR 430 can determine the processing blade 410 based on the DFT 470 . For example, the CSR 430 can select the processing blade with the lowest load and/or utilization.
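The selection step above, applied to the utilizations listed in FIG. 7, amounts to a minimum over the state table. A short sketch (the dictionary layout is an assumption):

```python
# Choosing the least-loaded blade from the FIG. 7 utilization list:
# PB1 at 50%, PB2 at 60%, PB3 at 75%, PBn at 80%.

utilization = {"PB1": 50, "PB2": 60, "PB3": 75, "PBn": 80}

def least_loaded(utilization):
    """Return the blade ID with the lowest reported utilization."""
    return min(utilization, key=utilization.get)

target = least_loaded(utilization)  # "PB1", the 50% blade
```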
- FIG. 9 shows one exemplary list of processing blade statuses 900 according to certain embodiments of the disclosed subject matter.
- the processing blades 1 , 2 , and 3 are UP while the processing blade n is DOWN.
- the list of status 900 can be maintained in ST 480 of the SC 450 in the system 400 .
- the affected traffic sessions can be re-distributed among other healthy processing blades.
- the DFT 470 on the CSR 430 can be updated.
- the SC 450 can update the DFT 470 on the CSR 430 based on the most recent health status information maintained at the ST 480 .
- the processing blade for incoming network traffic can be chosen based on the updated DFT 470 .
- the CSR 430 can determine the processing blade 410 based on the DFT 470 . For example, a faulty processing blade can be removed from the DFT 470 and thus CSR 430 can avoid forwarding network traffic to the faulty processing blade.
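Removing a faulty blade from consideration, per the status list in FIG. 9, can be sketched as a filter over the state table before any forwarding rule is built (names assumed):

```python
# Only blades reporting UP may appear in DFT forwarding rules; a blade
# reporting DOWN (PBn in FIG. 9) is excluded so no traffic reaches it.

status = {"PB1": "UP", "PB2": "UP", "PB3": "UP", "PBn": "DOWN"}

def eligible_blades(status):
    """Blades that may be targets of forwarding rules."""
    return sorted(b for b, s in status.items() if s == "UP")

targets = eligible_blades(status)  # ["PB1", "PB2", "PB3"]; PBn is excluded
```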
- the network traffic flow can be assigned to any processing blade 410 based on the DFT 470 on the CSR 430 .
- the CSR 430 can help choose a least utilized processing blade 410 to improve system load balance. Load balancing can be achieved by utilizing the RMs 460 that run on processor blades 410 .
- the RM 460 can monitor the health status of a given processing blade 410 and provide a real-time status report on key resources (e.g., memory, CPU utilization, active applications, active sessions, threads, etc.) of that processing blade. This information can be sent, periodically or in an event-driven manner, to the SC 450 , which can aggregate the information from the processing blades 410 and store it in the state table (ST) 480 .
- the state table 480 can be used to update the rules in the DFT 470 in the CSR 430 .
- the CSR 430 can utilize the DFT 470 for optimal classification, tagging and forwarding of network traffic in the system 400 .
- the CSR 430 can classify the network traffic flow (e.g., IP traffic) entering the system 400 .
- the classifying rules can be based in part on resource utilization information received from the RMs 460 and stored in the ST 480 . Such rules can be used to optimally distribute traffic flows having the same classification across multiple processing blades 410 in the system 400 .
- systems and methods according to some embodiments of the disclosed subject matter can increase system utilization.
- each processing blade 120 usually provides the same set of computing and/or communications services as the others.
- system traffic load is statically assigned to one or the other processing blade 120 .
- the offered traffic load can vary greatly from blade to blade, with one processing blade (e.g., B 1 ) experiencing a high load and the other processing blade (e.g., B 2 ) experiencing a low load, which can result in an overall system utilization of 50% or less, e.g., when traffic gets dropped.
- systems and methods according to some embodiments of the disclosed subject matter can help increase system utilization.
- the CSR 430 in the system 400 can serve as an integral, high-performance, application-agnostic load balancer.
- real-time resource information e.g., gathered from the RMs 460 on the processing blades 410
- dynamic forwarding rules can be created and updated in real-time and contained in the DFT 470 .
- These dynamic forwarding rules can apportion traffic flows to all available processing blades based on their current utilizations. With a reasonable smoothing function/feedback loop employed, this can lead to better spreading of traffic/transactions across all available processing blades in the system 400 , resulting in significantly better overall system utilization. For example, in a system 400 with two processing blades 410 , if the aggregated offered traffic load approaches 200%, the CSR 430 can help balance the load so that each processing blade runs at nearly 100% capacity, thus giving an overall system utilization of about 200%.
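The "reasonable smoothing function/feedback loop" mentioned above could, for example, be an exponentially weighted moving average (EWMA) over reported utilization, which damps transient spikes before they reshuffle forwarding rules. The formula and the alpha value are illustrative assumptions, not from the application:

```python
# EWMA smoothing of utilization samples: a single spiky report moves the
# smoothed estimate only by a fraction alpha, so forwarding rules are
# updated on sustained trends rather than momentary bursts.

def ewma_update(smoothed, sample, alpha=0.2):
    """Blend a new utilization sample into the smoothed estimate."""
    return (1 - alpha) * smoothed + alpha * sample

u = 50.0
for sample in (90, 90, 90):  # a sustained spike moves the estimate gradually
    u = ewma_update(u, sample)
# u has risen toward 90 but has not jumped there in one step
```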
- systems and methods according to some embodiments of the disclosed subject matter can reduce system latency.
- egress path of all network traffic traverses from one processing blade (e.g., B 2 ) to the switch 130 then to a different processing blade (e.g., B 1 ) then to the port (e.g., P 1 ) out to the network.
- Multiple hops for network packets can add latency to network traffic delivery, leading to poor end user experiences.
- systems and methods according to some embodiments of the disclosed subject matter can help reduce system latency.
- the system 400 according to certain embodiments of the disclosed subject matter can help avoid packet hops across multiple processing blades 410 .
- the CSR 430 can de-couple the processing blades (PB) 410 from the ports 440 .
- the SC 450 can create and update the dynamic network traffic routing rules in the DFT 470 in the CSR 430 .
- the rules can be based in part on any combination of the source-destination addresses, application type, protocol type, and key words of the network traffic streams.
- the rules can also take into consideration session load of each processor blade 410 , e.g., as reported by the ST 480 .
- the SC 450 can also update the DFT 470 in real time to reflect the current network and processing load conditions within the system 400 .
- network traffic can enter the system 400 through any active network port (e.g., P 1 ).
- the CSR 430 can examine the network traffic, classify, and tag the network traffic and forward it to the appropriate processor blade (e.g., PB 2 ) based on the matching rule in the DFT 470 .
- Traffic originating from a processing blade can be processed in a similar manner and be forwarded to a particular port (e.g., P 1 ) or another processing blade (e.g., PBn) based on the matching rule in the DFT 470 .
- This feature can provide a meshed any-port to any-blade connectivity and can thus minimize traffic latency by limiting the number of hops for network traffic within the system 400 .
- network traffic can make only one hop in and one hop out of the system 400 and at most traverse one processing blade 410 . Therefore, the system 400 can reduce overall latency of network traffic, improving end user experiences.
- systems and methods according to some embodiments of the disclosed subject matter can improve system reliability and service continuity.
- processor blades 410 can broadcast their health, operational states and load/utilization information to the SC 450 .
- the SC 450 can promptly modify the dynamic forwarding rules in the DFT 470 in the CSR 430 and redistribute the traffic/processing load of the failed or failing processing blade across the remaining healthy processing blades.
- PB 4 fails.
- the SC 450 can detect the blade failure, e.g., via a heartbeat mechanism.
- the SC 450 can then modify the forwarding rules in the DFT 470 in the CSR 430 to redistribute PB 4 's traffic/processing loads across the remaining three processing blades, thus improving system reliability.
- the respective loads on the three healthy processing blades can increase to 100% as a result of the redistribution.
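The failure-detection and redistribution steps above (heartbeat-based detection of the failed PB 4 , then spreading its load so each survivor rises to 100%) can be sketched as follows. The timeout value and data layout are assumptions for illustration:

```python
# Hypothetical heartbeat sketch: the SC records the last heartbeat time of
# each blade and declares a fault when a deadline is missed; the failed
# blade's load is then split evenly across the surviving blades.

HEARTBEAT_TIMEOUT = 3.0  # seconds; assumed value

def detect_failed(last_seen, now, timeout=HEARTBEAT_TIMEOUT):
    """Blades whose last heartbeat is older than the timeout."""
    return [b for b, t in last_seen.items() if now - t > timeout]

def redistribute(loads, failed):
    """Spread each failed blade's load evenly over the healthy blades."""
    healthy = [b for b in loads if b not in failed]
    extra = sum(loads[b] for b in failed) / len(healthy)
    return {b: loads[b] + extra for b in healthy}

last_seen = {"PB1": 10.0, "PB2": 10.1, "PB3": 10.2, "PB4": 5.0}
failed = detect_failed(last_seen, now=10.5)  # PB4 has gone silent
# Four blades at 75% each: after PB4 fails, the survivors rise to 100%.
loads = redistribute({"PB1": 75, "PB2": 75, "PB3": 75, "PB4": 75}, failed)
```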
- the SC 450 can also send the state information of all active sessions on the failed PB 4 to the remaining active processing blades, e.g., via a software-based messaging mechanism.
- the ST 480 on the SC 450 can help provide seamless handoff of network connections and computing sessions that were previously hosted on the failed PB 4 to the newly assigned processing blades in the system 400 , thus improving service continuity.
- the RMs 460 on the processing blades 410 can distribute the resource utilization and current workload of the processing blades 410 to the SC 450 , e.g., via a software-based messaging mechanism.
- the SC 450 can aggregate and maintain the state information in the ST 480 .
- the ST 480 thus can have knowledge of the current session load of each processing blade 410 in the system 400 and can install rules in the DFT 470 of the CSR 430 .
- CSR 430 can be responsible for distribution of ingress network traffic from the ports 440 and assignment of the processing blades 410 to the incoming network traffic flows.
- the CSR 430 can help maintain that at any given time the average session load per processing blade (Lb) is less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
- processing blade 1 (PB 1 ) and processing blade 2 (PB 2 ) can form one session pair; processing blade 2 (PB 2 ) and processing blade 3 (PB 3 ) can form another session pair; and processing blade 3 (PB 3 ) and processing blade 1 (PB 1 ) can form yet another session pair.
- the first session pair can be denoted as SP 12 , the second session pair as SP 23 , and the third session pair as SP 31 .
- Each processing blade can have the session information of its paired processing blade.
- the SC 450 can detect the failure and re-distribute the sessions associated from the failed processing blade to other processing blade(s).
- PB 3 (paired with PB 2 ) can send PB 2 's session information to the SC 450 .
- the SC 450 after reviewing the current loads on PB 1 and PB 3 , can apportion PB 2 's sessions between the two remaining processing blades (PB 1 and PB 3 ).
- the SC 450 can also modify the forwarding rules in the DFT 470 in real time, re-routing the network traffic destined for the failed PB 2 to its paired processing blade and/or other processing blade(s).
- PB 1 and PB 3 can form a pairing relationship with one another.
- the pairing relationships among active processing blades can be adjusted automatically or on demand, e.g., when the failed PB 2 is restored to working order.
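The session-pair recovery described above (PB 3 holds a mirror of PB 2 's sessions; on failure, the SC apportions them across the survivors by current load) can be sketched as follows. The data structures and the round-robin apportioning policy are assumptions, not from the application:

```python
# Sketch of pair-based session recovery: each blade mirrors its pair
# partner's sessions; when a blade fails, the SC distributes the mirrored
# sessions across the surviving blades, least-loaded first.

pairs = {"PB1": "PB2", "PB2": "PB3", "PB3": "PB1"}  # blade -> pair partner

def recover_sessions(failed, mirrors, loads):
    """Assign the failed blade's mirrored sessions to survivors,
    alternating in order of increasing current load."""
    sessions = mirrors[failed]  # the partner's mirrored copy
    survivors = sorted((b for b in loads if b != failed), key=loads.get)
    plan = {b: [] for b in survivors}
    for i, s in enumerate(sessions):
        plan[survivors[i % len(survivors)]].append(s)
    return plan

mirrors = {"PB2": ["s1", "s2", "s3", "s4"]}  # held by PB2's partner, PB3
loads = {"PB1": 40, "PB2": 0, "PB3": 60}
plan = recover_sessions("PB2", mirrors, loads)
# PB1 (lighter) and PB3 alternate: PB1 gets s1/s3, PB3 gets s2/s4
```

Unlike the standby-blade scheme of FIG. 3, no dedicated spare is needed: the mirrored session state already lives on an active blade, so recovery is a matter of reassignment rather than cold re-establishment.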
- FIG. 12 illustrates a perspective schematic view of an exemplary computing device 1200 according to certain embodiments of the disclosed subject matter.
- the device 1200 can include one or more processing blades 1210 interconnected by a switch 1220 , which in turn is connected to a CSR 1230 .
- the CSR 1230 can provide connections between the ports 1240 and the processing blades 1210 .
- a “server,” “client,” “agent,” “module,” “interface,” or “host” is not software per se and includes at least some tangible, non-transitory hardware configured to execute computer readable instructions.
- the phrase “based on” does not imply exclusiveness—for example, if X is based on A, X can also be based on B, C, and/or D, . . .
Abstract
A computing system for processing network traffic includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades.
Description
- This application claims priority to U.S. provisional patent applications Nos. 61/649,067, 61/649,001, and 61/648,990, all of which were filed on May 18, 2012 and are incorporated herein by reference in their entireties.
- The subject matter disclosed in this application generally relates to computing and communication systems and, more specifically, to highly scalable modular systems that can provide high service availability/reliability and low latency in gateways.
- Mobile and fixed networks today generally employ a diverse set of networking gateway elements which can perform a variety of tasks including subscriber management, billing and charging, authentication, security (e.g., firewall, malware detection, etc.), tunnel management, session management, and mobility management. Despite the wide range of gateway offerings, they generally share a common architecture, commonly referred to as modular computing systems or blade servers.
- Modular computing and communications systems, such as blade servers, are in widespread use in corporate data centers and telecommunications facilities around the world. A typical blade server can include a metal chassis, which can contain one or more slots into which computing or communications processing blades can be inserted. Aside from common power, cooling, and management interfaces, blade servers typically contain one or more switch fabric cards that can provide inter-slot communications in the chassis using, for example, Ethernet or some other packet format. External network communications are typically supported through network input-output (NIO) ports. An NIO port can either be integrated into a processing blade or sit on a separate module that is plugged into the rear of a given blade via a connector. It follows that network traffic enters and exits through these network ports and, if necessary, is routed to the appropriate blade by the system's switch fabric card(s). These components can be housed in a multi-slot chassis which can provide common power, cooling, system management, and control functions.
-
FIG. 1 illustrates a block diagram of a conventional modular computing and communication system 100. The system 100 can include ports 110 (e.g., P1, P2, . . . Pn), processing blades 120 (e.g., B1, B2, . . . Bn), and an inter-slot packet switch fabric 130. In system 100, network traffic can ingress into and egress from the ports 110. In some implementations, the processing blades 120 can be integrated with the ports 110 or be paired together. In some implementations, the processing blades 120 can be run individually as independent network elements or collectively as a pooled resource. The ports 110 can typically be configured in such a way that they can be assigned to specific processing blades 120. In operation, network traffic can be forwarded to a corresponding processing blade 120 for processing and for further routing and other value-added services. Network traffic can also be forwarded across the processing blades 120 via the switch 130, depending on the traffic processing logic and routing decisions made at the processing blade 120. Traditional blade server systems such as the system 100 may provide rudimentary scalability through the addition of processing blades 120 and ports 110. In such systems the processing blades 120 can typically be treated as standalone or loosely coupled processing elements. However, these systems do not provide fine-grain control or scalability of computing or communications services.
-
FIG. 2 demonstrates a sample network traffic path in the conventional computing and communication system 100 in FIG. 1. In this example, network traffic ingresses at a port 110 (e.g., P1) and is usually bound to a specific processing blade 120 (e.g., B1) for, e.g., the management and routing of subscriber sessions. However, network traffic sometimes can be routed via the switch 130 to a different processing blade 120 (e.g., B2). In this situation, latency increases due to the multiple hops into and out of the system 100. Depending on the number of hops, this latency can be significant and thus can result in degraded (suboptimal) performance.
-
FIG. 3 illustrates a block diagram of another conventional modular computing and communication system 300. The system 300 can include ports 310 (e.g., P1 . . . Pn), processing blades 320 (e.g., B1 . . . Bn), an inter-slot packet switch fabric 330, a standby port 340, and a standby processing blade (SPB) 350. The system 300 can provide some degree of service availability through, for example, the use of the SPB 350. The SPB 350 can provide the same functions as the processing blades 320 it backs up. In some implementations, the SPB 350 can maintain a global table/database of sessions of each active processing blade 320. The SPB 350 can back up as few as one processing blade 320, in which case this is known as 1:1 redundancy, or it can back up an arbitrary number (N) of processing blades 320, which is referred to as 1:N redundancy. When the failure of a processing blade 320 is detected, the SPB 350 can be switched from the standby mode to the active mode and can use its session database to re-establish sessions that were hosted on the failed processing blade 320. Depending on the implementation, the number of active sessions, and the complexity of the services being delivered, complete session recovery can take as much as several minutes. In addition, the need to maintain complete global knowledge of all active sessions imposes increased computational, memory, and intra-chassis communications requirements on the SPB 350, compared to the processing blades 320 it backs up. It naturally follows that the SPB 350 usually has a different hardware and software configuration from the active processing blades 320 and has scaling limits.
- In accordance with the disclosed subject matter, systems and methods are described for a highly scalable modular system with high reliability and low latency.
- Disclosed subject matter includes, in one aspect, a computing system for processing network traffic, which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades.
- In some embodiments, the resource information includes at least one of utilization, load, and health status of a processing blade.
- In some other embodiments, each of the plurality of processing blades contains a resource manager configured to gather the resource information of the each of the plurality of processing blades and send the resource information to the system controller.
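One possible shape for such a resource-information message is sketched below. The field names and the JSON encoding are assumptions for illustration; the disclosure only specifies a software-based messaging mechanism carrying utilization, load, and health status.

```python
# Hypothetical RM-to-SC status message. Field names and JSON encoding are
# assumed; the disclosure only calls for a software-based messaging
# mechanism carrying utilization/load/health information.
import json
import time

def build_status_message(blade_id: str, cpu_utilization: float,
                         session_load: int, healthy: bool) -> str:
    """Serialize one blade's resource snapshot for the system controller."""
    return json.dumps({
        "blade_id": blade_id,               # unique blade ID in the system
        "cpu_utilization": cpu_utilization, # e.g., 0.50 for 50%
        "session_load": session_load,       # current number of sessions
        "health": "ok" if healthy else "degraded",
        "timestamp": time.time(),
    })

message = build_status_message("PB1", 0.50, 1200, True)
```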
- In some other embodiments, the router includes a dynamic forwarding table containing rules for forwarding the network traffic.
- In some other embodiments, the rules are based on the resource information of the plurality of processing blades.
- In some other embodiments, the system controller includes a state table containing the resource information received from the plurality of processing blades.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- Disclosed subject matter includes, in another aspect, a computerized method of processing network traffic, which includes receiving at a system controller resource information from a plurality of processing blades, updating a router by the system controller with the resource information of the plurality of processing blades, receiving network traffic at a network port, and forwarding the network traffic by the router to one or more of the plurality of processing blades based on the resource information of the plurality of processing blades, wherein the network port is not directly coupled with the plurality of processing blades.
- In some embodiments, the resource information includes at least one of utilization, load, and health status of a processing blade.
- In some other embodiments, the computerized method further includes receiving at the system controller the resource information from the plurality of processing blades via a software-based messaging mechanism.
- Disclosed subject matter includes, in yet another aspect, a computing system for processing network traffic, which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, and a content-aware router coupled with the switch and the plurality of network ports, the content-aware router configured to classify and tag the network traffic and forward the network traffic, based on content information of the network traffic, to one of the plurality of processing blades without going through another of the plurality of processing blades.
- In some embodiments, the content information of the network traffic includes at least one of a source address, a destination address, an application type, a protocol type, and a key word of the network traffic.
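Flow classification on this kind of content information can be illustrated with a 5-tuple flow key, matching the flow definition given later in the detailed description ({source IP address, destination IP address, source port, destination port, protocol type}). The table layout and the use of the blade ID as the tag are assumptions for the sketch:

```python
# Sketch of pinning traffic flows to blades by 5-tuple. Using the blade
# ID as the flow tag is an assumption, not stated by the disclosure.
from typing import NamedTuple, Optional

class FiveTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

flow_table = {}  # 5-tuple -> assigned blade tag

def assign_flow(flow: FiveTuple, blade_id: str) -> None:
    """Pin every packet of this flow to one processing blade."""
    flow_table[flow] = blade_id

def lookup(flow: FiveTuple) -> Optional[str]:
    """Return the blade tag for a known flow, else None."""
    return flow_table.get(flow)

flow = FiveTuple("10.0.0.1", "192.0.2.7", 40000, 443, "TCP")
assign_flow(flow, "PB2")
print(lookup(flow))  # PB2
```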
- In some other embodiments, the content-aware router includes a dynamic forwarding table containing rules for classifying, tagging, and forwarding the network traffic.
- In some other embodiments, the rules are based on the content information of the network traffic.
- In some other embodiments, the computing system further includes a system controller coupled to the content-aware router and the plurality of processing blades, the system controller configured to receive and maintain state information from the plurality of the processing blades and further configured to update the content-aware router with the state information of the plurality of the processing blades.
- In some other embodiments, the state information includes at least one of utilization, load, and health status of a processing blade.
- In some other embodiments, each of the plurality of processing blades contains a resource manager configured to gather the state information of the each of the plurality of processing blades and send the state information to the system controller.
- In some other embodiments, the system controller includes a state table containing the state information received from the plurality of processing blades.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- In some other embodiments, the content-aware router is further configured to concatenate different types of services in the network traffic.
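Service concatenation (daisy-chaining) can be sketched as passing traffic through an ordered list of per-blade service stages; the stage names and the packet-as-dict representation below are hypothetical:

```python
# Sketch of daisy-chaining services: the content-aware router can hand
# egress traffic back to the switch for another blade, concatenating
# services along one path. Stage names here are hypothetical.

def service_chain(packet: dict, stages: list) -> dict:
    """Pass a packet through an ordered list of per-blade service stages."""
    for stage in stages:
        packet = stage(packet)
    return packet

def firewall(pkt: dict) -> dict:          # e.g., runs on a security blade
    pkt.setdefault("trace", []).append("firewall")
    return pkt

def subscriber_mgmt(pkt: dict) -> dict:   # e.g., a subscriber-mgmt blade
    pkt.setdefault("trace", []).append("subscriber_mgmt")
    return pkt

out = service_chain({"flow": "f1"}, [firewall, subscriber_mgmt])
print(out["trace"])  # ['firewall', 'subscriber_mgmt']
```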
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, and classifying and tagging the network traffic and forwarding the network traffic by a content-aware router, based on the content information of the network traffic, to one of a plurality of processing blades without going through another of the plurality of processing blades, wherein the network port is not directly coupled with the plurality of processing blades.
- In some embodiments, the content information of the network traffic includes at least one of a source address, a destination address, an application type, a protocol type, and a key word of the network traffic.
- In some other embodiments, the computerized method further includes receiving at a system controller state information from the plurality of processing blades, and updating the content-aware router by the system controller with the state information of the plurality of processing blades.
- In some other embodiments, the state information includes at least one of utilization, load, and health status of a processing blade.
- In some other embodiments, the computerized method further includes receiving at the system controller the state information from the plurality of processing blades via a software-based messaging mechanism.
- In some other embodiments, the computerized method further includes concatenating by the content-aware router different types of services in the network traffic.
- Disclosed subject matter includes, in yet another aspect, a computing system for processing network traffic, which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on forwarding rules, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to detect a fault of one of the plurality of processing blades and further configured to update the forwarding rules of the router, upon detecting the fault, to divert the network traffic from the faulted processing blade to at least one different processing blade.
- In some embodiments, the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- In some other embodiments, the system controller includes a state table containing session information received from the plurality of processing blades.
- In some other embodiments, each of the plurality of processing blades contains a resource manager configured to gather the session information of the each of the plurality of processing blades and send the session information to the system controller.
- In some other embodiments, the system controller is configured to send the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- In some other embodiments, an average load per processing blade (Lb) is less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, detecting by a system controller a fault of one of a plurality of processing blades, updating by the system controller forwarding rules of a router, and forwarding the network traffic by the router based on the updated forwarding rules to divert the network traffic from the faulted processing blade to at least one different processing blade, wherein the network port is not directly coupled with the plurality of processing blades.
- In some embodiments, the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- In some other embodiments, the computerized method further includes receiving at the system controller session information from the plurality of processing blades.
- In some other embodiments, the computerized method further includes sending the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- In some other embodiments, the computerized method further includes keeping an average load per processing blade (Lb) less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
- Disclosed subject matter includes, in yet another aspect, a computing system for processing network traffic, which includes a plurality of network ports configured to receive network traffic, a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic, each of the plurality of processing blades belonging to one or more session pairs of processing blades, a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades, a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on forwarding rules, and a system controller coupled to the router and the plurality of processing blades, the system controller configured to detect a fault of one of the plurality of processing blades and further configured to update the forwarding rules of the router, upon detecting the fault, to divert the network traffic from the faulted processing blade to at least one different processing blade.
- In some embodiments, the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- In some other embodiments, the system controller includes a state table containing session information received from the plurality of processing blades.
- In some other embodiments, each of the plurality of processing blades contains a resource manager configured to gather the session information of the each of the plurality of processing blades and send the session information to the system controller.
- In some other embodiments, each processing blade within a session pair contains session information of the other processing blade in the same session pair.
- In some other embodiments, a healthy processing blade in a session pair to which the faulted processing blade belongs is configured to, upon detecting the fault, send the session information of the faulted processing blade to the system controller, and the system controller is further configured to send the session information of the faulted processing blade to the at least one different processing blade.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- In some other embodiments, each processing blade is further configured to detect a fault of the other processing blade within a session pair to which the each processing blade belongs.
- Disclosed subject matter includes, in yet another aspect, a computerized method of processing network traffic, which includes receiving network traffic at a network port, detecting by a system controller a fault of one of a plurality of processing blades, wherein the faulted processing blade belongs to a session pair along with another processing blade, updating by the system controller forwarding rules of a router, and forwarding the network traffic by the router based on the updated forwarding rules to divert the network traffic from the faulted processing blade to at least one different processing blade, wherein the network port is not directly coupled with the plurality of processing blades.
- In some embodiments, the fault indicates the one of the plurality of processing blades has failed or is about to fail.
- In some other embodiments, the computerized method further includes receiving at the system controller session information from the plurality of processing blades.
- In some other embodiments, the computerized method further includes sending the session information of the faulted processing blade, by a healthy processing blade in a session pair to which the faulted processing blade belongs, to the system controller, and sending the session information of the faulted processing blade, by the system controller, to the at least one different processing blade.
- In some other embodiments, the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
- Various embodiments of the subject matter disclosed herein can provide one or more of the following capabilities. Systems and methods disclosed herein can increase system utilization, reduce system latency, improve system reliability and service continuity, and enhance system availability.
- These and other capabilities of embodiments of the disclosed subject matter will be more fully understood after a review of the following figures, detailed description, and claims.
-
FIG. 1 illustrates a block diagram of a conventional modular computing and communication system. -
FIG. 2 illustrates a sample network traffic path in the conventional computing and communication system in FIG. 1. -
FIG. 3 illustrates a block diagram of another conventional modular computing and communication system. -
FIG. 4 illustrates a block diagram of a highly scalable modular system according to certain embodiments of the disclosed subject matter. -
FIG. 5 shows one exemplary set of rules according to certain embodiments of the disclosed subject matter. -
FIG. 6 illustrates an exemplary operation of processing network traffic according to certain embodiments of the disclosed subject matter. -
FIG. 7 shows one exemplary list of processing blade utilizations according to certain embodiments of the disclosed subject matter. -
FIG. 8 illustrates another exemplary operation of processing network traffic according to certain embodiments of the disclosed subject matter. -
FIG. 9 shows one exemplary list of processing blade statuses according to certain embodiments of the disclosed subject matter. -
FIG. 10 illustrates yet another exemplary operation of processing network traffic according to certain embodiments of the disclosed subject matter. -
FIG. 11 illustrates a sample network traffic path in a highly scalable modular system according to certain embodiments of the disclosed subject matter. -
FIG. 12 illustrates a perspective schematic view of an exemplary computing device according to certain embodiments of the disclosed subject matter. - In the following description, numerous specific details are set forth regarding the systems and methods of the disclosed subject matter and the environment in which such systems and methods may operate, in order to provide a thorough understanding of the disclosed subject matter. It will be apparent to one skilled in the art, however, that the disclosed subject matter may be practiced without such specific details, and that certain features, which are well known in the art, are not described in detail in order to avoid complication of the disclosed subject matter. In addition, it will be understood that the embodiments described below are only examples, and that it is contemplated that there are other systems and methods that are within the scope of the disclosed subject matter.
-
FIG. 4 illustrates a block diagram of a highly scalable modular system 400 according to certain embodiments of the disclosed subject matter. The system 400 can include one or more processing blades 410, a switch 420, a content-aware switch-router (CSR) 430, one or more network I/O ports 440, and a system controller (SC) 450. The switch 420 and the CSR 430 can be implemented either as two discrete elements or as an integrated element in the system 400. The processing blades 410 can be inter-connected via the switch 420. The switch 420 can be connected to and communicate with the CSR 430. The CSR 430 can be connected to and communicate with the ports 440. The SC 450 can be connected to and communicate with the processing blades 410, the switch 420, and the CSR 430. In the embodiments illustrated in FIG. 4, the processing blades 410 and the ports 440 are not directly coupled to each other. Instead, the CSR 430 can provide connections between the ports 440 and the pool of processing blades 410. The connections among components within the system 400 can be static or dynamic.
- Referring to
FIG. 4, a processing blade 410 can have one or many CPUs (e.g., Intel microprocessors) for computing, RAM, memory for data storage, and other communication chipsets for transferring data into and out of the processing blade 410 from/to other components of the system 400. Processing blades 410 can be the platforms where specific applications run. For example, a processing blade can run as, among others, a wireless access gateway, which can be responsible for providing wireless access to client devices. Each processing blade 410 can have a unique ID within the system 400. Each processing blade 410 can contain a resource manager (RM) 460. The RM 460 can help optimize processing load distribution among the processing blades 410. The RM 460 can send information about the associated processing blade 410, such as resource utilization and current workload, to the SC 450. The communication between the RMs 460 and the SC 450 can be via a software-based messaging mechanism.
- The
switch 420 can be implemented in hardware, software, or a combination of both. In some embodiments, the processing blades 410 can be connected to each other by creating a cross-bar style switching bus between the processing blades 410. The flow of data from a processing blade 410 to any other processing blade 410 can be controlled by the SC 450, which controls the cross-bar and hence the communication paths. The switch 420 can also contain a resource manager (RM) 460.
- The
CSR 430 can classify and tag the network traffic flowing through it. The CSR 430 can include a dynamic forwarding table (DFT) 470. The DFT 470 can have the traffic classification and forwarding rules for the proper distribution and routing of network traffic to and from the processing blades 410. A traffic flow can be the network traffic between local (i.e., in-chassis/on-blade) and external network resources (server, client, mobile phone, etc.) that can be uniquely identified by, e.g., a 5-tuple {source IP address, destination IP address, source port, destination port, protocol type}. In some embodiments, the CSR 430 can serve as the path of all ingress traffic flows of the system 400. The CSR 430 can classify and uniquely tag the traffic flows (e.g., by the unique IDs of the processing blades) and then optimally assign processing blade(s) 410 to a given traffic flow based on classification rules and system health. The rules in the DFT 470 can allow for optimal classification, tagging, and forwarding of network traffic in the system 400. The rules in the DFT 470 can also be affected by real-time utilization, load, and status in the system 400, based on information collected by the SC 450 and the RMs 460 on the processing blades 410 of the system 400. The CSR 430 can also contain a resource manager (RM) 460.
- The
ports 440 can include network interface controllers and can include hardware and/or software that enables connection of the system 400 to a computer network (e.g., an IP network).
- The
SC 450 can aggregate real-time status and state information received from the RMs 460, e.g., running on the processing blades 410. The SC 450 can have a state table (ST) 480, which can store this information, including session states of the processing blades 410. The ST 480 can help provide high availability and system reliability. Real-time information can be stored in the ST 480 of the SC 450. Information in the ST 480 can be used to generate the DFT 470 in the CSR 430. The SC 450 can help distribute loads among the processing blades 410 of the system 400. Further, in the event of a processing blade failure, the SC 450 can help distribute the affected sessions from the failed processing blade to other active processing blades, hence making the system resilient to failures. The SC 450 can also contain a resource manager (RM) 460.
- In one exemplary scenario, network traffic can enter the
system 400 through the port 440, where it can be classified, tagged, and routed to the appropriate processing blade 410 by the CSR 430. Classification can be done through a set of rules derived from a combination of the network traffic flow, protocol types, associated application, and other content embedded in the packet streams. Once tagged, the network traffic can be assigned a unique tag ID and be passed to the switch 420, which can deliver it to the appropriate processing blade 410 based on its tag ID. Conversely, network traffic exiting from a processing blade 410 can be handed off to the switch 420, which can then forward it to the CSR 430 for processing. The CSR 430 can classify and tag the traffic and then forward it to the appropriate port 440 or deliver it back to the switch 420 for delivery to another processing blade 410 for further processing. In some embodiments, different types of services within the network traffic (e.g., network service, subscriber management service, and application service) can be concatenated or daisy-chained in the system 400 by the CSR 430.
-
FIG. 5 shows one exemplary set of rules 500 according to certain embodiments of the disclosed subject matter. According to the rules 500 listed in FIG. 5, if the IP address is in a certain range, use processing blades of type X; if the application type is voice over IP (VOIP), use processing blades of type Y; if the application type is hypertext transfer protocol (HTTP), use processing blades of type Z; if none of the defined conditions is met, by default use the least-utilized processing blade. In some embodiments, the rules 500 can be contained in the DFT 470 of the CSR 430 in the system 400.
-
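A minimal sketch of evaluating rules like those in FIG. 5, with the first matching rule choosing a blade type and the least-utilized blade as the default. The specific IP range, the top-to-bottom matching order, and the utilization figures are illustrative assumptions:

```python
# Sketch of the FIG. 5-style rule set: first matching rule picks a blade
# type; otherwise fall back to the least-utilized blade. The IP range and
# the matching order are assumptions for illustration.
import ipaddress

RULES = [
    ("ip_range", ipaddress.ip_network("10.1.0.0/16"), "X"),  # assumed range
    ("app_type", "VOIP", "Y"),
    ("app_type", "HTTP", "Z"),
]

def classify(src_ip: str, app_type: str, utilization: dict) -> str:
    """Return a blade type per the rules, or the least-utilized blade."""
    for kind, pattern, blade_type in RULES:
        if kind == "ip_range" and ipaddress.ip_address(src_ip) in pattern:
            return blade_type
        if kind == "app_type" and app_type == pattern:
            return blade_type
    return min(utilization, key=utilization.get)  # default rule

util = {"PB1": 0.50, "PB2": 0.60, "PB3": 0.75}
print(classify("10.1.2.3", "HTTP", util))  # X (IP-range rule fires first)
print(classify("8.8.8.8", "SSH", util))    # PB1 (no match; least utilized)
```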
FIG. 6 illustrates an exemplary operation 600 of processing network traffic according to certain embodiments of the disclosed subject matter. The operation 600 can be performed in the CSR 430 of the system 400. At stage 610, network traffic (e.g., IP traffic) can be received, e.g., at the CSR 430 of the system 400. At stage 620, the network traffic can be inspected, e.g., by the CSR 430. For example, the CSR 430 can examine the packets of the network traffic. At stage 630, it can be determined, e.g., by the CSR 430, whether the network traffic matches a rule (e.g., one of the rules 500 in FIG. 5). For example, the CSR 430 can look up the rules stored in the DFT 470. At stage 635, if there is no match, a processing blade can be selected, e.g., by the CSR 430, based on load and/or utilization. At stage 640, if there is a match, the network traffic can be classified and tagged. At stage 650, the processing blade type can be determined based on the rules (e.g., the rules 500 in FIG. 5). At stage 660, a processing blade can be determined based on load and/or utilization. -
FIG. 7 shows one exemplary list of processing blade utilizations 700 according to certain embodiments of the disclosed subject matter. According to the list 700 in FIG. 7, processing blade 1 has a utilization of 50%; processing blade 2 has a utilization of 60%; processing blade 3 has a utilization of 75%; and processing blade n has a utilization of 80%. In some embodiments, the list of utilizations 700 can be maintained in the ST 480 of the SC 450 in the system 400. -
FIG. 8 illustrates an exemplary operation 800 of processing network traffic according to certain embodiments of the disclosed subject matter. The operation 800 can be performed in the system 400. At stage 810, load status information of each processing blade 410 can be sent to the SC 450. In some embodiments, the RM 460 on each processing blade can send the load status to the SC 450 periodically or on demand. At stage 820, the load status information of the processing blades 410 can be gathered. In some embodiments, the SC 450 can gather the load status information of all processing blades 410 and create a table (e.g., as illustrated in FIG. 7). At stage 830, the ST 480 on the SC 450 can be updated based on the load status information of the processing blades. In some embodiments, the SC 450 can store the load status information table in the ST 480 and update the ST 480 accordingly. At stage 840, the DFT 470 on the CSR 430 can be updated. In some embodiments, the SC 450 can update the DFT 470 on the CSR 430 based on the most recent load status information maintained in the ST 480. At stage 850, the processing blade for incoming network traffic can be chosen based on the updated DFT 470. In some embodiments, the CSR 430 can determine the processing blade 410 based on the DFT 470. For example, the CSR 430 can select the processing blade with the lowest load and/or utilization. -
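The reporting-and-selection loop of operation 800 can be sketched as below. The class name `SystemController`, its methods, and the use of blade 4 in place of "blade n" are illustrative assumptions; the utilization figures are the ones from FIG. 7.

```python
# Hypothetical sketch of operation 800: RMs report blade load to the SC,
# the SC keeps it in its state table (ST), and the least utilized blade
# is chosen for new traffic.
class SystemController:
    def __init__(self):
        self.state_table = {}  # blade id -> utilization percent (the ST)

    def report_load(self, blade_id, utilization):
        self.state_table[blade_id] = utilization  # stages 810-830

    def least_utilized_blade(self):
        return min(self.state_table, key=self.state_table.get)  # stage 850

sc = SystemController()
for blade, util in {1: 50, 2: 60, 3: 75, 4: 80}.items():  # FIG. 7 values
    sc.report_load(blade, util)
print(sc.least_utilized_blade())  # 1
```

In the actual system the selection would be driven through the DFT on the CSR rather than queried from the SC directly; this sketch collapses that indirection.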
FIG. 9 shows one exemplary list of processing blade statuses 900 according to certain embodiments of the disclosed subject matter. In some embodiments, the processing blade statuses 900 listed in FIG. 9 can be maintained in the ST 480 of the SC 450 in the system 400. -
FIG. 10 illustrates an exemplary operation 1000 of processing network traffic according to certain embodiments of the disclosed subject matter. The operation 1000 can be performed in the system 400. At stage 1010, health status information of each processing blade 410 can be sent to the SC 450. In some embodiments, the RM 460 on each processing blade can send the health status to the SC 450 periodically or on demand. At stage 1020, the health status information of the processing blades 410 can be gathered. In some embodiments, the SC 450 can gather the health status information of all processing blades 410 and create a table (e.g., as illustrated in FIG. 9). In addition, the ST 480 on the SC 450 can be updated based on the health status information of the processing blades. At stage 1030, if any processing blade is down, the affected traffic sessions can be redistributed among the other healthy processing blades. At stage 1040, the DFT 470 on the CSR 430 can be updated. In some embodiments, the SC 450 can update the DFT 470 on the CSR 430 based on the most recent health status information maintained in the ST 480. At stage 1050, the processing blade for incoming network traffic can be chosen based on the updated DFT 470. In some embodiments, the CSR 430 can determine the processing blade 410 based on the DFT 470. For example, a faulty processing blade can be removed from the DFT 470 so that the CSR 430 avoids forwarding network traffic to it. - In some embodiments, a network traffic flow can be assigned to any processing blade 410 based on the DFT 470 on the CSR 430. The CSR 430 can help choose the least utilized processing blade 410 to improve system load balance. Load balancing can be achieved by utilizing the RMs 460 that run on the processing blades 410. The RM 460 can monitor the health status of a given processing blade 410 and provide a real-time status report on key resources (e.g., memory, CPU utilization, active applications, active sessions, threads, etc.) of that processing blade. This information can be sent periodically, or in an event-driven manner, to the SC 450, which can aggregate the information from the processing blades 410 and store it in the state table (ST) 480. The state table 480 can be used to update the rules in the DFT 470 in the CSR 430. The CSR 430 can utilize the DFT 470 for optimal classification, tagging, and forwarding of network traffic in the system 400. - In some embodiments, the CSR 430 can classify the network traffic flow (e.g., IP traffic) entering the system 400. There can be a forwarding rule defined for every class. The classifying rules can be based in part on resource utilization information received from the RMs 460 and stored in the ST 480. Such rules can be used to optimally distribute traffic flows having the same classification across multiple processing blades 410 in the system 400. These functions and features can improve overall system utilization and latency, system reliability and service continuity, and system availability. These functions and features are discussed in detail below. - In one aspect, systems and methods according to some embodiments of the disclosed subject matter can increase system utilization.
- In the conventional modular computing and communication system 100 illustrated in FIG. 1, each processing blade 120 usually provides the same set of computing and/or communications services as the others. Typically, the system traffic load is statically assigned to one or the other processing blade 120. In such a system, the offered traffic load can vary greatly from blade to blade, with one blade experiencing a high load and the other experiencing a low load. In an extreme case, one processing blade (e.g., B1) can be 100% loaded while the other processing blade (e.g., B2) is 0% loaded, resulting in an overall system utilization of 50% or less (e.g., when traffic gets dropped). - In contrast, systems and methods according to some embodiments of the disclosed subject matter (e.g., 400) can help increase system utilization. In some embodiments, the CSR 430 in the system 400 can serve as an integral, high-performance, application-agnostic load balancer. Based on real-time resource information, e.g., gathered from the RMs 460 on the processing blades 410, dynamic forwarding rules can be created and updated in real time and contained in the DFT 470. These dynamic forwarding rules can apportion traffic flows to all available processing blades based on their current utilizations. With a reasonable smoothing function/feedback loop employed, this can lead to better spreading of traffic/transactions across all available processing blades in the system 400, resulting in significantly better overall system utilization. For example, in a system 400 with two processing blades 410, if the aggregated offered traffic load approaches 200% of a single blade's capacity, the CSR 430 can help balance the load so that each processing blade runs at nearly 100% capacity, giving an overall system utilization of about 200%. - In another aspect, systems and methods according to some embodiments of the disclosed subject matter can reduce system latency.
- In the sample network traffic path in the conventional computing and communications system illustrated in FIG. 2, the lack of any dynamic traffic distribution at the ingress ports 110 often leads to inefficient routing of packets within the system 100. Due to the static mapping of the ports 110 to the processing blades 120, all the ingress traffic at the port 110 (e.g., P1) is forwarded to the corresponding processing blade 120 (e.g., B1) attached to the port 110 (e.g., P1). Only upon further inspection of the network traffic at the processing blade 120 (e.g., B1) can the assigned destination processing blade 120 (e.g., B2) be determined. This can lead to forwarding of the network traffic from one processing blade (e.g., B1) to a different processing blade (e.g., B2) via the switch 130. In this example, the egress path of all network traffic traverses from one processing blade (e.g., B2) to the switch 130, then to a different processing blade (e.g., B1), then to the port (e.g., P1) out to the network. Multiple hops for network packets can add latency to network traffic delivery, leading to poor end user experiences. - In contrast, systems and methods according to some embodiments of the disclosed subject matter (e.g., 400) can help reduce system latency. In some embodiments, as illustrated in FIG. 11, the system 400 according to certain embodiments of the disclosed subject matter can help avoid packet hops across multiple processing blades 410. In the system 400, the CSR 430 can de-couple the processing blades (PB) 410 from the ports 440. The SC 450 can create and update the dynamic network traffic routing rules in the DFT 470 in the CSR 430. The rules can be based in part on any combination of the source and destination addresses, application type, protocol type, and key words of the network traffic streams. The rules can also take into consideration the session load of each processing blade 410, e.g., as reported by the ST 480. The SC 450 can also update the DFT 470 in real time to reflect the current network and processing load conditions within the system 400. Network traffic can enter the system 400 through any active network port (e.g., P1). The CSR 430 can examine, classify, and tag the network traffic and forward it to the appropriate processing blade (e.g., PB2) based on the matching rule in the DFT 470. Traffic originating from a processing blade (e.g., PB2) can be processed in a similar manner and be forwarded to a particular port (e.g., P1) or another processing blade (e.g., PBn) based on the matching rule in the DFT 470. This feature can provide meshed any-port to any-blade connectivity and can thus minimize traffic latency by limiting the number of hops for network traffic within the system 400. In most instances, network traffic can make only one hop in and one hop out of the system 400 and at most traverse one processing blade 410. Therefore, the system 400 can reduce the overall latency of network traffic, improving end user experiences. - In yet another aspect, systems and methods according to some embodiments of the disclosed subject matter can improve system reliability and service continuity.
- In some embodiments, processing blades 410 can broadcast their health, operational states, and load/utilization information to the SC 450. Upon detecting a processing blade failure, the SC 450 can promptly modify the dynamic forwarding rules in the DFT 470 in the CSR 430 and redistribute the traffic/processing load of the failed or failing processing blade across the remaining healthy processing blades. - To illustrate this feature with an example, assume there are four processing blades 410 (labeled PB1, PB2, PB3, PB4) in the system 400 and each processing blade 410 is running at 75% capacity (or less). At some point in time, PB4 fails. The SC 450 can detect the blade failure, e.g., via a heartbeat mechanism. The SC 450 can then modify the forwarding rules in the DFT 470 in the CSR 430 to redistribute PB4's traffic/processing loads across the remaining three processing blades, thus improving system reliability. The respective loads on the three healthy processing blades can increase to 100% as a result of the redistribution. In addition to redistributing the failed PB4's traffic load, the SC 450 can also send the state information of all active sessions on the failed PB4 to the remaining active processing blades, e.g., via a software-based messaging mechanism. The ST 480 on the SC 450 can help provide seamless handoff of the network connections and computing sessions that were previously hosted on the failed PB4 to the newly assigned processing blades in the system 400, thus improving service continuity. - In yet another aspect, systems and methods according to some embodiments of the disclosed subject matter can enhance system availability.
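The four-blade redistribution example above can be checked numerically. The sketch below assumes an even re-apportioning of the failed blade's load; the names `redistribute_on_failure`, `dft`, and `loads` are invented for illustration.

```python
# Checking the arithmetic of the four-blade example: PB1..PB4 each carry
# 75% load; on PB4's failure its load is split evenly across the three
# survivors, raising each from 75% to 100%.
def redistribute_on_failure(dft, loads, failed):
    dft.discard(failed)          # stop forwarding to the failed blade
    orphan = loads.pop(failed)   # the failed blade's traffic load
    for blade in loads:
        loads[blade] += orphan / len(loads)  # even re-apportioning
    return loads

dft = {"PB1", "PB2", "PB3", "PB4"}
loads = {"PB1": 75.0, "PB2": 75.0, "PB3": 75.0, "PB4": 75.0}
print(redistribute_on_failure(dft, loads, "PB4"))  # all survivors at 100.0
```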
- In some embodiments, the RMs 460 on the processing blades 410 can distribute the resource utilization and current workload of the processing blades 410 to the SC 450, e.g., via a software-based messaging mechanism. The SC 450 can aggregate and maintain the state information in the ST 480. The ST 480 thus can have knowledge of the current session load of each processing blade 410 in the system 400, and the SC 450 can install rules in the DFT 470 of the CSR 430. The CSR 430 can be responsible for the distribution of ingress network traffic from the ports 440 and the assignment of the processing blades 410 to the incoming network traffic flows. - In one example, the CSR 430 can help maintain that at any given time the average session load per processing blade (Lb) is: -
Lb<Cb*(N−1)/N, (1) - where:
-
- Lb=average session load per blade;
- Cb=session capacity per blade;
- N=number of blades in the system.
In this example, at any given time each processing blade 410 can have an excess capacity of at least Cb/N; the total excess capacity across all the processing blades 410 in the system 400 is then at least Cb, which is the capacity of a single processing blade 410. Lb can be adjusted such that the total excess capacity is any multiple (whole or fractional) of Cb. Using equation (1), it follows that for N=2, 3, 4, 5, Lb is limited to Cb/2, Cb*⅔, Cb*¾, and Cb*⅘, respectively.
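Equation (1) and the excess-capacity claim can be verified directly. The helper name `max_load_per_blade` is illustrative, with Cb normalized to 1.

```python
from fractions import Fraction

# Equation (1): keeping the average session load per blade below
# Cb*(N-1)/N leaves at least Cb/N spare per blade, i.e. at least one
# full blade's worth of spare capacity across the whole system.
def max_load_per_blade(Cb, N):
    return Fraction(Cb) * (N - 1) / N

Cb = 1  # session capacity per blade, normalized
for N in (2, 3, 4, 5):
    limit = max_load_per_blade(Cb, N)
    print(N, limit)                # 1/2, 2/3, 3/4, 4/5 as in the text
    assert N * (Cb - limit) == Cb  # total spare is one blade's capacity
```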
- In some embodiments, each processing blade 410 in the system 400 can be paired with a neighbor processing blade, thus forming a session pair (SP). Each processing blade 410 in the system 400 can have a unique ID. For the purpose of illustration, each processing blade 410 can have an ID (i) that is simply the slot number it occupies in the system 400, with i taking on the values (1, . . . , N) and N being the total number of slots in the system 400. For example, in a 3-blade system, processing blade 1 (PB1) and processing blade 2 (PB2) can form one session pair; processing blade 2 (PB2) and processing blade 3 (PB3) can form another session pair; and processing blade 3 (PB3) and processing blade 1 (PB1) can form yet another session pair. To keep track of the pairings, we can denote the first session pair as SP12, the second session pair as SP23, and the third session pair as SP31. Each processing blade can have the session information of its paired processing blade. When one processing blade 410 experiences a hardware or software fault that causes it to fail, the SC 450 can detect the failure and redistribute the sessions associated with the failed processing blade to other processing blade(s). - For the purpose of illustration, assume that PB2 has failed and the SC 450 has detected its failure in a timely manner. This failure detection can trigger a number of actions in the system 400. PB3 (paired with PB2) can send PB2's session information to the SC 450. The SC 450, after reviewing the current loads on PB1 and PB3, can apportion PB2's sessions between the two remaining processing blades (PB1 and PB3). The SC 450 can also modify the forwarding rules in the DFT 470 in real time, re-routing the network traffic that was being routed to the failed PB2 to its paired processing blade and/or other processing blade(s). These actions can be executed rather quickly (e.g., on the order of milliseconds), thus causing little or no impact to the affected network traffic flows. In addition, in some situations, such as when both PB1 and PB3 are lightly loaded, PB1 and PB3 can form a pairing relationship with one another. The pairing relationships among active processing blades can be adjusted automatically or on demand, e.g., when the failed PB2 is restored to working order. -
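The PB2 failure scenario above can be sketched as below. The pairing table, mirrored session store, and load figures are invented for illustration; the patent specifies only that the paired blade (PB3) supplies PB2's session information and that the SC apportions the sessions based on current loads.

```python
# Hypothetical sketch of the session-pair failover: pairs SP12, SP23,
# SP31 mean PB3 mirrors PB2's sessions; on PB2's failure the SC hands
# each orphaned session to whichever blade is currently least loaded.
pairs = {"PB1": "PB2", "PB2": "PB3", "PB3": "PB1"}    # SP12, SP23, SP31
mirrors = {"PB3": {"PB2": ["s1", "s2", "s3", "s4"]}}  # PB3 holds PB2 state
loads = {"PB1": 10, "PB3": 30}                        # current session counts

def fail_over(failed):
    holder = pairs[failed]                   # paired blade holding the state
    for session in mirrors[holder].pop(failed):
        target = min(loads, key=loads.get)   # currently least loaded blade
        loads[target] += 1                   # session lands on that blade
    return loads

print(fail_over("PB2"))  # PB1, lightly loaded, absorbs all four sessions
```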
FIG. 12 illustrates a perspective schematic view of an exemplary computing device 1200 according to certain embodiments of the disclosed subject matter. The device 1200 can include one or more processing blades 1210 interconnected by a switch 1220, which in turn is connected to a CSR 1230. The CSR 1230 can provide connections between the ports 1240 and the processing blades 1210. - It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
- As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the disclosed subject matter.
- Although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter, which is limited only by the claims which follow.
- A “server,” “client,” “agent,” “module,” “interface,” and “host” is not software per se and includes at least some tangible, non-transitory hardware that is configured to execute computer readable instructions. In addition, the phrase “based on” does not imply exclusiveness—for example, if X is based on A, X can also be based on B, C, and/or D, . . .
Claims (23)
1. A computing system for processing network traffic, comprising:
a plurality of network ports configured to receive network traffic;
a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic;
a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades;
a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on resource information of the plurality of the processing blades; and
a system controller coupled to the router and the plurality of processing blades, the system controller configured to receive and maintain the resource information from the plurality of the processing blades and further configured to update the router with the resource information of the plurality of the processing blades.
2. The computing system of claim 1, wherein the resource information includes at least one of utilization, load, and health status of a processing blade.
3. The computing system of claim 1, wherein each of the plurality of processing blades contains a resource manager configured to gather the resource information of the each of the plurality of processing blades and send the resource information to the system controller.
4. The computing system of claim 1, wherein the router includes a dynamic forwarding table containing rules for forwarding the network traffic.
5. The computing system of claim 4, wherein the rules are based on the resource information of the plurality of processing blades.
6. The computing system of claim 1, wherein the system controller includes a state table containing the resource information received from the plurality of processing blades.
7. The computing system of claim 1, wherein the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
8. A computerized method of processing network traffic, comprising:
receiving at a system controller resource information from a plurality of processing blades;
updating a router by the system controller with the resource information of the plurality of processing blades;
receiving network traffic at a network port; and
forwarding the network traffic by the router to one or more of the plurality of processing blades based on the resource information of the plurality of processing blades,
wherein the network port is not directly coupled with the plurality of processing blades.
9. The computerized method of claim 8, wherein the resource information includes at least one of utilization, load, and health status of a processing blade.
10. The computerized method of claim 8, further comprising receiving at the system controller the resource information from the plurality of processing blades via a software-based messaging mechanism.
11. A computing system for processing network traffic, comprising:
a plurality of network ports configured to receive network traffic;
a plurality of processing blades, not directly coupled with the plurality of network ports, configured to process the network traffic;
a switch coupled with the plurality of processing blades and configured to support inter-blade communications among the plurality of processing blades;
a router coupled with the switch and the plurality of network ports, the router configured to forward the network traffic to one or more of the plurality of processing blades based on forwarding rules; and
a system controller coupled to the router and the plurality of processing blades, the system controller configured to detect a fault of one of the plurality of processing blades and further configured to update the forwarding rules of the router, upon detecting the fault, to divert the network traffic from the faulted processing blade to at least one different processing blade.
12. The computing system of claim 11, wherein the fault indicates the one of the plurality of processing blades has failed or is about to fail.
13. The computing system of claim 11, wherein the system controller includes a state table containing session information received from the plurality of processing blades.
14. The computing system of claim 13, wherein each of the plurality of processing blades contains a resource manager configured to gather the session information of the each of the plurality of processing blades and send the session information to the system controller.
15. The computing system of claim 13, wherein the system controller is configured to send the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
16. The computing system of claim 11, wherein the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
17. The computing system of claim 11, wherein an average load per processing blade (Lb) is less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
18. A computerized method of processing network traffic, comprising:
receiving network traffic at a network port;
detecting by a system controller a fault of one of a plurality of processing blades;
updating by the system controller forwarding rules of a router; and
forwarding the network traffic by the router based on the updated forwarding rules to divert the network traffic from the faulted processing blade to at least one different processing blade,
wherein the network port is not directly coupled with the plurality of processing blades.
19. The computerized method of claim 18, wherein the fault indicates the one of the plurality of processing blades has failed or is about to fail.
20. The computerized method of claim 18, further comprising receiving at the system controller session information from the plurality of processing blades.
21. The computerized method of claim 20, further comprising sending the session information of the faulted processing blade, upon detecting the fault, to the at least one different processing blade.
22. The computerized method of claim 18, wherein the plurality of processing blades are configured to communicate with the system controller via a software-based messaging mechanism.
23. The computerized method of claim 18, further comprising keeping an average load per processing blade (Lb) less than Cb*(N−1)/N, where Cb is a blade capacity and N is the number of processing blades.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/897,028 US20130308439A1 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261648990P | 2012-05-18 | 2012-05-18 | |
US201261649067P | 2012-05-18 | 2012-05-18 | |
US201261649001P | 2012-05-18 | 2012-05-18 | |
US13/897,028 US20130308439A1 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130308439A1 true US20130308439A1 (en) | 2013-11-21 |
Family
ID=49581215
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/897,028 Abandoned US20130308439A1 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
US13/897,022 Active US9288141B2 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
US13/896,989 Active US9197545B2 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/897,022 Active US9288141B2 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
US13/896,989 Active US9197545B2 (en) | 2012-05-18 | 2013-05-17 | Highly scalable modular system with high reliability and low latency |
Country Status (2)
Country | Link |
---|---|
US (3) | US20130308439A1 (en) |
WO (1) | WO2013173758A2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9344383B2 (en) | 2012-11-07 | 2016-05-17 | Dell Products L.P. | Event driven network system |
CN104811392B (en) * | 2014-01-26 | 2018-04-17 | 国际商业机器公司 | For handling the method and system of the resource access request in network |
US10091701B1 (en) | 2017-05-31 | 2018-10-02 | Sprint Communications Company L.P. | Information centric network (ICN) with content aware routers (CARs) to facilitate a user equipment (UE) handover |
US11290385B2 (en) | 2017-12-15 | 2022-03-29 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and traffic processing unit for handling traffic in a communication network |
US11489930B2 (en) * | 2019-06-11 | 2022-11-01 | At&T Intellectual Property I, L.P. | Telecommunication network edge cloud interworking via edge exchange point |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030202536A1 (en) * | 2001-04-27 | 2003-10-30 | Foster Michael S. | Integrated analysis of incoming data transmissions |
US20080126542A1 (en) * | 2006-11-28 | 2008-05-29 | Rhoades David B | Network switch load balance optimization |
US7636917B2 (en) * | 2003-06-30 | 2009-12-22 | Microsoft Corporation | Network load balancing with host status information |
US20110185065A1 (en) * | 2010-01-28 | 2011-07-28 | Vladica Stanisic | Stateless forwarding of load balanced packets |
US20120207158A1 (en) * | 2011-02-16 | 2012-08-16 | Oracle International Corporation | Method and system for classification and management of inter-blade network traffic in a blade server |
US20140143854A1 (en) * | 2011-02-16 | 2014-05-22 | Fortinet, Inc. | Load balancing among a cluster of firewall security devices |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9444785B2 (en) | 2000-06-23 | 2016-09-13 | Cloudshield Technologies, Inc. | Transparent provisioning of network access to an application |
US7305492B2 (en) * | 2001-07-06 | 2007-12-04 | Juniper Networks, Inc. | Content service aggregation system |
US9264384B1 (en) * | 2004-07-22 | 2016-02-16 | Oracle International Corporation | Resource virtualization mechanism including virtual host bus adapters |
EP1854250B1 (en) | 2005-02-28 | 2011-09-21 | International Business Machines Corporation | Blade server system with at least one rack-switch having multiple switches interconnected and configured for management and operation as a single virtual switch |
US7492765B2 (en) | 2005-06-15 | 2009-02-17 | Cisco Technology Inc. | Methods and devices for networking blade servers |
US7525957B2 (en) | 2005-09-01 | 2009-04-28 | Emulex Design & Manufacturing Corporation | Input/output router for storage networks |
US8233388B2 (en) * | 2006-05-30 | 2012-07-31 | Cisco Technology, Inc. | System and method for controlling and tracking network content flow |
CN101431432A (en) | 2007-11-06 | 2009-05-13 | 联想(北京)有限公司 | Blade server |
WO2010084529A1 (en) | 2009-01-23 | 2010-07-29 | 株式会社日立製作所 | Information processing system |
US8699499B2 (en) * | 2010-12-08 | 2014-04-15 | At&T Intellectual Property I, L.P. | Methods and apparatus to provision cloud computing network elements |
US8553552B2 (en) | 2012-02-08 | 2013-10-08 | Radisys Corporation | Stateless load balancer in a multi-node system for transparent processing with packet preservation |
2013
- 2013-05-17 WO PCT/US2013/041653 patent/WO2013173758A2/en active Application Filing
- 2013-05-17 US US13/897,028 patent/US20130308439A1/en not_active Abandoned
- 2013-05-17 US US13/897,022 patent/US9288141B2/en active Active
- 2013-05-17 US US13/896,989 patent/US9197545B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20130308459A1 (en) | 2013-11-21 |
US9197545B2 (en) | 2015-11-24 |
WO2013173758A3 (en) | 2015-07-09 |
US9288141B2 (en) | 2016-03-15 |
WO2013173758A2 (en) | 2013-11-21 |
US20130308438A1 (en) | 2013-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12047244B2 (en) | Method and system of connecting to a multipath hub in a cluster | |
EP3949293B1 (en) | Slice-based routing | |
US9736278B1 (en) | Method and apparatus for connecting a gateway router to a set of scalable virtual IP network appliances in overlay networks | |
US10523539B2 (en) | Method and system of resiliency in cloud-delivered SD-WAN | |
US10534601B1 (en) | In-service software upgrade of virtual router with reduced packet loss | |
US9467382B2 (en) | Elastic service chains | |
US9397933B2 (en) | Method and system of providing micro-facilities for network recovery | |
US10148554B2 (en) | System and methods for load placement in data centers | |
US9042234B1 (en) | Systems and methods for efficient network traffic forwarding | |
US20170063604A1 (en) | Method and apparatus for SVE redundancy | |
US8438307B2 (en) | Method and device of load-sharing in IRF stack | |
US20160234091A1 (en) | Systems and methods for controlling switches to capture and monitor network traffic | |
US9008080B1 (en) | Systems and methods for controlling switches to monitor network traffic | |
US9197545B2 (en) | Highly scalable modular system with high reliability and low latency | |
US9787567B1 (en) | Systems and methods for network traffic monitoring | |
US8472324B1 (en) | Managing route selection in a communication network | |
US10447581B2 (en) | Failure handling at logical routers according to a non-preemptive mode | |
Görkemli et al. | Dynamic management of control plane performance in software-defined networks | |
WO2022057810A1 (en) | Service packet forwarding method, sr policy sending method, device, and system | |
CN113992569A (en) | Multi-path service convergence method and device in SDN network and storage medium | |
Thorat et al. | Optimized self-healing framework for software defined networks | |
CN107547394A (en) | Deployment method and apparatus for multiple active load-balancing devices |
CN112913198A (en) | Dynamic client balancing between branch gateways | |
CN105007234A (en) | Load balancing method for global IP scheduling |
KR20190048324A (en) | Method for providing service based on multi network and apparatus therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PACIFIC WESTERN BANK, NORTH CAROLINA Free format text: SECURITY INTEREST;ASSIGNOR:BENU NETWORKS, INC.;REEL/FRAME:037391/0125 Effective date: 20151215 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: BENU NETWORKS, INC., MASSACHUSETTS Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PACIFIC WESTERN BANK;REEL/FRAME:046645/0977 Effective date: 20180813 |