US20200252273A1 - Remote network interface card management - Google Patents

Remote network interface card management Download PDF

Info

Publication number
US20200252273A1
US20200252273A1 (application US 16/266,850)
Authority
US
United States
Prior art keywords
network
nic
flm
appliance
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/266,850
Other versions
US10742493B1
Inventor
Stephen Kay
Long Sam
Christopher Murray
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Enterprise Development LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hewlett Packard Enterprise Development LP filed Critical Hewlett Packard Enterprise Development LP
Priority to US16/266,850
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAY, STEPHEN; MURRAY, CHRISTOPHER; SAM, LONG
Publication of US20200252273A1
Application granted
Publication of US10742493B1
Legal status: Active (expiration adjusted)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0806Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/40006Architecture of a communication node
    • H04L12/40032Details regarding a bus interface enhancer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L12/407Bus networks with decentralised control
    • H04L12/413Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection (CSMA-CD)
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4641Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0866Checking the configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0895Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0896Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L41/0897Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
    • H04L43/0811Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • H04L47/125Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/15Interconnection of switching modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/02Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0209Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/16Implementing security features at a particular protocol layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/18Network architectures or network communication protocols for network security using different networks or channels, e.g. using out of band channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40Bus networks
    • H04L2012/40208Bus networks characterized by the use of a particular bus standard
    • H04L2012/40215Controller Area Network CAN
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/10Active monitoring, e.g. heartbeat, ping or trace-route

Definitions

  • network connectivity between nodes, blades, or frames of adjacent network modules may represent a primary communication path for sharing data between those nodes.
  • the data may represent inputs to a compute process (e.g., data or applications), outputs of compute resources (e.g., compute results), communications to coordinate distributed processes, and other types of data.
  • adjacent nodes of network modules within a blade server, cluster, or frame may be expected to be directly connected to each other using a control network to exchange coordination information amongst the set of devices working together.
  • This control network may be isolated from regular data traffic on an application network, sometimes referred to as a customer network, using a separate physical local area network (LAN) or logical network (e.g., virtual local area network VLAN).
  • In some systems, there may be more than one control network.
  • Each control network may communicate across different physical media using an independent network interface card (NIC) port or may share a physical media using logical segregation.
  • Different topology networks may be used, e.g., ring networks, star networks, point-to-point, etc.
  • Control signals may use a LAN, VLAN, control area network (CAN), or other type of network.
  • some scalable compute resources may maintain a single active uplink from the group of resources and have multiple backup uplinks. For example, a single uplink to a customer network may be used to provide an application data communication path.
  • segregation of application data and the management network data that is used to monitor, configure, and control devices is desirable for a variety of reasons including performance, reliability, and security.
  • FIG. 1 is a functional block diagram of a computer infrastructure including multiple frame scaleable compute resources, a customer VLAN, and a management VLAN, according to one or more disclosed implementations;
  • FIG. 2A is a functional block diagram representing an example of two frame link modules redundantly connected to two appliances that each have a single NIC having two ports, according to one or more disclosed implementations;
  • FIG. 2B is a functional block diagram extending the example of FIG. 2A with each of two appliances having multiple NICs with two ports each, according to one or more disclosed implementations;
  • FIG. 3A is a functional block diagram illustrating communication flow where data and management traffic are shared on a common network, according to one or more disclosed implementations;
  • FIG. 3B is a functional block diagram extending the example of FIG. 3A where data and management traffic are isolated on different networks, according to one or more disclosed implementations;
  • FIG. 4 is a functional block diagram illustrating two servers (appliances), each having a NIC and a control area network (CAN) microcontroller, a midplane and a frame link module, with different possible communication paths illustrated, according to one or more disclosed implementations;
  • FIG. 5 illustrates a flow chart depicting one example method for configuring components of FIG. 4 to isolate data from a management network and a customer network, according to one or more disclosed implementations
  • FIG. 6 illustrates an example computing device instrumented with computer instructions to perform the method of FIG. 5 , according to one or more disclosed examples.
  • Scaleable compute resources include, for example, a set of frames or a blade server with plugin components.
  • management traffic and application data traffic refer to network style communication between components to support overall functionality of a scaleable compute resource.
  • Management traffic includes, among other data, control commands to direct power usage and cooling (e.g., fan control) at different components within a computer system.
  • the examples of this disclosure will be presented using Frames, Frame Link Modules (FLMs), Midplanes (e.g., a cross communication bus), and appliances (sometimes referenced as servers (e.g., the appliances are connected through the PHY to servers or blade server)).
  • disclosed techniques may also be applicable to other types of scaleable compute resources.
  • Other types of scaleable compute resources will have components that may be named differently but provide substantially similar functionality to the components used in disclosed examples. Accordingly, this disclosure is not limited to a Frame style implementation.
  • two new communication ports may be added (e.g., as a multi-port NIC) to each appliance to allow data traffic from internal appliances (e.g., appliances plugged into a Frame) to be sent through the data network without entering the FLM switch.
  • the FLM switch may additionally route packets between the internal management devices and to the external management network.
  • FIGS. 2A-B One example of adding communication ports and data flow paths is illustrated in FIGS. 2A-B discussed further below. Specifically, FIG. 2A illustrates an implementation without the additional ports and FIG. 2B illustrates the additional ports and communication paths. It is additionally worth noting here that one or more disclosed implementations may be accomplished without changes to other infrastructure components (e.g., existing Midplane components may be utilized without change).
  • data may be routed between new NIC ports (for example) to the FLM and the appropriate one of a management network or an application data network.
  • New communication paths may allow remote management through existing channels to the appliance.
  • data may bypass a management switch (within each FLM) as opposed to only being isolated using VLANs.
  • Thus, a higher degree of security, sometimes referred to as “air gap” security, may be achieved.
  • management data may have traversed the FLM switch prior to being isolated within logically segregated VLANs.
  • a CPU on the FLM may control PHY ports that are routed to the remote NIC ports of each appliance.
  • Control of a PHY may be accomplished using management software that has an additional communication path through a control area network (CAN) bus.
  • the CAN bus may be communicatively coupled to the CIM using CANMICs. See the discussion of FIGS. 3A-4 below.
  • an FLM CPU possibly acting as a CAN master and communicating to CAN slaves on an appliance, may control PHY setup using control commands through a CAN bus that bypasses the FLM switch.
  • FIG. 1 a functional block diagram of a system including multiple nodes of a scaleable resource that may benefit from the concepts of this disclosure
  • FIGS. 2A-B a first example of two functional block diagrams illustrating an example implementation using two appliances that have a single NIC contrasted with an example implementation using two appliances that each have two NICs to facilitate isolation of different networks
  • FIGS. 3A-B a second example of two functional block diagrams illustrating a shared common network and two isolated networks
  • FIG. 4 a functional block diagram illustrating two servers (appliances), each having a NIC and a control area network microcontroller (CANMIC), a midplane, and a frame link module, with different possible communication paths
  • FIG. 5 a flow chart depicting one example method for configuring components of FIG. 4 to isolate data from a management network and a customer data network
  • FIG. 6 an example computing device instrumented with computer instructions to perform the method of FIG. 5 (all according to different possible disclosed implementations).
  • FIG. 1 an example computer infrastructure 100 is illustrated.
  • customer network 105 is connected to a set of frames (represented by frame 1 ( 110 ), and frame 2 ( 115 )).
  • frame 1 may be configured with a set of blades (B1, B2, . . . BN) and an Appliance. There may be more than one Appliance within a single frame.
  • arrow 120 - 2 indicates that frame 2 may be configured in a like manner.
  • Frame 1 further includes two network modules, namely network module 1 ( 140 ) and network module 2 ( 145 ) (sometimes referred to as a Frame Link Module (FLM)).
  • Frame 2 also includes two network modules, namely network module 3 (150) and network module 4 (155). These network modules provide connectivity for the compute resources represented by the respective blades within their frame.
  • Each of the blades is shown with a network connection to a network switch 160 respectively disposed within each individual network module (e.g., network module 1 ( 140 ) through network module 4 ( 155 )).
  • Each network module further includes a CPU 165 to facilitate configuration, monitoring, and maintenance of a corresponding network switch 160 .
  • a blade may be referred to as an appliance or a server.
  • a blade, appliance, server in this context refers to a “plugin” component to a scaleable compute resource to provide additional compute capacity (or functionality) for the scaleable compute resources.
  • Many different types of components may be used to augment a scaleable compute resource and each may be configured with redundant and isolated communication paths as disclosed herein. Specific functionality provided by these blades is not necessarily pertinent to this disclosure as disclosed techniques for control network isolation may be agnostic to functionality provided by any particular plugin component.
  • plugin as used herein, may refer to insertion of a card into a slot in a backplane of a computer system, may represent attachment via a communication cable, may represent one of many other types of connections used for computer system component attachment.
  • Connectivity from a set of frames to a customer network is typically provided by a single active uplink 125 from one of the plurality of network switches that exist across the multiple FLMs of a group of connected frames. That is, all communications external to the group of connected frames pass through uplink 125.
  • Other potential uplinks 126-1, 126-2, and 126-3 are illustrated to be available (e.g., if needed as a result of failure of uplink 125) from other network switches.
  • customer network VLAN 130 connects each of the network switches 160 in an ethernet ring topology network and extends to the customer network 105 (e.g., includes VLANS 1 - 4094 ).
  • a second ring network, 4095 management VLAN 135, is also shown as a logically isolated network in computer infrastructure 100.
  • 4095 management VLAN 135 is shown in a bolder line than customer network VLAN 130 and also connects each of the network switches 160 . Note, in a proper configuration of a group of frames, each network switch will be directly connected to each neighboring switch (either in the same frame or an adjacent frame) and no intervening network devices are present.
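To make the ring requirement above concrete, the following minimal sketch checks that a set of network switches 160 forms a single ring in which every switch is directly connected to exactly two neighbors with no intervening devices. The adjacency representation and function name are illustrative assumptions, not part of the patent.

    from collections import defaultdict

    def is_valid_switch_ring(switches, links):
        """Return True if `links` (pairs of switch names) forms a single ring over `switches`."""
        neighbors = defaultdict(set)
        for a, b in links:
            neighbors[a].add(b)
            neighbors[b].add(a)
        # Every switch must connect directly to exactly two neighbors.
        if any(len(neighbors[s]) != 2 for s in switches):
            return False
        # Walk the ring from an arbitrary switch and confirm every switch is visited once.
        start, prev, cur, seen = switches[0], None, switches[0], {switches[0]}
        while True:
            nxt = next(iter(neighbors[cur] - ({prev} if prev else set())))
            if nxt == start:
                return len(seen) == len(switches)
            if nxt in seen:
                return False
            seen.add(nxt)
            prev, cur = cur, nxt

    if __name__ == "__main__":
        switches = ["switch 160-1", "switch 160-2", "switch 160-3", "switch 160-4"]
        ring_links = [(switches[i], switches[(i + 1) % len(switches)]) for i in range(len(switches))]
        print(is_valid_switch_ring(switches, ring_links))  # True for a properly cabled ring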
  • a virtual LAN refers to a broadcast domain that is partitioned and isolated (i.e., logically isolated) in a computer network at the data link layer (OSI layer 2).
  • LAN is the abbreviation for local area network and when used in the context of a VLAN, “virtual” refers to a physical object recreated and altered by additional logic.
  • a VLAN is a custom network created from one or more existing LANs. It enables groups of devices from multiple networks (both wired and wireless) to be combined into a single logical network. The result is a virtual LAN that can be administered like a physical local area network, for example 4095 management VLAN 135 in computer infrastructure 100 .
  • a VLAN does not represent an “air gap” isolation of data traffic on a physical network from other VLANs on that same physical network.
  • a separate physical layer network may be used as explained below with reference to FIGS. 2B, 3B, and 4 .
  • FIG. 2A a functional block diagram illustrates communication flows 200 representing an example of two logically isolated networks (e.g., a management network and a data network), according to one or more disclosed implementations.
  • two frame link modules (FLM 1 210 and FLM 2 220) are redundantly connected to two appliances (appliance 1 (205) and appliance 2 (215)).
  • Each appliance, in this example, has a single NIC having two ports.
  • appliance 1 ( 205 ) includes NIC 1 ( 230 - 1 ) that has a first port and a second port as labeled in communication flow 200 .
  • Appliance 2 ( 215 ) similarly includes NIC 1 ( 230 - 2 ) with two ports.
  • FLM 1 ( 210 ) includes an FLM switch 211 - 1 , a multi-PHY communications capability (e.g., a multi-port PHY 212 - 1 ), a management connection 213 - 1 and a link connection 214 - 1 .
  • FLM 2 ( 220 ) is similarly configured with an FLM switch 211 - 2 , multi-port PHY 212 - 2 , management connection 213 - 2 , and link connection 214 - 2 .
  • Communication paths originating from a first port on a NIC are illustrated as dash-single-dot lines, for example, communication path 1 ( 231 ) and communication path 3 ( 233 ).
  • Communication paths originating from a second port on a NIC are illustrated as dash-double-dot lines, for example, communication path 2 ( 232 ) and communication path 4 ( 234 ).
  • communication path 1 ( 231 ) flows from NIC 1 ( 230 - 1 ) of appliance 1 ( 205 ) to FLM switch 211 - 1 of FLM 1 ( 210 ).
  • Communication path 2 ( 232 ) flows from NIC 1 ( 230 - 1 ) of appliance 1 ( 205 ) to FLM switch 211 - 2 of FLM 2 ( 220 ).
  • FLM switch 211 - 2 of FLM 2 ( 220 ) would route traffic to its appropriate destination as indicated in the network packet.
  • heartbeat messages may be exchanged between appliance 1 ( 205 ) and appliance 2 ( 215 ).
  • A heartbeat message would then flow out of appliance 1 (205) into one of the two FLM Switches (211-1 or 211-2) and then flow out of that FLM Switch to a port on NIC 1 (230-2) of appliance 2 (215).
  • communication path 3 ( 233 ) flows from NIC 1 ( 230 - 2 ) of appliance 2 ( 215 ) to FLM switch 211 - 1 of FLM 1 ( 210 ).
  • Communication path 4 ( 234 ) flows from NIC 1 ( 230 - 2 ) of appliance 2 ( 215 ) to FLM switch 211 - 2 of FLM 2 ( 220 ).
  • each of appliance 1 (205) and appliance 2 (215) has redundant communication paths (for failover redundancy and high-availability, for example) between each of FLM 1 (210) and FLM 2 (220).
  • management data flow may be isolated from application data flow at each multi-port PHY to arrive at management interface 213 - 1 of FLM 1 ( 210 ) and management interface 213 - 2 of FLM 2 ( 220 ).
  • Application data flow may be directed out of each multi-port PHY to link connection 214 - 1 in FLM 1 ( 210 ) and link connection 214 - 2 of FLM 2 ( 220 ).
  • the link ports associated with link connection 214 - 2 link the frames together in a ring to form a management network.
  • the FLM switch may perform logical data segregation by directing appropriate traffic (e.g., management or application data) to the appropriate PHY on multi-port PHY 212 - 1 of FLM 1 ( 210 ) or multi-port PHY 212 - 2 of FLM 2 ( 210 ).
  • FIG. 2B is a functional block diagram extending the example of FIG. 2A with each of two appliances having multiple network interface cards with two ports each, according to one or more disclosed implementations.
  • FIG. 2B illustrates communication flows 250 representing an example of two physically isolated networks (e.g., a management network and a data network separated by an air gap for enhanced security). Redundant communication paths are maintained to provide high-availability.
  • This example is similar to the example of FIG. 2A because two frame link modules (FLM 1 260 and FLM 2 270) are redundantly connected to two appliances (appliance 1 (265) and appliance 2 (266)). However, in this example, each appliance has dual NICs, each having two ports.
  • appliance 1 ( 265 ) includes NIC 1 ( 230 - 1 ) that has a first port and a second port and NIC 2 ( 285 - 1 ) that has a first port and a second port, each as labeled in communication flow 250 .
  • Appliance 2 ( 266 ) similarly includes NIC 1 ( 230 - 2 ) with two ports and NIC 2 ( 285 - 2 ).
  • FLM 1 ( 260 ) includes an FLM switch 261 - 1 , a multi-PHY communications capability (e.g., a multi-port PHY 262 - 1 ), a management connection 213 - 1 and a link connection 214 - 1 .
  • FLM 2 ( 270 ) is similarly configured with an FLM switch 261 - 2 , multi-port PHY 262 - 2 , management connection 213 - 2 , and link connection 214 - 2 .
  • each of FLM 1 ( 260 ) and FLM 2 ( 270 ) further include additional interfaces.
  • FLM 1 ( 260 ) includes a segregated PHY that has four interfaces (i.e., PHY-A, PHY-B, PHY-C, and PHY-D) on each of multi-port PHYs 262 - 1 and 262 - 2 .
  • a selected PHY from PHY-A through PHY-D may provide a physically isolated (and additional) communication path to each of appliance 1 and appliance 2 .
  • FLM 1 ( 260 ) includes interface to appliance 1 ( 263 - 1 ) and interface to appliance 2 ( 263 - 2 ).
  • FLM 2 ( 270 ) includes interface to appliance 1 ( 273 - 1 ) and interface to appliance 2 ( 273 - 2 ).
  • communication flows 250 there are eight independent communication paths illustrated (as opposed to the four in communication flows 200 ). Again, one of ordinary skill in the art, given the benefit of this disclosure, will recognize that these specific communication paths are examples only and that data may be switched within FLM Switch 261 - 1 or 261 - 2 to follow different flow paths than the eight specifically illustrated. That is, an FLM switch may direct traffic based on its destination information as included in an individual network packet to the appropriate outbound port of the FLM switch. These additional communication paths may be utilized to provide full air gap security and segregation of management traffic and application data traffic, according to one or more disclosed implementations. For example, air gap security may be provided by PHY C and PHY D, while PHY A and PHY B send all management traffic.
  • the eight communication paths illustrated in communication flows 250 include: communication paths originating from a first port on a first NIC illustrated as a bold line, for example, communication path 1 ( 251 - 1 , 251 - 2 , and 251 - 3 ) and communication path 5 ( 255 - 1 ).
  • Communication paths originating from a second port on a first NIC are illustrated as dash-single-dot lines, for example, communication path 2 ( 252 - 1 , 252 - 2 , and 252 - 3 ) and communication path 6 ( 256 - 1 ).
  • Communication paths 1 , 2 , 5 , and 6 are similar to communications paths 1 - 4 , respectively, of FIG. 2A .
  • Additional communication paths are illustrated in communication flows 250 of FIG. 2B and originate from each of the additional NICs provided in each of appliance 1 ( 265 ) and appliance 2 ( 266 ).
  • Communication path 3 ( 253 - 1 ) and communication path 7 ( 257 - 1 ) respectively originate at port 1 of a second NIC (e.g., NIC 2 ( 285 - 1 and 285 - 2 )) and are illustrated as a long-dash-short-dash line.
  • Communication path 4 ( 254 - 1 ) and communication path 8 ( 258 - 1 ) respectively originate at port 2 of the second NIC (e.g., NIC 2 ( 285 - 1 and 285 - 2 )) and are illustrated as a bold dotted line.
  • each of appliance 1 (265) and appliance 2 (266) may utilize a data flow on a communication path that both bypasses each respective FLM switch (261-1 and 261-2) and isolates management traffic from link traffic on different ports of each multi-port PHY (262-1 and 262-2).
  • management traffic from appliance 1 ( 265 ) intended for management interface 213 - 1 of FLM 1 ( 260 ) may follow communication path 1 ( 251 - 1 , 251 - 2 , and 251 - 3 ).
  • This data would flow from appliance 1 ( 265 ), via NIC 1 ( 230 - 1 ) port 1 , to FLM switch 261 - 1 , to PHY-A on multi-port PHY 262 - 1 prior to reaching management interface 213 - 1 on FLM 1 ( 260 ).
  • management traffic from appliance 1 ( 265 ) may have an additional communication path (i.e., communication path 2 ( 252 - 1 , 252 - 2 , and 252 - 3 )) using FLM 2 ( 270 ).
  • communication path 2 (252-1, 252-2, and 252-3) may flow from appliance 1 (265), via NIC 1 (230-1) port 2, to FLM switch 261-2, to PHY-A on multi-port PHY 262-2 prior to reaching management interface 213-2 on FLM 2 (270).
  • Appliance 2 ( 266 ) is also illustrated, in communication flows 250 , as having similar redundant communication paths for management traffic using communication path 5 ( 255 - 1 , 255 - 2 , and 255 - 3 ) and communication path 6 ( 256 - 1 ) respectively from a first port and a second port of NIC 1 ( 230 - 2 ).
  • application data traffic may be completely and physically isolated from each of communication path 1 ( 251 (segments 1 , 2 , and 3 )), communication path 2 ( 252 (segments 1 , 2 , and 3 ), communication path 5 ( 255 (segments 1 , 2 , and 3 ), and communication path 6 ( 256 (only segment 1 is labeled for clarity but there are three segments illustrated)) that are used.
  • application data for appliance 1 ( 265 ) may utilize communication path 3 ( 253 - 1 ) to flow from appliance 1 ( 265 ), via NIC 2 ( 285 - 1 ) port 1 through PHY-C of multi-port PHY 262 - 1 and arrive at appliance 1 interface 263 - 1 on FLM 1 ( 260 ).
  • Redundant application data for appliance 1 ( 265 ) may utilize communication path 4 ( 254 - 1 ) to flow from appliance 1 ( 265 ), via NIC 2 ( 285 - 1 ) port 2 through PHY-C of multi-port PHY 262 - 2 and arrive at appliance 1 interface 273 - 1 on FLM 2 ( 270 ).
  • Application data for appliance 2 may utilize communication path 7 ( 257 - 1 ) and communication path 8 ( 258 - 1 ) from NIC 2 ( 285 - 2 ) on appliance 2 ( 266 ) to provide similar redundant paths as those explained for application data from appliance 1 .
  • the exception to the similarity is that in this example communication path 8 would flow through PHY-D of FLM 2 ( 270 ) multi-port PHY 262 - 2 (rather than PHY-C). Accordingly, complete physical isolation of management traffic and application data traffic using a frame-based scaleable compute resource may be achieved.
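The port-to-network mapping described for FIG. 2B can be summarized in a short sketch. The table below follows the paths described above for appliance 1 (NIC 1 ports carry management traffic through an FLM switch to PHY-A; NIC 2 ports carry application data directly to PHY-C), with appliance 2 analogous except that its second application-data path terminates at PHY-D of FLM 2. The dictionary layout and selection function are illustrative assumptions, not structures defined in the patent.

    # (appliance 1 NIC, NIC port) -> (FLM, PHY, traffic type), following communication
    # paths 1-4 described above; appliance 2 (paths 5-8) is analogous, except that its
    # second application-data path ends at PHY-D of FLM 2 instead of PHY-C.
    PATH_MAP = {
        ("NIC 1", 1): ("FLM 1", "PHY-A", "management"),
        ("NIC 1", 2): ("FLM 2", "PHY-A", "management"),
        ("NIC 2", 1): ("FLM 1", "PHY-C", "application"),
        ("NIC 2", 2): ("FLM 2", "PHY-C", "application"),
    }

    def route_for(traffic_type, prefer_flm="FLM 1"):
        """Pick a (NIC, port, FLM, PHY) tuple for a traffic type, preferring one FLM and
        keeping the path on the other FLM as the redundant fallback."""
        candidates = [(nic, port, flm, phy)
                      for (nic, port), (flm, phy, kind) in PATH_MAP.items()
                      if kind == traffic_type]
        preferred = [c for c in candidates if c[2] == prefer_flm]
        return (preferred or candidates)[0]

    if __name__ == "__main__":
        print(route_for("management"))            # management traffic stays on PHY-A
        print(route_for("application", "FLM 2"))  # application data never touches the management PHYs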
  • FIG. 3A is a functional block diagram illustrating communication flow 300 where data and management traffic are shared on a common physical network (and may be logically isolated using VLANs), according to one or more disclosed implementations.
  • communication flow 300 is not illustrated with redundant FLM modules as may be provided in a production implementation and described above for FIGS. 2A-B .
  • communication path 327 illustrates a bi-directional communication path between an external server 330 (which may be an application server providing a set of functionalities for a customer network) and FLM 305 - 1 .
  • the connection utilizes PHY 320 that is connected to port 4 of a multi-port FLM switch 315 .
  • FLM switch 315 is illustrated in this example as having 6 ports, but any number of ports is possible depending on design criteria for an FLM switch.
  • Appliance 1 325 is illustrated as utilizing communication path 326 as a bi-directional communication path to port 1 of FLM switch 315 on FLM 1 ( 305 - 1 ).
  • FLM 305 - 1 further illustrates FLM CPU 310 having a bi-directional communication path 329 to FLM switch 315 .
  • FLM CPU 310 may execute computer instructions to configure FLM switch 315 based on desired communication capabilities of a customer network environment.
  • data and management traffic may be shared on a single physical network and may be logically isolated on distinct VLANs (see FIG. 1 ).
  • FIG. 3B extends the example of FIG. 3A to illustrate communication flow 350 with isolated networks for data traffic (e.g., customer application data) and management traffic (e.g., management commands), according to one or more disclosed implementations.
  • FIG. 3B includes elements of FIG. 3A that maintain their reference numbers to indicate they are like components.
  • FLM CPU 310 , FLM Switch 315 , external server 330 , and appliance 1 ( 325 ) may not be different components between FIGS. 3A and 3B . However, additional components are illustrated to provide the illustrated data traffic segregation.
  • management traffic may include control commands to devices and components to configure those devices/components and issue run-time control commands (e.g., fan control, power consumption, etc.) to affect an operational state of devices/components to which those control commands are sent.
  • control commands may originate at FLM CPU 310 or from other firmware/hardware/software executing within a customer enterprise.
  • FLM 305 - 2 includes two PHYs, namely PHY-A 355 -A and PHY-B 355 -B that may be used to isolate management traffic (i.e., on PHY-B 355 -B) from customer application data traffic (i.e., on PHY-A 355 -A).
  • Communication path 356 may connect to PHY-A 355-A, which may not be connected to FLM switch 315. Because PHY-A may not be connected to FLM switch 315, it is possible to allow application data to bypass FLM switch 315 as necessary based on desired security configuration constraints of different production customer networks.
  • management network 360 is illustrated as connecting via communication link 357 to PHY-B 355 -B and thus to port 4 (via communication link 358 ) on FLM switch 315 .
  • Management data for appliance 1 ( 325 ) may flow on link 359 through port 1 of FLM switch 315 .
  • component architecture 400 is illustrated as a functional block diagram including two appliances, each having a NIC and a control area network (CAN) microcontroller (CANMIC), a midplane and a frame link module, with different possible communication paths illustrated, according to one or more disclosed implementations.
  • midplane 405 facilitates communication between components.
  • a midplane 405 is a component that, in some cases, may be plugged into a frame of a scaleable compute resource.
  • midplane 405 allows for the physical connections from the appliances to the FLM and routes the data and management port connections between devices. Accordingly, midplane 405 is illustrated by a dashed line in FIG. 4 that overlaps other components that midplane 405 may interact with. Although not shown in FIGS. 1-3B, a midplane component such as midplane 405 may be present as illustrated in FIG. 4. In some implementations, a midplane (such as midplane 405 in this example) may be designed to allow for plug-in of different versions of hardware and either achieve a full air gap segregation or not.
  • midplane 405's individual connections allow for an air gap implementation; however, achieving a full air gap security segregation will depend on capabilities and configuration of other components that communicate through midplane 405. As illustrated above in FIGS. 2A and 3A, some types of segregation of application data traffic and management data traffic may be accomplished through midplane 405. In other example implementations (e.g., FIGS. 2B and 3B), full air gap segregation may be achieved by leveraging additional architectural features of components (e.g., additional NICs in appliances). In either case, midplane 405 may support communication paths as configured via FLM CPU 310 and illustrated in FIGS. 1-4 of this disclosure.
  • an FLM CPU such as CPU 425 shown on FLM 435 of FIG. 4 may utilize various communication paths to configure appliance components and PHYs within an FLM.
  • Communication path 465 from CPU 425 to PHY 430 represents the configuration of the system side and the line side of PHY 430 .
  • Application data flow to PHY 430 from Appliance 1 ( 410 ) utilizes communication path 451 .
  • Communication path 470 represents a line side connection between PHY 430 and customer network 470 (which although not shown may include a connection to appliance 1 410 and appliance 2 420 ).
  • a communication path 453 between CPU 425 and CANMIC 412 that represents an example of CANBUS communication that was introduced above.
  • communication path 463 represents a CANBUS communication path between CPU 425 and CANMIC 422 of appliance 2 420 .
  • Each of the communication paths utilizing a CANBUS may be used to send configuration commands to a NIC within an appliance such as appliance 1 ( 410 ) and appliance 2 ( 420 ) of component architecture 400 . That is, CANMIC 412 of appliance 1 ( 410 ) may receive configuration information as management traffic from CPU 425 via communication path 453 . This configuration information may in turn be transmitted via communication path 452 from CANMIC 412 to NIC 411 and provide configuration information to NIC 411 .
  • remote management of a NIC may be accomplished via a CANBUS and thus provide another example of a technique to segregate management traffic from application data traffic within a scaleable compute resource such as a frame based system.
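As a rough illustration of this kind of CAN-based remote NIC management, the sketch below frames a hypothetical NIC configuration command and sends it toward a CANMIC (such as CANMIC 412 over communication path 453) using Linux SocketCAN. The arbitration ID, payload layout, and interface name are illustrative assumptions; the patent does not define a CAN message format.

    import socket
    import struct

    # Assumed arbitration ID for the CANMIC on appliance 1 (echoes reference numeral 412;
    # purely illustrative).
    CAN_ID_APPLIANCE1_CANMIC = 0x412

    def build_nic_config_payload(nic_index, port, vlan_id, enable):
        """Pack a hypothetical 8-byte NIC configuration payload."""
        return struct.pack(">BBHBxxx", nic_index, port, vlan_id, 1 if enable else 0)

    def send_can_frame(channel, can_id, payload):
        """Send one classic CAN frame using Linux SocketCAN (requires a CAN interface)."""
        frame = struct.pack("=IB3x8s", can_id, len(payload), payload.ljust(8, b"\x00"))
        with socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as sock:
            sock.bind((channel,))
            sock.send(frame)

    if __name__ == "__main__":
        payload = build_nic_config_payload(nic_index=1, port=1, vlan_id=4095, enable=True)
        send_can_frame("can0", CAN_ID_APPLIANCE1_CANMIC, payload)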
  • FIG. 5 represents an example method 500 for providing configuration of a segregated management network from an application data network (e.g., air gap isolation), according to one or more disclosed examples.
  • FIG. 5 illustrates method 500 which begins at block 505 with a FLM connected to a frame scaleable compute resource and indicates that the FLM (e.g., FLM CPU) may detect a connection to a midplane.
  • Block 510 indicates that the FLM may detect an appliance connection.
  • Block 515 indicates the FLM detects a PHY connection.
  • each detection may occur in an order different from the order of this example.
  • each component may be interrogated to obtain operational characteristics of the detected component. For example, a field replaceable unit (FRU) designation may be obtained from the component and used to determine capabilities of that component. Other operational information may also be obtained if available.
  • Block 520 indicates that the FLM CPU may validate connectivity and compatibility for each detected component.
  • Block 525 indicates that the FLM may communicate via a CAN bus (if appropriate) to configure appliance control communication paths through a segregated physical network (e.g., air gap).
  • Block 530 indicates that a system side connection to a customer network may be verified.
  • Block 535 indicates that a link side connection to an appliance may be verified.
  • Block 540 indicates that the FLM CPU may configure one or more available PHYs. For example, the FLM CPU may configure a component architecturally configured to support a full air gap isolation of networks using a CAN bus (via a CANMIC) as discussed above (see FIG. 4 ) or may configure a component to a best available security configuration for that component.
  • Block 545 indicates that, once a management network and application data network are configured, the scaleable compute resource will use these communication paths as appropriate for each type of data.
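A minimal sketch of blocks 505-545, assuming hypothetical component records and a returned action plan (the patent does not define data structures or an API for this method), might look like the following.

    def plan_network_isolation(midplane, appliances, phys):
        """Return an ordered list of configuration actions for the detected components."""
        plan = []
        # Blocks 505-515: midplane, appliance, and PHY connections have been detected
        # (possibly in a different order) and their FRU data read.
        detected = [midplane, *appliances, *phys]
        # Block 520: validate connectivity and compatibility of each detected component.
        for comp in detected:
            if not comp.get("compatible", False):
                raise RuntimeError(f"incompatible component: {comp['name']}")
        # Block 525: configure appliance control paths over the CAN bus so control
        # traffic uses the segregated physical network where possible.
        for appliance in appliances:
            plan.append(("can_configure_control_path", appliance["name"]))
        # Blocks 530-535: verify the system-side (customer network) and link-side
        # (appliance) connections.
        plan.append(("verify_system_side", "customer network"))
        plan.append(("verify_link_side", "appliances"))
        # Block 540: configure each PHY for air-gap isolation when the hardware
        # supports it, otherwise the best available (e.g., VLAN) segregation.
        for phy in phys:
            mode = "air_gap" if phy.get("air_gap_capable") else "vlan"
            plan.append(("configure_phy", phy["name"], mode))
        # Block 545: once applied, management and application data use separate paths.
        return plan

    if __name__ == "__main__":
        midplane = {"name": "midplane 405", "compatible": True}
        appliances = [{"name": "appliance 1", "compatible": True}]
        phys = [{"name": "PHY-A", "compatible": True, "air_gap_capable": True}]
        print(plan_network_isolation(midplane, appliances, phys))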
  • FIG. 6 is an example computing device 600 , with a hardware processor 601 (e.g., FLM CPU 310 of FIG. 3 ), and accessible machine-readable instructions stored on a machine-readable medium 602 for implementing one example configuration of NICs for network isolation within a scaleable compute resource, according to one or more disclosed example implementations.
  • FIG. 6 illustrates computing device 600 configured to perform the flow of method 500 as an example. However, computing device 600 may also be configured to perform the flow of other methods, techniques, functions, or processes described in this disclosure.
  • machine-readable storage medium 602 includes instructions to cause hardware processor 601 to perform blocks 505 - 545 discussed above with reference to FIG. 5 .
  • the machine-readable storage medium may be a non-transitory storage medium, where the term “non-transitory” does not encompass transitory propagating signals.
  • disclosed techniques represent several benefits to improve the art of system administration and improve the functioning and security of the overall scalable compute resource. These benefits include, but are not limited to, segregation of network management traffic from customer application data; and reuse of existing infrastructure components (including backward compatibility to components that may not be architecturally configured to recognize complete air gap isolation as disclosed).

Abstract

Remote configuration of network interface cards (NICs) on appliances of a scaleable compute resource such as a frame-based system is disclosed. Frames may include a frame link module (FLM). Remote configuration of NICs may allow for multiple networks to be maintained in physical or logical isolation from each other. For example, a management data network may be maintained independently of an application data network. An FLM CPU may detect an appliance, validate compatibility for the appliance, a midplane, a PHY connection, etc. Commands from the FLM CPU may configure the independent networks. Independent networks may provide redundancy and segregation by data type. A controller area network (CAN) bus may deliver configuration commands to a NIC of an attached appliance. Air gap equivalent isolation of networks based on type of network may be achieved while maintaining redundancy of networks to address potential failure of individual components.

Description

    BACKGROUND
  • In the field of scalable compute resources, network connectivity between nodes, blades, or frames of adjacent network modules may represent a primary communication path for sharing data between those nodes. The data may represent inputs to a compute process (e.g., data or applications), outputs of compute resources (e.g., compute results), communications to coordinate distributed processes, and other types of data. In some architectures, adjacent nodes of network modules within a blade server, cluster, or frame may be expected to be directly connected to each other using a control network to exchange coordination information amongst the set of devices working together. This control network, sometimes referred to as a management network, may be isolated from regular data traffic on an application network, sometimes referred to as a customer network, using a separate physical local area network (LAN) or logical network (e.g., virtual local area network (VLAN)).
  • In some systems, there may be more than one control network. Each control network may communicate across different physical media using an independent network interface card (NIC) port or may share a physical media using logical segregation. Different topology networks may be used, e.g., ring networks, star networks, point-to-point, etc. Control signals may use a LAN, VLAN, control area network (CAN), or other type of network. Additionally, some scalable compute resources may maintain a single active uplink from the group of resources and have multiple backup uplinks. For example, a single uplink to a customer network may be used to provide an application data communication path. In some customer environments, segregation of application data and the management network data that is used to monitor, configure, and control devices is desirable for a variety of reasons including performance, reliability, and security.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure may be better understood from the following detailed description when read with the accompanying Figures. It is emphasized that, in accordance with standard practice in the industry, various features are not drawn to scale. In fact, the dimensions or locations of functional attributes may be relocated or combined based on design, security, performance, or other factors known in the art of computer systems. Further, order of processing may be altered for some functions, both internally and with respect to each other. That is, some functions may not perform serial processing and therefore those functions may be performed in an order different than shown or possibly in parallel with each other. For a detailed description of various examples, reference will now be made to the accompanying drawings, in which:
  • FIG. 1 is a functional block diagram of a computer infrastructure including multiple frame scaleable compute resources, a customer VLAN, and a management VLAN, according to one or more disclosed implementations;
  • FIG. 2A is a functional block diagram representing an example of two frame link modules redundantly connected to two appliances that each have a single NIC having two ports, according to one or more disclosed implementations;
  • FIG. 2B is a functional block diagram extending the example of FIG. 2A with each of two appliances having multiple NICs with two ports each, according to one or more disclosed implementations;
  • FIG. 3A is a functional block diagram illustrating communication flow where data and management traffic are shared on a common network, according to one or more disclosed implementations;
  • FIG. 3B is a functional block diagram extending the example of FIG. 3A where data and management traffic are isolated on different networks, according to one or more disclosed implementations;
  • FIG. 4 is a functional block diagram illustrating two servers (appliances), each having a NIC and a control area network (CAN) microcontroller, a midplane and a frame link module, with different possible communication paths illustrated, according to one or more disclosed implementations;
  • FIG. 5 illustrates a flow chart depicting one example method for configuring components of FIG. 4 to isolate data from a management network and a customer network, according to one or more disclosed implementations; and
  • FIG. 6 illustrates an example computing device instrumented with computer instructions to perform the method of FIG. 5, according to one or more disclosed examples.
  • DETAILED DESCRIPTION
  • Illustrative examples of the subject matter claimed below will now be disclosed. In the interest of clarity, not all features of an actual implementation are described for every example implementation in this disclosure. It will be appreciated that in the development of any such actual example, numerous implementation-specific decisions may be made to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort, even if complex and time-consuming, would be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
  • At a high-level, disclosed techniques isolate management traffic, within and between components of a scaleable compute resource from application data traffic. Scaleable compute resources include, for example, a set of frames or a blade server with plugin components. As referenced here, both management traffic and application data traffic refer to network style communication between components to support overall functionality of a scaleable compute resource. Management traffic includes, among other data, control commands to direct power usage and cooling (e.g., fan control) at different components within a computer system.
  • For simplicity, the examples of this disclosure will be presented using Frames, Frame Link Modules (FLMs), Midplanes (e.g., a cross communication bus), and appliances (sometimes referenced as servers (e.g., the appliances are connected through the PHY to servers or blade server)). However, disclosed techniques may also be applicable to other types of scaleable compute resources. Other types of scaleable compute resources will have components that may be named differently but provide substantially similar functionality to the components used in disclosed examples. Accordingly, this disclosure is not limited to a Frame style implementation.
  • In one example implementation, two new communication ports may be added (e.g., as a multi-port NIC) to each appliance to allow data traffic from internal appliances (e.g., appliances plugged into a Frame) to be sent through the data network without entering the FLM switch. The FLM switch may additionally route packets between the internal management devices and to the external management network.
  • One example of adding communication ports and data flow paths is illustrated in FIGS. 2A-B discussed further below. Specifically, FIG. 2A illustrates an implementation without the additional ports and FIG. 2B illustrates the additional ports and communication paths. It is additionally worth noting here that one or more disclosed implementations may be accomplished without changes to other infrastructure components (e.g., existing Midplane components may be utilized without change).
  • According to disclosed examples, using at least two new communication paths from an appliance, data may be routed between new NIC ports (for example) to the FLM and the appropriate one of a management network or an application data network. New communication paths may allow remote management through existing channels to the appliance.
  • In one example implementation, data may bypass a management switch (within each FLM) as opposed to only being isolated using VLANs. Thus, a higher degree of security, sometimes referred to as “air gap” security may be achieved. In solutions without air gap isolation, management data may have traversed the FLM switch prior to being isolated within logically segregated VLANs. To accomplish an air gap solution, a CPU on the FLM may control PHY ports that are routed to the remote NIC ports of each appliance. Control of a PHY may be accomplished using management software that has an additional communication path through a control area network (CAN) bus. The CAN bus may be communicatively coupled to the CIM using CANMICs. See the discussion of FIGS. 3A-4 below. In short, an FLM CPU, possibly acting as a CAN master and communicating to CAN slaves on an appliance, may control PHY setup using control commands through a CAN bus that bypasses the FLM switch.
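As one hedged illustration of this PHY control, the sketch below shows how an FLM CPU might steer one port of a multi-port PHY toward either the management network or the application data network. The register offsets, the mdio_write callable, and the port encoding are illustrative assumptions; real PHY register maps are device specific and are not described in the patent.

    MANAGEMENT = "management"
    APPLICATION = "application"

    # Hypothetical register offsets (not from the patent or any specific PHY device).
    REG_PORT_CONTROL = 0x00
    REG_NETWORK_SELECT = 0x1C

    def configure_phy_port(mdio_write, phy_addr, port, network):
        """Steer one PHY port toward the management or application network."""
        select = 0x1 if network == MANAGEMENT else 0x2
        mdio_write(phy_addr, REG_NETWORK_SELECT, (port << 8) | select)  # line side
        mdio_write(phy_addr, REG_PORT_CONTROL, (port << 8) | 0x1)       # enable system side

    if __name__ == "__main__":
        writes = []
        fake_mdio_write = lambda addr, reg, val: writes.append((addr, reg, val))
        # e.g., PHY-A carries management traffic, PHY-C carries application data.
        configure_phy_port(fake_mdio_write, phy_addr=0x0A, port=0, network=MANAGEMENT)
        configure_phy_port(fake_mdio_write, phy_addr=0x0C, port=0, network=APPLICATION)
        print(writes)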
  • As a result of the additional and physically segregated (air gapped) communication paths, a solution providing an air gapping between the management traffic and the data traffic of the appliance may be achieved. As a result, disclosed implementations that include an air gap isolation provide a computer system where an increased security level may be achieved. Accordingly, these and other disclosed implementations represent an improvement to the functionality of a computer system, specifically with respect to data isolation and data security.
  • More discussion of a CAN and CANBUS communications is available in U.S. Pat. No. 10,055,322, entitled “Interpreting Signals Received from Redundant Buses,” issued Aug. 21, 2018, to Alex Gunnar Olson, which is incorporated by reference herein for all applicable purposes.
  • Having an understanding of the above overview, this disclosure now explains at least one non-limiting example implementation (and possible variants thereof). This example implementation is explained with reference to the figures that include: a functional block diagram of a system including multiple nodes of a scaleable resource that may benefit from the concepts of this disclosure (FIG. 1); a first example of two functional block diagrams illustrating an example implementation using two appliances that have a single NIC contrasted with an example implementation using two appliances that each have two NICs to facilitate isolation of different networks (FIGS. 2A-B); a second example of two functional block diagrams illustrating a shared common network and two isolated networks (FIGS. 3A-B); a functional block diagram illustrating two servers (appliances), each having a NIC and a control area network microcontroller (CANMIC), a midplane, and a frame link module, with different possible communication paths (FIG. 4); a flow chart depicting one example method for configuring components of FIG. 4 to isolate data from a management network and a customer data network (FIG. 5); and an example computing device instrumented with computer instructions to perform the method of FIG. 5 (FIG. 6) (all according to different possible disclosed implementations).
  • Referring to FIG. 1, an example computer infrastructure 100 is illustrated. In this example, customer network 105 is connected to a set of frames (represented by frame 1 (110), and frame 2 (115)). Of course, more than two frames may be present but for simplicity of this disclosure only two are shown in this example. As indicated by arrow 120-1, frame 1 may be configured with a set of blades (B1, B2, . . . BN) and an Appliance. There may be more than one Appliance within a single frame. Similarly, arrow 120-2 indicates that frame 2 may be configured in a like manner. Frame 1 further includes two network modules, namely network module 1 (140) and network module 2 (145) (sometimes referred to as a Frame Link Module (FLM)). Frame 2 also includes two network modules, namely network module 3 (150) and network module 4 (155). These network modules provide connectivity for the compute resources represented by the respective blades within their frame. Each of the blades is shown with a network connection to a network switch 160 respectively disposed within each individual network module (e.g., network module 1 (140) through network module 4 (155)). Each network module further includes a CPU 165 to facilitate configuration, monitoring, and maintenance of a corresponding network switch 160. As mentioned above, a blade may be referred to as an appliance or a server. In any case, a blade, appliance, server, in this context refers to a “plugin” component to a scaleable compute resource to provide additional compute capacity (or functionality) for the scaleable compute resources. Many different types of components may be used to augment a scaleable compute resource and each may be configured with redundant and isolated communication paths as disclosed herein. Specific functionality provided by these blades is not necessarily pertinent to this disclosure as disclosed techniques for control network isolation may be agnostic to functionality provided by any particular plugin component. Additionally, “plugin,” as used herein, may refer to insertion of a card into a slot in a backplane of a computer system, may represent attachment via a communication cable, or may represent one of many other types of connections used for computer system component attachment.
• Connectivity from a set of frames to a customer network is typically provided by a single active uplink 125 from one of the plurality of network switches that exist across the multiple FLMs of a group of connected frames. That is, all communications external to the group of connected frames pass through uplink 125. Other potential uplinks 126-1, 126-2, and 126-3 are illustrated to be available (e.g., if needed as a result of failure of uplink 125) from other network switches.
• As further illustrated in computer infrastructure 100, customer network VLAN 130 connects each of the network switches 160 in an Ethernet ring topology network and extends to the customer network 105 (e.g., includes VLANs 1-4094). A second ring network, 4095 management VLAN 135, is also shown as a logically isolated network in computer infrastructure 100. 4095 management VLAN 135 is shown in a bolder line than customer network VLAN 130 and also connects each of the network switches 160. Note that in a proper configuration of a group of frames, each network switch will be directly connected to each neighboring switch (either in the same frame or an adjacent frame) and no intervening network devices are present.
  • A virtual LAN (VLAN) refers to a broadcast domain that is partitioned and isolated (i.e., logically isolated) in a computer network at the data link layer (OSI layer 2). LAN is the abbreviation for local area network and when used in the context of a VLAN, “virtual” refers to a physical object recreated and altered by additional logic. A VLAN is a custom network created from one or more existing LANs. It enables groups of devices from multiple networks (both wired and wireless) to be combined into a single logical network. The result is a virtual LAN that can be administered like a physical local area network, for example 4095 management VLAN 135 in computer infrastructure 100. Note, a VLAN does not represent an “air gap” isolation of data traffic on a physical network from other VLANs on that same physical network. To achieve proper air gap isolation, a separate physical layer network may be used as explained below with reference to FIGS. 2B, 3B, and 4.
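• By way of illustration only (and not as part of the original disclosure), the following Python sketch models the kind of layer-2 logical isolation a VLAN provides: a tagged frame is forwarded only to ports that are members of its VLAN, so traffic on management VLAN 4095 never reaches a port that carries only customer VLANs 1-4094. The class and port names (e.g., VlanSwitchModel, uplink_125) are hypothetical.

```python
from dataclasses import dataclass

MGMT_VLAN = 4095                      # management VLAN from the example
CUSTOMER_VLANS = set(range(1, 4095))  # VLANs 1-4094 extended to the customer network

@dataclass
class TaggedFrame:
    vlan_id: int
    src_port: str
    payload: bytes

class VlanSwitchModel:
    """Toy model of layer-2 logical isolation: a frame is only forwarded
    to ports that are members of the frame's VLAN."""

    def __init__(self):
        self.membership: dict[str, set[int]] = {}  # port -> VLAN memberships

    def add_port(self, port: str, vlans: set[int]) -> None:
        self.membership[port] = vlans

    def forward(self, frame: TaggedFrame) -> list[str]:
        # Flood only to member ports, never back out the ingress port.
        return [port for port, vlans in self.membership.items()
                if frame.vlan_id in vlans and port != frame.src_port]

switch = VlanSwitchModel()
switch.add_port("uplink_125", CUSTOMER_VLANS)               # customer traffic only
switch.add_port("mgmt_ring", {MGMT_VLAN})                   # management ring only
switch.add_port("appliance", CUSTOMER_VLANS | {MGMT_VLAN})

hb = TaggedFrame(vlan_id=MGMT_VLAN, src_port="appliance", payload=b"heartbeat")
print(switch.forward(hb))  # ['mgmt_ring'] -- never the customer uplink
```

Note that this logical separation still shares the underlying physical media, which is why the air-gapped variants described below use physically separate PHYs.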
• Referring now to FIG. 2A, a functional block diagram illustrates communication flows 200 representing an example of two logically isolated networks (e.g., a management network and a data network), according to one or more disclosed implementations. In this example, two frame link modules (FLM 1 210 and FLM 2 220) are redundantly connected to two appliances (appliance 1 (205) and appliance 2 (215)). Each appliance, in this example, has a single NIC having two ports. Specifically, appliance 1 (205) includes NIC 1 (230-1) that has a first port and a second port as labeled in communication flow 200. Appliance 2 (215) similarly includes NIC 1 (230-2) with two ports. FLM 1 (210) includes an FLM switch 211-1, a multi-PHY communications capability (e.g., a multi-port PHY 212-1), a management connection 213-1, and a link connection 214-1. FLM 2 (220) is similarly configured with an FLM switch 211-2, multi-port PHY 212-2, management connection 213-2, and link connection 214-2.
• There are four logically independent (and redundant) communication paths illustrated for communication flows 200. Communication paths originating from a first port on a NIC are illustrated as dash-single-dot lines, for example, communication path 1 (231) and communication path 3 (233). Communication paths originating from a second port on a NIC are illustrated as dash-double-dot lines, for example, communication path 2 (232) and communication path 4 (234). In this example, communication path 1 (231) flows from NIC 1 (230-1) of appliance 1 (205) to FLM switch 211-1 of FLM 1 (210). Communication path 2 (232) flows from NIC 1 (230-1) of appliance 1 (205) to FLM switch 211-2 of FLM 2 (220). One of ordinary skill in the art, given the benefit of this disclosure, will recognize that this is only one possible data path for network data because FLM switch 211-2 of FLM 2 (220) would route traffic to its appropriate destination as indicated in the network packet. For example, heartbeat messages may be exchanged between appliance 1 (205) and appliance 2 (215). Heartbeat messages would then flow out of appliance 1 (205) into one of the two FLM Switches (211-1 or 211-2) and then flow out of that FLM Switch to a port on NIC 1 (230-2) of appliance 2 (215). In this example, however, communication path 3 (233) flows from NIC 1 (230-2) of appliance 2 (215) to FLM switch 211-1 of FLM 1 (210). Communication path 4 (234) flows from NIC 1 (230-2) of appliance 2 (215) to FLM switch 211-2 of FLM 2 (220). In this manner, each of appliance 1 (205) and appliance 2 (215) has redundant communication paths (for failover redundancy and high availability, for example) between each of FLM 1 (210) and FLM 2 (220).
• In this example, management data flow may be isolated from application data flow at each multi-port PHY to arrive at management interface 213-1 of FLM 1 (210) and management interface 213-2 of FLM 2 (220). Application data flow may be directed out of each multi-port PHY to link connection 214-1 in FLM 1 (210) and link connection 214-2 of FLM 2 (220). The link ports associated with link connection 214-2 link the frames together in a ring to form a management network. Thus, the FLM switch may perform logical data segregation by directing appropriate traffic (e.g., management or application data) to the appropriate PHY on multi-port PHY 212-1 of FLM 1 (210) or multi-port PHY 212-2 of FLM 2 (220).
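• As a non-authoritative sketch of the segregation just described, the following Python fragment shows one way an FLM switch could steer traffic by class: management traffic toward the PHY feeding the management interface, and application data toward the PHY feeding the link connection. The classification rule (VLAN 4095 means management) and all names are assumptions made for illustration, not the patented implementation.

```python
from enum import Enum, auto

class TrafficClass(Enum):
    MANAGEMENT = auto()
    APPLICATION = auto()

# Per-FLM steering table: traffic class -> output on the multi-port PHY.
# The management interface and link connection loosely correspond to the
# 213-x and 214-x connections in the example above.
PHY_STEERING = {
    TrafficClass.MANAGEMENT: "PHY toward management interface (e.g., 213-1)",
    TrafficClass.APPLICATION: "PHY toward link connection (e.g., 214-1)",
}

def classify(vlan_id: int) -> TrafficClass:
    """Assumed classification rule: VLAN 4095 carries management traffic."""
    return TrafficClass.MANAGEMENT if vlan_id == 4095 else TrafficClass.APPLICATION

def steer(vlan_id: int) -> str:
    """Return the multi-port-PHY output an FLM switch could use for a frame."""
    return PHY_STEERING[classify(vlan_id)]

print(steer(4095))  # management interface
print(steer(120))   # link connection (application data)
```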
• FIG. 2B is a functional block diagram extending the example of FIG. 2A with each of two appliances having multiple network interface cards with two ports each, according to one or more disclosed implementations. FIG. 2B illustrates communication flows 250 representing an example of two physically isolated networks (e.g., a management network and a data network separated by an air gap for enhanced security). Redundant communication paths are maintained to provide high availability. This example is similar to the example of FIG. 2A because two frame link modules (FLM 1 260 and FLM 2 270) are redundantly connected to two appliances (appliance 1 (265) and appliance 2 (266)). However, in this example, each appliance has dual NICs, each having two ports.
  • Specifically, appliance 1 (265) includes NIC 1 (230-1) that has a first port and a second port and NIC 2 (285-1) that has a first port and a second port, each as labeled in communication flow 250. Appliance 2 (266) similarly includes NIC 1 (230-2) with two ports and NIC 2 (285-2). FLM 1 (260) includes an FLM switch 261-1, a multi-PHY communications capability (e.g., a multi-port PHY 262-1), a management connection 213-1 and a link connection 214-1. FLM 2 (270) is similarly configured with an FLM switch 261-2, multi-port PHY 262-2, management connection 213-2, and link connection 214-2.
• In contrast to FIG. 2A, each of FLM 1 (260) and FLM 2 (270) further includes additional interfaces. Each of multi-port PHYs 262-1 and 262-2 is a segregated PHY that has four interfaces (i.e., PHY-A, PHY-B, PHY-C, and PHY-D). On each of these multi-port PHYs, a selected PHY from PHY-A through PHY-D may provide a physically isolated (and additional) communication path to each of appliance 1 and appliance 2. Namely, FLM 1 (260) includes an interface to appliance 1 (263-1) and an interface to appliance 2 (263-2). FLM 2 (270) includes an interface to appliance 1 (273-1) and an interface to appliance 2 (273-2).
• In communication flows 250, there are eight independent communication paths illustrated (as opposed to the four in communication flows 200). Again, one of ordinary skill in the art, given the benefit of this disclosure, will recognize that these specific communication paths are examples only and that data may be switched within FLM Switch 261-1 or 261-2 to follow different flow paths than the eight specifically illustrated. That is, an FLM switch may direct traffic based on its destination information as included in an individual network packet to the appropriate outbound port of the FLM switch. These additional communication paths may be utilized to provide full air gap security and segregation of management traffic and application data traffic, according to one or more disclosed implementations. For example, air gap security may be provided by PHY-C and PHY-D, while PHY-A and PHY-B send all management traffic. The eight communication paths illustrated in communication flows 250 include: communication paths originating from a first port on a first NIC illustrated as a bold line, for example, communication path 1 (251-1, 251-2, and 251-3) and communication path 5 (255-1). Communication paths originating from a second port on a first NIC are illustrated as dash-single-dot lines, for example, communication path 2 (252-1, 252-2, and 252-3) and communication path 6 (256-1). Communication paths 1, 2, 5, and 6 are similar to communication paths 1-4, respectively, of FIG. 2A.
• Additional communication paths (relative to FIG. 2A) are illustrated in communication flows 250 of FIG. 2B and originate from each of the additional NICs provided in each of appliance 1 (265) and appliance 2 (266). Communication path 3 (253-1) and communication path 7 (257-1) respectively originate at port 1 of a second NIC (e.g., NIC 2 (285-1 and 285-2)) and are illustrated as a long-dash-short-dash line. Communication path 4 (254-1) and communication path 8 (258-1) respectively originate at port 2 of the second NIC (e.g., NIC 2 (285-1 and 285-2)) and are illustrated as a bold dotted line. In this manner, each of appliance 1 (265) and appliance 2 (266) may utilize a data flow on a communication path that both bypasses each respective FLM switch (261-1 and 261-2) and isolates management traffic from link traffic on different ports of each multi-port PHY (262-1 and 262-2).
• Specifically, management traffic from appliance 1 (265) intended for management interface 213-1 of FLM 1 (260) may follow communication path 1 (251-1, 251-2, and 251-3). This data would flow from appliance 1 (265), via NIC 1 (230-1) port 1, to FLM switch 261-1, to PHY-A on multi-port PHY 262-1 prior to reaching management interface 213-1 on FLM 1 (260). For redundancy and fault tolerance, for example, management traffic from appliance 1 (265) may have an additional communication path (i.e., communication path 2 (252-1, 252-2, and 252-3)) using FLM 2 (270). As illustrated, communication path 2 (252-1, 252-2, and 252-3) may flow from appliance 1 (265), via NIC 1 (230-1) port 2, to FLM switch 261-2, to PHY-A on multi-port PHY 262-2 prior to reaching management interface 213-2 on FLM 2 (270). Appliance 2 (266) is also illustrated, in communication flows 250, as having similar redundant communication paths for management traffic using communication path 5 (255-1, 255-2, and 255-3) and communication path 6 (256-1) respectively from a first port and a second port of NIC 1 (230-2).
• Continuing with this example, application data traffic may be completely and physically isolated from each of communication path 1 (251 (segments 1, 2, and 3)), communication path 2 (252 (segments 1, 2, and 3)), communication path 5 (255 (segments 1, 2, and 3)), and communication path 6 (256 (only segment 1 is labeled for clarity but there are three segments illustrated)) that are used for management traffic. Specifically, application data for appliance 1 (265) may utilize communication path 3 (253-1) to flow from appliance 1 (265), via NIC 2 (285-1) port 1 through PHY-C of multi-port PHY 262-1 and arrive at appliance 1 interface 263-1 on FLM 1 (260). Redundant application data for appliance 1 (265) may utilize communication path 4 (254-1) to flow from appliance 1 (265), via NIC 2 (285-1) port 2 through PHY-C of multi-port PHY 262-2 and arrive at appliance 1 interface 273-1 on FLM 2 (270). Application data for appliance 2 (266) may utilize communication path 7 (257-1) and communication path 8 (258-1) from NIC 2 (285-2) on appliance 2 (266) to provide similar redundant paths as those explained for application data from appliance 1. The exception to the similarity is that in this example communication path 8 would flow through PHY-D of FLM 2 (270) multi-port PHY 262-2 (rather than PHY-C). Accordingly, complete physical isolation of management traffic and application data traffic using a frame-based scaleable compute resource may be achieved. One of ordinary skill in the art, given the benefit of this disclosure, would recognize that names of PHYs used in this example are arbitrary and no special meaning is implied by the letters “A-D” other than to show there are four independent PHYs on each of multi-port PHYs 262-1 and 262-2.
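• The redundant, air-gapped arrangement described above for appliance 1 (265) can be summarized as a small wiring table. The sketch below is hypothetical and covers only the four appliance 1 paths spelled out in the text (appliance 2 mirrors it); it checks the two properties the example relies on: management and application traffic never share an (FLM, PHY) media pair, and application data bypasses the FLM switch.

```python
# Illustrative wiring table for the appliance 1 paths of FIG. 2B (names and
# layout are assumptions, not the patented implementation): NIC 1 carries
# management traffic through the FLM switch to PHY-A, while NIC 2 carries
# application data directly to PHY-C, bypassing the FLM switch.

APPLIANCE_1_PATHS = [
    # path, nic,     port, flm,     via_flm_switch, phy,     traffic
    ("1",  "NIC 1",  1,    "FLM 1", True,           "PHY-A", "management"),
    ("2",  "NIC 1",  2,    "FLM 2", True,           "PHY-A", "management"),
    ("3",  "NIC 2",  1,    "FLM 1", False,          "PHY-C", "application"),
    ("4",  "NIC 2",  2,    "FLM 2", False,          "PHY-C", "application"),
]

def is_air_gapped(paths) -> bool:
    """True when management and application traffic never share an (FLM, PHY)
    media pair and application data never traverses an FLM switch."""
    mgmt = {(flm, phy) for _, _, _, flm, _, phy, kind in paths if kind == "management"}
    data = {(flm, phy) for _, _, _, flm, _, phy, kind in paths if kind == "application"}
    no_shared_media = mgmt.isdisjoint(data)
    data_bypasses_switch = all(not via_sw for _, _, _, _, via_sw, _, kind in paths
                               if kind == "application")
    return no_shared_media and data_bypasses_switch

print(is_air_gapped(APPLIANCE_1_PATHS))  # True
```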
• FIG. 3A is a functional block diagram illustrating communication flow 300 where data and management traffic are shared on a common physical network (and may be logically isolated using VLANs), according to one or more disclosed implementations. For simplicity, communication flow 300 is not illustrated with redundant FLM modules as may be provided in a production implementation and described above for FIGS. 2A-B. In communication flow 300, communication path 327 illustrates a bi-directional communication path between an external server 330 (which may be an application server providing a set of functionalities for a customer network) and FLM 305-1. The connection utilizes PHY 320 that is connected to port 4 of a multi-port FLM switch 315. FLM switch 315 is illustrated in this example as having 6 ports but any number of ports is possible depending on design criteria for an FLM switch. Appliance 1 325 is illustrated as utilizing communication path 326 as a bi-directional communication path to port 1 of FLM switch 315 on FLM 1 (305-1). FLM 305-1 further illustrates FLM CPU 310 having a bi-directional communication path 329 to FLM switch 315. As explained throughout this disclosure, FLM CPU 310 may execute computer instructions to configure FLM switch 315 based on desired communication capabilities of a customer network environment. Also, in this example, data and management traffic may be shared on a single physical network and may be logically isolated on distinct VLANs (see FIG. 1).
  • FIG. 3B extends the example of FIG. 3A to illustrate communication flow 350 with isolated networks for data traffic (e.g., customer application data) and management traffic (e.g., management commands), according to one or more disclosed implementations. FIG. 3B includes elements of FIG. 3A that maintain their reference numbers to indicate they are like components. For example, FLM CPU 310, FLM Switch 315, external server 330, and appliance 1 (325) may not be different components between FIGS. 3A and 3B. However, additional components are illustrated to provide the illustrated data traffic segregation.
• As illustrated in FIG. 3B, a physical separation of an application data network and management network 360 has been introduced. Although not specifically illustrated in communication flow 350, additional components (including appliance 1 325 (via link 359)) may be connected to management network 360. As discussed throughout this disclosure, management traffic may include control commands to devices and components to configure those devices/components and issue run-time control commands (e.g., fan control, power consumption, etc.) to affect an operational state of devices/components to which those control commands are sent. Further, control commands may originate at FLM CPU 310 or from other firmware/hardware/software executing within a customer enterprise. As illustrated, FLM 305-2 includes two PHYs, namely PHY-A 355-A and PHY-B 355-B, that may be used to isolate management traffic (i.e., on PHY-B 355-B) from customer application data traffic (i.e., on PHY-A 355-A). As further illustrated, PHY-A 355-A may utilize communication path 356 to carry customer application data traffic and may not be connected to FLM switch 315. Because PHY-A may not be connected to FLM switch 315, it is possible to allow application data to bypass FLM switch 315 as necessary based on desired security configuration constraints of different production customer networks. In contrast, management network 360 is illustrated as connecting via communication link 357 to PHY-B 355-B and thus to port 4 (via communication link 358) on FLM switch 315. Management data for appliance 1 (325) may flow on link 359 through port 1 of FLM switch 315.
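• A compact way to capture the FIG. 3B wiring just described is a port map in which PHY-B reaches port 4 of FLM switch 315 while PHY-A has no switch port at all. The sketch below is illustrative only (the dictionary keys and helper function are hypothetical), but it reflects the stated property that application data on PHY-A bypasses the FLM switch while management traffic is switched.

```python
# Hypothetical port map for the FIG. 3B arrangement.
FLM_3B_WIRING = {
    "PHY-A": {"switch_port": None, "network": "application data (bypasses FLM switch 315)"},
    "PHY-B": {"switch_port": 4, "network": "management network 360"},
    "appliance_1_link_359": {"switch_port": 1, "network": "management data for appliance 1"},
}

def traverses_switch(endpoint: str) -> bool:
    """True when traffic at this endpoint is handled by FLM switch 315."""
    return FLM_3B_WIRING[endpoint]["switch_port"] is not None

assert not traverses_switch("PHY-A")  # application data never touches the switch
assert traverses_switch("PHY-B")      # management data enters the switch at port 4
```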
• Referring now to FIG. 4, component architecture 400 is illustrated as a functional block diagram including two appliances, each having a NIC and a controller area network (CAN) microcontroller (CANMIC), a midplane, and a frame link module, with different possible communication paths illustrated, according to one or more disclosed implementations. In this disclosed example, midplane 405 facilitates communication between components. Midplane 405 is a component that, in some cases, may be plugged into a frame of a scaleable compute resource.
• In general, midplane 405 allows for the physical connections from the appliances to the FLM and routes the data and management port connections between devices. Accordingly, midplane 405 is illustrated by a dashed line in FIG. 4 that overlaps other components that midplane 405 may interact with. Although not shown in FIGS. 1-3B, a midplane component such as midplane 405 may be present as illustrated in FIG. 4. In some implementations, a midplane (such as midplane 405 in this example) may be designed to allow for plug-in of different versions of hardware and either achieve a full air gap segregation or not. Within midplane 405, individual connections allow for an air gap implementation; however, achieving a full air gap security segregation will depend on capabilities and configuration of other components that communicate through midplane 405. As illustrated above in FIGS. 2A and 3A, some types of segregation of application data traffic and management data traffic may be accomplished through midplane 405. In other example implementations (e.g., FIGS. 2B and 3B), full air gap segregation may be achieved by leveraging additional architectural features of components (e.g., additional NICs in appliances). In either case, midplane 405 may support communication paths as configured via FLM CPU 310 and illustrated in FIGS. 1-4 of this disclosure.
• As illustrated in component architecture 400 and mentioned above, an FLM CPU such as CPU 425 shown on FLM 435 of FIG. 4 may utilize various communication paths to configure appliance components and PHYs within an FLM. Communication path 465 from CPU 425 to PHY 430 represents the configuration of the system side and the line side of PHY 430. Application data flow to PHY 430 from appliance 1 (410) utilizes communication path 451. Communication path 470 represents a line side connection between PHY 430 and customer network 470 (which, although not shown, may include a connection to appliance 1 (410) and appliance 2 (420)). Specifically illustrated in component architecture 400 is a communication path 453 between CPU 425 and CANMIC 412 that represents an example of CANBUS communication that was introduced above. Similarly, communication path 463 represents a CANBUS communication path between CPU 425 and CANMIC 422 of appliance 2 420. Each of the communication paths utilizing a CANBUS (i.e., communication paths 453 and 463) may be used to send configuration commands to a NIC within an appliance such as appliance 1 (410) or appliance 2 (420) of component architecture 400. That is, CANMIC 412 of appliance 1 (410) may receive configuration information as management traffic from CPU 425 via communication path 453. This configuration information may in turn be transmitted via communication path 452 from CANMIC 412 to NIC 411 to configure NIC 411. A similar configuration information path exists for appliance 2 (420) using communication path 463 and communication path 462. In this example, remote management of a NIC may be accomplished via a CANBUS and thus provides another example of a technique to segregate management traffic from application data traffic within a scaleable compute resource such as a frame-based system.
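• To make the CANBUS-based remote NIC configuration concrete, the sketch below models a classic CAN frame (11-bit identifier, at most 8 data bytes) carrying a port-configuration command from an FLM CPU to a CANMIC, which would then apply it to the NIC. The command code, arbitration ID, and port-to-network mapping are illustrative assumptions, not values defined by this disclosure.

```python
# Hypothetical sketch of carrying a NIC configuration command over a CAN bus,
# as in the FLM-CPU -> CANMIC -> NIC path of FIG. 4. Frame layout, IDs, and
# command codes are assumed for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class CanFrame:
    arbitration_id: int  # 11-bit standard identifier
    data: bytes          # 0-8 bytes of payload (classic CAN limit)

    def __post_init__(self):
        if not 0 <= self.arbitration_id < 0x800:
            raise ValueError("standard CAN IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payload is at most 8 bytes")

# Assumed command code understood by the CANMIC firmware (illustrative).
CMD_SET_PORT_NETWORK = 0x01  # assign one NIC port to a network role

def build_nic_config_frame(canmic_id: int, nic_port: int, mgmt: bool) -> CanFrame:
    """Build a frame asking the CANMIC to configure one NIC port."""
    role = 0x01 if mgmt else 0x00  # 0x01 = management network, 0x00 = application data
    return CanFrame(arbitration_id=canmic_id,
                    data=bytes([CMD_SET_PORT_NETWORK, nic_port, role]))

# FLM CPU configuring an appliance NIC: port 1 -> application data, port 2 -> management.
# The arbitration ID 0x412 is a made-up address for the appliance's CANMIC.
frames = [build_nic_config_frame(0x412, nic_port=1, mgmt=False),
          build_nic_config_frame(0x412, nic_port=2, mgmt=True)]
for f in frames:
    print(hex(f.arbitration_id), f.data.hex())
```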
• FIG. 5 represents an example method 500 for configuring a management network segregated from an application data network (e.g., air gap isolation), according to one or more disclosed examples. FIG. 5 illustrates method 500, which begins at block 505 with an FLM connected to a frame scaleable compute resource and indicates that the FLM (e.g., FLM CPU) may detect a connection to a midplane. Block 510 indicates that the FLM may detect an appliance connection. Block 515 indicates that the FLM detects a PHY connection. Of course, the detections may occur in an order different from the order of this example. Upon detection, each component may be interrogated to obtain operational characteristics of the detected component. For example, a field replaceable unit (FRU) designation may be obtained from the component and used to determine capabilities of that component. Other operational information may also be obtained if available.
  • Block 520 indicates that the FLM CPU may validate connectivity and compatibility for each detected component. Block 525 indicates that the FLM may communicate via a CAN bus (if appropriate) to configure appliance control communication paths through a segregated physical network (e.g., air gap). Block 530 indicates that a system side connection to a customer network may be verified. Block 535 indicates that a link side connection to an appliance may be verified. Block 540 indicates that the FLM CPU may configure one or more available PHYs. For example, the FLM CPU may configure a component architecturally configured to support a full air gap isolation of networks using a CAN bus (via a CANMIC) as discussed above (see FIG. 4) or may configure a component to a best available security configuration for that component. Block 545 indicates that, once a management network and application data network are configured, the scaleable compute resource will use these communication paths as appropriate for each type of data. In some cases, there may be separate physical networks to represent a full air gap isolation or there may be a logical isolation (e.g., using separate VLANs).
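• The following Python sketch loosely mirrors the flow of method 500 (blocks 505-545): detect components, read their FRU designations, validate compatibility, and then configure either full air-gap isolation (when a second NIC and a CAN path are available) or fall back to VLAN-based logical isolation. All names and the validation rule are hypothetical simplifications of the disclosed method.

```python
# Minimal, assumption-laden sketch of the FIG. 5 flow; not the actual firmware.

from dataclasses import dataclass

@dataclass
class DetectedComponent:
    kind: str            # "midplane" | "appliance" | "phy"
    fru: str             # field replaceable unit designation
    dual_nic: bool = False
    can_bus: bool = False

def validate(components: list[DetectedComponent]) -> bool:
    """Block 520: crude compatibility check over the detected component set."""
    kinds = {c.kind for c in components}
    return {"midplane", "appliance", "phy"} <= kinds

def configure(components: list[DetectedComponent]) -> str:
    """Blocks 525-545: pick the best isolation the detected hardware supports."""
    if not validate(components):
        raise RuntimeError("incompatible or missing components")
    appliance = next(c for c in components if c.kind == "appliance")
    if appliance.dual_nic and appliance.can_bus:
        # Configure the second NIC over the CAN bus for a physically separate network.
        return "air-gap isolation (separate physical networks)"
    # Fall back to logical isolation on shared media.
    return "logical isolation (management VLAN 4095)"

detected = [DetectedComponent("midplane", fru="MP-01"),
            DetectedComponent("appliance", fru="APPL-02", dual_nic=True, can_bus=True),
            DetectedComponent("phy", fru="PHY-4PORT")]
print(configure(detected))  # air-gap isolation (separate physical networks)
```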
  • FIG. 6 is an example computing device 600, with a hardware processor 601 (e.g., FLM CPU 310 of FIG. 3), and accessible machine-readable instructions stored on a machine-readable medium 602 for implementing one example configuration of NICs for network isolation within a scaleable compute resource, according to one or more disclosed example implementations. FIG. 6 illustrates computing device 600 configured to perform the flow of method 500 as an example. However, computing device 600 may also be configured to perform the flow of other methods, techniques, functions, or processes described in this disclosure. In this example of FIG. 6, machine-readable storage medium 602 includes instructions to cause hardware processor 601 to perform blocks 505-545 discussed above with reference to FIG. 5.
• A machine-readable storage medium, such as 602 of FIG. 6, may include both volatile and nonvolatile, removable and non-removable media, and may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions, data structures, program modules, or other data accessible to a processor, for example firmware, erasable programmable read-only memory (EPROM), random access memory (RAM), non-volatile random access memory (NVRAM), optical disk, solid state drive (SSD), flash memory chips, and the like. The machine-readable storage medium may be a non-transitory storage medium, where the term “non-transitory” does not encompass transitory propagating signals.
• One of skill in the art, given the benefit of this disclosure, will recognize that the disclosed techniques provide several benefits that improve the art of system administration and improve the functioning and security of the overall scaleable compute resource. These benefits include, but are not limited to, segregation of network management traffic from customer application data, and reuse of existing infrastructure components (including backward compatibility with components that may not be architecturally configured to recognize complete air gap isolation as disclosed). In a case where a component is not architecturally constructed to include additional NICs (e.g., FIG. 2A), disclosed management software techniques (e.g., the method of FIG. 5) may allow that component to function within the system using a best possible security configuration (e.g., isolation but not complete air gap isolation). Thus, disclosed techniques may be implemented without forcing a complete system upgrade for all components concurrently within a customer enterprise and allow customers a migration path to enhanced security.
  • Certain terms have been used throughout this description and claims to refer to particular system components. As one skilled in the art will appreciate, different parties may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In this disclosure and claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” or “couples” is intended to mean either an indirect or direct wired or wireless connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections. The recitation “based on” is intended to mean “based at least in part on.” Therefore, if X is based on Y, X may be a function of Y and any number of other factors.
  • The above discussion is meant to be illustrative of the principles and various implementations of the present disclosure. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims (20)

What is claimed is:
1. A computer-implemented method for dynamically configuring network isolation between two networks, the method comprising:
detecting, by a frame link module (FLM), a first appliance connection to a frame scaleable compute resource;
detecting a first PHY connection on a multi-port PHY interface of the FLM;
validating, using an FLM central processing unit (CPU), compatibility for the first appliance, a midplane, and the first PHY connection;
initiating configuration commands to a first network interface card (NIC) port of the first appliance to configure a first network for application data;
initiating configuration commands to a second NIC port of the first appliance to configure a second network for management data; and
initiating communication on the first network and the second network,
wherein the management data is maintained and transmitted on the second network, internally within the scaleable compute resource, and the application data is transmitted on the first network, and
wherein the first network is isolated from the second network.
2. The computer-implemented method of claim 1, wherein:
the first network and the second network are logically isolated.
3. The computer-implemented method of claim 1, wherein logical isolation of networks is provided using independent VLANs of the scaleable compute resource.
4. The computer-implemented method of claim 1, wherein the configuration commands are initiated in response to detection of attachment of the first appliance while the scaleable compute resource is running.
5. The computer-implemented method of claim 1, wherein the configuration commands are initiated as part of a startup boot sequence for the scaleable compute resource.
6. The computer-implemented method of claim 1, wherein the first NIC port and the second NIC port are on a first NIC.
7. The computer-implemented method of claim 6, wherein the configuration commands are delivered to the first NIC via a controller area network (CAN) bus.
8. The computer-implemented method of claim 6, further comprising:
detecting a second PHY connection on the multi-port PHY interface of the FLM;
validating, using the FLM CPU, compatibility for the first appliance, the midplane, and the second PHY connection;
initiating configuration commands to a second NIC, different from the first NIC, on the first appliance, the second NIC including a second NIC first port and a second NIC second port;
using the configuration commands to the second NIC to configure a third network for management data as a redundant management data network to the second network using the second NIC first port; and
using the configuration commands to the second NIC to configure a fourth network for application data, as a redundant application data network to the first network, using the second NIC second port,
wherein each application data network maintains physical isolation from each management data network on all components within the scaleable compute resource.
9. The computer-implemented method of claim 8, wherein the physical isolation of each of the management data network and the redundant management data network from each of the application data network and the redundant application data network includes air gap equivalent isolation and electrical signals of each management data network and each application data network are isolated based on type, within the scaleable compute resource, from being present on any single physical network media transport.
10. The computer-implemented method of claim 8, wherein the configuration commands to the second NIC are initiated in response to detection of attachment of the first appliance and a determination by the FLM CPU that the first appliance includes multiple NICs.
11. The computer-implemented method of claim 8, wherein the configuration commands for the second NIC are delivered to the second NIC via a controller area network (CAN) bus.
12. The computer-implemented method of claim 8, wherein the configuration commands for the second NIC are initiated in response to detection of attachment of the first appliance while the scaleable compute resource is running.
13. The computer-implemented method of claim 8, wherein the configuration commands for the second NIC are initiated as part of a startup boot sequence for the scaleable compute resource.
14. A non-transitory computer readable medium comprising computer executable instructions that, when executed by one or more processing units, cause the one or more processing units to:
detect, by a frame link module (FLM), a first appliance connection to a frame scaleable compute resource;
detect a first PHY connection on a multi-port PHY interface of the FLM;
validate, using an FLM central processing unit (CPU) as one of the one or more processing units, compatibility for the first appliance, a midplane, and the first PHY connection;
initiate configuration commands to a first network interface card (NIC) port of the first appliance to configure a first network for application data;
initiate configuration commands to a second NIC port of the first appliance to configure a second network for management data; and
initiate communication on the first network and the second network,
wherein management data is maintained and transmitted on the second network, internally within the scaleable compute resource, and the application data is transmitted on the first network, and
wherein the first network is isolated from the second network.
15. The non-transitory computer readable medium of claim 14, further comprising computer executable instructions that, when executed by one or more processing units, cause the one or more processing units to:
initiate configuration commands to a first NIC including the first NIC port responsive to detection of attachment of the first appliance while the scaleable compute resource is running.
16. The non-transitory computer readable medium of claim 15, further comprising computer executable instructions that, when executed by one or more processing units, cause the one or more processing units to:
deliver configuration commands to the first NIC via a controller area network (CAN) bus.
17. The non-transitory computer readable medium of claim 14, further comprising computer executable instructions that, when executed by one or more processing units, cause the one or more processing units to:
detect a second PHY connection on the multi-port PHY interface of the FLM;
validate, using the FLM CPU, compatibility for the first appliance, the midplane, and the second PHY connection;
initiate configuration commands to a second NIC, different from the first NIC, on the first appliance, the second NIC including a second NIC first port and a second NIC second port;
use the configuration commands to the second NIC to configure a third network for management data as a redundant management data network to the second network using the second NIC first port; and
use the configuration commands to the second NIC to configure a fourth network for application data as a redundant application data network to the first network using the second NIC second port,
wherein each application data network maintains physical isolation from each management data network on all components within the scaleable compute resource.
18. A frame link module (FLM) within a scaleable compute resource, the FLM comprising:
an FLM central processing unit (CPU);
a multi-port PHY interface communicatively coupled to the FLM CPU;
a midplane providing connectivity between the FLM CPU, the multi-port PHY interface, and one or more appliances connected to the scalable compute resource; and
a data storage area to store executable instructions to be executed by the FLM CPU, wherein the executable instructions, when executed by the FLM CPU cause the FLM CPU to:
detect a first appliance connection, from the one or more appliances, to a frame of the scaleable compute resource;
detect a first PHY connection on the multi-port PHY interface;
validate compatibility for the first appliance, the midplane, and the first PHY connection;
initiate configuration commands to a first network interface card (NIC) port of the first appliance to configure a first network for application data;
initiate configuration commands to a second NIC port of the first appliance to configure a second network for management data; and
initiate communication on the first network and the second network,
wherein management data is maintained and transmitted on the second network, internally within the scaleable compute resource, and application data is transmitted on the first network, and
wherein the first network is isolated from the second network.
19. The FLM of claim 18, wherein the executable instructions, when executed by the FLM CPU cause the FLM CPU to:
detect a second PHY connection on the multi-port PHY interface;
validate compatibility for the first appliance, the midplane, and the second PHY connection;
initiate configuration commands to a second NIC, different from the first NIC, on the first appliance, the second NIC including a second NIC first port and a second NIC second port;
use the configuration commands to the second NIC to configure a third network for management data as a redundant management data network to the second network using the second NIC first port; and
use the configuration commands to the second NIC to configure a fourth network for application data as a redundant application data network to the first network using the second NIC second port, wherein
each application data network maintains physical isolation from each management data network on all components within the scaleable compute resource.
20. The FLM of claim 19, wherein the executable instructions, when executed by the FLM CPU cause the FLM CPU to:
deliver configuration commands to the second NIC via a controller area network (CAN) bus.
US16/266,850 2019-02-04 2019-02-04 Remote network interface card management Active 2039-02-15 US10742493B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/266,850 US10742493B1 (en) 2019-02-04 2019-02-04 Remote network interface card management

Publications (2)

Publication Number Publication Date
US20200252273A1 true US20200252273A1 (en) 2020-08-06
US10742493B1 US10742493B1 (en) 2020-08-11

Family

ID=71837988

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/266,850 Active 2039-02-15 US10742493B1 (en) 2019-02-04 2019-02-04 Remote network interface card management

Country Status (1)

Country Link
US (1) US10742493B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2610458A (en) * 2021-09-03 2023-03-08 Goldilock Secure S R O Air gap-based network isolation device circuit board
US11616781B2 (en) 2017-12-05 2023-03-28 Goldilock Secure s.r.o. Air gap-based network isolation device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11966617B2 (en) 2021-07-28 2024-04-23 Seagate Technology Llc Air gapped data storage devices and systems

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030152075A1 (en) * 2002-02-14 2003-08-14 Hawthorne Austin J. Virtual local area network identifier translation in a packet-based network
US9384102B2 (en) * 2009-12-15 2016-07-05 Hewlett Packard Enterprise Development Lp Redundant, fault-tolerant management fabric for multipartition servers
CN104679703A (en) * 2013-11-29 2015-06-03 英业达科技有限公司 High-density server system
CN107534590B (en) * 2015-10-12 2020-07-28 慧与发展有限责任合伙企业 Network system
US20190075158A1 (en) * 2017-09-06 2019-03-07 Cisco Technology, Inc. Hybrid io fabric architecture for multinode servers
US20190327173A1 (en) * 2018-04-22 2019-10-24 Mellanox Technologies Tlv Ltd. Load balancing among network links using an efficient forwarding scheme
US10470111B1 (en) * 2018-04-25 2019-11-05 Hewlett Packard Enterprise Development Lp Protocol to detect if uplink is connected to 802.1D noncompliant device

Also Published As

Publication number Publication date
US10742493B1 (en) 2020-08-11

Similar Documents

Publication Publication Date Title
US11743123B2 (en) Managed switch architectures: software managed switches, hardware managed switches, and heterogeneous managed switches
US9143444B2 (en) Virtual link aggregation extension (VLAG+) enabled in a TRILL-based fabric network
EP3482532B1 (en) Automatic service function validation in a virtual network environment
EP2617165B1 (en) System and method for providing ethernet over infiniband virtual hub scalability in a middleware machine environment
US7644254B2 (en) Routing data packets with hint bit for each six orthogonal directions in three dimensional torus computer system set to avoid nodes in problem list
US10742493B1 (en) Remote network interface card management
US9634849B2 (en) System and method for using a packet process proxy to support a flooding mechanism in a middleware machine environment
US9755959B2 (en) Dynamic service path creation
EP3895388B1 (en) Server redundant network paths
EP2430802B1 (en) Port grouping for association with virtual interfaces
EP3036873B1 (en) Dedicated control path architecture for stacked packet switches
US7765385B2 (en) Fault recovery on a parallel computer system with a torus network
US9130858B2 (en) System and method for supporting discovery and routing degraded fat-trees in a middleware machine environment
CN116158063A (en) Multi-edge Ethernet channel (MEEC) creation and management
WO2017118080A1 (en) Heat removing and heat adding method and device for central processing unit (cpu)
CN104954276B (en) System and method for load balancing multicast traffic
CN105245504A (en) North-south flow safety protection system in cloud computing network
JP2015211374A (en) Information processing system, control method for information processing system, and control program for management device
US8958337B1 (en) Scalable method to support multi-device link aggregation
US20130061086A1 (en) Fault-tolerant system, server, and fault-tolerating method
WO2023076371A1 (en) Automatic encryption for cloud-native workloads
US20150341219A1 (en) Network device and operating method thereof
US10516625B2 (en) Network entities on ring networks
US20230261971A1 (en) Robust Vertical Redundancy Of Networking Devices
US11962498B1 (en) Symmetric networking for orphan workloads in cloud networks

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAY, STEPHEN;SAM, LONG;MURRAY, CHRISTOPHER;REEL/FRAME:049809/0616

Effective date: 20190204

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4