WO2017132271A1 - System and method for supporting scalable representation of switch port status in a high performance computing environment
- Publication number
- WO2017132271A1 (PCT/US2017/014963)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- switch
- switches
- port
- subnet
- ports
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H04L49/25—Routing or path finding in a switch fabric
- G06F16/2237—Vectors, bitmaps or matrices
- G06F9/451—Execution arrangements for user interfaces
- G06F9/45558—Hypervisor-specific management and integration aspects
- H04L41/046—Network management architectures or arrangements comprising network management agents or mobile agents therefor
- H04L41/0803—Configuration setting
- H04L41/0853—Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
- H04L41/12—Discovery or management of network topologies
- H04L43/0817—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
- H04L43/0823—Errors, e.g. transmission errors
- H04L43/0882—Utilisation of link capacity
- H04L45/02—Topology update or discovery
- H04L45/48—Routing tree calculation
- H04L49/118—Address processing within a device, e.g. using internal ID or tags for routing within a switch
- H04L49/15—Interconnection of switching modules
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/358—Infiniband Switches
- H04L63/0236—Filtering by address, protocol, port number or service, e.g. IP-address or URL
- H04L63/0254—Stateful filtering
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
- H04L12/44—Star or tree networks
- H04L49/111—Switch interfaces, e.g. port details
- H04L49/70—Virtual switches
- H04L63/12—Applying verification of the received information
- H04L63/1441—Countermeasures against malicious traffic
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
Definitions
- the present invention is generally related to computer systems, and is particularly related to supporting a scalable representation of switch port status in a high performance computing environment.
- a method can provide, at one or more computers, including one or more microprocessors, at least one subnet, the at least one subnet comprising one or more switches, the one or more switches comprising at least a leaf switch, wherein each of the one or more switches comprises a plurality of ports, and wherein each of the one or more switches comprises at least one attribute, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters.
- the method can associate each port of the plurality of ports on the one or more switches with a switch port status.
- the method can represent each switch port status associated with each port of the plurality of ports within the at least one attribute of the one or more switches.
- An exemplary method can provide, at one or more computers, including one or more microprocessors, at least one subnet, the at least one subnet comprising one or more switches, the one or more switches comprising at least a leaf switch, wherein each of the one or more switches comprises a plurality of ports, and wherein each of the one or more switches comprises at least one attribute, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters.
- the method can provide, at each of the one or more switches, at least one attribute.
- the method can provide a subnet management agent (SMA) of a plurality of subnet management agents at a switch of the one or more switches.
- the method can monitor, by the SMA of the switch of the one or more switches, at least one of link stability at each port of the plurality of ports of the switch and link availability at each port of the plurality of ports at the switch.
- one or more of the plurality of host channel adapters can comprise at least one virtual function, at least one virtual switch, and at least one physical function.
- the plurality of end nodes can comprise physical hosts, virtual machines, or a combination of physical hosts and virtual machines, wherein the virtual machines are associated with at least one virtual function.
- Figure 1 shows an illustration of an InfiniBand environment, in accordance with an embodiment.
- Figure 2 shows an illustration of a partitioned cluster environment, in accordance with an embodiment.
- Figure 3 shows an illustration of a tree topology in a network environment, in accordance with an embodiment.
- Figure 4 shows an exemplary shared port architecture, in accordance with an embodiment.
- Figure 5 shows an exemplary vSwitch architecture, in accordance with an embodiment.
- Figure 6 shows an exemplary vPort architecture, in accordance with an embodiment.
- Figure 7 shows an exemplary vSwitch architecture with prepopulated LIDs, in accordance with an embodiment.
- Figure 8 shows an exemplary vSwitch architecture with dynamic LID assignment, in accordance with an embodiment.
- Figure 9 shows an exemplary vSwitch architecture with vSwitch with dynamic LID assignment and prepopulated LIDs, in accordance with an embodiment.
- Figure 10 shows an exemplary multi-subnet InfiniBand fabric, in accordance with an embodiment.
- Figure 11 shows a scalable representation of switch port status, in accordance with an embodiment.
- Figure 12 shows a scalable representation of link status, in accordance with an embodiment.
- Figure 13 is a flowchart for a method for supporting scalable representation of switch port status in a high performance computing environment, in accordance with an embodiment.
- Figure 14 is a flowchart for a method for supporting scalable representation of switch port status in a high performance computing environment, in accordance with an embodiment.
- Figure 15 shows a scalable representation of link stability, in accordance with an embodiment.
- Figure 16 shows a scalable representation of link availability, in accordance with an embodiment.
- Figure 17 is a flowchart of an exemplary method for supporting scalable representation of link stability and availability in a high performance computing environment, in accordance with an embodiment.
- a virtual switch (vSwitch) SR-IOV architecture can be provided for applicability in high performance lossless interconnection networks.
- a scalable and topology-agnostic dynamic reconfiguration mechanism can be provided.
- routing strategies for virtualized environments using vSwitches can be provided, and an efficient routing algorithm for network topologies (e.g., Fat-Tree topologies) can be provided.
- the dynamic reconfiguration mechanism can be further tuned to minimize imposed overhead in Fat-Trees.
- virtualization can be beneficial to efficient resource utilization and elastic resource allocation in cloud computing.
- Live migration makes it possible to optimize resource usage by moving virtual machines (VMs) between physical servers in an application transparent manner.
- virtualization can enable consolidation, on-demand provisioning of resources, and elasticity through live migration.
- IB networks are referred to as subnets, where a subnet can include a set of hosts interconnected using switches and point-to-point links.
- an IB fabric constitutes one or more subnets, which can be interconnected using routers.
- hosts can be connected using switches and point-to-point links. Additionally, there can be a master management entity, the subnet manager (SM), which resides on a designated device in the subnet. The subnet manager is responsible for configuring, activating and maintaining the IB subnet. Additionally, the subnet manager (SM) can be responsible for performing routing table calculations in an IB fabric. Here, for example, the routing of the IB network aims at proper load balancing between all source and destination pairs in the local subnet.
- the subnet manager exchanges control packets, which are referred to as subnet management packets (SMPs), with subnet management agents (SMAs).
- the subnet management agents reside on every IB subnet device.
- using SMPs, the subnet manager is able to discover the fabric, configure end nodes and switches, and receive notifications from SMAs.
- intra-subnet routing in an IB network can be based on linear forwarding tables (LFTs) stored in the switches.
- the LFTs are calculated by the SM according to the routing mechanism in use.
- Each entry in an LFT consists of a destination LID (DLID) and an output port. Only one entry per LID in the table is supported.
- the routing is deterministic as packets take the same path in the network between a given source-destination pair (LID pair).
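- As a rough illustration of the LFT-based forwarding described above, the following sketch (with made-up port counts and LID values, and a simple dictionary standing in for the on-switch table) shows how a switch could resolve an output port from a packet's destination LID:

```python
# Minimal sketch of LFT-based, deterministic intra-subnet forwarding.
# The table layout (a flat DLID -> output-port map) and the sample values
# are illustrative assumptions, not the actual on-switch representation.

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.lft = {}  # destination LID (DLID) -> output port; one entry per LID

    def set_route(self, dlid, out_port):
        # Populated by the SM according to the routing mechanism in use.
        self.lft[dlid] = out_port

    def forward(self, dlid):
        # Deterministic: a given source/destination LID pair always takes the same path.
        if dlid not in self.lft:
            raise KeyError(f"no route for DLID {dlid}")
        return self.lft[dlid]

sw = Switch(num_ports=36)
sw.set_route(dlid=5, out_port=12)
sw.set_route(dlid=9, out_port=3)
print(sw.forward(5))   # -> 12
```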
- the SM can calculate routing tables (i.e., the connections/routes between each pair of nodes within the subnet) at network initialization time. Furthermore, the routing tables can be updated whenever the topology changes, in order to ensure connectivity and optimal performance. During normal operations, the SM can perform periodic light sweeps of the network to check for topology changes. If a change is discovered during a light sweep or if a message (trap) signaling a network change is received by the SM, the SM can reconfigure the network according to the discovered changes.
- the SM can reconfigure the network when the network topology changes, such as when a link goes down, when a device is added, or when a link is removed.
- the reconfiguration steps can include the steps performed during the network initialization.
- the reconfigurations can have a local scope that is limited to the subnets in which the network changes occurred. Also, the segmenting of a large fabric with routers may limit the reconfiguration scope.
- An example InfiniBand fabric is shown in Figure 1, which shows an illustration of an InfiniBand environment 100, in accordance with an embodiment.
- nodes A-E, 101-105, use the InfiniBand fabric, 120, to communicate, via the respective host channel adapters 111-115.
- the various nodes, e.g., nodes A-E, 101-105 can be represented by various physical devices.
- the various nodes, e.g., nodes A-E, 101-105 can be represented by various virtual devices, such as virtual machines.
Partitioning in InfiniBand
- IB networks can support partitioning as a security mechanism to provide for isolation of logical groups of systems sharing a network fabric.
- Each HCA port on a node in the fabric can be a member of one or more partitions.
- Partition memberships are managed by a centralized partition manager, which can be part of the SM.
- the SM can configure partition membership information on each port as a table of 16-bit partition keys (P_Keys).
- the SM can also configure switch and router ports with the partition enforcement tables containing P_Key information associated with the end-nodes that send or receive data traffic through these ports.
- partition membership of a switch port can represent a union of all membership indirectly associated with LIDs routed via the port in an egress (towards the link) direction.
- partitions are logical groups of ports such that the members of a group can only communicate to other members of the same logical group.
- packets can be filtered using the partition membership information to enforce isolation. Packets with invalid partitioning information can be dropped as soon as the packets reach an incoming port.
- partitions can be used to create tenant clusters. With partition enforcement in place, a node cannot communicate with other nodes that belong to a different tenant cluster. In this way, the security of the system can be guaranteed even in the presence of compromised or malicious tenant nodes.
- the P_Key information can then be added to every IB transport packet sent.
- when a packet arrives at an HCA port or a switch, its P_Key value can be validated against a table configured by the SM. If an invalid P_Key value is found, the packet is discarded immediately. In this way, communication is allowed only between ports sharing a partition.
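- The P_Key enforcement described above can be sketched as follows; the table contents, P_Key values, and helper names are illustrative assumptions, not the actual hardware or SMA interface:

```python
# Sketch of P_Key enforcement at an ingress port (illustrative only).
# The SM configures a partition table per port; packets whose P_Key is not in
# the table are discarded immediately.

def make_port(pkey_table):
    allowed = set(pkey_table)  # 16-bit P_Keys configured by the SM for this port

    def handle_packet(packet):
        if packet["p_key"] not in allowed:
            return None  # invalid partitioning information: drop at the incoming port
        return packet    # accepted: forward for normal processing

    return handle_packet

port = make_port(pkey_table=[0x8001, 0x8002])      # hypothetical P_Key values
print(port({"p_key": 0x8001, "payload": b"ok"}))   # accepted
print(port({"p_key": 0x8003, "payload": b"bad"}))  # None: dropped
```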
- FIG. 2 shows an illustration of a partitioned cluster environment, in accordance with an embodiment.
- nodes A-E, 101-105 use the InfiniBand fabric, 120, to communicate, via the respective host channel adapters 111-115.
- the nodes A-E are arranged into partitions, namely partition 1, 130, partition 2, 140, and partition 3, 150.
- Partition 1 comprises node A 101 and node D 104.
- Partition 2 comprises node A 101, node B 102, and node C 103.
- Partition 3 comprises node C 103 and node E 105.
- node D 104 and node E 105 are not allowed to communicate as these nodes do not share a partition. Meanwhile, for example, node A 101 and node C 103 are allowed to communicate as these nodes are both members of partition 2, 140.
- memory overhead has been significantly reduced by virtualizing the Memory Management Unit.
- storage overhead has been reduced by the use of fast SAN storages or distributed networked file systems.
- network I/O overhead has been reduced by the use of device passthrough techniques like Single Root Input/Output Virtualization (SR-IOV).
- IB uses three different types of addresses.
- a first type of address is the 16-bit Local Identifier (LID). At least one unique LID is assigned to each HCA port and each switch by the SM. The LIDs are used to route traffic within a subnet. Since the LID is 16 bits long, 65536 unique address combinations can be made, of which only 49151 (0x0001-0xBFFF) can be used as unicast addresses. Consequently, the number of available unicast addresses defines the maximum size of an IB subnet.
- a second type of address is the 64-bit Global Unique Identifier (GUID) assigned by the manufacturer to each device (e.g. HCAs and switches) and each HCA port.
- the SM may assign additional subnet unique GUIDs to an HCA port, which is useful when SR-IOV is used.
- a third type of address is the 128-bit Global Identifier (GID).
- the GID is a valid IPv6 unicast address, and at least one is assigned to each HCA port.
- the GID is formed by combining a globally unique 64-bit prefix assigned by the fabric administrator, and the GUID address of each HCA port.
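- The three address types above can be illustrated with the small sketch below; the unicast LID range follows from the 16-bit LID format, and the GID is formed by concatenating the 64-bit subnet prefix with the 64-bit port GUID (the specific prefix and GUID values shown are made up):

```python
# Illustrative view of the three IB address types discussed above.

UNICAST_LID_MIN, UNICAST_LID_MAX = 0x0001, 0xBFFF   # 49151 usable unicast LIDs

def is_unicast_lid(lid):
    return UNICAST_LID_MIN <= lid <= UNICAST_LID_MAX

def make_gid(subnet_prefix, port_guid):
    """Form a 128-bit GID from a 64-bit subnet prefix and a 64-bit port GUID."""
    return (subnet_prefix << 64) | port_guid

prefix = 0xFE80000000000000   # example subnet prefix (hypothetical value)
guid = 0x0002C90300A1B2C3     # example manufacturer-assigned port GUID (hypothetical value)
gid = make_gid(prefix, guid)
print(hex(gid), is_unicast_lid(0xBFFF), is_unicast_lid(0xC000))
```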
- some of the IB based HPC systems employ a fat-tree topology to take advantage of the useful properties fat-trees offer. These properties include full bisection-bandwidth and inherent fault-tolerance due to the availability of multiple paths between each source destination pair.
- the initial idea behind fat-trees was to employ fatter links between nodes, with more available bandwidth, as the tree moves towards the roots of the topology. The fatter links can help to avoid congestion in the upper-level switches and the bisection-bandwidth is maintained.
- Figure 3 shows an illustration of a tree topology in a network environment, in accordance with an embodiment.
- one or more end nodes 201-204 can be connected in a network fabric 200.
- the network fabric 200 can be based on a fat-tree topology, which includes a plurality of leaf switches 211-214, and multiple spine switches or root switches 231-234. Additionally, the network fabric 200 can include one or more intermediate switches, such as switches 221-224.
- each of the end nodes 201-204 can be a multi-homed node, i.e., a single node that is connected to two or more parts of the network fabric 200 through multiple ports.
- the node 201 can include the ports H1 and H2
- the node 202 can include the ports H3 and H4
- the node 203 can include the ports H5 and H6, and the node 204 can include the ports H7 and H8.
- each switch can have multiple switch ports.
- the root switch 231 can have the switch ports 1-2
- the root switch 232 can have the switch ports 3-4
- the root switch 233 can have the switch ports 5-6
- the root switch 234 can have the switch ports 7-8.
- the fat-tree routing mechanism is one of the most popular routing algorithms for IB based fat-tree topologies.
- the fat-tree routing mechanism is also implemented in the OFED (Open Fabric Enterprise Distribution - a standard software stack for building and deploying IB based applications) subnet manager, OpenSM.
- the fat-tree routing mechanism aims to generate LFTs that evenly spread shortest-path routes across the links in the network fabric.
- the mechanism traverses the fabric in the indexing order and assigns target LIDs of the end nodes, and thus the corresponding routes, to each switch port.
- the indexing order can depend on the switch port to which the end node is connected (i.e., port numbering sequence).
- the mechanism can maintain a port usage counter, and can use this port usage counter to select a least-used port each time a new route is added.
- nodes that are not members of a common partition are not allowed to communicate. Practically, this means that some of the routes assigned by the fat-tree routing algorithm are not used for the user traffic.
- the problem arises when the fat tree routing mechanism generates LFTs for those routes the same way it does for the other functional paths. This behavior can result in degraded balancing on the links, as nodes are routed in the order of indexing. As routing can be performed oblivious to the partitions, fat-tree routed subnets, in general, provide poor isolation among partitions.
- a Fat-Tree is a hierarchical network topology that can scale with the available network resources. Moreover, Fat-Trees are easy to build using commodity switches placed on different levels of the hierarchy. Different variations of Fat-Trees are commonly available, including k-ary-n-trees, Extended Generalized Fat-Trees (XGFTs), Parallel Ports Generalized Fat-Trees (PGFTs) and Real Life Fat-Trees (RLFTs).
- a k-ary-n-tree is an n-level Fat-Tree with k^n end nodes and n·k^(n-1) switches, each with 2k ports. Each switch has an equal number of up and down connections in the tree.
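- For example, the end-node and switch counts of a k-ary-n-tree follow directly from the definition above; the small helper below simply evaluates those formulas:

```python
# End nodes and switches in a k-ary-n-tree, per the definition above.

def kary_ntree_counts(k, n):
    end_nodes = k ** n
    switches = n * k ** (n - 1)   # each switch has 2k ports: k up, k down
    return end_nodes, switches

print(kary_ntree_counts(k=2, n=3))   # (8, 12)
print(kary_ntree_counts(k=4, n=3))   # (64, 48)
```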
- an XGFT Fat-Tree extends k-ary-n-trees by allowing both a different number of up and down connections for the switches, and a different number of connections at each level in the tree.
- the PGFT definition further broadens the XGFT topologies and permits multiple connections between switches. A large variety of topologies can be defined using XGFTs and PGFTs.
- RLFT, which is a restricted version of PGFT, is introduced to define Fat-Trees commonly found in today's HPC clusters.
- An RLFT uses the same port-count switches at all levels in the Fat-Tree.
- I/O Virtualization can provide availability of I/O by allowing virtual machines (VMs) to access the underlying physical resources.
- the combination of storage traffic and inter-server communication imposes an increased load that may overwhelm the I/O resources of a single server, leading to backlogs and idle processors as they are waiting for data.
- IOV can provide availability, and can improve performance, scalability and flexibility of the (virtualized) I/O resources to match the level of performance seen in modern CPU virtualization.
- IOV is desired as it can allow sharing of I/O resources and provide protected access to the resources from the VMs.
- IOV decouples a logical device, which is exposed to a VM, from its physical implementation.
- one type of IOV technology is software emulation.
- Software emulation can allow for a decoupled front-end/back-end software architecture.
- the front-end can be a device driver placed in the VM, communicating with the back-end implemented by a hypervisor to provide I/O access.
- the physical device sharing ratio is high and live migrations of VMs are possible with just a few milliseconds of network downtime.
- software emulation introduces additional, undesired computational overhead.
- Direct device assignment involves a coupling of I/O devices to VMs, with no device sharing between VMs.
- Direct assignment, or device passthrough, provides near to native performance with minimum overhead.
- the physical device bypasses the hypervisor and is directly attached to the VM.
- a downside of such direct device assignment is limited scalability, as there is no sharing among virtual machines - one physical network card is coupled with one VM.
- Single Root IOV can allow a physical device to appear through hardware virtualization as multiple independent lightweight instances of the same device. These instances can be assigned to VMs as passthrough devices, and accessed as Virtual Functions (VFs). The hypervisor accesses the device through a unique (per device), fully featured Physical Function (PF). SR-IOV eases the scalability issue of pure direct assignment. However, a problem presented by SR-IOV is that it can impair VM migration. Among these IOV technologies, SR-IOV can extend the PCI Express (PCIe) specification with the means to allow direct access to a single physical device from multiple VMs while maintaining near to native performance.
- SR-IOV can provide good performance and scalability.
- SR-IOV allows a PCIe device to expose multiple virtual devices that can be shared between multiple guests by allocating one virtual device to each guest.
- Each SR-IOV device has at least one physical function (PF) and one or more associated virtual functions (VF).
- the PF is a normal PCIe function controlled by the virtual machine monitor (VMM), or hypervisor, whereas the VF is a light-weight PCIe function.
- each VF has its own base address register (BAR) and is assigned a unique requester ID that enables the I/O memory management unit (IOMMU) to differentiate between the traffic streams to/from different VFs.
- the IOMMU also applies memory and interrupt translations between the PF and the VFs.
- there can be different types of SR-IOV models, e.g. a shared port model, a virtual switch model, and a virtual port model.
- FIG. 4 shows an exemplary shared port architecture, in accordance with an embodiment.
- as shown in Figure 4, a host 300 (e.g., a host channel adapter) can interact with a hypervisor 310, which can assign the various virtual functions 330, 340, 350, to a number of virtual machines.
- the physical function can be handled by the hypervisor 310.
- when using a shared port architecture, the host (e.g., HCA) appears as a single port in the network with a single shared LID and shared Queue Pair (QP) space between the physical function 320 and the virtual functions 330, 340, 350.
- each function (i.e., physical function and virtual functions) can have its own GID.
- different GIDs can be assigned to the virtual functions and the physical function, and the special queue pairs, QP0 and QP1 (i.e., special purpose queue pairs that are used for InfiniBand management packets), are owned by the physical function.
- QPs are exposed to the VFs as well, but the VFs are not allowed to use QP0 (all SMPs coming from VFs towards QP0 are discarded), and QP1 can act as a proxy of the actual QP1 owned by the PF.
- the shared port architecture can allow for highly scalable data centers that are not limited by the number of VMs (which attach to the network by being assigned to the virtual functions), as the LID space is only consumed by physical machines and switches in the network.
- a shortcoming of the shared port architecture is the inability to provide transparent live migration, hindering the potential for flexible VM placement.
- because the LID is shared between the physical function and the virtual functions on a host, a migrating VM (i.e., a virtual machine migrating to a destination hypervisor) has to have its LID changed to the LID of the destination hypervisor.
- a subnet manager cannot run inside a VM.
- FIG. 5 shows an exemplary vSwitch architecture, in accordance with an embodiment.
- as shown in Figure 5, a host 400 (e.g., a host channel adapter) can interact with a hypervisor 410, which can assign the various virtual functions 430, 440, 450, to a number of virtual machines.
- the physical function can be handled by the hypervisor 410.
- a virtual switch 415 can also be handled by the hypervisor 410.
- each virtual function 430, 440, 450 is a complete virtual Host Channel Adapter (vHCA), meaning that the VM assigned to a VF is assigned a complete set of IB addresses (e.g., GID, GUID, LID) and a dedicated QP space in the hardware.
- the HCA 400 looks like a switch, via the virtual switch 415, with additional nodes connected to it.
- the hypervisor 410 can use the PF 420, and the VMs (attached to the virtual functions) use the VFs.
- a vSwitch architecture provides transparent virtualization.
- however, because each virtual function is assigned a unique LID, the number of available LIDs gets consumed rapidly.
- more communication paths have to be computed by the SM and more Subnet Management Packets (SMPs) have to be sent to the switches in order to update their LFTs.
- the computation of the communication paths might take several minutes in large networks.
- LID space is limited to 49151 unicast LIDs, and as each VM (via a VF), physical node, and switch occupies one LID each, the number of physical nodes and switches in the network limits the number of active VMs, and vice versa.
- FIG. 6 shows an exemplary vPort concept, in accordance with an embodiment.
- as shown in Figure 6, a host 300 (e.g., a host channel adapter) can interact with a hypervisor 310, which can assign the various virtual functions 330, 340, 350, to a number of virtual machines.
- the physical function can be handled by the hypervisor 310.
- the vPort concept is loosely defined in order to give freedom of implementation to vendors (e.g. the definition does not rule that the implementation has to be SRIOV specific), and a goal of the vPort is to standardize the way VMs are handled in subnets.
- with the vPort concept, both SR-IOV Shared-Port-like and vSwitch-like architectures, or a combination of both, can be defined that are more scalable in both the space and performance domains.
- a vPort supports optional LIDs, and unlike the Shared-Port, the SM is aware of all the vPorts available in a subnet even if a vPort is not using a dedicated LID.
- the present disclosure provides a system and method for providing a vSwitch architecture with prepopulated LIDs.
- FIG. 7 shows an exemplary vSwitch architecture with prepopulated LIDs, in accordance with an embodiment.
- a number of switches 501-504 can provide communication within the network switched environment 600 (e.g., an IB subnet) between members of a fabric, such as an InfiniBand fabric.
- the fabric can include a number of hardware devices, such as host channel adapters 510, 520, 530. Each of the host channel adapters 510, 520, 530, can in turn interact with a hypervisor 511, 521, and 531, respectively.
- Each hypervisor can, in turn, in conjunction with the host channel adapter it interacts with, setup and assign a number of virtual functions 514, 515, 516, 524, 525, 526, 534, 535, 536, to a number of virtual machines.
- virtual machine 1 550 can be assigned by the hypervisor 511 to virtual function 1 514.
- Hypervisor 511 can additionally assign virtual machine 2 551 to virtual function 2 515, and virtual machine 3 552 to virtual function 3 516.
- Hypervisor 531 can, in turn, assign virtual machine 4 553 to virtual function 1 534.
- the hypervisors can access the host channel adapters through a fully featured physical function 513, 523, 533, on each of the host channel adapters.
- each of the switches 501-504 can comprise a number of ports (not shown), which are used in setting a linear forwarding table in order to direct traffic within the network switched environment 600.
- the virtual switches 512, 522, and 532 can be handled by their respective hypervisors 511, 521, 531.
- each virtual function is a complete virtual Host Channel Adapter (vHCA), meaning that the VM assigned to a VF is assigned a complete set of IB addresses (e.g., GID, GUID, LID) and a dedicated QP space in the hardware.
- the HCAs 510, 520, and 530 look like a switch, via the virtual switches, with additional nodes connected to them.
- the present disclosure provides a system and method for providing a vSwitch architecture with prepopulated LIDs.
- the LIDs are prepopulated to the various physical functions 513, 523, 533, as well as the virtual functions 514-516, 524-526, 534-536 (even those virtual functions not currently associated with an active virtual machine).
- physical function 513 is prepopulated with LID 1
- virtual function 1 534 is prepopulated with LID 10.
- the LIDs are prepopulated in an SR-IOV vSwitch-enabled subnet when the network is booted. Even when not all of the VFs are occupied by VMs in the network, the populated VFs are assigned with a LID as shown in Figure 7.
- virtual HCAs can also be represented with two ports and be connected via one, two or more virtual switches to the external IB subnet.
- each hypervisor can consume one LID for itself through the PF and one more LID for each additional VF.
- if, for example, 16 virtual functions are defined per hypervisor, each hypervisor consumes 17 LIDs (one LID for the PF plus one LID for each of the 16 VFs). The theoretical hypervisor limit for a single subnet is then ruled by the number of available unicast LIDs and is 2891 (49151 available LIDs divided by 17 LIDs per hypervisor), and the total number of VMs (i.e., the limit) is 46256 (2891 hypervisors times 16 VFs per hypervisor). (In practice, these numbers are smaller since each switch, router, or dedicated SM node in the IB subnet consumes a LID as well.) Note that the vSwitch does not need to occupy an additional LID as it can share the LID with the PF.
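- The sizing figures above can be reproduced with the short calculation below (a sketch assuming 16 VFs per hypervisor and ignoring the LIDs consumed by switches, routers, and dedicated SM nodes):

```python
# Theoretical vSwitch-with-prepopulated-LIDs limits for a single subnet,
# assuming 16 VFs per hypervisor (so 17 LIDs per hypervisor: 1 PF + 16 VFs).
# LIDs consumed by switches, routers, or a dedicated SM node are ignored here.

UNICAST_LIDS = 49151
VFS_PER_HYPERVISOR = 16
LIDS_PER_HYPERVISOR = 1 + VFS_PER_HYPERVISOR

max_hypervisors = UNICAST_LIDS // LIDS_PER_HYPERVISOR
max_vms = max_hypervisors * VFS_PER_HYPERVISOR
print(max_hypervisors, max_vms)   # 2891 46256
```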
- a vSwitch architecture with prepopulated LIDs also allows for the ability to calculate and use different paths to reach different VMs hosted by the same hypervisor. Essentially, this allows for such subnets and networks to use a LID Mask Control (LMC) like feature to provide alternative paths towards one physical machine, without being bound by the limitation of the LMC that requires the LIDs to be sequential.
- the freedom to use non-sequential LIDs is particularly useful when a VM needs to be migrated and carry its associated LID to the destination.
- the present disclosure provides a system and method for providing a vSwitch architecture with dynamic LID assignment.
- FIG. 8 shows an exemplary vSwitch architecture with dynamic LID assignment, in accordance with an embodiment.
- a number of switches 501-504 can provide communication within the network switched environment 700 (e.g., an IB subnet) between members of a fabric, such as an InfiniBand fabric.
- the fabric can include a number of hardware devices, such as host channel adapters 510, 520, 530. Each of the host channel adapters 510, 520, 530, can in turn interact with a hypervisor 511, 521, 531, respectively.
- Each hypervisor can, in turn, in conjunction with the host channel adapter it interacts with, setup and assign a number of virtual functions 514, 515, 516, 524, 525, 526, 534, 535, 536, to a number of virtual machines.
- virtual machine 1 550 can be assigned by the hypervisor 511 to virtual function 1 514.
- Hypervisor 511 can additionally assign virtual machine 2 551 to virtual function 2 515, and virtual machine 3 552 to virtual function 3 516.
- Hypervisor 531 can, in turn, assign virtual machine 4 553 to virtual function 1 534.
- the hypervisors can access the host channel adapters through a fully featured physical function 513, 523, 533, on each of the host channel adapters.
- each of the switches 501-504 can comprise a number of ports (not shown), which are used in setting a linear forwarding table in order to direct traffic within the network switched environment 700.
- the virtual switches 512, 522, and 532 can be handled by their respective hypervisors 511, 521, 531.
- each virtual function is a complete virtual Host Channel Adapter (vHCA), meaning that the VM assigned to a VF is assigned a complete set of IB addresses (e.g., GID, GUID, LID) and a dedicated QP space in the hardware.
- the HCAs 510, 520, and 530 look like a switch, via the virtual switches, with additional nodes connected to them.
- the present disclosure provides a system and method for providing a vSwitch architecture with dynamic LID assignment.
- the LIDs are dynamically assigned to the various physical functions 513, 523, 533, with physical function 513 receiving LID 1, physical function 523 receiving LID 2, and physical function 533 receiving LID 3.
- Those virtual functions that are associated with an active virtual machine can also receive a dynamically assigned LID.
- virtual machine 1 550 is active and associated with virtual function 1 514
- virtual function 514 can be assigned LID 5.
- virtual function 2 515, virtual function 3 516, and virtual function 1 534 are each associated with an active virtual machine.
- these virtual functions are assigned LIDs, with LID 7 being assigned to virtual function 2 515, LID 11 being assigned to virtual function 3 516, and LID 9 being assigned to virtual function 1 534. Unlike vSwitch with prepopulated LIDs, those virtual functions not currently associated with an active virtual machine do not receive a LID assignment.
- the initial path computation can be substantially reduced.
- a relatively small number of LIDs can be used for the initial path calculation and LFT distribution.
- virtual HCAs can also be represented with two ports and be connected via one, two or more virtual switches to the external IB subnet.
- when a new VM is created, a free VM slot is found in order to decide on which hypervisor to boot the newly added VM, and a unique non-used unicast LID is found as well.
- however, there are no known paths in the network, and no entries in the LFTs of the switches, for handling the newly added LID.
- Computing a new set of paths in order to handle the newly added VM is not desirable in a dynamic environment where several VMs may be booted every minute. In large IB subnets, computing a new set of routes can take several minutes, and this procedure would have to repeat each time a new VM is booted.
- the LIDs assigned in the vSwitch with dynamic LID assignment architecture do not have to be sequential.
- the main difference between vSwitch with prepopulated LIDs and vSwitch with dynamic LID assignment lies in the LIDs assigned to VMs on each hypervisor.
- the LIDs assigned in the dynamic LID assignment architecture are non-sequential, while those that are prepopulated are sequential in nature.
- in the vSwitch dynamic LID assignment architecture, when a new VM is created, the next available LID is used throughout the lifetime of the VM.
- each VM inherits the LID that is already assigned to the corresponding VF, and in a network without live migrations, VMs consecutively attached to a given VF get the same LID.
- the vSwitch with dynamic LID assignment architecture can resolve the drawbacks of the vSwitch with prepopulated LIDs architecture model at a cost of some additional network and runtime SM overhead.
- the LFTs of the physical switches in the subnet are updated with the newly added LID associated with the created VM.
- One subnet management packet (SMP) needs to be sent per switch for this operation.
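- A minimal sketch of this update step is shown below: when a new VM is booted, a unique unused unicast LID is picked and each physical switch receives a single LFT update (i.e., one SMP per switch) that routes the new LID along the same path as the host hypervisor. The function and data-structure names are illustrative, not the actual SMA interface:

```python
# Sketch of dynamic LID assignment when a new VM is booted (illustrative only).
# One LFT update per physical switch is needed for the newly added LID,
# i.e. one SMP per switch rather than a full path recomputation.

def assign_lid(used_lids, max_lid=0xBFFF):
    """Pick a unique, not-yet-used unicast LID."""
    for lid in range(1, max_lid + 1):
        if lid not in used_lids:
            used_lids.add(lid)
            return lid
    raise RuntimeError("unicast LID space exhausted")

def boot_vm(switches, used_lids, hypervisor_port_per_switch):
    new_lid = assign_lid(used_lids)
    smps_sent = 0
    for sw, port_to_hypervisor in hypervisor_port_per_switch.items():
        # The new LID follows the same path as its host hypervisor,
        # so each switch only needs one new LFT entry.
        switches[sw][new_lid] = port_to_hypervisor
        smps_sent += 1   # one SMP per switch for this operation
    return new_lid, smps_sent

switches = {"sw1": {}, "sw2": {}, "sw3": {}}
used = {1, 2, 3}   # LIDs already held by the physical functions (hypothetical)
lid, smps = boot_vm(switches, used, {"sw1": 7, "sw2": 2, "sw3": 5})
print(lid, smps)   # e.g. 4 3
```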
- the LMC-like functionality is also not available, because each VM is using the same path as its host hypervisor. However, there is no limitation on the total amount of VFs present in all hypervisors, and the number of VFs may exceed that of the unicast LID limit.
- in that case, not all of the VFs are allowed to be attached to active VMs simultaneously, but having more spare hypervisors and VFs adds flexibility for disaster recovery and optimization of fragmented networks when operating close to the unicast LID limit.
- FIG. 9 shows an exemplary vSwitch architecture with vSwitch with dynamic LID assignment and prepopulated LIDs, in accordance with an embodiment.
- a number of switches 501-504 can provide communication within the network switched environment 800 (e.g., an IB subnet) between members of a fabric, such as an InfiniBand fabric.
- the fabric can include a number of hardware devices, such as host channel adapters 510, 520, 530. Each of the host channel adapters 510, 520, 530, can in turn interact with a hypervisor 511, 521, and 531, respectively.
- Each hypervisor can, in turn, in conjunction with the host channel adapter it interacts with, setup and assign a number of virtual functions 514, 515, 516, 524, 525, 526, 534, 535, 536, to a number of virtual machines.
- virtual machine 1 550 can be assigned by the hypervisor 511 to virtual function 1 514.
- Hypervisor 511 can additionally assign virtual machine 2 551 to virtual function 2 515.
- Hypervisor 521 can assign virtual machine 3 552 to virtual function 3 526.
- Hypervisor 531 can, in turn, assign virtual machine 4 553 to virtual function 2 535.
- the hypervisors can access the host channel adapters through a fully featured physical function 513, 523, 533, on each of the host channel adapters.
- each of the switches 501-504 can comprise a number of ports (not shown), which are used in setting a linear forwarding table in order to direct traffic within the network switched environment 800.
- the virtual switches 512, 522, and 532 can be handled by their respective hypervisors 511, 521, 531.
- each virtual function is a complete virtual Host Channel Adapter (vHCA), meaning that the VM assigned to a VF is assigned a complete set of IB addresses (e.g., GID, GUID, LID) and a dedicated QP space in the hardware.
- the HCAs 510, 520, and 530 look like a switch, via the virtual switches, with additional nodes connected to them.
- the present disclosure provides a system and method for providing a hybrid vSwitch architecture with dynamic LID assignment and prepopulated LIDs.
- hypervisor 511 can be arranged with vSwitch with prepopulated LIDs architecture
- hypervisor 521 can be arranged with vSwitch with prepopulated LIDs and dynamic LID assignment
- Hypervisor 531 can be arranged with vSwitch with dynamic LID assignment.
- the physical function 513 and virtual functions 514-516 have their LIDs prepopulated (i.e., even those virtual functions not attached to an active virtual machine are assigned a LID).
- Physical function 523 and virtual function 1 524 can have their LIDs prepopulated, while virtual functions 2 and 3, 525 and 526, have their LIDs dynamically assigned (i.e., virtual function 2 525 is available for dynamic LID assignment, and virtual function 3 526 has a LID of 11 dynamically assigned as virtual machine 3 552 is attached).
- the functions (physical function and virtual functions) associated with hypervisor 3 531 can have their LIDs dynamically assigned. This results in virtual functions 1 and 3, 534 and 536, being available for dynamic LID assignment, while virtual function 2 535 has a LID of 9 dynamically assigned as virtual machine 4 553 is attached there.
- virtual HCAs can also be represented with two ports and be connected via one, two or more virtual switches to the external IB subnet.
- embodiments of the current disclosure can also provide for an InfiniBand fabric that spans two or more subnets.
- FIG. 10 shows an exemplary multi-subnet InfiniBand fabric, in accordance with an embodiment.
- a number of switches 1001-1004 can provide communication within subnet A 1000 (e.g., an IB subnet) between members of a fabric, such as an InfiniBand fabric.
- the fabric can include a number of hardware devices, such as, for example, host channel adapter 1010.
- Host channel adapter 1010 can in turn interact with a hypervisor 1011.
- the hypervisor can, in turn, in conjunction with the host channel adapter it interacts with, setup a number of virtual functions 1014.
- the hypervisor can additionally assign virtual machines to each of the virtual functions, such as virtual machine 1 1015 being assigned to virtual function 1 1014.
- the hypervisor can access its associated host channel adapter through a fully featured physical function, such as physical function 1013, on each of the host channel adapters.
- a number of switches 1021-1024 can provide communication within subnet B 1040 (e.g., an IB subnet) between members of a fabric, such as an InfiniBand fabric.
- the fabric can include a number of hardware devices, such as, for example, channel adapter 1030.
- Host channel adapters 1030 can in turn interact with a hypervisor 1031.
- the hypervisor can, in turn, in conjunction with the host channel adapter it interacts with, setup a number of virtual functions 1034.
- the hypervisor can additionally assign virtual machines to each of the virtual functions, such as virtual machine 2 1035 being assigned to virtual function 2 1034.
- the hypervisor can access their associated host channel adapters through a fully featured physical function, such as physical function 1033, on each of the host channel adapters. It is noted that although only one host channel adapter is shown within each subnet (i.e., subnet A and subnet B), it is to be understood that a plurality of host channel adapters, and their corresponding components, can be included within each subnet.
- each of the host channel adapters can additionally be associated with a virtual switch, such as virtual switch 1012 and virtual switch 1032, and each HCA can be set up with a different architecture model, as discussed above.
- At least one switch within each subnet can be associated with a router, such as switch 1002 within subnet A 1000 being associated with router 1005, and switch 1021 within subnet B 1040 being associated with router 1006.
- At least one device can be associated with a fabric manager (not shown).
- the fabric manager can be used, for example, to discover inter-subnet fabric topology, create a fabric profile (e.g., a virtual machine fabric profile), and build virtual machine related database objects that form the basis for building a virtual machine fabric profile.
- the fabric manager can define legal inter-subnet connectivity in terms of which subnets are allowed to communicate via which router ports using which partition numbers.
- when traffic at an originating source, such as virtual machine 1 within subnet A, is addressed to a destination in a different subnet, such as virtual machine 2 within subnet B, the traffic can be addressed to the router within subnet A, i.e., router 1005, which can then pass the traffic to subnet B via its link with router 1006.
- in order to observe link status changes, the IB specification defines an attribute at each port (e.g., at each of the ports at any given switch or virtual switch) that can indicate when any port state has changed.
- in order for the SM to determine whether the status at any port within the fabric has changed state, the SM must send a subnet management packet for each port.
- while the above defined method for determining port status within a fabric works well for fabrics that are mostly static (e.g., those fabrics made up from physical end nodes where a port status change does not happen very often), the method does not scale well for fabrics that have been virtualized (e.g., with the introduction of virtual HCAs used by dynamically created virtual machines and where a vSwitch architecture is used to interconnect virtual HCA ports), as well as for very large physical fabric configurations.
- a scalable representation of switch port status can be provided. By adding a scalable representation of switch port status at each switch (both physical and virtual), instead of getting all switch port changes individually, the scalable representation can combine the status of a number of ports, and can scale by using just a few bits of information for each port's status.
- a scalable representation of switch port status can be a fixed size message at each switch that can represent all the port status information for all, or a subset of, the ports in the switch associated with the fixed size message. That is particularly important in fabrics using virtualization, as the scalable representation can dynamically represent the virtual switch and its associated ports.
- the legacy specification relied on changes not being frequent - that is, when the SM checked to see whether any state changes had taken place since the last check, it was unlikely that any changes had occurred (by default, nothing had changed), so the SM could receive an indication of whether there was any change for any port in a single operation and could move on to the next switch if not.
- however, whenever any change occurred for any port, all ports had to be inspected individually by the SM.
- a SM can expect to detect more frequent state changes at the ports in the fabric.
- a scalable representation of switch port status can be a fixed size message (e.g., a switch attribute) that can provide the SM with the means to observe all state changes of all ports at a switch in one operation (i.e., one SMP). This reduces the overhead that the SM would otherwise encounter, and optimizes the SM mechanism to query each switch to determine which ports need further handling.
- observing/checking link status for individual switch ports requires multiple SMP operations, as one SMP must be sent for each port at a switch.
- a SM can send fewer SMPs to each switch to discover the link status at each port, thus reducing the overhead required for topology discovery.
- the overhead is likewise reduced for the SM to determine whether it needs to perform additional operations on a port for retrieving more information or to set up new configuration parameters.
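- As a rough, assumed-numbers illustration of that reduction: in a subnet of 100 switches with 36 ports each, per-port polling costs 100 × 36 = 3,600 SMPs per sweep, while querying one aggregated attribute per switch costs 100 SMPs per sweep.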
- a scalable representation of switch port status can be an attribute where port/link status is represented as scalar objects (single or multi-bit values).
- This attribute in which the scalar objects are contained can provide a compressed way of fetching logical link state of a (virtual) switch.
- Such an attribute can additionally be used by routing algorithms in order to ignore virtual links while balancing the various routes through the fabric.
- the scalable representation of switch port status can also optimize the SM discovery of topology of the fabric as each port's link status can be represented as a scalar object within the attribute.
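- To illustrate how a routing algorithm could consume such an attribute to ignore virtual links while balancing routes, the following is a minimal sketch; the decoded flag names and the simple least-loaded balancing policy are assumptions made for the example, not a prescribed routing method.

```python
# Illustrative sketch (assumed flag layout): use per-port flags decoded from
# the switch attribute to exclude virtual links from route balancing.

def eligible_ports(port_flags):
    """Ports usable for balancing physical routes: link up and not virtual."""
    return [port for port, flags in enumerate(port_flags)
            if flags["link_up"] and not flags["virtual_link"]]

def pick_route_port(port_flags, route_counts):
    """Choose the least-loaded eligible port (simple balancing policy)."""
    candidates = eligible_ports(port_flags)
    if not candidates:
        return None
    chosen = min(candidates, key=lambda p: route_counts.get(p, 0))
    route_counts[chosen] = route_counts.get(chosen, 0) + 1
    return chosen
```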
- Figure 11 shows a scalable representation of switch port status, in accordance with an embodiment. More specifically, Figure 11 illustrates a switch having an attribute representing a scalable representation of switch port status.
- a switch 1100 can comprise a number of ports, such as ports 1110-1133 (it is noted that the number of ports shown in Figure 11 is neither illustrative nor indicative of a usual number of ports at a given switch within a fabric, such as an InfiniBand fabric).
- the switch 1100 also comprises a switch port status attribute 1150, which can be a fixed size message that represents the switch port status information for switch ports 1110-1133 in the switch 1100.
- a management module such as the subnet manager 1140 can, instead of sending one SMP for each port within the switch 1100 to determine each port's status, send one SMP 1145 to query the switch port status attribute 1150.
- the SMP can relay the status of each of ports 1110-1133 at the time of checking.
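- A minimal sketch of the subnet manager side of this exchange is shown below: one query per switch retrieves the attribute, which is then decoded into per-port statuses. The transport object, its get() method, the attribute name, and the change-flag bit are hypothetical stand-ins for illustration, not an actual management API.

```python
# Illustrative sketch (assumed management interface): one Get per switch
# replaces one Get per port when sweeping for port status changes.

CHANGE_FLAG = 0x8            # assumed "state changed" bit within a 4-bit status code

def decode_statuses(attr, num_ports=36, bits_per_port=4):
    """Unpack one small status code per port from the fixed-size attribute."""
    return [(attr[(p * bits_per_port) // 8] >> ((p * bits_per_port) % 8)) & 0xF
            for p in range(num_ports)]

def sweep_switches(smi, switch_lids):
    """Query each switch's aggregated attribute with a single management packet."""
    ports_needing_attention = {}
    for lid in switch_lids:
        attr = smi.get(lid, attribute="SwitchPortStatus")   # one packet per switch
        statuses = decode_statuses(attr)
        ports_needing_attention[lid] = [p for p, s in enumerate(statuses)
                                        if s & CHANGE_FLAG]
    return ports_needing_attention
```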
- Figure 12 shows a scalable representation of extended link status, in accordance with an embodiment. More specifically, Figure 12 illustrates a switch having an attribute representing a scalable representation of extended link status.
- a switch 1100 can comprise a number of ports, such as ports 1110-1133 (it is noted that the number of ports shown in Figure 12 is neither illustrative nor indicative of a usual number of ports at a given switch within a fabric, such as an InfiniBand fabric).
- the switch 1100 also comprises an extended link status attribute 1250, which can be a fixed size message that represents the status of any links connected to the switch ports 1110-1133 in the switch 1100.
- a management module such as the subnet manager 1140 can, instead of sending one SMP for each port within the switch 1100 to determine the extended link status at each port, send one SMP 1245 to query the extended link status attribute 1250.
- the SMP can relay the link status for each of ports 1110-1133 at the time of checking.
- FIG. 13 is a flowchart for a method for supporting scalable representation of switch port status in a high performance computing environment, in accordance with an embodiment.
- a management entity, such as an InfiniBand Subnet Manager, can send a management packet to a switch requesting the switch port status for each port at the switch to which the management packet is sent.
- the switch to which the management packet is sent can receive the management packet.
- the switch can provide the status for each of its switch ports via an attribute that contains the switch port status for each port at the switch.
- the requested status for each switch port can be relayed, via the management packet to the management entity, such as the InfiniBand Subnet Manager.
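- The switch-side counterpart of this flow can be pictured as a small agent that answers a request for the aggregated attribute from its locally maintained per-port state. The request/response interface sketched below is an assumption made for illustration only.

```python
# Illustrative sketch (assumed request/response objects): the agent on the
# switch keeps per-port status up to date and returns it in one response.

class SwitchPortStatusAgent:
    def __init__(self, num_ports=36):
        self.port_status = [0] * num_ports      # updated whenever a port changes state

    def on_port_change(self, port, status_code):
        """Record a new status code for a port as soon as the change is observed."""
        self.port_status[port] = status_code

    def handle_get(self, request):
        """Answer a query of the aggregated attribute with a single response."""
        if request.attribute == "SwitchPortStatus":
            # A real agent would bit-pack this; bytes() keeps the sketch short.
            return request.make_response(payload=bytes(self.port_status))
        return request.make_response(status="UNSUPPORTED_ATTRIBUTE")
```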
- Figure 14 is a flowchart for a method for supporting scalable representation of switch port status in a high performance computing environment, in accordance with an embodiment.
- the method can provide, at one or more computers, including one or more microprocessors, at least one subnet, the at least one subnet comprising one or more switches, the one or more switches comprising at least a leaf switch, wherein each of the one or more switches comprise a plurality of ports, and wherein each of the one or more switches comprise at least one attribute, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters.
- the method can associate each port of the plurality of ports on the one or more switches with a switch port status.
- the method can represent each switch port status associated with each port of the plurality of ports on each switch in the at least one attribute at the associated switch.
- Figure 15 shows a scalable representation of link stability, in accordance with an embodiment. More specifically, Figure 15 illustrates a switch having an attribute representing a scalable representation of link stability.
- a switch 1100 can comprise a number of ports, such as ports 1110-1133 (it is noted that the number of ports shown in Figure 15 is neither illustrative nor indicative of a usual number of ports at a given switch within a fabric, such as an InfiniBand fabric).
- the switch 1100 also comprises a link stability attribute 1550, which can be a fixed size message that represents the stability of any links connected to the switch ports 1110-1133 in the switch 1100.
- a management module such as the subnet manager 1140 can, instead of sending one SMP for each port within the switch 1100 to determine the link stability at each port, send one SMP 1545 to query the link stability attribute 1550.
- the SMP can relay the link stability for each of ports 1110-1133 at the time of checking.
- a subnet management agent (SMA) 1555 can, over a time period (e.g., variable or fixed), monitor the stability of the links connected to the switch ports 1110-1133. Such monitoring can include, for example, counting the number of errors that each link connected to each port at the switch encountered during the set time period.
- the number of link errors found by the SMA 1555 at any given port within the switch can be used to continuously update the link stability attribute 1550, which can be queried by a single SMP from the subnet manager.
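- A possible shape for this SMA-side monitoring is sketched below: the agent samples per-port error counters at the start and end of a window and maps the difference to a small stability code written into the attribute. The counter callback, window length, and thresholds are assumptions for the example, not values defined by this specification.

```python
# Illustrative sketch (assumed thresholds and counter source): reduce the
# errors seen on each link during one monitoring window to a 2-bit
# stability code held in the aggregated attribute.

import time

def stability_code(errors_in_window):
    """Map an error count for the window to a small stability code."""
    if errors_in_window == 0:
        return 0b00          # stable
    if errors_in_window <= 5:
        return 0b01          # occasional errors
    if errors_in_window <= 50:
        return 0b10          # degraded
    return 0b11              # unstable

def monitor_stability(read_error_counters, window_seconds, attribute):
    """Count new errors per port over one window and update the attribute."""
    start = read_error_counters()                # {port: cumulative error count}
    time.sleep(window_seconds)
    end = read_error_counters()
    for port, total in end.items():
        attribute[port] = stability_code(total - start.get(port, 0))
```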
- the SM can gather link stability information from any given switch in a subnet via one SMP, rather than sending multiple SMPs, one to check each link at a node.
- the disclosed embodiment additionally allows for continuous monitoring and updating of a link stability attribute at any given node (i.e., via the SMA of each node) in a system, such that the SM can gather (via, e.g., Get() operations) link stability information for each link connected to a node in the subnet the SM manages.
- Scalable Link Availability Attribute
- Figure 16 shows a scalable representation of link availability, in accordance with an embodiment. More specifically, Figure 16 illustrates a switch having an attribute representing a scalable representation of link availability.
- a switch 1100 can comprise a number of ports, such as ports 1110-1133 (it is noted that the number of ports shown in Figure 16 is neither illustrative nor indicative of a usual number of ports at a given switch within a fabric, such as an InfiniBand fabric).
- the switch 1100 also comprises a link availability attribute 1650, which can be a fixed size message that represents the availability of any links connected to the switch ports 1110-1133 in the switch 1100.
- a management module such as the subnet manager 1140 can, instead of sending one SMP for each port within the switch 1100 to determine the link availability at each port, send one SMP 1645 to query the link availability attribute 1650.
- the SMP can relay the link availability for each of ports 1110-1133 at the time of checking.
- a subnet management agent (SMA) 1655 can, over a time period (e.g., variable or fixed), monitor the availability of the links connected to the switch ports 1110-1133. Such monitoring can include, for example, observing the level of congestion on each link connected to each port of the switch.
- the level of congestion on each link, as determined by the SMA 1655, can be used to continuously update the link availability attribute 1650, which can be queried by a single SMP from the subnet manager.
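- In the same spirit as the stability example, an SMA could reduce the observed load on each link to a small availability code. The utilization buckets in the sketch below are assumptions chosen only to make the idea concrete.

```python
# Illustrative sketch (assumed load buckets): map a measured per-link
# utilization in [0.0, 1.0] to a 2-bit availability code in the attribute.

def availability_code(utilization):
    """Higher congestion maps to a code indicating lower availability."""
    if utilization < 0.25:
        return 0b00          # lightly loaded, most available
    if utilization < 0.50:
        return 0b01
    if utilization < 0.75:
        return 0b10
    return 0b11              # heavily congested, least available

def update_availability(attribute, utilization_by_port):
    """Fold the measured utilization of each port's link into the attribute."""
    for port, utilization in utilization_by_port.items():
        attribute[port] = availability_code(utilization)
```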
- the SM can gather link availability information from any given switch in a subnet via one SMP, rather than sending multiple SMPs, one to check each link at a switch/node.
- the disclosed embodiment additionally allows for continuous monitoring and updating of a link availability attribute at any given node (i.e., via the SMA at each node) in a system, such that the SM can gather (via, e.g., Get() operations) link availability information for each link connected to a node in the subnet the SM manages.
- Figure 17 is a flowchart of an exemplary method for supporting scalable representation of link stability and availability in a high performance computing environment, in accordance with an embodiment.
- the method can provide, at one or more computers, including one or more microprocessors, at least one subnet, the at least one subnet comprising one or more switches, the one or more switches comprising at least a leaf switch, wherein each of the one or more switches comprise a plurality of ports, and wherein each of the one or more switches comprise at least one attribute, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters.
- the method can provide, at each of the one or more switches, at least one attribute.
- the method can provide a subnet management agent (SMA) of a plurality of subnet management agents at a switch of the one or more switches.
- the method can monitor, by the SMA of the switch of the one or more switches, at least one of link stability and link availability at each port of the plurality of ports at the switch.
- a system for supporting scalable representation of link stability and availability in a high performance computing environment comprises: one or more microprocessors; at least one subnet, the at least one subnet comprises one or more switches, the one or more switches comprises at least a leaf switch, wherein each of the one or more switches comprise a plurality of ports, and wherein each of the one or more switches comprise at least one attribute, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters; wherein each of the one or more switches comprise at least one attribute; wherein a subnet management agent (SMA) of a plurality of subnet management agents is provided at a switch of the one or more switches.
- the above system further comprises monitoring, by the SMA of the switch of the one or more switches, link stability at each port of the plurality of ports of the switch, wherein the monitoring comprises counting, for a monitoring period of time, a number of errors at each link attached to each port of the plurality of ports of the switch; and wherein, after the monitoring, by the SMA, of the link stability at each port of the plurality of ports of the switch, the SMA populates a representation of the counted errors for each link in the at least one attribute.
- the subnet manager determines the link stability for each port on the switch of the one or more switches using one operation.
- the one operation comprises a subnet management packet.
- the above system further comprises: monitoring, by the SMA of the switch of the one or more switches, link availability at each port of the plurality of ports of the switch, wherein the monitoring comprises observing, for a monitoring period of time, a traffic load at each link attached to each port of the plurality of ports of the switch; and, after the monitoring, by the SMA, of the link availability at each port of the plurality of ports of the switch, the SMA populates a representation of the observed traffic load for each link in the at least one attribute.
- the subnet manager determines the link availability for each port on the switch of the one or more switches using one operation.
- the one operation comprises a subnet management packet.
- the above method further comprises monitoring, by the SMA of the switch of the one or more switches, link stability at each port of the plurality of ports of the switch, wherein the monitoring comprises counting, for a monitoring period of time, a number of errors at each link attached to each port of the plurality of ports of the switch; and, upon completion of the monitoring, by the SMA of the switch of the one or more switches, of the link stability at each port of the plurality of ports of the switch, the SMA populates a representation of the counted errors for each link in the at least one attribute.
- the above method comprises determining, by the subnet manager, the link stability for each port on the switch of the one or more switches using one operation.
- the one operation comprises a subnet management packet.
- the above method further comprises: monitoring, by the SMA of the switch of the one or more switches, link availability at each port of the plurality of ports of the switch, wherein the monitoring comprises observing, for a monitoring period of time, a traffic load at each link attached to each port of the plurality of ports of the switch; and, upon completion of the monitoring of the link availability at each port of the plurality of ports of the switch for the monitoring period of time, populating, by the SMA, a representation of the observed traffic load for each link in the at least one attribute.
- the subnet manager determines the link availability status for each port on one of the one or more switches using one operation.
- a non-transitory computer readable storage medium including instructions stored thereon for supporting scalable representation of link stability and availability in a high performance computing environment, which when read and executed by one or more computers cause the one or more computers to perform steps comprising: providing, at one or more computers, including one or more microprocessors, at least one subnet, the at least one subnet comprising one or more switches, the one or more switches comprising at least a leaf switch, wherein each of the one or more switches comprise a plurality of ports, a plurality of host channel adapters, wherein the plurality of host channel adapters are interconnected via the one or more switches, a plurality of end nodes, each of the plurality of end nodes being associated with at least one host channel adapter of the plurality of host channel adapters, and a subnet manager, the subnet manager running on one of the one or more switches or one of the plurality of host channel adapters.
- the above non-transitory computer readable storage medium further comprises: monitoring, by the SMA of the switch of the one or more switches, link stability at each port of the plurality of ports of the switch, wherein the monitoring comprises counting, for a monitoring period of time, a number of errors at each link attached to each port of the plurality of ports of the switch; and, upon completion of the monitoring, by the SMA of the switch of the one or more switches, of the link stability at each port of the plurality of ports of the switch, the SMA populates a representation of the counted errors for each link in the at least one attribute.
- the above non-transitory computer readable storage medium further comprises: determining, by the subnet manager, the link stability for each port on the switch of the one or more switches using one operation.
- the one operation comprises a subnet management packet.
- the above non-transitory computer readable storage medium further comprises: monitoring, by the SMA of the switch of the one or more switches, link availability at each port of the plurality of ports of the switch, wherein the monitoring comprises observing, for a monitoring period of time, a traffic load at each link attached to each port of the plurality of ports of the switch; and, upon completion of the monitoring of the link availability at each port of the plurality of ports of the switch for the monitoring period of time, populating, by the SMA, a representation of the observed traffic load for each link in the at least one attribute.
- the subnet manager determines the link availability status for each port on one of the one or more switches using one operation, the one operation comprises a subnet management packet.
- a computer program comprises program instructions in machine-readable format that when executed by a computer system cause the computer system to perform the above method.
- a computer program comprises the above computer program stored in a non-transitory machine readable data storage medium.
- FIG. 1 A block diagram illustrating an exemplary computing system
- features of the present invention can be incorporated in software and/or firmware for controlling the hardware of a processing system, and for enabling a processing system to interact with other mechanisms utilizing the results of the present invention.
- software or firmware may include, but is not limited to, application code, device drivers, operating systems and execution environments/containers.
- the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
- Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Environmental & Geological Engineering (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018504729A JP6902527B2 (ja) | 2016-01-27 | 2017-01-25 | 高性能コンピューティング環境においてスイッチポートステータスのスケーラブルな表現をサポートするためのシステムおよび方法 |
| EP17705998.7A EP3408983B1 (en) | 2016-01-27 | 2017-01-25 | System and method for supporting scalable representation of switch port status in a high performance computing environment |
| CN201780002356.0A CN107852377B (zh) | 2016-01-27 | 2017-01-25 | 用于在高性能计算环境中支持交换机端口状况的可伸缩表示的系统和方法 |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201662287704P | 2016-01-27 | 2016-01-27 | |
| US62/287,704 | 2016-01-27 | ||
| US15/412,995 | 2017-01-23 | ||
| US15/413,075 | 2017-01-23 | ||
| US15/412,995 US10594627B2 (en) | 2016-01-27 | 2017-01-23 | System and method for supporting scalable representation of switch port status in a high performance computing environment |
| US15/413,075 US10200308B2 (en) | 2016-01-27 | 2017-01-23 | System and method for supporting a scalable representation of link stability and availability in a high performance computing environment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017132271A1 true WO2017132271A1 (en) | 2017-08-03 |
Family
ID=65528945
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2017/014963 Ceased WO2017132271A1 (en) | 2016-01-27 | 2017-01-25 | System and method for supporting scalable representation of switch port status in a high performance computing environment |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP3408983B1 (en) |
| JP (1) | JP6902527B2 (en) |
| CN (1) | CN107852377B (en) |
| WO (1) | WO2017132271A1 (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110035009A (zh) * | 2018-01-12 | 2019-07-19 | 丛林网络公司 | 分组转发路径元素的节点表示 |
| CN118018447A (zh) * | 2024-02-01 | 2024-05-10 | 山东师范大学 | 一种用于远程控制的交换机状态监控方法及系统 |
| US20250247299A1 (en) * | 2024-01-25 | 2025-07-31 | Mellanox Technologies, Ltd. | System for optimized data communication in hierarchical networks |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108696436A (zh) * | 2018-08-15 | 2018-10-23 | 无锡江南计算技术研究所 | 一种分布式网络拓扑探查与路由分发系统及方法 |
| US11341082B2 (en) * | 2019-11-19 | 2022-05-24 | Oracle International Corporation | System and method for supporting target groups for congestion control in a private fabric in a high performance computing environment |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2005043842A1 (en) * | 2003-10-21 | 2005-05-12 | Cicso Technology, Inc. | Port-based loadsharing for an access-layer switch |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1056464A (ja) * | 1996-08-13 | 1998-02-24 | Fujitsu Ltd | Atm化装置における回線障害監視方法 |
| US20030005039A1 (en) * | 2001-06-29 | 2003-01-02 | International Business Machines Corporation | End node partitioning using local identifiers |
| US20040030763A1 (en) * | 2002-08-08 | 2004-02-12 | Manter Venitha L. | Method for implementing vendor-specific mangement in an inifiniband device |
| US7925477B2 (en) * | 2004-09-20 | 2011-04-12 | The Mathworks, Inc. | Method and system for transferring data between a discrete event environment and an external environment |
| US7200704B2 (en) * | 2005-04-07 | 2007-04-03 | International Business Machines Corporation | Virtualization of an I/O adapter port using enablement and activation functions |
| WO2013170218A1 (en) * | 2012-05-10 | 2013-11-14 | Oracle International Corporation | System and method for supporting subnet manager (sm) master negotiation in a network environment |
| US9130858B2 (en) * | 2012-08-29 | 2015-09-08 | Oracle International Corporation | System and method for supporting discovery and routing degraded fat-trees in a middleware machine environment |
| US9135198B2 (en) * | 2012-10-31 | 2015-09-15 | Avago Technologies General Ip (Singapore) Pte Ltd | Methods and structure for serial attached SCSI expanders that self-configure by setting routing attributes of their ports based on SMP requests |
| CN104407911B (zh) * | 2014-10-31 | 2018-03-20 | 新华三技术有限公司 | 虚拟机迁移方法及装置 |
-
2017
- 2017-01-25 JP JP2018504729A patent/JP6902527B2/ja active Active
- 2017-01-25 EP EP17705998.7A patent/EP3408983B1/en active Active
- 2017-01-25 CN CN201780002356.0A patent/CN107852377B/zh active Active
- 2017-01-25 WO PCT/US2017/014963 patent/WO2017132271A1/en not_active Ceased
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2005043842A1 (en) * | 2003-10-21 | 2005-05-12 | Cicso Technology, Inc. | Port-based loadsharing for an access-layer switch |
Non-Patent Citations (3)
| Title |
|---|
| FRANCESCO FUSCO ET AL: "Real-time creation of bitmap indexes on streaming network data", THE VLDB JOURNAL ; THE INTERNATIONAL JOURNAL ON VERY LARGE DATA BASES, SPRINGER, BERLIN, DE, vol. 21, no. 3, 30 July 2011 (2011-07-30), pages 287 - 307, XP035056143, ISSN: 0949-877X, DOI: 10.1007/S00778-011-0242-X * |
| INFINIBAND® TRADE ASSOCIATION ARCHITECTURE SPECIFICATION, vol. 1, March 2015 (2015-03-01), Retrieved from the Internet <URL:http://www.inifinibandta.org> |
| VISHNU A ET AL: "Performance Modeling of Subnet Management on Fat Tree InfiniBand Networks using OpenSM", PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, 2005. PROCEEDINGS. 19TH IEEE INTERNATIONAL DENVER, CO, USA 04-08 APRIL 2005, PISCATAWAY, NJ, USA,IEEE, 4 April 2005 (2005-04-04), pages 296b - 296b, XP010785940, ISBN: 978-0-7695-2312-5, DOI: 10.1109/IPDPS.2005.339 * |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110035009A (zh) * | 2018-01-12 | 2019-07-19 | 丛林网络公司 | 分组转发路径元素的节点表示 |
| US10979339B2 (en) | 2018-01-12 | 2021-04-13 | Juniper Networks, Inc. | Node representations of packet forwarding path elements |
| US20250247299A1 (en) * | 2024-01-25 | 2025-07-31 | Mellanox Technologies, Ltd. | System for optimized data communication in hierarchical networks |
| CN118018447A (zh) * | 2024-02-01 | 2024-05-10 | 山东师范大学 | 一种用于远程控制的交换机状态监控方法及系统 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP6902527B2 (ja) | 2021-07-14 |
| CN107852377B (zh) | 2021-06-25 |
| CN107852377A (zh) | 2018-03-27 |
| JP2019503597A (ja) | 2019-02-07 |
| EP3408983B1 (en) | 2021-12-08 |
| EP3408983A1 (en) | 2018-12-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11716292B2 (en) | System and method for supporting scalable representation of switch port status in a high performance computing environment | |
| US11695691B2 (en) | System and method for supporting dual-port virtual router in a high performance computing environment | |
| US11740922B2 (en) | System and method for providing an InfiniBand SR-IOV vSwitch architecture for a high performance cloud computing environment | |
| US10630583B2 (en) | System and method for supporting multiple lids for dual-port virtual routers in a high performance computing environment | |
| EP3408983B1 (en) | System and method for supporting scalable representation of switch port status in a high performance computing environment | |
| EP3408982B1 (en) | System and method for supporting scalable bit map based p_key table in a high performance computing environment | |
| US11271870B2 (en) | System and method for supporting scalable bit map based P_Key table in a high performance computing environment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17705998 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2018504729 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2017705998 Country of ref document: EP |
|
| ENP | Entry into the national phase |
Ref document number: 2017705998 Country of ref document: EP Effective date: 20180827 |