US20130185423A1 - Dynamic distribution of nodes on a multi-node computer system - Google Patents

Dynamic distribution of nodes on a multi-node computer system Download PDF

Info

Publication number
US20130185423A1
Authority
US
United States
Prior art keywords
nodes
job
node
block
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/786,785
Inventor
Eric L. Barsness
David L. Darrington
Amanda Randles
John M. Santosuosso
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/786,785 priority Critical patent/US20130185423A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RANDLES, AMANDA, SANTOSUOSSO, JOHN M., BARSNESS, ERIC L., DARRINGTON, DAVID L.
Publication of US20130185423A1 publication Critical patent/US20130185423A1/en
Abandoned legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 - Arrangements for monitoring or testing data switching networks
    • H04L43/16 - Threshold monitoring
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 - Techniques for rebalancing the load in a distributed system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3885 - Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 - Digital computers in general; Data processing equipment in general
    • G06F15/76 - Architectures of general purpose stored program computers
    • G06F15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors


Abstract

I/O nodes are dynamically distributed on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job and then dynamically adjusts the I/O node distribution based on the I/O performance of the executing job.

Description

    BACKGROUND
  • 1. Technical Field
  • The disclosure and claims herein generally relate to multi-node computer systems, and more specifically relate to dynamic distribution of compute nodes with respect to I/O nodes on a multi-node computer system.
  • 2. Background Art
  • Supercomputers and other multi-node computer systems continue to be developed to tackle sophisticated computing jobs. One type of multi-node computer system is a massively parallel computer system. A family of such massively parallel computers is being developed by International Business Machines Corporation (IBM) under the name Blue Gene. The Blue Gene/L system is a high density, scalable system in which the current maximum number of compute nodes is 65,536. The Blue Gene/L node consists of a single ASIC (application specific integrated circuit) with 2 CPUs and memory. The full computer is housed in 64 racks or cabinets with 32 node boards in each rack.
  • Computer systems such as Blue Gene have a large number of nodes, each with its own processor and local memory. The nodes are connected with several communication networks. One communication network connects the nodes in a logical tree network. In the logical tree network, the nodes are connected to an input-output (I/O) node at the top of the tree.
  • In Blue Gene, there are 2 compute nodes per node card with 2 processors each. A node board holds 16 node cards and each rack holds 32 node boards. A node board has slots to hold 2 I/O cards that each have 2 I/O nodes. Thus, fully loaded node boards have 4 I/O nodes for 32 compute nodes. The nodes on two node boards can be configured in a virtual tree network that communicates with the I/O nodes. For two node boards there may be 8 I/O nodes that correspond to 64 compute nodes. If the I/O node slots are not fully populated, then there could be 2 I/O nodes for 64 compute nodes. The distribution of I/O nodes to compute nodes may thus vary between 1/64 and 1/8, and the I/O node to compute node ratios can be defined as 1/8, 1/32, 1/64 or 1/128 (I/O to compute). In the prior art, the distribution of the I/O nodes is static once a block is configured.
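  • As a rough illustration of the ratios above, the following sketch (not part of the patent; the helper name is an assumption) computes the I/O-node to compute-node ratio for a few board populations:
```python
# Sketch: I/O-to-compute ratios for the Blue Gene/L board populations
# described above (illustrative only; io_ratio is a hypothetical helper).
from fractions import Fraction

def io_ratio(io_nodes: int, compute_nodes: int) -> Fraction:
    """Return the I/O-node to compute-node ratio as a reduced fraction."""
    return Fraction(io_nodes, compute_nodes)

print(io_ratio(4, 32))    # fully loaded node board: 4 I/O nodes, 32 compute nodes -> 1/8
print(io_ratio(8, 64))    # two fully loaded node boards -> 1/8
print(io_ratio(2, 64))    # sparsely populated I/O slots -> 1/32
print(io_ratio(1, 64), io_ratio(1, 128))  # other defined ratios -> 1/64, 1/128
```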
  • The Blue Gene computer can be partitioned into multiple, independent blocks. Each block is used to run one job at a time. A block consists of a number of ‘processing sets’ (psets). Each pset has an I/O node and a group of compute nodes. The compute nodes run the user application, and the I/O nodes are used to access external files and networks.
  • With the communication networks as described above, applications or “jobs” loaded on the nodes execute with a fixed I/O to compute node ratio. Without a way to dynamically distribute the I/O nodes to adjust the ratio of I/O to compute nodes based on the I/O characteristics of the work being performed on the system, multi-node computer systems will continue to suffer from reduced efficiency.
  • BRIEF SUMMARY
  • An apparatus and method are described for dynamic distribution of compute nodes versus I/O nodes on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job to then dynamically adjust the I/O node distribution based on the I/O performance of the executing job.
  • The description and examples herein are directed to a massively parallel computer system such as the Blue Gene architecture, but the claims herein expressly extend to other parallel computer systems with multiple processors arranged in a network structure.
  • The foregoing and other features and advantages will be apparent from the following more particular description, and as illustrated in the accompanying drawings.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The disclosure will be described in conjunction with the appended drawings, where like designations denote like elements, and:
  • FIG. 1 is a block diagram of a massively parallel computer system;
  • FIG. 2 is a block diagram of a compute node in a massively parallel computer system;
  • FIG. 3 shows a block diagram of a block of compute nodes to illustrate the tree network;
  • FIG. 4 shows a data packet for communicating on a collective network in a massively parallel computer system;
  • FIG. 5 shows a block diagram that represents a job record in a massively parallel computer system;
  • FIG. 6 is a block diagram that illustrates an example of an initial I/O node distribution in a massively parallel computer system;
  • FIG. 7 is a block diagram that illustrates the example of FIG. 6 after dynamic distribution of the I/O nodes in a massively parallel computer system;
  • FIG. 8 is a flow diagram of a method for dynamic I/O node redistribution on a massively parallel computer system;
  • FIG. 9 is a method flow diagram that illustrates one possible implementation of step 830 in FIG. 8; and
  • FIG. 10 is a method flow diagram that illustrates one possible implementation of step 930 in FIG. 9.
  • DETAILED DESCRIPTION
  • The description and claims herein are directed to a method and apparatus for dynamic distribution of compute nodes versus I/O nodes on a multi-node computing system. An I/O configuration mechanism located in the service node of a multi-node computer system controls the distribution of the I/O nodes. The I/O configuration mechanism uses job information located in a job record to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job to then dynamically adjust the I/O node distribution based on the I/O performance of the executing job. The examples herein will be described with respect to the Blue Gene/L massively parallel computer developed by International Business Machines Corporation (IBM).
  • FIG. 1 shows a block diagram that represents a massively parallel computer system 100 such as the Blue Gene/L computer system. The Blue Gene/L system is a scalable system in which the maximum number of compute nodes is 65,536. Each node 110 has an application specific integrated circuit (ASIC) 112, also called a Blue Gene/L compute chip 112. The compute chip incorporates two processors or central processor units (CPUs) and is mounted on a node daughter card 114. The node also typically has 512 megabytes of local memory (not shown). A node board 120 accommodates 32 node daughter cards 114, each having a node 110. Thus, each node board has 32 nodes, with 2 processors for each node, and the associated memory for each processor. A rack 130 is a housing that contains 32 node boards 120. Each of the node boards 120 connects into a midplane printed circuit board 132 with a midplane connector 134. The midplane 132 is inside the rack and not shown in FIG. 1. The full Blue Gene/L computer system would be housed in 64 racks 130 or cabinets with 32 node boards 120 in each. The full system would then have 65,536 nodes and 131,072 CPUs (64 racks×32 node boards×32 nodes×2 CPUs).
  • The Blue Gene/L computer system structure can be described as a compute node core with an I/O node surface, where communication to 1024 compute nodes 110 is handled by each I/O node 170 that has an I/O processor connected to the service node 140. The I/O nodes 170 have no local storage. The I/O nodes are connected to the compute nodes through the logical tree network and also have functional wide area network capabilities through a gigabit Ethernet network (See FIG. 2 below). The gigabit Ethernet network is connected to an I/O processor (or Blue Gene/L link chip) in the I/O node 170 located on a node board 120 that handles communication from the service node 160 to a number of nodes. The Blue Gene/L system has one or more I/O nodes 170 connected to the node board 120. The I/O processors can be configured to communicate with 8, 32 or 64 nodes. The service node uses the gigabit network to control connectivity by communicating to link cards on the compute nodes. The connections to the I/O nodes are similar to the connections to the compute node except the I/O nodes are not connected to the torus network.
  • Again referring to FIG. 1, the computer system 100 includes a service node 140 that handles the loading of the nodes with software and controls the operation of the whole system. The service node 140 is typically a mini computer system such as an IBM pSeries server running Linux with a control console (not shown). The service node 140 is connected to the racks 130 of compute nodes 110 with a control system network 150. The control system network provides control, test, and bring-up infrastructure for the Blue Gene/L system. The control system network 150 includes various network interfaces that provide the necessary communication for the massively parallel computer system. The network interfaces are described further below.
  • The service node 140 communicates through the control system network 150 dedicated to system management. The control system network 150 includes a private 100-Mb/s Ethernet connected to an Ido chip 180 located on a node board 120 that handles communication from the service node 160 to a number of nodes. This network is sometimes referred to as the JTAG network since it communicates using the JTAG protocol. All control, test, and bring-up of the compute nodes 110 on the node board 120 is governed through the JTAG port communicating with the service node.
  • The service node includes a job scheduler 142 for allocating and scheduling work and data placement on the compute nodes. The job scheduler 142 loads a job record 144 from data storage 138 for placement on the compute nodes. The job record 144 includes a job and related information as described more fully below. The service node further includes an I/O configuration mechanism 146 that dynamically distributes I/O nodes on a multi-node computing system. The I/O configuration mechanism 146 uses job information located in the job record 144 to initially configure the I/O node distribution. The I/O configuration mechanism further monitors the I/O performance of the executing job to then dynamically adjust the I/O node distribution based on the I/O performance of the executing job.
  • FIG. 2 illustrates a block diagram of an exemplary compute node as introduced above. FIG. 2 also represents a block diagram for an I/O node, which has the same overall structure as the compute node. A notable difference between the compute node and the I/O nodes is that the Ethernet adapter 226 is connected to the control system on the I/O node but is not used in the compute node. The compute node 110 of FIG. 2 includes a plurality of computer processors 210, each with an arithmetic logic unit (ALU) 211 and a memory management unit (MMU) 212. The processors 210 are connected to random access memory (‘RAM’) 214 through a high-speed memory bus 215. Also connected to the high-speed memory bus 215 is a bus adapter 217. The bus adapter 217 connects to an extension bus 218 that connects to other components of the compute node.
  • Stored in RAM 214 are a class routing table 221, an application program (or job) 222, and an operating system kernel 223. The class routing table 221 stores data for routing data packets on the collective network or tree network as described more fully below. The application program is loaded on the node by the control system to perform a user-designated task. The application program typically runs in parallel with application programs running on adjacent nodes. The operating system kernel 223 is a module of computer program instructions and routines for an application program's access to other resources of the compute node. The quantity and complexity of tasks to be performed by an operating system on a compute node in a massively parallel computer are typically smaller and less complex than those of an operating system on a typical stand-alone computer. The operating system may therefore be quite lightweight by comparison with operating systems of general purpose computers, a pared-down version as it were, or an operating system developed specifically for operations on a particular massively parallel computer. Operating systems that may usefully be improved or simplified for use in a compute node include UNIX, Linux, Microsoft XP, AIX, IBM's i5/OS, and others as will occur to those of skill in the art.
  • The compute node 110 of FIG. 2 includes several communications adapters 226, 228, 230, 232 for implementing data communications with other nodes of a massively parallel computer. Such data communications may be carried out serially through RS-232 connections, through external buses such as USB, through data communications networks such as IP networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a network.
  • The data communications adapters in the example of FIG. 2 include a Gigabit Ethernet adapter 226 that couples example I/O node 110 for data communications to a Gigabit Ethernet 234. In Blue Gene, this communication link is only used on I/O nodes and is not connected on the compute nodes. Gigabit Ethernet is a network transmission standard, defined in the IEEE 802.3 standard, that provides a data rate of 1 billion bits per second (one gigabit). Gigabit Ethernet is a variant of Ethernet that operates over multimode fiber optic cable, single mode fiber optic cable, or unshielded twisted pair.
  • The data communications adapters in the example of FIG. 2 include a JTAG Slave circuit 228 that couples the compute node 110 for data communications to a JTAG Master circuit over a JTAG network 236. JTAG is the usual name used for the IEEE 1149.1 standard entitled Standard Test Access Port and Boundary-Scan Architecture for test access ports used for testing printed circuit boards using boundary scan. JTAG boundary scans through the JTAG Slave 228 may efficiently configure processor registers and memory in compute node 110.
  • The data communications adapters in the example of FIG. 2 include a Point To Point Network Adapter 230 that couples the compute node 110 for data communications to a network 238. In Blue Gene, the Point To Point Network is typically configured as a three-dimensional torus or mesh. Point To Point Adapter 230 provides data communications in six directions on three communications axes, x, y, and z, through six bidirectional links 238: +x, −x, +y, −y, +z, and −z. The torus network logically connects the compute nodes in a lattice like structure that allows each compute node 110 to communicate with its closest 6 neighbors.
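  • To make the torus connectivity concrete, the sketch below (an illustration only; the lattice dimensions are assumed, not specified by the patent) computes the six nearest neighbors of a node with wraparound on each axis:
```python
# Sketch: the six torus neighbors (+x, -x, +y, -y, +z, -z) of a compute node,
# with wraparound on each axis. Dimensions are illustrative assumptions.
def torus_neighbors(x, y, z, dims=(8, 8, 8)):
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),   # +x, -x
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),   # +y, -y
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),   # +z, -z
    ]

print(torus_neighbors(0, 0, 0))  # every node has exactly six nearest neighbors
```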
  • The data communications adapters in the example of FIG. 2 include a collective network or tree network adapter 232 that couples the compute node 110 for data communications to a network 240 configured as a binary tree. This network is also sometimes referred to as the collective network. Collective network adapter 232 provides data communications through three bidirectional links: two links to children nodes and one link to a parent node (not shown). The collective network adapter 232 of each node has additional hardware to support operations on the collective network.
  • Again referring to FIG. 2, the collective network 240 extends over the compute nodes of the entire Blue Gene machine, allowing data to be sent from any node to all others (broadcast), or a subset of nodes. Each node typically has three links, with one or two links to a child node and a third connected to a parent node. Arithmetic and logical hardware is built into the collective network to support integer reduction operations including min, max, sum, bitwise logical OR, bitwise logical AND, and bitwise logical XOR. The collective network is also used for global broadcast of data, rather than transmitting it around in rings on the torus network. For one-to-all communications, this is a tremendous improvement from a software point of view over the nearest-neighbor 3D torus network.
  • The collective network partitions in a manner akin to the torus network. When a user partition is formed, an independent collective network is formed for the partition; it includes all nodes in the partition (and no nodes in any other partition). In the collective network, each node contains a class routing table that is used in conjunction with a small header field in each packet of data sent over the network to determine a class. The class is used to locally determine the routing of the packet. With this technique, multiple independent collective networks are virtualized in a single physical network with one or more I/O nodes for the virtual network. Two standard examples of this are the class that connects a small group of compute nodes to an I/O node and a class that includes all the compute nodes in the system. In Blue Gene, the physical routing of the collective network is static and in the prior art the virtual network was static after being configured. As described herein, the I/O configuration mechanism (FIG. 1, 146) dynamically distributes the I/O nodes in the virtual network. Thus, while the physical routing table of the collective network is static, the virtual network can be reconfigured to dynamically redistribute the I/O nodes to the virtual networks as described herein. Alternatively, the I/O configuration mechanism could distribute the I/O nodes using hardware for a non-virtual network.
  • FIG. 3 illustrates a portion of the collective network or tree network shown as 240 in FIG. 2. The collective or tree network 300 is connected to the service node 140 through the control system network 150. The tree network 300 is a group of compute nodes 110 connected to an I/O node 170 in a logical tree structure. The I/O node 170 is connected to one or more compute nodes 110. The compute nodes Node1 312 and Node2 314 are connected directly to the I/O node 170 and form the top of the tree, or a first level 311, for a set of nodes connected below each of Node1 312 and Node2 314. Node1 312 is the top of a tree network and has child nodes Node3 316 and Node4 318 on a second level 317. Similarly, Node3 316 has child nodes Node7 322 and Node8 324 on a third level 325. Many of the child nodes are not shown for simplicity, but the tree network 300 could contain any number of nodes with any number of levels.
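  • The fragment of the tree in FIG. 3 can be pictured as a parent-to-children mapping rooted at the I/O node; the sketch below is only an illustration of that structure (the dictionary and helper are not part of the patent):
```python
# Sketch: the portion of tree network 300 from FIG. 3 as a parent -> children map.
tree = {
    "I/O node 170": ["Node1 312", "Node2 314"],  # first level 311
    "Node1 312": ["Node3 316", "Node4 318"],     # second level 317
    "Node3 316": ["Node7 322", "Node8 324"],     # third level 325
}

def level(node: str) -> int:
    """Tree level of a node, counting the I/O node as level 0."""
    parents = {child: parent for parent, children in tree.items() for child in children}
    d = 0
    while node in parents:
        node = parents[node]
        d += 1
    return d

print(level("Node7 322"))  # 3
```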
  • FIG. 4 shows a data packet 400 for communicating on the tree network 240 (FIG. 2) in a massively parallel computer system 100 (FIG. 1). Each data packet 400 includes a class 410 and data 420. The class 410 is used to determine the routing of the packet to deliver data 420 on the virtual tree network over the tree network 240. The class 410 is used in conjunction with the class routing table 221 to determine how to route the data packet 400 to the appropriate node on the tree network.
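  • A minimal sketch of how a class field and a per-node class routing table could select forwarding links follows; the table layout, field names, and link labels are illustrative assumptions, not the actual Blue Gene packet or table format:
```python
# Sketch: class-based forwarding of a tree-network packet (toy representation).
from dataclasses import dataclass

@dataclass
class Packet:
    cls: int      # class 410: selects a virtual tree network
    data: bytes   # data 420

# Hypothetical class routing table 221 for one node: class -> links to forward on.
CLASS_ROUTING_TABLE = {
    0: ["parent"],                       # e.g. a class that funnels traffic toward the I/O node
    1: ["child0", "child1", "parent"],   # e.g. a class spanning all compute nodes
}

def forward_links(packet: Packet, arrived_on: str) -> list:
    """Links this node forwards the packet on, excluding the link it arrived on."""
    return [link for link in CLASS_ROUTING_TABLE.get(packet.cls, []) if link != arrived_on]

print(forward_links(Packet(cls=1, data=b"payload"), arrived_on="child0"))  # ['child1', 'parent']
```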
  • FIG. 5 shows a block diagram that represents a job record 144 in a massively parallel computer system. The job record 144 includes a name 510, the job executable 512, a job description 514, historical I/O utilization 516 and application control parameters 518. The name 510 identifies the job record that contains the record information. The job executable 512 is the code to execute the job. The job description 514 includes information about the job, including information that may help determine the I/O needs of the job. The historical I/O utilization 516 contains historical information about the I/O utilization of the job that is recorded by the I/O configuration mechanism during past executions of the job. Application control parameters 518 are embedded control commands that allow a job to dictate how to set up the I/O configuration. The application control parameters 518 may be set by a system administrator to allow the execution of the job to dictate the I/O configuration upon initial execution of the job.
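  • As an illustration only, the job record fields listed above might be modeled as the following data structure (the class and field names are assumptions, not the patent's storage format):
```python
# Sketch: an in-memory stand-in for job record 144 with the fields described above.
from dataclasses import dataclass, field

@dataclass
class JobRecord:
    name: str                                        # name 510
    executable: str                                  # job executable 512
    description: dict = field(default_factory=dict)  # job description 514 (I/O hints, etc.)
    historical_io_utilization: list = field(default_factory=list)       # 516: e.g. past I/O ops/sec
    application_control_parameters: dict = field(default_factory=dict)  # 518: embedded control commands

record = JobRecord(
    name="example_job",
    executable="/jobs/example_job",                  # hypothetical path
    description={"expected_io": "high"},
    historical_io_utilization=[1200, 1350, 1500],
    application_control_parameters={"requested_io_nodes": 2, "priority": 5},
)
```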
  • FIG. 6 and FIG. 7 illustrate an example of dynamically allocating an I/O node in a massively parallel computer system. FIG. 6 represents an initial state of a portion of a massively parallel computer system prior to dynamically allocating the I/O nodes. In this initial state, there are 4 I/O nodes 170 a-170 d installed on a node card (not shown) that has 64 nodes 110. In the initial state, the nodes 110 are allocated with 16 nodes in each node block 710-713 for executing a job. Each node block 710-713 has been initially configured with a single I/O node 170. The job associated with block 713 is determined to have extensive I/O needs by the I/O configuration mechanism 146 (FIG. 1). The I/O needs may be determined upon loading a job, or while the job is executing as described herein. The I/O configuration mechanism may then determine to dynamically distribute an additional I/O node to the job executing on block 713. In this example, the I/O configuration mechanism determines to distribute the nodes associated with block 712 to block 711 to free up an I/O node 170 c and then distribute this I/O node 170 c to the node block 713 needing the additional I/O capability. The result of this dynamic distribution is illustrated in FIG. 7, where the 16 nodes in node block 713 are now configured with 2 I/O nodes 170 c, 170 d. The nodes that were previously configured to node block 712 have been re-configured 812 with node block 711.
  • As illustrated in the above example, the I/O configuration mechanism dynamically distributes I/O nodes to blocks of compute nodes in a massively parallel computer system. In the previous example, the determination to distribute an additional I/O node to the node block may have been based on data in the job record or on real-time I/O needs determined by monitoring the job execution. For example, upon loading the job, the I/O configuration mechanism could have detected from the job description that the job has extensive I/O needs and then distributed the additional I/O node from a block that has fewer I/O demands or is not being used. Second, the historical I/O utilization 516 may have indicated that the job typically requires a large amount of I/O resources and thus would execute more efficiently with an additional I/O node. Third, the I/O configuration mechanism may have determined from the job record that the application will assert control with application control parameters 518 (FIG. 5). The application control parameters 518 indicate how the application wishes to assert control over the I/O configuration mechanism. For example, the application control parameters may indicate a priority or a number of I/O nodes that are required. Finally, the I/O configuration mechanism may dynamically distribute the I/O nodes based on real-time monitoring of the I/O needs of the job while it is executing. The I/O needs may be monitored by looking at performance metrics such as the number of I/O operations performed by the I/O node for the job, network latency, overall network loading, etc. When the performance metrics indicate the I/O demand is above an established threshold, the I/O configuration mechanism will attempt to dynamically update the I/O configuration to distribute additional I/O nodes to the job.
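  • The four decision inputs described above (job description, historical utilization, application control parameters, and real-time metrics) could be combined as in the sketch below; the thresholds, field names, and helper are assumptions, not the patent's implementation:
```python
# Sketch: should the I/O configuration mechanism distribute an additional I/O
# node to this job? All thresholds and field names are illustrative assumptions.
def needs_more_io(job_record: dict, metrics: dict, current_io_nodes: int = 1,
                  io_ops_threshold: float = 1000.0, latency_threshold_ms: float = 50.0) -> bool:
    # 1. The job description indicates extensive I/O needs.
    if job_record.get("description", {}).get("expected_io") == "high":
        return True
    # 2. Historical I/O utilization shows the job typically needs many I/O resources.
    history = job_record.get("historical_io_utilization", [])
    if history and sum(history) / len(history) > io_ops_threshold:
        return True
    # 3. Application control parameters explicitly request more I/O nodes.
    if job_record.get("application_control_parameters", {}).get("requested_io_nodes", 0) > current_io_nodes:
        return True
    # 4. Real-time monitoring shows I/O demand above an established threshold.
    return (metrics.get("io_ops_per_sec", 0) > io_ops_threshold
            or metrics.get("network_latency_ms", 0) > latency_threshold_ms)

print(needs_more_io({"description": {}}, {"io_ops_per_sec": 1800}))  # True
```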
  • FIG. 8 shows a method 800 for dynamic distribution of compute nodes versus I/O nodes on a multi-node computing system. The steps in method 800 are preferably performed by an I/O configuration mechanism 146 in the service node 140 (FIG. 1). First, the I/O configuration mechanism 146 loads a first job record 144 from the data storage 138 and examines the job record for I/O needs of the job (step 810). Next, the method dynamically distributes I/O nodes based on the job record (step 820). The information in the job record that is used for dynamically allocating the I/O nodes may include the job description, job execution history and application control parameters as described above. Then the method executes the job on the nodes while monitoring in real time for dynamic I/O node reconfiguration (step 830). The method is then done.
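  • A compact sketch of the flow of method 800 (steps 810-830) follows, written against hypothetical helpers; it is not the patent's implementation:
```python
# Sketch of method 800: examine the job record, distribute I/O nodes from it,
# then execute the job while monitoring. The helper methods are hypothetical.
def method_800(job_record, io_config_mechanism):
    io_needs = io_config_mechanism.examine_io_needs(job_record)     # step 810
    io_config_mechanism.distribute_io_nodes(job_record, io_needs)   # step 820
    io_config_mechanism.execute_with_monitoring(job_record)         # step 830 (see method 900)
```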
  • FIG. 9 shows a method 900 for monitoring a job executing on a massively parallel computer system and dynamically updating its I/O configuration, as an exemplary implementation of step 830 in method 800. The method first monitors the I/O characteristics of the executing job (step 910). If the I/O characteristics indicate the I/O demand is not above a threshold (step 920=no), then the method continues monitoring any executing jobs (step 910). If the I/O characteristics indicate the I/O demand is above a threshold (step 920=yes), then the method dynamically updates the I/O configuration of the nodes executing the job (step 930) and continues monitoring any executing jobs (step 910). The method may operate continuously or be terminated by the I/O configuration mechanism.
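  • One way to sketch the monitoring loop of method 900 over a stream of metric samples is shown below (the sample format, threshold, and callback are assumptions):
```python
# Sketch of method 900: monitor I/O characteristics (step 910); when demand is
# above the threshold (step 920=yes), update the I/O configuration (step 930).
def method_900(metric_samples, io_ops_threshold, update_io_configuration):
    for sample in metric_samples:                         # step 910: monitor the executing job
        if sample["io_ops_per_sec"] > io_ops_threshold:   # step 920: demand above threshold?
            update_io_configuration()                     # step 930: see method 1000
        # otherwise keep monitoring (loop continues)

# Example: the third sample crosses the threshold and triggers a reconfiguration.
method_900([{"io_ops_per_sec": v} for v in (400, 900, 1600)], 1000,
           lambda: print("redistributing I/O nodes"))
```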
  • FIG. 10 shows a method for dynamically updating the I/O configuration of the nodes executing a job on a massively parallel computer system, as an exemplary implementation of step 930 in method 900. The method first suspends all jobs executing on the blocks of nodes to be redistributed (step 1010). The method then redistributes I/O nodes among blocks of nodes that are executing jobs on the multi-node computer system (step 1020). Then the method resets the block structure with the new allocation of I/O nodes (step 1030). The method can then resume the job that was suspended (step 1040). The method is then done.
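  • The sketch below replays the steps of method 1000 on the FIG. 6/FIG. 7 example, where block 712's compute nodes are merged into block 711 so that I/O node 170 c can be given to block 713; the dictionaries and field names are illustrative assumptions:
```python
# Sketch of method 1000 applied to the FIGS. 6-7 example: suspend (step 1010),
# redistribute I/O nodes (1020), reset the block structure (1030), resume (1040).
blocks = {
    "711": {"compute_nodes": 16, "io_nodes": ["170b"], "state": "running"},
    "712": {"compute_nodes": 16, "io_nodes": ["170c"], "state": "running"},
    "713": {"compute_nodes": 16, "io_nodes": ["170d"], "state": "running"},
}

def method_1000(blocks, donor, absorber, target):
    for b in (donor, absorber, target):                      # step 1010: suspend affected jobs
        blocks[b]["state"] = "suspended"
    blocks[target]["io_nodes"] += blocks[donor]["io_nodes"]  # step 1020: redistribute I/O nodes
    blocks[absorber]["compute_nodes"] += blocks[donor]["compute_nodes"]
    del blocks[donor]                                        # step 1030: reset the block structure
    for b in (absorber, target):                             # step 1040: resume the suspended jobs
        blocks[b]["state"] = "running"

method_1000(blocks, donor="712", absorber="711", target="713")
print(blocks["713"]["io_nodes"])       # ['170d', '170c']
print(blocks["711"]["compute_nodes"])  # 32
```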
  • An apparatus and method are described herein to dynamically distribute I/O nodes on a multi-node computing system. The I/O configuration mechanism monitors the I/O performance of the executing job and then dynamically redistributes the I/O nodes based on that I/O performance to increase the efficiency of the multi-node computer system.
  • One skilled in the art will appreciate that many variations are possible within the scope of the claims. Thus, while the disclosure has been particularly shown and described above, it will be understood by those skilled in the art that these and other changes in form and details may be made therein without departing from the spirit and scope of the claims.

Claims (6)

1. A computer implemented method for an I/O configuration mechanism to distribute I/O nodes in a multi-node computer system, the method comprising the steps of:
monitoring the I/O characteristics of an executing job on a block of compute nodes in the multi-node computer system with one or more I/O nodes comprising a processor in the multi-node computer system, wherein the I/O nodes are connected to a block of compute nodes with a virtual network operating on a physical network, and the I/O nodes communicate with a service node to provide I/O communication to network resources;
determining whether an I/O demand on the one or more I/O nodes is above a threshold;
dynamically updating the I/O configuration to adjust a ratio of I/O nodes to compute nodes for the block of compute nodes by dynamically configuring the virtual network;
suspending the job;
re-allocating the ratio of I/O nodes;
resetting a structure of the block of compute nodes with a new allocation of I/O nodes that adjusts the ratio; and
resuming the job.
2. The computer implemented method of claim 1 further comprising the steps of:
examining a job record associated with the job for I/O needs of the job; and
dynamically allocating I/O nodes to the job based on the job record.
3. The computer implemented method of claim 2 wherein the step of examining the job record further comprises the steps of:
examining the job description for I/O needs;
examining a job execution history for I/O needs; and
allowing an application to control the I/O configuration with application control parameters.
4. The computer implemented method of claim 1 wherein the block of compute nodes are arranged in a virtual tree network and an I/O node connects to the top of the tree network to allow the block of compute nodes to communicate with a service node of a massively parallel computer system.
5. The computer implemented method of claim 4 wherein the virtual tree network is determined by a class routing table in the node.
6. A computer implemented method for an I/O configuration mechanism to distribute I/O nodes in a multi-node computer system, the method comprising the steps of:
examining a job record associated with a job for I/O needs of the job;
dynamically allocating I/O nodes comprising a processor to the job based on the I/O needs in the job record, wherein the I/O nodes are connected to a block of compute nodes with a virtual network operating on a physical network;
monitoring the I/O characteristics of the job, wherein the job is executing on the block of compute nodes;
determining whether an I/O demand on the I/O nodes is above a threshold; and
dynamically updating the I/O configuration to adjust a ratio of I/O nodes to compute nodes for the block of nodes by performing the steps of:
suspending the job;
re-allocating the ratio of I/O nodes by dynamically configuring the virtual network;
resetting a block structure of the block of nodes with a new allocation of I/O nodes that adjusts the ratio; and
resuming the job;
wherein the block of compute nodes is arranged in a virtual tree network and an I/O node connects to the top of the tree network to allow the block of compute nodes to communicate with a service node of a massively parallel computer system.
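The following minimal sketch (not part of the claims; all record fields, function names, and sizing values are assumptions) illustrates the job-record-driven allocation recited in claims 2, 3, and 6: the mechanism examines a job record for the job's I/O needs, drawing on the job description, prior execution history, and any application control parameters, and allocates I/O nodes to the job's block before monitoring begins.

```python
# Illustrative sketch only; the record fields and sizing policy are assumptions,
# not the claimed implementation.
import math
from dataclasses import dataclass
from typing import Optional


@dataclass
class JobRecord:
    """Per-job information an I/O configuration mechanism might consult."""
    description_io_hint: Optional[float] = None   # I/O rate hinted in the job description
    historical_io_rate: Optional[float] = None    # observed rate from prior executions
    requested_io_nodes: Optional[int] = None      # application control parameter, if any


def allocate_io_nodes(record: JobRecord,
                      compute_nodes: int,
                      io_node_capacity: float = 500.0,
                      default_ratio: int = 64) -> int:
    """Choose how many I/O nodes to give a block before the job starts.

    Precedence (an assumption): an explicit application control parameter wins,
    then execution history, then the job description, then a default ratio.
    """
    if record.requested_io_nodes is not None:
        return max(1, record.requested_io_nodes)

    expected_rate = record.historical_io_rate or record.description_io_hint
    if expected_rate is not None:
        # Enough I/O nodes to keep per-node demand under its nominal capacity.
        return max(1, math.ceil(expected_rate / io_node_capacity))

    # Fall back to a fixed compute-node-to-I/O-node ratio.
    return max(1, compute_nodes // default_ratio)


# Example: a job whose execution history shows ~1200 MB/s of I/O on a 256-node block.
record = JobRecord(historical_io_rate=1200.0)
print(allocate_io_nodes(record, compute_nodes=256))  # -> 3 I/O nodes
```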
US13/786,785 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system Abandoned US20130185423A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/786,785 US20130185423A1 (en) 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/955,067 US20090158276A1 (en) 2007-12-12 2007-12-12 Dynamic distribution of nodes on a multi-node computer system
US13/786,785 US20130185423A1 (en) 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/955,067 Continuation US20090158276A1 (en) 2007-12-12 2007-12-12 Dynamic distribution of nodes on a multi-node computer system

Publications (1)

Publication Number Publication Date
US20130185423A1 true US20130185423A1 (en) 2013-07-18

Family

ID=40755024

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/955,067 Abandoned US20090158276A1 (en) 2007-12-12 2007-12-12 Dynamic distribution of nodes on a multi-node computer system
US13/786,785 Abandoned US20130185423A1 (en) 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system
US13/786,750 Expired - Fee Related US9172628B2 (en) 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/955,067 Abandoned US20090158276A1 (en) 2007-12-12 2007-12-12 Dynamic distribution of nodes on a multi-node computer system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/786,750 Expired - Fee Related US9172628B2 (en) 2007-12-12 2013-03-06 Dynamic distribution of nodes on a multi-node computer system

Country Status (1)

Country Link
US (3) US20090158276A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8261249B2 (en) * 2008-01-08 2012-09-04 International Business Machines Corporation Distributed schemes for deploying an application in a large parallel system
US7958184B2 (en) * 2008-03-04 2011-06-07 International Business Machines Corporation Network virtualization in a multi-node system with multiple networks
US8108467B2 (en) * 2008-06-26 2012-01-31 International Business Machines Corporation Load balanced data processing performed on an application message transmitted between compute nodes of a parallel computer
US8387064B2 (en) 2008-10-09 2013-02-26 International Business Machines Corporation Balancing a data processing load among a plurality of compute nodes in a parallel computer
US8516487B2 (en) * 2010-02-11 2013-08-20 International Business Machines Corporation Dynamic job relocation in a high performance computing system
US20110321056A1 (en) * 2010-06-23 2011-12-29 International Business Machines Corporation Dynamic run time allocation of distributed jobs
US8566837B2 (en) 2010-07-16 2013-10-22 International Business Machines Corporation Dynamic run time allocation of distributed jobs with application specific metrics
US9026658B2 (en) * 2012-03-28 2015-05-05 Microsoft Technology Licensing, Llc Enhanced computer cluster operation using resource allocation requests
US9250954B2 (en) * 2013-01-17 2016-02-02 Xockets, Inc. Offload processor modules for connection to system memory, and corresponding methods and systems
US10073880B2 (en) * 2015-08-06 2018-09-11 International Business Machines Corporation Vertical tuning of distributed analytics clusters
US10534655B1 (en) 2016-06-21 2020-01-14 Amazon Technologies, Inc. Job scheduling based on job execution history
US10592280B2 (en) 2016-11-23 2020-03-17 Amazon Technologies, Inc. Resource allocation and scheduling for batch jobs
US11036733B2 (en) 2019-08-20 2021-06-15 Ant Financial (Hang Zhou) Network Technology Co., Ltd. Method, apparatus, system, server, and storage medium for connecting tables stored at distributed database

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6161152A (en) * 1998-06-04 2000-12-12 Intel Corporation System for providing asynchronous I/O operations by identifying and polling a portal from an application process using a table of entries corresponding to I/O operations
US20020010840A1 (en) * 2000-06-10 2002-01-24 Barroso Luiz A. Multiprocessor cache coherence system and method in which processor nodes and input/output nodes are equal participants
US20020073164A1 (en) * 1996-07-02 2002-06-13 Sun Microsystems, Inc. Hierarchical SMP computer system
US20020087807A1 (en) * 2000-06-10 2002-07-04 Kourosh Gharachorloo System for minimizing directory information in scalable multiprocessor systems with logically independent input/output nodes
US20040024859A1 (en) * 2002-08-05 2004-02-05 Gerald Bloch Method and apparatus for communications network resource utilization assessment
US20040081155A1 (en) * 2001-02-24 2004-04-29 Bhanot Gyan V Class network routing
US20040103218A1 (en) * 2001-02-24 2004-05-27 Blumrich Matthias A Novel massively parallel supercomputer
US20040148472A1 (en) * 2001-06-11 2004-07-29 Barroso Luiz A. Multiprocessor cache coherence system and method in which processor nodes and input/output nodes are equal participants
US20050131993A1 (en) * 2003-12-15 2005-06-16 Fatula Joseph J.Jr. Apparatus, system, and method for autonomic control of grid system resources
US20060015505A1 (en) * 2004-07-16 2006-01-19 Henseler David A Role-based node specialization within a distributed processing system
US20060026161A1 (en) * 2004-07-16 2006-02-02 Henseler David A Distributed parallel file system for a distributed processing system
US20060041644A1 (en) * 2004-07-16 2006-02-23 Henseler David A Unified system services layer for a distributed processing system
US20060168584A1 (en) * 2004-12-16 2006-07-27 International Business Machines Corporation Client controlled monitoring of a current status of a grid job passed to an external grid environment
US20070011485A1 (en) * 2004-12-17 2007-01-11 Cassatt Corporation Application-based specialization for computing nodes within a distributed processing system
US20070078960A1 (en) * 2005-10-04 2007-04-05 International Business Machines Corporation Grid computing accounting and statistics management system
US20090049114A1 (en) * 2007-08-15 2009-02-19 Faraj Ahmad A Determining a Bisection Bandwidth for a Multi-Node Data Communications Network
US20090083746A1 (en) * 2007-09-21 2009-03-26 Fujitsu Limited Method for job management of computer system
US7840779B2 (en) * 2007-08-22 2010-11-23 International Business Machines Corporation Line-plane broadcasting in a data communications network of a parallel computer
US8117288B2 (en) * 2004-10-12 2012-02-14 International Business Machines Corporation Optimizing layout of an application on a massively parallel supercomputer
US8127273B2 (en) * 2007-11-09 2012-02-28 International Business Machines Corporation Node selection for executing a Java application among a plurality of nodes

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4313036A (en) * 1980-02-19 1982-01-26 Rolm Corporation Distributed CBX system employing packet network
US4766534A (en) * 1986-10-16 1988-08-23 American Telephone And Telegraph Company, At&T Bell Laboratories Parallel processing network and method
US4897874A (en) * 1988-03-31 1990-01-30 American Telephone And Telegraph Company At&T Bell Laboratories Metropolitan area network arrangement for serving virtual data networks
US6366945B1 (en) * 1997-05-23 2002-04-02 Ibm Corporation Flexible dynamic partitioning of resources in a cluster computing environment
JP2000194674A (en) * 1998-12-28 2000-07-14 Nec Corp Decentralized job integration management system
JP2001109638A (en) * 1999-10-06 2001-04-20 Nec Corp Method and system for distributing transaction load based on estimated extension rate and computer readable recording medium
US6427152B1 (en) * 1999-12-08 2002-07-30 International Business Machines Corporation System and method for providing property histories of objects and collections for determining device capacity based thereon
US7516221B2 (en) * 2003-08-14 2009-04-07 Oracle International Corporation Hierarchical management of the dynamic allocation of resources in a multi-node system
US9183256B2 (en) * 2003-09-19 2015-11-10 Ibm International Group B.V. Performing sequence analysis as a relational join
US8856793B2 (en) * 2004-05-11 2014-10-07 International Business Machines Corporation System, method and program for scheduling computer program jobs
US7761557B2 (en) * 2005-01-06 2010-07-20 International Business Machines Corporation Facilitating overall grid environment management by monitoring and distributing grid activity
US7549028B2 (en) * 2005-06-29 2009-06-16 Emc Corporation Backup and restore operations using a single snapshot driven by a server job request
US20070101000A1 (en) * 2005-11-01 2007-05-03 Childress Rhonda L Method and apparatus for capacity planning and resourse availability notification on a hosted grid
US7493419B2 (en) * 2005-12-13 2009-02-17 International Business Machines Corporation Input/output workload fingerprinting for input/output schedulers

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020073164A1 (en) * 1996-07-02 2002-06-13 Sun Microsystems, Inc. Hierarchical SMP computer system
US6161152A (en) * 1998-06-04 2000-12-12 Intel Corporation System for providing asynchronous I/O operations by identifying and polling a portal from an application process using a table of entries corresponding to I/O operations
US20020010840A1 (en) * 2000-06-10 2002-01-24 Barroso Luiz A. Multiprocessor cache coherence system and method in which processor nodes and input/output nodes are equal participants
US20020087807A1 (en) * 2000-06-10 2002-07-04 Kourosh Gharachorloo System for minimizing directory information in scalable multiprocessor systems with logically independent input/output nodes
US20090259713A1 (en) * 2001-02-24 2009-10-15 International Business Machines Corporation Novel massively parallel supercomputer
US20040081155A1 (en) * 2001-02-24 2004-04-29 Bhanot Gyan V Class network routing
US20040103218A1 (en) * 2001-02-24 2004-05-27 Blumrich Matthias A Novel massively parallel supercomputer
US20120311299A1 (en) * 2001-02-24 2012-12-06 International Business Machines Corporation Novel massively parallel supercomputer
US20040148472A1 (en) * 2001-06-11 2004-07-29 Barroso Luiz A. Multiprocessor cache coherence system and method in which processor nodes and input/output nodes are equal participants
US20040024859A1 (en) * 2002-08-05 2004-02-05 Gerald Bloch Method and apparatus for communications network resource utilization assessment
US20050131993A1 (en) * 2003-12-15 2005-06-16 Fatula Joseph J.Jr. Apparatus, system, and method for autonomic control of grid system resources
US20060026161A1 (en) * 2004-07-16 2006-02-02 Henseler David A Distributed parallel file system for a distributed processing system
US20060041644A1 (en) * 2004-07-16 2006-02-23 Henseler David A Unified system services layer for a distributed processing system
US20060015505A1 (en) * 2004-07-16 2006-01-19 Henseler David A Role-based node specialization within a distributed processing system
US8117288B2 (en) * 2004-10-12 2012-02-14 International Business Machines Corporation Optimizing layout of an application on a massively parallel supercomputer
US20060168584A1 (en) * 2004-12-16 2006-07-27 International Business Machines Corporation Client controlled monitoring of a current status of a grid job passed to an external grid environment
US20070011485A1 (en) * 2004-12-17 2007-01-11 Cassatt Corporation Application-based specialization for computing nodes within a distributed processing system
US20120192152A1 (en) * 2004-12-17 2012-07-26 Computer Associates Think, Inc. Application-Based Specialization For Computing Nodes Within A Distributed Processing System
US20070078960A1 (en) * 2005-10-04 2007-04-05 International Business Machines Corporation Grid computing accounting and statistics management system
US20090049114A1 (en) * 2007-08-15 2009-02-19 Faraj Ahmad A Determining a Bisection Bandwidth for a Multi-Node Data Communications Network
US7840779B2 (en) * 2007-08-22 2010-11-23 International Business Machines Corporation Line-plane broadcasting in a data communications network of a parallel computer
US20090083746A1 (en) * 2007-09-21 2009-03-26 Fujitsu Limited Method for job management of computer system
US8127273B2 (en) * 2007-11-09 2012-02-28 International Business Machines Corporation Node selection for executing a Java application among a plurality of nodes

Also Published As

Publication number Publication date
US9172628B2 (en) 2015-10-27
US20090158276A1 (en) 2009-06-18
US20130185731A1 (en) 2013-07-18

Similar Documents

Publication Publication Date Title
US9172628B2 (en) Dynamic distribution of nodes on a multi-node computer system
US8539256B2 (en) Optimizing power consumption and performance in a hybrid computer environment
US10754690B2 (en) Rule-based dynamic resource adjustment for upstream and downstream processing units in response to a processing unit event
US8544065B2 (en) Dataspace protection utilizing virtual private networks on a multi-node computer system
US8108467B2 (en) Load balanced data processing performed on an application message transmitted between compute nodes of a parallel computer
US8381220B2 (en) Job scheduling and distribution on a partitioned compute tree based on job priority and network utilization
US8140704B2 (en) Pacing network traffic among a plurality of compute nodes connected using a data communications network
US7697443B2 (en) Locating hardware faults in a parallel computer
US8516487B2 (en) Dynamic job relocation in a high performance computing system
US9459923B2 (en) Dynamic run time allocation of distributed jobs with application specific metrics
US7941681B2 (en) Proactive power management in a parallel computer
US9665401B2 (en) Dynamic run time allocation of distributed jobs
US8055651B2 (en) Distribution of join operations on a multi-node computer system
US8812818B2 (en) Management of persistent memory in a multi-node computer system
US8572723B2 (en) Utilizing virtual private networks to provide object level security on a multi-node computer system

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BARSNESS, ERIC L.;DARRINGTON, DAVID L.;RANDLES, AMANDA;AND OTHERS;SIGNING DATES FROM 20130412 TO 20130509;REEL/FRAME:030385/0837

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION