WO1999000744A1 - Novel computer platform connection utilizing virtual system bus - Google Patents

Novel computer platform connection utilizing virtual system bus

Info

Publication number
WO1999000744A1
Authority
WO
WIPO (PCT)
Prior art keywords
processor
mainframe
system bus
virtual system
channel
Prior art date
Application number
PCT/US1998/013532
Other languages
French (fr)
Inventor
Armando J. Palomar
William F. O'Connell, Jr.
Billy B. Waldron
Original Assignee
Commercial Data Servers, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commercial Data Servers, Inc. filed Critical Commercial Data Servers, Inc.
Publication of WO1999000744A1 publication Critical patent/WO1999000744A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/12Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F13/122Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware performs an I/O function other than control of data transfer

Definitions

  • This invention pertains to a novel method of connecting multiple computer platforms to perform as a global computer system.
  • This invention pertains to computers that execute IBM Corporation's System 370 and 390 instruction set, as well as any other computer system whose sub-functions, performed by independent platforms, can be integrated into a greater global function by clustering those computer platforms.
  • Microcomputers, often referred to as personal computers or PCs, are well known in the prior art, and much has been made of them recently.
  • PCs have improved significantly in recent years in their disk storage capacity and processing power, allowing PCs to become very popular with the general populace.
  • PCs are less useful in mission critical applications than one might imagine and, for this reason, computer networking has undergone technological advances.
  • By networking a plurality of PCs, users on a network can share files and disk space. While processing typically takes place on each individual's PC, generally the loss of service from a single PC would not cripple the mission. Notwithstanding the significant advances in PC technology and PC networking technology, mainframe computers remain highly important.
  • Mainframe computers provide significantly more connectivity of peripherals and greater on-line storage capacity, as well as greater reliability, information processing through-put, and data security than PCs. It is the mainframe computer, mainframe software operating system, and mainframe application programs that are used by major institutions to perform data processing tasks on large volumes of extremely valuable information. For example, insurance companies, government services, airline reservation systems, banks, and credit card institutions are representative of those who rely on the mainframe.
  • By mainframe we refer to the larger general purpose information processing systems which performed governmental, business, and institutional processing for decades before the Personal Computer was developed.
  • Mainframe hardware and operating system architecture were designed for business rather than personal information processing.
  • A PC may have one or two disk drives or connect to servers over shared LANs, but a mainframe complex, through its multiplicity of paths, can connect hundreds and architecturally support thousands of disk storage I/O devices, along with tape libraries containing thousands of gigabytes of on-line storage, and through communication processors can make that information available to thousands of users.
  • Mainframe technology and more importantly, their software operating systems are devoted to data integrity, error detection/recovery, redundancy, operational reliability, and data security. Mainframes are controlled by robust and mature operating systems that integrate the data processing applications and system integrity.
  • The predominant mainframe architecture is the IBM System 360/370 and its evolved successor, the IBM System 390.
  • The Personal Computer permits the individual user to perform complex personal work, data inquiries, and entertainment processing. Since the PC was first offered, efforts to interface the PC to the mainframe have proceeded vigorously. As a result, a PC user can work with local PC disk storage and processing capability for a personal task such as drawing, word processing, maintaining source code, composing music, or performing software development, and yet have an interface to a mainframe which allows that PC to perform terminal emulation as a so-called "dumb terminal". Where in the past the central host processor performed the terminal's data, keyboard, and screen manipulation, the PC now can retrieve and store files to the mainframe while performing the dumb terminal processing itself. This is more efficient for both the PC and the mainframe, because the mainframe no longer spends resources manipulating screens and keystroke controls and is freed to execute its primary purpose: major business processing, and storing, protecting, and retrieving information for authorized personal computer users.
  • While there have been significant improvements in PC technology and PC networking technology in recent years, mainframes have shown improvements as well, particularly in operating system robustness, technology, processing speed, multiprocessing, intelligent I/O architecture, and reliability. It is not unusual for a mainframe to run continuously for over a year without the anomalies that require the system to be rebooted or otherwise reinitialized.
  • mainframes can contain a substantial amount of unique and, in comparison to a Personal Computer, expensive technology which was designed and produced for performance and reliability by the mainframe vendor.
  • the consumer oriented PC uses electronic components, chip sets, and disk drives from a number of sources along with add-in boards for video graphics drivers, memory capacity, network support, and the like.
  • the Personal Computer is a consumer product and the manufacturer is driven to achieve the lowest pricing.
  • PC performance is primarily a function of the microprocessor and memory speed leaving little flexibility to differentiate from other PC manufacturers other than by cost.
  • Mainframe manufacturers strive to constantly improve performance and reliability and technology flexibility in their implementations but their components may be unique and more expensive. Also, large mainframes require dedicated building facilities to contain the system components and maintain a controlled environment which are very costly.
  • PCs are inherently less expensive, less stable, and less reliable than mainframes, as can be readily attested to by any PC user who has had to "reboot" their computer. This lower reliability can be a function of some lower quality commercial components, but primarily it is a function of the PC operating systems and application software.
  • PC operating systems were developed for the personal user and as such are user friendly, but without the mainframe's extensive safeguards to prevent operational errors.
  • PC operating systems were developed for personal application software ranging from playing video games and drawing posters to providing low cost processing for small business concerns. A software hang up normally results in the user rebooting the personal computer and restarting the program. Experienced personal computer users, for that reason, are diligent at backing up their work.
  • Mainframe operating systems and application software were designed to support financial institutions, automobile manufacturers, manned space shots, national defense, and other mission critical applications. In comparison, the PC operating systems do not have the experience of decades-of-service in mission critical applications provided by the mainframe operating systems.
  • Mainframes are extremely reliable and stable and rarely, if ever, need to be "rebooted".
  • Mainframes utilize a logical structure (architecture) which supports parity checking, error correction, path redundancy, error recovery, and that architecture is supported by comprehensive and mature software operating systems which have steadily improved in operational reliability since the middle 1960's when the original O/S 360, the foundation of System 390, was introduced.
  • Mainframe operating systems are designed to take advantage of those inherent features to make the total mainframe information processing system more reliable. This is not the case with Personal Computers or the somewhat more powerful workstations, leading to the questionable use of such lower cost systems in highly integrated and critical applications.
  • FIG. 1 is a block diagram of a typical prior art single or "uni" processor Mainframe Computer 100 including central processing unit (CPU) complex 101 coupled directly to Cache Memory 102 for fast access to recently used data.
  • the CPU complex 101 with Cache Memory 102 and I/O processor 104 are coupled to main system bus 103 in order to access main memory 105 to store and retrieve programs and data. All accesses to use system bus 103 are controlled via requests which result in grants provided by the system bus controller 106.
  • Service processor 110, also coupled to the system bus 103, provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components.
  • Other implementations treat the service processor 110 as an I/O control unit and peripheral device.
  • Mainframe 100 also includes I/O Processor 104, which is also coupled to the system bus 103 for memory accesses and which receives instructions from CPU complex 101 pertaining to input/output functions; I/O Processor 104 offloads tasks from CPU complex 101 and executes those tasks required by those I/O functions. As required for the act of program execution and data processing by CPU complex 101, these I/O processing operations transfer information which may contain data or programs to and from Main Memory 105 via communications on System Bus 103, as well as all other I/O functions which are executed by channels 107-1 through 107-N.
  • "Channel" a direct memory access channel, as used in connection with System 390 mainframes, means System 390 compatible I/O Channel paths, also referred to as Channel Path ID (CHPLD).
  • CHPID Channel Path ID
  • Channels 107-1 through 107-N are each coupled to System Bus 103 through the I/O Processor-Channel complex 104.
  • System Bus 103 is organized in a parallel format which is made up of groups of lines, each line representing a bit. Bits combine to represent bytes which are 8 data bits plus 1 parity bit.
  • The typical large mainframe System Bus 103 is approximately 116 bits wide, including 8 bytes comprising 64 data bits and 8 associated parity bits, four bytes comprising 32 address bits and 4 associated parity bits, and several control bits. Control bits are highly design dependent and may or may not be in a byte format. However, all information in the mainframe is represented in bytes of eight binary bits or groups of bytes. The byte is the smallest addressable component of information.
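  • The byte-plus-parity grouping described above can be sketched as follows; this is a minimal illustration in C, not taken from the patent, and it assumes odd parity per byte (a common mainframe convention):

```c
/* A minimal sketch of the ~116-bit System Bus described above:
 * 8 data bytes, each carried with one parity bit, plus 4 address
 * bytes, each with one parity bit. Odd parity is an assumption. */
#include <stdint.h>
#include <stdio.h>

/* Odd parity: the parity bit makes the total count of 1-bits odd. */
static int odd_parity_bit(uint8_t byte) {
    int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (byte >> i) & 1;
    return (ones % 2 == 0) ? 1 : 0;  /* add a 1 if the count is even */
}

typedef struct {
    uint8_t data[8];        /* 64 data bits                          */
    uint8_t data_parity[8]; /*  8 parity bits, one per data byte     */
    uint8_t addr[4];        /* 32 address bits                       */
    uint8_t addr_parity[4]; /*  4 parity bits, one per address byte  */
    /* control lines are design dependent and omitted here */
} bus_word_t;

int main(void) {
    bus_word_t w = { .data = {0xC1, 0xC2, 0xC3, 0, 0, 0, 0, 0},
                     .addr = {0x00, 0x01, 0x00, 0x00} };
    for (int i = 0; i < 8; i++) w.data_parity[i] = odd_parity_bit(w.data[i]);
    for (int i = 0; i < 4; i++) w.addr_parity[i] = odd_parity_bit(w.addr[i]);
    printf("parity of 0x%02X = %d\n", w.data[0], w.data_parity[0]);
    return 0;
}
```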
  • System Bus 103 is formed of high speed copper wire connections which are typically capable of one hundred to several hundred megabytes per second bandwidth, yet are limited by physical length, which affects the electrical characteristics of capacitance, resistance, and inductance, whose effect is to attenuate and distort high speed binary signals as the switching speed and length of System Bus 103 increase. Bandwidth limitation, electromagnetic interference susceptibility, and data distortion errors normally result from excessive length of System Bus 103. In the prior art mainframe 100, System Bus lines are kept as physically short as possible (e.g., contained on a system backplane) for data integrity.
  • System Bus Controller 106 is included in order to maintain control of System Bus 103, receiving bus requests from various complexes requiring access to Main Memory 105 via System Bus 103, and granting allocations to those various complexes of Mainframe 100 in order to maintain individual accesses via System Bus 103 to and from Main Memory 105.
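  • The request/grant discipline of System Bus Controller 106 might be sketched as follows; the round-robin policy and the requester count are assumptions for illustration, since the text does not specify the arbitration algorithm:

```c
/* A hedged sketch of the request/grant arbitration performed by the
 * System Bus Controller 106. Round-robin selection is an assumption. */
#include <stdint.h>
#include <stdio.h>

#define NUM_REQUESTERS 4  /* e.g. CPU complex, I/O processor, service processor */

/* requests: bit i set means requester i wants the bus.
 * last_grant: index of the most recently granted requester.
 * Returns the index granted next, or -1 if no requests are pending. */
static int arbitrate(uint8_t requests, int last_grant) {
    for (int step = 1; step <= NUM_REQUESTERS; step++) {
        int candidate = (last_grant + step) % NUM_REQUESTERS;
        if (requests & (1u << candidate))
            return candidate;   /* grant goes to the next waiting requester */
    }
    return -1;
}

int main(void) {
    uint8_t requests = 0x0B;                    /* requesters 0, 1, 3 pending */
    printf("next grant: requester %d\n", arbitrate(requests, 0)); /* prints 1 */
    return 0;
}
```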
  • CPU complex 101 performs I/O operations through channels 107-1 through 107-N. All data transferred in I/O operations is either retrieved from or stored into Main Memory 105, other than specific operations which are called READ SKIP and serve to allow a tape or other I/O unit to move recording media without storing unwanted data in Main Memory. Data being transferred from Main Memory to an I/O device such as disk storage is considered to be a WRITE to the I/O device. Data being stored in Main Memory from an I/O device such as disk storage or tape is considered to be a READ into Main Memory. Each channel represents an individual path of the I/O Processor-Channel complex 104 located between Main Memory 105 and the I/O peripheral.
  • Each channel operation is a direct memory access function and must directly or indirectly connect to System Bus 103; each channel typically shares access to the System Bus Controller 106 and System Bus 103 through I/O Processor-Channel complex 104 in order not to replicate System Bus 103 access hardware and any firmware that might control System Bus 103 accesses.
  • the most elementary I/O Processor-Channel complex 104 would consist of an I/O processor, System Bus and System Bus Controller access logic, Channel and Main Memory data transfer facilities, I/O control unit selection, command presentation, I/O Control Unit data transfer facilities, and operation and electrical termination facilities (not shown).
  • each Channel is connected to its associated I/O Control Unit by one of two methods.
  • Channel lines 109-1 and 109-2, which can connect up to eight I/O control units to Channel 107-1 or to Channel 107-2, are formed of high speed copper wire connections which are typically capable of approximately 4.5 million bytes per second (MBS) bandwidth, yet are limited to approximately 400 feet physical length.
  • MBS million bytes per second
  • The copper cables, electrical characteristics, and protocol that are used to connect the I/O Control Unit are known in the prior art and referred to as the Original Equipment Manufacturer's Interface, or OEMI.
  • The advantage of the prior art OEMI interface is that it is a parallel interface that performs universal connection to prior art parallel I/O control units, and as such it was adopted as Federal Information Processing Standard-60 (FIPS-60).
  • Channels 107-3 through 107-N are serial in nature and utilize fiber-optic cables which are capable of 17 MBS data transfer bandwidth and are capable of reliable connections up to 3 kilometers when Light Emitting Diodes are used as transmitters.
  • Channel lines 109-3 through 109-N are, in most cases, connected to Director 118 which performs a subsequent connection function to the Serial Interface I/O Control Unit 121.
  • This prior art serial interface, which defines encoding and decoding of transmission characters, optical wavelength and power requirements, connectors, protocols, framing, and fiber-optic cable characteristics, is a unique and proprietary serial I/O interface standard of IBM Corporation known as the Enterprise Systems Connection architecture, or ESCON.
  • The advantage of the ESCON serial interface is that it is a serial interface that performs universal connection to prior art Directors 118 and serial I/O control units 121.
  • The disadvantage is that ESCON is a point to point connection and requires the Director 118, a serial switcher, to connect more than one serial I/O Control Unit to Channels 107-3 through 107-N, as described in prior art Figure 1.
  • Prior art ESCON interfaces are serial fiber-optic in nature and can provide connections to ESCON capable I/O control units over greater distances than the OEMI cable and can provide higher data bandwidths.
  • a typical prior art ESCON connection may provide distances up to approximately 3 km, with a data transmission bandwidth of approximately 17 MB/sec.
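  • To make these bandwidth figures concrete, the following back-of-the-envelope comparison (a sketch, taking 1 GB as 1024 MB and using the peak rates quoted above, plus the system bus rate discussed earlier) computes the time to move a 1 GB data set:

```c
/* Transfer time for 1 GB at the peak rates quoted in the text. */
#include <stdio.h>

int main(void) {
    double megabytes = 1024.0;                                     /* 1 GB    */
    printf("OEMI  (4.5 MB/s):      %.0f s\n", megabytes / 4.5);    /* ~228 s  */
    printf("ESCON (17 MB/s):       %.0f s\n", megabytes / 17.0);   /* ~60 s   */
    printf("System bus (100 MB/s): %.1f s\n", megabytes / 100.0);  /* ~10.2 s */
    return 0;
}
```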
  • each I/O processor complex includes one or more channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives or magnetic tape drives as switched by director 118.
  • Director 118 connects the serial control and data signal interface between channel unit 107-1 and the serial I/O Control Unit and peripheral devices. In this manner, standard mainframe commands and data are applied from channel unit 107-1 to the I/O peripheral control unit.
  • Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate the specific electrical signals necessary to control and exchange data with the selected I/O peripheral device.
  • CCW Channel Command Word
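  • For reference, the CCW of the published S/360/370 architecture can be sketched as a C structure; this layout comes from public IBM documentation rather than from this patent, and the struct is a readable approximation rather than a bit-exact image (the real format-0 CCW packs a 24-bit data address into bytes 1-3 of a doubleword):

```c
/* A sketch of the classic S/360/370 format-0 Channel Command Word. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  command;    /* command code: e.g. read, write, control, sense */
    uint32_t data_addr;  /* 24-bit main-memory address (low 3 bytes used)  */
    uint8_t  flags;      /* chaining, suppress-length-indication, skip, ...*/
    uint16_t count;      /* number of bytes to transfer                    */
} ccw_t;

/* Flag bits, as in the published architecture: */
enum {
    CCW_CD   = 0x80,  /* chain data: continue with the next CCW's data area */
    CCW_CC   = 0x40,  /* chain command: execute the next CCW                */
    CCW_SLI  = 0x20,  /* suppress incorrect-length indication               */
    CCW_SKIP = 0x10   /* move media without storing data (cf. READ SKIP)    */
};

int main(void) {
    ccw_t read4k = { .command = 0x02 /* READ */, .data_addr = 0x020000,
                     .flags = CCW_SLI, .count = 4096 };
    printf("cmd 0x%02X, %u bytes at 0x%06X\n",
           read4k.command, read4k.count, read4k.data_addr);
    return 0;
}
```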
  • Disadvantages of prior art mainframe architectures include limitations on the distance between channel complexes and peripheral devices due to the use of either OEMI or ESCON connections.
  • Another disadvantage is the limitation in the distance of the copper buses between CPU complexes and channel complexes, which are affected by their electrical characteristics, capacitance, and inductance.
  • The mainframe has been the basis for the hundreds of billions of dollars and millions of man-hours invested in mainframe operating systems and application software performing extremely critical tasks, such as financial processing, civilian and military aircraft development, manned and unmanned space missions, critical national defense applications, and the like.
  • the monetary cost and time spent for rewriting such applications to run on systems other than mainframes would be astronomical.
  • any such redeployment to non-mainframe platforms would inevitably result in massive amounts of errors requiring more time and money to be spent in debugging and lost productivity.
  • FIG. 2 is a block diagram of a typical prior art single or "uni" processor Mainframe Computer 100 including central processing unit (CPU) complex 101 coupled directly to Cache Memory 102 for fast access to recently used data. Both CPU complex 101 and Cache Memory 102 are coupled to main system bus 103 in order to access main memory 105 to store and retrieve programs and data. Service processor 110 provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components. All accesses via requests and grants to use the system bus 103 are controlled by the system bus controller 106.
  • CPU central processing unit
  • Mainframe 100 includes a multiplicity of I/O Processors 104 and 120, also coupled to the system bus 103, which receive instructions from CPU complex 101 pertaining to input/output functions; I/O Processors 104 and 120 offload tasks from CPU complex 101 and execute those tasks required by those I/O functions.
  • The multiplicity of I/O Processors demonstrates the multiple paths to Disk I/O Control Units 112, 115, and 125 and to tape I/O control units 110 and 123.
  • Each I/O processor complex includes one or a multiplicity of channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives 113, 114, 116, and 117 or magnetic tape drives 111 and 124, as attached by an electrical OEMI interface or by ESCON fiber-optics switched by Director 118.
  • I/O Processor 104 and channel 107-0 with OEMI lines 109-0, and I/O Processor 120 and channel 122-0, are each connected to tape I/O Control Unit 110.
  • I/O Processor 104 and channels 107-1 and 107-2 connect to disk I/O Control Units 112 and 115, while I/O Processor 120 and channels 121-1 and 121-2 likewise connect to disk I/O Control Units 112 and 115, as provided by System 390 I/O Architecture.
  • Director 118 connects the serial control and data signal interface between I/O Processor and channel 107-1 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, while I/O Processor 120 and channel 121-n connect to Director 118 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, as shown.
  • Standard mainframe commands and data are applied from I/O Processor 104 and I/O Processor 120 to the I/O peripheral control units 123, 125, and others.
  • Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate specific electrical signals necessary to control, store and retrieve information with the selected I/O peripheral device.
  • CCW Channel Command Word
  • This configuration demonstrates the superiority of mainframe I/O architecture which is designed to support multiple paths to information.
  • the failure of a channel or even an I/O Processor complex would not prevent access to the valuable information and thereby disable the function of the entire mainframe.
  • A disadvantage is that, due to the electrical system bus and the limited distance over which binary signals may be driven reliably over an electrical bus, the I/O Processors must be physically close together and close to the Central Processor complex, necessitating expensive facilities with environmental control and dedicated floor space, and exposing the entire system's information to natural or man-made disaster.
  • FIG. 3 is a block diagram depicting a tightly coupled multi-processor prior art system 390 mainframe 100-MP where a multiplicity of central processing unit (CPU) complexes 101-1 through 101-N are coupled directly to Cache Memory 102 for fast access to recently used data. Tightly coupled means that all CPU complexes 101-1 through 101-n and Cache Memories 102-1 through 102-n are coupled to the main SYSTEM BUS 103 in order to access MAIN MEMORY 105 to store and retrieve programs for execution by each CPU complex 101.
  • CPU central processing unit
  • Service processor 110 is also attached directly to the SYSTEM BUS 103 and provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components.
  • Prior art Mainframe 100-MP may also include a multiplicity of I/O Processors 104 and 120, all of which are directly attached to the same System Bus 103 and which receive instructions from CPU complexes 101-1 through 101-n pertaining to input/output functions; I/O Processors 104 and 120 execute tasks from CPU complexes 101-1 through 101-n as required by those I/O functions. All accesses via requests and grants to use the System Bus 103 are controlled by the System Bus Controller 106.
  • The multiplicity of I/O Processors 104 and 120 demonstrates the multiple paths to Disk I/O Control Units 112, 115, and 125 and to tape I/O control units 110 and 123.
  • Each I/O processor complex includes one or a multiplicity of channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives 113, 114, 116, and 117 or magnetic tape drives 111 and 124, as attached by an electrical OEMI interface or by ESCON fiber-optics switched by Director 118.
  • I/O Processor 104 and channel 107-0 with OEMI lines 109-0, and I/O Processor 120 and channel 122-0, are each connected to tape I/O Control Unit 110.
  • I/O Processor 104 and channels 107-1 and 107-2 connect to disk I/O control 112 and 115 as well as I/O processor 120 and channels 121-1 and 121-2 connect to disk I/O Control Units 112 and 115 as provided by System 390 I/O Architecture.
  • Director 118 connects the serial control and data signal interface between I/O Processor and channel 107-1 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, while I/O Processor 120 and channel 121-n connect to Director 118 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, as shown.
  • standard mainframe commands and data are applied from I/O Processor 104 and I/O Processor 120 to the I/O peripheral control units 123, 125 and others.
  • Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate specific electrical signals necessary to control, store and retrieve information with the selected I/O peripheral device.
  • CCW Channel Command Word
  • This configuration demonstrates the superiority of mainframe processing architecture which supports simultaneous operation of application programs sharing physical resources and information by several processors and in conjunction with the superior I/O architecture supports multiple paths to allow sharing and redundant access to information.
  • the failure of a Central processor, channel or even an I/O Processor complex would not prevent program execution or access to the valuable information and thereby disable the function of the entire mainframe.
  • A disadvantage is that, due to the electrical system bus and the limited distance over which binary signals may be driven reliably over an electrical bus, the Central and I/O Processors must be physically close together, requiring expensive proprietary technology and packaging, and also necessitating expensive building facilities with environmental control and dedicated floor space, and exposing the entire system's information to natural or man-made disaster.
  • For purposes of explanation of tightly coupled mainframe 100-MP, one could consider Processor complexes 101-0 through 101-n to be coupled "at the head" of the mainframe 100-MP system, since all Processor complexes 101-0 through 101-n share the same system bus 103, system bus controller 106, and main memory 105.
  • Figure 4 is a block diagram depicting loosely coupled multiple prior art Mainframe Computers 100 and/or Mainframe Computers 100-MP, including central processing unit (CPU) complexes 101 and Cache Memories 102 for fast access to recently used data.
  • An independent mainframe 100 or 100-MP, diagrammed as System A, contains one or a multiplicity of CPU complexes 101 with Cache Memories 102 and one or a multiplicity of I/O processors 104 coupled to System A's main System Bus 103 in order to access System A's main memory 105 to store and retrieve programs and data. All System A accesses to use the System Bus 103 are controlled via requests and grants given by System A's System Bus Controller 106.
  • A mainframe 100 or 100-MP, described as System B, also contains one or a multiplicity of CPU complexes 101 with Cache Memories 102 and one or a multiplicity of I/O processors 104 coupled to System B's main System Bus 103 in order to access System B's main memory 105 to store and retrieve programs and data. All System B accesses to use the System Bus 103 are controlled via requests and grants given by System B's System Bus Controller 106.
  • the system buses of System A and System B are not directly connected. System A and System B are connected through their respective I/O subsystems.
  • Mainframe System A and mainframe System B are known to be loosely coupled because they do not share a single System Bus 103.
  • the connections and paths by which data is shared between the individual mainframes 100 of System A and System B are enabled through the respective I/O processors, 104-A and 104-B, of System A and System B.
  • The OEMI I/O interfaces 109-0A and 109-0B are connected by telecommunications front-end processors 402-A and 402-B, while the OEMI I/O interfaces 109-2A and 109-2B are connected by a channel-to-channel adapter 400 which appears as an I/O control unit to each interface.
  • ESCON architecture eliminates the need for a channel-to-channel adapter 400 with a direct serial protocol between system A and B.
  • ESCON I/O interface 109-5 A of System A directly connects to ESCON I/O interface 109-5B of System B.
  • I/O operations are exchanged by access method software operating in System A and System B which performs synchronized I/O operations between the two systems.
  • an I/O write operation is initiated by the sending system and an I/O read operation is initiated by the receiving system.
  • I/O operations are controlled by I/O instructions and Channel Control Words, previously described with reference to Figure 1.
  • The data transferred from System A to System B moves from System A memory 105-A via system bus 103-A to I/O processor 104-A, to one of the channels and respective I/O interfaces 109-0A, 109-2A, or 109-5A, and connects directly, if ESCON, or through a connecting unit such as the channel-to-channel adapter or front-end processor, to I/O interfaces 109-0B, 109-2B, or 109-5B and respective channels, which connect to I/O Processor 104-B, which connects to memory 105-B via system bus 103-B.
  • the advantage of loosely coupled systems is that information may be exchanged or shared locally over OEMI wire or ESCON fiber-optics or over great distances using high speed telecommunication media to connect the loosely coupled systems.
  • The disadvantage is that I/O communications are much slower than system bus speeds, with a maximum speed of 4.5 megabytes per second for OEMI or 17 megabytes per second for ESCON (in comparison to 100 megabytes per second and greater for a system bus), and require high-level I/O programming and synchronized I/O operations, where System A performs a write operation while System B performs a read operation or vice versa, in order to initiate and handle data transfer between systems. This is inefficient if a large amount of information is to be transferred.
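  • The synchronized write/read pairing described above can be illustrated with a small simulation; the function names are invented stand-ins (not an actual access-method API), and a byte array plays the role of the channel-to-channel adapter 400:

```c
/* A hedged simulation of the synchronized transfer: System A's channel
 * WRITE and System B's channel READ are the two halves of one move
 * through the channel-to-channel adapter. */
#include <string.h>
#include <stdio.h>

#define CTCA_BUF 256
static char ctca[CTCA_BUF];  /* stands in for channel-to-channel adapter 400 */

/* System A side: an I/O WRITE moves data out of A's main memory. */
static void system_a_write(const char *a_memory, size_t len) {
    memcpy(ctca, a_memory, len);
}

/* System B side: the matching I/O READ stores data into B's main memory. */
static void system_b_read(char *b_memory, size_t len) {
    memcpy(b_memory, ctca, len);
}

int main(void) {
    char a_mem[] = "payroll record";   /* data in System A's memory 105-A */
    char b_mem[sizeof a_mem];          /* destination in memory 105-B     */
    system_a_write(a_mem, sizeof a_mem);
    system_b_read(b_mem, sizeof a_mem);
    printf("System B received: %s\n", b_mem);
    return 0;
}
```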
  • Loosely coupled System A and System B could be considered to be connected "at-the-feet" in comparison to the tightly coupled method of being connected "at-the-head".
  • Figure 5 shows the prior art Hybrid 390/Personal Computer 200 marketed by IBM as the Data Server 500 series, System 390, which executes System 390 mainframe application software at much lower cost than the classical mainframe, but with the very limited input/output performance of a personal computer: the 390 I/O functions are emulated in software using the personal computer's limited peripheral complex, must compete for bandwidth with the personal computer's native I/O functions, and must compete for slots on the distance-limited personal computer electrical system bus.
  • The P-390 card 201 is a CMOS System 390 microprocessor complex which executes the 390 instruction set with the exception of ESCON, PR/SM, Parallel Sysplex, Coupling Links, Integrated coupling migration facility, Sysplex, Sysplex Timer, Dynamic reconfiguration management, Vector Facility, Expanded Vector Facility Instructions, Asynchronous page out facility, ICRF, and Asynchronous Data Mover Facility. Most 370 and 390 business application programs do not require these functions and will run on the P-390 card without modification, conversion, or reassembly.
  • the P-390 card 201 serves as the 390 processor in the hybrid 390/PC computer 200 known as the IBM Data Server 500 Series, System 390.
  • The majority of technology in Hybrid 390/PC Computer 200 consists of less expensive, off-the-shelf personal computer technology.
  • the computer 200 contains a P-390 central processor card 201 and one or more Pentium microprocessors.
  • For I/O, one or two 500 megabyte to 4 gigabyte hard disks (with more disks in RAID configuration as options) serve as the primary substitute for the I/O control unit and disk storage of the expensive classical mainframe.
  • Other add-in cards serve as communications connections and control peripherals such as tape drives, etc.
  • The hybrid 390/PC computer 200 is packaged in a normal PC Server single-case enclosure.
  • the P-390 card 201 plugs into the personal computer motherboard 202 with the prior art Pentium microprocessor 205 and other normal PC functions.
  • the Pentium 205 and Memory 206 are attached to the PCI bus and connect via the PCI bus to the Peripheral Functions 207.
  • This combination of P-390 Card 201 and the Personal Computer consisting of the microprocessor 205 and Memory 206 supported by components 202 through 207 make up the prior art Hybrid 390/PC Computer 200.
  • Typical PC Peripherals 207 are the floppy disk, one or more hard disk drives, network cards for LAN, other communications, and an optional CD-ROM and/or tape unit functions.
  • the PCI bus serves as the System Bus 203 for the Pentium 205 microprocessor and Peripheral Functions 207 and PCI capable P-390 Card 201.
  • Earlier versions of P-390 Card 201 utilizing the MCA bus connectors have logical bridging circuitry to connect to the PCI bus for accesses among facilities.
  • the MCA and PCI bus combine to form System Bus 203.
  • System Bus Controller 204 consists of hardware bus grant logic circuitry, any necessary bridging circuitry, software protocols, and software running in both the P-390 card and microprocessor.
  • the combined circuitry and software form System Bus Controller 204 and serve to control the exchange of information among the processors and other complexes of computer 200.
  • Prior Art Figure 5 also shows the Software Components 208 organization of Hybrid 390/Personal Computer 200.
  • The P-390 software complex 209 is made up of the S/390 operating system and S/390 application software.
  • the Pentium microprocessor software complex 210 consists of the OS/2 operating system, Communications Manager 215, and P-390 I/O Subsystem 211.
  • The P-390 I/O Subsystem 211 software components are the P-390 Device Driver 212, S/390 Channel Emulator 213, and device managers 214.
  • the P-390 software complex cooperates with the microprocessor software complex.
  • The Personal Computer OS/2 software emulates the S/390 I/O functions for the S/390 Operating System and application programs.
  • P-390 Device Driver 212 software provides the operational interface between P-390 software 209 and Pentium software 210, which performs the channel and I/O device emulation.
  • I/O Subsystem 210 is made up of OS/2 PC application programs that emulate the S/390 channel, I/O control units, and I/O devices.
  • I/O Subsystem 210 contains several software modules, which are the P-390 Device Driver 211, 390 Channel Emulator 212, and specific Device Manager modules 213, which transform S/390 mainframe I/O device data formats into forms acceptable to the PC Peripherals 207.
  • The P-390 Device Driver software 208 starts and stops P-390 Card 201, performs Initial Program Load (IPL), handles interrupts between microprocessor 205 and P-390 Card 201, and is written as a standard OS/2 PC device driver.
  • IPL Initial Program Load
  • S/390 Channel Emulator 212 interfaces between the Device Manager modules 213 and P-390 Device Driver 211.
  • microprocessor 205 invokes P-390 Device Driver 212.
  • P-390 Device Driver 211 then passes information and control to the S/390 Channel Emulator 212.
  • S/390 Channel Emulator 212 converts 390 Channel Command Word (CCW) formats into PC format and selects the appropriate Device Manager 213 to exercise its PC peripheral to emulate the I/O operation requested by P-390 Card 201 and 390 Software 208.
  • CCW Channel Command Word
  • Any information that is to be retrieved from the PC disk storage is provided in the proper format by the S/390 Channel Emulator to the P-390 Device Driver, which accesses and stores the information into the P-390 Memory across System Bus 203.
  • Information to be written to the PC peripheral disk storage is retrieved by S/390 Channel Emulator 212 from the P-390 memory across System Bus 203 and provided to the appropriate OS/2 Device Managers 213 to be mapped into the PC disk format.
  • Device Managers 213 are PC (OS/2) application programs which are software versions of System 390 I/O Control Units and emulate the S/390 devices in the PC disks and other hardware. While prior art Hybrid 390/PC Computer 200 does execute the S/390 operating system and application programs and is significantly less expensive than a mainframe computer, the disadvantage of computer 200 remains the same as that of the normal PC, i.e., that there is insufficient I/O capability. All 390 I/O operations are emulated in the PC peripherals, which present the most significant deficiency in comparison to the mainframe I/O peripherals and a corresponding lack of performance. Therefore, performance for a number of users degrades as the 390 applications require more information to be retrieved or stored away.
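  • The emulation path just described (P-390 Device Driver to S/390 Channel Emulator to Device Manager) might look like the following sketch; all module interfaces and device numbers here are invented for illustration, since the actual OS/2 module interfaces are not given in this text:

```c
/* A hedged sketch of the P-390 I/O emulation dispatch. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t command; uint16_t count; } ccw_t;

static void disk_manager(const ccw_t *ccw) {
    /* would translate the operation into reads/writes of an OS/2 file
     * holding an emulated S/390 disk volume image */
    printf("disk manager: cmd 0x%02X, %u bytes\n", ccw->command, ccw->count);
}

static void tape_manager(const ccw_t *ccw) {
    printf("tape manager: cmd 0x%02X, %u bytes\n", ccw->command, ccw->count);
}

/* Channel Emulator: route the CCW to the Device Manager emulating the
 * addressed device. The device-number split below is invented. */
static void channel_emulator(uint16_t device, const ccw_t *ccw) {
    if (device < 0x0100) disk_manager(ccw);
    else                 tape_manager(ccw);
}

/* Device Driver entry: invoked when the P-390 card raises an I/O request. */
int main(void) {
    ccw_t read_ccw = { .command = 0x02, .count = 4096 };  /* 0x02 = READ */
    channel_emulator(0x00A0, &read_ccw);
    return 0;
}
```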
  • PC OS/2
  • Prior art computer 200 may be networked using PC LAN technology, but the same lack of I/O operation capability will saturate the network, as happens with a normal PC.
  • Special add-in cards which perform the functions of a S/390 I/O Channel could be inserted into the PCI bus to operate certain System 370 and 390 I/O control units.
  • The PCI bus, which is standard among PC manufacturers, limits the number of connectors and plug-in cards because of length and electrical driver characteristics.
  • The processing for the add-in card would necessarily be performed by the Computer 200 platform Pentium, which is already handling S/390 emulation and 390 Device Driver control for the P-390 Card 201.
  • Pentium class personal computers and a System 390 processor card are integrated into a novel 390 mainframe computer with full 390 I/O capability.
  • the Processor and I/O Subsystem tasks are performed by individual Pentium personal computer platforms.
  • This embodiment utilizes a novel Virtual System Bus which provides inter-platform data transfer at or near the internal computer platforms' system bus speed.
  • System 390 computer organizations are taught that operate the mainframe operating system, mainframe application software, and high performance and capacity I/O structure, in which the majority of the system is implemented using proprietary and expensive technology.
  • This classical mainframe computer complex is operated by a mature and robust operating system and performs mission critical functions for all manner of users.
  • the classical mainframe is limited by the distance, speed, and drive limitations of the electrical system bus when multiple central processors and/or I/O Processors are connected on the common system bus to share a common memory as an integrated or tightly coupled system.
  • A loosely coupled connection of mainframe systems sharing access to data permits greater distance flexibility, but is limited by the maximum information transfer speed of 4.5 or 17 megabytes per second and by the central processor and I/O processor overhead generated by I/O operation software.
  • the proprietary technology of classical mainframes is generally much more expensive than commercial technology and requires much more in the way of physical building facilities and floor space, specialized packaging to minimize system bus lengths, and cooling.
  • a novel computer operating as a prior art hybrid Personal/390 (IBM Data Server Series 500) computer organization is also taught which executes the mainframe operating system and mainframe application software.
  • The Virtual System Interface Controller integrates hardware and software components to expand the main data artery, the System Bus, into a Virtual System Bus which connects local System Buses of computer platforms without the distance, speed, or electrical drive limitations of either mainframe proprietary technology or the PCI buses found in personal computers.
  • Individual and commercially available computer platforms such as the Pentium based personal computer may serve as a 390 CPU complex or as a 390 I/O Processor.
  • a plurality of Personal Computers may serve as 390 Computer complexes and/or I/O Processors.
  • the Virtual System Bus implemented locally with serial fiber-optic media can connect multiple PC PCI System Buses without the electrical distance or drive limitations or susceptibility to electromagnetic interference.
  • the Virtual System Interface Controller formats and exchanges information between computer platforms over the Virtual System Bus at or near native speeds of the local computer platform PCI bus or standard or proprietary bus.
  • The serial nature of the VSIC and Virtual System Bus allows them to be connected to telecommunications media such as ATM or SONET to produce a global Virtual System Bus and eliminate the need for front-end communication processors or other pre-processing in order for systems to share information, applications, or work loads. For those situations where slower communication speeds are acceptable and the distances between the complexes are short, electrical coaxial transmission cables and electrical drivers and receivers may be substituted for the higher reliability and bit rate of fiber-optics.
  • The exemplary embodiment of this invention is a novel computer organization that improves on the prior art computer organizations to: 1) provide a low cost computer platform which is controlled by the mature and robust System 390 mainframe operating system; 2) operate the mainframe software applications at much less cost than a proprietary technology mainframe; 3) greatly increase on-line storage capacity and I/O performance over the prior art 390/personal computer (IBM DATA SERVER 500 SERIES) system by implementing external I/O processing, storage, and other peripheral functions; 4) implement unique and efficient peer-to-peer communication, referred to as the Virtual System Bus, which performs as a system bus among a plurality of computers to enable integration of CPU complexes and I/O Processor-Channel complexes into a mainframe capable system platform; 5) implement novel use of high speed serial links to extend the System Bus to remote locations; and 6) broaden the geographical extent of the mainframe operations between the CPU complex or a plurality of CPU complexes and one I/O Processor-Channel complex or a plurality of I/O Processor-Channel complexes.
  • the novel serial Virtual System Bus and Virtual System Interface Controller provide the I/O processor-channel complex or complexes and/or the CPU complex or complexes with a very high bandwidth System Bus connection and allows the I/O processor-channel and/or CPU complex or complexes to be located physically apart.
  • This Virtual System Bus implementation allows CPU and I/O Processor-channel complexes to be closely located while connected by the Virtual System Bus having one serial link or a plurality of serial links, which are implemented with optical fibers or copper cable (or a combination of both), and also allows them to be separated great distances from one another by linking a CPU complex or a plurality of CPU complexes and one I/O Processor-Channel complex or a plurality of I/O Processor-Channel complexes together with private fiber-optic and/or coaxial links and/or common carrier serial communications media such as the DS rate connections, the Synchronous Optical Network (SONET) optical carrier, also referred to as the Synchronous Digital Hierarchy (SDH), Asynchronous Transfer Mode (ATM), which is also called Broadband Integrated Services Digital Network and is defined to be a switched digital service, Switched Multi-megabit Data Service (SMDS), Frame Relay, and any other future serial carriers which can provide the bandwidth and data transmission reliability necessary to execute the mainframe applications.
  • SONET Synchronous Optical Network
  • the CPU complex includes a fiber optic switching mechanism which allows data to be multiplexed between the CPU and a plurality of channel units. This minimizes circuit overhead and yet still provides adequate bandwidth given the significantly enhanced bandwidth of fiber optic connections.
  • The Virtual System Bus and Virtual System Interface Controllers eliminate the need for front-end or communication processors when telecommunications media are a component of the Virtual System Bus, because commands, information, and programs may be formatted and exchanged in a direct memory-to-memory manner.
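  • The following sketch illustrates what "direct memory-to-memory" exchange means in practice: with a VSIC on each platform, moving a buffer to a remote node is a single request rather than a telecommunications session behind a front-end processor. The node-memory arrays and the function name are invented for illustration, and the two VSICs plus serial link collapse here into a single copy:

```c
/* A hedged simulation of memory-to-memory exchange over the Virtual
 * System Bus: node memories are arrays, and the bus is a direct copy. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NODES 2
#define MEM_SIZE 1024
static uint8_t node_memory[NODES][MEM_SIZE];   /* each platform's memory */

/* The requesting VSIC formats the request; the responding VSIC stores
 * the data straight into the destination node's memory. */
static void vsic_remote_store(int dst_node, uint32_t dst_addr,
                              const void *src, uint32_t len) {
    memcpy(&node_memory[dst_node][dst_addr], src, len);
}

int main(void) {
    const char msg[] = "program image";
    vsic_remote_store(1, 0x100, msg, sizeof msg);  /* node 0 -> node 1 */
    printf("node 1 memory: %s\n", (char *)&node_memory[1][0x100]);
    return 0;
}
```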
  • the embodiment of the novel organization of this invention improves on any prior art clustered computer organization to: 1) greatly increase on-line storage capacity; 2) greatly increase I/O performance; and 3) greatly increase peer-to-peer communications bandwidth among a plurality of CPU complexes and I/O Processor- Channel complexes.
  • This embodiment also provides the same benefits to a plurality of CPU complexes which execute operating systems other than the System 390 operating system and application software.
  • the System Bus is the main artery through which programs and information are transported to the processor that is performing work.
  • The novel virtual system interface controller (VSIC) environment greatly broadens the expanse of a computer's system bus, in effect creating a virtual system bus by which a plurality of complexes, local and remote, may interchange programs and information to improve System 390 operations between the CPU complex or a plurality of CPU complexes and one or a plurality of I/O Processor-Channel complexes.
  • This novel serial system bus invention enables an I/O processor-channel complex or plurality of I/O complexes and/or the CPU complex or plurality of CPU complexes to mutually enjoy very high bandwidth system bus connections, and specifically allows the I/O processor-channel and/or CPU complex or complexes to be located physically apart.
  • This VSIC serial system bus implementation allows Central Processor and I/O Processor-channel complexes to either be co-located or remotely located and maintain a common system bus environment.
  • the virtual system bus environment is formed by a plurality of serial links that join individual platforms into an all inclusive virtual system bus environment.
  • platforms can connect to one another in the same location or at great distances from one another by linking the central processor complex or a plurality of CP complexes and one I/O Processor-Channel complex or a plurality of I/O processor-channel complexes together with private fiber-optic links, coaxial cable links, or high speed common carrier serial communications media such as the so called DS connections.
  • The virtual system bus environment may be expanded across the continent over the fiber-optic trunks of common carriers over the Synchronous Optical Network (SONET), also referred to as the Synchronous Digital Hierarchy (SDH).
  • SONET Synchronous Optical Network
  • SDH Synchronous Digital Hierarchy
  • ATM Asynchronous Transfer Mode
  • ISDN Broad-band Integrated Services Digital Network
  • SMDS Switched Multi-megabit Data Service
  • Frame Relay Frame Relay
  • other Wide Area Services provided by serial carriers can provide the bandwidth and transmission reliability necessary to execute the applications over a metropolitan area or a greater metropolitan area such as New York, Boston, or Los Angeles.
  • redundant fiber optic connections can be run between the CPU complex or complexes and one or more of the I/O processor-channel complexes, providing further increased bandwidth, as well as redundancy in the event one of the fiber optic connections fails.
  • Taught herein is a novel computer bus organization which utilizes fiber-optic or serial telecommunications media between one or a multiplicity of CPU complexes and one or a multiplicity of channel complexes.
  • This invention allows one or a multiplicity of I/O channel complexes to be connected to one or a multiplicity of CPU complexes at a high bandwidth system bus rate and yet allows the I/O Processor-channel complexes to be located physically close or at great distance from the Central Processor complex.
  • redundant fiber optic connections can be run between the CPU complex and one or more of the channel complexes, providing further increased bandwidth, as well as redundancy in the event of a failure of a fiber optic connection.
  • The computer bus organization includes, for example, redundant serial bus expanders to provide redundancy and increased bandwidth.
  • Figure 1 is a block diagram depicting a typical prior art proprietary technology single or uni processor System 390 mainframe with one I/O processor operating a group of I/O channels and peripherals;
  • Figure 2 is a block diagram depicting a typical prior art proprietary technology single or uni processor System 390 mainframe which operates multiple I/O processors operating multiple groups of I/O channels and peripherals;
  • Figure 3 is a block diagram depicting a typical prior art proprietary technology tightly coupled multi-processor System 390 mainframe operating one or a multiplicity of I/O processors, I/O channels, and peripherals;
  • Figure 4 is a block diagram depicting a typical prior art proprietary technology "loosely coupled" multiprocessor System 390 mainframes and one or a multiplicity of I/O processors, I/O channels, and peripherals;
  • FIG. 5 is a block diagram depicting the prior art Hybrid 390/PC computer 200, known as the IBM Data Server 500 containing a P-390 System 390 processor and emulated 390 peripherals;
  • Figure 6 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting external I/O Processor-Channel complex 304;
  • FIG. 7 is a block diagram depicting VSIC software components of the Virtual System Bus and VSIC invention as used in new art Hybrid 390/PC computer 300 showing how this embodiment's implementation interacts to improve the performance over the prior art computer 200;
  • Figure 8 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controllers connecting multiple external I/O processor-channel complexes 301 to a single 390 processor complex,
  • Figure 9 is a block diagram depicting one embodiment of this invention showing new art hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting system buses of multiple 390 processor complexes to the system bus of a single external I/O processor-channel complex 301;
  • Figure 10 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting system buses in an any-to-any manner multiple 390 processor complexes and multiple external I/O processor-channel complexes 301;
  • Figure 11 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controllers implemented over local fiber-optic connections, and remote virtual system interface controllers which connect over carrier data communications media into the Virtual System Bus, for high performance Personal Computers or workstations operating locally and remotely and sharing information and programs at System Bus speeds between processors, I/O peripherals, and large capacity RAID disk servers, connecting computer platforms in an any-to-any manner, including multiple 390 processor complexes and multiple external I/O processor-channel complexes 301 in different geographical areas;
  • Figure 12 is a diagram depicting one embodiment showing software levels and the hardware level of the prior art Open Systems Interface Reference Model standard for communication among computers;
  • Figure 13 is a diagram depicting one embodiment showing software levels and the hardware level of prior art communications models, both proprietary and standard, compared to the Open Systems Interface Reference Model standard for communication among computers;
  • Figure 14 is a diagram depicting one embodiment of new art Virtual System Bus and the two software and one hardware communication layers accompanied by the VSIC Request frame header.
  • the VSIC frame header depicts requesting node (source) and responding node (destination), Task Identifier, and op-codes to perform memory-to- memory request operations for communication among computers
  • Figure 15 is a diagram depicting one embodiment of new art Virtual System Bus and the two software and one hardware communication layers accompanied by the VSIC Response frame header.
  • the VSIC frame header depicts responding node (source) and requesting node (destination), Task Identifier, and the operation codes to perform memory-to-memory response operations for communication among computers;
  • Figure 16 is a diagram depicting one embodiment of new art VSIC frames which are transported over the Virtual System bus.
  • The VSIC frame consists of a Software Start of Frame containing a recognition pattern and transmitted Word Count, a Header, Data Words (if applicable), a Cyclic Redundancy Check (CRC) word, and a Software End of Frame also containing a recognition pattern, Validation Bits, a Sequence Count, and a received Word Count (a sketch of this frame layout follows this list); and
  • Figure 17 is a diagram depicting one embodiment of a new art computer 300 referred to as the new art Cluster Computer 300 distributed on the Virtual System Bus and connected by dynamic switches.
  • the processor nodes and memories can communicate in an any-to-any manner.
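  • Collecting the frame fields named in the descriptions of Figures 14 through 16 above, a VSIC frame can be sketched as a C structure; all field widths and the data-area size are assumptions, since the text names the fields but not their sizes:

```c
/* A sketch of the VSIC frame of Figures 14-16, using only the fields the
 * description names; widths and layout are assumptions. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t recognition;   /* Software Start of Frame pattern */
    uint32_t word_count;    /* transmitted Word Count          */
} vsic_sof_t;

typedef struct {
    uint16_t source;        /* requesting node (responding node in a response) */
    uint16_t destination;   /* responding node (requesting node in a response) */
    uint16_t task_id;       /* Task Identifier                                 */
    uint16_t opcode;        /* memory-to-memory request/response operation     */
} vsic_header_t;

typedef struct {
    uint32_t recognition;   /* Software End of Frame pattern */
    uint16_t validation;    /* Validation Bits               */
    uint16_t sequence;      /* Sequence Count                */
    uint32_t word_count;    /* received Word Count           */
} vsic_eof_t;

typedef struct {
    vsic_sof_t    sof;
    vsic_header_t header;
    uint32_t      data[64]; /* Data Words, if applicable (size assumed) */
    uint32_t      crc;      /* Cyclic Redundancy Check word             */
    vsic_eof_t    eof;
} vsic_frame_t;

int main(void) {
    printf("sketched frame size: %zu bytes\n", sizeof(vsic_frame_t));
    return 0;
}
```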
  • FIG. 6 shows one embodiment of a novel low cost 390/PC Computer 300 which executes System 390 mainframe applications software as does the prior art proprietary technology S/390 mainframe 100 (Fig. 1) or the 100-MP, while solving many of the problems not solved by the prior art IBM DATA SERVER 500 Series computer 200 (Fig. 5).
  • Novel 390/PC Computer 300 contains a high content of off-the-shelf less expensive commercially available personal computer technology.
  • the 390 Central Processor of computer 300 of this embodiment is referred to as the P-390 Card 201.
  • the P-390 Card 201 is commercially available from IBM Corporation and is sold alone as a typical personal computer card. However, any System 390 Central Processor capable of properly executing the 390 instruction set would serve the same purpose in computer 300.
  • Computer 300 sub-system tasks are performed individually by multiple computer platforms because of a novel method of connecting computer platforms, called the Virtual System Bus, which enables inter-platform communication at or near the sub-system computer platform's system bus speed. This enables an implementation to integrate the platforms and sub-system tasks into a System 390 mainframe with the advantages of prior art computers 100 and 100-MP, all while costing less and providing more function than prior art computer 200.
  • the Virtual System Bus a novel method of connecting computer platforms
  • the 390 Central Processor is in one embodiment, the P-390 Card 201.
  • Computer 300 may use any 390 capable Central Processor.
  • Early versions of the P-390 Card 201 plug into an IBM proprietary micro-channel architecture bus (MCA) and subsequent versions plug into the Peripheral Component Interconnect Local Bus (PCI bus), which is a universal standard devised for Personal Computer multi-vendor compatibility.
  • P-390 Card 201 plugs into PC motherboard 202 along with prior art Pentium or similar Microprocessor 205 which performs normal PC and emulation functions.
  • Microprocessor 205 and Memory 206 are also attached to the PCI bus along with appropriate personal computer Peripheral Functions 207.
  • Hybrid 390/PC Computer 300 has the same capability to execute System 390 operating systems and application programs as prior art Computer 200.
  • The Virtual System Bus greatly overcomes the PCI bus's limited number of connectors, only 5 in a typical personal computer, which severely limits the number of add-in cards due to the very limited electrical drive capabilities of the PCI bus.
  • The Virtual System Bus enables a multiplicity of functions to be re-located to remote platforms while greatly increasing the bus speed connectivity distance to those other computer platforms, which may or may not be personal computer based.
  • The Virtual System Bus is not limited to connecting similar platforms but can also expand connectivity between dissimilar microprocessor platforms such as Intel, Motorola, Sun, NEC, or other proprietary platforms, because the Virtual System Interface Controller (VSIC), which exchanges information over the Virtual System Bus, formats the commands, information, and programs into Big Endian, Little Endian, or other required binary format for exchange among the computer platforms.
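By way of illustration only, the byte-order conversion the VSIC performs between Big Endian and Little Endian platforms might resemble the following C sketch. The function names and the choice of a 32-bit word are hypothetical; the patent does not specify how the conversion is implemented.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: swap one 32-bit word between Big Endian and
   Little Endian byte order, as a VSIC might do when exchanging
   information between, e.g., a Motorola and an Intel platform. */
static uint32_t vsic_swap32(uint32_t w)
{
    return ((w & 0x000000FFu) << 24) |
           ((w & 0x0000FF00u) << 8)  |
           ((w & 0x00FF0000u) >> 8)  |
           ((w & 0xFF000000u) >> 24);
}

/* Convert an entire buffer of words in place before transmission. */
static void vsic_convert_buffer(uint32_t *words, size_t count)
{
    for (size_t i = 0; i < count; i++)
        words[i] = vsic_swap32(words[i]);
}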
  • The Virtual System Bus increases the ability to create greater on-line storage capacity, higher data retrieval and storage rates, and higher performance, while providing an I/O Processor-Channel structure with the capabilities of the prior art S/390 mainframe computer 100.
  • An advantage of PC/390 Computer 300 over prior art computer 200 is the capability to connect one or a multiplicity of external I/O Processor-Channel complexes 301, implemented on personal computer platforms, without the distance limitations of the PCI electrical bus or the limitations, speed degradation, and software overhead of LAN connections.
  • The novel Virtual System Bus and Virtual System Interface Controllers 301-A and 301-B expand the capability of the PCI bus (or other distance and connector limited bus) of a prior art or other computer to connect to the PCI (or other) bus of external computer platforms, shown in this embodiment as the processor complex 200 and the I/O Processor complex 301.
  • The Virtual System Bus and VSIC, or Virtual System Interface Controller, greatly expand any computer or Personal Computer's ability to connect with another computer or Personal Computer platform at speeds very near or equal to PCI bus transfer speeds.
  • the Virtual System Bus of computer 300 is capable of 100 megabytes per second peak bandwidth using commercially available technology and fiber-optic cable.
  • The Virtual System Interface Controllers 301-A and 301-B control the exchange of commands, status, programs, and information across the Virtual System Bus connecting the processor platform and I/O complexes of the computer 300.
  • Hybrid 390/PC computer 300 includes prior art computer 200, which in turn includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207, and is connected by the Virtual System Bus 600, consisting of new art Virtual System Interface Controller 303A located in prior art computer 200, duplex fiber-optic cables, and Virtual System Interface Controller 303B (identical to VSIC 303A), to an I/O Processor Complex 301 which consists of Pentium microprocessor 305, Pentium Memory 306, and PCI System Bus 304 to a Small Computer System Interface (SCSI) RAID disk controller card 307, Small Computer System Interface (SCSI) tape system controller card 309, and PCI 390 ESCON Channel Card 311, which in turn control RAID Disk 308, Tape 310, and drive ESCON Fiber-optic cables 312.
  • Prior art Computer 200 depends on Local Area Network (LAN) attachments to communicate with other systems.
  • Prior art LAN bandwidths are 10 million bits per second (mbs) Ethernet, 100mbs Ethernet, 25mbs ATM, and 4-16mbs Token Ring. All of the above mentioned LAN protocols are inefficient because of the layers of software between the application and the media.
  • Apart from LAN connections or a special channel card which plugs into Computer 200's PCI bus limited connector allocation, the computer 200 is a closed system.
  • Computer 300's greatly improved organization is made possible by the Virtual System Bus and the Virtual System Interface Controllers, which join the independent platforms at system bus speed.
  • The Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of the prior art Computer 200 to the System Bus 306 of the I/O Processor-Channel Complex 304 and provide bandwidth of 100 Million Bytes Per Second or faster.
  • VSIC controllers 303-A and 303-B are identical.
  • The VSIC hardware has duplex capability in that frames may be assembled and transmitted simultaneously while incoming frames are received and disassembled.
  • VSIC controllers 303-A and 303-B each contain PCI bus access logic, a RISC microprocessor, volatile and non-volatile memory, and support UART logic to download operational software.
  • the VSIC controller contains a frame assembler, frame transmitter, parallel to serial conversion circuitry, and an optical (or telecommunications) transmitter with appropriate connectors to the optical fiber or other media.
  • RISC software (401 or 402) writes the data into the frame assembler and when complete, the hardware automatically sends the frame to the receiving VSIC controller.
  • The VSIC controller contains an optical (or telecommunications) receiver with appropriate connectors to the optical fiber or other media, serial-to-parallel conversion circuitry, and a frame receiver and disassembler.
  • The disassembler holds the frame until the RISC software (401 or 402) transfers the frame onto the PCI bus.
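The transmit sequence described above can be summarized with a minimal C sketch, assuming a hypothetical memory-mapped register interface to the frame assembler; none of these register or function names appear in the patent.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame assembler registers as seen by the RISC
   microprocessor inside a VSIC controller. */
typedef struct {
    volatile uint32_t data;      /* successive frame words are written here */
    volatile uint32_t end_frame; /* writing 1 closes the frame; hardware
                                    then serializes and transmits it        */
} vsic_regs_t;

/* RISC software (401 or 402) writes the frame into the assembler;
   when complete, the hardware sends it to the receiving VSIC. */
static void vsic_send_frame(vsic_regs_t *regs,
                            const uint32_t *words, size_t count)
{
    for (size_t i = 0; i < count; i++)
        regs->data = words[i];
    regs->end_frame = 1;
}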
  • Novel 390/PC Computer 300 is shown to also contain new art software components 401, 402, 403, 404, 405, 406, 407, and 408 connected via the Virtual System Bus.
  • The P-390 software complex 209 is made up of the S/390 operating system and S/390 application software.
  • PC Pentium processor software complex 210 is made up of the OS/2 operating system.
  • Prior art P-390 I/O Subsystem 210 software components are the P-390 Device Driver 211, S/390 Channel Emulator 212, and device managers 213.
  • the P-390 software complex works with the Pentium Software complex.
  • The Pentium hardware and software perform some of the I/O functions for the S/390 Operating System and application programs.
  • P-390 Device Driver 211 software provides the operational interface between the P-390 software 208 and the Pentium software 210, which performs elementary single channel and I/O device emulation to support non-performance-critical I/O operations.
  • Novel computer 300's high performance I/O operations are possible because the Virtual System Bus 600, controlled by Virtual System Interface Controller subsystems 401 and 402, permits one or a plurality of external I/O processor-channel complexes 304, each with multiple channel-to-I/O device paths, to off-load System 390 I/O operations from the software emulation in PC peripherals of prior art computer 200.
  • In novel computer 300, when a System 390 I/O operation is initiated by the P-390 Card 201 or other 390 Processor, the I/O address is checked to determine if the I/O operation is an emulated operation or an actual operation to the external I/O Processor-Channel complex. If the operation is to be performed by the external I/O Processor-Channel complex, the parameters concerning the I/O operation from P-390 complex 208 are provided across the local PCI bus to the Virtual System Interface serial adapter 303A.
  • The Virtual System Interface Controller 401 formats the information into a VSIC frame, then serializes and transmits the byte format information frame at a Gigabaud rate via the Virtual System Bus to the opposite serial adapter, which deserializes the information for the Virtual System Interface Controller 402 to control that transfer on the remote PCI bus to the required location in the I/O processor memory.
  • The I/O processor subsystem 404 may be polling for new operation information or be interrupted by the Virtual System Interface Controller 402 when information such as a new operational command arrives.
  • The I/O Processor may have previously made a data fetch request to the VSIC 402 and be prepared to receive data to perform a Write operation; it may be in a less busy state and able to poll, or, if in a busy state when a new operation arrives, take an interrupt when the I/O Processor subsystem's 404 work load permits.
  • P-390 Access software 217 provides information such as the main storage starting Channel Command Word (CCW) address, I/O device address, and I/O Control Block to VSIC software 401, which formats a VSIC frame for VSIC hardware 303A, which serializes and transmits that frame containing the beginning I/O operation parameters to VSIC hardware 303B, which deserializes the frame for VSIC software 402 to be sent to the I/O processor subsystem software 404.
  • the I/O processor 404 prepares a device initiation by setting up pointers, and via the Virtual System Bus and VSIC software 402, VSIC hardware 303B issues a request to the P-390 Access Software 217 for a CCW and data if applicable.
  • P-390 software 217 provides that CCW, and data if applicable, to VSIC software 401 and hardware 303A for transmission across the Virtual System Bus to VSIC hardware 303B; VSIC software 402 then provides the CCW and a quantity of data, if applicable, to I/O Processor Subsystem 404 and initiates the I/O channel subsystem 405.
  • I/O channel subsystem 405 asks the I/O processor subsystem for more data if the operation is a write to the disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408.
  • Disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408 execute the particular command, e.g., handle control or data transfer with the respective peripheral device or devices.
  • The I/O channel subsystem 405 requests data from or passes data to the I/O processor 404, which presents it to VSIC 402 and VSIC hardware 303B across the Virtual System Bus to VSIC hardware 303A and VSIC software subsystem 401 to perform a data store or fetch through the P-390 Access software 217. The operation continues in a like manner until the ending CCW count is exhausted or the operation ends due to function or record length.
  • The disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408, as applicable, presents the ending status to the I/O channel subsystem 405, then to the I/O processor 404, which presents to VSIC 402 the information and parameters to frame and transfer via the Virtual System Bus to VSIC 401 and on to P-390 Access software 217 of the Central Processor complex 208 as a subchannel status word, which signals the I/O program with the operation's completion status.
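The exchange just described can be condensed into the following illustrative C sketch. It is a narrative outline only: each function is a hypothetical stand-in for one leg of the P-390 Access software 217 / VSIC 401-402 / I/O processor 404 dialogue, not an API defined by the patent.

#include <stdio.h>

static void send_iocb(void)      { puts("217->401->VSB->402->404: I/O Control Block"); }
static void fetch_ccw(void)      { puts("404->402->VSB->401->217: request CCW (+ data if a write)"); }
static void transfer_data(void)  { puts("data stores/fetches cross the VSB until the CCW count is exhausted"); }
static void ending_status(void)  { puts("404->402->VSB->401->217: subchannel status word, operation complete"); }

int main(void)
{
    send_iocb();      /* Start Subchannel parameters cross the VSB */
    fetch_ccw();      /* I/O processor prepares device initiation  */
    transfer_data();  /* read or write, possibly chained           */
    ending_status();  /* completion signaled to the I/O program    */
    return 0;
}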
  • Figure 8 shows detail of the hardware components of one embodiment of new art computer 300.
  • The System 390 Processor, P-390 Card 201, in this embodiment of new art computer 300 is now connected to a plurality of external I/O Processor-Channel complexes 304 via the Virtual System Bus (VSB) 600 and VSIC Controller hardware 303A (2) and 303B (2).
  • Greater I/O throughput, more peripheral devices, and greater disk storage capacity, with more information stored and available to users, result from the multiplicity of I/O Processors providing more channel paths to more I/O peripherals.
  • To illustrate the operation of I/O Processor-Channel 304, the example starts with P-390 Card 201 decoding an I/O instruction such as a Start Subchannel.
  • the Start Subchannel is accompanied by a subchannel address.
  • the subchannel address corresponds to a channel path (CHPID) and device address.
  • Parameters in the form of Channel Command Words, which describe the operation and the P-390 memory locations involved with data storage or retrieval, had been set up in the P-390 Card 201 memory.
  • data to be moved from P-390 memory to external storage is called a Write Operation while data to be retrieved from external storage and placed in P-390 memory is called a Read Operation.
  • the P-390 Access software 217 then provides an I/O Control Block containing a valid Start Subchannel address of the device that is to perform the operation to VSIC software 401.
  • the subchannel address translates to a CHPID and device address of the I/O Processor Complex as was defined in the I/O configuration of computer system 300, a normal mainframe means to establish logical to physical I/O paths to I/O devices.
  • VSIC software 401 examines the configuration table to determine which of the VSIC nodes 303A connects to the proper path of the Virtual System Bus and to the specific I/O processor complex. If VSB paths are redundant, the VSIC software 401 chooses the VSIC node 303A according to a first and second choice priority scheme.
  • Upon determining the proper node, the VSIC software 401 writes the I/O Control Block and other information to the VSIC node 303A and adds an End of Frame; VSIC node 303A serializes and transmits that frame to partner VSIC node 303B, which receives and de-serializes the frame for VSIC software 402.
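A minimal C sketch of the configuration lookup and first/second choice node selection follows, assuming a hypothetical table layout; the patent names the configuration table and the priority scheme but not their form.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical entry: maps a subchannel to a CHPID, device address,
   and the preferred and alternate VSIC nodes for redundant VSB paths. */
typedef struct {
    uint16_t subchannel;
    uint8_t  chpid;
    uint16_t device;
    uint8_t  node_first;    /* first choice VSIC node, e.g. 303A    */
    uint8_t  node_second;   /* second choice if paths are redundant */
} io_config_t;

/* Return the VSIC node for a subchannel, preferring the first choice
   unless it is unavailable; -1 if the subchannel is not configured. */
static int choose_node(const io_config_t *cfg, size_t n,
                       uint16_t subchannel, int first_ok)
{
    for (size_t i = 0; i < n; i++)
        if (cfg[i].subchannel == subchannel)
            return first_ok ? cfg[i].node_first : cfg[i].node_second;
    return -1;
}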
  • Software 401 in conjunction with external software module 402 has provided a block of information to the I/O Processor sub-system 404 called an I/O Control Block (IOCB).
  • the IOCB contains stimulus to initiate the operation and parameters of the operation.
  • The I/O Processor subsystem software 404 selects which channel and device are to be accessed. The selected channel starts the I/O device by an internal or external initial selection to provide the command.
  • The command initiates an input (read), output (write), or control operation.
  • Backspace, seek, recalibrate, and set sector are examples of control commands which prepare the device for an operation or serve to position the recording media for reading or writing data. If the command is a read or write, data is transferred according to that respective command. If a write, data is presented from P-390 memory by P-390 Access software 217 to VSIC software 401 to VSIC node 303A for transport over the Virtual System Bus to VSIC node 303B.
  • VSIC software 402 transfers that data to the I/O processor 404 and hardware 304 for channel subsystem 405 to transfer to the I/O device for proper action such as writing to a disk or tape storage device.
  • If a read, the CHPID works with the I/O Processor to send the data to the P-390 memory.
  • the I/O Processor 404 identifies the data for VSIC software 402 which writes the data to VSIC hardware node 303B which serializes and transmits the framed data across the Virtual System Bus 600 to VSIC 303A which receives and de-serializes the frame for VSIC software 401 to send the data to the proper location via P-390 Access software 217 into the P-390 memory for use by the 390 application program.
  • The I/O operation may terminate at the end of data transfer and ending status presentation or, if conditions require, will continue to execute more commands in a continuous operation called CHAINING until all parameters and ending status of the I/O operation have been satisfied. (Chaining allows multiple commands or data fields to be transferred by a single S/390 I/O instruction.) At the end, the I/O control unit and device will present status to the channel signifying end of operation.
  • That status is presented to the I/O Processor 404, which uses VSIC software 402 to present the status information to VSIC node 301-B, which serializes and transmits the information across the VSB to VSIC controller node 301-A and VSIC software 401, to present the status and interrupt information to P-390 Access software 217 to inform the S/390 processor that the I/O operation has completed.
  • A first of three channel types is an external 390 channel 311 with a standard interface such as ESCON, which will operate prior art 390 I/O control units and devices as does any mainframe channel.
  • a second type is a Small Computer System Interface (SCSI) channel 307 or 309 which integrates with commercially supplied SCSI controllers cards to operate tape drives, disk drives, or other SCSI peripherals.
  • A third type of channel can actually be integrated into large capacity RAID disk storage systems, which have the internal processing power.
  • RAID systems can provide several levels of error correction and data recovery in the event of the failure of a disk drive during operation.
  • FIG. 9 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention.
  • Hybrid 390/PC computer 300 includes two prior art computers 200, each of which in turn includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207, and which are connected by the Virtual System Bus 600, in this embodiment consisting of 2 or more new art Virtual System Interface Controllers 303A situated in prior art computers 200, duplex fiber-optic cables, and Virtual System Interface Controllers 303B (identical to VSIC 303A), to share the data of an independent I/O Processor Complex 301 that consists of Pentium microprocessor 305, Pentium Memory 306, and PCI System Bus 304 to a Small Computer System Interface (SCSI) RAID disk controller card 307, Small Computer System Interface (SCSI) tape system controller card 309, and PCI 390 ESCON Channel Card 311, which in turn control RAID Disk 308, Tape 310, and drive ESCON Fiber-optic cables 312.
  • The Virtual System Bus and the Virtual System Interface Controllers join the independent platforms at system bus speed.
  • The Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of the prior art Computer 200 to the System Buses 306 of the I/O Processor-Channel Complexes 304 and provide bandwidth of 100 Million Bytes Per Second or faster.
  • FIG. 10 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention, similar to the embodiment of Fig. 9 but using a serial switcher-router. Hybrid 390/PC computer 300 includes two or more prior art computers 200, each of which in turn includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207, and which are connected by the Virtual System Bus 600, in this embodiment consisting of 2 or more new art Virtual System Interface Controllers 303A situated in prior art computers 200, duplex fiber-optic cables, and Virtual System Interface Controllers 303B (identical to VSIC 303A), to share the data of two or more independent I/O Processor Complexes 301 which consist of Pentium microprocessors 305, Pentium Memories 306, and PCI System Buses 304 to Small Computer System Interface (SCSI) RAID disk controller cards 307, Small Computer System Interface (SCSI) tape system controller cards 309, and PCI 390 ESCON Channel Cards 311, which in turn control RAID Disks 308, Tapes 310, and drive ESCON Fiber-optic cables 312.
  • Figure 11 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention, which is similar to the embodiment of Fig. 9, using a serial switcher-router and a virtual system bus.
  • Hybrid 390/PC computer 300 includes two or more prior art computers 200, each of which in turn includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207, and which are connected by the Virtual System Bus 600, in this embodiment consisting of 2 or more new art Virtual System Interface Controllers 303A and 303C situated in prior art computers 200, duplex fiber-optic cables, duplex telecommunication media such as ATM or SONET, and Virtual System Interface Controllers 303B and 303D (identical to VSICs 303A and 303C), to share the data of two or more independent I/O Processor Complexes 301 which consist of Pentium microprocessors 305, Pentium Memories 306, and PCI System Buses 304 to Small Computer System Interface (SCSI) RAID disk controller cards 307, Small Computer System Interface (SCSI) tape system controller cards 309, and PCI 390 ESCON Channel Cards 311, which in turn control RAID Disks 308, Tapes 310, and drive ESCON Fiber-optic cables 312.
  • Computer 300's greatly improved organization is made possible by the Virtual System Bus and the Virtual System Interface Controllers that join the independent platforms at system bus speed.
  • The Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of the prior art Computer 200 to the System Buses 306 of the I/O Processor-Channel Complexes 304 and provide bandwidth of 100 Million Bytes Per Second or faster.
  • FIG. 12 shows the prior art Open Systems Interconnect (OSI) Reference Model, one of the standard computer networking architectures, which contains seven layers: six are software and one is the software/physical layer.
  • A network architecture provides the plan and rules that govern the implementation and function of the hardware and software components by which the network connects and the computers communicate. The move to personal computers and workstations, away from dumb terminals and Host Centric control, influenced an open structure.
  • The Open Systems Interconnect Reference Model (OSIRM) defines the functions of and protocols necessary for international data communications. This model was developed by the International Standards Organization (ISO), an international body of 90 countries chartered to cover technology issues among the members. Work began on the OSI architecture in 1977, 3 years after IBM's 1974 announcement of Systems Networking Architecture (SNA). IBM has since (1988) introduced a networking structure called Systems Application Architecture (SAA) which is very similar to the OSIRM standard.
  • Figure 12 also depicts the prior art TCP/IP networking protocol which evolved from an early research computer network, the ARPANET.
  • The prior art Local and Wide Area network standards provide for multi-vendor inter-operability among users and enable those users to send or retrieve files and e-mail, "surf the Internet", or download programs from a multiplicity of hosts or servers. Local and Wide Area networking standards are oriented to the individual personal user and the personal computer or workstation (a more powerful form of personal computer) and primarily provide a one-on-one form of computer communication: user to host, host to user, performing operations such as filing to storage, copying files, retrieving files from storage, sending electronic mail, receiving electronic mail, etc.
  • The Open Systems Interconnect Reference Model (OSIRM) layers are as follows: 1) The Application Layer permits application programs to transparently send and receive information through the system's interconnection software layers. 2) The Presentation Layer preserves the information content of data transferred across the network. The two systems' Presentation layers negotiate the syntax for transferring the messages exchanged by the two systems. The Presentation layer also performs the necessary conversions between the data formats of the two systems. 3) The Session Layer manages the user-to-network interactive sessions and all session oriented transmissions. During communications between users, normally terminals or LAN workstations, and a central processor or front end processor, the session layer controls the information required by the user. Some examples of the session layer are terminal-to-mainframe log-on procedures, transfer of user information, and setting up information and resource allocations.
  • 4) The Transport layer controls the quality and methods of data transport across the entire network. This layer can be independent of the number of networks or the types of networks through which the data must move. The transport layer's responsibility is to manage end-to-end control of complete message transmission, retransmission, and delivery. The Transport layer assures that the packet/message segmentation and reassembly process is complete. The transport layer provides for higher level error correction and retransmission for services such as Frame Relay and SMDS. OSI Transport Protocol and Transmission Control Protocol (TCP) are examples of transport layers; however, TCP is used in conjunction (TCP/IP) with the Internet Protocol (IP). 5) The Network layer manages details of transmitting data across the physical network between network elements as well as between networks.
  • 6) The Link layer, sometimes called the data link layer, manages the flow of data between user and network.
  • 7) The Physical Layer, which is addressable, manages the connection of network elements, including voltages and currents, optical wavelength (if applicable), connector pin definitions, and signaling formats.
  • RS-232, RS-449, X.21, V.35, IEEE 802 LAN, ISO FDDI, and others are examples.
  • TCP/IP architecture functional layers do not separate application oriented functions into the three distinct layers as does the OSIRM.
  • The TCP/IP Application layer approximates the OSI Application, Presentation, and Session layers.
  • the OSI and TCP/IP Transport are equivalent, as are the OSI Network and TCP/IP Internet layers.
  • The OSI Data Link and TCP/IP Network Interface layers perform the framing for the Physical and Hardware layers, but the TCP/IP Hardware layer is not addressable by the software layers, and any form of communication circuit may be used by TCP/IP as long as a Network Interface function can control that communication circuit.
  • The Network Interface Card (NIC) which attaches to the interconnect media is assigned a unique address when manufactured and, unlike the addressable Physical layer of the OSI, is of no concern to the upper software layers.
  • Application Layer is where the application programs that use the Internet operate. Some application layer software implements a set of standardized Application Layer protocols that directly services terminal or workstation users. Other Application layer software provides Application Programming Interfaces (APIs) for user-written programs to communicate over the TCP/IP Internet. TCP/IP Application layer protocols provide services such as remote login, file copying and sharing, electronic mail (e-mail), directory assistance, and network management.
  • the Transport Layer serves computer systems with TCP/IP communication software where several application processes may be running concurrently.
  • the Transport Layer provides end-to-end data transport services to those applications which require the TCP/IP communication capability.
  • The Internet Protocol (IP) is responsible for transporting data from a source host to a destination host. Whether the hosts are on the same network or on different physical networks connected by routers, IP makes a complex Internet appear as a single integrated or virtual network.
  • The IP process executes in each host and router on the path from the source host to the destination host.
  • The Network Interface layer presents a standard interface to the Internet layer and handles hardware dependent functions.
  • The Hardware layer is generally considered to be independent of the TCP/IP architecture since the Network Interface layer provides the interface to the Internet layer.
  • The Hardware layer is concerned with physical entities such as the NICs, transceivers, cables, hubs, connectors, and the like which physically interconnect the network.
  • Figure 13 depicts the prior art OSIRM, IBM SNA/SAA, DECnet, and ISDN architecture layers and shows their similarity. Those prior art data transmission architectures have a predominant ancestor, the prior art IBM Systems Networking Architecture or SNA.
  • IBM provided a hierarchy of network access methods (drivers) to accommodate a variety of users and applications; however, the final controller was the front-end or communications processor, which worked directly with a mainframe host. This was called Host Centric networking since the host mainframe controlled the entire network. Loosely coupled mainframes also follow the SNA architecture for the channel-to-channel connection between the systems. In IBM Systems Networking Architecture there are seven layers, as follows.
  • the Transaction Services layer provides network management and configuration services as well as a functional user interface to the network operations.
  • the Presentation layer formats and presents the data to the users as well as performing data translation, compression and encryption.
  • the Data Flow control layer provides services related to user sessions with both layers operating at times in tandem.
  • The Transmission layer establishes, maintains, and terminates SNA sessions, for which it provides session management and flow control while performing some routing functions.
  • the Path Control layer provides the flow control and routing between point-to-point logical channels on virtual circuits throughout the network by establishing the logical connection between the source and the destination nodes.
  • The Data Link layer defines flow control functions of serial data links employing the SDLC protocol and channel attachments employing the S370/390 protocols.
  • The Physical Layer manages the connection of network elements, including voltages and currents, optical wavelength (if applicable), connector pin definitions, and signaling formats.
  • The OSIRM, IBM System Application Architecture (SAA), DECnet, and ISDN architecture layers show the similarity among the networking architectures, which fall into standardized or proprietary categories, SAA and DECnet being proprietary.
  • the similarities show that LAN networking is adapted or standardized to perform the required communications function in order to provide the Client/Server service required by the respective user.
  • Figure 14 depicts an embodiment of the new art VSIC and Virtual System Bus 600 connecting the simple version of new art Computer 300 consisting of two processor nodes COMPUTER 200 and I/O Processor 301.
  • a memory REQUEST operation may be initiated by COMPUTER 200 or I/O Processor 301 depending on the functional characteristics of the operation, requester, and responder.
  • Figure 14 also depicts the embodiment of Request Frame Header 141 carried in the Request Frame 140 which conveys a memory operation request to the responding platform enabling memory to memory operations to be performed between those platforms. Even though only two nodes exist in the figure, both platforms have a node address assigned at VSIC initialization.
  • VSIC Software 401 checks the configuration validity and, if permitted, initiates the request and information transfer to VSIC Hardware 303A. After that information is passed to the VSIC 303A, the VSIC Software 401 concludes with a command that ENDS the frame. The VSIC hardware then serializes and transmits the Request Frame 140 immediately. VSIC Hardware 303B receives the Frame 140 and restores the information to parallel or byte oriented format. VSIC Software 402 then reads the Frame from VSIC Hardware 303B and initiates the operation.
  • the Request Frame Header 141 always contains a Source Node Address 144 and a Destination Node Address 143.
  • In the Request Frame Header 141: 1) The Destination Node Address 143 is the platform which receives the frame; 2) The Source Node Address 144 is the platform which sent the frame;
  • 3) A Task ID 145 is attached to application operations to lower the communication overhead by allowing the VSIC at each end to keep track of operations across the Virtual System Bus which last longer than a simple request-response, or those operations which require specific memory address areas dedicated solely to that operation. 4) The Request Frame Header 141 particularly conveys the Memory Interface Request Operations 142. Examples of Memory Interface Request Operations 142 are listed below, followed by an illustrative rendering of the header:
  • Fetch Data is a request to provide data from the Destination Node's memory
  • Partial Store is a request to store only a partial word of data into a memory location and which does not modify the adjacent bytes filling up the word or double word as the case might be;
  • Set Storage Key sets up a protection key for a specific area of memory and accesses require the key along with the memory address in order to access that area;
  • Data Transfer Write is a continuous output data exchange operation defined by the parameters given the destination node during the Data Transfer Initialization operation from the source node processor.
  • Data Transfer Read is a continuous input data exchange operation defined by the parameters given the destination node during the Data Transfer Initialization operation from the source node processor.
  • Stop Data Transfer tells the destination node to cease the previously set up Write or Read Data Transfer operation.
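For illustration, the Request Frame Header 141 and the request operations listed above can be rendered as the following C declarations. The field widths and numeric opcode values are invented; the patent specifies only the fields and operation names.

#include <stdint.h>

/* Hypothetical layout of Request Frame Header 141. */
typedef struct {
    uint8_t  destination_node;  /* 143: platform which receives the frame   */
    uint8_t  source_node;       /* 144: platform which sent the frame       */
    uint16_t task_id;           /* 145: ties long-lived operations together */
    uint8_t  opcode;            /* 142: memory interface request operation  */
} vsic_request_header_t;

/* Request operations named in the text; values are illustrative. */
enum vsic_request_op {
    VSIC_REQ_FETCH_DATA = 1,
    VSIC_REQ_PARTIAL_STORE,
    VSIC_REQ_SET_STORAGE_KEY,
    VSIC_REQ_DATA_TRANSFER_WRITE,
    VSIC_REQ_DATA_TRANSFER_READ,
    VSIC_REQ_STOP_DATA_TRANSFER
};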
  • Figure 15 depicts a new art example of the VSIC and Virtual System Bus connecting a simple version of new art Computer 300 consisting of two processor nodes, prior art Computer 200 and new art I/O Processor 301.
  • A memory Response operation responds to a REQUEST initiated by either prior art Computer 200 or new art I/O Processor 301 depending on the functional characteristics of the operation, requester, and responder.
  • Figure 15 also depicts the RESPONSE Frame Header 151 carried in the Response Frame 150 which conveys a memory operation RESPONSE to the REQUESTING platform to confirm any memory-to-memory operations performed between those platforms. Both platforms have a node address assigned at VSIC initialization even though only two nodes exist in the example of Figure 15.
  • 1) The Destination Node is the platform which receives the Response Frame; 2) The Source Node is the platform which sent the Response Frame;
  • 3) A Task ID is attached to application operations to lower the communication overhead by allowing the VSIC software 401 and 402 at each end to keep track of operations across the Virtual System Bus 600 which last longer than a simple request-response, or those operations which require specific memory address areas dedicated solely to that operation; 4) Response Frame Header 151 conveys the Memory Interface Response Operations OPCODES 152, examples of which are listed below, followed by an illustrative enumeration:
  • Fetch Data Response usually carries the requested data and response for the requesting node.
  • Store Data Response is a response that data was stored successfully or unsuccessfully as the case might be.
  • Fetch and Set Lock Response is a response, accompanied by data if successful, indicating that the lock was set or that the lock attempt was unsuccessful, probably due to a previously set lock;
  • Partial Store Response is a response that data was stored successfully or unsuccessfully as the case might be;
  • Set Storage Key Response is a response that the storage key was set successfully or unsuccessfully as the case might be;
  • Set Address Limit Response is a response that the Address Limit was set successfully or unsuccessfully as the case might be
  • Data Transfer Initialization Response is a response that the Data Transfer was initialized successfully or unsuccessfully as the case might be
  • Data Transfer Write Response is a response that the Data Transfer write proceeded successfully or unsuccessfully as the case might be
  • Data Transfer Read Response is a response that the Data Transfer Read finished successfully or unsuccessfully as the case might be
  • Stop Data Transfer Response is a response that the Data Transfer was terminated successfully or unsuccessfully as the case might be;
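The response operations above may likewise be collected into an illustrative C enumeration. The numeric values are invented; per the text, each response carries a success or failure indication, and Fetch Data Response also carries the requested data.

/* Response operations (OPCODES 152) named in the text. */
enum vsic_response_op {
    VSIC_RSP_FETCH_DATA = 1,
    VSIC_RSP_STORE_DATA,
    VSIC_RSP_FETCH_AND_SET_LOCK,
    VSIC_RSP_PARTIAL_STORE,
    VSIC_RSP_SET_STORAGE_KEY,
    VSIC_RSP_SET_ADDRESS_LIMIT,
    VSIC_RSP_DATA_TRANSFER_INIT,
    VSIC_RSP_DATA_TRANSFER_WRITE,
    VSIC_RSP_DATA_TRANSFER_READ,
    VSIC_RSP_STOP_DATA_TRANSFER
};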
  • Figure 16 depicts the new art VSIC Frame in serial transmission format, 140-S Request Frame and 150-S Response Frame, serialized for transmission over the serial Virtual System Bus 600.
  • Figure 16 also depicts the parallel format 140-P Request and 150-P Response Frames in the form necessary to computers internally, since computers are organized in parallel byte, word, or multiple word structures; the parallel format is manipulated and moved by the VSIC software 401 and 402.
  • Serial transmission encoding is the 8B/10B format defined in Prior art Fibre-Channel and Prior art ESCON serial transmission specifications;
  • the Serial Start of Frame Delimiter 161 is a prior art Fibre-channel SOFn (normal) transmission control character indicating to the serial receiver and decoder that a frame is beginning;
  • the Software Start of Frame 162 is a recognizable pattern of FF, FF, CC, and transmitted word count in HEX or base 16 notation.
  • the CC character identifies the format of the frame.
  • The S/W SOF permits easy manipulation of incoming frames by providing an easily recognized boundary indication, since the serial SOF is meaningless in a parallel format.
  • the transmitted Software End of Frame Delimiter 165 is a recognizable pattern of FC, FC, and two reserved Bytes sent by the transmitting Software 401 or 402.
  • the Serial End of Frame Delimiter 167 is a prior art Fibre-Channel EOFn (normal) transmission character indicating to the serial receiver and decoder that a frame is ending;
  • The Cyclic Redundancy Check (CRC) Word is used by the serial reception circuits to calculate the arriving frame's CRC and compare it with the CRC as transmitted. Once checked as good, the CRC is meaningless in parallel and is therefore removed when the frame is restored to parallel format. If bad, an error condition is indicated and the invalid frame bit in the S/W EOF is set.
  • Software Start of Frame Delimiter 162 is a recognizable pattern to aid software 401 and 402 in manipulating frame transmit and receive buffers during the actual operation.
  • The Software End of Frame 168 (S/W EOF) is particularly useful to the Software 401 and 402 when allocating buffer space and determining where one frame ends, where another begins, and whether the frame is valid, since the receiving VSIC appends a Sequence Count of 4 bits, invalid frame and error bits if the frame was invalid, and the received word count to compare with the transmitted word count.
  • The S/W EOF aids Software 401 and 402 by informing them that no further frames have been received. For example, if Software 401 or 402 attempts to read a frame from VSIC hardware 303A or 303B and no frame has been received, the last frame's Software EOF will be presented. If no new frames are present, the sequence count will remain the same as the last.
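A receive-side validation of a restored parallel frame might look like the following C sketch. The struct layout is hypothetical, since the patent names the S/W SOF and S/W EOF contents but not their exact widths or ordering, and the CRC itself is checked and stripped by the hardware.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical S/W EOF as restored by the receiving VSIC. */
typedef struct {
    uint8_t  pattern[2];      /* FC, FC recognition pattern           */
    uint8_t  sequence_count;  /* 4-bit count appended by the receiver */
    bool     invalid_frame;   /* set if the hardware CRC check failed */
    uint16_t received_words;  /* compared with the transmitted count  */
} sw_eof_t;

/* Check the S/W SOF pattern (FF, FF, CC), the invalid frame bit,
   and the received vs. transmitted word counts. */
static bool frame_is_valid(const uint8_t sof[3], uint16_t tx_words,
                           const sw_eof_t *eof)
{
    if (sof[0] != 0xFF || sof[1] != 0xFF || sof[2] != 0xCC)
        return false;               /* not a recognizable S/W SOF */
    if (eof->invalid_frame)
        return false;               /* CRC mismatch on arrival    */
    return eof->received_words == tx_words;
}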
  • Figure 17 depicts a new art Clustered Computer 300 joined by the new art Virtual System Bus 600.
  • 5 Prior art computers 200 are joined to 3 New art I/O Processors 301.
  • the size of the cluster may be much larger.
  • For example, up to 128 CPU platforms may be joined to 128 I/O Processors or other processors.
  • The Any-to-Any switch 170 directs frames by Destination Node Address 143. Node addresses are unique in a cluster. The Any-to-Any switch 170 is node addressed. Ports receive from the Source Node Address 144 and switch a frame to a Destination Node Address 143. A frame arriving at a switch port has its Destination Node Address 143 checked against the configuration and the frame is directed to that port/node.
  • If the Destination Node Address is not configured on the local switch, the frame is sent to the second Any-to-Any switch 170 node, or returned to the Source Node Address 144 as an error if no additional Any-to-Any switch 170 node is configured.
  • When the frame arrives at that second Any-to-Any switch 170, the frame is sent to the proper node address as determined by the configuration of that Any-to-Any switch 170. In the event that the node address is invalid, an error frame is returned to the source node address.
  • The node addresses are stored in the Any-to-Any Switch 170 configuration table. Any modification to the cluster, such as additional processor nodes being installed, requires a partial or full re-initialization. In case of a partial re-initialization, normal operations may continue.
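The routing decision described above might be sketched as follows in C. The table layout, the 256-entry limit (drawn from the text's 128 CPU plus 128 I/O Processor platforms), and the use of -1 for an unconfigured port are assumptions for illustration.

#define MAX_NODES 256  /* up to 128 CPU platforms plus 128 I/O Processors */

/* Hypothetical switch configuration: the output port for each node
   address, or -1 if that node is not configured on this switch. */
typedef struct {
    int port_for_node[MAX_NODES];
    int cascade_port;  /* port toward a second Any-to-Any switch, or -1 */
} switch_config_t;

/* Direct a frame by its Destination Node Address 143: forward to the
   local port if configured, otherwise toward the cascaded switch; a
   return value of -1 means an error frame goes back to the Source
   Node Address 144. */
static int route_frame(const switch_config_t *sw, unsigned dest_node)
{
    if (dest_node < MAX_NODES && sw->port_for_node[dest_node] >= 0)
        return sw->port_for_node[dest_node];
    return sw->cascade_port;
}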
  • New art Cluster Computer 300 provides the benefits of prior art mainframe 100-MP: the new art Cluster Computer 300 configuration demonstrates the superiority of the mainframe processing architecture, which supports simultaneous operation of application programs sharing physical resources and information by several processors and, in conjunction with the superior I/O architecture, supports multiple paths to allow sharing and redundant access to information. The failure of a Central Processor, channel, or even an I/O Processor complex would not prevent program execution or access to the valuable information and thereby disable the function of the entire mainframe.
  • the new art Cluster Computer 300 overcomes the mainframe 100-MP's disadvantage due to length of the electrical system bus and the limitation of distance that binary signals may be driven reliably over an electrical bus.
  • the Central and I/O Processors need not be physically close together and are implemented in commercially available components and require little if any expensive proprietary technology and packaging.
  • the Personal Computer platforms require little or no expensive building facilities with environmental control, since personal computer technology is more suited for the normal office environment.
  • The Cluster Computer 300 footprint can be distributed throughout a normal office area or placed in a dedicated floor space.
  • the distributed platforms present no more exposure to natural or man-made disaster than highly portable personal computers, and because of the Virtual System Bus 600, the CPU Platforms and particularly I/O Processing platforms may be remotely located in secure areas thereby greatly minimizing loss or theft of critical information.
  • All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.

Abstract

In one embodiment, multiple Pentium class personal computers (301) and a System 390 processor card (201) are integrated into a novel 390 mainframe computer (300) with full 390 I/O capability. The Processor and I/O Subsystem tasks are performed by individual Pentium personal computer platforms (305). This embodiment utilizes a novel Virtual System Bus (600) which provides inter-platform data transfer at or near the internal computer platforms' system bus speed.

Description

NOVEL COMPUTER PLATFORM CONNECTION UTILIZING VIRTUAL SYSTEM BUS
INTRODUCTION Technical Field This invention pertains to a novel method of connecting multiple computer platforms to perform as a global computer system. This invention pertains to computers that execute IBM Corporation's System 370 and 390 instruction set as well as any other computer system whose sub-functions performed by independent platforms can be integrated into the greater global function by clustering those computer platforms.
Background
Computers are well known in the prior art and much has been made recently of microcomputers, often referred to as personal computers or PCs. PCs have improved significantly in recent years in their disk storage capacity and processing power, allowing PCs to become very popular with the general populace. However, for many functions, PCs are less useful in mission critical applications than one might imagine and, for this reason, computer networking has undergone technological advances. By networking a plurality of PCs, users on a network can share files and disk space. While processing typically takes place on each individual's PC, generally the loss of service from a single PC would not cripple the mission. Notwithstanding the significant advances in PC technology and PC networking technology, mainframe computers remain highly important. Mainframe computers provide significantly more connectivity of peripherals and greater on-line storage capacity, as well as greater reliability, information processing through-put, and data security than PCs. It is the mainframe computer, mainframe software operating system, and mainframe application programs that are used by major institutions to perform data processing tasks on large volumes of extremely valuable information. For example, insurance companies, government services, airline reservation systems, banks, and credit card institutions are representative of those who rely on the mainframe. By mainframe, we refer to the larger general purpose information processing systems which have performed governmental, business, and institutional processing for decades before the Personal Computer was developed. Mainframe hardware and operating system architecture (logical structure) were designed for business rather than personal information processing. Business applications require substantially more input/output capability such as multiple and redundant paths to I/O devices resulting in higher throughput performance and shorter times to access information. An example is on-line storage. A PC may have one or two disk drives or connect to servers over shared LANs, but a mainframe complex, through its multiplicity of paths, can connect hundreds and support architecturally thousands of disk storage I/O devices along with tape libraries containing thousands of gigabytes of on-line storage, and, through communication processors, make that information available to thousands of users. Mainframe technology and, more importantly, mainframe software operating systems are devoted to data integrity, error detection/recovery, redundancy, operational reliability, and data security. Mainframes are controlled by robust and mature operating systems that integrate the data processing applications and system integrity. The predominant mainframe architecture is the IBM System 360/370 and the evolved successor, the IBM System 390.
The Personal Computer on the other hand permits the individual user to perform complex personal work, data inquiries, and entertainment processing. Since the PC was first offered, efforts to interface the PC-to-mainframe have proceeded vigorously. As a result, a PC user can work with local PC disk storage and processing capability for a personal task such as drawing, word processing, maintaining source code, composing music, or performing software development, and yet have an interface to a mainframe which allows that PC to perform terminal emulation as a so called "dumb terminal". Where in the past the central host processor performed the terminal's data, keyboard, and screen manipulation, the PC now can retrieve and store files to the mainframe and perform the dumb terminal processing. This is more efficient for both the PC and the mainframe because the mainframe no longer spends resources to manipulate screens and keystroke controls and is freed to more efficiently execute its primary purpose of major business processing: storing, protecting, and retrieving information for those authorized personal computer users.
While there have been significant improvements in PC technology and PC networking technology in recent years, mainframes have shown improvements as well, particularly in operating system robustness, technology, processing speed, multiprocessing, intelligent I/O architecture, and reliability. It's not unusual for a mainframe to run continuously for over a year without the anomalies that require the system to be rebooted or otherwise reinitialized.
However, unlike PCs which to a large extent rely on components available from a variety of vendors, mainframes can contain a substantial amount of unique and, in comparison to a Personal Computer, expensive technology which was designed and produced for performance and reliability by the mainframe vendor. Moreover, the consumer oriented PC uses electronic components, chip sets, and disk drives from a number of sources along with add-in boards for video graphics drivers, memory capacity, network support, and the like. The Personal Computer is a consumer product and the manufacturer is driven to achieve the lowest pricing. PC performance is primarily a function of the microprocessor and memory speed leaving little flexibility to differentiate from other PC manufacturers other than by cost.
Mainframe manufacturers strive to constantly improve performance and reliability and technology flexibility in their implementations but their components may be unique and more expensive. Also, large mainframes require dedicated building facilities to contain the system components and maintain a controlled environment which are very costly.
PCs are inherently less expensive, less stable, and less reliable than mainframes, as can be readily attested to by any PC user who has had to "reboot" their computer. This lower reliability can be a function of some lower quality commercial components, but primarily it is a function of the PC operating systems and application software. PC operating systems were developed for the personal user and as such are user friendly, but without the mainframe's extensive safeguards to prevent operational errors. PC operating systems were developed for personal application software ranging from playing video games and drawing posters to providing low cost processing for small business concerns. A software hang up normally results in the user rebooting the personal computer and restarting the program. Experienced personal computer users, for that reason, are diligent at backing up their work.
Mainframe operating systems and application software were designed to support financial institutions, automobile manufacturers, manned space shots, national defense, and other mission critical applications. In comparison, the PC operating systems do not have the experience of decades-of-service in mission critical applications provided by the mainframe operating systems. Mainframes are extremely reliable and stable and rarely, if ever, need to be "rebooted". Mainframes utilize a logical structure (architecture) which supports parity checking, error correction, path redundancy, error recovery, and that architecture is supported by comprehensive and mature software operating systems which have steadily improved in operational reliability since the middle 1960's when the original O/S 360, the foundation of System 390, was introduced. Mainframe operating systems are designed to take advantage of those inherent features to make the total mainframe information processing system more reliable. This is not the case with Personal Computers or the somewhat more powerful workstations, leading to the questionable use of such lower cost systems in highly integrated and critical applications.
Furthermore, Personal Computers, and workstations as well, have significantly lower amounts of input/output (I/O) capability, whereas mainframes have been designed and constructed to allow substantial amounts of separate I/O connections and thousands of simultaneous users. While PCs and workstation networks allow a number of simultaneous users, a shared LAN network can rapidly become saturated by requests because of bandwidth limitations or the inability to maintain the number of I/O connections as compared to mainframes. Even with PCs that are programmed to perform as servers, neither the reliability nor the performance of the mainframe architecture is realized.
However, with today's rapid technology advances, the microprocessors are very powerful at executing instructions. Semiconductor technology is becoming denser and faster, making possible microprocessor chips that operate at clocking speeds of 200 megahertz or more. A cost effective approach to mainframe information processing is to successfully integrate the very powerful microcomputers and one or more System 390 instruction capable processors into a system that exhibits complete mainframe operating system and application capability. To date, electrical busing distance limitations have hindered or limited this approach. Figure 1 is a block diagram of a typical prior art single or "uni" processor Mainframe Computer 100, including central processing unit (CPU) complex 101 coupled directly to Cache Memory 102 for fast access to recently used data. The CPU complex 101 with Cache Memory 102 and I/O processor 104 are coupled to main system bus 103 in order to access main memory 105 to store and retrieve programs and data. All accesses to system bus 103 are controlled via requests which result in grants provided by the system bus controller 106.
Service processor 110, also coupled to the system bus 103, provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components. Other implementations treat the service processor 110 as an I/O control unit and peripheral device.
Mainframe 100 also includes I/O Processor 104 which is also coupled to the system bus 103 for memory accesses and which receives instructions from CPU complex 101 pertaining to input/output functions and which I/O Processor 104 offloads tasks from CPU complex 101 and executes those tasks required by those I/O functions. As required for the act of program execution and data processing by CPU complex 101, these I/O processing operations transfer information which may contain data or programs to and from Main Memory 105 via communications on System Bus 103, as well as all other I/O functions which are executed by channels 107-1 through 107-N. "Channel", a direct memory access channel, as used in connection with System 390 mainframes, means System 390 compatible I/O Channel paths, also referred to as Channel Path ID (CHPID). Channels 107-1 through 107-N are each coupled to System Bus 103 through the I/O Processor-Channel complex 104. System Bus 103 is organized in a parallel format which is made up of groups of lines, each line representing a bit. Bits combine to represent bytes which are 8 data bits plus 1 parity bit. The typical large mainframe System Bus 103 is approximately 116 bits wide including 8 bytes of 64 data and 8 associated parity bits, four bytes of 32 address bits and 4 associated parity bits, and several control bits. Control bits are highly design dependent and may or may not be required to be in a byte format. However, all information in the mainframe is represented in bytes of eight binary bits or groups of bytes. The byte is the smallest addressable component of information. For convenience in manipulation and/or calculation, bytes may be organized into groups which are referred to as "half words" (2 bytes), "words" (4 bytes), and "double words" (8 bytes). System Bus 103 is formed of high speed copper wire connections which are typically capable of one hundred to several hundred megabytes per second bandwidth, yet are limited by the physical length, which affects the electrical characteristics of capacitance, resistance, and inductance, whose effect is to attenuate and distort high speed binary signals as the switching speed and length of the System Bus 103 increase. Bandwidth limitation, electromagnetic interference susceptibility, and data distortion errors normally result from excessive length of System Bus 103. In the prior art mainframe 100, System Bus lines are kept as physically short as possible (e.g., contained on a system backplane) for data integrity. System Bus Controller 106 is included in order to maintain control of System Bus 103, receiving bus requests from various complexes requiring access to Main Memory 105 via System Bus 103, and granting allocations to those various complexes of Mainframe 100 in order to maintain individual accesses via System Bus 103 to and from Main Memory 105.
In such prior art mainframes, CPU complex 101 performs I/O operations through channels 107-1 through 107-N. All data transferred in I/O operations is either retrieved from or stored into Main Memory 105, other than in specific operations called READ SKIP, which serve to allow a tape or other I/O unit to move recording media without storing unwanted data in Main Memory. Data being transferred from Main Memory to an I/O device such as disk storage is considered to be a WRITE to the I/O device. Data being stored in Main Memory from an I/O device such as disk storage or tape is considered to be a READ into Main Memory. Each channel represents an individual path of the I/O Processor-Channel complex 104 located between Main Memory 105 and the I/O peripheral. Each channel operation is a direct memory access function and must directly or indirectly connect to the System Bus 103; each typically shares access to the System Bus Controller 106 and System Bus 103 through I/O Processor-Channel complex 104 in order not to replicate System Bus 103 access hardware, and possibly any firmware controlling System Bus 103 accesses. The most elementary I/O Processor-Channel complex 104 would consist of an I/O processor, System Bus and System Bus Controller access logic, Channel and Main Memory data transfer facilities, I/O control unit selection, command presentation, I/O Control Unit data transfer facilities, and operation and electrical termination facilities (not shown).
In the prior art, each Channel is connected to its associated I/O Control Unit by one of two methods. Channel lines 109-1 and 109-2, which can connect up to eight I/O control units to Channel 107-1 or to Channel 107-2, are formed of high speed copper wire connections which are typically capable of approximately 4.5 million bytes per second (MBS) of bandwidth and yet are limited to approximately 400 feet in physical length. The copper cables, electrical characteristics, and protocol that are used to connect the I/O Control Unit are known in the prior art and referred to as the Original Equipment Manufacturer's Interface, or OEMI. The advantage of the prior art OEMI interface is that it is a parallel interface that performs universal connection to prior art parallel I/O control units, and as such it was adopted as Federal Information Processing Standard-60 (FIPS-60). Disadvantages of the OEMI interface, however, relate to the electrical transmission characteristics of capacitance, inductance, and resistance, whose effects increase as the length of the copper cable increases and constrain the OEMI protocol, limiting bandwidth to approximately 4.5 MB/sec and the practical length of a universal OEMI interface cable to 400 feet.
Channels 107-3 through 107-N are serial in nature and utilize fiber-optic cables which are capable of 17 MBS data transfer bandwidth and of reliable connections up to 3 kilometers when Light Emitting Diodes are used as transmitters. Channel lines 109-3 through 109-N are, in most cases, connected to Director 118, which performs a subsequent connection function to the Serial Interface I/O Control Unit 121. This prior art serial interface, which defines encoding and decoding of transmission characters, optical wavelength and power requirements, connectors, protocols, framing, and fiber-optic cable characteristics, is a unique and proprietary serial I/O interface standard of IBM Corporation known as the Enterprise Systems Connection architecture, or ESCON. The advantage of the prior art ESCON serial interface is that it is a serial interface that performs universal connection to prior art Directors 118 and serial I/O control units 121. The disadvantage is that ESCON is a point to point connection and requires the Director 118, a serial switcher, to connect more than one serial I/O Control Unit to Channels 107-3 through 107-N, as described in prior art Figure 1.
On the other hand, prior art ESCON interfaces are serial fiber-optic in nature and can provide connections to ESCON capable I/O control units over greater distances than the OEMI cable and can provide higher data bandwidths. A typical prior art ESCON connection may provide distances up to approximately 3 km, with a data transmission bandwidth of approximately 17 MB/sec.
As shown in Figure 1, each I/O processor complex includes one or more channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives or magnetic tape drives, as switched by Director 118. Director 118 connects the serial control and data signal interface between channel unit 107-1 and the serial I/O Control Unit and peripheral devices. In this manner, standard mainframe commands and data are applied from channel unit 107-1 to the I/O peripheral control unit. Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate the specific electrical signals necessary to control and exchange data with the selected I/O peripheral device. Disadvantages of prior art mainframe architectures include limitations on the distance between channel complexes and peripheral devices due to the use of either OEMI or ESCON connections. Another disadvantage is the limitation on the length of the copper buses between CPU complexes and channel complexes, which are affected by their electrical characteristics of capacitance and inductance.
In the prior art, electrical buses interconnect the I/O processor and channel complexes and the CPU complex. This results in high operating environment costs, since a significant amount of computer equipment must be placed in a large dedicated computer center, necessitating support personnel and costly electrical and air conditioning systems. Due to the complexity of the mainframe instruction set, architecture, and technology, and the very stringent requirements to maintain compatibility with the IBM software operating systems, development costs are very high. Few vendors enter the mainframe market, with a resultant increase in cost. In prior art mainframes, cyclic redundancy codes (CRC) and parity bits support data and control integrity when communicating via OEMI or ESCON connections in order to increase the reliability of the information transmitted. The use of CRC and parity is sufficient to provide a highly reliable data interchange between the channel path and the I/O control unit. Mainframe reliability does have the disadvantage of high price, which derives from hardware costs and the massive operating system software development and testing costs.
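As an illustration of the cyclic redundancy codes mentioned above, the following C sketch computes a bitwise CRC over a block of bytes. The CCITT polynomial and initial value are chosen here purely for illustration; the patent does not specify which CRC the prior art interfaces use.

```c
/* Illustrative sketch (not from the patent): a bitwise CRC of the kind used
 * to protect serial channel transmissions.  The CCITT polynomial
 * x^16 + x^12 + x^5 + 1 and initial value 0xFFFF are illustrative choices. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;                 /* conventional initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;     /* fold next byte into the CRC */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const uint8_t frame[] = { 0xDE, 0xAD, 0xBE, 0xEF };
    printf("crc = 0x%04X\n", crc16_ccitt(frame, sizeof frame));
    return 0;
}
```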
However, for over thirty years, the mainframe has been the basis for the hundreds of billions of dollars and millions of man hours invested in mainframe operating systems and application software performing extremely critical tasks, such as financial processing, civilian and military aircraft development, manned and unmanned space missions, critical national defense applications, and the like. The monetary cost and time spent rewriting such applications to run on systems other than mainframes would be astronomical. Added to that is the fact that any such redeployment to non-mainframe platforms would inevitably result in massive numbers of errors, requiring more time and money to be spent on debugging and lost productivity. There is no known way to calculate the costs of lost data integrity, reliability, or security, or of the inherent errors and processing delays incurred prior to attaining today's level of function and reliability. This vast investment, the risk of change, and the reliability provided by mature operating systems have resulted in serious information users being firmly committed to their mainframe applications and systems in order to avoid these difficulties and costs. While mainframe costs are high today, the billions of dollars and millions of man hours already spent debugging and streamlining the mainframe applications and systems influence the continued commitment to these well-proven mainframe applications and systems. Accordingly, mainframes are likely to remain of significant importance for many years to come, in spite of their high cost and unique designs provided by a relatively few vendors.
Figure 2 is a block diagram of a typical prior art single or "uni" processor Mainframe Computer 100 including central processing unit (CPU) complex 101 coupled directly to Cache Memory 102 for fast access to recently used data. Both CPU complex 101 and Cache Memory 102 are coupled to main system bus 103 in order to access main memory 105 to store and retrieve programs and data. Service processor 110, also coupled to the system bus 103, provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components. All accesses via requests and grants to use the system bus 103 are controlled by the system bus controller 106.
In Figure 2, mainframe 100 includes a multiplicity of I/O Processors 104 and 120, also coupled to the system bus 103, which receive instructions from CPU complex 101 pertaining to input/output functions, offload tasks from CPU complex 101, and execute the tasks required by those I/O functions. The multiplicity of I/O Processors demonstrates the multiple paths to disk I/O control units 112, 115, and 125 and to tape I/O control units 110 and 123. Each I/O processor complex includes one or a multiplicity of channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives 113, 114, 116, and 117 or magnetic tape drives 111 and 124, as attached by an electrical OEMI interface or ESCON fiber-optics switched by director 118. In addition, I/O Processor 104 with channel 107-0 and OEMI lines 109-0, and I/O Processor 120 with channel 122-0, are each connected to tape I/O Control Unit 110. I/O Processor 104 and channels 107-1 and 107-2 connect to disk I/O Control Units 112 and 115, and I/O processor 120 and channels 121-1 and 121-2 likewise connect to disk I/O Control Units 112 and 115, as provided by System 390 I/O Architecture. Director 118 connects the serial control and data signal interface between I/O Processor channel 107-1 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, while I/O Processor 120 and channel 121-n connect to Director 118 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, as shown. In this manner, standard mainframe commands and data are applied from I/O Processor 104 and I/O Processor 120 to the I/O peripheral control units 123, 125, and others. Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate the specific electrical signals necessary to control, store, and retrieve information with the selected I/O peripheral device.
This configuration demonstrates the superiority of mainframe I/O architecture, which is designed to support multiple paths to information. The failure of a channel or even of an I/O Processor complex would not prevent access to the valuable information and thereby disable the function of the entire mainframe. A disadvantage is that, due to the electrical system bus and the limited distance over which binary signals may be driven reliably over an electrical bus, the I/O Processors must be physically close together and close to the Central processor complex, necessitating expensive facilities with environmental control and dedicated floor space, and exposing the entire system's information to natural or man-made disaster.

Figure 3 is a block diagram depicting a tightly coupled multi-processor prior art System 390 mainframe 100-MP, where a multiplicity of central processing unit (CPU) complexes 101-1 through 101-N are coupled directly to Cache Memories 102-1 through 102-N for fast access to recently used data. Tightly coupled means that all CPU complexes 101-1 through 101-n and Cache Memories 102-1 through 102-n are coupled to the main SYSTEM BUS 103 in order to access MAIN MEMORY 105 to store and retrieve programs for execution by each CPU complex 101.
Service processor 110 is also attached directly to the SYSTEM BUS 103 and provides the operator console function for configuring the system, controlling the operational aspects of the system, and keeping track of errors and/or failing components. Prior art Mainframe 100-MP may also include a multiplicity of I/O Processors 104 and 120, all of which are directly attached to the same System Bus 103, which receive instructions from CPU complexes 101-1 through 101-n pertaining to input/output functions, and which execute tasks from CPU complexes 101-1 through 101-n as required by those I/O functions. All accesses via requests and grants to use the System Bus 103 are controlled by the System Bus Controller 106.
The multiplicity of I/O Processors 104 and 120 demonstrates the multiple paths to disk I/O control units 112, 115, and 125 and to tape I/O control units 110 and 123. Each I/O processor complex includes one or a multiplicity of channel paths which interface between CPU complex 101 and one or more mass storage devices, such as disk drives 113, 114, 116, and 117 or magnetic tape drives 111 and 124, as attached by an electrical OEMI interface or ESCON fiber-optics switched by director 118. In addition, I/O Processor 104 with channel 107-0 and OEMI lines 109-0, and I/O Processor 120 with channel 122-0, are each connected to tape I/O Control Unit 110. I/O Processor 104 and channels 107-1 and 107-2 connect to disk I/O Control Units 112 and 115, and I/O processor 120 and channels 121-1 and 121-2 likewise connect to disk I/O Control Units 112 and 115, as provided by System 390 I/O Architecture.
Director 118 connects the serial control and data signal interface between I/O Processor channel 107-1 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, while I/O Processor 120 and channel 121-n connect to Director 118 and the serial I/O Control Units 123 and 125 controlling peripheral devices 124, 126, and others, as shown. In this manner, standard mainframe commands and data are applied from I/O Processor 104 and I/O Processor 120 to the I/O peripheral control units 123, 125, and others. Director 118 passes those standard mainframe format Channel Command Word (CCW) commands and data to the proper I/O control unit in order to generate the specific electrical signals necessary to control, store, and retrieve information with the selected I/O peripheral device.
This configuration demonstrates the superiority of mainframe processing architecture, which supports simultaneous operation of application programs sharing physical resources and information among several processors and, in conjunction with the superior I/O architecture, supports multiple paths to allow sharing of and redundant access to information. The failure of a Central processor, a channel, or even an I/O Processor complex would not prevent program execution or access to the valuable information and thereby disable the function of the entire mainframe. A disadvantage is that, due to the electrical system bus and the limited distance over which binary signals may be driven reliably over an electrical bus, the Central and I/O Processors must be physically close together, requiring expensive proprietary technology and packaging, necessitating expensive building facilities with environmental control and dedicated floor space, and exposing the entire system's information to natural or man-made disaster. For purposes of explanation of tightly coupled mainframe 100-MP, one could consider Processor complexes 101-1 through 101-n to be coupled "at the head" of the mainframe 100-MP system, since all Processor complexes 101-1 through 101-n share the same system bus 103, system bus controller 106, and main memory 105.
Figure 4 is a block diagram depicting loosely coupled multiple prior art Mainframe Computers 100 and/or Mainframe Computers 100-MP, including central processing unit (CPU) complexes 101 and Cache Memories 102 for fast access to recently used data. An independent mainframe 100 or 100-MP, diagrammed as System A, contains one or a multiplicity of CPU complexes 101 with Cache Memories 102 and one or a multiplicity of I/O processors 104 coupled to System A's main System Bus 103 in order to access System A's main memory 105 to store and retrieve programs and data. All System A accesses to the System Bus 103 are controlled via requests and grants given by System A's System Bus controller 106. Likewise, the mainframe 100 or 100-MP described as System B contains one or a multiplicity of CPU complexes 101 with Cache Memories 102 and one or a multiplicity of I/O processors 104 coupled to System B's main System Bus 103 in order to access System B's main memory 105 to store and retrieve programs and data. All System B accesses to the System Bus 103 are controlled via requests and grants given by System B's system bus controller 106. The system buses of System A and System B are not directly connected; System A and System B are connected through their respective I/O subsystems. Mainframe System A and mainframe System B are said to be loosely coupled because they do not share a single System Bus 103.
The connections and paths by which data is shared between the individual mainframes 100 of System A and System B are enabled through the respective I/O processors, 104-A and 104-B, of System A and System B. From the I/O processors 104-A and 104-B, the OEMI I/O interfaces 109-0A and 109-0B are connected by telecommunications front-end processors 402-A and 402-B, while the OEMI I/O interfaces 109-2A and 109-2B are connected by a channel-to-channel adapter 400, which appears as an I/O control unit to each interface. However, ESCON architecture eliminates the need for a channel-to-channel adapter 400 by providing a direct serial protocol between Systems A and B: ESCON I/O interface 109-5A of System A directly connects to ESCON I/O interface 109-5B of System B.
Data is exchanged by access method software operating in System A and System B which performs synchronized I/O operations between the two systems. To send data from System A to System B or vice versa, an I/O write operation is initiated by the sending system and an I/O read operation is initiated by the receiving system. I/O operations are controlled by I/O instructions and Channel Command Words, previously described with reference to Figure 1.
The data transferred from System A to B (or vice versa) moves from System A memory 105-A via system bus 103-A to I/O processor 104-A, to one of the channels and respective I/O interfaces 109-0A, 109-2A, or 109-5A, and connects directly, if ESCON, or to a connecting unit such as the channel-to-channel adapter or front-end processor, which connects to I/O interfaces 109-0B, 109-2B, or 109-5B and respective channels, which connect to I/O Processor 104-B, which connects to memory 105-B via system bus 103-B. The advantage of loosely coupled systems is that information may be exchanged or shared locally over OEMI wire or ESCON fiber-optics, or over great distances using high speed telecommunication media to connect the loosely coupled systems. The disadvantage is that I/O communications are much slower than system bus speeds, with a maximum speed of 4.5 megabytes per second for OEMI or 17 megabytes per second for ESCON (in comparison to 100 megabytes per second and greater for a system bus), and require high level I/O programming and synchronized I/O operations, in which System A performs a write operation while System B performs a read operation or vice versa, in order to initiate and handle data transfer between systems; this is inefficient if a large amount of information is to be transferred.
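The synchronized discipline just described, one side's WRITE paired with the other side's READ, can be sketched in C. The buffer, names, and sizes below are invented for illustration and are not part of the patent; a byte buffer stands in for the channel-to-channel adapter.

```c
/* Illustrative sketch (not from the patent): the paired I/O discipline of a
 * loosely coupled transfer.  The sending system runs a WRITE while the
 * receiving system runs a matching READ; all names here are invented. */
#include <stdio.h>
#include <string.h>

#define CTCA_CAPACITY 256

/* The channel-to-channel adapter appears as an I/O control unit to both sides. */
struct ctca { unsigned char buf[CTCA_CAPACITY]; size_t len; };

/* System A: a WRITE moves data from A's "main memory" into the adapter. */
static void system_a_write(struct ctca *a, const unsigned char *mem, size_t n)
{
    a->len = n < CTCA_CAPACITY ? n : CTCA_CAPACITY;
    memcpy(a->buf, mem, a->len);
}

/* System B: the synchronized READ moves the data into B's "main memory". */
static size_t system_b_read(const struct ctca *a, unsigned char *mem, size_t max)
{
    size_t n = a->len < max ? a->len : max;
    memcpy(mem, a->buf, n);
    return n;
}

int main(void)
{
    struct ctca adapter = { {0}, 0 };
    unsigned char mem_a[] = "record from System A";
    unsigned char mem_b[64] = {0};

    system_a_write(&adapter, mem_a, sizeof mem_a);             /* A initiates WRITE */
    size_t got = system_b_read(&adapter, mem_b, sizeof mem_b); /* B initiates READ  */
    printf("System B received %zu bytes: %s\n", got, mem_b);
    return 0;
}
```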
For purposes of explanation, loosely coupled Systems A and B could be considered to be connected "at-the-feet," in comparison to the tightly coupled method of being connected "at-the-head."
Figure 5 shows the prior art Hybrid 390/Personal Computer 200, marketed by IBM as the Data Server 500 series, System 390, which executes System 390 mainframe applications software at much lower cost than the classical mainframe, but with the very limited input/output performance of a personal computer: the 390 I/O functions are emulated in software using the personal computer's limited peripheral complex, must compete for bandwidth with the personal computer's native I/O functions, and must compete for slots on the distance-limited personal computer electrical system bus.
IBM Corporation manufactures a personal computer add-in card referred to as the Personal or P-390 card 201. The P-390 card 201 is a CMOS System 390 microprocessor complex which executes the 390 instruction set with the exception of ESCON, PR/SM, Parallel Sysplex, Coupling Links, the Integrated Coupling Migration Facility, Sysplex, the Sysplex Timer, Dynamic Reconfiguration Management, the Vector Facility, Expanded Vector Facility Instructions, the Asynchronous Page-Out Facility, ICRF, and the Asynchronous Data Mover Facility. Most 370 and 390 business application programs do not require these functions and will run on the P-390 card without modification, conversion, or reassembly. The P-390 card 201 serves as the 390 processor in the hybrid 390/PC computer 200 known as the IBM Data Server 500 Series, System 390.
The majority of the technology in Hybrid 390/PC Computer 200 consists of less expensive off-the-shelf personal computer technology. The computer 200 contains a P-390 central processor card 201 and one or more Pentium microprocessors. For I/O, one or two 500 megabyte to 4 gigabyte hard disks (with more disks in RAID configurations as options) serve as the primary substitute for the I/O control unit and disk storage of the expensive classical mainframe. Other add-in cards serve as communications connections and control peripherals such as tape drives, etc. The hybrid 390/PC computer 200 is packaged in a normal PC server single-case enclosure.
The P-390 card 201 plugs into the personal computer motherboard 202 along with the prior art Pentium microprocessor 205 and other normal PC functions. The Pentium 205 and Memory 206 are attached to the PCI bus and connect via the PCI bus to the Peripheral Functions 207. This combination of P-390 Card 201 and the Personal Computer, consisting of the microprocessor 205 and Memory 206 supported by components 202 through 207, makes up the prior art Hybrid 390/PC Computer 200. Typical PC Peripherals 207 are the floppy disk, one or more hard disk drives, network cards for LAN and other communications, and optional CD-ROM and/or tape unit functions. The PCI bus serves as the System Bus 203 for the Pentium 205 microprocessor, Peripheral Functions 207, and PCI capable P-390 Card 201. Earlier versions of P-390 Card 201 utilizing the MCA bus connectors have logical bridging circuitry to connect to the PCI bus for accesses among facilities; in this instance, the MCA and PCI buses combine to form System Bus 203. System Bus Controller 204 consists of hardware bus grant logic circuitry, any necessary bridging circuitry, software protocols, and software running in both the P-390 card and the microprocessor. The combined circuitry and software form System Bus Controller 204 and serve to control the exchange of information among the processors and other complexes of computer 200.
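The bus grant logic mentioned above can be pictured as a simple arbiter. The round-robin policy and all names in this C sketch are assumptions made for illustration, not details taken from the patent.

```c
/* Illustrative sketch (not from the patent): a request/grant discipline a
 * system bus controller of the kind described above might apply.  A simple
 * round-robin arbiter is assumed purely for illustration. */
#include <stdio.h>

#define N_REQUESTERS 4   /* e.g., CPU complex, I/O processor, P-390 card, ... */

/* Given a bitmask of pending requests, grant the next requester after
 * `last`, wrapping around; returns -1 if nothing is requesting. */
static int grant_next(unsigned pending, int last)
{
    for (int i = 1; i <= N_REQUESTERS; i++) {
        int candidate = (last + i) % N_REQUESTERS;
        if (pending & (1u << candidate))
            return candidate;
    }
    return -1;
}

int main(void)
{
    unsigned pending = 0x0A;            /* requesters 1 and 3 want the bus */
    int owner = -1;
    for (int cycle = 0; cycle < 4; cycle++) {
        owner = grant_next(pending, owner);
        printf("cycle %d: bus granted to requester %d\n", cycle, owner);
    }
    return 0;
}
```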
Prior Art Figure 5 also shows the Software Components 208 organization of Hybrid 390/Personal Computer 200. The P-390 software complex 209 is made up of the S\390 operating system and S\390 application software. The Pentium microprocessor software complex 210 consists of the OS/2 operating system, Communications Manager 215, and P-390 I/O Subsystem 211.
The P-390 I/O Subsystem 211 software components are the P-390 Device Driver 212, S\390 Channel Emulator 213, and Device Managers 214. The P-390 software complex cooperates with the microprocessor software complex: the Personal Computer OS/2 software emulates the S\390 I/O functions for the S\390 Operating System and application programs. P-390 Device Driver 212 software provides the operational interface between P-390 software 209 and Pentium software 210, which performs the channel and I/O device emulation. In the prior art (IBM DATA SERVER SERIES 500) 390/PC Computer 200, I/O Subsystem 211 is made up of OS/2 PC application programs that emulate the S\390 channel, I/O control units, and I/O devices. I/O Subsystem 211 contains several software modules: the P-390 Device Driver 212, S\390 Channel Emulator 213, and specific Device Manager modules 214, which transform S\390 mainframe I/O device data formats into forms acceptable to the PC Peripherals 207.
The P-390 Device Driver 212 software starts and stops P-390 Card 201, performs Initial Program Load (IPL), handles interrupts between microprocessor 205 and P-390 Card 201, and is written as a standard OS/2 PC device driver. S\390 Channel Emulator 213 interfaces between the Device Manager modules 214 and P-390 Device Driver 212.
When a System 390 I/O operation is initiated by P-390 Card 201 across System Bus 203 to microprocessor 205, microprocessor 205 invokes P-390 Device Driver 212. P-390 Device Driver 212 then passes information and control to the S\390 Channel Emulator 213. S\390 Channel Emulator 213 converts 390 Channel Command Word (CCW) formats into PC format and selects the appropriate Device Manager 214 to exercise its PC peripheral to emulate the I/O operation requested by P-390 Card 201 and 390 Software 209. For example, any information that is to be retrieved from the PC disk storage is provided in the proper format by the S\390 Channel Emulator 213 to the P-390 Device Driver 212, which accesses and stores the information into the P-390 memory across System Bus 203. Information to be written to the PC peripheral disk storage is retrieved by S\390 Channel Emulator 213 from the P-390 memory across System Bus 203 and provided to the appropriate OS/2 Device Managers 214 to be mapped into the PC disk format. Device Managers 214 are PC (OS/2) application programs which are software versions of System 390 I/O Control Units and emulate the S\390 devices in the PC disks and other hardware.

While prior art Hybrid 390\PC Computer 200 does execute the S\390 operating system and application programs and is significantly less expensive than a mainframe computer, the disadvantage of computer 200 remains the same as that of the normal PC, i.e., insufficient I/O capability. All 390 I/O operations are emulated in the PC peripherals, which present the most significant deficiency in comparison to the mainframe I/O peripherals and a corresponding lack of performance. Therefore, performance for a number of users degrades as the 390 applications require more information to be retrieved or stored. As with a Personal Computer, prior art hybrid computer 200 may be networked using PC LAN technology, but the same lack of I/O operation capability will saturate the network as happens with a normal PC. Special add-in cards which perform the functions of a S\390 I/O Channel could be inserted into the PCI bus to operate certain System 370 and 390 I/O control units. However, the PCI bus, which is standard among PC manufacturers, limits the number of connectors and plug-in cards because of length and electrical driver characteristics. Also, the processing for the add-in card would necessarily be performed by the Computer 200 platform Pentium, which is already handling S\390 emulation and 390 Device Driver control for the P-390 Card 201.
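The emulation path just described, CCW conversion followed by device manager selection, can be sketched in C. The structure layout and dispatch table below are invented for illustration; only the general CCW fields (command code, data address, count) follow the S/370-style format the patent refers to.

```c
/* Illustrative sketch (not from the patent): the shape of the CCW-to-PC
 * translation a channel emulator performs.  The dispatch table and the
 * device-manager function are invented names for illustration only. */
#include <stdint.h>
#include <stdio.h>

struct ccw {                 /* simplified channel command word */
    uint8_t  cmd;            /* e.g., 0x01 WRITE, 0x02 READ */
    uint32_t data_addr;      /* location in P-390 (mainframe) memory */
    uint16_t count;          /* byte count for the transfer */
};

typedef void (*device_manager_fn)(const struct ccw *);

/* Stand-in for an OS/2 device manager emulating a disk control unit. */
static void disk_manager(const struct ccw *c)
{
    printf("disk manager: cmd=0x%02X addr=0x%06X count=%u\n",
           (unsigned)c->cmd, (unsigned)c->data_addr, (unsigned)c->count);
}

/* The emulator maps a device address to the manager emulating that unit. */
static device_manager_fn select_manager(uint16_t device_addr)
{
    (void)device_addr;       /* one device class in this sketch */
    return disk_manager;
}

int main(void)
{
    struct ccw write_op = { 0x01, 0x020000, 4096 };
    select_manager(0x0190)(&write_op);   /* hypothetical device address */
    return 0;
}
```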
SUMMARY

Integrating microcomputers into a full 390 system configuration is made feasible by the novel virtual system bus connection and the virtual system interface controller environment described in this document. This unique approach connects powerful commercial microcomputers, serving as platform support and as mainframe capable I/O processors and channels, to off-the-shelf 390 processors, RAID disks, and tape systems. These cost effective and high performance peripheral complexes enable less expensive mainframe applications while providing System 390 I/O performance, reliability, security, and capacity.
In one embodiment, multiple Pentium class personal computers and a System 390 processor card are integrated into a novel 390 mainframe computer with full 390 I/O capability. The Processor and I/O Subsystem tasks are performed by individual Pentium personal computer platforms. This embodiment utilizes a novel Virtual System Bus which provides inter-platform data transfer at or near the internal computer platforms' system bus speed.
In accordance with the teachings of this invention, System 390 computer organizations are taught that operate the mainframe operating system, mainframe application software, and a high performance, high capacity I/O structure. In the classical organization, the majority of the system is implemented using proprietary and expensive technology. This classical mainframe computer complex is operated by a mature and robust operating system and performs mission critical functions for all manner of users. The classical mainframe is limited by the distance, speed, and drive limitations of the electrical system bus when multiple central processors and/or I/O Processors are connected on the common system bus to share a common memory as an integrated or tightly coupled system. A loosely coupled connection of mainframe systems sharing access to data permits greater distance flexibility, but is limited by maximum information transfer speeds of 4.5 and 17 megabytes per second and by the central processor and I/O processor overhead generated by I/O operation software. The proprietary technology of classical mainframes is generally much more expensive than commercial technology and requires much more in the way of physical building facilities and floor space, specialized packaging to minimize system bus lengths, and cooling. The prior art hybrid Personal/390 (IBM Data Server Series 500) computer organization also executes the mainframe operating system and mainframe application software, and the majority of that computer is implemented in commercially available technology at great savings in cost over proprietary mainframe technology; however, the personal 390 is unable to utilize the extensive mainframe I/O capabilities, since all I/O functions are implemented in the personal computer platform's peripherals and the 390 I/O processor channel functions are emulated in software. I/O emulation provides very limited I/O performance in comparison with mainframe I/O capability.

In novel 390/PC computer 300, the Virtual System Bus and Virtual System Interface Controller integrate hardware and software components to expand the main data artery, the System Bus, into a Virtual System Bus which connects local System Buses of computer platforms without the distance, speed, or electrical drive limitations of either mainframe proprietary technology or the PCI buses found in personal computers. Individual and commercially available computer platforms, such as the Pentium based personal computer, may serve as a 390 CPU complex or as a 390 I/O Processor. Also, if desired, a plurality of Personal Computers may serve as 390 Computer complexes and/or I/O Processors. The Virtual System Bus, implemented locally with serial fiber-optic media, can connect multiple PC PCI System Buses without electrical distance or drive limitations or susceptibility to electromagnetic interference. The Virtual System Interface Controller formats and exchanges information between computer platforms over the Virtual System Bus at or near the native speed of the local computer platform's PCI, standard, or proprietary bus. The serial characteristics of the VSIC and Virtual System Bus permit connection to telecommunications media such as ATM or SONET to produce a global Virtual System Bus and eliminate the need for front-end communication processors or other pre-processing in order for systems to share information, applications, or work loads. For those situations where slower communication speeds are acceptable and the distances between the complexes are short, electrical coaxial transmission cables and electrical drivers and receivers may be substituted for the higher reliability and bit rate fiber-optics.

The exemplary embodiment of this invention is a novel computer organization that improves on the prior art computer organizations to: 1) provide a low cost computer platform which is controlled by the mature and robust System 390 mainframe operating system; 2) operate the mainframe software applications at much less cost than a proprietary technology mainframe; 3) greatly increase on-line storage capacity and I/O performance over the prior art 390/personal computer (IBM DATA SERVER 500 SERIES) system by implementing external I/O processing, storage, and other peripheral functions; 4) implement unique and efficient peer-to-peer communication, referred to as the Virtual System Bus, which performs as a system bus among a plurality of computers to enable integration of CPU complexes and I/O Processor-Channel complexes into a mainframe capable system platform; 5) implement novel use of high speed serial links to extend the System Bus to remote locations; and 6) broaden the geographical extent of mainframe operations between the CPU complex or a plurality of CPU complexes and one I/O Processor-Channel complex or a plurality of I/O processor-channel complexes. The novel serial Virtual System Bus and Virtual System Interface Controller provide the I/O processor-channel complex or complexes and/or the CPU complex or complexes with a very high bandwidth System Bus connection and allow the I/O processor-channel and/or CPU complex or complexes to be located physically apart.
This Virtual System Bus implementation allows CPU and I/O Processor-Channel complexes to be closely located while connected by the Virtual System Bus having one serial link or a plurality of serial links, which are implemented with optical fibers or copper cable (or a combination of both). The complexes can also be separated by great distances from one another by linking a CPU complex or a plurality of CPU complexes and one I/O Processor-Channel complex or a plurality of I/O processor-channel complexes together with private fiber-optic and/or coaxial links and/or common carrier serial communications media such as the DS rate connections, the Synchronous Optical Network (SONET) optical carrier, also referred to as the Synchronous Digital Hierarchy (SDH), Asynchronous Transfer Mode (ATM), which is also called Broadband Integrated Services Digital Network and is defined to be a switched digital service, Switched Multi-megabit Data Service (SMDS), Frame Relay, and any other future serial carriers which can provide the bandwidth and data transmission reliability necessary to execute the mainframe applications. Furthermore, due to the low bulk and superior transmission characteristics of fiber-optic cable connections, redundant fiber optic connections can be run between the CPU complex or complexes and one or more of the I/O processor-channel complexes, providing further increased bandwidth, as well as redundancy in the event one of the fiber optic connections fails. In one embodiment of this invention, the CPU complex includes a fiber optic switching mechanism which allows data to be multiplexed between the CPU and a plurality of channel units. This minimizes circuit overhead and yet still provides adequate bandwidth, given the significantly enhanced bandwidth of fiber optic connections.
Moreover, the Virtual System Bus and Virtual System Interface Controllers eliminate the need for front-end or communication processors when telecommunications media are a component of the Virtual System Bus, because commands, information, and programs may be formatted and exchanged in a direct memory-to-memory manner.
The embodiment of the novel organization of this invention improves on any prior art clustered computer organization to: 1) greatly increase on-line storage capacity; 2) greatly increase I/O performance; and 3) greatly increase peer-to-peer communications bandwidth among a plurality of CPU complexes and I/O Processor-Channel complexes. This embodiment also provides the same benefits to a plurality of CPU complexes which execute operating systems other than the System 390 operating system and application software. In any prior art computer, the System Bus is the main artery through which programs and information are transported to the processor that is performing work. The novel virtual system interface controller (VSIC) environment greatly broadens the expanse of a computer's system bus, in effect creating a virtual system bus by which a plurality of complexes, local and remote, may interchange programs and information to improve System 390 operations between the CPU complex or a plurality of CPU complexes and one or a plurality of I/O Processor-Channel complexes. This novel serial system bus invention enables an I/O processor-channel complex or plurality of I/O complexes and/or the CPU complex or plurality of complexes mutually to enjoy very high bandwidth system bus connections, and specifically allows the I/O processor-channel and/or CPU complex or complexes to be located physically apart. This VSIC serial system bus implementation allows Central Processor and I/O Processor-Channel complexes to be either co-located or remotely located while maintaining a common system bus environment.
The virtual system bus environment is formed by a plurality of serial links that join individual platforms into an all-inclusive virtual system bus environment. In the VSIC system bus environment, platforms can connect to one another in the same location or at great distances from one another by linking the central processor complex or a plurality of CP complexes and one I/O Processor-Channel complex or a plurality of I/O processor-channel complexes together with private fiber-optic links, coaxial cable links, or high speed common carrier serial communications media such as the so-called DS connections. The virtual system bus environment may be expanded across the continent over the fiber-optic trunks of common carriers using the Synchronous Optical Network (SONET), also referred to as the Synchronous Digital Hierarchy (SDH). Asynchronous Transfer Mode (ATM), which is also called Broadband Integrated Services Digital Network and is defined to be a world wide variable speed switched digital service, can expand the Virtual System Bus environment internationally to developing and developed countries. Switched Multi-megabit Data Service (SMDS), Frame Relay, and other Wide Area Services provided by serial carriers can provide the bandwidth and transmission reliability necessary to execute the applications over a metropolitan area or a greater metropolitan area such as New York, Boston, or Los Angeles.
In accordance with the teachings of this invention, a novel computer bus organization is taught which utilizes fiber-optic or serial telecommunications media between one or a multiplicity of CPU complexes and one or a multiplicity of channel complexes. This invention allows one or a multiplicity of I/O channel complexes to be connected to one or a multiplicity of CPU complexes at a high bandwidth system bus rate, and yet allows the I/O Processor-Channel complexes to be located physically close to, or at great distance from, the Central Processor complex. Furthermore, due to their high speed, reliability, low electromagnetic interference emission and susceptibility, low cost, and low bulk, redundant fiber optic connections can be run between the CPU complex and one or more of the channel complexes, providing further increased bandwidth, as well as redundancy in the event of a failure of a fiber optic connection. In one embodiment of this invention, the computer bus organization includes exemplary redundant serial bus expanders to provide redundancy and increased bandwidth.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram depicting a typical prior art proprietary technology single or uni processor System 390 mainframe with one I/O processor operating a group of I/O channels and peripherals;

Figure 2 is a block diagram depicting a typical prior art proprietary technology single or uni processor System 390 mainframe which operates multiple I/O processors operating multiple groups of I/O channels and peripherals;
Figure 3 is a block diagram depicting a typical prior art proprietary technology tightly coupled multi-processor System 390 mainframe operating one or a multiplicity of I/O processors, I/O channels, and peripherals;
Figure 4 is a block diagram depicting a typical prior art proprietary technology "loosely coupled" multiprocessor System 390 mainframes and one or a multiplicity of I/O processors, I/O channels, and peripherals;
Figure 5 is a block diagram depicting the prior art Hybrid 390/PC computer 200, known as the IBM Data Server 500, containing a P-390 System 390 processor and emulated 390 peripherals;

Figure 6 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting external I/O Processor-Channel 304;
Figure 7 is a block diagram depicting VSIC software components of the Virtual System Bus and VSIC invention as used in new art Hybrid 390/PC computer 300 showing how this embodiment's implementation interacts to improve the performance over the prior art computer 200;
Figure 8 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controllers connecting multiple external I/O processor-channel complexes 301 to a single 390 processor complex;
Figure 9 is a block diagram depicting one embodiment of this invention showing new art hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting system buses of multiple 390 processor complexes to the system bus of a single external I/O processor-channel complex 301;
Figure 10 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controller environment connecting system buses in an any-to-any manner among multiple 390 processor complexes and multiple external I/O processor-channel complexes 301;

Figure 11 is a block diagram depicting one embodiment of this invention showing new art Hybrid 390/PC computer 300 with virtual system bus and virtual system interface controllers implemented in local fiber-optic connections, and remote virtual system interface controllers which connect by carrier data communications media into the Virtual System Bus, for high performance Personal Computers or workstations operating locally and remotely and sharing information and programs at System Bus speeds between processors, I/O peripherals, and large capacity RAID disk servers, connecting computer platforms in an any-to-any manner including multiple 390 processor complexes and multiple external I/O processor-channel complexes 301 in different geographical areas;
Figure 12 is a diagram depicting the software levels and hardware level of the prior art Open Systems Interface Reference Model standard for communication among computers;

Figure 13 is a diagram depicting the software levels and hardware levels of prior art communications models, both proprietary and standard, compared to the Open Systems Interface Reference Model standard for communication among computers;

Figure 14 is a diagram depicting one embodiment of the new art Virtual System Bus and the two software and one hardware communication layers, accompanied by the VSIC Request frame header. The VSIC frame header depicts requesting node (source) and responding node (destination), Task Identifier, and op-codes to perform memory-to-memory request operations for communication among computers;

Figure 15 is a diagram depicting one embodiment of the new art Virtual System Bus and the two software and one hardware communication layers, accompanied by the VSIC Response frame header. The VSIC frame header depicts responding node (source) and requesting node (destination), Task Identifier, and the operation codes to perform memory-to-memory response operations for communication among computers;

Figure 16 is a diagram depicting one embodiment of new art VSIC frames which are transported over the Virtual System Bus. The VSIC frame consists of a Software Start of Frame containing a recognition pattern and transmitted Word Count, Header, Data Words (if applicable), Cyclic Redundancy Check (CRC) word, and the Software End of Frame, also containing a recognition pattern, Validation Bits, Sequence Count, and received Word Count (rendered as an illustrative C structure in the sketch following this list); and
Figure 17 is a diagram depicting one embodiment of a new art computer 300 referred to as the new art Cluster Computer 300 distributed on the Virtual System Bus and connected by dynamic switches. The processor nodes and memories can communicate in an any-to-any manner.
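Rendering the frame layout of Figure 16, with the header fields of Figures 14 and 15, as a C structure may help fix the format in mind. The field widths and the payload limit below are assumptions made for illustration; they are not values given in the patent.

```c
/* Illustrative sketch (not from the patent): a C rendering of the VSIC frame
 * of Figure 16 with the header fields named in Figures 14 and 15.  All field
 * widths and MAX_DATA_WORDS are assumptions for illustration. */
#include <stdint.h>

#define MAX_DATA_WORDS 512        /* illustrative payload limit */

struct vsic_header {              /* per Figures 14 and 15 */
    uint16_t source_node;         /* requesting (or responding) node */
    uint16_t destination_node;    /* responding (or requesting) node */
    uint32_t task_identifier;     /* correlates a response with its request */
    uint16_t op_code;             /* memory-to-memory request/response op */
};

struct vsic_start_of_frame {
    uint32_t recognition_pattern; /* marks the start of a software frame */
    uint32_t transmitted_word_count;
};

struct vsic_end_of_frame {
    uint32_t recognition_pattern; /* marks the end of the frame */
    uint16_t validation_bits;
    uint16_t sequence_count;
    uint32_t received_word_count;
};

struct vsic_frame {
    struct vsic_start_of_frame sof;
    struct vsic_header header;
    uint32_t data_words[MAX_DATA_WORDS];  /* present if applicable */
    uint32_t crc_word;                    /* cyclic redundancy check */
    struct vsic_end_of_frame eof;
};
```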
DETAILED DESCRIPTION
Figure 6 shows one embodiment of a novel low cost 390/PC Computer 300 which executes System 390 mainframe applications software as does the prior art proprietary technology S\390 mainframe 100 (Fig. 1) or the 100-MP, while solving many of the problems not solved by the prior art IBM DATA SERVER 500 Series computer 200 (Fig. 5). Novel 390/PC Computer 300 contains a high content of less expensive, off-the-shelf, commercially available personal computer technology. The 390 Central Processor of computer 300 in this embodiment is referred to as the P-390 Card 201. The P-390 Card 201 is commercially available from IBM Corporation and is sold alone as a typical personal computer card. However, any System 390 Central Processor capable of properly executing the 390 instruction set would serve the same purpose in computer 300. Personal computer technology Pentium microprocessors and personal computer peripherals replace the very expensive proprietary technology 390 I/O control units and peripherals such as disk storage, tape units, etc. Provisions are made in 390/PC computer 300 to connect the prior art proprietary 370/390 I/O control units and peripherals as well.

However, novel 390/PC Computer 300 improves over prior art computers 100, 100-MP, and 200. In Computer 300, sub-system tasks are performed individually by multiple computer platforms by means of a novel method of connecting computer platforms, called the Virtual System Bus, which enables inter-platform communication at or near the sub-system computer platform's system bus speed. This enables an implementation to integrate the platforms and sub-system tasks into a System 390 mainframe with the advantages of prior art computers 100 and 100-MP, all the while at less cost, and providing more function, than prior art computer 200.
In novel 390/PC Computer 300, the 390 Central Processor is, in one embodiment, the P-390 Card 201. However, Computer 300 may use any 390 capable Central Processor. Early versions of the P-390 Card 201 plug into an IBM proprietary Micro Channel Architecture (MCA) bus, and subsequent versions plug into the Peripheral Component Interconnect Local Bus (PCI bus), which is a universal standard devised for personal computer multi-vendor compatibility. P-390 Card 201 plugs into PC motherboard 202 along with a prior art Pentium or similar Microprocessor 205, which performs normal PC and emulation functions. Microprocessor 205 and Memory 206 are also attached to the PCI bus along with appropriate personal computer Peripheral Functions 207. This combination of P-390 Card 201 and the PC, consisting of microprocessor 205 and Memory 206, is supported by components 202 through 207. Hybrid 390/PC Computer 300 has the same capability to execute System 390 operating systems and application programs as prior art Computer 200.
However, the limited System Bus structure 203 of prior art computer 200 is improved greatly by the novel Virtual System Bus and Virtual System Interface Controller (VSIC) environment of the present invention. The Virtual System Bus greatly overcomes the limited number of connectors (only five), which severely limits the number of add-in cards due to the very limited electrical drive capabilities of the PCI bus in a typical personal computer. The Virtual System Bus enables a multiplicity of functions to be relocated to remote platforms while greatly increasing the bus speed connectivity distance to those other computer platforms, which may or may not be personal computer based. The Virtual System Bus is not limited to connecting similar platforms, but can also expand connectivity between dissimilar microprocessor platforms, such as Intel, Motorola, Sun, NEC, or other proprietary platforms, because the Virtual System Interface Controller (VSIC), which exchanges information over the Virtual System Bus, formats the commands, information, and programs into Big Endian, Little Endian, or other required binary format for exchange among the computer platforms.
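The Big Endian/Little Endian formatting the VSIC performs can be illustrated with a simple byte-order swap. The following C sketch is illustrative only, and the function name is invented; it is not code from the patent.

```c
/* Illustrative sketch (not from the patent): the byte-order conversion a
 * VSIC would perform so that, e.g., a big-endian S/390-style word can be
 * exchanged with a little-endian Intel platform.  The name is invented. */
#include <stdint.h>
#include <stdio.h>

/* Reverse the byte order of a 32-bit word (big-endian <-> little-endian). */
static uint32_t vsic_swap32(uint32_t w)
{
    return (w >> 24) | ((w >> 8) & 0x0000FF00u)
         | ((w << 8) & 0x00FF0000u) | (w << 24);
}

int main(void)
{
    uint32_t big_endian_word = 0x12345678u;
    printf("0x%08X -> 0x%08X\n", big_endian_word, vsic_swap32(big_endian_word));
    return 0;
}
```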
In this embodiment, the Virtual System Bus makes possible greater on-line storage capacity, higher data retrieval and storage rates, and higher performance, while providing an I/O Processor-Channel structure with capabilities comparable to those of the prior art S\390 mainframe computer 100.
Moreover, one of the major improvements of 390/PC Computer 300 over prior art computer 200 is the capability to connect one or a multiplicity of external I/O Processor-Channel complexes 301, each implemented on a personal computer platform, without the distance limitations of the PCI electrical bus or the limitations, speed degradation, and software overhead of LAN connections.
The novel Virtual System Bus and Virtual System Interface Controllers 303A and 303B expand the capability of the PCI bus (or other distance and connector limited bus) of the prior art computer 200 or another computer to connect to the PCI (or other) bus of external computer platforms, shown in this embodiment as the processor complex 200 and the I/O Processor complex 301. The Virtual System Bus and the Virtual System Interface Controller greatly expand any computer's or Personal Computer's ability to connect with another computer or Personal Computer platform at speeds very near or equal to PCI bus transfer speeds. The Virtual System Bus of computer 300 is capable of 100 megabytes per second peak bandwidth using commercially available technology and fiber-optic cable. The Virtual System Interface Controllers 303A and 303B control the exchange of commands, status, programs, and information across the Virtual System Bus connecting the processor platform and I/O complexes of computer 300.
Hybrid 390/PC computer 300 includes prior art computer 200, which in turn includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207. Computer 200 is connected by Virtual System Bus 600, consisting of new art Virtual System Interface Controller 303A located in prior art computer 200, duplex fiber-optic cables, and Virtual System Interface Controller 303B (identical to VSIC 303A), to I/O Processor Complex 301. I/O Processor Complex 301 consists of Pentium microprocessor 305, Pentium Memory 306, and PCI System Bus 304 connecting a Small Computer System Interface (SCSI) RAID disk controller card 307, a SCSI tape system controller card 309, and a PCI 390 ESCON Channel Card 311, which in turn control RAID Disk 308 and Tape 310 and drive ESCON fiber-optic cables 312.
Prior art Computer 200 depends on Local Area Network (LAN) attachments to communicate with other systems. Prior art LAN bandwidths are 10 megabits per second (Mbps) Ethernet, 100 Mbps Ethernet, 25 Mbps ATM, and 4-16 Mbps Token Ring. All of the above mentioned LAN protocols are inefficient because of the layers of software between the application and the media. Other than LAN connections, or a special channel card which plugs into Computer 200's limited PCI bus connector allocation, computer 200 is a closed system.
However, Computer 300's greatly improved organization is made possible by the Virtual System Bus and the Virtual System Interface Controllers, which join the independent platforms at system bus speed. Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of prior art Computer 200 to the System Bus 304 of I/O Processor-Channel Complex 301 and provide bandwidth of 100 million bytes per second or faster.
Through the Virtual System Bus and Virtual System Interface Controller connections, direct memory-to-memory transfer can occur at system bus speed, thereby increasing performance by making programs and information available much faster than prior art Personal Computer to Personal Computer connections, with more flexibility and less expense than prior art proprietary technology, and without the distance limitations of prior art electrical buses. In one embodiment, VSIC controllers 303A and 303B are identical. The VSIC hardware has duplex capability, in that frames may be assembled and transmitted simultaneously while incoming frames are received and disassembled.
For general operation purposes, VSIC controllers 303A and 303B each contain PCI bus access logic, a RISC microprocessor, volatile and non-volatile memory, and support UART logic to download operational software.
For frame transmission purposes, the VSIC controller contains a frame assembler, frame transmitter, parallel-to-serial conversion circuitry, and an optical (or telecommunications) transmitter with appropriate connectors to the optical fiber or other media. RISC software (401 or 402) writes the data into the frame assembler and, when the frame is complete, the hardware automatically sends it to the receiving VSIC controller.
For frame reception purposes, the VSIC controller contains an optical (or telecommunications) receiver with appropriate connectors to the optical fiber or other media, serial-to-parallel conversion circuitry, and a frame receiver and disassembler. The disassembler holds the frame until the RISC software (401 or 402) transfers the frame onto the PCI bus.
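The assembler/transmitter hand-off described above might look as follows in C. The register layout and busy-flag protocol are invented for illustration and are not taken from the patent.

```c
/* Illustrative sketch (not from the patent): the transmit path described
 * above.  Software loads words into the frame assembler; once the frame is
 * complete the hardware serializes and sends it.  All names are invented. */
#include <stdint.h>
#include <stdbool.h>

struct vsic_assembler {           /* stand-in for the frame-assembler hardware */
    uint32_t words[16];
    int      count;
    bool     busy;                /* set while the transmitter serializes */
};

/* RISC-side software (401 or 402) loads one word into the assembler. */
static bool assembler_write(struct vsic_assembler *a, uint32_t word)
{
    if (a->busy || a->count >= 16)
        return false;             /* caller retries when transmitter is free */
    a->words[a->count++] = word;
    return true;
}

/* Marking the frame complete hands it to the serializer/optical transmitter. */
static void assembler_complete(struct vsic_assembler *a)
{
    a->busy = true;               /* hardware now owns the frame */
    /* ... parallel-to-serial conversion and optical transmission ... */
    a->count = 0;
    a->busy = false;              /* frame sent; assembler free for reuse */
}

int main(void)
{
    struct vsic_assembler a = { {0}, 0, false };
    (void)assembler_write(&a, 0xCAFEF00Du);  /* header word, for example */
    (void)assembler_write(&a, 0x00000042u);  /* one data word */
    assembler_complete(&a);                  /* hardware serializes and sends */
    return 0;
}
```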
In Figure 7, novel 390/PC Computer 300 is shown to also contain new art software components 401, 402, 403, 404, 405, 406, 407, and 408, connecting, via the Virtual System Bus, to prior art Software Components 208 of prior art Computer 200. The P-390 software complex 209 is made up of the S\390 operating system and S\390 application software. The PC Pentium processor software complex 210 is made up of the OS/2 operating system, Communications Manager/2 215, and the P-390 I/O Subsystem 211. In prior art 390/PC Computer 200, the I/O Subsystem 211 PC application programs emulate a S\390 channel, I/O control units, and devices. Prior art P-390 I/O Subsystem 211 software components are the P-390 Device Driver 212, S\390 Channel Emulator 213, and Device Managers 214. The P-390 software complex works with the Pentium software complex; the Pentium hardware and software perform some of the I/O functions for the S\390 Operating System and application programs. P-390 Device Driver 212 software provides the operational interface between the P-390 software 209 and the Pentium software 210, which now performs elementary single channel and I/O device emulation to support non-performance-critical I/O operations.

In novel computer 300, high performance I/O operations are possible because Virtual System Bus 600, controlled by Virtual System Interface Controller subsystems 401 and 402, permits one or a plurality of External I/O Processor-Channel complexes 304, each with multiple channel-to-I/O device paths, to off-load System 390 I/O operations from the software emulation in the PC peripherals of prior art computer 200.
In novel computer 300, when a System 390 I/O operation is initiated by the P-390 Card 201 or other 390 Processor, the I/O address is checked to determine whether the I/O operation is an emulated operation or an actual operation to the external I/O Processor-Channel complex. If the operation is to be performed by the External I/O Processor-Channel complex, the parameters concerning the I/O operation from P-390 complex 208 are provided across the local PCI bus to Virtual System Interface serial adapter 303A. Virtual System Interface Controller 401 formats the information into a VSIC frame, then serializes and transmits the byte format information frame at a gigabaud rate via the Virtual System Bus to the opposite serial adapter, which deserializes the information for Virtual System Interface Controller 402 to control the transfer on the remote PCI bus to the required location in the I/O processor memory. The I/O processor subsystem 404 may be polling for new operation information, or it may be interrupted by Virtual System Interface Controller 402 when information such as a new operational command arrives. The I/O Processor may have previously made a data fetch request to VSIC 402 and be prepared to receive data to perform a Write operation, or be in a less busy state and able to poll, or, if in a busy state when a new operation arrives, take an interrupt when the I/O Processor subsystem's 404 work load permits.
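The emulated-versus-external address check just described can be pictured as a configuration table lookup, sketched below in C. The table contents, device addresses, and names are invented for illustration; they are not part of the patent.

```c
/* Illustrative sketch (not from the patent): routing an I/O operation to
 * local emulation or to an external I/O Processor-Channel complex based on
 * the device address.  The configuration table is invented. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

struct subchannel_entry {
    uint16_t device_addr;
    bool     external;      /* true: route over the Virtual System Bus */
    uint8_t  vsic_node;     /* which VSIC node reaches the owning complex */
};

/* A tiny stand-in for the I/O configuration table of computer 300. */
static const struct subchannel_entry config[] = {
    { 0x0190, true,  0 },   /* RAID disk behind an external complex     */
    { 0x0580, true,  1 },   /* tape behind a second external complex    */
    { 0x0009, false, 0 },   /* console, emulated in the PC peripherals  */
};

int main(void)
{
    uint16_t addr = 0x0190;
    for (size_t i = 0; i < sizeof config / sizeof config[0]; i++) {
        if (config[i].device_addr != addr)
            continue;
        if (config[i].external)
            printf("device %04X: frame parameters to VSIC node %u\n",
                   (unsigned)addr, (unsigned)config[i].vsic_node);
        else
            printf("device %04X: hand to local channel emulator\n",
                   (unsigned)addr);
    }
    return 0;
}
```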
To initiate an I/O operation, P-390 Access software 217 provides information such as the main storage starting Channel Command Word (CCW) address, I/O device address, and I/O Control Block to VSIC software 401, which formats a VSIC frame for VSIC hardware 303A, which serializes and transmits that frame containing the beginning I/O operation parameters to VSIC hardware 303B, which deserializes the frame for VSIC software 402 to send to the I/O processor subsystem software 404. The I/O processor 404 prepares a device initiation by setting up pointers and, via the Virtual System Bus and VSIC software 402, VSIC hardware 303B issues a request to the P-390 Access Software 217 for a CCW and data if applicable. P-390 software 217 provides that CCW and data, if applicable, to VSIC software 401 and hardware 303A for transmission across the Virtual System Bus to VSIC hardware 303B, and VSIC software 402 provides the CCW and a quantity of data, if applicable, to I/O Processor Subsystem 404 and initiates the I/O channel subsystem 405. During the operation, if required, I/O channel subsystem 405 asks the I/O processor subsystem for more data if the operation is a write to the disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408. Disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408 executes the particular command, e.g., handles control or data transfer with the respective peripheral device or devices. If more data is required or is to be stored, the I/O channel subsystem 405 requests data from or passes data to the I/O processor 404, and that data is presented to VSIC 402 and VSIC hardware 303B across the Virtual System Bus to VSIC hardware 303A and VSIC software subsystem 401 to perform a data store or fetch from the P-390 Access software 217. The operation continues in a like manner until the ending CCW count is exhausted or the operation ends due to function or record length. The disk subsystem 406, tape subsystem 407, or external I/O channel subsystem 408, as applicable, presents the ending status to the I/O channel subsystem 405 and then to the I/O processor 404, which presents to VSIC 402 the information and parameters to frame and transfer via the Virtual System Bus to VSIC 401 and on to P-390 Access software 217 of the Central processor complex 208 as a subchannel status word, which signals the I/O program with the operation's completion status.
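The Channel Command Words driving this sequence follow the classic S/370/390 channel architecture; the struct below sketches the well-known format-0 CCW layout (command code, 24-bit data address, flags, count) for orientation only. The C representation is an illustration: real CCWs are packed big-endian doublewords in main storage, and the chaining flags shown are what make possible the CHAINING described below.

#include <stdint.h>

typedef struct {
    uint8_t  cmd;       /* command code: read, write, control, sense, ... */
    uint8_t  addr[3];   /* 24-bit main-storage data address               */
    uint8_t  flags;     /* chaining and control flags, below              */
    uint8_t  zero;      /* must be zero in format-0                       */
    uint16_t count;     /* byte count for the data transfer               */
} ccw0_t;

#define CCW_CD   0x80   /* chain data: the next CCW continues this data   */
#define CCW_CC   0x40   /* chain command: execute the next CCW afterwards */
#define CCW_SLI  0x20   /* suppress incorrect-length indication           */
#define CCW_SKIP 0x10   /* skip data transfer                             */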
Figure 8 shows detail of the hardware components of one embodiment of new art 390/PC Computer 300. The System 390 Processor, P-390 Card 201, in this embodiment of new art computer 300 is now connected to a plurality of External I/O Processor-Channel complexes 304 via the Virtual System Bus (VSB) 600 and VSIC Controller hardware 303A (2) and 303B (2). The multiplicity of I/O Processors provides more channel paths to more I/O peripherals, yielding greater I/O throughput, more peripheral devices, and more disk storage capacity, with more information stored and available to users. To illustrate an I/O operation to I/O Processor-Channel 304, the example starts with P-390 Card 201 decoding an I/O instruction such as a Start Subchannel. The Start Subchannel is accompanied by a subchannel address. The subchannel address corresponds to a channel path (CHPID) and device address. Prior to the instruction being executed by the P-390 processor, parameters in the form of Channel Command Words, which describe the operation and the P-390 memory locations involved with data storage or retrieval, had been set up in the P-390 Card 201 memory. As previously mentioned, data to be moved from P-390 memory to external storage is called a Write operation, while data to be retrieved from external storage and placed in P-390 memory is called a Read operation. The P-390 Access software 217 then provides an I/O Control Block containing a valid Start Subchannel address of the device that is to perform the operation to VSIC software 401. The subchannel address translates to a CHPID and device address of the I/O Processor Complex as was defined in the I/O configuration of computer system 300, a normal mainframe means to establish logical-to-physical I/O paths to I/O devices. VSIC software 401 examines the configuration table to determine which of the VSIC nodes 303A connects to the proper path of the Virtual System Bus and to the specific I/O processor complex. If VSB paths are redundant, the VSIC software 401 chooses the VSIC node 303A according to a first and second choice priority scheme. Upon determining the proper node, the VSIC software 401 writes the I/O Control Block and other information to the VSIC node 303A and adds an End of Frame, and VSIC node 303A serializes and transmits that frame to partner VSIC node 303B, which receives and de-serializes the frame for VSIC Software 402. Software 401, in conjunction with external software module 402, has thereby provided to the I/O Processor subsystem 404 a block of information called an I/O Control Block (IOCB). The IOCB contains the stimulus to initiate the operation and the parameters of the operation. The I/O Processor subsystem software 404 selects which channel and device are to be accessed. The selected channel starts the I/O device by an internal or external initial selection to provide the command. The command initiates an input (read), output (write), or control operation. Backspace, seek, recalibrate, and set sector are examples of control commands which prepare the device for an operation or serve to position the recording media for reading or writing data. If the command is a read or write, data is transferred according to that respective command. If a write, data is presented from P-390 memory by P-390 Access software 217 to VSIC software 401 and VSIC node 303A for transport over the Virtual System Bus to VSIC node 303B. VSIC software 402 transfers that data to the I/O processor 404 and hardware 304 for channel subsystem 405 to transfer to the I/O device for proper action such as writing to a disk or tape storage device.
If a read, data is read from the disk, tape, or other device and presented to the appropriate CHPID. The CHPID works with the I/O Processor to send the data to the P-390 memory. The I/O Processor 404 identifies the data for VSIC software 402, which writes the data to VSIC hardware node 303B, which serializes and transmits the framed data across the Virtual System Bus 600 to VSIC 303A, which receives and de-serializes the frame for VSIC software 401 to send the data to the proper location, via P-390 Access software 217, into the P-390 memory for use by the 390 application program. The I/O operation may terminate at the end of data transfer and ending status presentation or, if conditions require, will continue to execute more commands in a continuous operation called CHAINING until all parameters and ending status of the I/O operation have been satisfied. (Chaining allows multiple commands or data fields to be transferred by a single S/390 I/O instruction.) At the end, the I/O control unit and device will present status to the channel signifying end of operation. That status is presented to the I/O Processor 404, which uses VSIC software 402 to present the status information to VSIC node 303B, which serializes and transmits the information across the VSB to VSIC controller node 303A and VSIC software 401 to present the status and interrupt information to P-390 Access software 217, informing the S/390 processor that the I/O operation has completed. Various types of channels may be controlled by the I/O processor complex 304.
For example, a first type is an external 390 channel 311 with a standard interface such as ESCON, which will operate prior art 390 I/O control units and devices as does any mainframe channel. A second type is a Small Computer System Interface (SCSI) channel 307 or 309, which integrates with commercially supplied SCSI controller cards to operate tape drives, disk drives, or other SCSI peripherals. A third type of channel can actually be integrated into large-capacity RAID disk storage systems, which have the necessary internal processing power. One advantage of RAID systems is that they can provide several levels of error correction and data recovery in the event of the failure of a disk drive during operation.
Figure 9 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention. Hybrid 390/PC computer 300 includes two prior art computers 200, each of which includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207. These are connected by the Virtual System Bus 600, in this embodiment consisting of two or more new art Virtual System Interface Controllers 303A situated in prior art computer 200, duplex fiber-optic cables, and Virtual System Interface Controllers 303B, which are identical to VSIC 303A, to share the data of an independent I/O Processor Complex 301 that consists of Pentium microprocessor 305, Pentium Memory 306, and PCI System Bus 304 connecting to a Small Computer System Interface (SCSI) RAID disk controller card 307, a Small Computer System Interface (SCSI) tape system controller card 309, and a PCI 390 ESCON Channel Card 311, which in turn control RAID Disk 308 and Tape 310 and drive ESCON fiber-optic cables 312.
However, Computer 300's greatly improved organization is made possible by the Virtual System Bus and the Virtual System Interface Controllers, which join the independent platforms at system bus speed. The Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of the prior art Computer 200 to the System Buses 306 of the I/O Processor-Channel Complexes 304 and provide bandwidth of 100 million bytes per second or faster.
Through the Virtual System Bus and Virtual System Interface Controller connections, direct memory-to-memory transfer can occur at system bus speed, thereby increasing performance by making programs and information available much faster than prior art personal-computer-to-personal-computer connections, with more flexibility and lower cost than prior art proprietary technology, and without the distance limitations of prior art electrical buses.
Figure 10 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention, which is similar to the embodiment of Fig. 9 but uses a serial switcher-router. Hybrid 390/PC computer 300 includes two or more prior art computers 200, each of which includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207. These are connected by the Virtual System Bus 600, in this embodiment consisting of two or more new art Virtual System Interface Controllers 303A situated in prior art computer 200, duplex fiber-optic cables, and Virtual System Interface Controllers 303B, identical to VSIC 303A, to share the data of two or more independent I/O Processor Complexes 301, which consist of Pentium microprocessors 305, Pentium Memories 306, and PCI System Buses 304 connecting to Small Computer System Interface (SCSI) RAID disk controller cards 307, Small Computer System Interface (SCSI) tape system controller cards 309, and PCI 390 ESCON Channel Cards 311, which in turn control RAID Disk 308 and Tape 310 and drive ESCON fiber-optic cables 312. This configuration includes a prior art serial switcher-router which enables any connected CPU complex 200 to transfer commands, programs, or information between CPU complexes or I/O Processor Complexes in an any-to-any connection at or near system bus speeds.
Figure 11 is a block diagram depicting one embodiment of a System 390 compatible Hybrid 390/PC computer 300 constructed in accordance with the teachings of this invention, which is similar to the embodiment of Fig. 9, using a serial switcher-router and a virtual system bus. Hybrid 390/PC computer 300 includes two or more prior art computers 200, each of which includes P-390 CPU Card 201, PCI Local System Bus 203, Pentium microprocessor 205, Pentium Memory 206, and PC Peripherals 207. These are connected by the Virtual System Bus 600, in this embodiment consisting of two or more new art Virtual System Interface Controllers 303A and 303C situated in prior art computer 200, duplex fiber-optic cables, duplex telecommunication media such as ATM or SONET, and Virtual System Interface Controllers 303B and 303D, identical to VSIC 303A and 303C, to share the data of two or more independent I/O Processor Complexes 301, which consist of Pentium microprocessors 305, Pentium Memories 306, and PCI System Buses 304 connecting to Small Computer System Interface (SCSI) RAID disk controller cards 307, Small Computer System Interface (SCSI) tape system controller cards 309, and PCI 390 ESCON Channel Cards 311, which in turn control RAID Disk 308 and Tape 310 and drive ESCON fiber-optic cables 312. This configuration may include a prior art serial switcher-router to enable any connected CPU complex 200 to transfer commands, programs, or information between CPU complexes or I/O Processor Complexes in an any-to-any connection, either locally or geographically remote, at or near system bus speeds.
However, Computer 300's greatly improved organization is made possible by the Virtual System Bus and the Virtual System Interface Controllers that join the independent platforms at system bus speed. The Virtual System Interface Controllers 303A and 303B and Virtual System Bus 600 connect the PCI System Bus 203 of the prior art Computer 200 to the System Buses 306 of the I/O Processor-Channel Complexes 304 and provide bandwidth of 100 million bytes per second or faster.
Through the Virtual System Bus and Virtual System Interface Controller connections, direct memory-to-memory transfer can occur at system bus speed, thereby increasing performance by making programs and information available much faster than prior art personal-computer-to-personal-computer connections, with more flexibility and lower cost than prior art proprietary technology, and without the distance limitations of prior art electrical buses.
Figure 12 shows the prior art Open Systems Interconnect (OSI) Reference Model, one of the standard computer networking architectures. The model contains seven layers: six are software layers and one is the software/physical layer. A network architecture provides the plan and rules that govern the implementation and function of the hardware and software components by which the network connects and the computers communicate. The move away from dumb terminals and Host Centric control toward personal computers and workstations influenced an open structure. The Open Systems Interconnect Reference Model (OSIRM) defines the functions and protocols necessary for international data communications. This model was developed by the International Standards Organization (ISO), an international body of 90 countries chartered to cover technology issues among the members. Work began on the OSI architecture in 1977, three years after IBM's 1974 announcement of Systems Network Architecture (SNA). IBM has since (1988) introduced a networking structure called Systems Application Architecture (SAA) which is very similar to the OSIRM standard.
Figure 12 also depicts the prior art TCP/IP networking protocol, which evolved from an early research computer network, the ARPANET. The World Wide Internet, a TCP/IP internet, grew from the ARPANET. The prior art Local and Wide Area network standards provide for multi-vendor inter-operability among users and enable those users to send or retrieve files and e-mail, "surf the Internet", or download programs from a multiplicity of hosts or servers. Local and Wide Area networking standards are oriented to the individual personal user and the personal computer or workstation (a more powerful form of personal computer) and primarily provide a one-on-one form of computer communication: user to host and host to user, performing operations such as storing files, copying files, retrieving files from storage, sending electronic mail, receiving electronic mail, etc. Hence this form of computer communication is called "Client/Server". The Open Systems Interconnect Reference Model (OSIRM):
1) The Application Layer permits application programs to transparently send and receive information through the system's interconnection software layers.
2) The Presentation Layer preserves the information content of data transferred across the network. The two systems' Presentation layers negotiate the syntax for transferring messages exchanged by the two systems. The Presentation layer also performs the necessary conversions between the data formats of the two systems.
3) The Session Layer manages the user-to-network interactive sessions and all session-oriented transmissions. During communications between users, normally terminals or LAN workstations, and a central processor or front end processor, the session layer controls the information required by the user. Some examples of the session layer are terminal-to-mainframe log-on procedures, transfer of user information, and setting up information and resource allocations.
4) The Transport Layer controls the quality and methods of data transport across the entire network. This layer can be independent of the number of networks or the types of networks through which the data must move. The transport layer's responsibility is to manage end-to-end control of complete message transmission, retransmission, and delivery. The Transport layer assures that the packet/message segmentation and reassembly process is complete. The transport layer provides higher level error correction and retransmission for services such as Frame Relay and SMDS. OSI Transport Protocol and Transmission Control Protocol (TCP) are examples of transport layers; however, TCP is used in conjunction (TCP/IP) with the Internet Protocol (IP).
5) The Network Layer manages details of transmitting data across the physical network between network elements as well as between networks. It is this layer's responsibility to manage flow control, define data call establishment procedures for packet and cell switched networks, and manage the segmentation and assembly of data across the network. For packet networks, the network layer is the most protocol intensive layer.
6) The Link Layer is sometimes called the data link layer. The data link manages the flow of data between user and network.
7) The Physical Layer, which is addressable, manages the connection of network elements, including voltages and currents, optical wavelength (if applicable), connector pin definitions, and signaling formats. RS-232, RS-449, X.21, V.35, IEEE 802 LAN, ISO FDDI, and others are examples.
Prior art TCP/IP architecture functional layers do not separate application oriented functions into three distinct layers as does the OSIRM. The TCP/IP Application layer approximates the OSI Application, Presentation, and Session layers. The OSI and TCP/IP Transport layers are equivalent, as are the OSI Network and TCP/IP Internet layers. The OSI Data Link and TCP/IP Network Interface layers perform the framing for the Physical and Hardware layers, but the TCP/IP Hardware layer is not addressable by the software layers, and any form of communication circuit may be used by TCP/IP as long as a Network Interface function can control that communication circuit. The Network Interface Card (NIC) which attaches to the interconnect media is assigned a unique address when manufactured; unlike the OSI physical layer, it is of no concern to the upper software layers. TCP/IP Functional layers:
1) The Application Layer is where the application programs that use the Internet operate. Some application layer software implements a set of standardized Application Layer protocols that directly serve terminal or workstation users. Other Application layer software provides Application Programming Interfaces (APIs) for user-written programs to communicate over the TCP/IP Internet. TCP/IP Application layer protocols provide services such as remote login, file copying and sharing, electronic mail (e-mail), directory assistance, and network management.
2) The Transport Layer serves computer systems with TCP/IP communication software where several application processes may be running concurrently. The Transport Layer provides end-to-end data transport services to those applications which require the TCP/IP communication capability.
3) The Internet Layer contains the Internet Protocol (IP) function. The IP is responsible for transporting data from a source host to a destination host. Whether the hosts are on the same network or on different physical networks connected by routers, IP makes a complex internet appear as a single integrated or virtual network. The IP process executes in each host and router on the path from the source host to the destination host.
4) The Network Interface layer presents a standard interface to the Internet layer and handles hardware-dependent functions.
5) The Hardware layer is generally considered to be independent of TCP/IP architecture, since the Network Interface layer provides the interface to the Internet layer. The Hardware layer is concerned with physical entities such as the NICs, transceivers, cables, hubs, connectors, and the like which physically interconnect the network. Figure 13 depicts the prior art OSIRM, IBM SNA/SAA, DECnet, and ISDN architecture layers and shows their similarity. Those prior art data transmission architectures have a predominant ancestor, the prior art IBM Systems Network Architecture, or SNA. IBM provided a hierarchy of network access methods (drivers) to accommodate a variety of users and applications; however, the final controller was the front-end or communications processor, which worked directly with a mainframe host. This arrangement was called Host Centric networking since the host mainframe controlled the entire network. Loosely coupled mainframes also follow the SNA architecture for the channel-to-channel connection between the systems. In IBM Systems Network Architecture there are seven layers, of which the Presentation and Transaction layers combined are called the Function Management layer, which provides the network services functions.
1) The Transaction Services layer provides network management and configuration services as well as a functional user interface to the network operations.
2) The Presentation layer formats and presents the data to the users as well as performing data translation, compression, and encryption.
In IBM Systems Network Architecture, the network control functions reside in the following layers:
3) The Data Flow Control layer provides services related to user sessions, with both layers at times operating in tandem.
4) The Transmission layer establishes, maintains, and terminates SNA sessions, for which it provides session management and flow control while performing some routing functions.
5) The Path Control layer provides the flow control and routing between point-to-point logical channels on virtual circuits throughout the network by establishing the logical connection between the source and the destination nodes.
6) The Data Link layer defines flow control functions of serial data links employing the SDLC protocol and channel attachments employing the S370/390 protocols.
7) The Physical Layer manages the connection of network elements, including voltages and currents, optical wavelength (if applicable), connector pin definitions, and signaling formats.
The OSIRM, IBM Systems Application Architecture (SAA), DECnet, and ISDN architecture layers show the similarity among the networking architectures, which fall into standardized or proprietary categories, SAA and DECnet being proprietary. The similarities show that LAN networking is adapted or standardized to perform the required communications function in order to provide the Client/Server service required by the respective user.
Prior art LAN/WAN Open System standards do present advantages such as:
1. Vendor independence.
2. Inter-operability of different platforms.
3. International standard architecture.
4. Vendor competition for lowest cost.
Prior art Open System standards also present disadvantages such as the following:
1. Lower performance, since proprietary solutions often perform better because of the overhead required to ensure compatibility.
2. For the same reason, proprietary non-standard solutions are often less expensive.
3. Standards committees often gridlock on key standards issues which relate to individual businesses.
4. Lower costs for non-standard solutions, because there are fewer standard-satisfying functions, which may not increase performance but do increase development and testing costs.
Figure 14 depicts an embodiment of the new art VSIC and Virtual System Bus 600 connecting the simple version of new art Computer 300 consisting of two processor nodes, COMPUTER 200 and I/O Processor 301. A memory REQUEST operation may be initiated by COMPUTER 200 or I/O Processor 301 depending on the functional characteristics of the operation, requester, and responder. Figure 14 also depicts the embodiment of Request Frame Header 141 carried in the Request Frame 140, which conveys a memory operation request to the responding platform, enabling memory-to-memory operations to be performed between those platforms. Even though only two nodes exist in the figure, both platforms have a node address assigned at VSIC initialization.
For an off-platform operation initiated by COMPUTER 200, the P-390 Card 201, via the P-390 S/W Interface Layer 217, presents an operation request and the parameters to VSIC software 401. VSIC Software 401 checks the configuration validity and, if permitted, initiates the request and information transfer to VSIC Hardware 303A. After that information is passed to the VSIC 303A, the VSIC Software 401 concludes with a command that ENDS the frame. The VSIC hardware then serializes and transmits the Request Frame 140 immediately. VSIC Hardware 303B receives the Frame 140 and restores the information to parallel or byte-oriented format. VSIC Software 402 then reads the Frame from VSIC Hardware 303B and initiates the operation. The Request Frame Header 141 always contains a Source Node Address 144 and a Destination Node Address 143.
The Request Frame Header 141:
1) The Destination Node Address 143 is the platform which receives the frame;
2) The Source Node Address 144 is the platform which sent the frame;
3) A Task ID 145 is attached to application operations to lower the communication overhead by allowing the VSIC at each end to keep track of operations across the virtual system bus which last longer than a simple request-response, or those operations which require specific memory address areas dedicated solely to that operation;
4) The Request Frame Header 141 particularly conveys the Memory Interface Request Operations 142.
Examples of Memory Interface Request Operations 142 (a C sketch of the header and these operations follows the list):
1) Fetch Data is a request to provide data from the Destination Node's memory;
2) Store Data is a request to store data into the Destination Node's memory;
3) Fetch and Set Lock fetches a memory area and locks further fetches from that area until the requester stores data back into the area and releases the lock. Used primarily for communication mailboxes, this allows the requester to assure that a communication area is not modified by more than one requester;
4) Partial Store is a request to store only a partial word of data into a memory location, which does not modify the adjacent bytes filling out the word or double word as the case might be;
5) Set Storage Key sets up a protection key for a specific area of memory; accesses require the key along with the memory address in order to access that area;
6) Set Address Limit makes an area of memory available to one function and unavailable to another function, providing more granularity in protecting areas of memory from unauthorized fetches or stores;
7) Data Transfer Initialization is primarily for I/O Processors to set up a Direct Memory Exchange of data between platforms and memories;
8) Data Transfer Write is a continuous output data exchange operation defined by the parameters given the destination node during the Data Transfer Initialization operation from the source node processor;
9) Data Transfer Read is a continuous input data exchange operation defined by the parameters given the destination node during the Data Transfer Initialization operation from the source node processor;
10) Stop Data Transfer tells the destination node to cease the previously set up Write or Read Data Transfer operation.
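The header fields and request operations just listed map naturally onto a small C representation. The sketch below is an assumption-laden illustration: the patent names fields 142 through 145 but specifies neither their widths nor their opcode encodings, so every width and value here is chosen for the example.

#include <stdint.h>

typedef enum {                 /* Memory Interface Request Operations 142 */
    VSB_FETCH_DATA = 1,        /* 1) fetch from destination memory        */
    VSB_STORE_DATA,            /* 2) store into destination memory        */
    VSB_FETCH_SET_LOCK,        /* 3) fetch and lock a mailbox area        */
    VSB_PARTIAL_STORE,         /* 4) store a partial word                 */
    VSB_SET_STORAGE_KEY,       /* 5) key-protect a memory area            */
    VSB_SET_ADDRESS_LIMIT,     /* 6) restrict an address range            */
    VSB_XFER_INIT,             /* 7) set up a direct memory exchange      */
    VSB_XFER_WRITE,            /* 8) continuous output data exchange      */
    VSB_XFER_READ,             /* 9) continuous input data exchange       */
    VSB_XFER_STOP              /* 10) stop a running data transfer        */
} vsb_request_op_t;

typedef struct {               /* Request Frame Header 141                */
    uint8_t  dest_node;        /* Destination Node Address 143            */
    uint8_t  src_node;         /* Source Node Address 144                 */
    uint16_t task_id;          /* Task ID 145, tracks long operations     */
    uint16_t op;               /* one of vsb_request_op_t                 */
} vsb_req_header_t;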
Figure 15 depicts a new art example of the VSIC and Virtual System Bus connecting a simple version of new art Computer 300 consisting of two processor nodes, prior art Computer 200 and new art I/O Processor 301. A memory Response operation responds to a REQUEST initiated by either prior art Computer 200 or new art I/O Processor 301, depending on the functional characteristics of the operation, requester, and responder. Figure 15 also depicts the Response Frame Header 151 carried in the Response Frame 150, which conveys a memory operation RESPONSE to the REQUESTING platform to confirm any memory-to-memory operations performed between those platforms. Both platforms have a node address assigned at VSIC initialization even though only two nodes exist in the example of Figure 15.
In Figure 14, an off-platform operation was initiated by the prior art Computer 200: the P-390 Card 201, via the P-390 S/W Interface Layer 217, presented an operation request and the parameters to VSIC software 401. VSIC Software 401 checked the configuration validity and sent the operation request and information transfer to VSIC Hardware 303A. After that information was passed to the VSIC 303A, the VSIC Software 401 concluded with a command that closed the Request Frame 140. The VSIC hardware serialized and immediately transmitted the Request Frame 140 on the Virtual System Bus 600 to VSIC Hardware 303B, which received the Frame 140 and restored the information to parallel or byte-oriented format. VSIC Software 402 read the Frame from VSIC Hardware 303B and initiated the operation.
In Figure 15, after the operation is initiated, or completed as the case might be, Software 402 owes a response to the requester. When appropriate, the responding Software 402 in this case provides the necessary information to VSIC Hardware 303B and closes the Response Frame. The VSIC Hardware 303B immediately serializes and transmits the Response Frame 150 over the Virtual System Bus 600 to VSIC Hardware 303A. VSIC Software 401 reads the Response Frame 150 from the Hardware 303A and provides the response to the P-390 Software Interface 217, and the operation is complete. The Response Frame Header 151 always contains a Source Node Address 144 and a Destination Node Address 143, the same fields carried in a Request Frame 140.
Response Frame Header 151:
1) The Destination Node is the platform which receives the Response Frame;
2) The Source Node is the platform which sent the Response Frame;
3) A task ID is attached to application operations to lower the communication overhead by allowing the VSIC software 401 and 402 at each end to keep track of operations across the Virtual System Bus 600 which last longer than a simple request-response, or those operations which require specific memory address areas dedicated solely to that operation;
4) The Response Frame Header 151 conveys the Memory Interface Response Operation OPCODES 152.
Examples of Memory Interface Response Operations 152 (a sketch of matching these responses to their outstanding requests by task ID follows the list):
1) Fetch Data Response usually carries the requested data and response for the requesting node;
2) Store Data Response is a response that data was stored successfully or unsuccessfully as the case might be;
3) Fetch and Set Lock Response is a response, accompanied by data if successful, indicating either that the lock was set or that the lock attempt was unsuccessful, probably due to a previously set lock;
4) Partial Store Response is a response that data was stored successfully or unsuccessfully as the case might be;
5) Set Storage Key Response is a response that the storage key was set successfully or unsuccessfully as the case might be;
6) Set Address Limit Response is a response that the Address Limit was set successfully or unsuccessfully as the case might be;
7) Data Transfer Initialization Response is a response that the Data Transfer was initialized successfully or unsuccessfully as the case might be;
8) Data Transfer Write Response is a response that the Data Transfer Write proceeded successfully or unsuccessfully as the case might be;
9) Data Transfer Read Response is a response that the Data Transfer Read finished successfully or unsuccessfully as the case might be;
10) Stop Data Transfer Response is a response that the Data Transfer was terminated successfully or unsuccessfully as the case might be.
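Because every response carries the same task ID as its request, the requesting VSIC software can pair the two with a small pending-operation table. The sketch below assumes such a table; all names and sizes are chosen for illustration and are not from the patent.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint16_t task_id;   /* task ID shared by request and response       */
    uint16_t op;        /* request opcode still awaiting its response   */
    void    *buffer;    /* e.g., where Fetch Data Response data belongs */
    int      in_use;
} pending_op_t;

#define MAX_PENDING 64
static pending_op_t pending[MAX_PENDING];

/* Finds the outstanding request a response frame answers, or NULL when
 * the task ID is unknown (an error or stale frame). */
static pending_op_t *match_response(uint16_t task_id)
{
    for (size_t i = 0; i < MAX_PENDING; i++)
        if (pending[i].in_use && pending[i].task_id == task_id)
            return &pending[i];
    return NULL;
}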
Figure 16 depicts the new art VSIC Frame in serial transmission format: 140-S Request Frame and 150-S Response Frame, serialized for transmission on a serial Virtual System Bus 600. Figure 16 also depicts the parallel-format 140-P Request and 150-P Response Frames in the form computers require internally, since computers are organized in parallel byte, word, or multiple-word structures; the parallel format is manipulated and moved by the VSIC software 401 and 402.
The Serial Frame 140-S and 150-S format (a C sketch of the software delimiters follows this list):
1) Serial transmission encoding is the 8B/10B format defined in the prior art Fibre Channel and prior art ESCON serial transmission specifications;
2) The Serial Start of Frame Delimiter 161 is a prior art Fibre Channel SOFn (normal) transmission control character indicating to the serial receiver and decoder that a frame is beginning;
3) The Software Start of Frame 162 is a recognizable pattern of FF, FF, CC, and the transmitted word count in HEX (base 16) notation. The CC character identifies the format of the frame. The S/W SOF permits easy manipulation of incoming frames by providing an easily recognized boundary indication, since the serial SOF is meaningless in a parallel format;
4) The Frame Header 141 or 151 is described in Figures 14 and 15;
5) Memory Address 163 of the data to be stored or retrieved;
6) Data Words 164, if applicable;
7) The transmitted Software End of Frame Delimiter 165 is a recognizable pattern of FC, FC, and two reserved bytes sent by the transmitting Software 401 or 402;
8) Cyclic Redundancy Check word 166 (32 bits);
9) The Serial End of Frame Delimiter 167 is a prior art Fibre Channel EOFn (normal) transmission character indicating to the serial receiver and decoder that a frame is ending.
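The software delimiters above are simple byte patterns, so frame boundaries can be found without any serial-line context. The sketch below encodes the S/W Start of Frame 162; the patent gives the byte pattern but not the field widths, so the widths shown are assumptions for illustration.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t ff0, ff1;     /* 0xFF, 0xFF: easily recognized boundary  */
    uint8_t format;       /* 0xCC identifies the frame format        */
    uint8_t word_count;   /* transmitted word count (width assumed)  */
} sw_sof_t;

/* True when the four bytes form a Software Start of Frame 162. */
static bool is_sw_sof(const sw_sof_t *h)
{
    return h->ff0 == 0xFF && h->ff1 == 0xFF && h->format == 0xCC;
}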
The Parallel Frame 140-P and 150-P format:
1) In parallel byte format, the prior art serial SOFn and EOFn transmission delimiters are meaningless and are therefore removed when the frame is restored to parallel format. The Cyclic Redundancy Check (CRC) word was used by the serial reception circuits to calculate the arriving frame's CRC and compare that CRC with the CRC as transmitted. Once checked as good, the CRC is meaningless in parallel and is therefore removed when the frame is restored to parallel format. If bad, an error condition is indicated and the invalid-frame bit in the S/W EOF is set;
2) Software Start of Frame Delimiter 162 is a recognizable pattern to aid software 401 and 402 in manipulating frame transmit and receive buffers during the actual operation;
3) The Frame Header 141 or 151 is described in Figures 14 and 15;
4) Memory Address 163 of the data to be stored or retrieved;
5) Data Words 164, if applicable;
6) The Received Software End of Frame 168, with FC, FC, error bits if applicable, a 4-bit sequence count, and the received word count, is appended by the receiving VSIC Hardware 303A or 303B as the frame is restored to parallel format.
The Software End of Frame 168 (S/W EOF) is particularly useful to the Software 401 and 402 when allocating buffer space and determining where one frame ends, where another begins, and whether the frame is valid, since the receiving VSIC appends a 4-bit sequence count, invalid-frame and error bits if the frame was invalid, and the received word count to compare with the transmitted word count. In addition, the S/W EOF aids Software 401 and 402 by informing them that no further frames have been received. For example, if Software 401 or 402 attempts to read a frame from VSIC hardware 303A or 303B and no frame has been received, the last frame's Software EOF will be presented. If no new frames are present, the sequence count will remain the same as the last.
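That last point, detecting "no new frame", reduces to comparing the 4-bit sequence count in the S/W EOF against the last one consumed. The sketch below assumes a single byte holding the error bits and sequence count together, which is one plausible packing of the fields the patent names, not a disclosed layout.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t fc0, fc1;     /* 0xFC, 0xFC boundary pattern                 */
    uint8_t flags_seq;    /* error/invalid-frame bits + 4-bit seq count  */
    uint8_t word_count;   /* received word count, checked vs transmitted */
} sw_eof_t;

#define SW_EOF_SEQ_MASK 0x0F

/* If the hardware re-presents the previous frame's S/W EOF, the sequence
 * count is unchanged, so software knows no new frame has arrived. */
static bool new_frame_arrived(const sw_eof_t *eof, uint8_t last_seq)
{
    return (eof->flags_seq & SW_EOF_SEQ_MASK) != (last_seq & SW_EOF_SEQ_MASK);
}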
Figure 17 depicts a new art Clustered Computer 300 joined by the new art Virtual System Bus 600. In this example, five prior art Computers 200 are joined to three new art I/O Processors 301. The size of the cluster may be much larger: with the CC frame format, up to 128 CPU platforms may be joined to 128 I/O Processors or other processors.
The CPU platforms in this example are P-390 Card 201 based, since this is an extended mainframe complex, and the I/O Processors are Pentium based; however, the CPU platforms or I/O Processor 301 platforms may be Pentium or other microprocessor based. The Any-to-Any switch 170 directs frames by Destination Node Address 143. Node addresses are unique in a cluster. The Any-to-Any switch 170 is node addressed: ports receive from the Source Node Address 144 and switch a frame to a Destination Node Address 143. A frame arriving at a switch port has its Destination Node Address 143 checked against the configuration, and the frame is directed to that port/node. If the Destination Node Address 143 doesn't match a port on the switch 170 and another switch is configured, the frame is sent to the second Any-to-Any switch 170 node, or returned to the Source Node Address 144 as an error if no additional Any-to-Any switch 170 node is configured. When the frame arrives at that Any-to-Any switch 170, the frame is sent to the proper node address as determined by the configuration of that Any-to-Any Switch 170. In the event that the node address is invalid, an error frame is returned to the source node address. At initialization of the Cluster Computer 300, the node addresses are stored in the Any-to-Any Switch 170 configuration table. Any modification to the cluster, such as additional processor nodes being installed, requires a partial or full re-initialization. In the case of a partial re-initialization, normal operations may continue.
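The routing rule just described (deliver on a match, forward to the second switch on a miss, and return an error frame when no further switch exists) can be summarized in a few lines of C. The table layout and identifiers below are assumptions for illustration, including the 8-bit node address space suggested by the 128-plus-128 node limit.

#include <stdint.h>

typedef enum { ROUTE_LOCAL_PORT, ROUTE_NEXT_SWITCH, ROUTE_ERROR } route_t;

typedef struct {
    uint8_t port_of_node[256];  /* port for each node, set at cluster init */
    uint8_t node_known[256];    /* nonzero for configured node addresses   */
    int     second_switch;      /* nonzero if another switch 170 exists    */
} switch_cfg_t;

static route_t route_frame(const switch_cfg_t *sw, uint8_t dest_node,
                           uint8_t *out_port)
{
    if (sw->node_known[dest_node]) {
        *out_port = sw->port_of_node[dest_node];
        return ROUTE_LOCAL_PORT;     /* deliver to the matching port/node   */
    }
    if (sw->second_switch)
        return ROUTE_NEXT_SWITCH;    /* let the other switch's table decide */
    return ROUTE_ERROR;              /* error frame back to the source node */
}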
New art Cluster Computer 300 provides the benefits of prior art mainframe 100-MP in that the new art Cluster Computer 300 configuration demonstrates the superiority of mainframe processing architecture, which supports simultaneous operation of application programs sharing physical resources and information among several processors and, in conjunction with the superior I/O architecture, supports multiple paths to allow sharing and redundant access to information. The failure of a central processor, channel, or even an I/O Processor complex would not prevent program execution or access to the valuable information, and thus would not disable the function of the entire mainframe.
The new art Cluster Computer 300 overcomes the mainframe 100-MP's disadvantage due to the length of the electrical system bus and the limited distance over which binary signals may be driven reliably on an electrical bus. In Cluster Computer 300, the Central and I/O Processors need not be physically close together; they are implemented in commercially available components and require little if any expensive proprietary technology and packaging. The Personal Computer platforms require little or no expensive building facilities with environmental control, since personal computer technology is suited to the normal office environment. The Cluster Computer 300 footprint can be distributed throughout a normal office area or placed in a dedicated floor space. The distributed platforms present no more exposure to natural or man-made disaster than highly portable personal computers, and because of the Virtual System Bus 600, the CPU platforms and particularly the I/O Processing platforms may be remotely located in secure areas, thereby greatly minimizing loss or theft of critical information. All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
The invention now being fully described, it will be apparent to one of ordinary skill in the art that many changes and modifications can be made thereto without departing from the spirit or scope of the appended claims.

Claims

WHAT IS CLAIMED:
1. A mainframe computer comprising:
a central processor complex comprising:
a central processor;
main memory;
one or more mass storage devices;
a PC based system bus, coupling said central processor and said main memory to said one or more mass storage devices; and
a system bus controller for controlling communications on said system bus;
one or more I/O processor channel complexes, each for interfacing with one or more peripheral devices; and
a virtual system bus coupling said PC based system bus to said one or more I/O processor channel complexes at or near the bus speed of said PC based system bus.
2. A mainframe as in claim 1 wherein each of said one or more I/O processor channel complexes comprises: a PC, including a PC based memory bus.
3. A mainframe as in claim 2 wherein said central processor complex further comprises a virtual system interface controller coupled between said PC based memory bus of said central processor complex and said virtual system bus.
4. A mainframe as in claim 2 wherein each of said one or more I/O processor complexes further comprises a virtual system interface controller coupled between said PC based memory bus of said I/O processor channel complex and said virtual system bus.
5. A mainframe as in claim 2 wherein each of said one or more I/O processor channel complexes further comprises: one or more peripheral controllers, each coupled between said I/O processor channel complex PC based system memory bus and an external peripheral device.
6. A mainframe as in claim 1 wherein said virtual system bus comprises one or more fiber optic links.
7. A mainframe as in claim 1 wherein said virtual system bus comprises one or more ATM links.
8. A mainframe as in claim 1 wherein said virtual system bus comprises one or more ISDN links.
9. A mainframe as in claims 3 or 4 wherein said virtual system interface controllers provide for the assembly of data frames to be transmitted, concurrently with the reception of data frames.
10. A mainframe as in claim 9 wherein said virtual system interface controllers provide for the transmission of assembled data frames concurrently with the reception of data frames.
11. A mainframe as in claim 1 which further comprises a plurality of central processor complexes, each communicating with one or more shared I/O processor complexes.
12. A mainframe as in claim 11 wherein said shared I/O processor channel complex comprises a plurality of virtual system interface controllers, each associated with a virtual system interface controller of one of said plurality of central processor complexes.
13. A mainframe as in claim 1 comprising a plurality of such central processor complexes and a plurality of such I/O processor channel complexes, and a switch for controlling the coupling of at least one of said central processor complexes with at least some of said plurality of I/O processor channel complexes.
14. A mainframe as in claim 1 further comprising one or more redundant virtual system buses between said central processor complex and said I/O processor channel complex.
15. A mainframe as in claim 14 wherein said virtual system bus and said one or more redundant virtual system buses are selected from the group consisting of: fiber optic links, commercial telecommunication links, and private telecommunication links.
PCT/US1998/013532 1997-06-30 1998-06-29 Novel computer platform connection utilizing virtual system bus WO1999000744A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US88495597A 1997-06-30 1997-06-30
US08/884,955 1997-06-30

Publications (1)

Publication Number Publication Date
WO1999000744A1 true WO1999000744A1 (en) 1999-01-07

Family

ID=25385809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/013532 WO1999000744A1 (en) 1997-06-30 1998-06-29 Novel computer platform connection utilizing virtual system bus

Country Status (1)

Country Link
WO (1) WO1999000744A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9904636B2 (en) 2015-10-22 2018-02-27 John Boyd Modular ultra-wide internal bus mainframe processing units


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5107489A (en) * 1989-10-30 1992-04-21 Brown Paul J Switch and its protocol for making dynamic connections
US5572352A (en) * 1993-06-14 1996-11-05 International Business Machines Corporation Apparatus for repowering and monitoring serial links
US5524218A (en) * 1993-12-23 1996-06-04 Unisys Corporation Dedicated point to point fiber optic interface



Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1999505865

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase