CA1197019A - Multi-processor office system complex - Google Patents
- Publication number
- CA1197019A
- Authority
- CA
- Canada
- Prior art keywords
- bus
- memory
- unit
- units
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired
Landscapes
- Multi Processors (AREA)
Abstract
A multi-processor system formed of a plurality of intelligent processing nodes interconnected by one or more transmission lines to form a shared resource cluster. The system is comprised of a multi-conductor bus including data and address lines. A plurality of units each has a random access memory connected to the bus in respective slots along the bus. The connection includes apparatus for slot identification for providing a respective coded signal combination representing a slot address to each unit as it is connected to the bus. Each unit has apparatus for storing its slot address, and register apparatus for storing a range of memory addresses assigned to that unit. Further apparatus in each unit accesses the random access memory of that unit on the basis of addresses received on the bus which fall within the range stored in the register apparatus.
Description
This application is a division of Canadian Application Serial No. 396,675, filed in February 1982.
BACKGROUND OF THE INVENTION
The present invention relates to data and word processing systems in general, and more particularly, to a multi-terminal document preparation and data processing system of the shared-resource or clustered configuration type which combines the similar, yet divergent, technologies of word and data processing to perform a full range of business tasks both within the office and from remote locations.
Systems capable of performing data and word processing fall within the basic categories of (1) full-featured stand-alone units, (2) shared logic systems containing a number of display-based work stations sharing the logic of a central computer, and (3) shared-resource or clustered configurations in which intelligent terminals or work stations are interconnected to provide common access to a central computer, controller and/or disc storage. The advantages of stand-alone units reside in their ability to function independently of other units, so that they are not subjected to operating malfunctions as a result of the breakdown of other units; however, such stand-alone units have the disadvantage of a higher per-station cost and limited capability insofar as data storage and available features are concerned.
On the other hand, shared logic systems in which work stations share the logic of a central computer for storage, retrieval, text manipulation and printing reduce the cost per work station and provide a greater capability insofar as features and storage capability are concerned, but when the central computer malfunctions, the entire system is affected.
The most recent development in data and word processing office systems is directed to the shared-resource or clustered configuration approach, in which work stations are provided in a selected number on a modular basis and interconnected to provide a full sharing of capabilities throughout the system while maintaining a certain independence and isolation within each work station insofar as the effects of malfunctions in other work stations are concerned. This modular approach also permits the adaptation of such systems to offices of large and small size alike, permitting growth of the system in step with the need for increased services within the office.
Many systems have been proposed to handle word or data processing applications, but very few systems have been integrated to handle both applications. Those systems which have accomplished such integration have done so by interconnecting systems originally developed independently of each other, as opposed to a design that integrates both word and data processing from the outset. Thus, these semi-integrated systems fail to provide the degree of efficiency in either the data processing or the word processing area which is required at the level of present-day technology.
The advantages of modularity have been applied to various areas of system design in past years in an effort to accommodate the economic and functional requirements of business customers and to avoid the obsolescence which is built into non-expandable systems of a predetermined size. In both the shared-logic systems and the shared-resource or clustered configuration systems proposed to date, the basic requirement of modularity has been implemented by providing for expansion on a single level, generally through the ability to add work stations to the system as the need for greater capability arises. In clustered configuration systems, in which intelligent work stations or terminals are interconnected in a shared-resource system, as opposed to the shared-logic system in which non-intelligent or semi-intelligent terminals are connected to a central computer, the work stations are relatively costly, so that the addition of a work station each time increased capability is required places a heavy burden on the owner of the system. Thus, there is a present need for a system of the clustered configuration type in which modularity is provided
on two levels, so that system design and expansion can occur not only with the addition of work stations but also with the expansion of existing work stations, providing greater control over the size, capability and flexibility of the system.
BRIEF DESCRIPTION OF THE INVENTION
It is therefore a principal object of the present invention to provide an integrated word processing and data processing system of the shared-resource or clustered configuration type providing two levels of modularity, the highest level being in the cluster which is built up using nodes configured around a basic set of functional hardware modules.
It is another object of the present invention to provide a system of the type described in which all elements are highly programmable so that different requirements can be accommodated with software/firmware changes instead of hardware changes.
It is another object of the present invention to provide a system of the type described which minimizes product life cycle costs.
It is a further object of the present invention to provide a system of the type described which is capable of accommodating both low cost and high performance applications.
It is a further object of the present invention to provide a system of the type described in which modularity of functions provides for future hardware/software growth with minimal system impact.
It is a further object of the present invention to provide a system of the type described which eliminates the need for dedicated processors for certain functions, specific numbers of processors in a system, specific processor types, particular memory mapping or protection hardware, and other architectural dependencies.
The basic unit of the system in accordance with the present invention is an intelligent processing node provided in the form of a stand-alone intelligent unit providing the capability for document text entry,
modification, storage, and hard-copy output. A major feature of this system is the ability to connect up to sixteen nodes with a high-speed cluster communication link to form a cluster, which represents the first level of modularity in the system. Nodes within a cluster can share each other's peripheral resources, including floppy disc storage and output devices. This allows greater flexibility in the design and growth of the system and provides a basis for various advanced features such as electronic mail distribution and other data communication and processing features.
The work station is based on an intelligent terminal that is cable-connected to a node that contains one or more processing units, floppy discs and device control electronics. The intelligent terminal incorporates a keyboard, a raster-scan CRT display, and a read-write memory, and is driven by a microprocessor. The node can support a plurality of terminals, depending on desired work station response, but is also capable of supporting several types of peripherals, including a floppy disc, rigid disc, daisy-wheel printer, draft printer, twin-wheel printer, high-speed dedicated cluster link communications, commercial carrier data communications and a typesetter. The number and combination of peripherals per node is limited only by the device controller slots and controller channel availability in the node and by desired response times.
The memory which forms part of each general purpose processor in each node can be dual ported so that other processors in the node can access it. This feature tends to further reduce bus contention by allowing I/O controllers and other processors to deposit data directly into the local memory of the processor responsible for handling it. This also makes it possible to provide for auto-configuration of the memory address space available on the boards connected in common to the bus, which combined address space provides the appearance of a shared global memory. In this regard, each card is provided with a physical I/O address corresponding to the slot it occupies on the bus, and by use of this I/O address, the memory address block assignments for each card can be automatically established, as desired, by the system, simply by changing the assigned address data stored in a register on its and/or another card or cards on the bus. This eliminates the manual assignment of addresses via switches, which leads to possible operator error and malfunction of the system.
Thus, a system is provided in accordance with the present invention in which a first level of modularity is built into the cluster through the interconnection of a desired number of nodes via the cluster communications link, while a second level of modularity is provided within the node itself by permitting the varied connection of different numbers of intelligent terminals and other peripheral devices to the control pedestal.
More particularly, the present invention is a data processing system comprising a multi-conductor bus including data and address lines. A plurality of units each has a random access memory connected to the bus in respective slots along the bus. The connection includes apparatus for slot identification for providing a respective coded signal combination representing a slot address to each unit as it is connected to the bus. Each unit has apparatus for storing its slot address, and register apparatus for
storing a range of memory addresses assigned to that unit. Further apparatus in each unit accesses the random access memory of that unit on the basis of addresses received on the bus which fall within the range stored in the register apparatus.
These and other objects, features and advantages of the present invention will become more apparent from the detailed description of a preferred embodiment presented herein in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic diagram of one embodiment of the present invention forming a system cluster;
Figure 2 is a schematic diagram of the configuration of an intelligent processor node;
Figure 3 is a schematic diagram of the architectural arrangement of elements forming the intelligent processor node;
Figure 4 is a schematic diagram illustrating the available variations in configuration of a typical cluster;
Figure 5 is a schematic block diagram of the general purpose processor provided in each node;
Figures 5A through 5G are diagrams illustrating the on-board memory feature of the present invention;
Figure 6 is a schematic diagram of the serial multiplexer controller;
Figure 7 is a schematic diagram of a mass storage controller;
Figure 8 is a schematic diagram of the global memory arrangement;
Figure 9 is a schematic diagram illustrating the memory address auto-configuration and bus identification feature of the present invention;
and Figure 10 is a schematic circuit diagram of the cluster communication link configuration.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention provides a multi-terminal document preparation and distribution system which utilizes distributed processing to provide a flexible, reliable system architecture with facilities for creation, revision, storage, and distribution of various types of documentation, with capability for both word processing and data processing on an integrated basis. The system comprises one or more clusters of processor nodes to which one or more work stations and other peripheral devices may be selectively connected to provide two levels of modularity, which establishes a high level of flexibility in design and function within the system. Each node may have one or more intelligent display/
keyboard terminals with a self-contained microcomputer and sufficient memory and processing power to function as a stand-alone word processor work station or as an integral component in a shared-peripheral cluster configuration with other nodes.
Figure 1 illustrates the basic configuration of the system cluster, which includes two or more intelligent processing nodes 10 interconnected by one or more cluster communication links 15 to which the nodes 10 are connected by way of taps 14. To the intelligent processing nodes 10 there are connected, in selectively-variable combinations, various peripheral devices 12, including intelligent terminals, floppy disc storage units, rigid disc storage units, daisy-wheel printers, draft printers, typesetters, modems for remote communication with other systems, and similar peripheral devices.
The cluster is built around the cluster communication link 15, which is a passive coaxial data link supporting up to sixteen active taps 14 for connection of nodes to the link. Nodes may be connected anywhere along the data link 15, which provides a half-duplex multiplexed interconnection, with data transfers between nodes 10 being broken into packets which are interleaved with other inter-node transfers. The
cluster communication link 15 is the mechanism by which the intelligent work stations and other intelligent peripherals 12 connected to the nodes 10 interface with one another within the cluster. In terms of the cluster, a node 10 is defined as any element which attaches to the data link 15 via a tap 14 and is not restricted to a specific piece of hardware.
The primary purpose of the cluster communication link 15 is to provide a medium speed communications path for loosely coupling nodes 10 so that systems larger than a single node can be provided in a flexible manner. The use of a passive serial link 15 also provides improved reliability, permits physical dispersion of system elements, and increases the flexibility in system configuration. With the multi-layer configuration provided by the cluster, as seen in Figure 1, tightly-coupled high bandwidth processing takes place within the node 10, so that large systems can be partitioned into smaller functional units in a relatively simple manner. Data transfer on the cluster communication link 15 is provided in accordance with the high level data link control (HDLC) protocol and uses a rotating master scheme to avoid contention on the link, to provide load sharing, and to minimize the number of single point failures which can disable the link.
During normal system operation, mastership of the link 15 is continuously exchanged between active nodes. A single node will retain the link for a maximum of 50 ms without allowing other nodes the chance to assume mastership. Master exchange is accomplished by polling the other nodes to determine whether any wishes to use the link. The current master uses the results of the poll cycle to determine which node is to be selected as the next master and informs that node that it is to assume mastership. If no other node requests the use of the link during the poll cycle, the current master can retain control of the link. The actual polling is based on a round robin active/inactive queue scheme. The master node polls the following nodes in the active queue, which is a circular queue, until it finds one which wants to assume control of the link or all other nodes have been polled. If another node wants control, then mastership is passed to that node. If no other node wants the link, control is retained by the current master. In this way, no dedicated bus master or other bus controller is required, contributing to the simplicity of the cluster configuration.
The active queue contains all nodes which respond to a poll, while the inactive queue contains all possible nodes except those on the active queue. In order to join in the link communications, a node must be transferred from the inactive queue to the active queue. This is accomplished by having a flag in the active queue which indicates that nodes on the inactive queue are to be polled, which is performed once every two passes through the active queue, and these nodes are then added to the active queue if they respond. When the current master detects the flag in the active queue indicating that the inactive queue is to be polled, the inactive queue is used as the source of the poll addresses. Once a node is in the active queue, it remains there until it fails to respond to a poll three times, in which case it is then moved to the inactive queue.
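The rotating-master poll cycle and the active/inactive queue discipline described in the preceding two paragraphs can be sketched in software form. The following Python fragment is illustrative only: the class and method names, the use of a pass counter in place of the in-queue flag, and the poll callback are assumptions made for this sketch, not details taken from the patent, where the behaviour is implemented in node firmware over the HDLC link.

```python
class LinkScheduler:
    """Illustrative model of the rotating-master poll cycle (all names assumed)."""

    def __init__(self, all_node_ids, poll):
        self.active = []                      # nodes that answered a recent poll
        self.inactive = list(all_node_ids)    # all other possible nodes
        self.missed = {n: 0 for n in all_node_ids}
        self.passes = 0
        self.poll = poll                      # poll(node_id) -> True if that node wants the link

    def _scan_inactive(self):
        # Inactive nodes that answer a poll are promoted to the active queue.
        still_inactive = []
        for node in self.inactive:
            if self.poll(node):
                self.active.append(node)
                self.missed[node] = 0
            else:
                still_inactive.append(node)
        self.inactive = still_inactive

    def next_master(self, current_master):
        """One poll cycle run by the current master; returns the next master."""
        self.passes += 1
        if self.passes % 2 == 0:              # inactive queue is scanned every two passes
            self._scan_inactive()
        start = (self.active.index(current_master) + 1) if current_master in self.active else 0
        order = self.active[start:] + self.active[:start]    # circular walk of the active queue
        next_master = current_master          # default: nobody wants the link, keep it
        for node in order:
            if node == current_master:
                continue
            if self.poll(node):
                self.missed[node] = 0
                next_master = node            # hand mastership to the first node that wants it
                break
            self.missed[node] += 1
        # Demote nodes that have now missed three consecutive polls.
        for node in [n for n in self.active if self.missed[n] >= 3]:
            self.active.remove(node)
            self.inactive.append(node)
        return next_master
```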
Due to the rotating master concept of bus mastership, there will be only one master at a time, and any node requiring use of the link must wait until it is selected. However, during system powerup or in case of a failure in the current master, situations will exist where no node is master and one must be assigned in order to return to the normal mode of operation. When a node is first powered up, it can determine whether the link is active by listening for traffic on the link, or, if it is already active, it can determine that the master has failed if it does not receive a poll within two seconds. When a node detects that the link is inactive and it needs to use the link, it enters a contention mode in an attempt to acquire mastership. In the contention mode the node starts the poll cycle and listens to its own transmission as well as any responses. If the node hears its own transmission garbled, it enters a timeout routine with the delay based on the node identification and attempts the poll again if it has not seen any other transmission during the delay interval. If the node receives a response intended for another node, then it assumes that the other node has assumed control.
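The contention-mode entry just described can likewise be sketched as a small state routine. This is a minimal sketch under stated assumptions: the `link` object and its methods (hear_traffic, start_poll_cycle, listen_own_transmission, wait_response) are hypothetical names standing in for the node's link hardware and firmware, and the backoff formula merely illustrates a delay derived from the node identification.

```python
POLL_TIMEOUT_S = 2.0          # a live link should deliver a poll within two seconds

def backoff_delay(node_id, slot_s=0.01):
    # Assumed mapping from node identification to a unique backoff delay.
    return node_id * slot_s

def try_acquire_mastership(link, node_id):
    """Returns True if this node becomes link master, False if another node did."""
    # A newly powered-up node first listens for existing traffic on the link.
    if link.hear_traffic(timeout=POLL_TIMEOUT_S):
        return False                      # the link is active; wait to be polled instead

    while True:
        link.start_poll_cycle(node_id)    # begin polling as if this node were master
        echo = link.listen_own_transmission()
        if echo.garbled:
            # Collision: time out for a node-specific delay, and retry only if no
            # other transmission is seen during that delay interval.
            if link.hear_traffic(timeout=backoff_delay(node_id)):
                return False
            continue
        response = link.wait_response()
        if response is not None and response.addressed_to != node_id:
            return False                  # some other node has assumed control
        return True                       # the poll went out cleanly: this node is master
```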
The intercommunication system formed by the cluster illustrated in Figure 1 provides message routing between tasks in different nodes.
Thus, if a file is needed in one node which resides in memory in a second node, a request to read the file would be formatted into a message within the first node, the message including the identity of the first node and its reply exchange. The message would then be sent to the second node, where the request would be processed. The second node would then format the required file into a message, which would be sent back to the first node, completing the request.
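The request/reply exchange described above might look as follows in outline. The message layout and the cluster_link send/receive API are assumptions for illustration; the patent only states that a request carries the identity of the originating node and its reply exchange.

```python
from dataclasses import dataclass

@dataclass
class Message:
    # Minimal assumed message layout for an inter-node request.
    source_node: int
    reply_exchange: str
    operation: str
    payload: bytes = b""

def request_remote_file(cluster_link, my_node_id, file_owner_node, path):
    """Sketch of the inter-node file read described above (API names assumed)."""
    request = Message(source_node=my_node_id,
                      reply_exchange="file-replies",
                      operation=f"READ {path}")
    cluster_link.send(file_owner_node, request)             # routed over link 15
    reply = cluster_link.receive(exchange="file-replies")   # owner formats the file into a message
    return reply.payload
```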
Thus, the cluster provides a multi-level interconnection system of intelligent processing modules which combines the best features of stand-alone units and shared-logic systems. Peripheral units 12, such as intelligent terminals, forming part of a node or work station can operate on a stand-alone basis, or communicate with one another or with other intelligent peripheral units providing storage and other capabilities through the commonly-connected intelligent processing nodes 10, or communicate with other intelligent peripheral devices 12 connected to other intelligent processing nodes 10 via the cluster communication link 15. As seen in Figure 1, a plurality of intelligent processing nodes 10 (up to sixteen) can be interconnected via a single cluster communication link 15, and each intelligent processing node 10 can be connected via taps 14 to up to twenty-four cluster communication links 15. Such an arrangement provides multi-level flexibility in the
configuration of the cluster both from the point of view of size and of the available functions provided within the cluster. Thus, the cluster concept provides a system capable of inter-node communications and sharing of peripheral resources at a much lower per-terminal cost than typical shared-logic controller type systems.
Just as the cluster is built around the cluster communication link 15 using functional node types, the nodes 10 are built around a synchronous exchange bus 25 using functional hardware modules, as seen in Figure 2. The synchronous exchange bus 25 provides a tightly-coupled high bandwidth bus structure optimized for multi-processor use, and is a unified bus architecture which places minimum constraints on the internal structure of each node, allowing for a more long-term growth capability within the system.
Connected to the synchronous exchange bus 25 are one or more general purpose processors 30, a plurality of I/O subsystems 35 for connection between the bus 25 and one or more of the cluster communications links 15 or other peripherals and communication lines, a magnetic tape subsystem 40 connecting the bus 25 to one or more magnetic tape units 42, a floppy disc subsystem 45 connecting the bus to one or more floppy disc units 48, and a rigid disc subsystem 50 connecting the bus 25 to one or more rigid disc units 52. All of the modules connected to the bus 25, as seen in Figure 2, are stand-alone microprocessor based subsystems which facilitate the layering of functions, contributing to the flexibility of design within the system.
The synchronous exchange bus 25 can accommodate up to sixteen modules in any mixture. Thus, even though some combinations, such as sixteen general purpose processors 30 or sixteen rigid disc subsystems 50, might not be particularly useful, there are no hardware limitations to preclude such combinations. Due to the multi-master nature of the synchronous exchange bus 25, multi-processor systems can be built by simply connecting more than one general purpose processor 30 to the bus 25, and incorporation of local memory in the general purpose processor 30 allows it to function more effectively in a multi-processor environment by reducing the number of bus accesses.
One of the most important elements in a computer system is the bus structure that holds all of the hardware components together.
This bus structure contains the necessary signals to allow the various system components to interact with each other, i.e., it allows memory and I/O data transfers, direct memory accesses, generation of interrupts, and the like. The synchronous exchange bus 25 is the flexible bus structure used to interface a family of products which includes sixteen-bit single board computers, memory expansion boards, digital I/O boards and peripheral controllers. The structure of the synchronous exchange bus 25 is built upon the master/slave concept, where the master device in the system takes control of the bus 25 and the slave device, upon decoding its address, acts upon the command provided by the master. This handshake between master and slave device allows modules of different speeds to use the bus 25 and allows data rates of up to five million transfers per second in bytes, words or double words.
The synchronous exchange bus 25 comprises address and data lines and those control lines necessary to carry the signals which allow the various system components to interact with each other. The arbitration for bus mastership between the various system components connected to the bus 25 occurs synchronously, with priority being determined by physical location on the bus, as described more particularly in my copending Canadian Application Serial No. 393,928, filed January 12, 1982, entitled "Synchronous Bus Arbiter". Although the arbitration for bus mastership on the synchronous exchange bus 25 occurs synchronously, the data transfers occur asynchronously at a rate determined by the particular master/slave pair passing data across the bus at a given point in time.
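A short sketch may help fix the idea of slot-based priority. The following fragment is only an illustration of arbitration in which priority is determined by physical location; the exact priority ordering and signalling belong to the referenced "Synchronous Bus Arbiter" application and are not reproduced here, and the "lowest requesting slot wins" rule is an assumption made for the example.

```python
def arbitrate(requests):
    """Synchronous arbitration sketch: pick the bus master for the next cycle.

    `requests` maps slot number -> True if the module in that slot is requesting
    mastership on this arbitration clock. Priority by physical location is
    modelled here as 'lowest requesting slot wins' (an assumed ordering)."""
    requesting = [slot for slot, wants_bus in requests.items() if wants_bus]
    return min(requesting) if requesting else None

# Example: modules in slots 3 and 7 request the bus on the same arbitration clock.
assert arbitrate({3: True, 7: True, 12: False}) == 3
```

Once mastership is granted, the actual data transfer proceeds asynchronously at whatever rate the selected master/slave pair can sustain, independent of the arbitration clock.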
The synchronous exchange bus 25 is a time-division multiplexed bus with a unified bus architecture and no dedicated/required modules.
This type of bus minimizes configuration problems and provides the maximum flexibility in system/module design. In order to cover the wide range of applications desired for the system, and to allow future expansion in a flexible manner, the synchronous exchange bus 25 provides a high bandwidth, low cost, processor independent bus by using standard drivers/receivers and multiplexed address/data lines.
Figure 3 shows the architectural configuration for a typical node including an intelligent work station terminal 125, a printer/
typesetter unit 126, and a modem 127 connected to the intelligent processing node electronics in pedestal 100. Providing the terminal 125 and the pedestal 100 in physically-separate packages effectively separates the display and keyboard functions from the processing and communication functions, with the terminal 125 and the pedestal 100 being coupled by an asynchronous link 110. The pedestal 100 is in turn connected to the cluster communication link 15 by a tap 14 via line 18, as already described in connection with Figure 1.
The node electronics contains the general purpose processor 30, an I/O controller in the form of a serial multiplexer controller 35, a floppy disc controller 45, and a global memory 43, and, as already indicated, up to sixteen controller units may be connected to the synchronous exchange bus 25 in virtually any mixture, so that the particular combination illustrated in Figure 3 merely represents an example of a basic configuration available in accordance with the present invention.
As seen in Figure 4, which illustrates an example of a typical cluster, a double pedestal 101, 104 provides a work station node interconnecting four intelligent terminals 125, four floppy disc units 48 and a printer
126a via the cluster communication link 15. At the same time an extended storage node 102 connects four bulk storage units 44 to the link 15, while single pedestal 103 provides a pair of terminals 125, four floppy disc units 48 and a printer 126a. The single pedestal 100 provides a terminal 125, two floppy disc units 48, a printer 126a and a modem 127, and the extended telecommunication node 105 provides for communication to remote systems via modem 127 as well as access to bulk storage.
With such flexibility in the design of the system, the system can be easily configured to the specific needs of each individual user on both a present and a future basis.
The work station terminal 125 is essentially a standard intelligent terminal of the type commonly available in the industry, such as the Harris standard terminal manufactured and sold by Harris Corporation.
Such a standard terminal typically includes a processor module associated with ROM, RAM and a serial I/O port.
As seen in Figure 5, the general purpose processor 30 provided in each node 100 comprises an available microprocessor, such as an Intel 8086 microprocessor, a RAM 302 capable of providing 128 K
bytes of storage, a bootstrap ROM 303, an I/O port 304 for coupling to a remote diagnostic facility, a synchronous exchange bus interface 306 and a synchronous exchange bus interrupt interface 305, along with the standard timing circuits 307 associated with the microprocessor 301.
The RAM memory 302 is divided into two equal memory areas of 64 K
each, which has special advantages in a multi-processor configuration.
Where only a single general purpose processor 30 is provided in the node, the division of the RAM memory 302 is of no special consequence, since together the two portions form a contiguous 128 K memory with no apparent boundary at the 64 K point. By providing the general purpose processor with a portion of dual ported memory, many small systems can be built without a global memory, since the dual ported memory looks just like a shared global memory to the other elements of the system. When a global memory 43 is provided in the pedestal, the general purpose processor 30 will send each memory request either to its on-board memory area (RAM 302) or to the off-board global memory 43, depending on the address for that request. In a single processor configuration, there is no effective boundary at the end of the general purpose processor's memory 302 (the 128 K point), since the global memory 43 would respond to the next address (128 K + 1 byte). Again, programs and data could span this boundary without consequence, except for perhaps a slightly longer access time due to access over the synchronous exchange bus 25.
However, the 64 K/64 K split of the RAM memory 302 in the general purpose processor 30 does become a consideration in a multi-processor configuration. For example, the first 64 K of the memory 302 in a first general purpose processor is made accessible to, and only to, the processor residing on the same card. Then, the second 64 K portion of the memory 302 acts exactly as if it were a global memory on the general purpose processor card itself, which can be read from or written into by any and every other general purpose processor or I/O controller in the system.
Thus, each general purpose processor actually contains a microprocessor plus 64 K of local memory and 64 K of global memory. This memory splitting feature in a multi-processor configuration provides considerable advantages in the handling of tasks within the system, as will be described in conjunction with Figures 5A through 5G.
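The way a processor steers each memory request on the basis of its address can be summarized in a short sketch. This is a simplified model under stated assumptions: the constants, the return labels and the idea of passing the programmed global base as a parameter are illustrative only; the actual decision is made by address-decode hardware on the card, not by software of this form.

```python
K64 = 64 * 1024

def route_memory_request(addr, global_base):
    """Sketch of how a general purpose processor might steer one memory request.

    `global_base` is the programmed base address of this card's own on-board
    global 64 K region; addresses outside both on-board regions go out over the
    synchronous exchange bus to another card or to the global memory unit 43."""
    if addr < K64:
        return "on-board local"       # private 0-64 K region, never uses the bus
    if global_base <= addr < global_base + K64:
        return "on-board global"      # this card's shareable region, still no bus cycle
    return "exchange bus"             # another card's global region or memory 43

# Example: a card whose global region is programmed to start at 128 K.
print(route_memory_request(0x0100, global_base=2 * K64))    # on-board local
print(route_memory_request(0x21000, global_base=2 * K64))   # on-board global
print(route_memory_request(0x60000, global_base=2 * K64))   # exchange bus
```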
Figure 5A schematically shows a single processor system executing three assigned tasks, A, B and C. In a single processor system, the assignment of tasks is controlled by a simple multi-tasking algorithm since there is only the single processor to handle the various tasks. Thus, the processor simply selects one of the tasks that it knows about for execution. The situation is only slightly more involved when two processors are available within the system, as seen in Figure 5B.
Here there may be, but not necessarily, a choice in processors to be assigned to perform the tasks A, B and C. For example, if tasks A
and B are assigned to CPU 1 and task C is assigned to CPU 2, then there is no choice in assignment. CPU 1 operates in a multi-tasking mode as it did before, and CPU 2 operates only on the single task C. Conceptually, the two processors CPU 1 and CPU 2 are still totally independent, even though they contend for the common bus to which they are connected and their tasks are in the same memory.
If CPU 1 and CPU 2 are allowed to know about each other's software tasks, then there is a choice to be made in processor assignment.
For example, if tasks A, B and C are allowed to execute on either CPU 1 or CPU 2, whichever is available, as depicted in Figure 5C, then the only complication is to guarantee that CPU 1 and CPU 2 are not executing the same task at the same time. They may alternate execution of a given task, or execute different tasks at the same time, without confusion.
Each simply selects a task that is ready to execute, but is not already executing, from the list of tasks it knows about (in this case, tasks A, B and C). However, in a multi-processor system where the processors are all connected to a common bus, the traffic on the bus carries the load for all processors. If the hardware were configured with the processors and memory as independent units on a common bus, as seen in Figure 5D, the bus would rapidly become a throughput bottleneck. This is especially true as additional processors are added to the system on the common bus. On the other hand, if each processor had its software in its own private on-board memory, it would have no need to use the bus. Performance would improve for this reason; however, this would totally prevent the ability to assign a task to more than one processor. The multi-processor/global memory concept of the present invention, in which the on-board memory associated with each general purpose processor is subdivided into separate 64 K memory areas to provide an on-board global memory area on each board, offers a solution to this problem, as demonstrated in Figure 5E, providing a system capable of supporting many processors with very little system bus contention.
If a copy of the system software is placed in global memory, then all but one of the processors in the multi-processor system will use the synchronous exchange bus to execute its system code. If, however, an identical copy is placed in an identical position of each general purpose processor's local memory (the 0 - 64 K region), then each processor will have its own copy of the software and will stay off the synchronous exchange bus. Since the copies are identically placed, each processor views the system software as if it were sharing one copy in global memory.
This arrangement, as shown in Figure 5E, leaves a system capable of supporting many processors with very little system bus contention.
Synchronous exchange bus loading in such an arrangement results primarily from I/O traffic and communication between tasks that reside on different general purpose processors. This inter-task bus communication can be minimized by grouping highly-interactive software tasks in the same general purpose processor's global memory space.
In accordance with the present invention, the global 64 K
memory portion of the RAM 302 has a programmable base address, while the local 64 K portion always starts at address 0. This allows the global memory portions of the RAMs 302 in each general purpose processor to be stacked to form a large contiguous addressing space. If software programs are loaded without care into global memory, as seen in Figure 5F, unnecessary synchronous exchange bus traffic will result from the processors going off-board to execute their assigned tasks. However, since a CPU reference to global memory residing on the same card as the requesting processor does not use the synchronous exchange bus, by taking more care in selecting the memory position for software, i.e., by loading software into the proper area of memory so that it resides on the same card as its controlling processor, the synchronous exchange bus traffic can be significantly reduced, as shown in Figure 5G.
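The stacking of the per-card global regions and the careful placement of software can be sketched as follows. The linear packing of base addresses and the function names are assumptions made for illustration; the patent requires only that the base of each card's global 64 K portion be programmable so that the portions can be made contiguous, and that tasks be loaded onto the card of their controlling processor where possible.

```python
K64 = 64 * 1024

def assign_global_bases(num_processor_cards, first_base=K64):
    """Stack each card's on-board global 64 K region into one contiguous space.

    The choice of first_base and the simple linear packing are assumptions;
    any non-overlapping assignment above the 0-64 K local regions would do."""
    return {card: first_base + card * K64 for card in range(num_processor_cards)}

def place_task(owner_card, offset, bases):
    """Return the absolute load address that puts a task in the global region
    physically residing on its controlling processor's card, so that processor
    can execute it without synchronous exchange bus cycles (offset < 64 K)."""
    return bases[owner_card] + offset

bases = assign_global_bases(3)     # e.g. cards 0..2 -> bases at 64 K, 128 K, 192 K
print(bases)
print(hex(place_task(owner_card=1, offset=0x2000, bases=bases)))
```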
This special memory feature of the present invention also facilitates the handling of interrupts to the processors connected to the synchronous exchange bus 25. When dealing with multiple processors, it becomes necessary to alert other processors when an event has occurred, an I/O is complete, a task is ready to run, and the like. This is typically done using interrupts. It is highly desirable, however, to interrupt only those processors that need to be made aware of the event. Even more important is the ability to inform the processor of the reason for its being interrupted, so that it need not search tables, lists, etc., looking for the reason. This is accomplished by an Interrupt Coupling and Monitoring System, as disclosed in copending Canadian Application Serial No. 394,2…, filed January 15, 1982, and assigned to the same assignee as the present application.
As seen in Figure 6, the serial multiplexer controller 35 incorporates a Z-80 microprocessor 350, RAM memory 351, ROM memory 352, four independent serial interfaces 353, a system data channel interface 354, a local direct memory access controller 355, and the standard CPU support logic 356 and timing generators 357 associated with this type of processor system. The basic objective of the serial multiplexer controller is to provide the real time I/O processing for the system so that the general purpose processors 30 do not have to contend with the interrupt and real time processing/latency requirements of the system. Another objective of the serial multiplexer controller is to provide a flexible interface so that different communication and peripheral interfaces can be handled by a common controller, either directly or via simple adapters.
Each serial multiplexer controller 35 provides four independent serial interfaces, which may be used for connection to the cluster communication link 15, as shown in Figure 3, and for connection to work station terminals 125, printer/typesetters 126, modems 127 and similar intelligent peripheral devices in any mixture, as desired. As in the case of the general purpose processor 30, one or more serial multiplexer controllers 35 can be provided in each pedestal connected to the common synchronous exchange bus 25, depending upon design requirements, to provide more or less interface capacity.
As shown in Figure 7, the mass storage controllers connected to the synchronous exchange bus 25 in each node are very similar in configuration to the serial multiplexer controller 35, except that they interface to mass storage devices, such as a floppy disc drive, rigid disc drive, magnetic tape drive and the like. In this regard, a processor 701 is connected to a ROM 702 and RAM 703 via a processor bus 705, to which there are also connected an address register 706, data input register 707, data output register 708, interrupt circuitry 709 and storage interface circuitry 710 providing an interface to the storage devices.
By interfacing the mass storage devices with an intelligent controller, it is possible to remove some of the real time processing from the general purpose processors connected to the bus 25 and also to make the interfaces to all mass storage devices look alike, so that the rest of the system is not aware of the device characteristics. This also allows the mass storage controller to perform high level functions and relieve some of the processing requirements of the system.
The global memory unit 43, which may be optionally connected to the synchronous exchange bus 25, as seen in Figure 8, to provide additional memory in the node, is basically a RAM with software-controlled address range setting. Since all other units connected to the bus 25 contain processors, their addressing is easily configured by the on-board processors; however, the global memory, being a non-intelligent unit, must have an external input to set its address allocation. This is accomplished by configuring the RAM to include control registers which another processor can read from and write into in order to control the global memory address range assigned thereto.
In addition to the problem of how to address the control registers of the global memory, all units connected to the bus 25 in each node share a common problem of establishing initial communications before memory addresses are assigned. In this regard, it is not desirable to use fixed memory addresses, since this requires a discontinuity in the memory space and also additional decoding logic to decode the large number of bits in the memory address. An additional problem is how to set the memory addresses to be used on each card. In past systems, this has been accomplished either by using switches on the circuit boards, which require operator setting to configure the system, or by assigning addresses by device type, which requires a much larger number of addresses than would ever be present in a single system and limits future expansion of the system.
These problems are solved in accordance with the present invention in the manner shown in Figure 9. The synchronous exchange bus 25 includes a plurality of data/address lines to permit addressing of units on the bus and to effect transfer of data to and from such units.
The ASYNC line indicates when address information is stable on the bus and the DSYNC line indicates when data is stable on the bus. The bus 25 also includes bus identification lines BID(0) and BID(1) by which physical I/O addresses are assigned to each card as it is plugged into the bus. In this regard, a plurality of conductors C on each card engage contacts D which are connected to the bus identification lines BID(0) and BID(1) in a coded combination representing the physical address of the slot on the bus, so that this address is automatically assigned to the card as it is plugged in.
The I/O or slot address of each card is stored in a register R2 on the card, which is also hardwired to provide additional coding to identify the card type. This allows other cards to determine what type of card is in each slot simply by reading the contents of register R2 on that card.
In place of the manually-operable switches used to set the memory address assignment for each card, as typically provided in the prior art, each card connected to the bus 25 also includes a register R1 in which the memory address assignment for that card is stored. Thus, when an address appears on the data/address lines, as indicated by the ASYNC line, and the state of the IOEN line indicates that this address is a memory address, the received address is checked against the assigned block of memory addresses in register R1 to determine whether memory space on that particular card is being addressed. On the other hand, if the state of the IOEN line indicates that the received address is an I/O address, then that address is compared to the contents of the R2 register. Once the card has been accessed via its I/O address, a new memory address assignment can be written into the register R1. Of course, the contents of register R1 can be changed by its on-board processor at any time. These operations are carried out under the control of the on-board processor and suitable logic circuitry, as represented, for example, by the arbitrator 310 in the processor 35 as shown in Figure 8.
Thus, since each card is automatically assigned a fixed I/O address according to the slot it occupies on the bus 25, the memory address space assigned to that card can be varied, permitting reconfiguration of the memory space in the system, simply by addressing the board via its slot or I/O address and placing in the address register on the card the new memory address assignment for that card. In this way, all card slots have access to their slot number and to information concerning the other cards connected to the bus, and have the ability to assign memory addresses.
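The per-card decode just described can be modelled in a few lines of software. This is a minimal sketch, assuming a particular packing of the slot address and hardwired type code into R2 and a (low, high) representation of the block held in R1; the register and signal names (R1, R2, IOEN) follow the text, but the encodings and the Python structure itself are assumptions, not the patent's hardware.

```python
class CardDecoder:
    """Illustrative model of the per-card slot-address and memory-range decode."""

    def __init__(self, slot_contacts, card_type_code):
        # The slot address is picked up from the coded BID contacts at plug-in time;
        # the type code is hardwired on the card (packing into one register is assumed).
        self.r2 = (card_type_code << 4) | slot_contacts
        self.r1 = (0, 0)                              # assigned memory block, initially empty

    def matches(self, address, ioen):
        if ioen:                                      # IOEN indicates an I/O (slot) address
            return (address & 0x0F) == (self.r2 & 0x0F)
        low, high = self.r1                           # otherwise compare against the block in R1
        return low <= address < high

    def write_r1(self, low, high):
        # Another processor, having selected this card by its slot address,
        # rewrites R1 to reassign the card's block of memory addresses.
        self.r1 = (low, high)

# Example: the system gives the card in slot 5 the 256 K - 320 K block.
card = CardDecoder(slot_contacts=0b0101, card_type_code=0b0010)
card.write_r1(256 * 1024, 320 * 1024)
print(card.matches(300 * 1024, ioen=False))   # True: inside the assigned memory range
print(card.matches(0b0101, ioen=True))        # True: the card's own slot address
```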
This type of operation permits the system to configure itself and results in fewer operator errors than the setting of switches to assign memory addresses, as is typical in the prior art. Further, the operators do not need to know about the internal details of the system. It also increases the reliability of the system by allowing it to automatically reconfigure around failed modules and continue operation.
Figure 10 shows the details of the cluster communication link, which features a passive coaxial line to increase the system reliability and to provide DC isolation so that a common system ground becomes unnecessary. As indicated with respect to Figure 1, up to sixteen nodes may be connected to the link 15 via transformer taps 14.
While I have shown and described several embodiments of the present invention, it is understood that the invention is not limited to the details shown and described herein but is susceptible of numerous changes and modifications as known to one of ordinary skill in the art, and I therefore do not wish to be limited to the details shown and described herein but intend to cover all such changes and modifications obvious to those skilled in the art.
2~ These and other obj~cts, features and advantayes oE the 29 present invention will become more apparent from the detailed description of a preferred embodiment presented here~n in 31 conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic diagram of one embodlment of the present invention forming a system cluster;
Figure 2 is a schematic diagram of the configuration of an intelligent processor node;
Figure 3 is a schematic diagram of the architectural arrange-ment of elements forming the intelligent processor node;
Figure 4 is a schematic diagram illustrating the available variations in configura~ion of a typical cluster;
Figure 5 is a schematic block diagram of the general purpose processor provided in each node;
Figures 5A through 5G are diagrams illustrating the on-board memory feature of the present invention;
Figure 6 is a schematic diagram of the serial multiplexer controller;
Figure 7 is a schema~ic diagram oi a mass storage controller;
Figure 8 is a schematic diagram of the global memory arrangement;
Figure 9 is a schematic diagram illustrating the mernory address auto-configuration and bus identification feature of the present invention;
and Figure lO is a schematic circuit diagram of the cluster com-munication link configuration.
DESCRIPTION OF THE Pl~EFERRED EMBODIMENT
_ _ _ _ _ The present invention provides a multi-terminal docu~lent preparation and distribution s~stem which utilizes dis~ributed processing to provide a flexible, reliable system architecture with facilities for creation, revision, storage, and distribution of various types of docu-mentation with capability for both word processing and data processing on an integrated basis. The system comprises one or more clusters of processor nodes to which one or more work stations and other peripheral devices may be selectively connected to provide two levels of modularity which establishes a high level of Flexibility in design and function within the system. Each node may have one or more intelligent display/
keyboard terminals with a self-contained microcomputer and sufficient memory and processing power to function as a stand-alone word processor wor~s station or as an integral component in a shared-peripheral cluster configuration with other nodes.
Figure 1 illustrates the basic configuration of the system cluster which includes two c,r more intelligent processing nodes 10 interconnected by one or more cluster communication links 15 to which the nodes 10 are connected by way of taps 14. To the intelligent processing nodes 10 there are connected in selectively-variable combinations various peripheral devices 12, including intelligent terminals, floppy disc storage units, rigid disc storage units, daisy-wheel printers, draft printers, typesetters, modems for remote communication with other systems, and simllar peri-pheral devices.
The cluster is built around the cluster communication link 15 which is a passive coaxial data link supporting up to sixteen active taps 14 for connection of nodes to the link. Nodes may be connected anywhere along the data link 15, which provides a half-duplex multiplexed interconnection, with data transfers between nodes 10 being broken into packets which are interleaved with other inter-node transfers. The
cluster communication link 15 is the mechanism by which the intelligent work stations and other intelligent peripherals 12 connected to the nodes 10 interface with one another within the cluster. In terms of the cluster, a node 10 is defined as any element which attaches to the data link 15 via a tap 14 and is not restricted to a specific piece of hardware.
The primary purpose of the cluster communication link 15 is to provide a medium speed communications path for loosely coupling nodes 10 so that systems larger than a single node can be provided in a flexible manner. The use of a passive serial link 15 also provides improved reliability, physical dispersion of system elements, and increases the flexibility in system configuration. With the multi-layer configuration provided by the cluster, as seen in Figure 1, tightly-coupled high bandwidth processing takes place within the node 10 so that large systems can be partitioned into smaller functional units in a relatively-simple manner. Data transfer on the cluster communication link 15 is provided in accordance with high level data link control (HDLC) protocol and uses a rotating master scheme to avoid contention on the link, to provide load sharing and minimize the number of single point failures which can disable the link.
During normal system operation, mastership of the link 15 is continuously exchanged between active nodes. A single node will retain the link for a maximum of 50 ms without allowing other nodes the chance to assume mastership. Master exchange is accomplished by polling the other nodes to determine whether any of them wishes to use the link. The current master will use the results of the poll cycle to determine which node is to be selected as the next master and will inform that node that it is to assume mastership. If no other node requests the use of the link during the poll cycle, the current master can retain control of the link. The actual polling is based on a round robin active/inactive queue scheme. The master node polls the following nodes in the active queue, which is a circular queue, until it finds one which wants to assume control of the link or all other nodes have been polled. If another node wants control, then mastership is passed to that node. If no other node wants the link, the control is always retained by the current master. In this way, no dedicated bus master or other bus controller is required, contributing to the simplicity of the cluster configuration.
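A minimal sketch in C of the poll-based master exchange just described; the queue layout, node identifiers and the poll primitive are illustrative assumptions rather than details taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_NODES 16

typedef struct {
    int ids[MAX_NODES];    /* circular active queue of node addresses */
    size_t count;          /* number of nodes currently in the queue  */
    size_t cursor;         /* index of the current master             */
} active_queue_t;

/* Poll the nodes following the current master around the circular active
 * queue.  The first node that wants the link becomes the next master; if
 * none does, the current master retains control.  `poll` is a hypothetical
 * primitive that sends a poll frame and reports whether the addressed node
 * asked for the link. */
int select_next_master(active_queue_t *q, int current_master,
                       bool (*poll)(int node_id))
{
    for (size_t i = 1; i < q->count; i++) {
        size_t idx = (q->cursor + i) % q->count;
        if (poll(q->ids[idx])) {
            q->cursor = idx;        /* mastership passes to this node */
            return q->ids[idx];
        }
    }
    return current_master;          /* no requests: keep the link     */
}
```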
The active queue contains all nodes which respond to a poll while the inactive queue contains all possible nodes except those on the active queue. In order to join in the link communications, a node must be transferred from the inactive queue to the active queue. This is accomplished by having a flag in the active queue which indicates that nodes on the inactive queue are to be polled, which is performed once every two passes through the active queue, and these nodes are then added to the active queue if they respond. When the current master detects the flag in the active queue indicating that the inactive queue is to be polled, then the inactive queue is used as a source of the poll addresses. Once a node is in the active queue, it remains there until it fails to respond to a poll three times, in which case it is then moved to the inactive queue.
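The queue bookkeeping might look like the following sketch; only the rules stated above (the inactive queue is polled once every two passes through the active queue, and a node is demoted after three unanswered polls) come from the text, and the data layout is assumed.

```c
#define MISS_LIMIT 3            /* unanswered polls before demotion */

typedef struct {
    int is_active;              /* 1 = on the active queue, 0 = on the inactive queue */
    int misses;                 /* consecutive polls this node failed to answer       */
} node_state_t;

/* Update one node's queue membership after it has been polled. */
void record_poll_result(node_state_t *node, int responded)
{
    if (responded) {
        node->misses = 0;
        node->is_active = 1;    /* an inactive node that answers joins the active queue */
    } else if (node->is_active && ++node->misses >= MISS_LIMIT) {
        node->is_active = 0;    /* three misses: back to the inactive queue */
        node->misses = 0;
    }
}

/* The inactive queue is only used as a source of poll addresses on every
 * second pass through the active queue. */
int inactive_poll_due(unsigned pass_number)
{
    return (pass_number % 2) == 0;
}
```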
Due to the rotating master concept of bus mastership, there will be only one master at a time and any node requiring use of the link must wait until it is selected. However, during system powerup or in case of a failure in the current master, situations will exist where no node is master and one must be assigned to return to the normal mode of operation. When a node is first powered up, it can determine if the link is active by listening for traffic on the link, or, if it is already active, it can determine that the master has failed if it does not receive a poll within two seconds. When a node detects that the link is inactive and it needs to use the link, it enters a contention mode in an attempt to acquire mastership. In the contention mode the node starts the poll cycle and listens to its own transmission as well as any responses. If the node hears its own transmission garbled, it enters a timeout routine with the delay based on the node identification and attempts the poll again if it has not seen any other transmission during the delay interval. If the node receives a response intended for another node, then it assumes that the other node has assumed control.
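A hedged sketch of the contention behaviour described above; the two-second poll timeout follows the text, while the backoff constant and the link primitives are assumptions for illustration.

```c
#include <stdbool.h>

#define POLL_TIMEOUT_MS 2000u   /* no poll seen for 2 s: master presumed failed      */
#define BACKOFF_UNIT_MS   10u   /* assumed spacing used to derive the per-node delay */

/* Hypothetical link primitives supplied by the node hardware/firmware. */
typedef struct {
    bool (*heard_poll_within)(unsigned ms);  /* any poll traffic in the last `ms`?   */
    bool (*poll_and_listen)(void);           /* start a poll cycle; true if our own
                                                transmission was heard ungarbled     */
    bool (*other_master_seen)(void);         /* a response meant for another node    */
    void (*delay_ms)(unsigned ms);
} link_ops_t;

/* Attempt to acquire mastership of an apparently idle link.  A garbled own
 * transmission means a collision with another contender, so back off for a
 * delay derived from the node identification and try again, unless another
 * node is seen taking control in the meantime. */
bool contend_for_link(const link_ops_t *ops, unsigned node_id)
{
    if (ops->heard_poll_within(POLL_TIMEOUT_MS))
        return false;                   /* link is alive; wait to be polled instead */

    for (;;) {
        if (ops->poll_and_listen())
            return true;                /* our poll went out cleanly: we are master */
        if (ops->other_master_seen())
            return false;               /* another node has assumed control         */
        ops->delay_ms(BACKOFF_UNIT_MS * (node_id + 1u));
    }
}
```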
The intercommunication system formed by the cluster illustrated in Figure 1 provides message routing between tasks in different nodes.
Thus, if a file is needed in one node which resides in memory in a second node, a request to read the file would be formatted into a message within the first node, the message including the identity of the first node and its reply exchange. The message would then be sent to the second node where the request would be processed. The second node would then format the required file into a message, which would be sent back to the first node, completing the request.
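The request side of that exchange might be modelled as below; only the two fields named in the text (the identity of the requesting node and its reply exchange) are taken from the description, and the remaining field names and the send primitive are assumptions.

```c
#include <stdint.h>
#include <string.h>

/* An inter-node read request carrying the requester's identity and the
 * exchange on which the reply is to be delivered. */
typedef struct {
    uint8_t  src_node;        /* identity of the first (requesting) node  */
    uint8_t  dst_node;        /* node whose memory holds the file         */
    uint16_t reply_exchange;  /* where the formatted reply should be sent */
    char     file_name[32];   /* assumed fixed-length name field          */
} file_request_t;

/* Format a read request and hand it to a caller-supplied transmit routine
 * (standing in for the packetized transfer over the cluster link). */
void request_file(uint8_t self, uint8_t holder, uint16_t reply_exchange,
                  const char *name,
                  void (*send)(uint8_t dst, const void *msg, size_t len))
{
    file_request_t req = {0};
    req.src_node = self;
    req.dst_node = holder;
    req.reply_exchange = reply_exchange;
    strncpy(req.file_name, name, sizeof req.file_name - 1);
    send(holder, &req, sizeof req);
}
```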
Thus, the cluster provides a multi-level interconnection system of intelligent processing modules which combines the best features of stand-alone units and shared-logic systems. Peripheral units 12, such as intelligent terminals, forming part of a node or work station can operate on a stand-alone basis or communicate with one another or with other intelligent peripheral units providing storage and other capabilities through the commonly-connected intelligent processing nodes 10 or communicate with other intelligent peripheral devices 12 connected to other intelligent processing nodes 10 via the cluster communication link 15. As seen in Figure 1, a plurality of intelligent processing nodes 10 (up to sixteen) can be interconnected via a single cluster communication link 15 and each intelligent processing node 10
can be connected via taps 14 to up to twenty-four cluster communication links 15. Such an arrangement provides multi-level flexibility in the
configuration of the cluster both from the point of view of size and the available functions provided within the cluster. Thus, the cluster concept provides a system capable of inter-node communications and sharing of peripheral resources at a much lower per-terminal cost than typical shared-logic controller type systems.
Just as the cluster is built around the cluster communication link 15 using functional node types, the nodes 10 are built around a synchronous exchange bus 25 using functional hardware modules, as seen in Figure 2. The synchronous exchange bus 25 provides a tightly-coupled high bandwidth bus structure optimized for multi-processor use, and is a unified bus architecture which places minimum constraints on the internal structure of each node, allowing for a more long-term growth capability within the system.
Connected to the synchronous exchange bus 25 are one or more general purpose processors 30, a plurality of I/O subsystems 35 for connection between the bus 25 and one or more of the cluster communication links 15 or other peripherals and communication lines, a magnetic tape subsystem 40 connecting the bus 25 to one or more magnetic tape units 42, a floppy disc subsystem 45 connecting the bus to one or more floppy disc units 48, and a rigid disc subsystem 50 connecting the bus 25 to the one or more rigid disc units 52. All of the modules connected to the bus 25, as seen in Figure 2, are stand-alone microprocessor based subsystems which facilitate the layering of functions, contributing to the flexibility of design within the system.
The synchronous exchange bus 25 can accommodate up to sixteen modules in any mixture. Thus, even though some combinations, such as sixteen general purpose processors 30 or rigid disc subsystems 50, might not be particularly useful, there are no hardware limitations to preclude such combinations. Due to the multi-master nature of the synchronous exchange bus 25, multi-processor systems can be built by simply connecting more than one general purpose processor 30 to the bus 25, and incorporation of local memory in the general purpose processor 30 allows it to function more effectively in a multi-processor environment by reducing the number of bus accesses.
One of the most important elements in a computer system is the bus structure that holds all of the hardware components together.
This bus structure contains the necessary signals to allow the various system components to interact with each other, i.e., it allows memory and I/O data transfers, direct memory accesses, generation of interrupts, and the like. The synchronous exchange bus 25 is the flexible bus structure used to interface a family of products which include sixteen bit single board computers, memory expansion boards, digital I/O boards and peripheral controllers. The structure of the synchronous exchange bus 25 is built upon the master/slave concept where the master device in the system takes control of the bus 25 and the slave device, upon decoding its address, acts upon the command provided by the master. This handshake between master and slave device allows modules of different speeds to use the bus 25 and allows data rates of up to five million transfers per second in bytes, words or double words.
The synchronous exchange bus 25 comprises address and data lines and those control lines necessary to carry the signals which allow the various system components to interact with each other. The arbitration for bus mastership between the various system components connected to the bus 25 occurs synchronously with priority being determined by physical location on the bus, as described more particularly in my copending Canadian Application Serial No. 393,928, filed January 12, 1982, entitled "Synchronous Bus Arbiter". Although the arbitration for bus mastership on the synchronous exchange bus 25 occurs synchronously, the data transfers occur asynchronously at a rate determined by the particular master/slave pair passing data across the bus at a given point in time.
The synchronous exchange bus 25 is a time-division multiplexed bus with a unified bus architecture and no dedicated/required modules.
This type of bus minimizes configuration problems and provides the maximum flexibility in system/module design. In order to cover the wide range of applications desired for the system, and allow future expansion in a flexible manner, the synchronous exchange bus 25 provides a high bandwidth, low cost, processor independent bus by using standard drivers/receivers and multiplexed address/data lines.
Figure 3 shows the architectural configuration for a typical node including an intelligent work station terminal 125, a printer/
typesetter unit 126, and a modem 127 connected to the intelligent processing node electronics in pedestal 100. Providing the terminal 125 and the pedestal 100 in physically-separate packages effectively separates the display and keyboard functions from the processing and communication functions, with the terminal 125 and the pedestal 100 being coupled by an asynchronous link 110. The pedestal 100 is in turn connected to the cluster communication link 15 by a tap 14 via line 18, as already described in connection with Figure 1.
The node electronics contains the general purpose processor 30, an I/O controller in the form of a serial multiplexer controller 35, a floppy disc controller 45, and a global memory 43, and as already indicated, up to sixteen controller units may be connected to the synchronous exchange bus 25 in virtually any mixture so that the particular combination illustrated in Figure 3 merely represents an example of a basic configuration available in accordance with the present invention.
As seen in Figure 4, which illustrates an example of a typical cluster, a double pedestal 101, 104 provides a work station node interconnecting four intelligent terminals 125, four floppy disc units 48 and a printer
126a via the cluster communication link 15. At the same time an extended storage node 102 connects four bulk storage units 44 to the link 15, while single pedestal 103 provides a pair of terminals 125, four floppy disc units 48 and a printer 126a. The single pedestal 100 provides a terminal 125, two floppy disc units 48, a printer 126a and a modem 127, and the extended telecommunication node 105 provides for communication to remote systems via modem 127 as well as access to bulk storage.
With such flexibility in the design of the system, it can be easily configured to the specific needs of each individual user on a present and future basis.
The work station terminal 125 is essentially a standard intelligent terminal of the type commonly available in the industry, such as the Harris standard terminal manufactured and sold by Harris Corporation.
Such a standard terminal typically includes a processor module associated with ROM, RAM and a serial I/O port.
As seen in Figure 5, the general purpose processor 30 provided in each node 100 comprises an available microprocessor, such as an Intel 8086 microprocessor, a RAM 302 capable of providing 128 K
bytes of storage, a bootstrap ROM 303, an I/O port 304 for coupling to a remote diagnostic facility, a synchronous exchange bus interface 306 and a synchronous exchange bus interrupt interface 305 along with the standard timing circuits 307 associated with the microprocessor 301.
The RAM memory 302 is divided into two equal memory areas of 64 K
each, which has special advantages in a multi-processor configuration.
Where only a single general purpose processor 30 is provided in the node, the division of the RAM memory 302 is of no special consequence since together the two portions form a contiguous 128 K memory with no apparent boundary at the 64 K point. By providing the general purpose processor with a portion of dual ported memory, many small systems can be built without a global memory since the dual ported memory looks just like a shared global memory to the other elements of the system. When a global memory 43 is provided in the pedestal, the general purpose processor 30 will send each memory request either to its on-board memory area (RAM 302) or to the off-board global memory 43 depending on the address for that request. In a single processor configuration, there is no effective boundary at the end of the general purpose processor's memory 302 (128 K point) since the global memory 43 would respond to the next address (128 K + 1 byte). Again, programs and data could span this boundary without consequence except for perhaps a slightly-longer access time due to access to the synchronous exchange bus 25.
However, the 64 K/64 K split of the RAM memory 302 in the general purpose processor 30 does become a consideration in a multi-processor configuration. For example, the first 64 K of the memory 302 in a first general purpose processor is made accessible to, and only to, the processor residing on the same card. Then, the second 64 K portion of the memory 302 acts exactly as if it were a global memory on the general purpose processor card itself, which can be read from or written into by any and every other general purpose processor or I/O controller in the system.
Thus, each general purpose processor actually contains a microprocessor plus 64 K of local memory and 64 K of global memory. This memory splitting feature in a multi-processor configuration provides considerable advantages in the handling of tasks within the system, as will be described in conjunction with Figures 5A through 5G.
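A sketch of how a single memory reference might be routed under this split; the 64 K sizes follow the text, while the routing function and the programmable base parameter (discussed further below) are an illustrative abstraction rather than the actual hardware decode.

```c
#include <stdint.h>

#define LOCAL_SIZE          0x10000UL   /* 64 K private, on-board local memory */
#define ONBOARD_GLOBAL_SIZE 0x10000UL   /* 64 K on-board global memory         */

enum mem_target { ONBOARD_LOCAL, ONBOARD_GLOBAL, OFFBOARD_BUS };

/* Decide where a processor's memory reference is serviced.  `global_base` is
 * the base address of this card's on-board global portion.  Only references
 * that fall outside both on-board regions have to go out over the synchronous
 * exchange bus. */
enum mem_target route_reference(uint32_t addr, uint32_t global_base)
{
    if (addr < LOCAL_SIZE)
        return ONBOARD_LOCAL;                       /* local 64 K starts at 0     */
    if (addr >= global_base && addr < global_base + ONBOARD_GLOBAL_SIZE)
        return ONBOARD_GLOBAL;                      /* shared, but still on-board */
    return OFFBOARD_BUS;                            /* off-board global memory    */
}
```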
Figure 5A schematically shows a single processor system executing three assigned tasks, A, B and C. In a single processor system, the assignment of tasks is controlled by a simple multi-tasking algorithm since there is only the single processor to handle the various tasks. Thus, the processor simply selects one of the tasks that it knows about for execution. The situation is only slightly more involved when two processors are available within the system, as seen in Figure 5B.
Here there may be, but not necessarily, a choice in processors to be assigned to perform the tasks A, B and C. For example, if tasks A
and B are assigned to CPU 1 and task C is assigned to CPU 2, then there is no choice in assignment. CPU 1 operates in a multi-tasking mode as it did before, and CPU 2 operates only on the single task C. Conceptually, the two processors CPU 1 and CPU 2 are still totally independent, even though they contend for the common bus to which they are connected and their tasks are in the same memory.
If CPU 1 and CPU 2 are allowed to know about the other's software tasks, then there is a choice to be made in processor assignment.
For example, if tasks A, B and C are allowed to execute on either CPU 1 or CPU 2, whichever is available, as depicted in Figure 5C, then the only complication is to guarantee that CPU 1 and CPU 2 are not executing the same task at the same time. They may alternate execution of a given task, or execute different tasks at the same time, without confusion.
Each simply selects a task that is ready to execute but is not already executing from the lists of tasks it knows about (in this case, tasks A, B and C). However, in a multi-processor system where processors are all connected to a common bus, the traffic on the bus carries the load for all processors. If the hardware were configured with the processors and memory as independent units on a common bus, as seen in Figure 5D, the bus would rapidly become a throughput bottleneck. This is especially true as additional processors are added to the system on the common bus. On the other hand, if each processor has its software in its own private on-board memory, it would have no need to use the bus. Performance would improve for this reason; however, this would totally prevent the ability to assign a task to more than one processor. The multi-processor/global memory concept of the present invention, in which the on-board memory associated with each general purpose processor is subdivided into separate 64 K memory areas to provide an on-board global memory area on each board, offers a solution to this problem, as demonstrated in Figure 5E, providing a system capable of supporting many processors with very little system bus contention.
If a copy of system software is placed in global memory, then all but one of the processors in the multi-processor system will use the synchronous exchange bus to execute its system code. If, however, an identical copy is placed in an identical position of each general purpose processor's local memory (0 - 64 K region), then each processor will have its own copy of software and will stay off the synchronous exchange bus. Since the copies are identically placed, each processor would view the system software as if it were sharing one copy in global memory.
This arrangement, as shown in Figure 5E, leaves a system capable of supporting many processors with very little system bus contention.
Synchronous exchange bus loading in such an arrangement results primarily from I/O traffic and communication between tasks that reside on different general purpose processors. This inter-task bus communication can be minimized by grouping highly-interactive software tasks in the same general purpose processor's global memory space.
In accordance with the present invention, the global 64 K
memory portion of the RAM 302 has a programmable base address, while the local 64 K portion always starts at address 0. This allows the global memory portions of the RAMs 302 in each general purpose processor to be stacked to form a large contiguous addressing space. If software programs are loaded without care into global memory, as seen in Figure 5F, unnecessary synchronous exchange bus traffic will result from the processors going off-board to execute their assigned tasks. However, since a CPU reference to global memory residing on the same card as the requesting processor does not use the synchronous exchange bus, by taking more care in selecting the memory position for software, i.e., by loading software into the proper area of memory so that it resides on the same card as its controlling processor, the synchronous exchange bus traffic can be significantly reduced, as shown in Figure 5G.
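For example, the stacking of the per-card global portions might be pictured as in the following sketch; the starting address and card count are purely illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define GLOBAL_PORTION 0x10000UL    /* 64 K of on-board global memory per card */

/* Give the global portion of each of `n_cards` processor cards a base address
 * immediately above the previous card's, forming one contiguous global space. */
static void stack_global_bases(uint32_t first_base, unsigned n_cards,
                               uint32_t bases[])
{
    for (unsigned i = 0; i < n_cards; i++)
        bases[i] = first_base + (uint32_t)i * GLOBAL_PORTION;
}

int main(void)
{
    uint32_t bases[4];
    stack_global_bases(0x20000UL, 4, bases);    /* assumed first global address */
    for (unsigned i = 0; i < 4; i++)
        printf("card %u: on-board global memory at 0x%05lX\n",
               i, (unsigned long)bases[i]);
    return 0;
}
```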
This special memory feature of the present invention also facilitates the handling of interrupts to the processors connected to the synchronous exchange bus 25. When dealing with multiple processors, it becomes necessary to alert other processors when an event has occurred, an I/O is complete, a task is ready to run, and the like. This is typically done using interrupts. It is highly desirable, however, to interrupt only those processors that need to be made aware of the event. Even more important is the ability to inform the processor of the reason for its being interrupted so that it need not search tables, lists, etc., looking for the reason. This is accomplished by an Interrupt Coupling and Monitoring System, as disclosed in copending Canadian Application Serial No. 394,2.., filed January 15, 1982, and assigned to the same assignee as the present application.
As seen in Figure 6, the serial multiplexer controller 35 incorporates a Z-80 microprocessor 350, RAM memory 351, ROM memory 352, four independent serial interfaces 353, a system data channel interface 354, a local direct memory access controller 355, and the standard CPU support logic 356 and timing generators 357 associated with this type of processor system. The basic objective of the serial multiplexer controller is to provide the real time I/O processing for the system so that the general purpose processors 30 do not have to contend with the interrupt and real time processing/latency requirements of the system. Another objective of the serial multiplexer controller is to provide a flexible interface so that different communication and peripheral interfaces can be handled by a common controller either directly or via simple adapters.
Each serial multiplexer controller 35 provides four independent serial interfaces, which may be used for connection to the cluster communication link 15, as shown in Figure 3, and
for connection to work station terminals 125, printer/typesetters 126, modems 127 and similar intelligent peripheral devices in any mixture, as desired. As in the case of the general purpose processor 30, one or more serial multiplexer controllers 35 can be provided in each pedestal connected to the common synchronous exchange bus 25 depending upon design requirements to provide more or less interface capacity.
As shown in Figure 7, the mass storage controllers connected to the synchronous exchange bus 25 in each node are very similar in configuration to the serial multiplexer controller 35 except that they interface to mass storage devices, such as a floppy disc drive, rigid disc drive, magnetic tape drive and the like. In this regard, a processor 701 is connected to a ROM 702 and RAM 703 via a processor bus 705, to which there is also connected address register 706, data input register 707, data output register 708, interrupt circuitry 709 and storage interface circuitry 710 providing interface to the storage devices.
By interfacing the mass storage devices with an intelligent controller, it is possible to remove some of the real time processing from the general purpose processors connected to the bus 25 and also to make the interfaces to all mass storage devices look alike so that the rest of the system is not aware of the device characteristics. This also allows the mass storage controller to perform high level functions and relieve some of the processing requirements of the system.
The global memory unit 43, which may be optionally connected to the synchronous exchange bus 25, as seen in Figure 8, to provide additional memory in the node, is basically a RAM with software controlled address range setting. Since all other units connected to the bus 25 contain processors, their addressing is easily configured by the on-board processors; however, the global memory being a non-intelligent
unit must have an external input to set its address allocation. This is accomplished by configuring the RAM to include control registers which another processor can read from and write into in order to control the global memory address range assigned thereto.
In addition to the problem of how to address the control registers of the global memory, all units connected to the bus 25 in each node share a common problem of establishing initial communications before memory addresses are assigned. In this regard, it is not desirable to use fixed memory addresses since this requires a discontinuity in the memory space and also additional decoding logic to decode the large number of bits in the memory address. An additional problem is how to set the memory addresses to be used on each card. In past systems, this has been accomplished by either using switches on the circuit boards, which require operator setting to configure the system, or by assigning addresses by device type, which requires a much larger number of addresses than would ever be present in a single system and limits future expansion of the system.
These problems are solved in accordance with the present invention in the manner shown in Figure 9. The synchronous exchange bus 25 includes a plurality of data/address lines to permit addressing of units on the bus and effect transfer of data to and from such units.
The ASYNC line indicates when address information is stable on the bus and the DSYNC line indicates when data is stable on the bus. The bus 25 also includes bus identification lines BID(0) and BID(1) by which physical I/O addresses are assigned to each card as it is plugged into the bus. In this regard, a plurality of conductors C on each card engage contacts D which are connected to the bus identification lines BID(0) and BID(1) in a coded combination representing the physical address of the slot on the bus, so that this address is automatically assigned to the card as it is plugged in.
The I/O or slot address of each card is stored in a register R2 on the card, which is also hardwired to provide additional coding to identify the card type. This allows other cards to determine what type of card is in each slot simply by reading the contents of register R2 on the card.
In place of the manually-operable switches to set the memory address assignment for each card, as typically provided in the prior art, each card connected to the bus 25 also includes a register R1 in which the memory address assignment for that card is stored. Thus, when an address appears on the data/address lines as indicated by the ASYNC line and the state of the IOEN line indicates that this address is a memory address, the received address is checked against the assigned block of memory addresses in register R1 to determine if memory space on that particular card is being addressed. On the other hand, if the state of the IOEN line indicates that the received address is an I/O address, then that address is compared to the contents of the R2 register. Once the card has been accessed via its I/O address, a new memory address assignment can be written into the register R1. Of course, the contents of register R1 can be changed by its on-board processor at any time. These operations are carried out under the control of the on-board processor and suitable logic circuitry, as represented, for example, by the arbitrator 310 in the processor 35 as shown in Figure 8.
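The address decoding and reconfiguration just described might be sketched as follows; the register names R1 and R2 and the memory/I-O distinction follow the text, while the field widths and helper names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-card state corresponding to the two registers described above. */
typedef struct {
    uint8_t  r2_slot;       /* slot address picked up from the BID lines        */
    uint8_t  r2_type;       /* hardwired coding identifying the card type       */
    uint32_t r1_mem_base;   /* R1: start of the assigned memory address range   */
    uint32_t r1_mem_limit;  /* R1: end (exclusive) of the assigned memory range */
} card_regs_t;

/* Decode one address seen on the bus.  When IOEN marks it as a memory
 * address it is matched against the R1 range; when IOEN marks it as an I/O
 * address it is matched against the slot address held in R2. */
bool card_is_addressed(const card_regs_t *c, uint32_t addr, bool is_io_address)
{
    if (is_io_address)
        return (uint8_t)addr == c->r2_slot;
    return addr >= c->r1_mem_base && addr < c->r1_mem_limit;
}

/* Reconfiguration: once a card has been reached through its slot (I/O)
 * address, a new memory range is simply written into its R1 register. */
void assign_memory_range(card_regs_t *c, uint32_t base, uint32_t limit)
{
    c->r1_mem_base  = base;
    c->r1_mem_limit = limit;
}
```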
Thus, since each card is automatically assigned a fixed I/O
address according to the slot it occupies on the bus 25, the memory address space assigned to that card can be varied to permit reconfiguration of the memory space in the system simply by addressing the board via its slot or I/O address and placing in the address register on the card the new memory address assignment for that card. In this way, all card slots have access to their slot number and information concerning the other cards connected to the bus and have the ability to assign memory addresses.
This type of operation permits the system to configure itself and results in fewer operator errors than arise in the prior-art setting of switches to assign memory addresses. Further, the operators do not need to know about the internal details of the system. It also increases the reliability of the system by allowing it to automatically reconfigure around failed modules and continue operation.
Figure 10 shows the details of the cluster communication link which features a passive coaxial line to increase the system reliability and provide DC isolation so that a common system ground becomes unnecessary. As indicated with respect to Figure 1, up to sixteen nodes may be connected to the link 15 via transformer taps 14.
While I have shown and described several embodiments of the present invention, it is understood that the invention is not limited to the details shown and described herein but is susceptible of numerous changes and modifications as known to one of ordinary skill in the art, and I therefore do not wish to be limited to the details shown and described herein but intend to cover all such changes and modifications obvious to those skilled in the art.
Claims (8)
1. A data processing system comprising a multi-conductor bus including data and address lines; a plurality of units each having a random access memory connected to said bus in respective slots along the bus, said connection including slot identification means for providing a respective coded signal combination representing a slot address to each unit as it is connected to said bus, each unit having means for storing its slot address, register means in each unit for storing a range of memory addresses assigned to that unit; and means in each unit for accessing the random access memory of that unit on the basis of addresses received on said bus which fall within the range stored in said register means.
2. A data processing system according to claim 1, wherein each unit is connected to said bus by means of a plug-in connection, which plug-in connection includes said slot identification means for providing said respective coded signal combination representing a slot address to each unit as it is plugged into said bus.
3. A data processing system according to claim 2, further including means in each unit responsive to detection of its slot address and data representing an assigned range of memory addresses on said bus for storing said data in said register means.
4. A data processing system according to claim 2 or 3, wherein at least two of said units include an on-board microprocessor and wherein the random access memory in each of said two units has a first memory portion storing operating instructions and data for said on-board microprocessor and a second memory portion forming an on-board global memory accessible by at least the on-board microprocessor of the other of said two units via said bus.
5. A data processing system according to claim 2, wherein said multi-conductor bus includes a plurality of bus identification lines, and said slot identification means includes first conductor means connected to said bus identification lines in a coded combination at each slot and second conductor means in each unit engageable with said first conductor means when said unit is plugged into said bus.
6. A data processing system according to claim 2, wherein said units include a general purpose processor, a microprocessor controlled memory controller means for data storage, and serial multiplexer means including a microprocessor, a plurality of I/O ports connected to said microprocessor via an I/O bus and a plurality of intelligent peripheral devices connected to respective I/O
ports.
7. A data processing system according to claim 6, wherein a plurality of said units are serial multiplexer means.
8. A data processing system according to claim 6, wherein one of said units is a non-intelligent global memory device providing data storage apart from that provided in the random access memory of the other units.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US23804881A | 1981-02-25 | 1981-02-25 | |
US238,048 | 1981-02-25 | ||
CA000396675A CA1184310A (en) | 1981-02-25 | 1982-02-19 | Multi-processor office system complex |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA000396675A Division CA1184310A (en) | 1981-02-25 | 1982-02-19 | Multi-processor office system complex |
Publications (1)
Publication Number | Publication Date |
---|---|
CA1197019A true CA1197019A (en) | 1985-11-19 |
Family
ID=25669587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA000465121A Expired CA1197019A (en) | 1981-02-25 | 1984-10-10 | Multi-processor office system complex |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA1197019A (en) |
1984-10-10: CA CA000465121A patent/CA1197019A/en not_active Expired
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2036688C (en) | Multiple cluster signal processor | |
EP0737336B1 (en) | A multiprocessor programmable interrupt controller system with processor-integrated interrupt controllers | |
US4001790A (en) | Modularly addressable units coupled in a data processing system over a common bus | |
US4933846A (en) | Network communications adapter with dual interleaved memory banks servicing multiple processors | |
US4000485A (en) | Data processing system providing locked operation of shared resources | |
US4030075A (en) | Data processing system having distributed priority network | |
US3993981A (en) | Apparatus for processing data transfer requests in a data processing system | |
US3995258A (en) | Data processing system having a data integrity technique | |
US3997896A (en) | Data processing system providing split bus cycle operation | |
US4720784A (en) | Multicomputer network | |
US4212057A (en) | Shared memory multi-microprocessor computer system | |
US4470114A (en) | High speed interconnection network for a cluster of processors | |
US4814970A (en) | Multiple-hierarchical-level multiprocessor system | |
US4979100A (en) | Communication processor for a packet-switched network | |
US5410710A (en) | Multiprocessor programmable interrupt controller system adapted to functional redundancy checking processor systems | |
US4797815A (en) | Interleaved synchronous bus access protocol for a shared memory multi-processor system | |
EP0827085B1 (en) | Method and apparatus for distributing interrupts in a scalable symmetric multiprocessor system without changing the bus width or bus protocol | |
EP0301610A2 (en) | Data processing apparatus for connection to a common communication path in a data processing system | |
CA1184310A (en) | Multi-processor office system complex | |
KR900001120B1 (en) | Distributed priority network logic for allowing a low priority unit to reside in a high priority position | |
EP0139568B1 (en) | Message oriented interrupt mechanism for multiprocessor systems | |
CA1197019A (en) | Multi-processor office system complex | |
Gustavson | Introduction to the Fastbus | |
WO1991010958A1 (en) | Computer bus system | |
Ibbett et al. | Centrenet–A High Performance Local Area Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MKEX | Expiry |