EP0073239A1 - Multi-processor office system complex - Google Patents

Multi-processor office system complex

Info

Publication number
EP0073239A1
Authority
EP
European Patent Office
Prior art keywords
bus
memory
processor
data
microprocessor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP82901083A
Other languages
English (en)
French (fr)
Inventor
Mize Johnson, Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP0073239A1
Current legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0646 Configuration or reconfiguration
    • G06F12/0653 Configuration or reconfiguration with centralised address assignment
    • G06F12/0661 Configuration or reconfiguration with centralised address assignment and decentralised selection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/36 Handling requests for interconnection or transfer for access to common bus or bus system
    • G06F13/368 Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/36 Handling requests for interconnection or transfer for access to common bus or bus system
    • G06F13/368 Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control
    • G06F13/37 Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control using a physical-position-dependent priority, e.g. daisy chain, round robin or token passing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F13/4208 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a system bus, e.g. VME bus, Futurebus, Multibus
    • G06F13/4213 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a system bus, e.g. VME bus, Futurebus, Multibus with asynchronous protocol

Definitions

  • The present invention relates to data and word processing systems in general and, more particularly, to a multi-terminal document preparation and data processing system of the shared-resource or clustered configuration type which combines the similar, yet divergent, technologies of word and data processing to perform a full range of business tasks both within the office and from remote locations.
  • Systems capable of performing data and word processing fall within the basic categories of (1) full-featured stand-alone units, (2) shared logic systems containing a number of display-based work stations sharing the logic of a central computer, and (3) shared-resource or clustered configurations in which intelligent terminals or work stations are interconnected to provide common access to a central computer, controller and/or disc storage.
  • The advantages of stand-alone units reside in their ability to function independently of other units, so that they are not subject to operating malfunctions as a result of the breakdown of other units; however, such stand-alone units have the disadvantage of a higher per-station cost and limited capability insofar as data storage and available features are concerned.
  • Shared logic systems, in which work stations share the logic of a central computer for storage, retrieval, text manipulation and printing, reduce the cost per work station and provide a greater capability insofar as features and storage are concerned, but when the central computer malfunctions, the entire system is affected.
  • The basic unit of the system in accordance with the present invention is an intelligent processing node provided in the form of a stand-alone intelligent unit providing the capability for document text entry, modification, storage, and hard-copy output.
  • A major feature of this system is the ability to connect up to sixteen nodes with a high-speed cluster communication link to form a cluster, which represents the first level of modularity in the system. Nodes within a cluster can share each other's peripheral resources, including floppy disc storage and output devices. This allows greater flexibility in the design and growth of the system and provides a basis for various advanced features such as electronic mail distribution and other data communication and processing features.
  • The work station is based on an intelligent terminal that is cable-connected to a node that contains one or more processing units, floppy discs and device control electronics.
  • The intelligent terminal incorporates a keyboard, a raster-scan CRT display, and a read-write memory, and is driven by a microprocessor.
  • The node can support a plurality of terminals, depending on desired work station response, and is also capable of supporting several types of peripherals, including a floppy disc, rigid disc, daisy-wheel printer, draft printer, twin-wheel printer, high-speed dedicated cluster link communications, commercial carrier data communications and a typesetter.
  • The number and combination of peripherals per node is limited only by the device controller slots and controller channel availability in the node and by desired response times.
  • Each general purpose processor in each node can be dual ported so that other processors in the node can access it. This feature tends to further reduce bus contention by allowing I/O controllers and other processors to deposit data directly into the local memory of the processor responsible for handling it. This also makes it possible to provide for auto-configuration of the memory address space available on the boards connected in common to the bus, which combined address space provides the appearance of a shared global memory.
  • Each card is provided with a physical I/O address corresponding to the slot it occupies on the bus, and by use of this I/O address, the memory address block assignments for each card can be automatically established, as desired, by the system, simply by changing the assigned address data stored in a register on that card and/or another card or cards on the bus. This eliminates the manual assignment of addresses via switches, which can lead to operator error and malfunction of the system. The dual-ported deposit idea is sketched below.
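  • The following C fragment is a minimal sketch of the dual-ported deposit idea just described, assuming a single shared 64 K window per processor card; the structure layout and names are illustrative assumptions, not the patent's actual hardware interface.

      #include <stdint.h>
      #include <string.h>

      #define SIZE_64K 0x10000u

      /* Hypothetical model of a general purpose processor card: the lower
       * 64 K is private to the on-card CPU, the upper 64 K is dual ported
       * and visible to every other master on the exchange bus. */
      typedef struct {
          uint8_t local[SIZE_64K];    /* port A: on-card CPU only     */
          uint8_t shared[SIZE_64K];   /* port B: also visible on bus  */
      } gp_card;

      /* An I/O controller finishing a disc transfer deposits the sector
       * straight into the shared window of the processor that will handle
       * it -- one bus transfer, no intermediate global-memory copy. */
      void deposit(gp_card *owner, uint32_t offset,
                   const uint8_t *sector, size_t len)
      {
          memcpy(owner->shared + offset, sector, len);
      }

      int main(void)
      {
          static gp_card cpu1;               /* card owning the transfer */
          const uint8_t sector[4] = {1, 2, 3, 4};
          deposit(&cpu1, 0x0100, sector, sizeof sector);
          return cpu1.shared[0x0100] == 1 ? 0 : 1;
      }
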
  • The present invention thus provides a system in which a first level of modularity is built into the cluster through the interconnection of a desired number of nodes via the cluster communications link, while a second level of modularity is provided within the node itself by permitting the varied connection of different numbers of intelligent terminals and other peripheral devices to the control pedestal.
  • Figure 1 is a schematic diagram of one embodiment of the present invention forming a system cluster.
  • Figure 2 is a schematic diagram of the configuration of an intelligent processor node.
  • Figure 3 is a schematic diagram of the architectural arrangement of elements forming the intelligent processor node.
  • Figure 4 is a schematic diagram illustrating the available variations in configuration of a typical cluster.
  • Figure 5 is a schematic block diagram of the general purpose processor provided in each node.
  • Figures 5A through 5G are diagrams illustrating the on-board memory feature of the present invention.
  • Figure 6 is a schematic diagram of the serial multiplexer controller.
  • Figure 7 is a schematic diagram of a mass storage controller.
  • Figure 8 is a schematic diagram of the global memory arrangement.
  • Figure 9 is a schematic diagram illustrating the memory address auto-configuration and bus identification feature of the present invention.
  • Figure 10 is a schematic circuit diagram of the cluster communication link configuration.

Best Mode For Carrying Out the Invention
  • The present invention provides a multi-terminal document preparation and distribution system which utilizes distributed processing to provide a flexible, reliable system architecture with facilities for creation, revision, storage, and distribution of various types of documentation, with capability for both word processing and data processing on an integrated basis.
  • The system comprises one or more clusters of processor nodes to which one or more work stations and other peripheral devices may be selectively connected to provide two levels of modularity, which establishes a high level of flexibility in design and function within the system.
  • Each node may have one or more intelligent display/keyboard terminals with a self-contained microcomputer and sufficient memory and processing power to function as a stand-alone word processor work station or as an integral component in a shared-peripheral cluster configuration with other nodes.
  • Figure 1 illustrates the basic configuration of the system cluster, which includes two or more intelligent processing nodes 10 interconnected by one or more cluster communication links 15 to which the nodes 10 are connected by way of taps 14.
  • Connected to each node 10 are various peripheral devices 12 including intelligent terminals, floppy disc storage units, rigid disc storage units, daisy-wheel printers, draft printers, typesetters, modems for remote communication with other systems, and similar peripheral devices.
  • The cluster is built around the cluster communication link 15, which is a passive coaxial data link supporting up to sixteen active taps 14 for connection of nodes to the link.
  • Nodes may be connected anywhere along the data link 15, which provides a half-duplex multiplexed interconnection, with data transfers between nodes 10 being broken into packets which are interleaved with other inter-node transfers.
  • The cluster communication link 15 is the mechanism by which the intelligent work stations and other intelligent peripherals 12 connected to the nodes 10 interface with one another within the cluster.
  • A node 10 is defined as any element which attaches to the data link 15 via a tap 14 and is not restricted to a specific piece of hardware.
  • The primary purpose of the cluster communication link 15 is to provide a medium-speed communications path for loosely coupling nodes 10 so that systems larger than a single node can be provided in a flexible manner.
  • The use of a passive serial link 15 also provides improved reliability, permits physical dispersion of system elements, and increases the flexibility in system configuration.
  • Data transfer on the cluster communication link 15 is provided in accordance with the high-level data link control (HDLC) protocol and uses a rotating master scheme to avoid contention on the link, to provide load sharing and to minimize the number of single-point failures which can disable the link.
  • Mastership of the link 15 is continuously exchanged between active nodes.
  • A single node will retain the link for a maximum of 50 ms without allowing other nodes the chance to assume mastership.
  • Master exchange is accomplished by polling the other nodes to determine whether any of them wishes to use the link.
  • The current master will use the results of the poll cycle to determine which node is to be selected as the next master and will inform that node that it is to assume mastership. If no other node requests the use of the link during the poll cycle, the current master can retain control of the link.
  • The actual polling is based on a round-robin active/inactive queue scheme.
  • The master node polls the following nodes in the active queue, which is a circular queue, until it finds one which wants to assume control of the link or all other nodes have been polled. If another node wants control, then mastership is passed to that node. If no other node wants the link, control is retained by the current master. In this way, no dedicated bus master or other bus controller is required, contributing to the simplicity of the cluster configuration.
  • The active queue contains all nodes which respond to a poll, while the inactive queue contains all possible nodes except those on the active queue.
  • In order to join in the link communications, a node must be transferred from the inactive queue to the active queue. This is accomplished by having a flag in the active queue which indicates that nodes on the inactive queue are to be polled, which is performed once every two passes through the active queue; these nodes are then added to the active queue if they respond. When the current master detects the flag in the active queue indicating that the inactive queue is to be polled, the inactive queue is used as the source of the poll addresses. Once a node is in the active queue, it remains there until it fails to respond to a poll three times, in which case it is moved to the inactive queue.
  • In the contention mode, the node starts the poll cycle and listens to its own transmission as well as any responses. If the node hears its own transmission garbled, it enters a timeout routine with the delay based on the node identification and attempts the poll again if it has not seen any other transmission during the delay interval. If the node receives a response intended for another node, then it assumes that the other node has assumed control. A sketch of this master-exchange scheme follows.
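  • The following C sketch illustrates the rotating-master poll cycle described above, with the two queues modelled as per-node flags; the names, data structures and the poll stub are assumptions for illustration only, and the contention-mode backoff is noted in a comment rather than implemented.

      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_NODES       16
      #define MAX_POLL_FAILS   3   /* misses before demotion to inactive */

      struct node {
          bool active;    /* on the active queue (else inactive queue) */
          int  fails;     /* consecutive unanswered polls              */
      };

      static struct node nodes[MAX_NODES];
      static bool wants_link[MAX_NODES];  /* stand-in for poll replies */

      /* Stand-in for the HDLC poll exchange over the coaxial link. */
      static bool poll(int id) { return wants_link[id]; }

      /* One poll cycle run by the current master. Inactive nodes are
       * polled only on every second pass (the "flag in the active
       * queue"); an active node missing three polls is demoted. In
       * contention mode a node would instead retry after a delay
       * derived from its own node identification. */
      static int next_master(int self, int pass)
      {
          for (int i = 1; i < MAX_NODES; i++) {
              int id = (self + i) % MAX_NODES;
              if (!nodes[id].active && pass % 2 != 0)
                  continue;                 /* skip inactive this pass */
              if (poll(id)) {
                  nodes[id].active = true;  /* joins/stays active      */
                  nodes[id].fails  = 0;
                  return id;                /* hand over mastership    */
              }
              if (nodes[id].active && ++nodes[id].fails >= MAX_POLL_FAILS)
                  nodes[id].active = false; /* demote to inactive      */
          }
          return self;                      /* nobody wants the link   */
      }

      int main(void)
      {
          nodes[0].active = nodes[5].active = true;
          wants_link[5] = true;             /* node 5 asks for the link */
          printf("next master after node 0: node %d\n", next_master(0, 0));
          return 0;
      }
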
  • The intercommunication system formed by the cluster illustrated in Figure 1 provides message routing between tasks in different nodes.
  • For example, if a task in a first node requires a file stored at a second node, a request to read the file would be formatted into a message within the first node, the message including the identity of the first node and its reply exchange.
  • The message would then be sent to the second node, where the request would be processed.
  • The second node would then format the required file into a message, which would be sent back to the first node, completing the request. The sketch below illustrates such a request/reply message.
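  • A minimal C sketch of such a request/reply message follows; the field names, sizes and opcodes are assumptions for illustration, since the patent specifies the behaviour rather than a message format.

      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      enum opcode { READ_FILE = 1, FILE_DATA = 2 };

      /* Hypothetical inter-node message: carries the originating node
       * and its reply exchange so the answer can be routed back. */
      struct message {
          uint8_t src_node;        /* identity of the requesting node  */
          uint8_t dst_node;        /* node holding the requested file  */
          uint8_t reply_exchange;  /* queue in src_node awaiting reply */
          uint8_t opcode;
          char    payload[64];     /* file name, or data in the reply  */
      };

      /* The second node processes the request and formats the file into
       * a message addressed back to the requester's reply exchange. */
      static struct message serve(const struct message *req)
      {
          struct message reply = {
              .src_node = req->dst_node, .dst_node = req->src_node,
              .reply_exchange = req->reply_exchange, .opcode = FILE_DATA
          };
          snprintf(reply.payload, sizeof reply.payload,
                   "<contents of %s>", req->payload);
          return reply;
      }

      int main(void)
      {
          struct message req = { .src_node = 1, .dst_node = 2,
                                 .reply_exchange = 7, .opcode = READ_FILE };
          strcpy(req.payload, "report.doc");
          struct message rep = serve(&req);
          printf("reply to node %u, exchange %u: %s\n",
                 rep.dst_node, rep.reply_exchange, rep.payload);
          return 0;
      }
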
  • The cluster provides a multi-level interconnection system of intelligent processing modules which combines the best features of stand-alone units and shared-logic systems.
  • Peripheral units 12, such as intelligent terminals, forming part of a node or work station can operate on a stand-alone basis, communicate with one another or with other intelligent peripheral units providing storage and other capabilities through the commonly-connected intelligent processing nodes 10, or communicate with other intelligent peripheral devices 12 connected to other intelligent processing nodes 10 via the cluster communication link 15.
  • A plurality of intelligent processing nodes 10 can likewise operate on a stand-alone basis or communicate with one another, with other intelligent peripheral units providing storage and other capabilities through the commonly-connected intelligent processing nodes 10, or with other intelligent peripheral devices 12 connected to other intelligent processing nodes 10 via the cluster communication link 15.
  • Up to sixteen intelligent processing nodes 10 can be interconnected via a single cluster communication link 15, and each intelligent processing node 10 can be connected via taps 14 to up to twenty-four cluster communication links 15.
  • Such an arrangement provides multi-level flexibility in the configuration of the cluster, both from the point of view of size and of the available functions provided within the cluster.
  • The cluster concept provides a system capable of inter-node communications and sharing of peripheral resources at a much lower per-terminal cost than typical shared-logic controller type systems.
  • The nodes 10 are built around a synchronous exchange bus 25 using functional hardware modules, as seen in Figure 2.
  • The synchronous exchange bus 25 provides a tightly-coupled, high-bandwidth bus structure optimized for multi-processor use, and is a unified bus architecture which places minimum constraints on the internal structure of each node, allowing for a more long-term growth capability within the system.
  • Connected to the synchronous exchange bus 25 are one or more general purpose processors 30, a plurality of I/O subsystems 35 for connection between the bus 25 and one or more of the cluster communication links 15 or other peripherals and communication lines, a magnetic tape subsystem 40 connecting the bus 25 to one or more magnetic tape units 42, a floppy disc subsystem 45 connecting the bus to one or more floppy disc units 48, and a rigid disc subsystem 50 connecting the bus 25 to one or more rigid disc units 52. All of the modules connected to the bus 25, as seen in Figure 2, are stand-alone microprocessor-based subsystems which facilitate the layering of functions, contributing to the flexibility of design within the system.
  • The synchronous exchange bus 25 can accommodate up to sixteen modules in any mixture. Thus, even though some combinations, such as sixteen general purpose processors 30 or sixteen rigid disc subsystems 50, might not be particularly useful, there are no hardware limitations to preclude such combinations. Due to the multi-master nature of the synchronous exchange bus 25, multi-processor systems can be built by simply connecting more than one general purpose processor 30 to the bus 25, and the incorporation of local memory in the general purpose processor 30 allows it to function more effectively in a multi-processor environment by reducing the number of bus accesses.
  • The synchronous exchange bus 25 is the bus structure that holds all of the hardware components together.
  • This bus structure contains the necessary signals to allow the various system components to interact with each other, i.e., it allows memory and I/O data transfers, direct memory accesses, generation of interrupts, and the like.
  • The synchronous exchange bus 25 is a flexible bus structure used to interface a family of products which includes sixteen-bit single board computers, memory expansion boards, digital I/O boards and peripheral controllers.
  • The structure of the synchronous exchange bus 25 is built upon the master/slave concept, where the master device in the system takes control of the bus 25 and the slave device, upon decoding its address, acts upon the command provided by the master.
  • The synchronous exchange bus 25 comprises address and data lines and those control lines necessary to carry the signals which allow the various system components to interact with each other.
  • The arbitration for bus mastership between the various system components connected to the bus 25 occurs synchronously, with priority being determined by physical location on the bus, as described more particularly in my copending U.S. Application Serial No. ______, filed January 12, 1981, entitled "Synchronous Bus Arbiter".
  • While the arbitration for bus mastership on the synchronous exchange bus 25 occurs synchronously, the data transfers occur asynchronously at a rate determined by the particular master/slave pair passing data across the bus at a given point in time.
  • The synchronous exchange bus 25 is a time-division multiplexed bus with a unified bus architecture and no dedicated or required modules. This type of bus minimizes configuration problems and provides the maximum flexibility in system/module design. In order to cover the wide range of applications desired for the system, and to allow future expansion in a flexible manner, the synchronous exchange bus 25 provides a high-bandwidth, low-cost, processor-independent bus by using standard drivers/receivers and multiplexed address/data lines.
  • Figure 3 shows the architectural configuration for a typical node including an intelligent work station terminal 125, a printer/typesetter unit 126, and a modem 127 connected to the intelligent processing node electronics in pedestal 100.
  • Providing the terminal 125 and the pedestal 100 in physically separate packages effectively separates the display and keyboard functions from the processing and communication functions, with the terminal 125 and the pedestal 100 being coupled by an asynchronous link 110.
  • The pedestal 100 is in turn connected to the cluster communication link 15 by a tap 14 via line 18, as already described in connection with Figure 1.
  • The node electronics contains the general purpose processor 30, an I/O controller in the form of a serial multiplexer controller 35, a floppy disc controller 45, and a global memory 43; as already indicated, up to sixteen controller units may be connected to the synchronous exchange bus 25 in virtually any mixture, so that the particular combination illustrated in Figure 3 merely represents an example of a basic configuration available in accordance with the present invention.
  • A double pedestal 101, 104 provides a work station node interconnecting four intelligent terminals 125, four floppy disc units 48 and a printer 126a via the cluster communication link 15.
  • An extended storage node 102 connects four bulk storage units 44 to the link 15, while a single pedestal 103 provides a pair of terminals 125, four floppy disc units 48 and a printer 126a.
  • The single pedestal 100 provides a terminal 125, two floppy disc units 48, a printer 126a and a modem 127, and the extended telecommunication node 105 provides for communication to remote systems via modem 127 as well as access to bulk storage 44. With such flexibility in the design of the system, the specific needs of each individual user, on a present and future basis, can be easily accommodated.
  • The work station terminal 125 is essentially a standard intelligent terminal of the type commonly available in the industry, such as the Harris standard terminal manufactured and sold by Harris Corporation. Such a standard terminal typically includes a processor module associated with ROM, RAM and a serial I/O port.
  • The general purpose processor 30 provided in each node comprises an available microprocessor 301, such as an Intel 8086, a RAM 302 capable of providing 128 K bytes of storage, a bootstrap ROM 303, an I/O port 304 for coupling to a remote diagnostic facility, a synchronous exchange bus interface 306 and a synchronous exchange bus interrupt interface 305, along with the standard timing circuit 307 associated with the microprocessor 301.
  • The RAM memory 302 is divided into two equal memory areas of 64 K each, which has special advantages in a multi-processor configuration.
  • In a single-processor configuration, the division of the RAM memory 302 is of no special consequence, since together the two portions form a contiguous 128 K memory with no apparent boundary at the 64 K point.
  • By providing the general purpose processor with a portion of dual-ported memory, many small systems can be built without a global memory, since the dual-ported memory looks just like a shared global memory to the other elements of the system.
  • When a global memory 43 is provided in the pedestal, the general purpose processor 30 will send each memory request either to its on-board memory area (RAM 302) or to the off-board global memory 43, depending on the address for that request.
  • The 64 K/64 K split of the RAM memory 302 in the general purpose processor 30 does become a consideration in a multi-processor configuration.
  • The first 64 K of the memory 302 in a first general purpose processor is made accessible to, and only to, the processor residing on the same card.
  • The second 64 K portion of the memory 302 acts exactly as if it were a global memory on the general purpose processor card itself, which can be read from or written into by any and every other general purpose processor or I/O controller in the system.
  • Each general purpose processor thus actually contains a microprocessor plus 64 K of local memory and 64 K of global memory. The address-routing sketch below illustrates this split.
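  • Expressed as a C sketch, the resulting per-request address decode might look as follows; the routing function and its names are illustrative assumptions.

      #include <stdint.h>
      #include <stdio.h>

      #define SIZE_64K 0x10000u

      enum route { LOCAL_RAM, ONBOARD_GLOBAL, EXCHANGE_BUS };

      /* The local 64 K always starts at address 0; the on-board global
       * 64 K sits at a programmable base; anything else is an off-board
       * request that must use the synchronous exchange bus. */
      static enum route decode(uint32_t addr, uint32_t global_base)
      {
          if (addr < SIZE_64K)
              return LOCAL_RAM;        /* private to the on-card CPU   */
          if (addr - global_base < SIZE_64K)
              return ONBOARD_GLOBAL;   /* served without a bus cycle   */
          return EXCHANGE_BUS;         /* another card's global window */
      }

      int main(void)
      {
          uint32_t base = 0x30000;     /* assumed global base for card */
          printf("%d %d %d\n",
                 decode(0x00100, base),   /* 0: LOCAL_RAM      */
                 decode(0x30200, base),   /* 1: ONBOARD_GLOBAL */
                 decode(0x50000, base));  /* 2: EXCHANGE_BUS   */
          return 0;
      }
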
  • Figure 5A schematically shows a single processor system executing three assigned tasks A, B and C.
  • The assignment of tasks is controlled by a simple multi-tasking algorithm, since there is only the single processor to handle the various tasks.
  • The processor simply selects one of the tasks that it knows about for execution. The situation is only slightly more involved when two processors are available within the system, as seen in Figure 5B.
  • The two processors may be assigned to perform the tasks A, B and C.
  • If tasks A and B are assigned to CPU 1 and task C is assigned to CPU 2, then there is no choice in assignment.
  • CPU 1 operates in a multi-tasking mode as it did before, and CPU 2 operates only on the single task C.
  • The two processors CPU 1 and CPU 2 are still totally independent, even though they contend for the common bus to which they are connected and their tasks are in the same memory.
  • If CPU 1 and CPU 2 are allowed to know about each other's software tasks, then there is a choice to be made in processor assignment. For example, if tasks A, B and C are allowed to execute on either CPU 1 or CPU 2, whichever is available, as depicted in Figure 5C, then the only complication is to guarantee that CPU 1 and CPU 2 are not executing the same task at the same time. They may alternate execution of a given task, or execute different tasks at the same time, without confusion. Each simply selects a task that is ready to execute but is not already executing from the list of tasks it knows about (in this case, tasks A, B and C).
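  • One classic way to provide that guarantee is a per-task claim flag in shared memory taken with an atomic test-and-set, as in the C11 sketch below; the patent states the requirement, not this particular mechanism.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stddef.h>

      struct task {
          atomic_flag running;  /* set while some CPU executes the task */
          bool ready;           /* task has work to do                  */
          void (*entry)(void);
      };

      /* Each CPU scans the tasks it knows about and claims the first
       * ready task no other CPU is already executing; the winner clears
       * the flag again when it yields the task. */
      static struct task *pick_task(struct task *tasks, size_t n)
      {
          for (size_t i = 0; i < n; i++) {
              if (!tasks[i].ready)
                  continue;
              if (!atomic_flag_test_and_set(&tasks[i].running))
                  return &tasks[i];   /* claimed exclusively */
          }
          return NULL;                /* nothing runnable right now */
      }

      int main(void)
      {
          static struct task tasks[3] = {
              { ATOMIC_FLAG_INIT, true  },  /* task A            */
              { ATOMIC_FLAG_INIT, true  },  /* task B            */
              { ATOMIC_FLAG_INIT, false }   /* task C: not ready */
          };
          struct task *t = pick_task(tasks, 3);  /* claims task A */
          if (t)
              atomic_flag_clear(&t->running);    /* done: release */
          return 0;
      }
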
  • The multi-processor/global memory concept of the present invention, in which the on-board memory associated with each general purpose processor is subdivided into separate 64 K memory areas to provide an on-board global memory area on each board, offers a solution to this problem, as demonstrated in Figure 5E, providing a system capable of supporting many processors with very little system bus contention.
  • The global 64 K memory portion of the RAM 302 has a programmable base address, while the local 64 K portion always starts at address 0. This allows the global memory portions of the RAMs 302 in each general purpose processor to be stacked to form a large contiguous addressing space. If software programs are loaded without care into global memory, as seen in Figure 5F, unnecessary synchronous exchange bus traffic will result from the processors going off-board to execute their assigned tasks. However, since a CPU reference to global memory residing on the same card as the requesting processor does not use the synchronous exchange bus, the bus traffic can be significantly reduced by taking more care in selecting the memory position for software, i.e., by loading software into the proper area of memory so that it resides on the same card as its controlling processor, as shown in Figure 5G. The sketch below illustrates this placement.
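  • A minimal sketch of that placement follows; the window arithmetic (one 64 K window per card, stacked in slot order above an assumed system base) is used only to make the idea concrete.

      #include <stdint.h>
      #include <stdio.h>

      #define SIZE_64K 0x10000u

      /* One plausible stacking: card k's global window starts
       * immediately above card k-1's, forming one contiguous space. */
      static uint32_t global_base(int card, uint32_t system_base)
      {
          return system_base + (uint32_t)card * SIZE_64K;
      }

      int main(void)
      {
          uint32_t system_base = SIZE_64K; /* assumed start of global space  */
          int cpu_card = 2;                /* card whose CPU controls task C */

          /* Load task C inside its own card's window, so the controlling
           * CPU fetches it without a synchronous-exchange-bus cycle while
           * the other processors can still reach it over the bus. */
          uint32_t load_addr = global_base(cpu_card, system_base) + 0x0400;
          printf("load task C at 0x%05X\n", load_addr);
          return 0;
      }
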
  • This special memory feature of the present invention also facilitates the handling of interrupts to the processors connected to the synchronous exchange bus 25.
  • When dealing with multiple processors, it becomes necessary to alert other processors when an event has occurred: an I/O is complete, a task is ready to run, and the like. This is typically done using interrupts. It is highly desirable, however, to interrupt only those processors that need to be made aware of the event. Even more important is the ability to inform the processor of the reason for its being interrupted, so that it need not search tables, lists, etc., looking for the reason. This is accomplished by an Interrupt Coupling and Monitoring System, as disclosed in copending U.S. Application Serial No. ______, filed January 15, 1981, and assigned to the same assignee as the present application.
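  • The copending application is not reproduced here, so the following C fragment is only a hedged guess at how a reason-carrying interrupt might be organized: a per-processor mailbox written before the interrupt line is pulsed. All names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      enum reason { IO_COMPLETE = 1, TASK_READY = 2 };

      /* One mailbox per processor slot, residing in memory the sender
       * can reach (e.g. the target's dual-ported window). */
      struct mailbox {
          volatile uint16_t reason;  /* why the interrupt was raised */
          volatile uint16_t arg;     /* e.g. which task is now ready */
      };

      static struct mailbox mailboxes[16];

      /* Stand-in for strobing the bus interrupt line of one processor. */
      static void raise_interrupt(int cpu)
      {
          printf("interrupt -> cpu %d\n", cpu);
      }

      /* Interrupt only the processor that cares, and hand it the reason
       * so it never searches tables or lists to learn why it was woken. */
      static void notify(int cpu, enum reason r, uint16_t arg)
      {
          mailboxes[cpu].reason = (uint16_t)r;  /* deposit reason first */
          mailboxes[cpu].arg    = arg;
          raise_interrupt(cpu);                 /* ...then pulse line   */
      }

      int main(void)
      {
          notify(3, TASK_READY, 42);  /* tell CPU 3 task 42 is ready */
          return 0;
      }
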
  • The serial multiplexer controller 35 incorporates a Z-80 microprocessor 350, RAM memory 351, ROM memory 352, four independent serial interfaces 353, a system data channel interface 354, a local direct memory access controller 355, and the standard CPU support logic 356 and timing generators 357 associated with this type of processor system.
  • The basic objective of the serial multiplexer controller is to provide the real-time I/O processing for the system so that the general purpose processors 30 do not have to contend with the interrupt and real-time processing/latency requirements of the system.
  • Another objective of the serial multiplexer controller is to provide a flexible interface so that different communication and peripheral interfaces can be handled by a common controller, either directly or via simple adapters.
  • Each serial multiplexer controller 35 provides four independent serial interfaces, which may be used for connection to the cluster communication link 15, as shown in Figure 3, and for connection to work station terminals 125, printer/typesetters 126, modems 127 and similar intelligent peripheral devices in any mixture, as desired.
  • One or more serial multiplexer controllers 35 can be provided in each pedestal, connected to the common synchronous exchange bus 25, depending upon design requirements, to provide more or less interface capacity.
  • The mass storage controllers connected to the synchronous exchange bus 25 in each node are very similar in configuration to the serial multiplexer controller 35, except that they interface to mass storage devices such as a floppy disc drive, rigid disc drive, magnetic tape drive and the like.
  • In the mass storage controller, a processor 701 is connected to a ROM 702 and RAM 703 via a processor bus.
  • The global memory unit 43, which may be optionally connected to the synchronous exchange bus 25, as seen in Figure 8, to provide additional memory in the node, is basically a RAM with software-controlled address range setting. Since all other units connected to the bus 25 contain processors, their addressing is easily configured by the on-board processors. However, the global memory, being a non-intelligent unit, must have an external input to set its address allocation. This is accomplished by configuring the RAM to include control registers which another processor can read from and write into in order to control the global memory address range assigned thereto, as sketched below.
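  • A short sketch of that external configuration follows; the two-register (base/limit) layout is an assumption, chosen only to show a processor card programming the non-intelligent RAM's address range.

      #include <stdint.h>

      /* Hypothetical control registers the global memory exposes on the
       * bus; a processor card writes them to set the address window the
       * RAM array will respond to. */
      struct gm_ctrl {
          volatile uint32_t base;   /* first address decoded (inclusive) */
          volatile uint32_t limit;  /* last address decoded (inclusive)  */
      };

      static void set_global_window(struct gm_ctrl *ctrl,
                                    uint32_t base, uint32_t size)
      {
          ctrl->base  = base;
          ctrl->limit = base + size - 1;
      }

      int main(void)
      {
          static struct gm_ctrl ctrl;                 /* stand-in card  */
          set_global_window(&ctrl, 0x40000, 0x20000); /* 128 K window   */
          return (int)(ctrl.limit != 0x5FFFF);
      }
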
  • The synchronous exchange bus 25 includes a plurality of data/address lines to permit addressing of units on the bus and to effect transfer of data to and from such units.
  • The ASYNC line indicates when address information is stable on the bus, and the DSYNC line indicates when data is stable on the bus; the sketch below walks through one such transfer.
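  • The following C sketch models one read cycle using those two strobes on the multiplexed address/data lines; the timing model is deliberately simplified and the structure names are assumptions.

      #include <stdint.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Simplified software model of the multiplexed bus lines. */
      struct bus {
          uint32_t ad;    /* shared address/data lines */
          bool async;     /* ASYNC: address stable     */
          bool dsync;     /* DSYNC: data stable        */
      };

      static uint16_t slave_mem[0x100];

      /* Slave side of a read: latch the address while ASYNC is asserted,
       * then drive the data back on the same lines and assert DSYNC. */
      static void slave_read_cycle(struct bus *bus)
      {
          uint32_t addr = bus->ad;    /* latched while ASYNC is true  */
          bus->async = false;         /* address phase over           */
          bus->ad    = slave_mem[addr & 0xFF];
          bus->dsync = true;          /* data now stable on the lines */
      }

      int main(void)
      {
          struct bus bus = {0};
          slave_mem[0x42] = 0xBEEF;

          bus.ad = 0x42;              /* master drives the address... */
          bus.async = true;           /* ...and flags it stable       */
          slave_read_cycle(&bus);     /* slave answers at its own rate */

          printf("data 0x%04X (dsync=%d)\n", (unsigned)bus.ad, bus.dsync);
          return 0;
      }
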
  • The bus 25 also includes bus identification lines BID (0) and BID (1), by which physical I/O addresses are assigned to each card as it is plugged into the bus.
  • Contacts on each card engage contacts D, which are connected to the bus identification lines BID (0) and BID (1) in a coded combination representing the physical address of the slot on the bus, so that this address is automatically assigned to the card as it is plugged in.
  • The I/O or slot address of each card is stored in a register R2 on the card, which is also hard-wired to provide additional coding to identify the card type. This allows other cards to determine what type of card is in each slot simply by reading the contents of register R2 on the card.
  • Each card connected to the bus 25 also includes a register R1 in which the memory address assignment for that card is stored.
  • Since each card is automatically assigned a fixed I/O address according to the slot it occupies on the bus 25, the memory address space assigned to that card can be varied to permit reconfiguration of the memory space in the system simply by addressing the board via its slot or I/O address and placing in the address register on the card the new memory address assignment for that card.
  • All card slots have access to their slot number and to information concerning the other cards connected to the bus, and have the ability to assign memory addresses; a configuration walk of this kind is sketched below.
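  • The C sketch below illustrates such a configuration walk; the card type codes, window sizes and the register-access stubs standing in for slot-addressed I/O cycles are all assumptions for illustration.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_SLOTS 16
      #define SIZE_64K  0x10000u

      /* Hypothetical type codes a card hard-wires into register R2. */
      enum card_type { EMPTY = 0, GP_PROCESSOR, GLOBAL_MEMORY, IO_CTRL };

      static enum card_type backplane[NUM_SLOTS] = {
          GP_PROCESSOR, GP_PROCESSOR, GLOBAL_MEMORY, IO_CTRL
      };

      /* Stand-ins for I/O cycles addressed by slot number (the physical
       * address each card picks up from the BID lines when plugged in). */
      static enum card_type read_r2(int slot) { return backplane[slot]; }
      static void write_r1(int slot, uint32_t base)
      {
          printf("slot %2d: memory base 0x%05X\n", slot, base);
      }

      /* Walk every slot, read the card type from R2, and hand each card
       * that contributes memory a block assignment via R1 -- no switches. */
      static void configure_memory_map(void)
      {
          uint32_t next = 0;
          for (int slot = 0; slot < NUM_SLOTS; slot++) {
              switch (read_r2(slot)) {
              case GP_PROCESSOR:       /* its on-board global window  */
              case GLOBAL_MEMORY:      /* its RAM array               */
                  write_r1(slot, next);
                  next += SIZE_64K;
                  break;
              default:                 /* empty slot or pure I/O card */
                  break;
              }
          }
      }

      int main(void)
      {
          configure_memory_map();
          return 0;
      }
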
  • This type of operation permits the system to configure itself and results in fewer operator errors in the setting of switches to assign memory addresses, as is typical in the prior art. Further, operators do not need to know about the internal details of the system. It also increases the reliability of the system by allowing it to automatically reconfigure around failed modules and continue operation.
  • Figure 10 shows the details of the cluster communication link, which features a passive coaxial line to increase the system reliability and provide DC isolation so that a common system ground becomes unnecessary. As indicated with respect to Figure 1, up to sixteen nodes may be connected to the link 15 via transformer taps 14.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multi Processors (AREA)
  • Exchange Systems With Centralized Control (AREA)
EP82901083A 1981-02-25 1982-02-24 Multi-prozessor-bürosystemkomplex Withdrawn EP0073239A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US23804881A 1981-02-25 1981-02-25
US238048 1981-02-25

Publications (1)

Publication Number Publication Date
EP0073239A1 true EP0073239A1 (de) 1983-03-09

Family

ID=22896287

Family Applications (1)

Application Number Title Priority Date Filing Date
EP82901083A Withdrawn EP0073239A1 (de) 1981-02-25 1982-02-24 Multi-prozessor-bürosystemkomplex

Country Status (6)

Country Link
EP (1) EP0073239A1 (de)
CA (1) CA1184310A (de)
ES (1) ES8303741A1 (de)
GB (2) GB2107906B (de)
IT (1) IT1149773B (de)
WO (1) WO1982002965A1 (de)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4484273A (en) * 1982-09-03 1984-11-20 Sequoia Systems, Inc. Modular computer system
CA1266524A (en) * 1983-08-30 1990-03-06 Shinobu Arimoto Image processing system
FR2551282B1 (fr) * 1983-08-30 1994-05-13 Canon Kk Systeme de traitement d'image
US4731750A (en) * 1984-01-04 1988-03-15 International Business Machines Corporation Workstation resource sharing
GB2175421B (en) * 1985-05-13 1989-11-29 Singer Link Miles Ltd Computing system
GB2191612A (en) * 1986-06-11 1987-12-16 Ibm Display terminal
US5093913A (en) * 1986-12-22 1992-03-03 At&T Laboratories Multiprocessor memory management system with the flexible features of a tightly-coupled system in a non-shared memory system
EP0325080B1 (de) * 1988-01-22 1994-04-20 International Business Machines Corporation Protokoll und Vorrichtung für selektives Abtasten von verschiedenen Leitungen, die mit einem Übertragungsgerät verbunden sind
EP0325077B1 (de) * 1988-01-22 1992-09-09 International Business Machines Corporation Abtasterschnittstelle für Leitungsadapter einer Übertragungssteuerung
GB2206225A (en) * 1988-08-01 1988-12-29 Feltscope Ltd Point of sale terminals microcomputer system
EP0562251A2 (de) * 1992-03-24 1993-09-29 Universities Research Association, Inc. Durch ein dynamisches wiederkonfigurierbares serielles Netzwerk gesteuertes Paralleldatenübertragungsnetzwerk

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3916383A (en) * 1973-02-20 1975-10-28 Memorex Corp Multi-processor data processing system
US4030072A (en) * 1974-12-18 1977-06-14 Xerox Corporation Computer system operation and control
US4253146A (en) * 1978-12-21 1981-02-24 Burroughs Corporation Module for coupling computer-processors
US4253144A (en) * 1978-12-21 1981-02-24 Burroughs Corporation Multi-processor communication network
US4245306A (en) * 1978-12-21 1981-01-13 Burroughs Corporation Selection of addressed processor in a multi-processor network
US4240143A (en) * 1978-12-22 1980-12-16 Burroughs Corporation Hierarchical multi-processor network for memory sharing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO8202965A1 *

Also Published As

Publication number Publication date
IT8219853A0 (it) 1982-02-25
ES509892A0 (es) 1983-02-01
GB2107906B (en) 1985-10-09
WO1982002965A1 (en) 1982-09-02
GB2144892A (en) 1985-03-13
CA1184310A (en) 1985-03-19
GB8423510D0 (en) 1984-10-24
ES8303741A1 (es) 1983-02-01
IT1149773B (it) 1986-12-10
GB2107906A (en) 1983-05-05

Similar Documents

Publication Publication Date Title
EP0451938B1 (de) Mehrgruppen-Signalprozessor
EP0288636B1 (de) Netzwerkübertragungsadapter
US4633392A (en) Self-configuring digital processor system with logical arbiter
US4763249A (en) Bus device for use in a computer system having a synchronous bus
US5765036A (en) Shared memory device with arbitration to allow uninterrupted access to memory
US4470114A (en) High speed interconnection network for a cluster of processors
US4720784A (en) Multicomputer network
US4870704A (en) Multicomputer digital processing system
US4562535A (en) Self-configuring digital processor system with global system
EP0173809B1 (de) Arbitrierungsgerät und -verfahren mit Unterbrechungs/DMA-Anforderungen in Multiplexschaltungen
JP2593146B2 (ja) データハブ
US4001790A (en) Modularly addressable units coupled in a data processing system over a common bus
EP1422626B1 (de) Multi-Kern Kommunikationsmodul, Datenkommunikationssystem mit einem derartigen Modul und Datenkommunikationsverfahren
EP0139569A2 (de) Arbitrierungsmechanismus zur Steuerungszuweisung eines Übertragungsweges in einem digitalen Rechnersystem
US4661905A (en) Bus-control mechanism
EP0069774A1 (de) Unterbrecher-kopplungs- und -überwachungssystem
GB1572426A (en) Microcomputer systems including a memory
EP0140751A2 (de) Cache-Ungültigkeitserklärungsanordnung für Mehrprozessorsysteme
EP0138676A2 (de) Wiederholungsmechanismus zur Steuerungsfreigabe eines Übertragungsweges in einem digitalen Rechnersystem
EP0073239A1 (de) Multi-prozessor-bürosystemkomplex
KR900001120B1 (ko) 우선도가 낮은 유니트를 우선도가 높은 위치에 위치시키기 위한 분배된 우선도 회로망 로직을 가진 데이타 처리 시스템
EP0139568B1 (de) Bericht orientierter Unterbrechungsmechanismus für Mehrprozessorsysteme
US4658353A (en) System control network for multiple processor modules
CA1197019A (en) Multi-processor office system complex
WO1991010958A1 (en) Computer bus system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Designated state(s): BE CH DE FR GB LI SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19830422