WO1989009967A1 - Ordinateur a architecture de communication hybride - Google Patents

Ordinateur a architecture de communication hybride (Computer with a hybrid communication architecture)

Info

Publication number
WO1989009967A1
Authority
WO
WIPO (PCT)
Prior art keywords
processor
computer
bus
interface
external communication
Prior art date
Application number
PCT/US1989/001456
Other languages
English (en)
Inventor
Charles A. Vollum
Noel W. Henson
Original Assignee
Cogent Research, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cogent Research, Inc. filed Critical Cogent Research, Inc.
Publication of WO1989009967A1 publication Critical patent/WO1989009967A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F15/17356Indirect interconnection networks
    • G06F15/17368Indirect interconnection networks non hierarchical topologies
    • G06F15/17375One dimensional, e.g. linear array, ring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4004Coupling between buses
    • G06F13/4022Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network

Definitions

  • This invention relates to a computing machine with hybrid communication architecture.
  • A digital computer solves a problem by breaking the problem down into multiple steps.
  • A computer with a single processor is able to execute one step at a time. It takes an inordinate amount of time to solve a complex problem by use of such a sequential mode of operation. By operating multiple processors in parallel, it is generally possible to reduce substantially the time required to solve a problem. If multiple processors are operated in parallel, it is necessary for the processors to share data.
  • One technique for sharing data among multiple processors is for there to be a common, or global, memory to which all the processors have access on an equal footing.
  • A problem with this technique arises from the fact that only one processor can access the memory at a given time, and therefore contention problems limit the number of processors that can be accommodated.
  • The number of processors can be increased somewhat by use of coherent caching or crossbar switching, but such techniques are costly and cumbersome.
  • A second method of allowing sharing of data by multiple processors involves use of a parallel communication bus.
  • A bus allows a great deal of flexibility in communication, including the ability to broadcast data from one processor to many or all of the others in a single operation. Each processor is able to execute independently until the need to communicate arises. However, when communication is necessary, contention problems arise since only one processor can transmit on the bus at a time. Use of multiple buses can reduce problems due to contention, but multiple buses also reduce flexibility and add greatly to cost and complexity.
  • A third technique for allowing multiple processors to share data is provided by point-to-point communication links.
  • Processors that have built-in links are commercially available, and therefore they are very easy to provide. Links offer virtually unlimited expansion possibilities, since the number of communication paths increases whenever a processor is added. However, links are the most difficult to use, since the physical interconnection pattern must match the pattern of communication required by the program that is being executed. If processors are added to or removed from the system, a new pattern of link connections must be established, and the program must be rewritten, recompiled or, at the very least, relinked to match. Broadcasting a message is difficult and time consuming, since the message must be copied from one processor to the next until it has reached all processors. Since two different programs will generally require two different physical interconnection patterns, it has not hitherto been possible to execute multiple programs simultaneously using links for communication between processors unless the programs are specifically designed to require the same communication patterns.
  • The difficulty of matching the physical interconnection pattern with the pattern of communication required by the program is partially overcome by the Linda parallel processing system.
  • The read operation is identical to in except that the matching tuple is not removed from the tuple space.
  • The eval operation is a specialized form of out. Out creates a passive tuple, whereas eval creates an active tuple.
  • When a processor performs an in operation, it is necessary to search tuple space for the matching tuple. It would be time consuming to examine each tuple in turn to determine whether it matches the template, and therefore the tuples are classified and a directory is created to facilitate the search.
  • Different portions of the directory are accessed by the different processors, and in order to complete an in operation, potentially all the processors must examine their portions of the directory against the template in order to determine whether a matching tuple exists.
  • The Linda system makes the programmer's job vastly easier, since he need not know the source or the destination of his data.
  • The directory is automatically consulted and used to match a request for a tuple with a tuple that is available. When a required tuple is not available, the requester waits until such a tuple becomes available.
  • The Linda system can be implemented on any of the above-mentioned parallel processing architectures, most efficiently with global memory, next most efficiently with one or more buses, and least efficiently with links.
  • Linda is based on the use of tuples.
  • A tuple is a collection of related data. Elements of a tuple are fields holding actual values or formals.
  • A passive tuple is simply a collection of data items, whereas an active tuple is a process which becomes a passive tuple.
  • Tuples exist in an abstract space called tuple space.
  • The tuple space may exist over multiple processors.
  • Four principal operations can be performed on tuple space.
  • The out operation is an operation that creates a tuple and places it in tuple space.
  • The in operation is the reverse of out: it specifies a tuple that it desires, in the form of a template, and the computer matches the template against all the tuples existing in tuple space. If a matching tuple is found, it is removed from tuple space and is returned to the requesting process. When no tuple matches, the in operation blocks, and the requesting process is suspended until another process, through an out operation, creates a matching tuple. At this point, the requesting process continues. An out operation can never block.
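The four tuple-space operations described above (out, in, read and eval) can be sketched in a few lines of Python. This is an illustrative single-process model under assumed names (`TupleSpace`, `in_` because `in` is a Python keyword, `rd` for read); real Linda suspends a requester until a match appears, whereas here an unmatched request simply returns `None`.

```python
# Minimal, single-process sketch of Linda tuple-space operations.
# Illustration only: blocking semantics are not modeled.

class TupleSpace:
    def __init__(self):
        self.tuples = []                    # passive tuples

    def out(self, *fields):
        """Create a passive tuple and place it in tuple space."""
        self.tuples.append(tuple(fields))

    def eval(self, *fields):
        """Specialized out: evaluate any callable fields (an 'active'
        tuple), then store the resulting passive tuple."""
        self.out(*(f() if callable(f) else f for f in fields))

    def _match(self, template):
        # A field matches if the template holds an equal value, or a
        # type (a 'formal') that the stored value is an instance of.
        for t in self.tuples:
            if len(t) == len(template) and all(
                isinstance(v, f) if isinstance(f, type) else v == f
                for v, f in zip(t, template)
            ):
                return t
        return None

    def in_(self, *template):
        """Withdraw and return a matching tuple (None if no match)."""
        t = self._match(template)
        if t is not None:
            self.tuples.remove(t)
        return t

    def rd(self, *template):
        """Like in_, but the matching tuple stays in tuple space."""
        return self._match(template)

ts = TupleSpace()
ts.out("count", 3)
ts.eval("square", lambda: 7 * 7)     # active tuple -> ("square", 49)
assert ts.rd("square", int) == ("square", 49)   # rd leaves the tuple
assert ts.in_("count", int) == ("count", 3)     # in removes it
assert ts.rd("count", int) is None
```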
  • A preferred embodiment of the present invention is a computer which comprises a plurality of processor modules, each having at least first and second I/O connection interfaces, a processor connected to the interfaces, and random access memory connected to the processor.
  • The first I/O connection interface of each processor module is connected to a common bus.
  • The second I/O connection interface of each processor module is connected to a switch, the switch being operative to connect the second I/O connection interface of a selected processor module selectively to the second I/O connection interface of any other processor module.
  • A controller is connected to the bus for receiving over the bus data relating to a first processor module that requests access to information and to a second processor module from which the requested information can be obtained, and for controlling the switch to permit transmission of the requested information between the two processor modules.
  • The Linda directory can be updated by broadcasting over the bus.
  • The tuple is transmitted from the producing processor to the requesting processor in a non-broadcast fashion, by way of the switch.
  • While a tuple is being transmitted through the switch, the bus is available to process other updates and arrange other connections, so that many transmissions can occur simultaneously.
  • FIG. 1 is a block diagram of a first networked computer system embodying the present invention,
  • FIG. 2 is a more detailed block diagram of one of the computer stations shown in FIG. 1,
  • FIG. 3 is a simplified block diagram of a second networked computer system embodying the invention.
  • FIG. 4 is a simplified block diagram of a stand-alone computer embodying the invention.
  • The computer system illustrated in FIG. 1 comprises a user terminal 10, several computer stations 12 and a disc drive station 14.
  • The user terminal 10 comprises a processor 16 which is connected to various user utilities, such as a display card 18, a hard and/or floppy disc drive card 20, and a keyboard card 24, through its memory bus 26.
  • The memory bus is also connected to local random access memory (RAM) 28 and local read only memory (ROM) 30.
  • The processor 16 has four link interfaces 44.
  • A second processor 42 is connected over its memory bus to local RAM 48 and has four link interfaces 52.
  • Each processor 16, 42 is an Inmos IMS T800 transputer.
  • One link interface of the processor 16 is connected to a link interface of the processor 42, and the other three link interfaces of each processor are connected to respective demultiplexed interfaces of a 6:1 byte domain multiplexer/demultiplexer (mux/demux) 60, which has a multiplexed interface connected to a fiber optic transceiver 64.
  • The mux/demux 60 and the fiber optic transceiver 64 are used to transmit data and instructions between the processors 16, 42 and a fiber optic cable 68, which connects the user terminal 10 to one of the computer stations 12. Messages are transmitted over the cable 68 at a rate of about 100 Mb/s.
  • The mux/demux 60 has a multiplexer channel and a demultiplexer channel.
  • The multiplexer channel of the mux/demux comprises, for each demultiplexed interface, a serial-to-parallel converter which receives serial data over its link in words of eight bits at a rate of 10-20 Mb/s and, for each eight-bit serial word, generates an eight-bit parallel word and applies it to a parallel bus with a four-bit tag that designates the particular demultiplexed interface that provided the serial word.
  • Thus, a twelve-bit word is applied to the parallel bus.
  • The parallel bus is connected to a high speed parallel-to-serial converter, which reads the twelve-bit parallel words in turn and generates an electrical signal composed of a succession of twelve-bit serial words.
  • The electrical signal provided by the parallel-to-serial converter is applied to the fiber optic transceiver 64.
  • The demultiplexer channel of the mux/demux 60 comprises a serial-to-parallel converter which receives twelve-bit serial words from the fiber optic transceiver 64 and generates twelve-bit parallel words which are applied to the parallel bus. Each twelve-bit word comprises a four-bit tag and an eight-bit data word.
  • In this manner, the mux/demux receives up to six serial signals and interleaves them to provide a single serial output signal at its multiplexed interface.
  • The output signal of the mux/demux is applied to the fiber optic transceiver 64, which launches an optical signal, coded in accordance with the signal provided by the mux/demux, into the fiber optic cable 68.
  • A coded optical signal is received over the fiber optic cable, and the fiber optic transceiver generates a serial electrical signal in response thereto.
  • The serial electrical signal is applied to the multiplexed interface of the mux/demux 60, and the mux/demux demultiplexes it into up to six signals which are provided at the demultiplexed interfaces respectively of the mux/demux.
  • The processors 16, 42 manipulate the data or commands in accordance with programs stored in the ROM 30 and apply the data or commands in serial fashion over the links, the mux/demux 60 and the transceiver 64 to the fiber optic cable 68.
  • The serial signal is demultiplexed into up to six serial signals which are applied to the processors 16, 42 over the links, and the processors apply the data or commands to the display card 18 or the disc drive card 20.
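The tagged twelve-bit framing described above — an eight-bit data byte carried with a four-bit tag naming the link that produced it — can be sketched as follows. The helper names and the bit layout (tag in the high four bits) are assumptions for illustration; the patent specifies only the field widths.

```python
# Sketch of the 6:1 mux/demux framing: each byte from a link travels
# as a twelve-bit word whose four-bit tag identifies the source link.

def mux(streams):
    """Interleave byte streams into tagged twelve-bit words.
    `streams` maps a link number (0-5) to a list of bytes."""
    words = []
    for link, data in streams.items():
        for byte in data:
            assert 0 <= link < 16 and 0 <= byte < 256
            words.append((link << 8) | byte)   # assumed: tag in high bits
    return words

def demux(words):
    """Recover the per-link byte streams from tagged words."""
    streams = {}
    for w in words:
        link, byte = w >> 8, w & 0xFF
        streams.setdefault(link, []).append(byte)
    return streams

frames = mux({0: [0x41, 0x42], 5: [0xFF]})
assert demux(frames) == {0: [0x41, 0x42], 5: [0xFF]}
```

A round trip through `mux` and `demux` reproduces the original per-link streams, which is the property the tag exists to guarantee.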
  • Each computer station 12 comprises several processor modules 110, each of which is composed of a processor 114 and a random access memory 118.
  • The processor 114 is an Inmos IMS T800 transputer, comprising a CPU 122, an external parallel interface 126 and four link interfaces 130A-130D.
  • The processor 114 also includes other components, but these are not relevant to an understanding of the present invention and therefore are not illustrated in the drawings.
  • The parallel interface 126 is connected to an external 32-bit parallel bus 138, which is connected to the external parallel interface of each other processor module, a crossbar switch operator 142 and at least one external communication module 144.
  • The crossbar switch operator 142 is an Inmos IMS T414 transputer, which has essentially the same architecture as the IMS T800 transputer.
  • The link interfaces of the processor modules are connected to a programmable crossbar switch 146, which is implemented by four Inmos IMS C004 programmable link switches 150A-150D.
  • Each link switch has 32 data link connections. Respective data link connections of the link switch 150A, for example, are connected to the link interfaces 130A of the processors 114.
  • Each link switch also has a configuration link connection 158 over which it receives a signal for establishing the manner in which the data link connections are connected by the switch.
  • The configuration link connections 158 of the four link switches are connected to respective link interfaces of the operator 142.
  • The crossbar switch 146, the switch operator 142 and the parallel bus 138 are all carried by a mother board having sixteen connection slots.
  • Each connection slot can receive either a communications card, which carries a communication module 144, or a processor card, which carries two processor modules 110. Therefore, the maximum number of processor modules that can be accommodated (it being necessary to have at least one communication module) is thirty.
  • The four programmable link switches provide an aggregate of 128 link connections, and the maximum of thirty processor modules occupy 120 of these data link connections. The other eight link connections are connected to the external communication module, for purposes which will be described below. If there is more than one communication module (and consequently fewer than thirty processor modules), each communication module is connected to two link connections of each link switch.
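The link accounting above reduces to simple arithmetic, checked here for clarity: four switches of 32 data links each, thirty processor modules with four links each, and the remainder available to the communication module.

```python
# Quick check of the crossbar link budget described above.
total_links = 4 * 32          # four IMS C004 switches, 32 links each
used_by_processors = 30 * 4   # thirty processor modules, four links each

assert total_links == 128
assert total_links - used_by_processors == 8   # left for communication
```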
  • The disc drive station 14 comprises a single processor 160 which is connected over its memory bus 168 to a high speed disc drive card 162 and to local RAM and local ROM.
  • The processor 160, which may be an Inmos IMS T800 transputer, has four link interfaces which are connected to a 4:1 mux/demux 164.
  • The mux/demux 164 of the disc drive station 14 is essentially the same as the mux/demux 60 of the user terminal 10, and is connected to a fiber optic transceiver 166.
  • The disc drive station provides high speed disc access without being burdened by the need to generate a display.
  • The computer stations 12 shown in FIG. 1 each have two communication modules 144, which enable the stations 12 to be connected in a linear arrangement between the user terminal and the disc drive station. Each computer station therefore can accommodate up to twenty-eight processor modules.
  • The user terminal 10, the disc drive station 14 and the computer stations 12 may be connected in a star arrangement, as shown in FIG. 3.
  • The computer stations 12A-12D each need only one communication module, but the computer station 12E has six communication modules.
  • A third possibility is for the computer stations to be organized as a hierarchical tree. Numerous other arrangements, employing different interconnection schemes among the user terminal, the disc drive station and the necessary processor modules, are possible.
  • Each computer station 12 executes an application by use of the Linda language.
  • Data are received by the computer station by way of an external communication module 144 and are stored in the external memories 118.
  • The data stored in the memories 118 are associated as tuples, and each external memory 118 includes a directory portion containing information regarding the tuples that are stored in that memory.
  • When the processor 114 of a processor module 110 executes an out operation, the tuple generated in the out operation is loaded into the processor module's external memory and the processor module's portion of the directory is updated to reflect addition of this tuple.
  • When a processor module performs an in operation, it first examines its own portion of the directory against the template that defines the desired tuple.
  • If no matching tuple is found locally, the requesting processor module may broadcast the template over the parallel bus 138 to other processor modules of the computer station.
  • The receiving processor modules examine their respective portions of the directory, and the first processor module that finds a match places a signal on the bus to indicate that the other processor modules should cease the search.
  • The requesting and producing processor modules then provide signals to the operator 142, and the operator responds by causing the switch 146 to establish a connection between a link interface of the requesting processor module and the corresponding link interface of the producing processor module.
  • The matching tuple is then transmitted from the producing processor module to the requesting processor module through the links and the crossbar switch 146, and does not occupy the bus 138.
  • The directory portion of the requesting processor module is updated to reflect the fact that the tuple space of that processor module now contains the specified tuple.
  • The directory portion of the producing module is updated to reflect the fact that it no longer has the specified tuple in its tuple space.
  • In each case, the directory portion of the module that executes the operation is updated. It will therefore be seen that it is not necessary to burden the bus with messages pertaining to the contents of the tuple space of each processor module.
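The in-operation flow described above — consult the local directory first, broadcast the template only on a miss, then move the tuple itself over a point-to-point path rather than the bus — can be sketched as a software model. The class and function names are hypothetical, and blocking on a missing tuple is simplified to returning `None`; the patent describes a hardware protocol, not this code.

```python
# Illustrative model of the hybrid in-operation: directory lookups go
# over the (shared) bus, but the tuple travels point to point.

class Module:
    def __init__(self, name):
        self.name = name
        self.directory = {}        # tuple name -> tuple value

    def out(self, key, value):
        self.directory[key] = value

def linda_in(requester, modules, key):
    # 1. Local directory first: no bus traffic needed.
    if key in requester.directory:
        return requester.directory.pop(key)
    # 2. Broadcast the template over the bus; the first module that
    #    finds a match signals the others to stop searching.
    for producer in modules:
        if producer is not requester and key in producer.directory:
            # 3. The tuple itself travels over the crossbar link,
            #    not the bus (modeled here as a direct transfer).
            return producer.directory.pop(key)
    return None   # real Linda would suspend until the tuple appears

a, b = Module("A"), Module("B")
b.out("result", 42)
assert linda_in(a, [a, b], "result") == 42
assert "result" not in b.directory     # producer's directory updated
```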
  • Bus operation has four distinct cycles.
  • A processor that requires access to the bus 138 in order to transmit a message asserts a bus request signal on a bus control line, and an arbitration cycle takes place. If no other processor requires access to the bus at that time, the first-mentioned processor wins the arbitration by default. If one or more other processors require access to the bus, distributed arbitration logic ensures fair arbitration among the processors that require access.
  • When a processor has won the arbitration, a selection cycle takes place.
  • The transmitting processor writes a single 32-bit word onto the bus. If the computer station has twenty-eight processor modules and two communication modules, twenty-eight bits of this word define, on a one-bit-per-module basis, a mask of the processor modules that are to receive the ensuing message. Two more bits determine whether the external communication modules are to receive the message. This is the selection operation.
  • Thus, the transmitting processor can select any one or more of the other processors to receive its message.
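The 32-bit selection word described above can be sketched as a bit mask. The bit positions (processor modules in bits 0-27, communication modules in bits 28-29) are an assumption made for the example; the patent gives only the field widths, not the layout.

```python
# Sketch of the selection word: one bit per processor module, plus
# two bits for the communication modules. Layout is assumed.

def make_selection(processors, comm_modules=()):
    word = 0
    for p in processors:           # processor modules: bits 0-27
        assert 0 <= p < 28
        word |= 1 << p
    for c in comm_modules:         # communication modules: bits 28-29
        assert c in (0, 1)
        word |= 1 << (28 + c)
    return word

def is_selected(word, bit):
    """A module checks its own bit to see whether it must receive."""
    return bool(word & (1 << bit))

w = make_selection({0, 3, 27}, comm_modules={1})
assert is_selected(w, 3) and is_selected(w, 29)
assert not is_selected(w, 1)
```

Because the mask carries one bit per module, a single selection word can address any subset of receivers, from one module up to a full broadcast.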
  • Each processor that is selected by the transmitting processor to receive its message receives an interrupt from its parallel interface.
  • The interrupt forces the receiving processor into a receive message mode, in which each receiving processor reads the parallel bus.
  • The transmitting processor receives a status bit that indicates whether a receiving processor is in the receive message mode, and does not transmit data until all the receiving processors are in the receive message mode.
  • At that point, the transmitting processor and the receiving processors are interlocked and the selection cycle is complete.
  • A transmission cycle then occurs, in which the transmitting processor transmits its message over the bus in words of 32 bits.
  • The transmitting processor holds its data word on the bus until it has received an acknowledgement bit from each receiving processor.
  • The acknowledgement bit indicates that a receiving processor has read the data word from the bus.
  • The transmitting processor then ends its write cycle and each receiving processor ends its read cycle.
  • The transmitting processor then enters another write cycle, in which the next data word is transmitted.
  • The first word placed on the bus during the transmission phase represents the number of words to be transmitted.
  • The receiving processors count the number of words actually transmitted, and when the number transmitted is equal to the number represented by the first word, they process the message.
  • The transmitting processor then enters a disconnect cycle in which it negates its bus request, and this allows arbitration to take place again.
  • The processors that were previously selected are deselected.
  • The transmission is then complete, and the bus is available for another transmission.
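The length-prefixed framing of the transmission cycle — the first word carries the word count, and receivers count words until the count is reached — can be sketched as follows. A Python list stands in for the parallel bus, and the function names are illustrative; the word-by-word interlock and acknowledgement bits are omitted.

```python
# Sketch of the length-prefixed bus message described above.

def transmit(bus, payload):
    bus.append(len(payload))       # first word: number of words to follow
    bus.extend(payload)

def receive(bus):
    count = bus.pop(0)             # read the length word
    message = [bus.pop(0) for _ in range(count)]
    return message                 # processed once the count is reached

bus = []
transmit(bus, [10, 20, 30])
assert receive(bus) == [10, 20, 30]
assert bus == []                   # bus free for the next transmission
```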
  • Alternatively, each processor module could include a FIFO buffer for receiving and temporarily holding data transmitted to that processor module over the bus. In this fashion, the receiving processors would be able to run independently of each other.
  • However, a FIFO buffer has limited capacity, and if one or more of the buffers were filled during a transmission, it would be necessary to fall back on the previously-described mode of transmitting data from a transmitting processor to the receiving processors.
  • Each link is composed of two lines, allowing bi-directional communication between two processors.
  • The producing processor transmits words of eleven bits serially over the link using one line, the first bit of each word being a start bit and the last two bits marking the end of the word.
  • On receipt of the first bit of a word, the requesting processor transmits an acknowledgment to the producing processor over the other line of the link.
  • The acknowledgment code is a unique two-bit code and is received by the producing processor before it completes transmitting its word.
  • The length of the serial word is such that the acknowledgment code can travel through three link switches and still be received by the producing processor, indicating that the next word can be sent, before the transmission of the first word is completed. Accordingly, the producing processor can send a second word immediately after the first word, without having to wait until after the end of the first word to receive an acknowledgement.
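The eleven-bit link word described above — a start bit, eight data bits, and a two-bit ending — can be sketched as a framing/deframing pair. The concrete bit values (start bit 1, ending bits 0, data most significant bit first) are assumptions for illustration; the patent specifies only the field widths.

```python
# Sketch of the eleven-bit link framing: start bit + 8 data bits +
# two-bit ending. Bit values and ordering are assumed.

def frame(byte):
    bits = [1]                                          # start bit
    bits += [(byte >> i) & 1 for i in range(7, -1, -1)]  # data, MSB first
    bits += [0, 0]                                      # two-bit ending
    return bits

def unframe(bits):
    assert len(bits) == 11 and bits[0] == 1
    byte = 0
    for b in bits[1:9]:            # recover the eight data bits
        byte = (byte << 1) | b
    return byte

assert unframe(frame(0xA5)) == 0xA5
assert len(frame(0x00)) == 11
```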
  • The computer station shown in FIG. 2 is able to reconfigure the pattern of link connections among its processors dynamically, in response to changes in the communication pattern required by the program that is being executed. Also, multiple programs can be run simultaneously, without regard to the communications patterns required by the programs respectively, since the pattern of link connections is not fixed at the start of a program. The programmer need not consider the communications pattern that will be required by a program that he is writing, since a suitable pattern of link connections is established automatically.
  • The user terminal 10 can run the same programs as the computer stations 12. However, when it is used as the terminal for a network, the processors 16 and 42 do not run applications but are concerned with graphics, creating the display, reading the keyboard and accessing the disc.
  • The disc drive station 14 is particularly useful if disc access cannot be accomplished by the user terminal 10 with sufficient speed when it is having to perform other functions.
  • As noted above, each computer station 12 includes at least one communication module 144.
  • Each communication module comprises a processor 170, such as an Inmos IMS T800 transputer, having a parallel bus interface and four link interfaces.
  • The parallel bus interface is connected to the parallel bus 138 and the four link interfaces are connected to respective link interfaces of a 12:1 mux/demux 172.
  • The other eight link interfaces of the mux/demux are connected to eight link connections of the switch 146. Except for the number of link interfaces, and consequently the potential maximum speed of operation, the mux/demux 172 of the communication module operates in essentially the same way as the mux/demux 60 of the user terminal 10.
  • The communication modules enable the computer stations to exchange tuple requests and tuples.
  • When a requesting processor module broadcasts a template, the mask will generally include the communication module(s) of that station.
  • The message that is transmitted by the requesting processor module includes the name of the tuple space that would contain the desired tuple, if it exists.
  • The communication module stores the names of the tuple spaces that are associated with each of the other computer stations, and if any of the other computer stations with which that communication module can communicate by way of its transceiver 174 is associated with the named tuple space, the communication module will transmit the template over the fiber optic cable.
  • On receipt of such a message, the processor of the receiving communication module directs the message over the parallel bus 138 to the appropriate processor module(s). If the message is received at one communication module of a computer station having a second communication module that is connected to a computer station that includes a processor that is associated with the named tuple space, the processor of the receiving communication module directs the message over the parallel bus 138 to the other communication module for retransmission. Ultimately, the message reaches all computer stations having processors that are associated with the named tuple space.
  • When a matching tuple is found in a computer station other than the one that contains the requesting processor, the tuple is transmitted through the crossbar switch of the producing computer station to a communication module of that station and is transmitted serially to a communication module of the requesting computer station. This might involve the tuple's passing through one or more intermediate computer stations, and being passed between communication modules of an intermediate station through the crossbar switch of that station. When the tuple reaches the requesting computer station, it is transmitted to the requesting processor module by way of the crossbar switch.
  • The present invention is not restricted to the particular embodiment that has been described and illustrated, and variations may be made therein without departing from the scope of the invention as defined in the appended claims and equivalents thereof.
  • The invention is not restricted to use with any particular type of processor. It is necessary only that the processor be able to support two communication modes, one of which is suitable for transmission of data pertaining to information required by the processor and the other of which is suitable for transmission of the information itself.
  • The invention is not restricted to use with computers configured for connection in a network.
  • If no communication modules were required, the FIG. 1 computer could accommodate up to thirty-two processor modules. Further, the invention is not restricted to use of a crossbar switch to interconnect the link interfaces of the processors. Five processor modules each having four link interfaces could each have one link interface hard wired to a link interface of each other processor module, as shown in FIG. 4. In this case, a producing processor is always able to transmit a tuple to a requesting processor by way of the link interfaces by which they are connected.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multi Processors (AREA)
  • Information Transfer Systems (AREA)

Abstract

A computer comprises a plurality of processor modules (1100-110N), each having at least first and second I/O connection interfaces (126, 130A, 130B, 130C, 130D), a processor (122) connected to the interfaces (126, 130A, 130B, 130C, 130D), and random access memory (118) connected to the processor. The first I/O connection interface (126) of each processor module (1100-110N) is connected to a common bus (138). The second I/O connection interface (130A, 130B, 130C, 130D) of each processor module (1100-110N) is connected to a switch (150A, 150B, 150C, 150D), the switch (150A, 150B, 150C, 150D) being operative to connect the second I/O connection interface (130A, 130B, 130C, 130D) of a selected processor module (1100-110N) selectively to the second I/O connection interface (130A, 130B, 130C, 130D) of any other processor module (1100-110N). A controller (142) is connected to the bus (138) for receiving over the bus (138) data relating to a first processor module (1100-110N) that requests access to information and to a second processor module (1100-110N) from which the requested information can be obtained, and for controlling the switch (150A, 150B, 150C, 150D) so as to permit transmission of the requested information from the first processor module (1100-110N) to the second processor module (1100-110N).
PCT/US1989/001456 1988-04-08 1989-04-07 Ordinateur a architecture de communication hybride WO1989009967A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17941288A 1988-04-08 1988-04-08
US179,412 1988-04-08

Publications (1)

Publication Number Publication Date
WO1989009967A1 true WO1989009967A1 (fr) 1989-10-19

Family

ID=22656490

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1989/001456 WO1989009967A1 (fr) 1988-04-08 1989-04-07 Ordinateur a architecture de communication hybride

Country Status (4)

Country Link
EP (1) EP0377022A4 (fr)
JP (1) JPH01261772A (fr)
AU (1) AU3762289A (fr)
WO (1) WO1989009967A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0341905A2 (fr) * 1988-05-13 1989-11-15 AT&T Corp. Calculateur ayant un système de mémoire intelligente
EP0451938A2 (fr) * 1990-02-28 1991-10-16 Hughes Aircraft Company Processeur de signal formé de plusieurs groupes
EP0570950A3 (en) * 1992-05-22 1994-07-20 Ibm Advanced parallel array processor (apap)
EP0608663A1 (fr) * 1993-01-25 1994-08-03 BULL HN INFORMATION SYSTEMS ITALIA S.p.A. Système multiprocesseur avec mémoire partagée
US5467452A (en) * 1992-07-17 1995-11-14 International Business Machines Corporation Routing control information via a bus selectively controls whether data should be routed through a switch or a bus according to number of destination processors
US5566342A (en) * 1994-08-31 1996-10-15 International Business Machines Corporation Scalable switch wiring technique for large arrays of processors
US5832303A (en) * 1994-08-22 1998-11-03 Hitachi, Ltd. Large scale interconnecting switch using communication controller groups with multiple input-to-one output signal lines and adaptable crossbar unit using plurality of selectors
US5878277A (en) * 1995-05-23 1999-03-02 Hitachi Denshi Kabushiki Kaisha Communication system having at least two types of communication channels
EP0973093A2 (fr) * 1998-06-26 2000-01-19 Sony Computer Entertainment Inc. Information processing method and apparatus, and presentation medium
US6041379A (en) * 1996-10-04 2000-03-21 Northrop Grumman Corporation Processor interface for a distributed memory addressing system
EP1577786A1 (fr) * 2004-03-18 2005-09-21 High Tech Computer Corp. Serial/parallel data conversion module and corresponding computer system
WO2006071942A2 (fr) * 2004-12-28 2006-07-06 Intel Corporation Method and apparatus for implementing heterogeneous interconnects

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03135650A (ja) * 1989-10-20 1991-06-10 Fuji Photo Film Co Ltd File information transfer method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4417334A (en) * 1981-04-16 1983-11-22 Ncr Corporation Data processing system having dual-channel system bus
US4627045A (en) * 1984-02-14 1986-12-02 Rosemount Inc. Alternating communication channel switchover system
US4633473A (en) * 1984-08-02 1986-12-30 United Technologies Corporation Fault tolerant communications interface
US4644496A (en) * 1983-01-11 1987-02-17 Iowa State University Research Foundation, Inc. Apparatus, methods, and systems for computer information transfer
US4720780A (en) * 1985-09-17 1988-01-19 The Johns Hopkins University Memory-linked wavefront array processor
US4748560A (en) * 1984-10-15 1988-05-31 Mitsubishi Denki Kabushiki Kaisha Occupancy control system for plural serial buses
US4794594A (en) * 1986-06-25 1988-12-27 International Business Machines Corporation Method and system of routing data blocks in data communication networks
US4811210A (en) * 1985-11-27 1989-03-07 Texas Instruments Incorporated A plurality of optical crossbar switches and exchange switches for parallel processor computer

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4417334A (en) * 1981-04-16 1983-11-22 Ncr Corporation Data processing system having dual-channel system bus
US4644496A (en) * 1983-01-11 1987-02-17 Iowa State University Research Foundation, Inc. Apparatus, methods, and systems for computer information transfer
US4627045A (en) * 1984-02-14 1986-12-02 Rosemount Inc. Alternating communication channel switchover system
US4633473A (en) * 1984-08-02 1986-12-30 United Technologies Corporation Fault tolerant communications interface
US4748560A (en) * 1984-10-15 1988-05-31 Mitsubishi Denki Kabushiki Kaisha Occupancy control system for plural serial buses
US4720780A (en) * 1985-09-17 1988-01-19 The Johns Hopkins University Memory-linked wavefront array processor
US4811210A (en) * 1985-11-27 1989-03-07 Texas Instruments Incorporated A plurality of optical crossbar switches and exchange switches for parallel processor computer
US4794594A (en) * 1986-06-25 1988-12-27 International Business Machines Corporation Method and system of routing data blocks in data communication networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0377022A4 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0341905A3 (fr) * 1988-05-13 1992-01-22 AT&T Corp. Computer with intelligent memory system
US5134711A (en) * 1988-05-13 1992-07-28 At&T Bell Laboratories Computer with intelligent memory system
EP0341905A2 (fr) * 1988-05-13 1989-11-15 AT&T Corp. Computer with intelligent memory system
EP0451938A2 (fr) * 1990-02-28 1991-10-16 Hughes Aircraft Company Multiple cluster signal processor
EP0451938A3 (en) * 1990-02-28 1993-01-27 Hughes Aircraft Company Multiple cluster signal processor
US5590345A (en) * 1990-11-13 1996-12-31 International Business Machines Corporation Advanced parallel array processor(APAP)
EP0570950A3 (en) * 1992-05-22 1994-07-20 Ibm Advanced parallel array processor (apap)
US5467452A (en) * 1992-07-17 1995-11-14 International Business Machines Corporation Routing control information via a bus selectively controls whether data should be routed through a switch or a bus according to number of destination processors
US5701413A (en) * 1993-01-25 1997-12-23 Bull Hn Information Systems Italia S.P.A. Multi-processor system with shared memory
EP0608663A1 (fr) * 1993-01-25 1994-08-03 BULL HN INFORMATION SYSTEMS ITALIA S.p.A. Multi-processor system with shared memory
US5832303A (en) * 1994-08-22 1998-11-03 Hitachi, Ltd. Large scale interconnecting switch using communication controller groups with multiple input-to-one output signal lines and adaptable crossbar unit using plurality of selectors
US5566342A (en) * 1994-08-31 1996-10-15 International Business Machines Corporation Scalable switch wiring technique for large arrays of processors
US5878277A (en) * 1995-05-23 1999-03-02 Hitachi Denshi Kabushiki Kaisha Communication system having at least two types of communication channels
US6041379A (en) * 1996-10-04 2000-03-21 Northrop Grumman Corporation Processor interface for a distributed memory addressing system
EP0973093A2 (fr) * 1998-06-26 2000-01-19 Sony Computer Entertainment Inc. Information processing method and apparatus, and presentation medium
EP0973093A3 (fr) * 1998-06-26 2005-11-23 Sony Computer Entertainment Inc. Information processing method and apparatus, and presentation medium
EP1577786A1 (fr) * 2004-03-18 2005-09-21 High Tech Computer Corp. Serial/parallel data conversion module and corresponding computer system
WO2006071942A2 (fr) * 2004-12-28 2006-07-06 Intel Corporation Method and apparatus for implementing heterogeneous interconnects
WO2006071942A3 (fr) * 2004-12-28 2006-08-24 Intel Corp Method and apparatus for implementing heterogeneous interconnects
US7353317B2 (en) 2004-12-28 2008-04-01 Intel Corporation Method and apparatus for implementing heterogeneous interconnects
US7640387B2 (en) 2004-12-28 2009-12-29 Intel Corporation Method and apparatus for implementing heterogeneous interconnects

Also Published As

Publication number Publication date
EP0377022A1 (fr) 1990-07-11
JPH01261772A (ja) 1989-10-18
AU3762289A (en) 1989-11-03
EP0377022A4 (en) 1992-08-12

Similar Documents

Publication Publication Date Title
US5434970A (en) System for distributed multiprocessor communication
US4933846A (en) Network communications adapter with dual interleaved memory banks servicing multiple processors
US5752068A (en) Mesh parallel computer architecture apparatus and associated methods
JP2644780B2 (ja) Parallel computer with processing request function
US5247613A (en) Massively parallel processor including transpose arrangement for serially transmitting bits of data words stored in parallel
EP0389001B1 (fr) Commande de multitraitement pour ordinateurs vectoriels
WO1989009967A1 (fr) Computer with hybrid communication architecture
CZ290716B6 (cs) Computer system with multiple media
GB2204974A (en) Programmable i/o sequencer for use in an i/o processor
JP2003178039A (ja) Distributed shared virtual memory and its configuration method
EP0409434B1 (fr) Méthode et appareil de contrôle de communication entre ordinateurs
KR960012423B1 (ko) Method and apparatus for exchanging information between asynchronous digital processors
KR100719872B1 (ko) Parallel computer and information processing unit using the same
US5297255A (en) Parallel computer comprised of processor elements having a local memory and an enhanced data transfer mechanism
US5586289A (en) Method and apparatus for accessing local storage within a parallel processing computer
US5530889A (en) Hierarchical structure processor having at least one sub-sequencer for executing basic instructions of a macro instruction
US6279098B1 (en) Method of and apparatus for serial dynamic system partitioning
JP2938711B2 (ja) Parallel computer
JP3364937B2 (ja) Parallel arithmetic unit
JPH07271744A (ja) Parallel computer
JP2004013868A (ja) Information processing apparatus and cache flush control method used therefor
RU2042193C1 (ru) Computing system
SU618733A1 (ru) Microprocessor for data input/output
JP2984594B2 (ja) Multi-cluster information processing system
Kovaleski et al. An architecture and an interconnection scheme for time-sliced buses

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1989906950

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1989906950

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1989906950

Country of ref document: EP