EP0377022A1 - Computing machine with hybrid communication architecture - Google Patents
Info
- Publication number
- EP0377022A1 (application EP89906950A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- processor
- computer
- bus
- interface
- external communication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17356—Indirect interconnection networks
- G06F15/17368—Indirect interconnection networks non hierarchical topologies
- G06F15/17375—One dimensional, e.g. linear array, ring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
Definitions
- This invention relates to a computing machine with hybrid communication architecture.
- a digital computer solves a problem by breaking the problem down into multiple steps.
- a computer with a single processor is able to execute one step at a time. It takes an inordinate time to solve a complex problem by use of such a sequential mode of operation. By operating multiple processors in parallel, it is generally possible to reduce substantially the time required to solve a problem. If multiple processors are operated in parallel, it is necessary for the processors to share data.
- One technique for sharing data among multiple processors is for there to be a common, or global, memory to which all the processors have access on an equal footing.
- a problem with this technique arises from the fact that only one processor can access the memory at a given time, and therefore contention problems limit the number of processors that can be accommodated.
- the number of processors can be increased somewhat by use of coherent caching or crossbar switching, but such techniques are costly and cumbersome.
- a second method of allowing sharing of data by multiple processors involves use of a parallel communication bus.
- a bus allows a great deal of flexibility in communication, including the ability to broadcast data from one processor to many or all of the others in a single operation. Each processor is able to execute independently until the need to communicate arises. However, when communication is necessary, contention problems arise since only one processor can transmit on the bus at a time. Use of multiple buses can reduce problems due to contention, but multiple buses also reduce flexibility and add greatly to cost and complexity.
- a third technique for allowing multiple processors to share data is provided by point-to-point communication links.
- Processors that have built-in links are commercially available, and therefore they are very easy to provide. Links offer virtually unlimited expansion possibilities, since the number of communication paths increases whenever a processor is added. However, links are the most difficult to use, since the physical interconnection pattern must match the pattern of communication required by the program that is being executed. If processors are added to or removed from the system, a new pattern of link connections must be established, and the program must be rewritten, recompiled or, at the very least, relinked to match. Broadcasting a message is difficult and time consuming, since the message must be copied from one processor to the next until it has reached all processors. Since two different programs will generally require two different physical interconnection patterns, it has not hitherto been possible to execute multiple programs simultaneously using links for communication between processors unless the programs are specifically designed to require the same communication patterns.
- the difficulty of matching the physical interconnection pattern with the pattern of communication required by the program is partially overcome by the Linda parallel programming system, which is described below.
- the read operation is identical to in except that the matching tuple is not removed from the tuple space.
- the eval operation is a specialized form of out. Out creates a passive tuple, whereas eval creates an active tuple.
- When a processor performs an in operation, it is necessary to search tuple space for the matching tuple. It would be time consuming to examine each tuple in turn to determine whether it matches the template, and therefore the tuples are classified and a directory is created to facilitate the search.
- different portions of the directory are accessed by the different processors, and in order to complete an in operation, potentially all the processors must examine their portions of the directory against the template in order to determine whether a matching tuple exists.
- the Linda system makes the programmer's job vastly easier, since he need not know the source or the destination of his data.
- the directory is automatically consulted and used to match a request for a tuple with a tuple that is available. When a required tuple is not available, the requester waits until such a tuple becomes available.
- the Linda system can be implemented on any of the above-mentioned parallel processing architectures, most efficiently with global memory, next most efficiently with one or more buses, and least efficiently with links.
- Linda is based on the use of tuples.
- a tuple is a collection of related data. Elements of a tuple are fields holding actual values or formals.
- a passive tuple is simply a collection of data items, whereas an active tuple is a process which becomes a passive tuple.
- Tuples exist in an abstract space called tuple space.
- the tuple space may exist over multiple processors.
- Four principal operations can be performed on tuple space.
- the out operation is an operation that creates a tuple and places it in tuple space.
- the in operation is the reverse of out: it specifies a tuple that it desires, in the form of a template, and the computer matches the template against all the tuples existing in tuple space. If a matching tuple is found, it is removed from tuple space and is returned to the requesting process. When no tuple matches, the in operation blocks, and the requesting process is suspended until another process, through an out operation, creates a matching tuple. At this point, the requesting process continues. An out operation can never block.
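- As an informal illustration of the out, in and read (rd) operations described above, the following minimal Python sketch models a tuple space in which a template field of None acts as a formal that matches any value. It is not part of the patent; the TupleSpace class and its method names are hypothetical.

```python
# Minimal, hypothetical model of Linda-style tuple space operations.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *fields):
        # out never blocks: the tuple is simply added to tuple space.
        with self._cond:
            self._tuples.append(tuple(fields))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def in_(self, *template):
        # in blocks until a matching tuple exists, then removes and returns it.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def rd(self, *template):
        # rd is identical to in except that the matching tuple is not removed.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out("result", 3, 42.0)
print(ts.rd("result", 3, None))     # ('result', 3, 42.0); tuple stays in place
print(ts.in_("result", None, None)) # same tuple, now removed from tuple space
```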
- a preferred embodiment of the present invention is a computer which comprises a plurality of processor modules, each having at least first and second I/O connection interfaces, a processor connected to the interfaces, and a random access memory connected to the processor.
- the first I/O connection interface of each processor module is connected to a common bus.
- the second I/O connection interface of each processor module is connected to a switch, the switch being operative to connect the second I/O connection interface of a selected processor module selectively to the second I/O connection interface of any other processor module.
- a controller is connected to the bus for receiving, over the bus, data identifying a first processor module that requires access to information and a second processor module from which the requested information is available, and for controlling the switch so that the requested information can be transmitted between the two modules by way of the switch.
- the Linda directory can be updated by broadcasting directory changes over the bus. Since directory updates are small, a great volume of them can be moved over the bus in a short period.
- the tuple is transmitted from the producing processor to the requesting processor in a non-broadcast fashion, by way of the switch.
- the bus is available to process other updates and arrange other connections, so that many transmissions can occur simultaneously.
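- The division of labour between the two communication modes can be pictured with the hypothetical Python sketch below (the Module, Bus and Crossbar classes are illustrative, not taken from the patent): small directory updates are broadcast on the shared bus, while the tuple data itself travels over a switched point-to-point path that leaves the bus free for other traffic.

```python
# Illustrative sketch of the hybrid bus/switch communication architecture.
class Module:
    def __init__(self, name):
        self.name = name
    def on_bus_message(self, sender, message):
        # small control/directory traffic arrives via the shared bus
        print(f"{self.name}: bus message from {sender.name}: {message}")
    def receive_tuple(self, sender, payload):
        # bulk tuple data arrives via the switched point-to-point path
        print(f"{self.name}: {len(payload)} bytes from {sender.name} via the switch")

class Bus:
    def __init__(self, modules):
        self.modules = modules
    def broadcast(self, sender, message):
        for m in self.modules:
            if m is not sender:
                m.on_bus_message(sender, message)

class Crossbar:
    @staticmethod
    def transfer(producer, requester, payload):
        requester.receive_tuple(producer, payload)   # the bus stays free meanwhile

a, b, c = Module("A"), Module("B"), Module("C")
bus = Bus([a, b, c])
bus.broadcast(a, ("directory-update", "tuple x now held by A"))  # small, broadcast
Crossbar.transfer(a, b, b"x" * 4096)                             # large, point-to-point
```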
- FIG. 1 is a block diagram of a first networked computer system embodying the present invention,
- FIG. 2 is a more detailed block diagram of one of the computer stations shown in FIG. 1,
- FIG. 3 is a simplified block diagram of a second networked computer system embodying the invention, and
- FIG. 4 is a simplified block diagram of a stand-alone computer embodying the invention.
- the computer system illustrated in FIG. 1 comprises a user terminal 10, several computer stations 12 and a disc drive station 14.
- the user terminal 10 comprises a processor 16 which is connected to various user utilities, such as a display card 18, a hard and/or floppy disc drive card 20, and a keyboard card 24, through its memory bus 26.
- the memory bus is also connected to local random access memory (RAM) 28 and local read only memory (ROM) 30.
- the processor 16 has four link interfaces 44.
- a second processor 42 is connected over its memory bus to local RAM 48 and has four link interfaces 52.
- each processor 16, 42 is an Inmos IMS T800 transputer.
- One link interface of the processor 16 is connected to a link interface of the processor 42, and the other three link interfaces of each processor are connected to respective demultiplexed interfaces of a 6:1 byte domain multiplexer/demultiplexer (mux/demux) 60, which has a multiplexed interface connected to a fiber optic transceiver 64.
- the mux/demux 60 and the fiber optic transceiver 64 are used to transmit data and instructions between the processors 16, 42 and a fiber optic cable 68, which connects the user terminal 10 to one of the computer stations 12. Messages are transmitted over the cable 68 at a rate of about 100 Mb/s.
- the mux/demux 60 has a multiplexer channel and a demultiplexer channel.
- the multiplexer channel of the mux/demux comprises, for each demultiplexed interface, a serial to parallel converter which receives serial data over its link in words of eight bits at a rate of 10-20 Mb/s and, for each eight-bit serial word, generates an eight-bit parallel word and applies it to a parallel bus with a four-bit tag that designates the particular demultiplexed interface that provided the serial word.
- a twelve-bit word is applied to the parallel bus.
- the parallel bus is connected to a high speed parallel-to-serial converter, which reads the twelve-bit parallel words in turn and generates an electrical signal composed of a succession of twelve-bit serial words.
- the electrical signal provided by the parallel-to- serial converter is applied to the fiber optic transceiver 64.
- the demultiplexer channel of the mux/demux 60 comprises a serial-to-parallel converter which receives twelve-bit serial words from the fiber optic transceiver 64 and generates twelve-bit parallel words which are applied to the parallel bus. Each twelve-bit word comprises a four-bit tag and an eight-bit data word.
- the mux/demux receives up to six serial signals and interleaves them to provide a single serial output signal at its multiplexed interface.
- the output signal of the mux/demux is applied to the fiber optic transceiver 64, which launches an optical signal, coded in accordance with the signal provided by the mux/demux, into the fiber optic cable 68.
- a coded optical signal is received over the fiber optic cable and the fiber optic transceiver generates a serial electrical signal in response thereto.
- the serial electrical signal is applied to the multiplexed interface of the mux/demux 60 and the mux/demux demultiplexes it into up to six signals which are provided at the respective demultiplexed interfaces of the mux/demux.
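- The twelve-bit word format described above (a four-bit interface tag plus an eight-bit data byte) can be sketched as follows. The Python model is purely illustrative, and the bit layout shown (tag in the upper four bits) is an assumption, not a detail given in the text.

```python
# Illustrative model of tagged twelve-bit multiplexer words.
from itertools import zip_longest

def pack(tag, byte):
    # 12-bit word: assumed layout with the 4-bit tag above the 8 data bits.
    assert 0 <= tag < 16 and 0 <= byte < 256
    return (tag << 8) | byte

def unpack(word):
    return (word >> 8) & 0xF, word & 0xFF        # (tag, data byte)

def multiplex(streams):
    # Round-robin interleave of up to six tagged byte streams into one
    # sequence of twelve-bit words for the serial line.
    words = []
    for column in zip_longest(*[[(t, b) for b in data]
                                for t, data in streams.items()]):
        for item in column:
            if item is not None:
                words.append(pack(*item))
    return words

def demultiplex(words):
    out = {}
    for w in words:
        tag, byte = unpack(w)
        out.setdefault(tag, bytearray()).append(byte)
    return out

words = multiplex({0: b"hi", 3: b"ok!"})
print([f"{w:03x}" for w in words])   # interleaved tagged words
print(demultiplex(words))            # {0: bytearray(b'hi'), 3: bytearray(b'ok!')}
```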
- the processors 16, 42 manipulate the data or commands in accordance with programs stored in the ROM 30 and apply the data or commands in serial fashion over the links, the mux/demux 60 and the transceiver 64 to the fiber optic cable 68.
- the serial signal is demultiplexed into up to six serial signals which are applied to the processors 16, 42 over the links, and the processors apply the data or commands to the display card 18 or the disc drive card 20.
- each computer station 12 comprises several processor modules 110, each of which is composed of a processor 114 and a random access memory 118.
- the processor 114 is an Inmos IMS T800 transputer, comprising a CPU 122, an external parallel interface 126 and four link interfaces 130A-130D.
- the processor 114 also includes other components, but these are not relevant to an understanding of the present invention and therefore are not illustrated in the drawings.
- the parallel interface 126 is connected to an external 32-bit parallel bus 138, which is connected to the external parallel interface of each other processor module, a crossbar switch operator 142 and at least one external communication module 144.
- the crossbar switch operator 142 is an Inmos IMS T414 transputer, which has essentially the same architecture as the IMS T800 transputer.
- the link interfaces of the processor modules are connected to a programmable crossbar switch 146, which is implemented by four Inmos IMS C004 programmable link switches 150A-150D.
- Each link switch has 32 data link connections. Respective data link connections of the link switch 150A, for example, are connected to the link interfaces 130A of the processors 114.
- Each link switch also has a configuration link connection 158 over which it receives a signal for establishing the manner in which the data link connections are connected by the switch.
- the configuration link connections 158 of the four link switches are connected to respective link interfaces of the operator 142.
- the crossbar switch 146, the switch operator 142 and the parallel bus 138 are all carried by a mother board having sixteen connection slots.
- Each connection slot can receive either a communications card, which carries a communication module 144, or a processor card, which carries two processor modules 110. Therefore, the maximum number of processor modules that can be accommodated (it being necessary to have at least one communication module) is thirty.
- the four programmable link switches provide an aggregate of 128 link connections, and the maximum of thirty processor modules occupy 120 of these data link connections. The other eight link connections are connected to the external communication module, for purposes which will be described below. If there is more than one communication module (and consequently fewer than thirty processor modules), each communication module is connected to two link connections of each link switch.
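- The link budget above follows directly from the slot and switch counts; the short Python check below simply restates that arithmetic (the variable names are illustrative):

```python
# Link-budget arithmetic for the sixteen-slot motherboard described above.
SWITCHES, LINKS_PER_SWITCH = 4, 32        # four IMS C004s, 32 data links each
SLOTS = 16                                # motherboard connection slots
MODULES_PER_PROCESSOR_CARD = 2

total_links = SWITCHES * LINKS_PER_SWITCH                 # 128
max_modules = (SLOTS - 1) * MODULES_PER_PROCESSOR_CARD    # 30: one slot kept for a comm card
links_used = max_modules * SWITCHES                       # 120: one link per switch per module
print(total_links, max_modules, links_used, total_links - links_used)
# -> 128 30 120 8   (the spare eight links go to the communication module)
```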
- the disc drive station 14 comprises a single processor 160 which is connected over its memory bus 168 to a high speed disc drive card 162 and to local RAM and local ROM.
- the processor 160, which may be an Inmos IMS T800 transputer, has four link interfaces which are connected to a 4:1 mux/demux 164.
- the mux/demux 164 of the disc drive station 14 is essentially the same as the mux/demux 60 of the user terminal 10, and is connected to a fiber optic transceiver 166.
- the disc drive station provides high speed disc access without being burdened by the need to generate a display.
- the computer stations 12 shown in FIG. 1 each have two communication modules 144, which enable the stations 12 to be connected in a linear arrangement between the user terminal and the disc drive station. Each computer station therefore can accommodate up to twenty-eight processor modules.
- the user terminal 10, the disc drive station 14 and the computer stations 12 may be connected in a star arrangement, as shown in FIG. 3.
- the computer stations 12A-12D each need only one communication module, but the computer station 12E has six communication modules.
- a third possibility is for the computer stations to be organized as a hierarchical tree. Numerous other arrangements, employing different interconnection schemes among the user terminal, the disc drive station and the necessary processor modules, are possible.
- Each computer station 12 executes an application by use of the Linda language.
- Data are received by the computer station by way of an external communication module 144 and are stored in the external memories 118.
- the data stored in the memories 118 are associated as tuples, and each external memory 118 includes a directory portion containing information regarding the tuples that are stored in that memory.
- When the processor 114 of a processor module 110 executes an out operation, the tuple generated in the out operation is loaded into the processor module's external memory and the processor module's portion of the directory is updated to reflect addition of this tuple.
- When a processor module performs an in operation, it first examines its own portion of the directory against the template that defines the desired tuple.
- If no matching tuple is found locally, the requesting processor module may broadcast the template over the parallel bus 138 to other processor modules of the computer station.
- the receiving processor modules examine their respective portions of the directory, and the first processor module that finds a match places a signal on the bus to indicate that the other processor modules should cease the search.
- the requesting and producing processor modules then provide signals to the operator 142, and the operator responds by causing the switch 146 to establish a connection between a link interface of the requesting processor module and the corresponding link interface of the producing processor module.
- the matching tuple is then transmitted from the producing processor module to the requesting processor module through the links and the crossbar switch 146, and does not occupy the bus 138.
- the directory portion of the requesting processor module is updated to reflect the fact that the tuple space of that processor module now contains the specified tuple.
- the directory portion of the producing module is updated to reflect the fact that it no longer has the specified tuple in its tuple space.
- the directory portion of the module that executes the operation is updated. It will therefore be seen that it is not necessary to burden the bus with messages pertaining to the contents of the tuple space of each processor module.
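- The in-operation protocol described above can be summarised in the following hypothetical Python sketch. Class and method names are illustrative, and the real machine moves the tuple over the crossbar switch rather than by a method call.

```python
# Illustrative model: check the local directory portion, broadcast the
# template on the bus if necessary, then transfer the tuple point-to-point.
class ProcessorModule:
    def __init__(self, name):
        self.name = name
        self.tuples = []          # local tuple store / directory portion

    def local_match(self, template):
        for t in self.tuples:
            if len(t) == len(template) and all(p is None or p == v
                                               for p, v in zip(template, t)):
                return t
        return None

    def in_(self, template, bus):
        found, producer = self.local_match(template), self
        if found is None:
            for other in bus:                    # broadcast of the template
                if other is not self:
                    found = other.local_match(template)
                    if found is not None:        # first match halts the search
                        producer = other
                        break
        if found is None:
            return None                          # would block in the real machine
        producer.tuples.remove(found)            # producer's directory updated
        self.tuples.append(found)                # requester's directory updated
        return found                             # transferred via the crossbar switch

a, b = ProcessorModule("A"), ProcessorModule("B")
b.tuples.append(("temp", 21.5))
print(a.in_(("temp", None), bus=[a, b]))         # ('temp', 21.5)
```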
- Bus operation has four distinct cycles.
- a processor that requires access to the bus 138 in order to transmit a message asserts a bus request signal on a bus control line and an arbitration cycle takes place. If no other processor requires access to the bus at that time, the first-mentioned processor wins the arbitration by default. If one or more other processors require access to the bus, distributed arbitration logic ensures fair arbitration among the processors that require access.
- When the arbitration has been won, a selection cycle takes place.
- the transmitting processor writes a single 32-bit word onto the bus. If the computer station has twenty-eight processor modules and two communication modules, twenty-eight bits of this word define, on a one bit per module basis, a mask of the processor modules that are to receive the ensuing message. Two more bits determine whether the external communication modules are to receive the message. This is the selection operation.
- the transmitting processor can select any one or more of the other processors to receive its message.
- Each processor that is selected by the transmitting processor to receive its message receives an interrupt from its parallel interface.
- the interrupt forces the receiving processor into a receive message mode, in which each receiving processor reads the parallel bus.
- the transmitting processor receives a status bit that indicates whether a receiving processor is in the receive message mode, and does not transmit data until all the receiving processors are in the receive message mode.
- the transmitting processor and the receiving processors are interlocked and the selection cycle is complete.
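- The selection word can be modelled as a simple bit mask. The Python sketch below is illustrative only, and the bit positions shown (processor modules in bits 0-27, communication modules in bits 28-29) are an assumption rather than a detail given in the text.

```python
# Illustrative 32-bit selection mask: one bit per intended receiver.
def selection_word(processor_ids, comm_ids=()):
    word = 0
    for i in processor_ids:            # assumed bits 0..27: processor modules
        assert 0 <= i < 28
        word |= 1 << i
    for j in comm_ids:                 # assumed bits 28..29: communication modules
        assert 0 <= j < 2
        word |= 1 << (28 + j)
    return word

def selected(word, module_bit):
    return bool(word & (1 << module_bit))

w = selection_word({0, 5, 27}, comm_ids={1})
print(hex(w), selected(w, 5), selected(w, 6))   # 0x28000021 True False
```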
- a transmission cycle then occurs, in which the transmitting processor transmits its message over the bus in words of 32 bits.
- the transmitting processor holds its data word on the bus until it has received an acknowledgement bit from each receiving processor.
- the acknowledgement bit indi ⁇ cates that a receiving processor has read the data word from the bus.
- the transmitting processor then ends its write cycle and each receiving processor ends its read cycle.
- the transmitting processor then enters another write cycle, in which the next data word is transmitted.
- the first word placed on the bus during the transmission phase represents the number of words to be transmitted.
- the receiving processors count the number of words actually transmitted, and when the number transmitted is equal to the number represented by the first word, they process the message.
- the transmitting processor enters a disconnect cycle in which it negates its bus request, and this allows arbitration to take place again.
- the processors that were previously selected are deselected.
- the transmission cycle is then complete, and the bus is available for another transmission.
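- The word-count framing and per-word acknowledgement of the transmission cycle can be sketched as follows. The Python model is illustrative only; in the real machine the acknowledgement is a hardware status bit rather than a return value.

```python
# Illustrative model of the length-prefixed, acknowledged bus transmission.
class Receiver:
    def __init__(self, name):
        self.name = name
        self.expected = None
        self.words = []
    def read_word(self, word):
        if self.expected is None:
            self.expected = word                 # first word announces the count
        else:
            self.words.append(word)
            if len(self.words) == self.expected:
                print(self.name, "message complete:", [hex(w) for w in self.words])
        return True                              # acknowledgement: word read from the bus

def transmit(receivers, message_words):
    frame = [len(message_words)] + list(message_words)
    for word in frame:
        # the transmitter holds each word until every selected receiver has acknowledged it
        assert all(r.read_word(word) for r in receivers)

rx = [Receiver("P1"), Receiver("P2")]
transmit(rx, [0x12345678, 0xDEADBEEF, 0x0000002A])
```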
- each processor module could include a FIFO buffer for receiving and temporarily holding data transmitted to that processor module over the bus. In this fashion, the receiving processors are able to run independently of each other.
- a FIFO buffer has limited capacity, and if one or more of the buffers is filled during a transmission, it would be necessary to fall back on the previously-described mode of transmitting data from a transmitting processor to the receiving processors.
- Each link is composed of two lines, allowing bi-directional communication between two processors.
- the producing processor transmits words of eleven bits serially over the link using one line, the first bit of each word being a start bit and the last two bits being an end code.
- On receipt of the first bit of a word, the requesting processor transmits an acknowledgment to the producing processor over the other line of the link.
- the acknowledgment code is a unique two-bit code and is received by the producing processor before it completes transmitting its word.
- the length of the serial word is such that the acknowledgment code can travel through three link switches and still be received by the producing processor, indicating that the next word can be sent, before the transmission of the first word is completed. Accordingly, the producing processor can send a second word immediately after the first word, without having to wait until after the end of the first word to receive an acknowledgement.
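- The eleven-bit framing and two-bit acknowledgement can be illustrated with the toy Python sketch below. The actual bit patterns used for the start bit, end code and acknowledgement are not given in the text, so the values shown here are assumptions.

```python
# Illustrative link framing: 1 start bit + 8 data bits + 2-bit end code = 11 bits.
START, END = "1", "00"      # assumed bit patterns, not taken from the patent
ACK = "10"                  # assumed two-bit acknowledgement code

def frame(byte):
    return START + format(byte, "08b") + END     # eleven bits per data word

def deframe(bits):
    assert len(bits) == 11 and bits.startswith(START) and bits.endswith(END)
    return int(bits[1:9], 2)

word = frame(0xA5)
print(word, hex(deframe(word)), "ack:", ACK)     # 11010010100 0xa5 ack: 10
```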
- the computer station shown in FIG. 2 is able to reconfigure the pattern of link connections among its processors dynamically, in response to changes in the communication pattern required by the program that is being executed. Also, multiple programs can be run simultaneously, without regard to the communications patterns required by the programs respectively, since the pattern of link connections is not fixed at the start of a program. The programmer need not consider the communications pattern that will be required by a program that he is writing, since a suitable pattern of link connections is established automatically.
- the user terminal 10 can run the same programs as the computer stations 12. However, when it is used as the terminal for a network, the processors 16 and 42 do not run applications but are concerned with graphics, creating the display, reading the keyboard and accessing the disc.
- the disc drive station 14 is particularly useful if disc access cannot be accomplished by the user terminal 10 with sufficient speed when it is having to perform other functions.
- each computer station 12 includes at least one communication module 144.
- each communication module comprises a processor 170, such as an Inmos IMS T800 transputer, having a parallel bus interface and four link interfaces.
- the parallel bus interface is connected to the parallel bus 138 and the four link interfaces are connected to respective link interfaces of a 12:1 mux/demux 172.
- the other eight link interfaces of the mux/demux are connected to eight link connections of the switch 146. Except for the number of link interfaces, and consequently the potential maximum speed of operation, the mux/demux 172 of the communication module operates in essentially the same way as the mux/demux 60 of the user terminal 10.
- the communication modules enable the computer stations to exchange tuple requests and tuples.
- the mask will generally include the communication module(s) of that station.
- the message that is transmitted by the requesting processor module includes the name of the tuple space that would contain the desired tuple, if it exists.
- the communication module stores the names of the tuple spaces that are associated with each of the other computer stations, and if any of the other computer stations with which that communication module can communicate by way of its transceiver 174 is associated with the named tuple space, the communication module will transmit the template over the fiber optic cable.
- the processor of the receiving communication module directs the message over the parallel bus 138 to the appropriate processor module(s). If the message is received at one communication module of a station having a second communication module that is connected to a computer station that includes a processor that is associated with the named tuple space, the processor of the receiving communication module directs the message over the parallel bus 138 to the other communication module for retransmission. Ultimately, the message reaches all computer stations having processors that are associated with the named tuple space.
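- The forwarding rule described above amounts to routing a template by tuple-space name. The Python sketch below is a hypothetical illustration; the CommModule class, its fields and the printed messages are all assumptions.

```python
# Illustrative routing of a template between stations by tuple-space name.
class CommModule:
    def __init__(self, name, reachable_spaces, peer=None):
        self.name = name
        self.reachable_spaces = reachable_spaces  # tuple-space names reachable via our transceiver
        self.peer = peer                          # other comm module on the same parallel bus, if any

    def on_template(self, space_name, template):
        if space_name in self.reachable_spaces:
            print(f"{self.name}: forwarding {template} over the fiber link")
        elif self.peer is not None:
            print(f"{self.name}: relaying to {self.peer.name} over the parallel bus")
            self.peer.on_template(space_name, template)
        else:
            print(f"{self.name}: no route for tuple space {space_name!r}")

west = CommModule("west", {"physics"})
east = CommModule("east", {"render"}, peer=west)
east.on_template("physics", ("frame", None))   # relayed via west, then forwarded
```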
- When a matching tuple is found in a computer station other than the one that contains the requesting processor, the tuple is transmitted through the crossbar switch of the producing computer station to a communication module of that station and is transmitted serially to a communication module of the requesting computer station. This might involve the tuple's passing through one or more intermediate computer stations, and being passed between communication modules of an intermediate station through the crossbar switch of that station. When the tuple reaches the requesting computer station, it is transmitted to the requesting processor module by way of the crossbar switch.
- the present invention is not restricted to the particular embodiment that has been described and illustrated, and variations may be made therein without departing from the scope of the invention as defined in the appended claims and equivalents thereof.
- the invention is not restricted to use with any particular type of processor. It is necessary only that the processor be able to support two communication modes, one of which is suitable for transmission of data pertaining to information required by the processor and the other of which is suitable for transmission of the information itself.
- the invention is not restricted to use with computers configured for connection in a network.
- If the communication modules were omitted, the FIG. 1 computer could accommodate up to thirty-two processor modules. Further, the invention is not restricted to use of a crossbar switch to interconnect the link interfaces of the processors. Five processor modules each having four link interfaces could each have one link interface hard wired to a link interface of each other processor module, as shown in FIG. 4. In this case, a producing processor is always able to transmit a tuple to a requesting processor by way of the link interfaces by which they are connected.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Multi Processors (AREA)
- Information Transfer Systems (AREA)
Abstract
A computer comprises a plurality of processor modules (1100-110N), each comprising at least a first and a second I/O connection interface (126, 130A, 130B, 130C, 130D), a processor (122) connected to these interfaces (126, 130A, 130B, 130C, 130D), as well as a random access memory (118) connected to the processor. The first I/O connection interface (126) of each processor module (1100-110N) is connected to a common bus (138). The second I/O connection interface (130A, 130B, 130C, 130D) of each processor module (1100-110N) is connected to a switch (150A, 150B, 150C, 150D), the switch (150A, 150B, 150C, 150D) operating to connect the second I/O connection interface (130A, 130B, 130C, 130D) of a selected processor module (1100-110N) selectively to the second I/O connection interface (130A, 130B, 130C, 130D) of any other processor module (1100-110N). A control unit (142) is connected to the bus (138) to receive via the bus (138) data relating to a first processor module (1100-110N), requesting access to the information, and to a second processor module (1100-110N), from which the requested information can be obtained, and to control the switch (150A, 150B, 150C, 150D) in order to allow the transmission of the requested information from the first processor module (1100-110N) to the second processor module (1100-110N).
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17941288A | 1988-04-08 | 1988-04-08 | |
US179412 | 1988-04-08 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0377022A1 (en) | 1990-07-11 |
EP0377022A4 EP0377022A4 (en) | 1992-08-12 |
Family
ID=22656490
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19890906950 Withdrawn EP0377022A4 (en) | 1988-04-08 | 1989-04-07 | Computing machine with hybrid communication architecture |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0377022A4 (en) |
JP (1) | JPH01261772A (en) |
AU (1) | AU3762289A (en) |
WO (1) | WO1989009967A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5134711A (en) * | 1988-05-13 | 1992-07-28 | At&T Bell Laboratories | Computer with intelligent memory system |
JPH03135650A (en) * | 1989-10-20 | 1991-06-10 | Fuji Photo Film Co Ltd | File information transfer method |
IL97315A (en) * | 1990-02-28 | 1994-10-07 | Hughes Aircraft Co | Multiple cluster signal processor |
US5590345A (en) * | 1990-11-13 | 1996-12-31 | International Business Machines Corporation | Advanced parallel array processor(APAP) |
DE4223600C2 (en) * | 1992-07-17 | 1994-10-13 | Ibm | Multiprocessor computer system and method for transmitting control information and data information between at least two processor units of a computer system |
EP0608663B1 (en) * | 1993-01-25 | 1999-03-10 | Bull HN Information Systems Italia S.p.A. | A multi-processor system with shared memory |
US5832303A (en) * | 1994-08-22 | 1998-11-03 | Hitachi, Ltd. | Large scale interconnecting switch using communication controller groups with multiple input-to-one output signal lines and adaptable crossbar unit using plurality of selectors |
US5566342A (en) * | 1994-08-31 | 1996-10-15 | International Business Machines Corporation | Scalable switch wiring technique for large arrays of processors |
US5878277A (en) * | 1995-05-23 | 1999-03-02 | Hitachi Denshi Kabushiki Kaisha | Communication system having at least two types of communication channels |
US6041379A (en) * | 1996-10-04 | 2000-03-21 | Northrop Grumman Corporation | Processor interface for a distributed memory addressing system |
JP2000010913A (en) * | 1998-06-26 | 2000-01-14 | Sony Computer Entertainment Inc | Information processing device and method and distribution medium |
EP1577786A1 (en) * | 2004-03-18 | 2005-09-21 | High Tech Computer Corp. | Serial/parallel data transformer module and related computer system |
US7353317B2 (en) * | 2004-12-28 | 2008-04-01 | Intel Corporation | Method and apparatus for implementing heterogeneous interconnects |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4583161A (en) * | 1981-04-16 | 1986-04-15 | Ncr Corporation | Data processing system wherein all subsystems check for message errors |
US4644496A (en) * | 1983-01-11 | 1987-02-17 | Iowa State University Research Foundation, Inc. | Apparatus, methods, and systems for computer information transfer |
US4627045A (en) * | 1984-02-14 | 1986-12-02 | Rosemount Inc. | Alternating communication channel switchover system |
US4633473A (en) * | 1984-08-02 | 1986-12-30 | United Technologies Corporation | Fault tolerant communications interface |
JPS6194433A (en) * | 1984-10-15 | 1986-05-13 | Mitsubishi Electric Corp | Control system for serial bus |
US4720780A (en) * | 1985-09-17 | 1988-01-19 | The Johns Hopkins University | Memory-linked wavefront array processor |
US4811210A (en) * | 1985-11-27 | 1989-03-07 | Texas Instruments Incorporated | A plurality of optical crossbar switches and exchange switches for parallel processor computer |
EP0253940B1 (en) * | 1986-06-25 | 1991-05-02 | International Business Machines Corporation | Method and system of routing data blocks in data communication networks |
-
1988
- 1988-12-14 JP JP31609588A patent/JPH01261772A/en active Pending
-
1989
- 1989-04-07 WO PCT/US1989/001456 patent/WO1989009967A1/en not_active Application Discontinuation
- 1989-04-07 AU AU37622/89A patent/AU3762289A/en not_active Abandoned
- 1989-04-07 EP EP19890906950 patent/EP0377022A4/en not_active Withdrawn
Non-Patent Citations (2)
Title |
---|
No relevant documents disclosed * |
See also references of WO8909967A1 * |
Also Published As
Publication number | Publication date |
---|---|
EP0377022A4 (en) | 1992-08-12 |
JPH01261772A (en) | 1989-10-18 |
AU3762289A (en) | 1989-11-03 |
WO1989009967A1 (en) | 1989-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5367690A (en) | Multiprocessing system using indirect addressing to access respective local semaphore registers bits for setting the bit or branching if the bit is set | |
US4972314A (en) | Data flow signal processor method and apparatus | |
JP2644780B2 (en) | Parallel computer with processing request function | |
EP0389001B1 (en) | Computer vector multiprocessing control | |
CZ290716B6 (en) | Multimedia computer system | |
EP0377022A1 (en) | Computing machine with hybrid communication architecture | |
JP2003178039A (en) | Distributed shared virtual memory and its constituting method | |
CN1290003C (en) | Method and apparatus for interfacing a processor to a coprocessor | |
EP0409434B1 (en) | Method and device for controlling communication between computers | |
KR100719872B1 (en) | Parallel computer, and information processing unit using the same | |
US5530889A (en) | Hierarchical structure processor having at least one sub-sequencer for executing basic instructions of a macro instruction | |
US5526487A (en) | System for multiprocessor communication | |
US6279098B1 (en) | Method of and apparatus for serial dynamic system partitioning | |
JP3364937B2 (en) | Parallel processing unit | |
US4583167A (en) | Procedure and apparatus for conveying external and output data to a processor system | |
JPH11238034A (en) | Data transfer system | |
RU2042193C1 (en) | Computing system | |
Haralick et al. | Proteus: a reconfigurable computational network for computer vision | |
JP2536408B2 (en) | Data transfer device | |
SU618733A1 (en) | Microprocessor for data input-output | |
JPS62168257A (en) | Multiprocessor system sharing memory | |
JP2984594B2 (en) | Multi-cluster information processing system | |
JP2828972B2 (en) | Parallel processor | |
JPH01267762A (en) | Communication manager between cpu's | |
JPH064464A (en) | Peripheral equipment access device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19891221 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH DE FR GB IT LI LU NL SE |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 19920622 |
|
AK | Designated contracting states |
Kind code of ref document: A4 Designated state(s): AT BE CH DE FR GB IT LI LU NL SE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Withdrawal date: 19920818 |