WO2004107180A1 - Multiprocessor System - Google Patents
Multiprocessor System
- Publication number
- WO2004107180A1 (application PCT/JP2003/006868)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- processor
- update
- shared memory
- address
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1652—Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
- G06F13/1663—Access to shared memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0813—Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
Definitions
- The present invention relates to a shared-memory multiprocessor system in which a plurality of processors are interconnected and a shared memory space common to the processors is arranged, and in particular to a system in which each processor is provided with a shared memory cache that caches data in the shared memory space.
- The present invention relates to a system configured with such computers. Software processing is performed by each processor, and the shared memory is used to store information managed by the system as a whole rather than by a single processor, and to transfer data between processors. Shared memory caches are introduced to speed up access to the shared memory and improve system performance. Background art
- FIG. 1 is a diagram showing a conventional example of the simplest shared memory type multiprocessor system.
- Processors and the shared memory are connected by the same global bus, and each processor accesses the shared memory via this global bus.
- Each of the processors (1a-1) to (1a-n) sends a bus request signal (1c-1) to (1c-n) to the arbiter (1b), and the arbiter arbitrates the right to use the global bus.
- Only one processor at a time is given the right to use the global bus (1e), and the bus permission signal (1d-1) to (1d-n) is sent to that processor.
- The processor that has received the bus permission signal accesses the shared memory (1f) via the global bus and transfers the desired data. In the implementation of FIG. 1, every access to the shared memory space, read or write, goes via the global bus. This imposes two constraints: the transfer speed of the global bus itself is limited, and simultaneous accesses are serialized by arbitration.
- The former is caused by the fact that high-speed signal transmission becomes difficult under the electrical conditions of a global bus, such as the longer signal transmission distance and the sharing of the same signal lines by multiple processors.
- The latter means that if two or more processors access the shared memory at the same time, the second and subsequent processors must wait for access to the shared memory while arbitration of the global bus usage right completes. Consequently, these constraints limit the bandwidth of the shared memory space and increase its access latency.
- Figure 2 shows a conventional example in which a shared memory cache (2h) is placed on each processor.
- With this arrangement, each processor can individually hold a copy of data in the shared memory space, but the data must look the same to all processors. Therefore, for write processing, which triggers data updates, coherency control that guarantees this consistency is essential. For reasons described later, this coherency control itself also becomes a barrier to solving the above problems.
- FIG. 3 is a diagram illustrating coherency control.
- FIG. 3 explains the meaning of the requirements of coherency control: (1) all processors observe data updates to the same address in the same order; (2) a processor that has once read updated data never subsequently reads older data; (3) updates become visible to all processors as quickly as possible.
- It is assumed that processor 1 writes the value 1 to an address whose initial content is the value 0, processor 2 then writes the value 2 to the same address, and the other processors 3 to n read the address.
- Requirement 1 is equivalent to eliminating the possibility of any processor reading the values in reversed order, for example value 2 followed by value 1 (a guarantee on the order of updates).
- Requirement 2 is equivalent to eliminating the possibility that a processor that has already read the value 1 subsequently reads the value 0 (a guarantee that reads never go backward in time).
- Requirement 3 is equivalent to making as short as possible both the period during which another processor can still read the pre-update data and the delay from the data update until the updated data can be read (minimizing the update propagation times). Requirement 3 is not an essential requirement for coherency control, but is required to improve system performance.
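- The three requirements can be stated compactly. The following formalization uses notation of our own (these symbols do not appear in the original text): let ver_p(k) denote the index of the write whose value the k-th read by processor p returns.

```latex
\begin{align*}
\text{Requirement 1:}\quad & \text{all processors observe the writes in one global order } w_0, w_1, w_2, \dots\\
\text{Requirement 2:}\quad & k_1 < k_2 \;\Longrightarrow\; \mathrm{ver}_p(k_1) \le \mathrm{ver}_p(k_2) \quad \text{for every processor } p\\
\text{Requirement 3:}\quad & \text{minimize } t_{\mathrm{stale}} \text{ (window in which pre-update data is still readable)}\\
& \text{and } t_{\mathrm{prop}} \text{ (delay until the updated data becomes readable)}
\end{align*}
```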
- An example of coherency control in FIG. 2 is a method in which, every time a processor writes to the shared memory space, the write is reflected in its own shared memory cache and at the same time written to the shared memory via the global bus, while every processor monitors the write accesses appearing on the global bus and, when the data at that address is present in its own shared memory cache, replaces it with the data on the global bus.
- FIG. 4 is a diagram illustrating an example of how to establish cache coherency.
- FIG. 4 is an example of a processing sequence based on the above method.
- The timings (4a) to (4f) correspond to the successive events of this write sequence; of the associated quantities, the following is referred to below.
- t_dmw: the time required for a processor or the shared memory to recognize the write access on the global bus and reflect the data into itself.
- Expression (1) is a condition for satisfying requirement 1 above: it guarantees that the global bus is released only after the written value has been reflected in the shared memory and in the shared memory caches on all processors (in general, a sequence is used in which a write-completion response is returned from the writing side and the bus is released on this message). By satisfying this condition, it is guaranteed that when the next processor starts write processing after arbitration of the right to use the global bus, the previous write processing has been completed.
- Requirement 1 is essentially the same as requiring arbitration of data updates, because guaranteeing the ordering of data updates is equivalent to ensuring that multiple data updates do not occur at the same time, that is, to performing arbitration. Accordingly, satisfying requirement 1 of coherency control is subject to the same serialization constraint that arises in using the global bus, and is a barrier to solving the problem.
- Expression (2) is a condition for satisfying requirement 2 above by absorbing the fact that the timing of (4d) in FIG. 4 varies from processor to processor.
- The timing of (4d) is the boundary at which, when a read access that conflicts with a write access appearing on the global bus is activated on a processor, either the pre-update data or the post-update data is returned to the processor core. Since the updated data is returned at the timing of (4e), if expression (2) is not satisfied these timings may be reversed on some processor, which is contrary to the above requirement.
- Expression (1) indicates that the bus occupation time must be longer than a certain value; that is, it imposes a restriction on the bandwidth of the shared memory space.
- Expression (2) indicates that even if an attempt is made to increase the bandwidth by shortening the write time to the shared memory cache and the shared memory, that time must be kept above a certain level in consideration of the variation of the (4d) timing between processors.
- In this way, conditions are imposed on various operation timings, and when one tries to improve performance by shortening them, coherency control itself creates a kind of restriction.
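- Expressions (1) and (2) are not reproduced in this text, but their character can be sketched with assumed symbols: T_bus for the bus occupation time, t_dmw^(i) for the reflection time of processor i, and t_4d^(i), t_4e^(i) for the timings (4d) and (4e) on processor i. This is an interpretation of the surrounding description, not the patent's own formulas.

```latex
\begin{align*}
\text{(1)}\quad & T_{\mathrm{bus}} \;\ge\; \max_i\, t_{dmw}^{(i)}
  && \text{the bus is held until every cache has absorbed the write}\\
\text{(2)}\quad & \min_i\, t_{4e}^{(i)} \;\ge\; \max_i\, t_{4d}^{(i)}
  && \text{updated data is returned no earlier than the latest (4d) boundary on any processor}
\end{align*}
```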
- Patent Document 1 discloses a conventional technique for obtaining coherency between caches.
- the processor module has a cache memory, and issues a coherency transaction to another processor module via a bus.
- The processor module receiving the coherency transaction performs a coherency check.
- data to be used for the update is sent via a bus.
- a signal line connecting the processor module and the main memory is used for reporting the result of the coherency check.
- The object of the present invention is to solve the above-mentioned problems and to provide a multiprocessor system that improves the bandwidth and latency of the shared memory space while minimizing the performance-reduction factors due to the various restrictions described above, including coherency control.
- the multiprocessor system of the present invention is a multiprocessor system in which a plurality of processors each having a shared memory cache and at least one shared memory are mutually coupled.
- It comprises dedicated line means for exclusively transmitting and receiving data between the processors and the shared memory, and global bus means for transmitting data update notifications while arbitrating among the processors the right to transmit an update notification.
- Each processor transmits the data update notification and the data to be used for the update independently of each other. In each processor and in the shared memory, access to the address indicated by an update notification is restricted upon reception of the notification; after the update data has arrived at each processor and the shared memory and the data at that address in the shared memory area has been updated, access to the address is permitted again.
- Transmission and reception of the update data is sped up by providing the dedicated line means for transferring it.
- Since the global bus means arbitrates and transfers only update notifications, which carry a small amount of data, long waits to acquire the right to use the bus are rare.
- Since each processor and the shared memory update the shared memory area with the update data according to the update notification, coherency between the shared memory caches and the shared memory is ensured.
- FIG. 1 is a diagram showing a conventional example of the simplest shared memory type multiprocessor system.
- FIG. 2 is a diagram showing a conventional example in which a shared memory cache (2h) is arranged on each processor.
- FIG. 3 is a diagram illustrating coherency control.
- FIG. 4 is a diagram illustrating an example of how to establish cache coherency.
- FIG. 5 is a configuration diagram of a system based on the embodiment of the present invention.
- FIG. 6 is an example of a time chart based on a series of processes of the first mode in the embodiment of the present invention.
- FIG. 7 is an example of a time chart of a process based on the second mode of the embodiment of the present invention.
- FIG. 8 is an example of a time chart when data is updated with different data sizes.
- FIG. 9 is an example of a time chart of the process based on the third aspect of the embodiment of the present invention.
- FIG. 10 is an example of a time chart based on the principle of the fourth mode of the embodiment of the present invention.
- FIG. 11 and FIG. 12 are a configuration diagram of a system according to a fifth aspect of the embodiment of the present invention and a time chart illustrating a control principle thereof.
- FIG. 13 is a diagram illustrating a sixth aspect of the embodiment of the present invention.
- FIG. 14 is a more specific system configuration diagram based on the embodiment of the present invention.
- FIG. 15 is an internal configuration diagram of each of the processors (14a-1) to (14a-10) in FIG.
- FIG. 16 is a diagram showing a signal flow at the time of write access of the first mode in the embodiment of the present invention.
- FIG. 17 is a diagram showing a signal flow at the time of receiving update data based on the first mode of the embodiment of the present invention.
- FIG. 18 is a diagram showing a signal flow at the time of typical read access in which data of the shared memory cache can be used in the first mode of the embodiment of the present invention.
- FIG. 19 is a diagram showing a signal flow in the case where data in the shared memory cache cannot be used in the read access according to the first mode of the embodiment of the present invention and an update data request process is involved.
- FIG. 20 is a diagram showing a signal flow when the master processor responds to an update data request transmitted from another processor in the first mode of the embodiment of the present invention.
- FIG. 21 is a diagram showing a signal flow at the time of write access in the second mode of the embodiment of the present invention.
- FIG. 22 is a diagram showing a signal flow at the time of receiving update data in the second mode of the embodiment of the present invention.
- FIG. 23 is a diagram showing a signal flow at the time of a write access in which the update notification is omitted in the third mode of the embodiment of the present invention.
- FIG. 24 is a diagram showing a signal flow at the time of receiving update data without an update notification transmitted from another processor in the third mode of the embodiment of the present invention.
- FIG. 25 is a diagram showing a signal flow when a processor added to the system issues an all data transmission request in the cache fill operation according to the fourth mode of the embodiment of the present invention.
- FIG. 26 is a diagram showing a signal flow when the master processor performs all data transmission in response to the all data transmission request in the cache fill operation in the fourth mode of the embodiment of the present invention.
- FIG. 27 is a diagram showing a signal flow when a processor added to the system performs all data reception in the cache fill operation according to the fourth mode of the embodiment of the present invention.
- FIG. 28 is a diagram showing a signal flow at the time of write access based on the fifth aspect of the embodiment of the present invention.
- FIG. 5 is a configuration diagram of a system based on the embodiment of the present invention.
- In FIG. 5, the portion corresponding to the global bus in the conventional example is used as an update notification bus (5e), a path dedicated to data update notifications and update data transmission requests.
- the contents of the update data are transmitted to and received from the repeater (5h) using the data channel (5g).
- the data channel uses a known high-speed broadband transmission means (for example, Gigabit Ethernet).
- The repeater (5h) has the function of broadcasting data appearing at any port to which a data channel is connected to all ports.
- The shared memory may be allocated on a specific processor; alternatively, as in the example of Japanese Patent Application No. 2002-1256, when each processor is provided with a shared memory cache whose capacity equals the size of the shared memory space, the shared memory itself need not be provided. In either case, the effect of the embodiment of the present invention is obtained.
- each processor acquires the update notification bus and sends the address to be updated to the update notification bus.
- update data is sent to the transmission buffer of the data channel.
- The update data is subject mainly to signal processing delays at the ports of the processors and the repeater, and arrives at the other processors later than the update notification.
- The update notification bus is constantly monitored by all processors; when an update notification is detected, the address is written to an update queue on each processor. When the corresponding update data arrives, each processor writes it to its shared memory cache and deletes the address from the update queue.
- When read processing from the processor core is started for an address present in the update queue, the read from the shared memory cache is suspended; when the update data arrives, it is written to the shared memory cache and, at the same time, returned to the processor core as the read data.
- Since all the addresses stored in the update queue must be monitored, the write-destination address is attached to the update data.
- each processor compares the address in the update queue with the address added to the update data, and writes the update data to an appropriate address in the shared memory cache.
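- A minimal software model of this update-queue discipline is sketched below in C. It is an illustration under assumed names (the real mechanism is hardware; `QUEUE_DEPTH` and the flat `cache` array are simplifications):

```c
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_DEPTH 16          /* assumed depth; the patent does not fix one */

/* Hypothetical model of one processor's update queue. */
typedef struct {
    uint32_t addr[QUEUE_DEPTH];
    bool     valid[QUEUE_DEPTH];
} update_queue_t;

/* On detecting an update notification on the bus: record the address. */
static void on_update_notification(update_queue_t *q, uint32_t addr) {
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        if (!q->valid[i]) { q->addr[i] = addr; q->valid[i] = true; return; }
    }
}

/* On arrival of the update data: write the cache, clear the queue entry. */
static void on_update_data(update_queue_t *q, uint32_t addr, uint32_t data,
                           uint32_t *cache) {
    cache[addr] = data;                      /* reflect in shared memory cache */
    for (int i = 0; i < QUEUE_DEPTH; i++)
        if (q->valid[i] && q->addr[i] == addr) q->valid[i] = false;
}

/* A read must stall while its address is pending in the update queue. */
static bool read_must_stall(const update_queue_t *q, uint32_t addr) {
    for (int i = 0; i < QUEUE_DEPTH; i++)
        if (q->valid[i] && q->addr[i] == addr) return true;
    return false;
}
```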
- The configuration of the shared memory is basically the same as that of a processor, except that the shared memory has no processor core and the place of the shared memory cache is taken by a shared memory chip of larger capacity.
- When a read misses, an update data transmission request is issued to the update notification bus, and the shared memory, or another processor that retains valid data for the address in its shared memory cache, sends out the data as update data.
- FIG. 6 is an example of a time chart based on a series of processes of the first mode in the embodiment of the present invention.
- Processor 1 writes data 1 to address 1, and subsequently processor 2 writes data 2 to address 2; in parallel, processor 3 reads address 1, address 0, and address 1 again, in that order.
- A means an address and D means data.
- A notation such as (1)-0 indicates a write of data 0 to address 1, and a notation such as 1-(0) indicates a read of data 1 from address 0.
- At processor 3's first read, the update queue is empty, so the read is served from the shared memory cache and data 0 is returned to the processor core.
- the update notification from the processor 1 is detected, and is input to the update queue of the processor 3.
- At the second read, the update queue is not empty, but only address 1 is in it and it does not match the read address, so this read is also served from the shared memory cache and data 0 is returned to the processor core.
- When the update data for address 1 from processor 1 arrives, data 1 is written to the shared memory cache of processor 3 and the corresponding entry is cleared from the update queue. At the same time, the data is returned to the processor core as the read data for address 1.
- the main advantages of this method are the following two points.
- One is that the processor updating data does not have to wait for the other processors to reflect the update in their shared memory caches, so the bus occupation time is reduced, which improves the bandwidth of the shared memory space.
- the other is that the average latency of read access can be reduced by eliminating unnecessary wait time for read access that does not compete with data update processing.
- The degree of improvement over the conventional example varies with the hit ratio of the shared memory cache and the probability of access contention; in particular, the higher the hit ratio and the lower the contention probability, the more noticeable the superiority of this method becomes.
- The principle of the second mode of the embodiment of the present invention is to further increase the bandwidth of the shared memory space by performing data updates in block units on top of the first mode.
- The bandwidth of the data channel and of the shared memory cache can be made much larger than that of the update notification bus. In the first mode, therefore, the bandwidth of the shared memory space is limited by the bandwidth of the update notification bus, and the bandwidth of the data channel and the shared memory cache may not be fully utilized.
- FIG. 7 is an example of a time chart of the process based on the second mode of the embodiment of the present invention.
- In this example, data is updated in units of four addresses.
- The update notification sent by processors 1 and 2 indicates the starting address of the block to be updated, and the update data for the corresponding addresses is transmitted collectively on the data channel.
- the update data size is made variable so that only necessary and sufficient data is transmitted to the data channel.
- FIG. 8 is an example of a time chart when data is updated with different data sizes.
- The only difference from the example of FIG. 7 is that the first write of processor 1 covers two addresses rather than four. This difference reduces the occupation time of the data channel and the shared memory cache by the time that would be needed to transfer the data of the two unused addresses. The update data corresponding to processor 2's write processing also arrives earlier by that amount, so the time until the contents of the update queue are cleared is shortened and the chance that a conflicting read is kept waiting can be reduced.
- The method according to the second mode not only improves the bandwidth but also serves as a means of providing exclusive update in block units on the shared memory space, as the sketch below illustrates. In this regard, it can be expected to make software processing more efficient and improve the processing capacity of the system; to achieve the same with software alone, extra processing would be required to manage the start and completion of each update.
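- As a sketch, a block update of the second mode amounts to a single arbitrated notification carrying a start address and a size, followed by the whole block streamed on the data channel. The helper names below are assumptions for illustration, not the patent's signal names:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical block-update notification: one bus transaction covers a
 * power-of-two block, expressed as a start address plus a mask saying how
 * many low-order address bits the block spans. */
typedef struct {
    uint32_t start_addr;  /* first address of the updated block        */
    uint8_t  mask_bits;   /* block covers 1 << mask_bits addresses     */
} update_notice_t;

/* One notification on the arbitrated bus, then the data off the bus. */
static void block_write(uint32_t start, const uint32_t *data, size_t n_words,
                        void (*send_notice)(update_notice_t),
                        void (*send_data)(uint32_t addr, uint32_t word)) {
    uint8_t bits = 0;
    while (((size_t)1 << bits) < n_words) bits++;   /* round size up      */
    send_notice((update_notice_t){ start, bits });  /* single bus cycle   */
    for (size_t i = 0; i < n_words; i++)
        send_data(start + (uint32_t)i, data[i]);    /* bulk data channel  */
}
```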
- The principle of the third aspect of the embodiment of the present invention is to let the processor select, for each write access, an attribute indicating whether coherency control is required; for a write access with the attribute indicating that coherency control is unnecessary, no update notification is issued and only the update data is sent to the other processors.
- In software processing there are uses of the shared memory space for which a coherency guarantee is not required. For such processing, software can use this control to reduce the frequency of use of the update notification bus, which improves the bandwidth of the shared memory space, shortens the time until updated data is reflected on the other processors, and reduces the average latency of read access by minimizing the latency increase caused by unnecessary access contention.
- FIG. 9 is an example of a time chart of the process based on the third mode of the embodiment of the present invention.
- The access pattern of the processors in this example is based on the example of FIG. 6; the only difference is that the first write of processor 1 carries the attribute indicating that coherency control is not required. Since no processing on the update notification bus is initiated for processor 1's first write, the occupation time of the update notification bus is reduced by the corresponding amount. The update notification associated with the write access of processor 2 is therefore sent to the update notification bus earlier, shortening the update time. Processor 3's third read is issued after processor 1's write, but because the address was never placed in the update queue under this control, there is no queuing due to contention, and the read access completes with the same latency as a normal access. A sketch of this per-access selection follows.
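- In software terms, the third mode is a per-write flag: a coherent write notifies first, a data-only write skips the bus entirely. A minimal sketch with assumed primitives standing in for the hardware paths:

```c
#include <stdint.h>

/* Hypothetical per-access attribute of the third mode. */
typedef enum { WRITE_COHERENT, WRITE_DATA_ONLY } write_attr_t;

/* Assumed stand-ins for the hardware paths. */
extern void send_update_notification(uint32_t addr);        /* arbitrated bus */
extern void send_update_data(uint32_t addr, uint32_t data); /* data channel   */

static void shared_write(uint32_t addr, uint32_t data, write_attr_t attr) {
    if (attr == WRITE_COHERENT)
        send_update_notification(addr);  /* readers of addr will stall */
    /* A data-only write skips the notification: no bus cycle and no
     * stalls, but also no ordering guarantee versus other updates.   */
    send_update_data(addr, data);
}
```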
- The principle of the fourth aspect of the embodiment of the present invention is that, when a processor is added online, a processor or the shared memory that holds all the data of the shared memory space transfers its copy of the shared memory space to the added processor over the data channel during idle time, and the added processor receives the data and initializes its shared memory cache with it.
- FIG. 10 is an example of a time chart based on the principle of the fourth mode of the embodiment of the present invention.
- In the figure, a to h are transfers based on normal data update processing, and 1 to 8 are the data transfers to the added processor performed by this method.
- The added processor notifies the other processors that it has been newly installed in the system, for example by sending a specific signal to the update notification bus or by using a dedicated signal line that indicates whether a unit is installed.
- The processor or the shared memory that is to send data to the added processor receives this notification and, as shown in FIG. 10, sends the data to the data channel whenever its own update queue is empty. If the update queue ceases to be empty, it immediately suspends data transmission and gives priority to normal processing, and resumes data transmission when the update queue becomes empty again (see the sketch below).
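- The idle-time transfer can be pictured as the following loop, a sketch under assumed names (`SHMEM_WORDS`, the helper functions, and the busy-wait are illustrative simplifications):

```c
#include <stdint.h>
#include <stdbool.h>

#define SHMEM_WORDS 4096u  /* assumed size of the shared memory space */

/* Assumed hooks standing in for the hardware. */
extern bool update_queue_empty(void);
extern uint32_t shmem_read(uint32_t addr);
extern void send_data_only(uint32_t addr, uint32_t word); /* no notification */

/* Cache fill for a newly added processor: transmit the whole shared
 * memory space, but only while the update queue is idle, so that normal
 * update traffic always takes priority. */
static void background_fill(void) {
    uint32_t addr = 0;
    while (addr < SHMEM_WORDS) {
        if (!update_queue_empty())
            continue;                       /* suspend: normal work first */
        send_data_only(addr, shmem_read(addr));
        addr++;
    }
}
```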
- FIG. 11 and FIG. 12 are a configuration diagram of a system according to a fifth aspect of the embodiment of the present invention and a time chart illustrating a control principle thereof.
- The principle of the control according to the fifth aspect is to make it possible to selectively use the same method as the conventional one for write processing to specific addresses with a high contention frequency.
- A data bus (11i) for transferring update data is provided and subjected to the same arbitration as the update notification bus, and the processor selects for each write access whether to use the data bus.
- FIG. 13 is a diagram illustrating the sixth aspect of the embodiment of the present invention. FIG. 13(a) is a time chart of the control in the sixth mode.
- The control principle of the fifth mode is applied as it is to the system configuration of the first to fourth modes, and no physical transfer of update data is performed for specific write accesses.
- Instead, the data is updated using values reserved in advance.
- An address in the shared memory space and the data to be written there are associated in advance with a specific address generated by the processor core; when a write access to that specific address is issued, an update notification is issued and the reserved data is treated as if it had been transferred as update data.
- In the example shown, a write to address 1 is treated as a write of data 1 to the same address in the shared memory space. A sketch of this mapping follows.
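- The sixth-mode association can be sketched as a small lookup table; the trigger address 0xF000 and the helper functions below are hypothetical, chosen only to make the mapping concrete:

```c
#include <stdint.h>

/* Hypothetical sixth-mode mapping: a trigger address decoded from the
 * processor core is pre-associated with a (shared address, data) pair. */
typedef struct {
    uint32_t trigger_addr; /* specific address generated by the core        */
    uint32_t shared_addr;  /* target address in the shared memory space     */
    uint32_t data;         /* reserved data, treated as already transferred */
} reserved_write_t;

static const reserved_write_t table[] = {
    { 0xF000u, 1u, 1u },  /* a write to 0xF000 means "data 1 to address 1" */
};

extern void send_update_notification(uint32_t addr);
extern void apply_local_update(uint32_t addr, uint32_t data);

/* A write hitting a trigger address issues only the notification; the
 * reserved data is applied as if it had arrived on the data channel. */
static void core_write(uint32_t addr) {
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (table[i].trigger_addr == addr) {
            send_update_notification(table[i].shared_addr);
            apply_local_update(table[i].shared_addr, table[i].data);
            return;
        }
    }
}
```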
- FIG. 14 is a more specific system configuration diagram based on the embodiment of the present invention.
- The system consists of 10 processors (14a-1) to (14a-10) and a bus arbiter / repeater (14b).
- Although the bus arbiter and the repeater provide completely independent functions, both blocks are housed in the same unit to simplify the system configuration.
- The update notification bus (14c) consists of bus clocks BC1 to BC10, bus request signals NR1 to NR10, bus permission signals NG1 to NG10, an update notification address NA (30 bits), an update notification address mask NM (4 bits), immediate update data ND (4 bits), an update notification signal NV, an update data request signal RV, and an immediate update attribute signal NI, and operates in synchronization with BC. Its signals are collected in the sketch below.
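- Collected as a structure, the signals of the update notification bus look roughly as follows. The field widths follow the text above; the C representation itself is only an illustrative register-level view:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative register-level view of the update notification bus (14c). */
typedef struct {
    bool     nr[10];   /* NR1-NR10: bus request, one line per processor  */
    bool     ng[10];   /* NG1-NG10: bus permission (grant)               */
    uint32_t na : 30;  /* NA: update notification address (30 bits)      */
    uint8_t  nm : 4;   /* NM: update notification address mask (4 bits)  */
    uint8_t  nd : 4;   /* ND: immediate update data (4 bits, fifth mode) */
    bool     nv;       /* NV: update notification signal                 */
    bool     rv;       /* RV: update data request signal                 */
    bool     ni;       /* NI: immediate update attribute signal          */
} update_notification_bus_t;
```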
- The data channels TSD1 to TSD10 and RSD1 to RSD10 use full-duplex communication channels over serial transmission lines with a transmission band of about 3 gigabits/second. At least two of the processors hold the entire contents of the shared memory space, and one of them responds to update data requests as the master processor.
- FIG. 15 is a block diagram showing the internal configuration of each of the processors (14a-1) to (14a-10) in FIG. 14.
- processor core (15a)
- processor bus bridge (15b)
- update notification bus bridge (15e)
- data channel IF (15h)
- update queue (15k)
- shared memory cache (15n). The function of each part is outlined below.
- the control block (15c) performs overall control, and the redirector (15d) performs bus switching between functional blocks and converts addresses and data.
- FIG. 16 is a diagram showing a signal flow at the time of write access of the first mode in the embodiment of the present invention.
- The processor core (16a) sets the processor address PA, the processor data PD, and the processor transfer type PT, and transmits the processor write signal PW.
- the control logic (16c) of the processor bus bridge (16b) sets the redirector function control signal FC.
- The redirector (16d) copies the processor address PA to the effective address EA and the cache address CA, and the processor data PD to the effective data ED and the cache data CD.
- The control logic (16c) of the processor bus bridge (16b) transmits the update notification transmission signal NS, and the transmission section (16f) of the update notification bus bridge (16e) receives NS and transmits the bus request signal NR.
- The transmission section (16f) of the update notification bus bridge (16e) receives the bus permission signal NG and acquires the update notification bus.
- EA is echoed to the update notification address NA, and the update notification signal NV is transmitted to all processors. NA and NV are also looped back and received by the monitoring unit (16g) of the update notification bus bridge of the own processor.
- The monitoring unit (16g) of the update notification bus bridge (16e) echoes NA as the update notification address SA and NV as the update notification reception signal SV into its own processor.
- The update notification is queued in the queue register (16l) of the update queue (16k). At this time, the same control is performed on the other processors.
- The control logic (16c) of the processor bus bridge (16b) receives SV and transmits the update data transmission signal US, and the framer (16i) of the data channel IF (16h) receiving this queues the contents of EA/ED in the transmission buffer. After US has been transmitted, an acknowledge signal ACK is sent to the processor core, and the access on the processor core side is completed.
- FIG. 17 is a diagram showing a signal flow at the time of receiving update data based on the first mode of the embodiment of the present invention.
- The framer (17i) of the data channel IF (17h) receives the received parallel data RPD, extracts and expands the packets in the data, sets the update data address UA and the update data UD, and transmits the update data reception signal UR. At the same time, UA is set as the queue clear address QCA of the queue register (17l).
- the control logic (17c) of the processor bus bridge (17b) receives UR and sets the redirector function control signal FC.
- The redirector (17d) responds by echoing UA to CA and UD to CD. If other processing is being performed in the control logic (17c), this processing waits and is executed as soon as that processing is completed.
- FIG. 18 is a diagram showing a signal flow at the time of a typical read access in which data of a shared memory cache can be used in the first mode of the embodiment of the present invention. The flow is shown below. The numbers at the beginning of each line correspond to the numbers given to each signal in Fig. 18.
- the processor core (18a) sets PA and PT, and sends a processor read signal PR.
- The shared memory cache (18n) receives CR and transmits the unavailable signal NP if the data on the cache specified by CA is not available, or transmits the cache data CD if it is available.
- The comparator (18m) of the update queue (18k) transmits the contention signal COL when the address designated by EA is in the queue register.
- FIG. 19 is a diagram showing a signal flow in a read access according to the first mode of the embodiment of the present invention in which the data in the shared memory cache cannot be used and update data request processing is involved.
- When the control logic (19c) of the processor bus bridge (19b) receives NP without receiving COL, it transmits the update data request signal RS.
- The transmission section (19f) of the update notification bus bridge (19e) receives RS and transmits the bus request signal NR.
- EA is echoed to the update notification address NA, and the update data request signal RV is transmitted to all processors.
- NA and RV are also looped back and received by the monitoring unit (19g) of the update notification bus bridge of the own processor.
- The monitoring unit (19g) of the update notification bus bridge (19e) echoes NA as SA and, upon detecting the RV sent by its own processor, echoes it as SV within its own processor.
- The update queue (19k) receives SV as the queue set signal QS and queues the contents of SA in the queue register (19l) as the queue set address QSA.
- In response to the update data request sent in (8), the master processor transmits the update data; the data channel IF (19h) receiving it sets the update data address UA and the update data UD and transmits the update data reception signal UR. At the same time, UA is set as the queue clear address QCA in the queue register (19l).
- The control logic (19c) of the processor bus bridge (19b) receives the release of COL and controls FC so that the redirector (19d) echoes UA to CA and UD to CD and PD.
- The control logic (19c) of the processor bus bridge (19b) transmits the cache write signal CW to update the desired data on the shared memory cache with CD, transmits ACK to the processor core, and completes the read access.
- FIG. 20 is a diagram showing a signal flow when the master processor responds to an update data request transmitted from another processor in the first mode of the embodiment of the present invention.
- The monitoring unit (20g) of the update notification bus bridge (20e) echoes NA to SA and transmits the update data request signal SR into the processor.
- If this is the master processor, the control logic (20c) of the processor bus bridge (20b) sets FC in response to SR and controls the redirector (20d) to echo SA to EA and CA and to connect CD to ED. If this is not the master processor, SR is ignored. If the control logic is performing other processing, this processing waits and is executed as soon as that processing is completed.
- The control logic (20c) of the processor bus bridge (20b) transmits CR to the shared memory cache (20n).
- CD is sent from the shared memory cache (20n) and echoed to ED.
- the control logic (20c) of the processor bus bridge (20b) transmits US, and the update data is transmitted to the data channel in the same manner as the update data transmission processing at the time of write access.
- FIG. 21 is a diagram showing a signal flow at the time of write access in the second mode of the embodiment of the present invention.
- The processor core (21a) sets the processor address PA, the processor data PD, and the processor transfer type PT, and transfers data spanning multiple addresses to the redirector by burst transfer.
- the control logic (21c) of the processor bus bridge (21b) sets the redirector function control signal FC.
- The redirector (21d) echoes the first address set in the processor address PA to the effective address EA. It also counts the data size of the burst transfer, calculates the effective address mask EM from it, and outputs it.
- The effective address mask is a signal indicating how many of the lower bits of the effective address are to be ignored. The data of the multiple addresses set on PD is stored in a buffer inside the redirector.
- the control logic (21c) of the processor bus bridge (21b) transmits the update notification transmission signal NS.
- The transmission section (21f) of the update notification bus bridge (21e) receives NS and transmits the bus request signal NR.
- the transmission section (21f) of the update notification bus bridge (21e) receives the bus permission signal NG and acquires the update notification bus.
- EA is echoed to the update notification address NA and EM to the update notification address mask NM, and the update notification signal NV is sent to all processors.
- NA, NM, and NV are also looped back and received by the monitoring unit (21g) of the update notification bus bridge of the own processor.
- The monitoring unit (21g) of the update notification bus bridge (21e) receives NV, echoes NA to the update setting address SA and NM to the update setting address mask SM, and transmits the update notification reception signal SV.
- The update queue (21k) receives SV as the queue set signal QS, and queues the contents of SA in the queue register (21l) as the queue set address QSA and the contents of SM as the queue set address mask QSM.
- the control logic (21c) of the processor bus bridge (21b) transmits the update data transmission signal US when receiving the SV, and sets FC at the same time.
- The redirector (21d) sets ED sequentially, starting from the first data of the update data stored in the buffer.
- The framer (21i) of the data channel IF (21h) receiving this queues the contents of EA/EM/ED in the transmission buffer. After US has been transmitted, an acknowledge signal ACK is sent to the processor core, and the access on the processor core side is completed.
- The data queued in the transmission buffer is assembled into packets as needed, and when a packet is complete, it is transferred to the SERDES (21j).
- The packet is passed as transmission parallel data TPD; the SERDES modulates it into an electrical signal that can be carried on the data channel and sends out the update data as transmission serial data TSD.
- FIG. 22 is a diagram showing a signal flow at the time of receiving update data transmitted from another processor in the second mode of the embodiment of the present invention.
- the SERDES (22j) of the data channel IF (22h) demodulates the received serial data RSD and sends it to the framer (22i) as received parallel data RPD.
- The framer (22i) of the data channel IF (22h) receives RPD, extracts and expands the packets in the data, sets the update data address UA and the update address mask UM, and transmits the update data reception signal UR. At the same time, UA is set as the queue clear address QCA of the queue register (22l). Along with the transmission of UR, the update data is set on UD sequentially, starting from the first data.
- the control logic (22c) of the processor bus bridge (22b) receives the UR and sets the redirector function control signal FC.
- UA and UD are temporarily stored in a buffer inside the redirector, and UA is set on CA and the first data of UD on CD. If other processing is being performed in the control logic (22c), this processing waits and is executed as soon as that processing is completed.
- The control logic (22c) of the processor bus bridge (22b) transmits the cache write signal CW, and the shared memory cache (22n) receiving it updates the desired data specified by CA with CD. Subsequently, the next update data stored in the redirector's buffer is set on CD, the value of CA is incremented by 1, and the same cache memory update processing is repeated, in accordance with the setting value of UM, until there is no more update data. The control logic then transmits the queue clear signal QC, and the update queue (22k) receiving it clears the QCA set in (2) from the queue register (22l).
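- The receive-side burst loop can be condensed as follows; the function and parameter names are assumptions, and the flat `cache` array stands in for the shared memory cache:

```c
#include <stdint.h>

/* Apply one received burst to the shared memory cache, then clear the
 * matching update-queue entry: a condensed view of the FIG. 22 sequence. */
static void apply_burst(uint32_t ua, uint8_t um, const uint32_t *ud,
                        uint32_t *cache,
                        void (*queue_clear)(uint32_t qca)) {
    uint32_t n = 1u << um;     /* UM low bits ignored => 2^UM addresses */
    for (uint32_t i = 0; i < n; i++)
        cache[ua + i] = ud[i]; /* one CW per word, CA incremented by 1  */
    queue_clear(ua);           /* QC: clear QCA from the queue register */
}
```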
- FIG. 23 is a diagram showing a signal flow at the time of a write access in which the update notification is omitted in the third mode of the embodiment of the present invention.
- The processor core (23a) sets the data-only attribute in the processor transfer type PT, and transmits the processor address PA, the processor data PD, and the processor write signal PW.
- the control logic (23c) of the processor bus bridge (23b) sets the redirector function control signal FC.
- The redirector (23d) echoes the processor address PA to the effective address EA and the processor data PD to the effective data ED.
- the control logic (23c) of the processor bus bridge (23b) sets the data only attribute signal DO and transmits the update data transmission signal US. After transmission of the US, an acknowledgment signal ACK is transmitted to the processor core, and the access on the processor core side is completed.
- The framer (23i) of the data channel IF (23h), receiving the update data transmission signal US and the data-only attribute signal DO, queues the contents of EA/ED and the data-only attribute in the transmission buffer.
- FIG. 24 is a diagram showing a signal flow at the time of receiving update data for which the update notification was omitted, transmitted from another processor in the third mode of the embodiment of the present invention. The flow is shown below. The numbers at the beginning of each line correspond to the numbers assigned to each signal in FIG. 24.
- The SERDES (24j) of the data channel IF (24h) demodulates the received serial data RSD and sends it to the framer (24i) as the received parallel data RPD.
- The framer (24i) of the data channel IF (24h) receives RPD, extracts and expands the packets in the data, sets the update data address UA, the update data UD, and the data-only attribute DO, and transmits the update data reception signal UR.
- the control logic (24c) of the processor bus bridge (24b) receives the update data reception signal UR and the data only attribute signal DO, and sets the redirector function control signal FC. In response, the redirector (24d) echoes UA to the cache address CA and UD to the cache data CD. If other processing is being performed in the control logic (24c), the system waits temporarily and executes this processing as soon as it is completed.
- The control logic (24c) of the processor bus bridge (24b) transmits the cache write signal CW, and the shared memory cache (24n) receiving it updates the desired data specified by CA with CD.
- FIG. 25 is a diagram showing a signal flow when a processor added to the system issues an all data transmission request in the cache fill operation in the fourth mode of the embodiment of the present invention.
- the transmitting section (25f) of the update notification bus bridge (25e) receives RS and IS, and transmits a bus request signal NR.
- the transmission section (25f) of the update notification bus bridge (25e) receives the bus permission signal NG and acquires the update notification bus.
- the transmitting section (25f) of the update notification bus bridge (25e) transmits RV and NI simultaneously.
- FIG. 26 is a diagram showing a signal flow when the master processor performs all data transmission in response to an all data transmission request in the cache fill operation according to the fourth mode of the embodiment of the present invention.
- When the monitoring unit (26g) of the update notification bus bridge (26e) of the master processor receives NI simultaneously with RV, it transmits SR and SI simultaneously.
- When the control logic (26c) of the processor bus bridge (26b) receives SR and SI at the same time, it interprets them as an all data transmission request, and stores the first address of the shared memory space both as the transmission start address and as the next transmission address.
- When the queue empty signal QE is valid and no other processing is required, the control logic (26c) sets the redirector function control signal FC, the redirector (26d) sets the stored next transmission address on the cache address CA, and the control logic (26c) transmits the cache read signal CR.
- The shared memory cache (26n) receives CR and sends the data on the cache specified by CA to the cache data CD.
- the redirector (26d) of the processor bus bridge (26b) also sets the previously set CA to the effective address EA, and echoes the CD to the effective data ED.
- the control logic (26c) sets the data-only attribute DO and sends the update data transmission signal US.
- the framer (26i) of the data channel IF (26h) receiving the request queues the contents of the EA / ED and the data-only attribute in the transmission buffer.
- The control logic (26c) of the processor bus bridge (26b) stores the address following the transmitted address as the next transmission address. When the transmitted address reaches the last address of the shared memory space, the first address of the shared memory space is stored as the next transmission address. When the next transmission address matches the previously stored transmission start address, all data transmission ends.
- The data queued in the transmission buffer is assembled into packets as needed and passed to the SERDES, which modulates it into an electrical signal that can be carried on the data channel and sends out the data as transmission serial data TSD. The address iteration is sketched below.
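- The wraparound iteration of the all data transmission can be sketched as follows, with assumed helper names and an assumed memory size:

```c
#include <stdint.h>
#include <stdbool.h>

#define SHMEM_WORDS 4096u  /* assumed size of the shared memory space */

extern bool update_queue_empty(void);
extern uint32_t cache_read(uint32_t addr);
extern void send_data_only(uint32_t addr, uint32_t word);

/* All data transmission of FIG. 26: start anywhere, wrap past the last
 * address of the shared memory space, stop on returning to the start. */
static void all_data_transmit(uint32_t start) {
    uint32_t next = start;
    do {
        while (!update_queue_empty())
            ;                                  /* normal traffic first  */
        send_data_only(next, cache_read(next));
        next = (next + 1u) % SHMEM_WORDS;      /* wrap to first address */
    } while (next != start);
}
```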
- FIG. 27 is a diagram showing a signal flow when a processor added to the system performs all data reception in the cache fill operation according to the fourth mode of the embodiment of the present invention.
- When a processor read signal PR or a processor write signal PW is issued while all data is being received, the control logic (27c) suspends the request. Even during the all data reception operation, queuing to and clearing of the update queue are performed according to the flows shown in FIG. 16 and FIG. 17, respectively.
- The SERDES (27j) of the data channel IF (27h) demodulates the received serial data RSD and sends it to the framer (27i) as received parallel data RPD.
- The framer (27i) of the data channel IF (27h) receives RPD, extracts and expands the packets in the data, sets the update data address UA, the update data UD, and the data-only attribute DO, and transmits the update data reception signal UR.
- the control logic (27c) of the processor bus bridge (27b) receives UR and sets the redirector function control signal FC.
- the redirector (27d) echoes the UA to the cache address CA and the UD to the cache data CD accordingly. If other processing is being performed in the control logic (27c), the system waits temporarily and executes this processing as soon as it is completed.
- The control logic (27c) of the processor bus bridge (27b) transmits the cache write signal CW. Since the data-only attribute DO has been received, the queue clear signal QC is not transmitted.
- The shared memory cache (27n) receiving the cache write signal CW updates the desired data specified by CA with CD and, if the data was in the unavailable state before the update, transmits the unavailable signal NP.
- The control logic (27c) of the processor bus bridge (27b) counts the number of times the unavailable signal NP is received during the all data reception operation and, upon recognizing from the count that all areas of the shared memory cache have been filled with valid data, terminates the all data reception operation.
- FIG. 28 is a diagram showing a signal flow at the time of write access based on the fifth aspect of the embodiment of the present invention.
- the processor core (28a) sets PA, PD, and PT and transmits PW.
- the control logic (28c) of the processor bus bridge (28b) sets the redirector function control signal FC.
- The redirector (28d) echoes the processor address PA to the effective address EA and the cache address CA, and echoes the processor data PD to the effective data ED and the cache data CD.
- the control logic (28c) of the processor bus bridge (28b) transmits the update notification transmission signal NS.
- Since PA is in the specified address space, it also transmits the immediate update attribute transmission signal IS.
- the transmitting section (28f) of the update notification bus bridge (28e) receives NS and transmits NR.
- The transmission section (28f) of the update notification bus bridge (28e) receives NG and acquires the update notification bus.
- EA is assigned to the update notification address NA
- IS is assigned to the immediate update attribute signal NI
- ED is assigned to the immediate update data ND
- the update notification signal NV is transmitted to all processors.
- NA, ND, NV, and NI are also looped back to the monitoring unit (28g) of the update notification bus bridge of the own processor and received.
- When the monitoring unit (28g) of the update notification bus bridge (28e) receives NV and NI together, it echoes NA as SA and ND as SD, and echoes the pair as the immediate update signal SI within its own processor. The same operation is performed on the other processors.
- the control logic (28c) of the processor bus bridge (28b) sets the redirector function control signal FC.
- the redirector (28d) responds by echoing SA to CA and SD to CD. The same operation is performed on other processors. At this time, if the processor bus bridge (28b) is performing another process, this process is performed with the highest priority after the completion of that process.
- The control logic (28c) of the processor bus bridge (28b) transmits the cache write signal CW, and the shared memory cache (28n) receiving it updates the desired data specified by CA with CD. The same operation is performed on the other processors. A sketch of this immediate-update path follows.
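- As a sketch, the fifth-mode path piggybacks the small datum on the notification itself, so the update completes within the bus transaction; the struct and names are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Fifth-mode notification: for high-contention addresses, the 4-bit
 * immediate update data ND rides on the update notification bus itself. */
typedef struct {
    uint32_t na;  /* update notification address    */
    uint8_t  nd;  /* immediate update data (4 bits) */
    bool     ni;  /* immediate update attribute     */
} immediate_notice_t;

/* Every processor applies the update on seeing the notification; no
 * separate data-channel transfer and no update-queue entry is needed. */
static void on_notification(const immediate_notice_t *n, uint32_t *cache) {
    if (n->ni)
        cache[n->na] = n->nd;  /* update completes with the bus cycle */
}
```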
- The write access based on the sixth aspect of the embodiment of the present invention uses reserved data at the time of writing to a specific address, and its flow is substantially similar to the write access of the fifth aspect; the difference is that no data is carried on the bus, the pre-associated reserved data being used instead.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2003/006868 WO2004107180A1 (ja) | 2003-05-30 | 2003-05-30 | Multiprocessor system |
JP2005500234A JP3764893B2 (ja) | 2003-05-30 | 2003-05-30 | Multiprocessor system |
US11/285,184 US7320056B2 (en) | 2003-05-30 | 2005-11-23 | Multi-processor system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2003/006868 WO2004107180A1 (ja) | 2003-05-30 | 2003-05-30 | Multiprocessor system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/285,184 Continuation US7320056B2 (en) | 2003-05-30 | 2005-11-23 | Multi-processor system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004107180A1 true WO2004107180A1 (ja) | 2004-12-09 |
Family
ID=33485807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/006868 WO2004107180A1 (ja) | 2003-05-30 | 2003-05-30 | マルチプロセッサシステム |
Country Status (3)
Country | Link |
---|---|
US (1) | US7320056B2 (ja) |
JP (1) | JP3764893B2 (ja) |
WO (1) | WO2004107180A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100375067C (zh) * | 2005-10-28 | 2008-03-12 | National University of Defense Technology | Method for local-space shared storage in a heterogeneous multi-core microprocessor |
JP2008250373A (ja) * | 2007-03-29 | 2008-10-16 | Toshiba Corp | Multiprocessor system |
JP2016157462A (ja) * | 2011-10-26 | 2016-09-01 | Qualcomm Technologies, Inc. | Integrated circuit with cache coherency |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007179085A (ja) * | 2005-12-26 | 2007-07-12 | Fujitsu Ltd | Disk device, disk write data selection method, and disk write data selection program |
JP4818820B2 (ja) * | 2006-06-07 | 2011-11-16 | Renesas Electronics Corporation | Bus system, bus slave, and bus control method |
JP4410270B2 (ja) * | 2007-04-17 | 2010-02-03 | Toshiba Corporation | Bus control device |
JP2009080747A (ja) * | 2007-09-27 | 2009-04-16 | Panasonic Corp | Multiprocessor device and information processing device |
US8239879B2 (en) * | 2008-02-01 | 2012-08-07 | International Business Machines Corporation | Notification by task of completion of GSM operations at target node |
US8200910B2 (en) * | 2008-02-01 | 2012-06-12 | International Business Machines Corporation | Generating and issuing global shared memory operations via a send FIFO |
US8255913B2 (en) * | 2008-02-01 | 2012-08-28 | International Business Machines Corporation | Notification to task of completion of GSM operations by initiator node |
US8484307B2 (en) * | 2008-02-01 | 2013-07-09 | International Business Machines Corporation | Host fabric interface (HFI) to perform global shared memory (GSM) operations |
US8275947B2 (en) * | 2008-02-01 | 2012-09-25 | International Business Machines Corporation | Mechanism to prevent illegal access to task address space by unauthorized tasks |
US8214604B2 (en) * | 2008-02-01 | 2012-07-03 | International Business Machines Corporation | Mechanisms to order global shared memory operations |
US8146094B2 (en) * | 2008-02-01 | 2012-03-27 | International Business Machines Corporation | Guaranteeing delivery of multi-packet GSM messages |
US20090257263A1 (en) * | 2008-04-15 | 2009-10-15 | Vns Portfolio Llc | Method and Apparatus for Computer Memory |
US9471532B2 (en) * | 2011-02-11 | 2016-10-18 | Microsoft Technology Licensing, Llc | Remote core operations in a multi-core computer |
US9448954B2 (en) * | 2011-02-28 | 2016-09-20 | Dsp Group Ltd. | Method and an apparatus for coherency control |
WO2012144012A1 (ja) * | 2011-04-18 | 2012-10-26 | Fujitsu Limited | Thread processing method and thread processing system |
JP5936152B2 (ja) | 2014-05-17 | 2016-06-15 | International Business Machines Corporation | Memory access trace method |
CN109491587B (zh) * | 2017-09-11 | 2021-03-23 | Huawei Technologies Co., Ltd. | Data access method and device |
CN112100093B (zh) * | 2020-08-18 | 2023-11-21 | Hygon Information Technology Co., Ltd. | Method for maintaining data consistency of multiprocessor shared memory, and multiprocessor system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS56163572A (en) * | 1980-05-19 | 1981-12-16 | Hitachi Ltd | Data processing system |
EP0510821A1 (en) * | 1991-04-22 | 1992-10-28 | International Business Machines Corporation | Multiprocessor cache system |
EP0608663A1 (en) * | 1993-01-25 | 1994-08-03 | BULL HN INFORMATION SYSTEMS ITALIA S.p.A. | A multi-processor system with shared memory |
EP0669578A2 (en) * | 1994-02-24 | 1995-08-30 | Hewlett-Packard Company | Improved ordered cache-coherency scheme |
US5564034A (en) * | 1992-09-24 | 1996-10-08 | Matsushita Electric Industrial Co., Ltd. | Cache memory with a write buffer indicating way selection |
US6484220B1 (en) * | 1999-08-26 | 2002-11-19 | International Business Machines Corporation | Transfer of data between processors in a multi-processor system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6112287A (en) * | 1993-03-01 | 2000-08-29 | Busless Computers Sarl | Shared memory multiprocessor system using a set of serial links as processors-memory switch |
JP2628079B2 (ja) * | 1988-11-25 | 1997-07-09 | Mitsubishi Electric Corporation | Direct memory access control device in a multiprocessor system |
JP3100807B2 (ja) * | 1992-09-24 | 2000-10-23 | Matsushita Electric Industrial Co., Ltd. | Cache memory device |
US6182176B1 (en) * | 1994-02-24 | 2001-01-30 | Hewlett-Packard Company | Queue-based predictive flow control mechanism |
US5754865A (en) * | 1995-12-18 | 1998-05-19 | International Business Machines Corporation | Logical address bus architecture for multiple processor systems |
-
2003
- 2003-05-30 WO PCT/JP2003/006868 patent/WO2004107180A1/ja active Application Filing
- 2003-05-30 JP JP2005500234A patent/JP3764893B2/ja not_active Expired - Fee Related
-
2005
- 2005-11-23 US US11/285,184 patent/US7320056B2/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS56163572A (en) * | 1980-05-19 | 1981-12-16 | Hitachi Ltd | Data processing system |
EP0510821A1 (en) * | 1991-04-22 | 1992-10-28 | International Business Machines Corporation | Multiprocessor cache system |
US5564034A (en) * | 1992-09-24 | 1996-10-08 | Matsushita Electric Industrial Co., Ltd. | Cache memory with a write buffer indicating way selection |
EP0608663A1 (en) * | 1993-01-25 | 1994-08-03 | BULL HN INFORMATION SYSTEMS ITALIA S.p.A. | A multi-processor system with shared memory |
EP0669578A2 (en) * | 1994-02-24 | 1995-08-30 | Hewlett-Packard Company | Improved ordered cache-coherency scheme |
US6484220B1 (en) * | 1999-08-26 | 2002-11-19 | International Business Machines Corporation | Transfer of data between processors in a multi-processor system |
Non-Patent Citations (2)
Title |
---|
ARCHIBALD, J.; BAER, J.-L.: "Cache coherence protocols: Evaluation using a multiprocessor simulation model"; ACM Transactions on Computer Systems, November 1986, Vol. 4, No. 4, pages 273-298 *
UCHIBA, M., et al.: "Kyotsu memory-hoshiki no multi processor system ni okeru seino kojo shisaku" [Measures for improving performance in shared-memory multiprocessor systems], Proceedings of the 2003 IEICE General Conference, Communications 2, 19 March 2003, page 18 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100375067C (zh) * | 2005-10-28 | 2008-03-12 | National University of Defense Technology | Method for local-space shared storage in a heterogeneous multi-core microprocessor |
US8380933B2 (en) | 2007-03-20 | 2013-02-19 | Kabushiki Kaisha Toshiba | Multiprocessor system including processor cores and a shared memory |
JP2008250373A (ja) * | 2007-03-29 | 2008-10-16 | Toshiba Corp | Multiprocessor system |
JP2016157462A (ja) * | 2011-10-26 | 2016-09-01 | Qualcomm Technologies, Inc. | Integrated circuit with cache coherency |
Also Published As
Publication number | Publication date |
---|---|
US20060075197A1 (en) | 2006-04-06 |
JPWO2004107180A1 (ja) | 2006-07-20 |
JP3764893B2 (ja) | 2006-04-12 |
US7320056B2 (en) | 2008-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004107180A1 (ja) | Multiprocessor system | |
JP2654369B2 (ja) | Adapter device provided between a processor channel and a switching mechanism | |
US6425021B1 (en) | System for transferring data packets of different context utilizing single interface and concurrently processing data packets of different contexts | |
US6145016A (en) | System for transferring frame data by transferring the descriptor index data to identify a specified amount of data to be transferred stored in the host computer | |
US6950438B1 (en) | System and method for implementing a separate virtual channel for posted requests in a multiprocessor computer system | |
US5740467A (en) | Apparatus and method for controlling interrupts to a host during data transfer between the host and an adapter | |
CN102255794B (zh) | System and method for remote messaging throughput optimization and latency reduction | |
US7613197B2 (en) | Multi-processor system and message transferring method in the same | |
US20020129173A1 (en) | Communications system and method with non-blocking shared interface | |
US20100064082A1 (en) | Communication module | |
US6938094B1 (en) | Virtual channels and corresponding buffer allocations for deadlock-free computer system operation | |
US20050132089A1 (en) | Directly connected low latency network and interface | |
US6888843B2 (en) | Response virtual channel for handling all responses | |
WO2005015428A1 (en) | System and method for a distributed shared memory | |
JPH01147647A (ja) | Data processing device |
JPH04229350A (ja) | Method and apparatus for implementing a media access control / host system interface |
EP1276045A2 (en) | Cluster system, computer and program | |
KR20140084155A (ko) | Multi-core interconnect in a network processor |
WO2015084506A1 (en) | System and method for managing and supporting virtual host bus adaptor (vhba) over infiniband (ib) and for supporting efficient buffer usage with a single external memory interface | |
KR20050056934A (ko) | Method and apparatus for read initiation optimization in a memory interconnect |
EP0789302B1 (en) | Communication network end station and adaptor card | |
JP2591502B2 (ja) | Information processing system and bus arbitration scheme therefor |
JP4104939B2 (ja) | Multiprocessor system |
KR100766666B1 (ko) | Multiprocessor system |
JP2007102447A (ja) | Arithmetic processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP KR US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005500234 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057010950 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057010950 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11285184 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 11285184 Country of ref document: US |