CN1371061A - Multibus pipe-line data processing system and its bus efficiency raising method - Google Patents


Info

Publication number
CN1371061A
CN1371061A (application CN 01103838, also published as CN01103838A, CN1170233C)
Authority
CN
China
Prior art keywords
bus
data
processor
request
write
Prior art date
Legal status
Granted
Application number
CN 01103838
Other languages
Chinese (zh)
Other versions
CN1170233C (en)
Inventor
张志宇
陈灿辉
Current Assignee
Silicon Integrated Systems Corp
Original Assignee
Silicon Integrated Systems Corp
Priority date
Filing date
Publication date
Application filed by Silicon Integrated Systems Corp filed Critical Silicon Integrated Systems Corp
Priority to CNB011038381A priority Critical patent/CN1170233C/en
Publication of CN1371061A publication Critical patent/CN1371061A/en
Application granted
Publication of CN1170233C publication Critical patent/CN1170233C/en
Anticipated expiration
Expired - Fee Related

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The multibus data processing system includes a path connecting a first bus bridge, which links the I/O bus to the processor bus, and a second bus bridge, which links the system memory to the processor bus. The path allows an I/O device on the I/O bus to access the system memory without passing through the processor bus, raising bus utilization and thus system performance.

Description

Multibus pipelined data processing system and method for raising its bus efficiency
The present invention relates to an apparatus and method for improving bus performance, and in particular to an apparatus and method for shortening the time an I/O device occupies a pipeline bus for a transaction.
In a pipelined bus system, each pending transaction usually comprises several execution phases, for example an arbitration phase, a request phase, a snooping phase, a response phase, and a data phase. Dependencies may exist between phases: the next phase cannot begin until the current phase completes, or a given phase of the next transaction cannot begin until the same phase of the current transaction has finished.
Each phase is handled by at least one agent, and the duration of each phase is determined by the time the responsible agent takes to complete it. To use the bus more efficiently and speed up the completion of every transaction, the system can, besides minimizing the time each phase requires, also shorten the waiting time of the same phase in the next transaction; system performance is thereby greatly improved.
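As a rough illustration (mine, not part of the patent), the phase-dependency rule above can be modeled in a few lines of Python; the five phase names and the per-phase durations are invented for the example.

```python
def schedule(phase_durations):
    """phase_durations[t][i] = clocks the agent needs for phase i of transaction t.
    Returns finish[t][i], the clock at which each phase completes, under the two
    dependencies described in the text: a phase waits for the previous phase of
    its own transaction AND for the same phase of the previous transaction."""
    finish = []
    for t, durations in enumerate(phase_durations):
        row = []
        for i, d in enumerate(durations):
            prev_phase = row[i - 1] if i > 0 else 0        # same transaction
            prev_txn = finish[t - 1][i] if t > 0 else 0    # previous transaction
            row.append(max(prev_phase, prev_txn) + d)
        finish.append(row)
    return finish

# Two transactions, five phases (arbitration, request, snoop, response, data).
done = schedule([[1, 1, 2, 1, 4],
                 [1, 1, 2, 1, 4]])
```

Shortening one transaction's data phase lets the same phase of the next transaction start earlier, which is exactly the lever the rest of the document pulls.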
A system usually has multiple buses, and each bus may connect several devices. Devices often exchange data, and the two ends of an exchange are not necessarily on the same bus. When a device needs data, regardless of whether the destination device is on the same bus, the requesting end initiates a transaction on the bus it is connected to; processing that transaction puts the bus in an occupied state.
Some devices have a cache for storing the contents of other devices' memory, so that the device can obtain those contents faster. However, the system must maintain data coherency: any access to a target device must be made known to devices whose caches may contain the target device's data, because that data may have been modified in another cache. Accessing a cache line that has been modified causes an implicit write back: the device whose cache was changed must supply the updated data to maintain coherency.
A read or write transaction includes a data phase in which the data transfer takes place, and the duration of the data phase depends on the amount of data transferred. A bus usually has multiple agents, each of which needs the bus to transfer data. Because the bus is shared, agents cannot transfer data simultaneously, so the data phase of the next transaction must wait until the data phase of the current transaction finishes.
A highly pipelined bus is shared by many agents and may carry a large amount of data, so bus bandwidth plays a very important role in system performance. Bus bandwidth is determined by factors such as operating frequency and data width, but these cannot be raised without limit; improving bus efficiency is therefore the best way to raise system performance. Increasing bus utilization, pipelining transactions, and reducing the data transferred on the bus are all ways to improve bus efficiency. On a pipelined bus, reducing the data transferred is the only method left; yet the total amount of data the system must transfer cannot be reduced, so the prerequisite for reducing transfers on one bus is the existence of another pipeline over which the data can travel.
Fig. 1 shows a conventional system architecture comprising a processor 101, a processor bus 102, an I/O bus 103, and a system memory 104. An I/O and processor bus bridge 105, hereafter bus bridge one, connects the I/O bus 103 and the processor bus 102; a system memory and processor bus bridge 106, hereafter bus bridge two, provides a pipeline for data transfers between the system memory 104 and the processor bus 102. The processor 101 usually includes a cache. In many applications, devices on the I/O bus 103 frequently access the system memory 104, and the processor 101 must be aware of any request from the I/O bus 103 to the system memory 104 in order to maintain data coherency.
When a device on the I/O bus 103 accesses data in the system memory 104, the latest copy may reside either in the system memory 104 or in the processor's cache; a request must therefore be initiated on the processor bus 102 so that the processor 101 snoops the corresponding cache. Bus bridge one 105 is responsible for forwarding the request from the I/O bus 103 and issuing the transaction on the processor bus 102. Upon seeing the request from the I/O bus 103, the processor 101 snoops whether the cache line holding the requested data has been modified, and then notifies bus bridge one 105 and bus bridge two 106 of the snoop result. According to that result, bus bridge two 106 transfers data between the processor bus 102 and the system memory 104, and bus bridge one 105 transfers data between the processor bus 102 and the I/O bus 103.
A request from the I/O bus 103 may be a data read request or a data write request. The read flow, shown in Fig. 2, comprises the following steps:
1. A device on the I/O bus 103 issues a data read request.
2. The request is forwarded to the processor bus 102 via bus bridge one 105.
3. The processor 101 snoops whether the cache line holding the requested data has been modified.
4. If the cache line has been modified, an implicit write back occurs: the processor 101 drives the write-back data onto the processor bus 102. If it has not, bus bridge two 106 delivers the data from the system memory 104 to the processor bus 102.
5. Bus bridge one 105 obtains the data from the processor bus 102 and delivers it to the I/O bus 103.
6. If the cache line holding the requested data was modified, bus bridge two 106 must also write the new data supplied by the processor 101 back to the system memory 104.
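The six steps above can be sketched as a small routing function. This is my own minimal model, not the patent's implementation: byte lists stand in for bus transfers, and the function names are invented.

```python
def conventional_read(cache_modified, cache_line, memory_line):
    """Conventional read flow of Fig. 2.
    Returns (data delivered to the I/O bus, data written back to memory or None)."""
    if cache_modified:
        # Steps 4 and 6: implicit write back; the modified line goes both to
        # the I/O device and back to system memory via bus bridge two.
        return cache_line, cache_line
    # Step 4, clean case: bus bridge two supplies the line from system memory.
    return memory_line, None

io_data, writeback = conventional_read(True, [7, 7], [0, 0])
```

Either way the data crosses the processor bus, which is the inefficiency the invention later removes.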
Fig. 3 shows an example in which an I/O device issues a data read request and the processor 101 then issues another; the diagram follows the P6 external bus specification. The I/O device's read request begins at the third clock (BPRI# and ADS# both high), and the processor 101's read request begins at the sixth clock (BPRI# high, ADS# low). The snoop result of the first request (HITM# low at the seventh clock) shows that it does not read modified cached data, so bus bridge two 106 responds to the first request and begins transferring data at the twelfth clock (DRDY# high). The response to the second request begins transferring data at the sixteenth clock.
Fig. 4 is similar to Fig. 3, except that the I/O device's request reads data that has been modified (the snoop result HITM# is high at the seventh clock), and an implicit write back is executed at the eleventh clock. The response to the processor 101's request begins transferring data at the sixteenth clock.
The write flow, shown in Fig. 5, comprises the following steps:
1. A device on the I/O bus 103 issues a data write request.
2. The write request is forwarded to the processor bus 102 via bus bridge one 105.
3. Bus bridge one 105 drives the data sent from the I/O bus 103 onto the processor bus 102.
4. Bus bridge two 106 receives the data and waits for the snoop result from the processor 101.
5. If the cache line for the requested data has been modified, the processor 101 writes back the modified data, and bus bridge two 106 combines the write-back data with the data previously received before writing the result to the system memory 104. If the cache line has not been modified, bus bridge two 106 writes the received data directly to the system memory 104.
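Step 5 can likewise be sketched. Merging by per-byte enables is my reading of "combines the write-back data with the data previously received"; the names and byte-list representation are invented for illustration.

```python
def conventional_write(write_bytes, byte_enables, cache_modified, writeback_line):
    """Conventional write flow of Fig. 5, step 5.
    Returns the line finally stored in system memory."""
    if not cache_modified:
        return write_bytes          # clean case: store the data as received
    # Modified case: keep the I/O device's bytes where enabled, and the
    # processor's written-back bytes everywhere else.
    return [w if en else wb
            for w, en, wb in zip(write_bytes, byte_enables, writeback_line)]

stored = conventional_write([9, 0], [True, False], True, [1, 2])
```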
Fig. 6 shows an example in which an I/O device issues a data write request and the processor 101 then issues another. The I/O device's write request begins at the third clock (BPRI# and ADS# both high), and the processor 101's write request begins at the sixth clock (BPRI# low, ADS# high). The snoop result of the first request shows that it does not touch modified cached data, so the response to the first request begins transferring data at the eighth clock (DRDY# high), and the response to the second request begins transferring data at the thirteenth clock (DRDY# high).
Fig. 7 is similar to Fig. 6, except that the I/O device's request writes to data that has been modified (the snoop result HITM# is high at the seventh clock). The data of the first request begins transferring at the eleventh clock, and the write back caused by the snoop result begins at the fourteenth clock. Bus bridge two 106 combines the two and delivers the up-to-date result to the system memory 104. The response to the second request begins transferring data at the nineteenth clock.
A read or write transaction issued by bus bridge one 105 causes a data transfer and occupies the data bus while the transfer lasts; therefore, until the data phase of the current transaction is done, the data phase of the next transaction cannot begin. If the next transaction is issued by the processor 101, the processor must wait for some time before obtaining its data, and processor performance drops. If the bus is highly pipelined and very busy, the accumulated transaction delays severely affect system performance; this is especially pronounced when devices on the I/O bus 103 heavily access the system memory 104. The best way to improve processor performance is therefore to reduce the data phase time of transactions issued by devices on the I/O bus 103; however, any new method of reducing transfers on the processor bus 102 must still satisfy data coherency.
The present invention proposes an apparatus and method that minimize the processor bus bandwidth occupied by I/O devices, so as to reduce the time required by the data phase of transactions issued by devices on the I/O bus. A primary object of the invention is to provide a direct path between the two bus bridges, over which data can be transferred directly, thereby reducing the data transferred on the bus. Another object is to provide a method of raising bus efficiency under an architecture that accesses system memory via this direct path. The invention further provides a system memory access method that maintains data coherency.
The multibus pipelined data processing system of the invention comprises at least one processor on a processor bus, an I/O device on an I/O bus, a system memory, a bus bridge (bus bridge one) connecting the processor bus and the I/O bus, a bus bridge (bus bridge two) connecting the processor bus and the system memory, and a path connecting the two bus bridges.
When an I/O device on the I/O bus issues a request to read system memory data, bus bridge one issues a null data read request onto the processor bus so that the processor performs a snoop; the request is also forwarded to bus bridge two via the path between the bridges. If the snoop result shows the request does not read modified cached data, bus bridge two delivers the data in memory directly to bus bridge one via the path, and the request completes once the data reaches the I/O device. If the snoop result shows the request reads modified cached data, the processor drives the modified data onto the processor bus; it is written back to system memory via bus bridge two, and bus bridge one also obtains the data from the processor bus and delivers it to the I/O device.
When an I/O device on the I/O bus issues a request to write data to system memory, bus bridge one likewise issues a null data read request onto the processor bus so that the processor performs a snoop; the request is also forwarded to bus bridge two via the path, and bus bridge one then writes the data to bus bridge two over the direct path. If the snoop result shows the request does not write to modified cached data, bus bridge two writes the data directly to system memory; if it does, the processor drives the modified data onto the processor bus, and bus bridge two combines the write-back data with the data previously received, writing the combined result to system memory.
Fig. 1 is a block diagram of a conventional system;
Fig. 2 is a flowchart of handling a read request from a device on the I/O bus in a conventional system;
Fig. 3 is a timing diagram of two consecutive read requests. The first is issued by a device on the I/O bus, the second by a P6 processor. The first does not read data modified in the processor's cache. The diagram follows the P6 bus protocol of a conventional system.
Fig. 4 is a timing diagram of two consecutive read requests. The first is issued by a device on the I/O bus, the second by a P6 processor. The first reads data modified in the processor's cache. The diagram follows the P6 bus protocol of a conventional system.
Fig. 5 is a flowchart of handling a write request from a device on the I/O bus in a conventional system;
Fig. 6 is a timing diagram of two consecutive write requests. The first is issued by a device on the I/O bus, the second by a P6 processor. The first does not write to data modified in the processor's cache. The diagram follows the P6 bus protocol of a conventional system.
Fig. 7 is a timing diagram of two consecutive write requests. The first is issued by a device on the I/O bus, the second by a P6 processor. The first writes to data modified in the processor's cache. The diagram follows the P6 bus protocol of a conventional system.
Fig. 8 is a block diagram of the system of the invention, including the direct path over which the two bus bridges can exchange data.
Fig. 9 is a flowchart of handling a read request from a device on the I/O bus according to the invention;
Fig. 10 is a timing diagram of two consecutive read requests according to the invention. The first is a null data read request issued on behalf of an I/O device, the second is issued by the processor. The first does not read data modified in the processor's cache.
Fig. 11 is a flowchart of handling a write request from a device on the I/O bus according to the invention;
Fig. 12 is a timing diagram of two consecutive requests according to the invention. The first is a null "read and invalidate" request issued on behalf of an I/O device, the second is issued by the processor. The first does not read data modified in the processor's cache.
Fig. 13 is a timing diagram of two consecutive requests according to the invention. The first is a null "read and invalidate" request issued on behalf of an I/O device, the second is issued by the processor. The first reads data modified in the processor's cache.
These and other objects and advantages of the present invention will become apparent from the following detailed description taken together with the accompanying drawings, the embodiments, and the scope of the appended claims.
Referring to Fig. 8, the system architecture of the invention comprises a processor 101, a processor bus 102, an I/O bus 103, and a system memory 104; an I/O/processor bus bridge (bus bridge one) 105 connects the I/O bus 103 and the processor bus 102, and a system memory/processor bus bridge (bus bridge two) 106 provides a pipeline for data transfers between the system memory 104 and the processor bus 102. The processor 101 usually includes a cache. With a path built between bus bridge one 105 and bus bridge two 106, data need not be transferred over the processor bus 102, providing a faster data transfer pipeline. However, requests to access the system memory 104 must still be made known to the processor 101, because the processor's cache may contain the requested data, and that content may have been updated by the processor; to maintain data coherency, snooping on the processor bus 102 remains necessary. The new usage of the processor bus 102 under the architecture of Fig. 8 is described below.
Whenever any device on the I/O bus issues a request to transfer data to or from the system memory, bus bridge one 105 must complete two actions:
1. Notify bus bridge two 106 of the data transfer request.
2. Generate a null data read transaction (with the enable bits of all bytes set to 0) on the processor bus 102.
The two actions are independent and can therefore be carried out simultaneously. The purpose of action 1 is to inform bus bridge two 106 that data will enter or leave the system memory 104, so the access type (read or write), the data length, and the data address are all necessary information in action 1. The purpose of the null read transaction in action 2 is to let the processor snoop the data the transaction would read, so only the data address is needed. The new flow for a data read request from a device on the I/O bus 103 is shown in Fig. 9. Completing a read request from the I/O bus 103 comprises the following steps:
1. A data read request is issued by a device on the I/O bus 103.
2. Bus bridge one 105 issues a null data read request onto the processor bus 102 and forwards the request from the I/O bus 103 directly to bus bridge two 106. Here a null data read request means a data read request whose transfer length is zero.
3. If the request reads data modified in the processor's cache, an implicit write back occurs and the processor 101 drives the write-back data onto the processor bus 102; if it does not, the data bus is immediately available to the next transaction.
4. If modified data was read, bus bridge one 105 obtains the data from the processor bus 102; otherwise it obtains the data from bus bridge two 106.
5. Bus bridge one 105 transfers the data to the I/O bus 103.
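The two concurrent actions of bus bridge one and the routing decision in steps 3-4 can be sketched as follows. This is an illustrative model of mine, not the patent's circuit: the dictionary fields and function names are invented.

```python
def bridge_one_actions(addr, length):
    """Step 2: the notification sent over the direct path and the null
    (zero-length, snoop-only) read issued on the processor bus."""
    notify = {"op": "read", "addr": addr, "len": length}   # via the direct path
    null_read = {"op": "read", "addr": addr, "len": 0}     # on the processor bus
    return notify, null_read

def read_data_source(cache_modified):
    """Steps 3-4: where bus bridge one picks the data up."""
    return "processor_bus" if cache_modified else "direct_path"

notify, null_read = bridge_one_actions(0x1000, 4)
```

The key point the sketch captures: in the clean case the processor bus sees only the zero-length snoop, never the data itself.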
Applying the technique of the invention to Fig. 3 produces different signals, shown in Fig. 10. The clocks at which the I/O device and the processor 101 issue their requests in Fig. 10 are identical to Fig. 3. The time the first request occupied the processor bus 102 for its data transfer (the twelfth to fifteenth clocks in Fig. 3) is removed in Fig. 10, because the data no longer travels over the processor bus 102. Thus, if a read request issued by a device on the I/O bus 103 needs data that was not modified in the cache of the processor 101, the processor bus 102 need not wait in the data phase for the transfer to finish, since the data goes directly through the path between bus bridge one 105 and bus bridge two 106. The data transfer of the second request can therefore begin at the twelfth clock, four clocks earlier than in Fig. 3.
If the first request issued by bus bridge one 105 reads data modified in the cache of the processor 101, as in the example of Fig. 4, the processor bus 102 behaves the same under the method of the invention as in Fig. 4. The processor 101 drives the write-back data onto the processor bus 102, and bus bridge one 105 transfers the data to the device on the I/O bus 103. The time the data phase occupies the processor bus 102 is the same as in Fig. 4, and so is the latency of the read request from the I/O bus. Therefore, with the invention, whether or not the requested data was modified in the cache, the latency of a data read request from the I/O bus is never longer than with the original method, so the invention causes no side effect on the I/O bus; and when the requested data is unmodified, the data phase of the next transaction can begin earlier, so processor performance improves and with it overall system performance.
The new flow for a data write request from a device on the I/O bus 103 is shown in Fig. 11; completing the request comprises the following steps:
1. A data write request is issued by a device on the I/O bus 103.
2. Bus bridge one 105 delivers the write request and its data directly to bus bridge two 106, without going through the processor bus 102.
3. At the same time, bus bridge one 105 issues a null data read request onto the processor bus 102. Here the null read again refers to a zero-length transfer, but the null request on the processor bus 102 is a "read and invalidate" request rather than a genuine memory read. The "read and invalidate" request invalidates the corresponding cache line so as to maintain data coherency.
4. If the request accesses data modified in a cache, the processor 101 drives the write-back data onto the processor bus 102; if not, no data is transferred on the processor bus 102.
5. Bus bridge two 106 transfers the data to the system memory 104.
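The write flow above can be sketched in the same style; again this is my own minimal model under the assumptions stated in the steps, with invented names.

```python
def new_write_flow(addr, cache_modified):
    """New write flow of Fig. 11. Returns the null request issued on the
    processor bus (step 3) and what bus bridge two must do with the data."""
    null_req = {"op": "read_invalidate", "addr": addr, "len": 0}
    if cache_modified:
        # Step 4: the processor writes back, so bridge two must merge the
        # write-back line with the I/O data before storing (see Fig. 13).
        return null_req, "merge_then_store"
    # Step 5, clean case: the I/O data arrived over the direct path and can
    # be stored without the processor bus ever carrying it.
    return null_req, "store_directly"

req, action = new_write_flow(0x2000, cache_modified=False)
```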
Applying the technique of the invention to Fig. 6 produces different signals, shown in Fig. 12. In Fig. 6 the first request occupies the processor bus 102 for its data transfer from the eighth to the eleventh clock; since the first request does not write to modified cached data, the data now goes directly through the path between bus bridge one 105 and bus bridge two 106, and the processor bus 102 no longer carries it. The processor bus 102 is released after the tenth clock, so the data phase of the next transaction can begin at the twelfth clock, one clock earlier than in Fig. 6, and system performance improves accordingly.
Applying the technique of the invention to Fig. 7, where the first request writes to modified cached data, produces the different signals shown in Fig. 13. The data of the first request is transferred over the path between bus bridge one 105 and bus bridge two 106, while the write-back data caused by the snoop result is transferred from the processor 101 to bus bridge two 106. If the write request overlaps only part of the cache line (a partial line transfer), the data to be written must be merged with the write-back data caused by the snoop result before being written into the system memory; if the write request covers the whole cache line (a full line transfer), the write-back data caused by the snoop result is ignored and only the data to be written goes into the system memory.
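The partial-line versus full-line rule just described can be sketched directly; merging by per-byte enables is my interpretation of the partial case, with invented names and byte-list data.

```python
def merge_for_store(write_bytes, byte_enables, writeback_line, full_line):
    """Bus bridge two's merge rule for Fig. 13.
    A full line transfer ignores the snooped write-back entirely; a partial
    line transfer keeps write-back bytes wherever the I/O write supplied none."""
    if full_line:
        return write_bytes
    return [w if en else wb
            for w, en, wb in zip(write_bytes, byte_enables, writeback_line)]

line = merge_for_store([5, 0], [True, False], [8, 9], full_line=False)
```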
In the invention, bus bridge two 106 keeps the most recent data arriving from bus bridge one 105 and from the processor 101. Because the data of the first request no longer occupies the processor bus 102, the situation of Fig. 7, where the write-back transfer must wait for the first request's data transfer to finish, no longer occurs: the write-back transfer can begin at the eleventh clock, two clocks earlier than in Fig. 7, shortening the time the data phase requires. The I/O bus 103 behaves identically in Fig. 7 and Fig. 13, but the processor bus 102 becomes more efficient, so overall system performance also improves.
The processor bus 102 transfer time saved by the invention is listed in the following table. The figures are based on the P6 external bus specification; !HITM# denotes a request that does not access a modified cache line, and HITM# denotes a request that does.
Request and length    !HITM#            HITM#
Read 4QW              saves 4 clocks    saves 0 clocks
Read 2QW              saves 2 clocks    saves 0 clocks
Read 1QW              saves 1 clock     saves 0 clocks
Write 4QW             saves 4 clocks    saves 4 clocks
Write 2QW             saves 2 clocks    saves 2 clocks
Write 1QW             saves 1 clock     saves 1 clock
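The table collapses to a small rule of thumb (my paraphrase of the figures above): reads that hit a modified line save nothing, and every other case saves one clock per quadword of the data phase.

```python
def clocks_saved(op, qwords, hitm):
    """Processor bus clocks saved per the table: op is "read" or "write",
    qwords the transfer length, hitm True when a modified line was hit."""
    if op == "read" and hitm:
        return 0          # modified line: the write back still uses the bus
    return qwords         # otherwise the data phase (1 clock/QW) is removed

saved = clocks_saved("read", 4, hitm=False)
```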
The above is merely a preferred embodiment of the present invention and does not limit the scope of its practice. All equivalent changes and modifications made according to the appended claims shall fall within the scope covered by this patent.

Claims (6)

1. A multibus pipelined data processing system, the system comprising:
a processor bus;
at least one processor connected to the processor bus;
an I/O bus;
a bus bridge one linking the processor bus and the I/O bus;
a system memory;
a bus bridge two linking the processor bus and the system memory;
characterized by further comprising:
a path linking the bus bridge one and the bus bridge two.
2. A method of raising bus efficiency in a multibus pipelined data processing system, characterized in that the system comprises at least one processor on a processor bus, a first bus bridge linking an I/O bus and the processor bus, a second bus bridge linking a system memory and the processor bus, and a path linking the first bus bridge and the second bus bridge; the method comprising the steps of:
(a) issuing, by a device on the I/O bus via the first bus bridge, a data read request for the system memory;
(b) temporarily holding the data read request in the first bus bridge;
(c) notifying the second bus bridge of the data read request via the path while issuing a null data read request onto the processor bus;
(d) performing a data snoop on the processor bus while the second bus bridge fetches the requested data from the system memory;
(e) transferring the data in the second bus bridge to the first bus bridge via the path and, if the snoop result of step (d) shows no data modified in a cache was read, completing the data read request; otherwise proceeding to step (f);
(f) delivering the write-back data caused by the snoop result from the processor onto the processor bus;
(g) transferring the write-back data caused by the snoop result from the processor bus to the first bus bridge; and
(h) writing the write-back data caused by the snoop result back to the system memory from the processor bus via the second bus bridge.
3. The method of raising bus efficiency in a multibus pipelined data processing system as claimed in claim 2, characterized in that the null data read request is a memory read command whose read length is zero.
4. A method for promoting bus efficiency in a multibus pipelined data processing system, characterized in that: the multibus pipelined data processing system comprises at least one processor on a processor bus, a first bus bridge linking an input/output bus and the processor bus, a second bus bridge linking a system memory and the processor bus, and a path linking the first bus bridge and the second bus bridge; the method comprising the following steps:
(a) issuing a data write demand for writing the system memory, sent by a device on the input/output bus via the first bus bridge;
(b) temporarily storing the data write demand and the data to be written in the first bus bridge;
(c) notifying the second bus bridge of the data write demand via the path, while simultaneously issuing a fake data read demand onto the processor bus;
(d) performing a data snoop on the processor bus while transferring the temporarily stored data to be written from the first bus bridge to the second bus bridge;
(e) delivering the data to be written to the system memory via the second bus bridge; if the snoop result of step (d) shows the written data were not modified in a cache, the data write demand is complete; otherwise, proceeding to step (f);
(f) the processor delivering the write-back data caused by the snoop result onto the processor bus;
(g) combining, on the processor bus, the write-back data caused by the snoop result with the data to be written; and
(h) writing the combined data back to the system memory via the second bus bridge.
5. The method for promoting bus efficiency in a multibus pipelined data processing system as described in claim 4, characterized in that, in step (g): if the write demand overlaps only part of the cached data, the data to be written must be merged with the write-back data caused by the snoop result before being written into the system memory; if the write demand completely overlaps the cached data, the write-back data caused by the snoop result are ignored, and only the data to be written are written into the system memory.
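The merge rule of claim 5 can be expressed as a small byte-level function. The names and the byte-mask representation are assumptions for illustration; the claim itself only distinguishes partial from full overlap:

```python
# Hypothetical sketch of the claim-5 merge rule (illustrative names).
def merge_write(write_bytes, write_mask, writeback_line):
    """Combine an I/O write with a snooped write-back cache line.

    write_mask[i] is True where the I/O write supplies byte i. If the mask
    covers the whole line (full overlap), the write-back is ignored entirely;
    otherwise (partial overlap) the write-back fills the uncovered bytes.
    """
    if all(write_mask):
        return bytes(write_bytes)          # full overlap: ignore write-back
    return bytes(w if m else b             # partial overlap: merge byte-wise
                 for w, m, b in zip(write_bytes, write_mask, writeback_line))

print(merge_write(b"AB", [True, True], b"xy"))    # prints b'AB'
print(merge_write(b"A?", [True, False], b"xy"))   # prints b'Ay'
```

The design point is that the I/O write is newer than the cache line, so its bytes always win; the write-back only supplies the bytes the write does not touch.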
6. The method for promoting bus efficiency in a multibus pipelined data processing system as described in claim 4, characterized in that the fake data read demand is a "read and invalidate" instruction, which invalidates the corresponding cache line so as to maintain data consistency.
CNB011038381A 2001-02-22 2001-02-22 Multibus pipe-line data processing system and its bus efficiency raising method Expired - Fee Related CN1170233C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB011038381A CN1170233C (en) 2001-02-22 2001-02-22 Multibus pipe-line data processing system and its bus efficiency raising method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB011038381A CN1170233C (en) 2001-02-22 2001-02-22 Multibus pipe-line data processing system and its bus efficiency raising method

Publications (2)

Publication Number Publication Date
CN1371061A true CN1371061A (en) 2002-09-25
CN1170233C CN1170233C (en) 2004-10-06

Family

ID=4653505

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB011038381A Expired - Fee Related CN1170233C (en) 2001-02-22 2001-02-22 Multibus pipe-line data processing system and its bus efficiency raising method

Country Status (1)

Country Link
CN (1) CN1170233C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004709B (en) * 2009-08-31 2013-09-25 国际商业机器公司 Bus bridge between processor local bus (PLB) and advanced extensible interface (AXI) and mapping method


Also Published As

Publication number Publication date
CN1170233C (en) 2004-10-06

Similar Documents

Publication Publication Date Title
CN1288544A (en) Data transferring in source-synchronous and common clock protocols
US5850530A (en) Method and apparatus for improving bus efficiency by enabling arbitration based upon availability of completion data
CN1069426C (en) System direct memory access (DMA) support logic for PCI based computer system
US5664151A (en) System and method of implementing read resources to maintain cache coherency in a multiprocessor environment permitting split transactions
US5764929A (en) Method and apparatus for improving bus bandwidth by reducing redundant access attempts
US20020144027A1 (en) Multi-use data access descriptor
US6330630B1 (en) Computer system having improved data transfer across a bus bridge
CN1760847A (en) Bus bridge and data transmission method
CN1306417C (en) Cache memory eviction policy for combining write transactions
US20070005877A1 (en) System and method to increase DRAM parallelism
JPS6142049A (en) Data processing system
US5704058A (en) Cache bus snoop protocol for optimized multiprocessor computer system
JP2006260159A (en) Information processing apparatus, and data control method for information processing apparatus
US6970978B1 (en) System and method for providing a pre-fetch memory controller
JP3266470B2 (en) Data processing system with per-request write-through cache in forced order
EP0512685A1 (en) Quadrature bus protocol for carrying out transactions in a computer system
KR100807443B1 (en) Opportunistic read completion combining
CN1170233C (en) Multibus pipe-line data processing system and its bus efficiency raising method
US6578114B2 (en) Method and apparatus for altering data length to zero to maintain cache coherency
US5923857A (en) Method and apparatus for ordering writeback data transfers on a bus
CN110232030A (en) Multichip system and method for caching and processing
CN1622071A (en) Access apparatus and method for direct memory
US6327636B1 (en) Ordering for pipelined read transfers
US6854036B2 (en) Method of transferring data in a processing system
CN1095126C (en) Method and apparatus for enabling cache stream access

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20041006