Summary of the invention
Embodiments of the present invention provide a memory data buffering method and apparatus. They solve a problem of the prior art, in which only the directory information is optimized: when the directory cache hits but the directory entry indicates that the data line is invalid, the data must still be fetched from memory, which lengthens the response time of the request.
To solve the above technical problem, a first aspect of the present invention provides a memory data buffering method, including:
allocating, in the memory of a home agent, a combined buffer area that includes at least a directory cache and a data cache, where the cache lines in the directory cache correspond one to one with the cache lines in the data cache; and
receiving an operation address sent by a cache agent, determining according to the operation address whether the combined buffer area is hit and valid, and if so, directly performing a corresponding operation on the combined buffer area.
In a first possible implementation, the combined buffer area includes description information, and the description information includes a comparison field, directory state information, data state information, and a valid flag bit.
Determining according to the operation address whether the combined buffer area is hit and valid includes:
querying, according to the operation address, whether a matching comparison field exists in the combined buffer area, and if so, determining that the combined buffer area is hit; and
determining, according to the valid flag bit of the hit cache line in the combined buffer area, whether the hit cache line is valid.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation, the operation address is a read operation address, and directly performing the corresponding operation on the combined buffer area includes:
directly returning the directory information and the data information in the hit cache line to the cache agent.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the operation address is a write operation address, and directly performing the corresponding operation on the combined buffer area includes:
writing new data information and directory information into the hit cache line, updating the directory state information and the data state information in the description information of the combined buffer area, and setting the valid flag bit to valid.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the method further includes:
receiving the operation address sent by the cache agent, and when it is determined according to the operation address that the combined buffer area is missed, selecting a cache line to be replaced in the combined buffer area by using a least recently used (LRU) algorithm, determining, according to the directory state information and the data state information of the cache line to be replaced, whether the directory information and the data information in the cache line to be replaced need to be written back to memory, and setting the valid flag bit of the cache line to be replaced to invalid.
A second aspect of the present invention provides a memory data buffering method, including:
allocating a directory cache and a data cache in the memory of a home agent, where the cache lines in the directory cache do not correspond one to one with the cache lines in the data cache; and
receiving an operation address sent by a cache agent, determining according to the operation address whether the directory cache and the data cache are hit and valid, and if so, directly performing corresponding operations on the directory cache and the data cache.
In a first possible implementation, the directory cache includes directory description information, and the directory description information includes a comparison field, directory state information, a data information flag bit, a data information identity, and a directory valid flag bit; the data cache includes data description information, and the data description information includes data state information and a data valid flag bit.
Receiving the operation address sent by the cache agent and determining according to the operation address whether the directory cache and the data cache are hit and valid includes:
querying, according to the operation address, whether a matching comparison field exists in the directory cache, and if so, determining that the directory cache is hit;
determining, according to the directory valid flag bit of the hit cache line in the directory cache, whether the hit cache line is valid, and if so, determining according to the data information flag bit of the hit cache line whether the data cache is hit; and
determining, according to the data valid flag bit in the hit cache line of the data cache, whether that cache line in the data cache is valid.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation, the operation address is a read operation address, and directly performing the corresponding operations on the directory cache and the data cache includes:
directly returning, to the cache agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the operation address is a write operation address, and directly performing the corresponding operations on the directory cache and the data cache includes:
writing new directory information into the hit cache line of the directory cache, and writing new data information into the hit cache line of the data cache.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the method further includes:
when it is determined, according to the data information flag bit of the hit cache line, that the data cache is missed, evicting an idle cache line in the data cache by using a least recently used (LRU) algorithm.
A third aspect of the present invention provides a memory data buffering apparatus, including:
a cache allocation module, configured to allocate, in the memory of a home agent, a combined buffer area that includes at least a directory cache and a data cache, where the cache lines in the directory cache correspond one to one with the cache lines in the data cache; and
an operation execution module, configured to receive an operation address sent by a cache agent, determine according to the operation address whether the combined buffer area is hit and valid, and if so, directly perform a corresponding operation on the combined buffer area.
In a first possible implementation, the operation execution module includes:
a hit-and-valid judging unit, configured to query, according to the operation address, whether a matching comparison field exists in the combined buffer area, and if so, determine that the combined buffer area is hit; and determine, according to the valid flag bit of the hit cache line in the combined buffer area, whether the hit cache line is valid; where the combined buffer area includes description information, and the description information includes a comparison field, directory state information, data state information, and a valid flag bit.
With reference to the third aspect or the first possible implementation of the third aspect, in a second possible implementation, the operation execution module includes:
a read operation execution unit, configured to directly return the directory information and the data information in the hit cache line to the cache agent.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the operation execution module includes:
a write operation execution unit, configured to write new data information and directory information into the hit cache line, update the directory state information and the data state information in the description information of the combined buffer area, and set the valid flag bit to valid.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the apparatus further includes:
a replacement module, configured to receive the operation address sent by the cache agent, and when it is determined according to the operation address that the combined buffer area is missed, select a cache line to be replaced in the combined buffer area by using a least recently used (LRU) algorithm, determine, according to the directory state information and the data state information of the cache line to be replaced, whether the directory information and the data information in the cache line to be replaced need to be written back to memory, and set the valid flag bit of the cache line to be replaced to invalid.
A fourth aspect of the present invention provides a memory data buffering apparatus, including:
a cache allocation module, configured to allocate a directory cache and a data cache in the memory of a home agent, where the cache lines in the directory cache do not correspond one to one with the cache lines in the data cache; and
an operation execution module, configured to receive an operation address sent by a cache agent, determine according to the operation address whether the directory cache and the data cache are hit and valid, and if so, directly perform corresponding operations on the directory cache and the data cache.
In a first possible implementation, the operation execution module includes:
a hit-and-valid judging unit, configured to query, according to the operation address, whether a matching comparison field exists in the directory cache, and if so, determine that the directory cache is hit; determine, according to the directory valid flag bit of the hit cache line in the directory cache, whether the hit cache line is valid, and if so, determine according to the data information flag bit of the hit cache line whether the data cache is hit; and determine, according to the data valid flag bit in the hit cache line of the data cache, whether that cache line in the data cache is valid; where the directory cache includes directory description information, the directory description information includes a comparison field, directory state information, a data information flag bit, a data information identity, and a directory valid flag bit, the data cache includes data description information, and the data description information includes data state information and a data valid flag bit.
With reference to the fourth aspect or the first possible implementation of the fourth aspect, in a second possible implementation, the operation execution module includes:
a read operation execution unit, configured to directly return, to the cache agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
With reference to the second possible implementation of the fourth aspect, in a third possible implementation, the operation execution module includes:
a write operation execution unit, configured to write new directory information into the hit cache line of the directory cache, and write new data information into the hit cache line of the data cache.
With reference to the third possible implementation of the fourth aspect, in a fourth possible implementation, the apparatus further includes:
a replacement module, configured to, when it is determined according to the data information flag bit of the hit cache line that the data cache is missed, evict an idle cache line in the data cache by using a least recently used (LRU) algorithm.
Implementing the embodiments of the present invention yields the following beneficial effects: a data cache is added to the original directory cache scheme, which reduces the number of memory accesses and reduces the latency of data fetches.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a first schematic flowchart of a memory data buffering method according to an embodiment of the present invention, the process includes:
Step 101: allocate, in the memory of the home agent, a combined buffer area that includes at least a directory cache and a data cache.
Specifically, the directory cache holds copies, kept in the home agent, of the directory information of memory lines. The directory information of a memory line records its state information and its location in the caches: the state information indicates the invalid, exclusive, dirty, or shared state, and the location information indicates which CPU's cache the memory line has been loaded into. Each memory line has corresponding directory information. Following the principle of locality, the directory cache buffers only the directory information that is frequently accessed. The data cache lines in the data cache correspond one to one with the directory cache lines in the directory cache; that is, a directory cache line records the state information and location information for the corresponding data cache line. Because of this one-to-one relationship, a directory cache line and its data cache line can be combined into a single combined cache line.
Step 102: receive a read operation address sent by the cache agent.
Specifically, the cache agent sends a read operation request, and the read operation request corresponds to one read operation address.
Step 103: determine whether the combined buffer area is hit.
Specifically, whether the combined buffer area is hit is determined according to the read operation address. The combined buffer area includes description information, and the description information includes a comparison field, directory state information, data state information, and a valid flag bit. The structure of a cache line of the combined buffer area is shown in the following table:
Table 1
Determining according to the operation address whether the combined buffer area is hit specifically includes: querying, according to the operation address, whether a matching comparison field TAG exists in the combined buffer area, and if so, determining that the combined buffer area is hit.
Step 104: determine whether the hit cache line of the combined buffer area is valid.
Specifically, validity is determined according to the value of the VALID bit of the hit cache line; when the VALID bit is set to 1, the cache line is marked as valid.
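The hit and validity checks of steps 103 and 104 can be sketched in C. This is an illustrative model only, not the claimed hardware: the structure name, field names (taken from the Table 1 flags TAG, DIR_DIRTY, DATA_DIRTY, VALID), and the fully associative linear search are all assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout of one combined-buffer cache line, using the
 * flag names mentioned for Table 1 (TAG, DIR_DIRTY, DATA_DIRTY, VALID). */
typedef struct {
    uint64_t tag;        /* TAG: comparison field derived from the address */
    uint8_t  valid;      /* VALID: 1 means the line holds usable contents */
    uint8_t  dir_dirty;  /* DIR_DIRTY: directory info newer than memory */
    uint8_t  data_dirty; /* DATA_DIRTY: data newer than memory */
    uint64_t dir_info;   /* directory state and location information */
    uint8_t  data[64];   /* cached copy of the memory line */
} combined_line;

/* Steps 103/104: report a hit only when a line's TAG matches the
 * operation address and its VALID bit is set; return the line index,
 * or -1 when memory must be accessed instead. */
int combined_lookup(const combined_line *lines, size_t n, uint64_t addr_tag) {
    for (size_t i = 0; i < n; i++) {
        if (lines[i].tag == addr_tag && lines[i].valid)
            return (int)i;
    }
    return -1;
}
```

Note that a line whose TAG matches but whose VALID bit is 0 is treated the same as a miss, which is exactly the case that triggers step 105.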
Step 105: initiate a read operation on the memory corresponding to the read operation address.
Specifically, when the determination in step 103 or step 104 is negative, step 105 is performed: the directory information and the data information in the memory line corresponding to the read operation address are read through the memory channel.
Step 106: write the directory information and the data information obtained from memory into the combined buffer area.
Specifically, an idle cache line is looked up in the combined buffer area; if one exists, the directory information and the data information obtained from memory are written into that cache line. If no idle cache line exists in the combined buffer area, a cache line is evicted by using an algorithm such as least recently used (LRU) or first in, first out (FIFO), so that the directory information and the data information can be written.
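The fill path of step 106 can be sketched as follows. This is a simplified model under stated assumptions: the line layout and function names are hypothetical, and a rotating counter stands in for the FIFO eviction order that the text permits (write-back of dirty victims, covered later in steps 2053 and 2054, is omitted here).

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t tag;
    uint8_t  valid;
    uint64_t dir_info;
    uint8_t  data[64];
} combined_line;

/* Step 106 sketch: place the directory info and data fetched from
 * memory into an idle (invalid) line; if none is idle, evict a line
 * in FIFO order.  Returns the index of the line that was filled. */
size_t fill_after_miss(combined_line *lines, size_t n, size_t *fifo_next,
                       uint64_t tag, uint64_t dir_info, const uint8_t *data64) {
    size_t victim = n;                     /* n means "no idle line found" */
    for (size_t i = 0; i < n; i++) {
        if (!lines[i].valid) { victim = i; break; }
    }
    if (victim == n) {                     /* no idle line: evict one */
        victim = *fifo_next;
        *fifo_next = (*fifo_next + 1) % n;
    }
    lines[victim].tag = tag;
    lines[victim].dir_info = dir_info;
    for (size_t b = 0; b < 64; b++) lines[victim].data[b] = data64[b];
    lines[victim].valid = 1;
    return victim;
}
```

The text leaves the replacement algorithm open; LRU would only change how the victim index is chosen, not the fill itself.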
Step 107: directly return the directory information and the data information in the hit cache line to the cache agent.
Specifically, when it is determined according to the read operation address that the combined buffer area is hit and the hit cache line is valid, the directory information and the data information in the hit cache line are returned directly to the cache agent.
Implementing the embodiments of the present invention adds a data cache to the original directory cache scheme, which reduces the number of memory accesses and reduces the latency of data fetches.
Referring to Fig. 2 and Fig. 3, a second schematic flowchart of a memory data buffering method according to an embodiment of the present invention, the process includes:
Step 201: allocate, in the memory of the home agent, a combined buffer area that includes at least a directory cache and a data cache.
Specifically, the directory cache holds copies of the directory information of memory lines that have been loaded into CPU caches. The directory information of a memory line records its state information and its location in the caches: the state information indicates the invalid, exclusive, dirty, or shared state, and the location information indicates which CPU's cache the memory line has been loaded into. Each memory line has corresponding directory information. Following the principle of locality, the directory cache buffers only the directory information that is frequently accessed. The data cache lines in the data cache correspond one to one with the directory cache lines in the directory cache; that is, a directory cache line records the state information and location information for the corresponding data cache line. Because of this one-to-one relationship, a directory cache line and its data cache line can be combined into a single combined cache line.
Step 202: receive a write operation address sent by the cache agent.
Specifically, the cache agent sends a write operation request, and the write operation request corresponds to one write operation address.
Step 203: determine whether the combined buffer area is hit.
Specifically, whether a matching comparison field exists in the cache lines of the combined buffer area is determined according to the write operation address; if one exists, the combined buffer area is hit and step 204 is performed; if not, the combined buffer area is missed and step 205 is performed. In the example of Table 1, whether a matching TAG exists in the combined buffer area is determined according to the write operation address; if one exists, the combined buffer area is hit.
Step 204: determine whether the hit cache line of the combined buffer area is valid.
Specifically, validity is determined according to the valid flag bit of the hit cache line; if valid, step 206 is performed; if invalid, step 205 is performed. In the example of Table 1, VALID is the valid flag bit: VALID set to 1 marks the hit cache line as valid, and VALID set to 0 marks it as invalid.
Step 205: find an idle cache line in the combined buffer area, or select a cache line with a preset algorithm, and perform a replacement operation.
Specifically, when the determination in step 203 and/or step 204 is negative, that is, when the combined buffer area is missed and/or the hit cache line is invalid, a replacement operation is performed. The replacement operation is as follows: query whether an idle cache line exists in the combined buffer area; if so, choose one of the idle cache lines and perform the replacement operation on it; if the cache lines in the combined buffer area are all occupied and no idle cache line exists, select a cache line with an algorithm such as LRU or FIFO and perform the replacement operation on it; the replacement algorithm is not limited by the present invention. The detailed replacement process, shown in Fig. 3, includes:
Step 2051: the home agent initiates a replacement operation on a cache line of the combined buffer area.
Specifically, the replacement operation is initiated when there is no idle cache line in the combined buffer area.
Step 2052: select a cache line by using the LRU algorithm.
Specifically, the least recently used (LRU) algorithm follows the principle of temporal locality and can effectively improve the cache hit rate.
Step 2053: determine, according to the directory state information and the data state information, whether a write-back to memory is needed.
Specifically, before the replacement operation is performed, it is determined whether the directory state information and the data state information of the chosen cache line are dirty. If dirty, the content stored in the cache line is newer than memory and needs to be written back. In the example of Table 1, DIR_DIRTY set to 1 marks the directory information in the cache line as dirty, and DATA_DIRTY set to 1 marks the data information in the cache line as dirty; other marking methods may of course be used, and the present invention is not limited in this respect.
Step 2054: write the directory information and the data information in the cache line to be replaced back to memory.
Specifically, this step is performed when the determination in step 2053 is positive, which ensures that the data in memory is up to date.
Step 2055: set the valid flag bit of the cache line to be replaced to invalid.
Specifically, after the replacement, the directory information and the data information originally in the cache line are overwritten and are no longer valid data; to avoid reading wrong data, the valid flag of the cache line to be replaced is set to invalid.
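Steps 2053 through 2055 can be sketched as a write-back routine. As before, this is an illustrative model, not the claimed hardware: the line layout and the tag-indexed arrays standing in for the memory channel are assumptions for demonstration.

```c
#include <stdint.h>

/* Hypothetical line layout with the dirty and valid flags of Table 1. */
typedef struct {
    uint64_t tag;
    uint8_t  valid, dir_dirty, data_dirty;
    uint64_t dir_info;
    uint64_t data;
} combined_line;

/* Steps 2053-2055: before a line is reused, write its directory info
 * and/or data back to memory when the matching dirty bit is set, then
 * clear VALID so the stale contents can never be read.  The "memory"
 * here is a caller-supplied array indexed by tag, purely illustrative. */
void replace_line(combined_line *victim, uint64_t *mem_dir, uint64_t *mem_data) {
    if (victim->dir_dirty)
        mem_dir[victim->tag] = victim->dir_info;   /* step 2054 */
    if (victim->data_dirty)
        mem_data[victim->tag] = victim->data;      /* step 2054 */
    victim->valid = 0;                             /* step 2055 */
    victim->dir_dirty = 0;
    victim->data_dirty = 0;
}
```

Checking the two dirty bits independently is the point of the scheme: a line whose directory information changed but whose data did not only costs one write-back, not two.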
Step 206: write the new data information and directory information into the hit cache line, update the directory state information and the data state information in the description information of the combined buffer area, and set the valid flag bit to valid. For example, as shown in Table 1, DIR_DIRTY, VALID, and DATA_DIRTY are all set to 1.
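Step 206's write-hit update can be sketched in a few lines of C; the structure and function names are hypothetical, and only the flag updates named in the text (DIR_DIRTY, DATA_DIRTY, VALID all set to 1) are modeled.

```c
#include <stdint.h>

typedef struct {
    uint64_t tag;
    uint8_t  valid, dir_dirty, data_dirty;
    uint64_t dir_info;
    uint64_t data;
} combined_line;

/* Step 206: on a write hit, store the new data and directory info in
 * the hit line and set DIR_DIRTY, DATA_DIRTY and VALID to 1, marking
 * the line's contents as newer than memory. */
void write_hit(combined_line *line, uint64_t new_dir, uint64_t new_data) {
    line->dir_info   = new_dir;
    line->data       = new_data;
    line->dir_dirty  = 1;
    line->data_dirty = 1;
    line->valid      = 1;
}
```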
Implementing the embodiments of the present invention adds a data cache to the original directory cache scheme, which reduces the number of memory accesses and reduces the latency of data fetches.
Referring to Fig. 4, a third schematic flowchart of a memory data buffering method according to the present invention, the process includes:
Step 301: allocate a directory cache and a data cache in the memory of the home agent.
Specifically, the directory cache holds copies of the partial directory information loaded into the home agent; the directory information records the state information of memory lines and their locations in the caches. The data cache holds copies of the corresponding data information loaded from memory into the home agent. Following the principle of program locality, what is loaded into the directory cache and the data cache is the frequently accessed data. Here the cache lines of the directory cache and the cache lines of the data cache are not in a one-to-one relationship; a cache line of the directory cache is associated with a cache line in the data cache through a data information flag bit and a data information identity. The cache line structure of the directory cache is shown in the following table:
Table 2
The cache line structure of the data cache is shown in the following table:
Table 3
The data information flag bit DATA_VALID in a cache line of the directory cache indicates whether the data cache is hit, and the data information identity DATA_ID is used to look up the corresponding cache line of the data cache. As can be seen from the tables above, when the data cache is missed, that is, when the cache line holding the data information is not present in the data cache, no data information identity DATA_ID is allocated.
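Since the bodies of Tables 2 and 3 are not reproduced in this text, the two line formats can only be sketched from the field names the text mentions. The C layout below is therefore an assumption: a directory-cache line carries a DATA_VALID flag plus a DATA_ID index into the separate data cache, and DATA_ID is meaningful only while DATA_VALID is set.

```c
#include <stdint.h>

/* Hypothetical Table 2 line: directory cache entry. */
typedef struct {
    uint64_t tag;         /* comparison field */
    uint8_t  valid;       /* directory valid flag bit */
    uint8_t  dir_dirty;   /* directory state information (DIR_DIRTY) */
    uint8_t  data_valid;  /* DATA_VALID: data also present in data cache? */
    uint16_t data_id;     /* DATA_ID: index of the linked data-cache line */
    uint64_t dir_info;
} dir_line;

/* Hypothetical Table 3 line: data cache entry. */
typedef struct {
    uint8_t  valid;       /* data valid flag bit */
    uint8_t  data_dirty;  /* data state information (DATA_DIRTY) */
    uint64_t data;
} data_line;

/* On a data-cache miss no DATA_ID is allocated; model that by
 * returning -1 whenever DATA_VALID is clear. */
int linked_data_index(const dir_line *d) {
    return d->data_valid ? (int)d->data_id : -1;
}
```

The decoupling is what lets the data cache be smaller than the directory cache: only directory lines whose data is hot need a DATA_ID at all.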
Step 302: receive a read operation address sent by the cache agent.
Specifically, the cache agent sends a read operation request, and the read operation request corresponds to one read operation address.
Step 303: determine whether the directory cache is hit and valid.
Specifically, whether a matching comparison field exists in the directory cache is queried according to the operation address, for example whether a TAG in Table 2 matches the operation address; if one exists, the directory cache is hit. The hit cache line of the directory cache is found, and the valid flag bit VALID in that cache line is queried to determine whether the cache line is valid. Assume that VALID set to 1 means valid and VALID set to 0 means invalid; then when VALID is queried as 1, the corresponding cache line is valid and step 304 is performed, and when VALID is queried as 0, the corresponding cache line is invalid and step 305 is performed.
Step 305: initiate a read operation on the memory corresponding to the operation address.
Specifically, the directory cache is missed, which shows that the cache line needed by the cache agent does not exist in the directory cache, and it must be read from memory through a main-memory access.
Step 306: write the directory information and the data information obtained from memory into the memory of the home agent.
Specifically, the directory information is written into the directory cache, and the data information is written into the data cache.
Step 304: determine whether the data cache is valid.
Specifically, when the determination in step 303 is positive, the data information flag bit DATA_VALID of the hit cache line in the directory cache is read, and its value determines whether the data information is present in the data cache. Assume that DATA_VALID set to 1 marks the data as present in the data cache and DATA_VALID set to 0 marks it as absent; then when DATA_VALID is detected as 1, the corresponding cache line in the data cache is found through the data information identity DATA_ID of the directory cache.
Step 307: directly return the directory information and the data information in the hit cache lines to the cache agent.
Specifically, when the value of the data valid flag bit VALID in the data cache shows that the cache line of the data cache is also valid, the directory information and the data information are returned directly, which improves access speed.
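The two-level read path of steps 303 through 307 can be sketched as a single lookup function. The layout and names are hypothetical (the fully associative search in particular is a simplification); the point illustrated is that the fast path requires three conditions in sequence: directory TAG match with VALID set, DATA_VALID set, and a valid target data line.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t tag;
    uint8_t  valid, data_valid;
    uint16_t data_id;
    uint64_t dir_info;
} dir_line;

typedef struct {
    uint8_t  valid;
    uint64_t data;
} data_line;

/* Steps 303-307 sketch: serve a read from the caches only when
 * (1) a directory line's TAG matches and its VALID bit is set,
 * (2) its DATA_VALID bit is set, and (3) the data line named by
 * DATA_ID is itself valid.  Returns 1 and fills out_dir/out_data on
 * that fast path, 0 when memory must be accessed instead. */
int fast_read(const dir_line *dirs, size_t n, const data_line *datas,
              uint64_t addr_tag, uint64_t *out_dir, uint64_t *out_data) {
    for (size_t i = 0; i < n; i++) {
        if (dirs[i].tag != addr_tag || !dirs[i].valid)
            continue;                          /* step 303 */
        if (!dirs[i].data_valid)
            return 0;                          /* step 304: data not cached */
        const data_line *dl = &datas[dirs[i].data_id];
        if (!dl->valid)
            return 0;
        *out_dir  = dirs[i].dir_info;          /* step 307 */
        *out_data = dl->data;
        return 1;
    }
    return 0;                                  /* step 305: go to memory */
}
```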
Implementing the embodiments of the present invention adds a data cache to the original directory cache scheme, which reduces the number of memory accesses and reduces the latency of data fetches.
Referring to Fig. 5, a fourth schematic flowchart of a memory data buffering method according to the present invention, the process includes:
Step 401: allocate a directory cache and a data cache in the memory of the home agent.
Specifically, the directory cache holds copies of the partial directory information loaded into the home agent; the directory information records the state information of memory lines and their locations in the caches. The data cache holds copies of the corresponding data information loaded from memory into the home agent. Following the principle of program locality, what is loaded into the directory cache and the data cache is the frequently accessed data. Here the cache lines of the directory cache and the cache lines of the data cache are not in a one-to-one relationship; a cache line of the directory cache is associated with a cache line in the data cache through a data information flag bit and a data information identity.
Step 402: receive a write operation address sent by the cache agent.
Specifically, the cache agent sends a write operation request, and the write operation request corresponds to one write operation address.
Step 403: the directory cache is hit and valid.
Specifically, that the directory cache is hit and valid is determined according to the comparison field and the directory valid flag bit of the directory cache, using the method described above.
Step 404: update the hit cache line of the directory cache.
Step 405: determine whether the cache line in the data cache needs to be updated.
Specifically, if the write operation request involves updating a cache line of the data cache, step 406 is performed next; if it does not involve updating a cache line of the data cache, step 409 is performed.
Step 406: determine whether the corresponding cache line is in the data cache.
Whether the corresponding cache line of the data cache is present is determined according to the data information flag bit of the directory cache; if present, step 408 is performed; if absent, step 407 is performed.
Step 408: write into the corresponding cache line according to the data information identity.
Specifically, the cache line of the data cache that requires the write operation is found according to the data information identity in the directory cache, and the updated data is written into that cache line.
Step 407: find an idle cache line in the data cache, or select an idle cache line with a preset algorithm, and perform a replacement operation.
Specifically, the replacement operation has been described above and is not repeated here. Before a replacement operation is performed on the directory cache or the data cache, the directory state information and the data state information must first be checked, that is, whether DIR_DIRTY and DATA_DIRTY in Table 2 are dirty; if dirty, a write-back to memory is performed first, and the replacement operation is performed afterwards.
Step 409: update the data state information and the directory state information.
Specifically, the cache lines in the directory cache and the data cache have been modified, so the data state information and the directory state information are set to dirty.
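The write-hit path of steps 404 through 409 can be sketched for the split-cache design. This is a simplified model under stated assumptions: the layouts and names are hypothetical, and the allocation path of step 407 is omitted, so only the case where the data line is already present (steps 406 and 408) is shown.

```c
#include <stdint.h>

typedef struct {
    uint8_t  valid, dir_dirty, data_valid;
    uint16_t data_id;
    uint64_t dir_info;
} dir_line;

typedef struct {
    uint8_t  valid, data_dirty;
    uint64_t data;
} data_line;

/* Steps 404-409 sketch for a write that hits the directory cache:
 * update the directory line (step 404), and when the request also
 * updates data that is present in the data cache (steps 405/406),
 * write the data line found via DATA_ID (step 408); both lines have
 * their state information marked dirty (step 409). */
void write_hit_split(dir_line *dl, data_line *datas,
                     uint64_t new_dir, uint64_t new_data, int update_data) {
    dl->dir_info  = new_dir;               /* step 404 */
    dl->dir_dirty = 1;                     /* step 409 */
    if (update_data && dl->data_valid) {   /* steps 405/406 */
        data_line *target = &datas[dl->data_id];  /* step 408: via DATA_ID */
        target->data = new_data;
        target->data_dirty = 1;            /* step 409 */
    }
}
```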
Implementing the embodiments of the present invention adds a data cache to the original directory cache scheme, which reduces the number of memory accesses and reduces the latency of data fetches.
Referring to Fig. 6, a first schematic structural diagram of a memory data buffering apparatus according to the present invention, the apparatus includes:
a cache allocation module 11, configured to allocate, in the memory of a home agent, a combined buffer area that includes at least a directory cache and a data cache, where the cache lines in the directory cache correspond one to one with the cache lines in the data cache.
Specifically, the directory cache is a copy, loaded into the Home Agent, of the directory information of memory lines in memory. The directory information records the state information and the cache-location information of each memory line: the state information indicates the invalid, exclusive, dirty, or shared state, and the location information indicates into which CPU's cache the memory line has been loaded. Each memory line of data corresponds to one piece of directory information. According to the principle of locality, the directory cache buffers only the directory information that is frequently accessed. The cache allocation module 11 places the data-cache lines in the data cache and the directory-cache lines in the directory cache in one-to-one correspondence, i.e. the state information and location information recorded in a directory-cache line describe the data in the corresponding data-cache line. Because of this one-to-one relation, a directory-cache line and its data-cache line can be associated into one combination cache line.
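The combination cache line described above can be modeled as a single C struct in which the directory fields and the cached data share one slot, so a directory hit is also a data hit. Field names and the 64-byte line size are illustrative assumptions, not taken from the embodiment:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The four directory states named in the specification. */
enum dir_state { DIR_INVALID, DIR_EXCLUSIVE, DIR_DIRTY_STATE, DIR_SHARED };

/* One combination cache line: a directory-cache line paired one-to-one
 * with its data-cache line.                                           */
typedef struct {
    uint64_t       tag;        /* comparison field from the operation address */
    enum dir_state dir_state;  /* invalid / exclusive / dirty / shared        */
    bool           data_dirty; /* data state information                      */
    bool           valid;      /* valid flag bit                              */
    uint32_t       owner_cpu;  /* location: which CPU's cache holds the line  */
    uint8_t        data[64];   /* cached copy of the memory line              */
} combo_line_t;
```

One lookup on the tag then yields both the coherence state and the data, which is the point of the combination buffer area.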
an operation executing module 12, configured to receive an operation address sent by a Cache Agent, judge, according to the operation address, whether the combination buffer area is hit and valid, and, if so, directly perform the corresponding operation on the combination buffer area.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
Further, referring to Fig. 7 and Fig. 8, which show a second structural schematic diagram of a memory data buffering apparatus of the present invention, in addition to the cache allocation module 11 and the operation executing module 12, the apparatus further includes:
a replacement module 13, configured to: receive the operation address sent by the Cache Agent; when it is judged according to the operation address that the combination buffer area is missed, select, using a least recently used (LRU) algorithm, a cache line to be replaced in the combination buffer area; judge, according to the directory state information and the data state information of the cache line to be replaced, whether the directory information and the data information in the cache line to be replaced need to be written back to memory; and set the valid flag bit of the cache line to be replaced to invalid.
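The behaviour of the replacement module 13 — LRU victim selection, conditional write-back, then invalidation — can be sketched in C as follows. This is a simplified model under stated assumptions: the `last_used` timestamps, the fixed four-line array, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_LINES 4

typedef struct {
    bool     valid;       /* valid flag bit                  */
    bool     dir_dirty;   /* directory state information     */
    bool     data_dirty;  /* data state information          */
    unsigned last_used;   /* smaller value = used longer ago */
} combo_line_t;

static int writebacks = 0;
static void write_back_to_memory(void) { writebacks++; }

/* On a miss: pick the least recently used line, write it back if its
 * directory or data state is dirty, then mark it invalid for reuse.  */
static int replace_lru(combo_line_t lines[NUM_LINES]) {
    int victim = 0;
    for (int i = 1; i < NUM_LINES; i++)
        if (lines[i].last_used < lines[victim].last_used)
            victim = i;
    if (lines[victim].dir_dirty || lines[victim].data_dirty)
        write_back_to_memory();
    lines[victim].valid = false;
    return victim;
}
```

The dirty check here is the same rule as the write-back step described for the replacement operation: a clean victim is simply invalidated, a dirty one is written back first.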
The operation executing module 12 includes:
a hit-and-valid judging unit 121, configured to query, according to the operation address, whether a matching comparison field exists in the combination buffer area, where, if one exists, the combination buffer area is hit; and to judge, according to the valid flag bit of the hit cache line in the combination buffer area, whether the hit cache line is valid. The combination buffer area includes description information, and the description information includes: a comparison field, directory state information, data state information, and a valid flag bit.
The operation executing module 12 further includes:
a read-operation executing unit 122, configured to directly return the directory information and the data information in the hit cache line to the Cache Agent.
The operation executing module 12 further includes:
a write-operation executing unit 123, configured to write new data information and directory information into the hit cache line, update the directory state information and the data state information in the description information of the combination buffer area, and set the valid flag bit to valid.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
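Taken together, units 121–123 amount to a tag lookup followed by a read or a write on the hit line. The C sketch below shows one way this could behave; the linear search, line count, and all names are assumptions for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_LINES 4

typedef struct {
    uint64_t tag;          /* comparison field            */
    bool     valid;        /* valid flag bit              */
    bool     dir_dirty;    /* directory state information */
    bool     data_dirty;   /* data state information      */
    uint8_t  data[8];      /* cached data (shortened)     */
} combo_line_t;

/* Hit-and-valid judgement: a matching comparison field is a hit,
 * and the valid flag bit decides whether the hit line is usable. */
static int lookup(const combo_line_t lines[NUM_LINES], uint64_t addr) {
    for (int i = 0; i < NUM_LINES; i++)
        if (lines[i].valid && lines[i].tag == addr)
            return i;
    return -1;  /* miss */
}

/* Write path: store the new information, update the state bits,
 * and keep the valid flag bit set.                              */
static void write_hit(combo_line_t *line, const uint8_t *new_data) {
    memcpy(line->data, new_data, sizeof line->data);
    line->dir_dirty  = true;
    line->data_dirty = true;
    line->valid      = true;
}
```

On a hit-and-valid result, the read path simply returns `lines[i]`; on a write, the state bits become dirty so a later replacement knows a write-back is required.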
Referring to Fig. 9, a third structural schematic diagram of a memory data buffering apparatus of the present invention includes a processor 61 and a memory 62. The number of processors 61 in the apparatus may be one or more; Fig. 9 takes one processor as an example. In some embodiments of the present invention, the processor 61 and the memory 62 may be connected by a bus or in another manner; Fig. 9 takes a bus connection as an example.
The memory 62 stores a set of program code, and the processor 61 is configured to call the program code stored in the memory 62 to perform the following operations:
setting up, in the memory of a Home Agent, a combination buffer area that includes at least a directory cache and a data cache, the cache lines in the directory cache and the cache lines in the data cache being in one-to-one correspondence;
receiving an operation address sent by a Cache Agent, judging, according to the operation address, whether the combination buffer area is hit and valid, and, if so, directly performing the corresponding operation on the combination buffer area.
In some embodiments of the present invention, the processor 61 is specifically configured to perform:
querying, according to the operation address, whether a matching comparison field exists in the combination buffer area, where, if one exists, the combination buffer area is hit; and
judging, according to the valid flag bit of the hit cache line in the combination buffer area, whether the hit cache line is valid; the combination buffer area includes description information, and the description information includes: a comparison field, directory state information, data state information, and a valid flag bit.
Further, the processor 61 is also specifically configured to perform:
directly returning the directory information and the data information in the hit cache line to the Cache Agent.
Further, the processor 61 is specifically configured to perform:
writing new data information and directory information into the hit cache line, updating the directory state information and the data state information in the description information of the combination buffer area, and setting the valid flag bit to valid.
Further, the processor 61 is additionally configured to perform:
receiving the operation address sent by the Cache Agent; when it is judged according to the operation address that the combination buffer area is missed, selecting, using a least recently used (LRU) algorithm, a cache line to be replaced in the combination buffer area; judging, according to the directory state information and the data state information of the cache line to be replaced, whether the directory information and the data information in the cache line to be replaced need to be written back to memory; and setting the valid flag bit of the cache line to be replaced to invalid.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
Referring to Fig. 10, a fourth structural schematic diagram of a memory data buffering apparatus of the present invention includes:
a cache allocation module 21, configured to set up a directory cache and a data cache in the memory of a Home Agent, where the cache lines in the directory cache and the cache lines in the data cache are not in one-to-one correspondence.
an operation executing module 22, configured to receive an operation address sent by a Cache Agent, judge, according to the operation address, whether the directory cache and the data cache are hit and valid, and, if so, directly perform the corresponding operation on the directory cache and the data cache.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
Further, referring to Fig. 11 and Fig. 12, which show a fifth structural schematic diagram of a memory data buffering apparatus of the present invention, in addition to the cache allocation module 21 and the operation executing module 22, the apparatus further includes:
a replacement module 23, configured to, when it is judged according to the data information flag bit of the hit cache line that the data cache is missed, use a least recently used (LRU) algorithm to evict a cache line in the data cache so as to obtain an idle cache line.
The operation executing module 22 includes:
a hit-and-valid judging unit 221, configured to: query, according to the operation address, whether a matching comparison field exists in the directory cache, where, if one exists, the directory cache is hit;
judge, according to the directory valid flag bit of the hit cache line in the directory cache, whether the hit cache line is valid, and, if so, judge, according to the data information flag bit of the hit cache line, whether the data cache is hit; and
judge, according to the data valid flag bit in the hit cache line of the data cache, whether the hit cache line in the data cache is valid. The directory cache includes directory description information, and the directory description information includes a comparison field, directory state information, a data information flag bit, a data information identifier, and a directory valid flag bit; the data cache includes data description information, and the data description information includes data state information and a data valid flag bit.
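In this non-one-to-one scheme the lookup is two-staged: the directory hit is judged first, and the data information flag bit plus the data information identifier then locate (or fail to locate) the line in the data cache. The C sketch below models that chain of judgements; the struct layouts and all names are assumptions for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {                 /* directory description information          */
    uint64_t tag;                /* comparison field                           */
    bool     dir_valid;          /* directory valid flag bit                   */
    bool     data_flag;          /* data information flag bit: data cached?    */
    int      data_idx;           /* data information identifier (data-cache index) */
} dir_entry_t;

typedef struct {                 /* data description information               */
    bool data_valid;             /* data valid flag bit                        */
    bool data_dirty;             /* data state information                     */
} data_entry_t;

/* Two-stage judgement: directory hit first, then data-cache hit through
 * the flag bit and identifier recorded in the directory entry.          */
static bool data_hit(const dir_entry_t *d, const data_entry_t *data_cache,
                     uint64_t addr) {
    if (!d->dir_valid || d->tag != addr)  /* directory miss or invalid   */
        return false;
    if (!d->data_flag)                    /* data not in the data cache  */
        return false;
    return data_cache[d->data_idx].data_valid;
}
```

When `data_hit` returns false while the directory itself hit, the replacement module 23 can free a data-cache line and the identifier in the directory entry is repointed at it.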
The operation executing module 22 includes:
a read-operation executing unit 222, configured to directly return, to the Cache Agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
The operation executing module 22 includes:
a write-operation executing unit 223, configured to write new directory information into the hit cache line of the directory cache and write new data information into the hit cache line of the data cache.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
Referring to Fig. 13, a sixth structural schematic diagram of a memory data buffering apparatus of the present invention includes a processor 71 and a memory 72. The number of processors 71 in the apparatus may be one or more; Fig. 13 takes one processor as an example. In some embodiments of the present invention, the processor 71 and the memory 72 may be connected by a bus or in another manner; Fig. 13 takes a bus connection as an example.
The memory 72 stores a set of program code, and the processor 71 is configured to call the program code stored in the memory 72 to perform the following operations:
setting up a directory cache and a data cache in the memory of a Home Agent, where the cache lines in the directory cache and the cache lines in the data cache are not in one-to-one correspondence;
receiving an operation address sent by a Cache Agent, judging, according to the operation address, whether the directory cache and the data cache are hit and valid, and, if so, directly performing the corresponding operation on the directory cache and the data cache.
Further, the processor 71 is specifically configured to perform:
querying, according to the operation address, whether a matching comparison field exists in the directory cache, where, if one exists, the directory cache is hit;
judging, according to the directory valid flag bit of the hit cache line in the directory cache, whether the hit cache line is valid, and, if so, judging, according to the data information flag bit of the hit cache line, whether the data cache is hit; and
judging, according to the data valid flag bit in the hit cache line of the data cache, whether the hit cache line in the data cache is valid. The directory cache includes directory description information, and the directory description information includes a comparison field, directory state information, a data information flag bit, a data information identifier, and a directory valid flag bit; the data cache includes data description information, and the data description information includes data state information and a data valid flag bit.
Further, the processor 71 is specifically configured to perform:
directly returning, to the Cache Agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
Further, the processor 71 is specifically configured to perform:
writing new directory information into the hit cache line of the directory cache and writing new data information into the hit cache line of the data cache.
In some embodiments of the present invention, the processor 71 is additionally configured to perform:
when it is judged according to the data information flag bit of the hit cache line that the data cache is missed, using a least recently used (LRU) algorithm to evict a cache line in the data cache so as to obtain an idle cache line.
By implementing the embodiments of the present invention, a corresponding data cache is added to the original directory cache scheme, which reduces the number of memory accesses and the latency of data acquisition.
A person of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above disclosure is merely a preferred embodiment of the present invention and certainly cannot be used to limit the protection scope of the present invention. A person of ordinary skill in the art will appreciate that all or part of the processes implementing the above embodiments, as well as equivalent variations made according to the claims of the present invention, still fall within the scope covered by the invention.