CN103076992B - Memory data caching method and device - Google Patents

Memory data caching method and device

Info

Publication number
CN103076992B
CN103076992B
Authority
CN
China
Prior art keywords
data
cache lines
caching
directory
hit
Prior art date
Legal status
Active
Application number
CN201210578615.3A
Other languages
Chinese (zh)
Other versions
CN103076992A (en
Inventor
徐建荣
姚策
陈昊
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
Hangzhou Huawei Digital Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Huawei Digital Technologies Co Ltd
Priority to CN201210578615.3A
Publication of CN103076992A
Application granted
Publication of CN103076992B
Legal status: Active
Anticipated expiration

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present invention disclose a memory data caching method. A combined cache area including at least a directory cache and a data cache is set up in the memory of a home agent (Home Agent), with the cache lines in the directory cache corresponding one-to-one to the cache lines in the data cache. An operation address sent by a caching agent (Cache Agent) is received, and it is judged according to the operation address whether the combined cache area hits and is valid; if so, the corresponding operation is performed directly on the combined cache area. Using the present invention reduces the number of memory accesses and reduces the latency of data retrieval.

Description

Memory data caching method and device
Technical field
The present invention relates to the field of computers, and in particular to a memory data caching method and device.
Background art
Modern advanced computer systems are built from multiple CPUs. To speed up CPU data access and reduce memory access latency, a cache is implemented on each CPU; its access speed is generally more than an order of magnitude faster than directly accessing memory. Because each CPU implements its own cache, memory at the same address may be cached in the caches of different CPUs. If one CPU performs a write to that address, the copies in the other CPUs' caches must be invalidated, and any dirty data in those caches must be written back to memory. To accomplish this, modern multiprocessor computers implement a cache coherency protocol (Cache Coherence Protocol) on the memory bus; for example, the QPI bus on Intel CPU platforms and the HT bus on AMD CPU platforms both implement cache coherency protocols. Common typical cache coherency protocols fall into two types: Source Snoop and Home Snoop. To implement a cache coherency protocol, a caching agent (Cache Agent) function is generally realized on the CPU side and a home agent (Home Agent) function on the memory side, the two cooperating to realize the protocol. The Cache Agent mainly manages the CPU cache, initiates accesses to memory, and responds to snoop requests from other agents. The Home Agent mainly serializes accesses by multiple CAs to the same address and parallelizes accesses to different addresses.
In the Home Snoop cache coherency protocol, all snoop requests are sent by the home agent. To reduce the number of snoops sent, a directory is usually stored in memory to indicate the state of each address in the various caching agents. When the home agent receives a request from a caching agent, it reads the memory data and the corresponding directory entry from memory. If the memory directory indicates that no caching agent holds the memory data, the home agent sends the data directly to the requesting caching agent; if it indicates that one or more caching agents hold the memory data, the home agent sends snoops to the caching agents the directory indicates, waits for all snoop responses, and then responds to the requesting caching agent.
A cache line in a CPU is typically in one of four states: Modified (dirty), Exclusive, Shared, or Invalid. To satisfy the cache coherency protocol, cached data in the M or E state for a given address can exist in only one caching agent, and the CPU holding data in these two states may perform read and write operations directly. Data in the S state for a given address may be present in the caching agents of multiple CPUs, but those CPUs may only perform read operations on it. The Invalid state indicates that the caching agent does not hold data for that address. Unlike the cache state in a caching agent, the state recorded in the memory directory does not distinguish M from E: both are recorded uniformly as a single exclusive state. Some designs do not even distinguish the three states M/E/S, recording only whether a caching agent contains the data.
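The coarsening of per-CPU cache states into directory states described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the enum and function names are invented for the example.

```python
from enum import Enum

class CacheState(Enum):
    """The four classic cache-line states held by a CPU's caching agent."""
    MODIFIED = "M"   # dirty, single owner, directly readable and writable
    EXCLUSIVE = "E"  # clean, single owner, directly readable and writable
    SHARED = "S"     # clean, possibly many holders, read-only
    INVALID = "I"    # not present in the caching agent

def directory_state(cpu_state: CacheState) -> str:
    """Collapse the caching-agent state into the coarser state a memory
    directory records: M and E are both recorded as one 'exclusive' state;
    some designs go further and record only present/absent."""
    if cpu_state in (CacheState.MODIFIED, CacheState.EXCLUSIVE):
        return "exclusive"
    if cpu_state is CacheState.SHARED:
        return "shared"
    return "invalid"
```

The point of the collapse is that the directory only needs enough information to decide whom to snoop, not the exact read/write permission each CPU holds.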
In the existing Home Snoop protocol, to speed up reads of the memory directory, a cache can be added at the home agent (note: this cache is not the CPU's caching-agent cache). The benefit of doing so is that frequently accessed memory-directory entries can be read quickly, so snoops can be sent to the caching agents as early as possible, reducing snoop-response latency. However, when the directory cache hits but the state of the indicated cache line is invalid, although no snoop requests need to be sent to other caching agents, the data must still be read from the memory line and then sent to the requesting caching agent; since current memory reads are still very slow, this affects the response time of the whole request.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a memory data caching method and device. They solve the problem that the prior art optimizes only the directory information: when the directory cache hits but the indicated cache line is invalid, the data must still be fetched from memory, lengthening the response time of the request.
To solve the above technical problem, a first aspect of the present invention provides a memory data caching method, including:
setting up, in the memory of a home agent (Home Agent), a combined cache area that includes at least a directory cache and a data cache, the cache lines in the directory cache corresponding one-to-one to the cache lines in the data cache;
receiving an operation address sent by a caching agent (Cache Agent), judging according to the operation address whether the combined cache area hits and is valid, and if so, performing the corresponding operation directly on the combined cache area.
In a first possible implementation, the combined cache area includes description information, the description information comprising: a contrast field, directory state information, data state information, and a valid flag bit;
the judging according to the operation address whether the combined cache area hits and is valid includes:
querying, according to the operation address, whether a matching contrast field exists in the combined cache area; if so, the combined cache area hits; and
judging, according to the valid flag bit of the hit cache line in the combined cache area, whether the hit cache line is valid.
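The hit-and-valid judgment above amounts to a tag compare followed by a valid-bit test. The sketch below uses the field names of the description information (contrast field TAG, directory/data dirty bits, valid flag); the index/tag split of the operation address and all identifiers are assumptions made for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class CombinedLine:
    tag: int            # contrast field: high-order bits of the cached address
    valid: bool         # valid flag bit (VALID)
    dir_dirty: bool     # directory state information (DIR_DIRTY)
    data_dirty: bool    # data state information (DATA_DIRTY)
    directory: int = 0  # cached directory entry
    data: bytes = b""   # cached memory-line data

def lookup(lines, address, index_bits=4):
    """Return the line if the combined cache area hits AND the hit line
    is valid; otherwise return None. `lines` maps an index to a line."""
    idx = address & ((1 << index_bits) - 1)
    tag = address >> index_bits
    line = lines.get(idx)
    if line is not None and line.tag == tag and line.valid:
        return line
    return None
```

A miss and a hit-but-invalid line are deliberately indistinguishable to the caller: in both cases the request must fall through to memory.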
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation, the operation address is a read-operation address, and the performing the corresponding operation directly on the combined cache area includes:
returning the directory information and data information in the hit cache line directly to the caching agent.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the operation address is a write-operation address, and the performing the corresponding operation directly on the combined cache area includes:
writing new data information and directory information into the hit cache line, updating the directory state information and data state information in the description information of the combined cache area, and setting the valid flag bit to valid.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, when the operation address sent by the Cache Agent is received and the combined cache area is judged to miss according to the operation address, a least recently used (LRU) algorithm is used to select a cache line to be replaced in the combined cache area, and it is judged according to the directory state information and data state information of the cache line to be replaced whether the directory information and data information in the cache line to be replaced need to be written back to memory;
the valid flag bit of the cache line to be replaced is set to invalid.
A second aspect of the present invention provides a memory data caching method, including:
setting up a directory cache and a data cache in the memory of a home agent (Home Agent), the cache lines in the directory cache not corresponding one-to-one to the cache lines in the data cache;
receiving an operation address sent by a caching agent (Cache Agent), judging according to the operation address whether the directory cache and the data cache hit and are valid, and if so, performing the corresponding operation directly on the directory cache and the data cache.
In a first possible implementation, the directory cache includes directory description information, the directory description information comprising a contrast field, directory state information, a data-information flag bit, a data-information identity mark, and a directory valid flag bit; the data cache includes data description information, the data description information comprising data state information and a data valid flag bit;
the receiving an operation address sent by a Cache Agent and judging according to the operation address whether the directory cache and the data cache hit and are valid includes:
querying according to the operation address whether a matching contrast field exists in the directory cache; if so, the directory cache hits;
judging according to the directory valid flag bit of the hit cache line in the directory cache whether the hit cache line is valid, and if so, judging according to the data-information flag bit of the hit cache line whether the data cache hits;
judging according to the data valid flag bit of the hit data-cache line whether the cache line in the data cache is valid.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation,
the operation address is a read-operation address, and the performing the corresponding operation directly on the directory cache and the data cache includes:
returning directly to the caching agent the directory information in the hit directory-cache line and the data information in the hit data-cache line.
With reference to the second possible implementation of the second aspect, in a third possible implementation, the operation address is a write-operation address, and the performing the corresponding operation directly on the directory cache and the data cache includes:
writing new directory information into the hit directory-cache line and writing new data information into the hit data-cache line.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the method also includes:
when it is judged according to the data-information flag bit of the hit directory-cache line that the data cache misses, using a least recently used (LRU) algorithm to evict an idle cache line in the data cache.
A third aspect of the present invention provides a memory data caching device, including:
a cache allocation module, configured to set up, in the memory of a home agent (Home Agent), a combined cache area including at least a directory cache and a data cache, the cache lines in the directory cache corresponding one-to-one to the cache lines in the data cache;
an operation execution module, configured to receive an operation address sent by a caching agent (Cache Agent), judge according to the operation address whether the combined cache area hits and is valid, and if so, perform the corresponding operation directly on the combined cache area.
In a first possible implementation, the operation execution module includes:
a hit-and-valid judging unit, configured to query according to the operation address whether a matching contrast field exists in the combined cache area (if so, the combined cache area hits), and
to judge according to the valid flag bit of the hit cache line in the combined cache area whether the hit cache line is valid; wherein the combined cache area includes description information, the description information comprising: a contrast field, directory state information, data state information, and a valid flag bit.
With reference to the third aspect or the first possible implementation of the third aspect, in a second possible implementation, the operation execution module includes:
a read-operation execution unit, configured to return directly to the caching agent the directory information and data information in the hit cache line.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the operation execution module includes:
a write-operation execution unit, configured to write new data information and directory information into the hit cache line, update the directory state information and data state information in the description information of the combined cache area, and set the valid flag bit to valid.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the device also includes:
a replacement module, configured to receive the operation address sent by the Cache Agent and, when the combined cache area is judged to miss according to the operation address, use a least recently used (LRU) algorithm to select a cache line to be replaced in the combined cache area, and judge according to the directory state information and data state information of the cache line to be replaced whether the directory information and data information in the cache line to be replaced need to be written back to memory;
the valid flag bit of the cache line to be replaced is set to invalid.
A fourth aspect of the present invention provides a memory data caching device, including:
a cache allocation module, configured to set up a directory cache and a data cache in the memory of a home agent (Home Agent), the cache lines in the directory cache not corresponding one-to-one to the cache lines in the data cache;
an operation execution module, configured to receive an operation address sent by a caching agent (Cache Agent), judge according to the operation address whether the directory cache and the data cache hit and are valid, and if so, perform the corresponding operation directly on the directory cache and the data cache.
In a first possible implementation, the operation execution module includes:
a hit-and-valid judging unit, configured to query according to the operation address whether a matching contrast field exists in the directory cache (if so, the directory cache hits);
to judge according to the directory valid flag bit of the hit cache line in the directory cache whether the hit cache line is valid, and if so, to judge according to the data-information flag bit of the hit cache line whether the data cache hits; and
to judge according to the data valid flag bit of the hit data-cache line whether the cache line in the data cache is valid; the directory cache includes directory description information, the directory description information comprising a contrast field, directory state information, a data-information flag bit, a data-information identity mark, and a directory valid flag bit; the data cache includes data description information, the data description information comprising data state information and a data valid flag bit.
With reference to the fourth aspect or the first possible implementation of the fourth aspect, in a second possible implementation, the operation execution module includes:
a read-operation execution unit, configured to return directly to the caching agent the directory information in the hit directory-cache line and the data information in the hit data-cache line.
With reference to the second possible implementation of the fourth aspect, in a third possible implementation, the operation execution module includes:
a write-operation execution unit, configured to write new directory information into the hit directory-cache line and write new data information into the hit data-cache line.
With reference to the third possible implementation of the fourth aspect, in a fourth possible implementation, the device also includes:
a replacement module, configured to, when it is judged according to the data-information flag bit of the hit directory-cache line that the data cache misses, use a least recently used (LRU) algorithm to evict an idle cache line in the data cache.
Implementing the embodiments of the present invention has the following beneficial effects:
by adding a corresponding data cache to the original directory-cache scheme, the number of memory accesses is reduced and the latency of data retrieval is reduced.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art could obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the first flow of a memory data caching method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the second flow of a memory data caching method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the third flow of a memory data caching method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the fourth flow of a memory data caching method according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the replacement operation in Fig. 4;
Fig. 6 is a schematic diagram of the first structure of a memory data caching device according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the second structure of a memory data caching device according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the structure of the operation execution module in Fig. 7;
Fig. 9 is a schematic diagram of the third structure of a memory data caching device according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the fourth structure of a memory data caching device according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of the fifth structure of a memory data caching device according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of the structure of the operation execution module in Fig. 11;
Fig. 13 is a schematic diagram of the sixth structure of a memory data caching device according to an embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, a schematic diagram of the first flow of a memory data caching method according to an embodiment of the present invention, the flow includes:
Step 101: set up, in the memory of the home agent, a combined cache area including at least a directory cache and a data cache.
Specifically, the directory cache holds copies of the directory information of memory lines loaded into the home agent. A directory entry records a memory line's state information and its location in the caches: the state information is one of invalid, exclusive, dirty, and shared, and the location information records which CPU's cache the memory line has been loaded into. Each memory line of data has a corresponding directory entry. By the principle of locality, the directory cache buffers only the directory entries that are frequently accessed. The data-cache lines in the data cache correspond one-to-one to the directory-cache lines in the directory cache; that is, a directory-cache line records the state information and location information for the corresponding line in the data cache. Because of this one-to-one relationship, a directory-cache line and a data-cache line can be associated into one combined cache line.
Step 102: receive the read-operation address sent by the caching agent.
Specifically, the caching agent sends a read-operation request, and the read-operation request corresponds to a read-operation address.
Step 103: does the combined cache area hit?
Specifically, whether the combined cache area hits is judged according to the read-operation address. The combined cache area includes description information, the description information comprising: a contrast field, directory state information, data state information, and a valid flag bit. The structure of a cache line in the combined cache area is as shown in the table below:
Table 1
TAG: contrast field, matched against the operation address
DIR_DIRTY: directory state information (1 means the directory information is dirty)
DATA_DIRTY: data state information (1 means the data information is dirty)
VALID: valid flag bit (1 means the cache line is valid)
Judging according to the operation address whether the combined cache area hits and is valid specifically includes:
querying according to the operation address whether a matching contrast field TAG exists in the combined cache area; if so, the combined cache area hits.
Step 104: is the hit cache line of the combined cache area valid?
Specifically, validity is judged according to the value of the VALID bit of the hit cache line; the cache line is marked valid when its VALID bit is set to 1.
Step 105: initiate a read operation on the memory corresponding to the read-operation address.
Specifically, when the judgment in step 103 or step 104 is negative, step 105 is performed: the directory information and data information in the memory line corresponding to the read-operation address are read through the memory channel.
Step 106: write the directory information and data information obtained from memory into the combined cache area.
Specifically, an idle cache line is looked up in the combined cache area; if one exists, the directory information and data information obtained from memory are written into that cache line. If no idle cache line exists in the combined cache area, an algorithm such as least recently used (LRU) or first-in first-out (FIFO) is used to evict a cache line so that the directory information and data information can be written.
Step 107: return the directory information and data information in the hit cache line directly to the caching agent.
Specifically, when it is determined according to the read-operation address that the combined cache area hits and the hit cache line is valid, the directory information and data information in the hit cache line are returned directly to the caching agent.
Implementing this embodiment of the invention adds a corresponding data cache to the original directory-cache scheme, reducing the number of memory accesses and the latency of data retrieval.
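The read path of steps 102 to 107 can be sketched as follows. This is a simplified, fully associative model kept in LRU order; the patent leaves the cache organization open and names LRU or FIFO only as example replacement algorithms, and every identifier here is illustrative.

```python
from collections import OrderedDict

class CombinedCache:
    """Minimal sketch of the read flow of Fig. 1: hit-and-valid check,
    fall-through to memory on a miss, and fill with LRU eviction."""

    def __init__(self, capacity, memory):
        self.capacity = capacity
        self.memory = memory        # address -> (directory, data), stands in for DRAM
        self.lines = OrderedDict()  # address -> line dict, oldest first (LRU order)

    def read(self, address):
        line = self.lines.get(address)
        if line is not None and line["valid"]:      # steps 103-104: hit and valid
            self.lines.move_to_end(address)         # refresh LRU position
            return line["directory"], line["data"]  # step 107: answer from the cache
        directory, data = self.memory[address]      # step 105: read memory
        if len(self.lines) >= self.capacity:        # step 106: fill, evicting LRU line
            self.lines.popitem(last=False)
        self.lines[address] = {"valid": True, "directory": directory,
                               "data": data, "dir_dirty": False,
                               "data_dirty": False}
        return directory, data
```

Because the directory and the data travel in one combined line, a valid hit answers the caching agent without touching memory at all, which is exactly the saving the embodiment claims over a directory-only cache.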
Referring to Fig. 2 and Fig. 3, schematic diagrams of the second flow of a memory data caching method according to an embodiment of the present invention, the flow includes:
Step 201: set up, in the memory of the home agent, a combined cache area including at least a directory cache and a data cache.
Specifically, the directory cache holds copies of the directory information of memory lines loaded into the CPU caches. A directory entry records a memory line's state information and its location in the caches: the state information is one of invalid, exclusive, dirty, and shared, and the location information records which CPU's cache the memory line has been loaded into. Each memory line of data has a corresponding directory entry. By the principle of locality, the directory cache buffers only the directory entries that are frequently accessed. The data-cache lines in the data cache correspond one-to-one to the directory-cache lines in the directory cache; that is, a directory-cache line records the state information and location information for the corresponding line in the data cache. Because of this one-to-one relationship, a directory-cache line and a data-cache line can be associated into one combined cache line.
Step 202: receive the write-operation address sent by the caching agent.
Specifically, the caching agent sends a write-operation request, and the write-operation request corresponds to a write-operation address.
Step 203: does the combined cache area hit?
Specifically, whether a matching contrast field exists in the cache lines of the combined cache area is judged according to the write-operation address; if so, the combined cache area hits and step 204 is performed; if not, the combined cache area misses and step 205 is performed. Following the example of Table 1, whether a matching TAG exists in the combined cache area is judged according to the write-operation address; if so, the combined cache area hits.
Step 204: is the hit cache line of the combined cache area valid?
Specifically, validity is judged according to the valid flag bit of the hit cache line; if valid, step 206 is performed, and if invalid, step 205 is performed. In the example of Table 1, VALID is the valid flag bit: setting VALID to 1 marks the hit cache line as valid, and setting it to 0 marks it as invalid.
Step 205: find an idle cache line in the combined cache area, or use a preset algorithm to find a cache line, and perform the replacement operation.
Specifically, when the judgment in step 203 and/or step 204 is negative, i.e. the combined cache area misses and/or the hit cache line is invalid, the replacement operation is performed. Concretely, whether an idle cache line exists in the combined cache area is queried; if so, one of the idle cache lines is chosen for the replacement operation. If the cache lines of the combined cache area are full and there is no idle line, an algorithm such as LRU or FIFO selects a cache line for replacement; the present invention does not restrict the replacement algorithm. The specific replacement process, referring to Fig. 3, includes:
Step 2051: the home agent initiates the replacement of a cache line in the combined cache area.
Specifically, the replacement operation is initiated when there is no idle cache line in the combined cache area.
Step 2052: select a cache line using LRU.
Specifically, the least recently used (LRU) algorithm exploits the principle of temporal locality and can effectively raise the cache hit rate.
Step 2053: judge according to the directory state information and data state information whether a write-back to memory is needed.
Specifically, before performing the replacement operation, it is judged whether the directory state information and data state information of the chosen cache line are dirty; if dirty, the data stored in the cache line is the newest copy and must be written back to memory. In the example of Table 1, setting DIR_DIRTY to 1 marks the directory information in the cache line as dirty, and setting DATA_DIRTY to 1 marks the data information in the cache line as dirty; other marking methods may of course be used, and the present invention is not restricted in this respect.
Step 2054: write the directory information and data information in the cache line to be replaced back to memory.
Specifically, this step is performed when the judgment in step 2053 is positive, ensuring that the data in memory is up to date.
Step 2055, effective home position of described cache lines to be replaced are invalid.
Concrete, the directory information in cache lines and data message after replacement are rewritten, and are not effective Data, in order to avoid read mistake data, by cache lines to be replaced be effectively designated be set to invalid.
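A minimal sketch of the replacement flow of steps 2051 to 2055 follows. This is an illustrative model only: the class and function names are hypothetical, and the write-back simply records the victim's tag in a memory dictionary; only the flag names VALID, DIR_DIRTY and DATA_DIRTY come from the description above.

```python
from dataclasses import dataclass

@dataclass
class CombinedCacheLine:
    tag: int
    valid: bool = False       # VALID flag
    dir_dirty: bool = False   # DIR_DIRTY: directory information modified
    data_dirty: bool = False  # DATA_DIRTY: data information modified
    last_used: int = 0        # access timestamp used for LRU selection

def replace_line(lines, memory):
    """Steps 2051-2055: choose an LRU victim, write back if dirty, invalidate."""
    victim = min(lines, key=lambda l: l.last_used)   # step 2052: LRU choice
    if victim.dir_dirty or victim.data_dirty:        # step 2053: dirty check
        memory[victim.tag] = "written back"          # step 2054: write back to memory
    victim.valid = False                             # step 2055: mark invalid
    victim.dir_dirty = victim.data_dirty = False
    return victim
```

For example, with two valid lines last used at times 5 and 9, the line last used at time 5 is evicted, and it is written back only when one of its dirty flags is set.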
Step 206: write the new data information and directory information into the hit cache line, update the directory state information and data state information in the description information of the combined cache region, and set the valid flag to valid. For example, as shown in Table 1, DIR_DIRTY, VALID and DATA_DIRTY are all set to 1.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 4, which is the third schematic flowchart of a memory data caching method of the present invention, the flow includes:
Step 301: open up a directory cache and a data cache in the memory of the home agent.
Specifically, the directory cache is a copy, loaded into the home agent, of part of the directory information in memory; the directory information records the state information of a memory line and its position information in caches. The data cache is a copy of the corresponding data information loaded into the home agent from memory. According to the principle of locality, what is loaded into the directory cache and the data cache is the data that is accessed frequently. Here the cache lines of the directory cache and the cache lines of the data cache are not in one-to-one correspondence; a cache line of the directory cache is associated with a cache line of the data cache through a data information valid flag and a data information identity. The cache line structure of the directory cache is shown in the following table:
Table 2
The cache line structure of the data cache is shown in the following table:
Table 3
The data information valid flag DATA_VALID in a cache line of the directory cache indicates whether the data cache hits, and the data information identity DATA_ID is used to look up the corresponding cache line in the data cache. As can be seen from the tables above, when the data cache misses, that is, when the cache line storing the data information is not present in the data cache, no data information identity DATA_ID is allocated.
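The decoupled structures of Tables 2 and 3 can be sketched as below. Only the field names TAG, VALID, DATA_VALID and DATA_ID come from the tables; the Python types and the lookup helper are hypothetical illustrations.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DirectoryLine:            # cache line of the directory cache (Table 2)
    tag: int                    # TAG: comparison field matched against the address
    valid: bool                 # VALID: directory valid flag
    data_valid: bool            # DATA_VALID: data also held in the data cache?
    data_id: Optional[int]      # DATA_ID: not allocated on a data-cache miss

@dataclass
class DataLine:                 # cache line of the data cache (Table 3)
    data: bytes
    valid: bool                 # data valid flag

def find_data(addr, dir_lines, data_lines):
    """Follow DATA_VALID/DATA_ID from a directory hit into the data cache."""
    for line in dir_lines:
        if line.valid and line.tag == addr:             # directory cache hit
            if line.data_valid and line.data_id is not None:
                data_line = data_lines[line.data_id]    # indexed by DATA_ID
                if data_line.valid:
                    return data_line.data
            return None                                 # data not cached
    return None                                         # directory cache miss
```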
Step 302: receive a read operation address sent by the cache agent.
Specifically, the cache agent sends a read operation request, and the read operation request corresponds to a read operation address.
Step 303: judge whether the directory cache hits and is valid.
Specifically, query, according to the operation address, whether a matching comparison field exists in the directory cache, for example whether a TAG in Table 2 matches the operation address. If one exists, the directory cache hits; find the hit cache line in the directory cache and query the valid flag VALID in that cache line to determine whether the cache line is valid. Suppose VALID set to 1 means valid and VALID set to 0 means invalid; then when VALID is queried as 1, the corresponding cache line is valid and step 304 is performed, and when VALID is queried as 0, the corresponding cache line is invalid and step 305 is performed.
Step 305: initiate a read operation on the memory corresponding to the operation address.
Specifically, the directory cache misses, indicating that the cache line needed by the cache agent does not exist in the directory cache and needs to be read from memory through a main memory access.
Step 306: write the directory information and data information obtained from memory into the memory of the home agent.
Specifically, the directory information is written into the directory cache, and the data information is written into the data cache.
Step 304: judge whether the data cache hits and is valid.
Specifically, when step 303 is judged as yes, read the data information valid flag DATA_VALID of the hit cache line in the directory cache, and determine from the value of DATA_VALID whether the data information is present in the data cache. Suppose DATA_VALID set to 1 indicates presence in the data cache and 0 indicates absence; when DATA_VALID is detected as 1, the corresponding cache line in the data cache is found through the data information identity DATA_ID of the directory cache.
Step 307: directly return the directory information and data information in the hit cache line to the cache agent.
Specifically, when the value of the data valid flag VALID in the data cache determines that the cache line of the data cache is also valid, the directory information and data information are returned directly, improving access speed.
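Steps 302 to 307 together form the read path. The sketch below models it with plain dictionaries under simplifying assumptions (the data-cache index simply reuses the address, and replacement of a full cache is omitted); none of these names come from the patent.

```python
def read(addr, directory, data_cache, memory):
    """Read path of steps 303-307: caches first, memory on a miss."""
    entry = directory.get(addr)
    if entry and entry["valid"]:                       # step 303: hit and valid
        if entry["data_valid"]:                        # step 304: data cached too?
            data_line = data_cache[entry["data_id"]]
            if data_line["valid"]:
                return entry["dir_info"], data_line["data"]  # step 307: direct return
    dir_info, data = memory[addr]                      # step 305: read from memory
    directory[addr] = {"valid": True, "data_valid": True,
                       "data_id": addr, "dir_info": dir_info}  # step 306: fill caches
    data_cache[addr] = {"valid": True, "data": data}
    return dir_info, data
```

The first read of an address misses and loads both caches from memory; a repeated read of the same address is then served directly from the caches.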
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 5, which is the fourth schematic flowchart of a memory data caching method of the present invention, the flow includes:
Step 401: open up a directory cache and a data cache in the memory of the home agent.
Specifically, the directory cache is a copy, loaded into the home agent, of part of the directory information in memory; the directory information records the state information of a memory line and its position information in caches. The data cache is a copy of the corresponding data information loaded into the home agent from memory. According to the principle of locality, what is loaded into the directory cache and the data cache is the data that is accessed frequently. Here the cache lines of the directory cache and the cache lines of the data cache are not in one-to-one correspondence; a cache line of the directory cache is associated with a cache line of the data cache through a data information valid flag and a data information identity.
Step 402: receive a write operation address sent by the cache agent.
Specifically, the cache agent sends a write operation request, and the write operation request corresponds to a write operation address.
Step 403: the directory cache hits and is valid.
Specifically, whether the directory cache hits and is valid is determined according to the comparison field and the directory valid flag of the directory cache, as in the method described above.
Step 404: update the hit cache line in the directory cache.
Step 405: judge whether the cache line in the data cache needs to be updated.
Specifically, if the write operation request involves updating a cache line of the data cache, continue to step 406; if it does not involve updating a cache line of the data cache, perform step 409.
Step 406: judge whether the corresponding cache line is in the data cache.
Determine, according to the data information valid flag of the directory cache, whether the corresponding cache line of the data cache is present in the data cache; if present, perform step 408, and if not present, perform step 407.
Step 408: write to the corresponding cache line according to the data information identity.
Specifically, the cache line of the data cache on which the write operation is needed is found according to the data information identity in the directory cache, and the updated data is written into that cache line.
Step 407: find an idle cache line in the data cache, or use a preset algorithm to obtain an idle cache line, and perform the replacement operation.
Specifically, the replacement operation is as described above and is not repeated here. Before a replacement operation is performed on the directory cache or the data cache, first detect whether the directory state information and data state information, that is, DIR_DIRTY and DATA_DIRTY in Table 2, are dirty; if dirty, perform a write-back to memory before performing the replacement operation.
Step 409: update the data state information and the directory state information.
Specifically, the cache lines in the directory cache and the data cache have been modified, so the data state information and the directory state information are set to dirty.
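The write path of steps 403 to 409 can be sketched as follows; the dictionary layout is a hypothetical illustration and the miss/replacement branch (step 407) is stubbed out, but the order of updates mirrors the steps above.

```python
def write(addr, new_dir_info, new_data, directory, data_cache):
    """Steps 403-409: update the hit directory line and, when present,
    the associated data line, then mark both states dirty."""
    entry = directory.get(addr)
    if not (entry and entry["valid"]):    # step 403 fails: replacement path omitted
        return False
    entry["dir_info"] = new_dir_info      # step 404: update the directory line
    if new_data is not None:              # step 405: data update needed?
        if entry["data_valid"]:           # step 406: line present in data cache?
            line = data_cache[entry["data_id"]]
            line["data"] = new_data       # step 408: write via DATA_ID
            line["dirty"] = True          # step 409: DATA_DIRTY set
    entry["dirty"] = True                 # step 409: DIR_DIRTY set
    return True
```

The dirty flags defer the memory write until the line is later evicted, which is what makes the write-back check of step 407 necessary.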
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 6, which is the first structural diagram of a memory data caching device of the present invention, the device includes:
Caching allocation module 11, configured to open up, in the memory of the home agent (Home Agent), a combined cache region including at least a directory cache and a data cache, the cache lines in the directory cache being in one-to-one correspondence with the cache lines in the data cache.
Specifically, the directory cache is a copy, loaded into the home agent, of the directory information of memory lines in memory. The directory information records the state information of a memory line and its position information in caches: the state information refers to the invalid, exclusive, dirty and shared states, and the position information refers to which CPU's cache a memory line is specifically loaded into. Each memory line of data corresponds to one piece of directory information. According to the principle of locality, only the directory information that is accessed frequently is buffered in the directory cache. The caching allocation module 11 places the data cache lines in the data cache in one-to-one correspondence with the directory cache lines in the directory cache, that is, the state information and position information recorded in a directory cache line are those of the corresponding data cache line in the data cache. Because of this one-to-one relationship, a directory cache line and a data cache line can be associated into one combined cache line.
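Because of the one-to-one correspondence, a directory line and its data line can be modeled as a single combined cache line, roughly as below; the field layout is an illustrative assumption, not the patent's exact format.

```python
from dataclasses import dataclass

@dataclass
class CombinedLine:
    """Directory entry and its memory-line data kept together in one line."""
    tag: int                 # comparison field matched against the operation address
    dir_info: str            # state (invalid/exclusive/dirty/shared) + position info
    data: bytes              # copy of the memory line's data
    valid: bool = True       # one valid flag covers both halves

def hit(addr, lines):
    """One lookup yields directory info and data together: no second access."""
    for line in lines:
        if line.valid and line.tag == addr:
            return line.dir_info, line.data
    return None
```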
Operation executing module 12, configured to receive an operation address sent by the cache agent (Cache Agent), judge according to the operation address whether the combined cache region hits and is valid, and if so, directly perform the corresponding operation on the combined cache region.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Further, referring to Fig. 7 and Fig. 8, which are the second structural diagrams of a memory data caching device of the present invention, the device further includes, in addition to the caching allocation module 11 and the operation executing module 12:
Replacement module 13, configured to receive an operation address sent by the cache agent (Cache Agent); when it is judged according to the operation address that the combined cache region misses, use the Least Recently Used (LRU) algorithm to select a cache line to be replaced in the combined cache region, and judge, according to the directory state information and data state information of the cache line to be replaced, whether the directory information and data information in the cache line to be replaced need to be written back to memory; and set the valid flag of the cache line to be replaced to invalid.
The operation executing module 12 includes:
Hit-and-valid judging unit 121, configured to query, according to the operation address, whether a matching comparison field exists in the combined cache region, and if one exists, the combined cache region hits; and
judge, according to the valid flag of the hit cache line in the combined cache region, whether the hit cache line is valid; wherein the combined cache region includes description information, and the description information includes: a comparison field, directory state information, data state information and a valid flag.
The operation executing module 12 further includes:
Read operation executing unit 122, configured to directly return the directory information and data information in the hit cache line to the cache agent.
The operation executing module further includes:
Write operation executing unit 123, configured to write new data information and directory information into the hit cache line, update the directory state information and data state information in the description information of the combined cache region, and set the valid flag to valid.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 9, which is the third structural diagram of a memory data caching device of the present invention, the device includes a processor 61 and a memory 62. The number of processors 61 in the device may be one or more; Fig. 9 takes one processor as an example. In some embodiments of the present invention, the processor 61 and the memory 62 may be connected through a bus or in other ways; Fig. 9 takes a bus connection as an example.
The memory 62 stores a set of program code, and the processor 61 is configured to call the program code stored in the memory 62 to perform the following operations:
opening up, in the memory of the home agent (Home Agent), a combined cache region including at least a directory cache and a data cache, the cache lines in the directory cache being in one-to-one correspondence with the cache lines in the data cache;
receiving an operation address sent by the cache agent (Cache Agent), judging according to the operation address whether the combined cache region hits and is valid, and if so, directly performing the corresponding operation on the combined cache region.
In some embodiments of the present invention, the processor is specifically configured to perform:
querying, according to the operation address, whether a matching comparison field exists in the combined cache region, and if one exists, the combined cache region hits; and
judging, according to the valid flag of the hit cache line in the combined cache region, whether the hit cache line is valid; wherein the combined cache region includes description information, and the description information includes: a comparison field, directory state information, data state information and a valid flag.
Further, the processor 61 is also specifically configured to perform:
directly returning the directory information and data information in the hit cache line to the cache agent.
Further, the processor 61 is specifically configured to perform:
writing new data information and directory information into the hit cache line, updating the directory state information and data state information in the description information of the combined cache region, and setting the valid flag to valid.
Further, the processor 61 is also configured to perform:
receiving an operation address sent by the cache agent (Cache Agent); when it is judged according to the operation address that the combined cache region misses, using the Least Recently Used (LRU) algorithm to select a cache line to be replaced in the combined cache region, and judging, according to the directory state information and data state information of the cache line to be replaced, whether the directory information and data information in the cache line to be replaced need to be written back to memory;
setting the valid flag of the cache line to be replaced to invalid.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 10, which is the fourth structural diagram of a memory data caching device of the present invention, the device includes:
Caching allocation module 21, configured to open up a directory cache and a data cache in the memory of the home agent (Home Agent), the cache lines in the directory cache not being in one-to-one correspondence with the cache lines in the data cache.
Operation executing module 22, configured to receive an operation address sent by the cache agent (Cache Agent), judge according to the operation address whether the directory cache and the data cache hit and are valid, and if so, directly perform the corresponding operations on the directory cache and the data cache.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Further, referring to Fig. 11 and Fig. 12, which are the fifth structural diagrams of a memory data caching device of the present invention, the device further includes, in addition to the caching allocation module 21 and the operation executing module 22:
Replacement module 23, configured to, when it is judged according to the data information valid flag of the hit cache line that the data cache misses, use the Least Recently Used (LRU) algorithm to evict a cache line from the data cache so as to obtain an idle cache line.
The operation executing module 22 includes:
Hit-and-valid judging unit 221, configured to query, according to the operation address, whether a matching comparison field exists in the directory cache, and if one exists, the directory cache hits;
judge, according to the directory valid flag of the hit cache line in the directory cache, whether the hit cache line is valid, and if so, judge according to the data information valid flag of the hit cache line whether the data cache hits;
judge, according to the data valid flag in the hit cache line of the data cache, whether the hit cache line in the data cache is valid; the directory cache includes directory description information, the directory description information including a comparison field, directory state information, a data information valid flag, a data information identity and a directory valid flag; the data cache includes data description information, the data description information including data state information and a data valid flag.
The operation executing module 22 includes:
Read operation executing unit 222, configured to directly return, to the cache agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
The operation executing module 22 includes:
Write operation executing unit 223, configured to write new directory information into the hit cache line of the directory cache and write new data information into the hit cache line of the data cache.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
Referring to Fig. 13, which is the sixth structural diagram of a memory data caching device of the present invention, the device includes a processor 71 and a memory 72. The number of processors 71 in the device may be one or more; Fig. 13 takes one processor as an example. In some embodiments of the present invention, the processor 71 and the memory 72 may be connected through a bus or in other ways; Fig. 13 takes a bus connection as an example.
The memory 72 stores a set of program code, and the processor 71 is configured to call the program code stored in the memory 72 to perform the following operations:
opening up a directory cache and a data cache in the memory of the home agent (Home Agent), the cache lines in the directory cache not being in one-to-one correspondence with the cache lines in the data cache;
receiving an operation address sent by the cache agent (Cache Agent), judging according to the operation address whether the directory cache and the data cache hit and are valid, and if so, directly performing the corresponding operations on the directory cache and the data cache.
Further, the processor 71 is specifically configured to perform:
querying, according to the operation address, whether a matching comparison field exists in the directory cache, and if one exists, the directory cache hits;
judging, according to the directory valid flag of the hit cache line in the directory cache, whether the hit cache line is valid, and if so, judging according to the data information valid flag of the hit cache line whether the data cache hits;
judging, according to the data valid flag in the hit cache line of the data cache, whether the hit cache line in the data cache is valid; the directory cache includes directory description information, the directory description information including a comparison field, directory state information, a data information valid flag, a data information identity and a directory valid flag; the data cache includes data description information, the data description information including data state information and a data valid flag.
Further, the processor 71 is specifically configured to perform:
directly returning, to the cache agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
Further, the processor 71 is specifically configured to perform:
writing new directory information into the hit cache line of the directory cache and writing new data information into the hit cache line of the data cache.
In some embodiments of the present invention, the processor 71 is also configured to perform:
when it is judged according to the data information valid flag of the hit cache line of the data cache that the data cache misses, using the Least Recently Used (LRU) algorithm to evict a cache line from the data cache so as to obtain an idle cache line.
By adding a corresponding data cache to the original directory cache scheme, implementing the embodiments of the present invention reduces the number of memory accesses and reduces the latency of data acquisition.
One of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM) or the like.
What is disclosed above is only a preferred embodiment of the present invention, which certainly cannot limit the scope of rights of the present invention. One of ordinary skill in the art will appreciate that all or part of the flows implementing the above embodiments, and equivalent variations made according to the claims of the present invention, still fall within the scope covered by the invention.

Claims (20)

1. A memory data caching method, characterized by comprising:
opening up, in the memory of a home agent (Home Agent), a combined cache region including at least a directory cache and a data cache, the cache lines in the directory cache being in one-to-one correspondence with the cache lines in the data cache, a cache line of the directory cache being used to record the state information and position information of the associated cache line of the data cache;
receiving an operation address sent by a cache agent (Cache Agent), judging according to the operation address whether the combined cache region hits and is valid, and if so, directly performing the corresponding operation on the combined cache region.
2. The method as claimed in claim 1, characterized in that the combined cache region includes description information, and the description information includes: a comparison field, directory state information, data state information and a valid flag;
the judging according to the operation address whether the combined cache region hits and is valid includes:
querying, according to the operation address, whether a matching comparison field exists in the combined cache region, and if one exists, the combined cache region hits; and
judging, according to the valid flag of the hit cache line in the combined cache region, whether the hit cache line is valid.
3. The method as claimed in claim 2, characterized in that the operation address is a read operation address, and the directly performing the corresponding operation on the combined cache region includes:
directly returning the directory information and data information in the hit cache line to the cache agent.
4. The method as claimed in claim 3, characterized in that the operation address is a write operation address, and the directly performing the corresponding operation on the combined cache region includes:
writing new data information and directory information into the hit cache line, updating the directory state information and data state information in the description information of the combined cache region, and setting the valid flag to valid.
5. The method as claimed in claim 4, characterized by further comprising:
receiving an operation address sent by the cache agent (Cache Agent); when it is judged according to the operation address that the combined cache region misses, using the Least Recently Used (LRU) algorithm to select a cache line to be replaced in the combined cache region, and judging, according to the directory state information and data state information of the cache line to be replaced, whether the directory information and data information in the cache line to be replaced need to be written back to memory;
setting the valid flag of the cache line to be replaced to invalid.
6. A memory data caching method, characterized by comprising:
opening up a directory cache and a data cache in the memory of a home agent (Home Agent), the cache lines in the directory cache not being in one-to-one correspondence with the cache lines in the data cache, a cache line of the directory cache being used to record the state information and position information of the associated cache line of the data cache;
receiving an operation address sent by a cache agent (Cache Agent), judging according to the operation address whether the directory cache and the data cache hit and are valid, and if so, directly performing the corresponding operations on the directory cache and the data cache.
7. The method as claimed in claim 6, characterized in that the directory cache includes directory description information, the directory description information including a comparison field, directory state information, a data information valid flag, a data information identity and a directory valid flag; and the data cache includes data description information, the data description information including data state information and a data valid flag;
the receiving an operation address sent by a cache agent (Cache Agent) and judging according to the operation address whether the directory cache and the data cache hit and are valid includes:
querying, according to the operation address, whether a matching comparison field exists in the directory cache, and if one exists, the directory cache hits;
judging, according to the directory valid flag of the hit cache line in the directory cache, whether the hit cache line is valid, and if so, judging according to the data information valid flag of the hit cache line whether the data cache hits;
judging, according to the data valid flag in the hit cache line of the data cache, whether the hit cache line in the data cache is valid.
8. The method as claimed in claim 6 or 7, characterized in that the operation address is a read operation address, and the directly performing the corresponding operations on the directory cache and the data cache includes:
directly returning, to the cache agent, the directory information in the hit cache line of the directory cache and the data information in the hit cache line of the data cache.
9. The method as claimed in claim 8, characterized in that the operation address is a write operation address, and the directly performing the corresponding operations on the directory cache and the data cache includes:
writing new directory information into the hit cache line of the directory cache and writing new data information into the hit cache line of the data cache.
10. The method as claimed in claim 8, characterized by further comprising:
when it is judged according to the data information valid flag of the hit cache line of the data cache that the data cache misses, using the Least Recently Used (LRU) algorithm to evict a cache line from the data cache so as to obtain an idle cache line.
11. A memory data caching device, characterized by comprising:
a caching allocation module, configured to open up, in the memory of a home agent (Home Agent), a combined cache region including at least a directory cache and a data cache, the cache lines in the directory cache being in one-to-one correspondence with the cache lines in the data cache, a cache line of the directory cache being used to record the state information and position information of the associated cache line of the data cache;
an operation executing module, configured to receive an operation address sent by a cache agent (Cache Agent), judge according to the operation address whether the combined cache region hits and is valid, and if so, directly perform the corresponding operation on the combined cache region.
12. The device as claimed in claim 11, characterized in that the operation executing module includes:
a hit-and-valid judging unit, configured to query, according to the operation address, whether a matching comparison field exists in the combined cache region, and if one exists, the combined cache region hits; and
judge, according to the valid flag of the hit cache line in the combined cache region, whether the hit cache line is valid; wherein the combined cache region includes description information, and the description information includes: a comparison field, directory state information, data state information and a valid flag.
13. The apparatus as claimed in claim 12, characterized in that said operation execution module comprises:
a read operation execution unit, configured to return, directly to said cache agent, the directory information and the data information in the cache line of said hit.
14. The apparatus as claimed in claim 13, characterized in that said operation execution module comprises:
a write operation execution unit, configured to write new data information and directory information into the cache line of said hit, update the directory status information and the data status information in the description information of said combined buffer area, and set the valid flag bit to valid.
15. The apparatus as claimed in claim 14, characterized in that it further comprises:
a replacement module, configured to receive an operation address sent by a Cache Agent; when it is judged, according to said operation address, that said combined buffer area is missed, select a cache line to be replaced in said combined buffer area using a least recently used (LRU) algorithm, judge, according to the directory status information and the data status information of said cache line to be replaced, whether the directory information and the data information in said cache line to be replaced need to be written back to memory, and set the valid flag bit of said cache line to be replaced to invalid.
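The combined buffer area of apparatus claims 11 to 15 can be illustrated with a short sketch. This is a hedged, simplified model rather than the patented implementation: `CombinedBufferArea`, its field names, and the string-valued states are all illustrative, and the comparison field is modeled as a plain address tag.

```python
from collections import OrderedDict

class CombinedBufferArea:
    """Sketch of the combined buffer area of claims 11-15: each cache line
    pairs directory information with data information and carries a
    comparison field (tag), directory/data state, and a valid flag."""

    def __init__(self, capacity, memory):
        self.capacity = capacity
        self.memory = memory            # backing store: address -> (dir, data)
        self.lines = OrderedDict()      # tag -> line; insertion order tracks LRU

    def _hit_and_valid(self, address):
        # Hit: a matching comparison field exists; valid: its flag bit is set.
        line = self.lines.get(address)
        return line if line is not None and line["valid"] else None

    def read(self, address):
        line = self._hit_and_valid(address)
        if line is None:
            return None                 # miss: caller falls back to memory
        self.lines.move_to_end(address) # refresh LRU position
        return line["directory"], line["data"]

    def write(self, address, directory, data):
        line = self._hit_and_valid(address)
        if line is None:
            line = self._allocate(address)
        line.update(directory=directory, data=data,
                    dir_state="dirty", data_state="dirty", valid=True)
        self.lines.move_to_end(address)

    def _allocate(self, address):
        if len(self.lines) >= self.capacity:
            # LRU replacement: evict the least recently used line and, when
            # its state says so, write its contents back to memory first.
            victim_tag, victim = self.lines.popitem(last=False)
            if victim["valid"] and "dirty" in (victim["dir_state"],
                                               victim["data_state"]):
                self.memory[victim_tag] = (victim["directory"], victim["data"])
            victim["valid"] = False     # invalidate the replaced line
        line = {"directory": None, "data": None,
                "dir_state": "clean", "data_state": "clean", "valid": False}
        self.lines[address] = line
        return line
```

On a miss the LRU victim is written back only when its directory or data state marks it dirty, mirroring the write-back decision of claim 15.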
16. A memory data buffering apparatus, characterized in that it comprises:
a cache allocation module, configured to open up a directory cache and a data cache in the memory of a Home Agent, wherein the cache lines in said directory cache do not correspond one-to-one with the cache lines in said data cache, and a cache line of the directory cache records the status information and location information of the associated cache line of the data cache; and
an operation execution module, configured to receive an operation address sent by a Cache Agent, judge, according to said operation address, whether said directory cache and said data cache are hit and valid, and if so, perform the corresponding operation directly on said directory cache and said data cache.
17. The apparatus as claimed in claim 16, characterized in that said operation execution module comprises:
a hit-and-validity judging unit, configured to query, according to said operation address, whether a matching comparison field exists in said directory cache, and if so, determine that said directory cache is hit;
to judge, according to the directory valid flag bit of the hit cache line in said directory cache, whether the cache line of said hit is valid, and if so, to judge, according to the data information flag of the cache line of said hit, whether said data cache is hit; and
to judge, according to the data valid flag bit in the hit cache line of said data cache, whether the hit cache line in said data cache is valid; wherein said directory cache comprises directory description information, said directory description information comprising a comparison field, directory status information, a data information flag, a data information identity flag, and a directory valid flag bit; and said data cache comprises data description information, said data description information comprising data status information and a data valid flag bit.
18. The apparatus as claimed in claim 16 or 17, characterized in that said operation execution module comprises:
a read operation execution unit, configured to return, directly to said cache agent, the directory information in the cache line of the hit directory cache and the data information in the cache line of the hit data cache.
19. The apparatus as claimed in claim 18, characterized in that said operation execution module comprises:
a write operation execution unit, configured to write new directory information into the cache line of the hit directory cache and write new data information into the cache line of the hit data cache.
20. The apparatus as claimed in claim 19, characterized in that it further comprises:
a replacement module, configured to, when it is judged according to the data information flag of the cache line of said hit data cache that said data cache is missed, evict an idle cache line from said data cache using a least recently used (LRU) algorithm.
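Apparatus claims 16 to 20 drop the one-to-one correspondence: a directory line may remain valid while its associated data line has been evicted, so a directory hit can coexist with a data-cache miss. A hedged Python sketch under the same caveats as above (class and field names are illustrative, not from the patent):

```python
from collections import OrderedDict

class DecoupledCache:
    """Sketch of claims 16-20: directory lines and data lines are not in
    one-to-one correspondence; each directory line records whether an
    associated data line exists and, if so, which one."""

    def __init__(self, data_capacity):
        self.directory = {}               # tag -> directory line
        self.data = OrderedDict()         # data-line id -> data line (LRU order)
        self.data_capacity = data_capacity
        self._next_id = 0

    def read(self, address):
        d = self.directory.get(address)
        if d is None or not d["dir_valid"]:
            return None                   # directory miss
        data_info = None
        if d["data_id"] is not None:      # data information flag is set
            line = self.data.get(d["data_id"])
            if line is not None and line["data_valid"]:
                self.data.move_to_end(d["data_id"])
                data_info = line["data"]
        return d["dir_info"], data_info   # data_info is None on a data miss

    def write(self, address, dir_info, data_info):
        d = self.directory.setdefault(
            address, {"dir_info": None, "data_id": None, "dir_valid": False})
        d.update(dir_info=dir_info, dir_valid=True)
        if d["data_id"] is None or d["data_id"] not in self.data:
            if len(self.data) >= self.data_capacity:
                # Data cache full: evict the least recently used data line.
                self.data.popitem(last=False)
            d["data_id"] = self._next_id
            self._next_id += 1
        self.data[d["data_id"]] = {"data": data_info, "data_valid": True}
        self.data.move_to_end(d["data_id"])
```

Because eviction touches only the data cache, `read` can still return the directory information with `None` for the data: that is the directory-hit, data-miss case which claim 20 resolves by LRU replacement in the data cache alone.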
CN201210578615.3A 2012-12-27 2012-12-27 Memory data buffering method and apparatus Active CN103076992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210578615.3A CN103076992B (en) 2012-12-27 2012-12-27 Memory data buffering method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210578615.3A CN103076992B (en) 2012-12-27 2012-12-27 Memory data buffering method and apparatus

Publications (2)

Publication Number Publication Date
CN103076992A CN103076992A (en) 2013-05-01
CN103076992B true CN103076992B (en) 2016-09-28

Family

ID=48153532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210578615.3A Active CN103076992B (en) Memory data buffering method and apparatus

Country Status (1)

Country Link
CN (1) CN103076992B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252421A (en) * 2013-06-25 2014-12-31 华为技术有限公司 Caching method and caching device
CN104331377B (en) * 2014-11-12 2018-06-26 浪潮(北京)电子信息产业有限公司 A kind of Directory caching management method of multi-core processor system
CN107992357A (en) * 2016-10-26 2018-05-04 华为技术有限公司 Memory pool access method and multicomputer system
CN108108312A (en) * 2016-11-25 2018-06-01 华为技术有限公司 A kind of cache method for cleaning and processor
CN109669897B (en) * 2017-10-13 2023-11-17 华为技术有限公司 Data transmission method and device
CN111339210B (en) * 2018-12-18 2023-04-28 杭州海康威视数字技术股份有限公司 Data clustering method and device
WO2021103020A1 (en) * 2019-11-29 2021-06-03 华为技术有限公司 Cache memory and method for allocating write operation
CN112114756B (en) * 2020-09-27 2022-04-05 海光信息技术股份有限公司 Storage system and electronic device
CN112698968B (en) * 2020-12-28 2024-06-11 艾体威尔电子技术(北京)有限公司 Received data processing method between asynchronous communication
CN114036084B (en) * 2021-11-17 2022-12-06 海光信息技术股份有限公司 Data access method, shared cache, chip system and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063264B (en) * 2009-11-18 2012-08-29 成都市华为赛门铁克科技有限公司 Data processing method, equipment and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01125639A (en) * 1987-11-11 1989-05-18 Fujitsu Ltd Disk cache control system
JP5235692B2 (en) * 2009-01-15 2013-07-10 三菱電機株式会社 Data access device and data access program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063264B (en) * 2009-11-18 2012-08-29 成都市华为赛门铁克科技有限公司 Data processing method, equipment and system

Also Published As

Publication number Publication date
CN103076992A (en) 2013-05-01

Similar Documents

Publication Publication Date Title
CN103076992B (en) Memory data buffering method and apparatus
US10248572B2 (en) Apparatus and method for operating a virtually indexed physically tagged cache
US6023747A (en) Method and system for handling conflicts between cache operation requests in a data processing system
US8996812B2 (en) Write-back coherency data cache for resolving read/write conflicts
US8909871B2 (en) Data processing system and method for reducing cache pollution by write stream memory access patterns
US7669010B2 (en) Prefetch miss indicator for cache coherence directory misses on external caches
US6725341B1 (en) Cache line pre-load and pre-own based on cache coherence speculation
US6295582B1 (en) System and method for managing data in an asynchronous I/O cache memory to maintain a predetermined amount of storage space that is readily available
JP3309425B2 (en) Cache control unit
JP4065586B2 (en) Link list formation method
US7600077B2 (en) Cache circuitry, data processing apparatus and method for handling write access requests
US6321297B1 (en) Avoiding tag compares during writes in multi-level cache hierarchy
US7281092B2 (en) System and method of managing cache hierarchies with adaptive mechanisms
US10725923B1 (en) Cache access detection and prediction
US9645931B2 (en) Filtering snoop traffic in a multiprocessor computing system
EP0936554B1 (en) Cache coherency protocol including a hovering (H) state having a precise mode and an imprecise mode
JP2001507845A (en) Prefetch management in cache memory
US10540283B2 (en) Coherence de-coupling buffer
CN107341114A Directory management method, node controller and system
KR20180122969A (en) A multi processor system and a method for managing data of processor included in the system
US6345339B1 (en) Pseudo precise I-cache inclusivity for vertical caches
KR100851738B1 (en) Reverse directory for facilitating accesses involving a lower-level cache
US7010649B2 (en) Performance of a cache by including a tag that stores an indication of a previously requested address by the processor not stored in the cache
US10740233B2 (en) Managing cache operations using epochs
JP2023527735A (en) Inter-core cache stashing and target discovery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200423

Address after: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee after: HUAWEI TECHNOLOGIES Co.,Ltd.

Address before: 301, A building, room 3, building 301, foreshore Road, No. 310053, Binjiang District, Zhejiang, Hangzhou

Patentee before: Hangzhou Huawei Digital Technology Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211227

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: xFusion Digital Technologies Co., Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right