CN1797326A - Control circuit and its control method - Google Patents

Control circuit and its control method

Info

Publication number
CN1797326A
Authority
CN
China
Prior art keywords
data
cache
mentioned
unit
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2004100821423A
Other languages
Chinese (zh)
Other versions
CN100445944C (en)
Inventor
关诚司
龟丸敏久
根岸博康
小原淳子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Priority to CNB2004100821423A priority Critical patent/CN100445944C/en
Publication of CN1797326A publication Critical patent/CN1797326A/en
Application granted granted Critical
Publication of CN100445944C publication Critical patent/CN100445944C/en
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a control circuit and a control method that replace data with a low access probability in a cache memory, so that data with a high access probability can be prefetched. A cache hit judgment unit reads object data from a main memory unit when it judges that the object data used in the operation processing of an operation processing unit does not hit the cache memory. In addition, when the cache hit judgment unit judges that the cache memory is hit, an invalid data judgment unit judges whether the cache line containing the object data is the same as the cache line containing the data used in the previous operation processing; if not, a prefetch control unit replaces the cache line containing the data used in the previous operation processing with data stored in the main memory unit, thereby performing a prefetch operation.

Description

Control circuit and control method
Technical field
The present invention relates to a control circuit and a control method for cache memory control.
Background art
In conventional prefetch control circuits, which store data in the cache memory in advance, data that has already been referenced once is not invalidated but kept, so in systems where the probability of re-referencing once-referenced data is low, the cache hit rate is low and supplying data takes time.
There is also an example, in Patent Document 1, of discarding once-referenced data at replacement time. The prefetch method and circuit described in Patent Document 1 adopt a prefetch scheme in which, when prefetched data is removed from the prefetch buffer, data that has not yet been referenced is kept and data that has already been referenced is discarded.
[Patent Document 1] Japanese Unexamined Patent Application Publication No. H8-292913
In the scheme of Patent Document 1, once-referenced data is discarded from the prefetch buffer but not from the instruction cache, so such data remains in the cache memory even when the probability of it being referenced again is low. Furthermore, because a prefetch buffer must be used, the scheme cannot be realized with few hardware resources; it uses many hardware resources, which raises the chip cost of the LSI (Large Scale Integration, i.e., large-scale integrated circuit). The scheme of Patent Document 1 also has the problem that data skipped because of a branch and not referenced, whose probability of being referenced in the future is low, is nonetheless kept rather than treated as replaceable.
Summary of the invention
The present invention has been made in view of the above problems. Its purpose is to provide a prefetch control circuit that, for example, stores in the cache memory data that is being read now or will be read in the future, is therefore especially effective in systems containing much data whose probability of being referenced again after one reference is low, and can be realized with fewer hardware resources.
A control circuit of the present invention is characterized by comprising: a main memory unit that stores data; a cache memory that reads in the data stored in the main memory unit in units of a specific size and stores it as cache lines; an operation processing unit that receives data stored in the cache memory and performs operation processing based on the received data; a cache hit judgment unit that judges whether object data, which is the data used in the operation processing performed by the operation processing unit, gives a cache hit, meaning the object data is stored in the cache memory, or a cache miss, meaning the object data is not stored in the cache memory, and that, when a cache miss is judged, fetches the object data from the main memory unit in units of the specific size and stores it in the cache memory as a cache line; a data judgment unit that, when the cache hit judgment unit judges a cache hit, judges whether the cache line containing the object data is the same as the cache line containing the data used in the previous operation processing; and a control unit that performs the following control: when the data judgment unit judges that the cache line containing the object data differs from the cache line containing the data used in the previous operation processing, the control unit fetches data stored in the main memory unit in units of the specific size, replaces the cache line containing the data used in the previous operation processing with the fetched data of the specific size, and stores it in the cache memory as a cache line; when the data judgment unit judges that the cache line containing the object data is the same as the cache line containing the data used in the previous operation processing, the control unit does not replace the cache line, stored in the cache memory, that contains the data used in the previous operation processing.
The above control unit is characterized in that, when replacing a cache line with data stored in the main memory unit and storing it in the cache memory, it fetches the data stored in the area of the main memory unit contiguous with the data corresponding to the cache lines other than the cache line being replaced, and stores the fetched data in the cache memory by replacing the cache line with it.
The above cache memory is characterized in that it repeatedly reads in, in units of the specific size, the data stored in a contiguous area of the main memory unit and stores it as cache lines in consecutive entries, and in that, when the data judgment unit judges that the cache line containing the object data differs from the cache line containing the data used in the previous operation processing, the control unit repeatedly fetches data stored in the main memory unit in units of the specific size and stores the plurality of fetched data units in the cache memory by replacing the cache lines in the entries from the entry of the cache line containing the data used in the previous operation processing up to the entry immediately before the entry of the cache line containing the object data.
A control circuit of the present invention is characterized by further comprising an instruction inhibit unit that analyzes the data contained in the cache lines stored in the cache memory and, when the analysis detects at least one of data representing a branch instruction and data representing an end instruction, causes the control unit to refrain from replacing a cache line stored in the cache memory with data stored in the main memory unit.
The above operation processing unit is characterized by comprising: a cache access unit that receives data from the cache memory; a decoding unit that decodes the data received by the cache access unit; and an arithmetic unit that performs operation processing based on the data decoded by the decoding unit; the cache access unit receives from the cache memory the data that follows the data already received, analyzes the received data, and, when the analyzed data represents a branch instruction, fetches from the main memory unit, in parallel with the operation processing of the arithmetic unit, the data stored at the branch target address indicated by the analyzed data and stores it in the cache memory.
A control circuit of the present invention is characterized by comprising: a main memory unit that stores data; a cache memory that reads in the data stored in the main memory unit in units of a specific size as cache lines and stores it; an operation processing unit that receives data stored in the cache memory and performs operation processing based on the received data; a cache hit judgment unit that judges whether object data, which is the data used in the operation processing performed by the operation processing unit, gives a cache hit or a cache miss and, when a cache miss is judged, fetches the object data from the main memory unit in units of the specific size and stores it in the cache memory as a cache line; and a control unit that performs the following control: when the cache hit judgment unit judges a cache miss, the control unit fetches, in units of the specific size, the data stored in the area of the main memory unit contiguous with the data corresponding to the cache line stored in the cache memory, replaces a cache line other than the cache line stored by the cache hit judgment unit with the fetched data of the specific size, and stores it in the cache memory as a cache line.
A control circuit of the present invention is characterized by comprising: a main memory unit that stores data; a cache memory that reads in the data stored in the main memory unit in units of a specific size as cache lines, stores it, and manages each stored cache line as valid or invalid; an operation processing unit that receives data stored in the cache memory and performs operation processing based on the received data; a control unit that, before the operation processing performed by the operation processing unit, fetches data stored in the main memory unit in units of the specific size, replaces an invalid cache line with the fetched data of the specific size, and stores it in the cache memory as a cache line; a reference value storage unit that stores at least one of a reference value for the access frequency to the cache memory and a reference value for the number of valid cache lines stored in the cache memory; a measurement unit that measures at least one of the access frequency to the cache memory and the number of valid cache lines stored in the cache memory; and a measurement inhibit unit that inhibits the control performed by the control unit in at least one of the case where the access frequency to the cache memory measured by the measurement unit is at or below the access frequency reference value stored in the reference value storage unit and the case where the number of valid cache lines measured by the measurement unit is at or above the valid cache line number reference value stored in the reference value storage unit.
A control circuit of the present invention is characterized by comprising: a main memory unit that stores data; a cache memory that reads in and stores the data stored in the main memory unit; a cache access unit that receives data from the cache memory; a decoding unit that decodes the data received by the cache access unit; and an arithmetic unit that performs operation processing based on the data decoded by the decoding unit; in parallel with the operation processing performed by the arithmetic unit, the cache access unit and the arithmetic unit fetch from the main memory unit the data stored at an address generated in that operation processing and store it in the cache memory.
A control method of the present invention is characterized in that: data is stored in a main memory unit; the data stored in the main memory unit is read in as cache lines in units of a specific size and stored in a cache memory; data stored in the cache memory is received and operation processing is performed based on the received data; it is judged whether object data, which is the data used in the operation processing, gives a cache hit, meaning the object data is stored in the cache memory, or a cache miss, meaning the object data is not stored in the cache memory; when a cache miss is judged, the object data is fetched from the main memory unit in units of the specific size and stored in the cache memory as a cache line; when a cache hit is judged, it is judged whether the cache line containing the object data is the same as the cache line containing the data used in the previous operation processing; when the cache line containing the object data differs from the cache line containing the data used in the previous operation processing, data stored in the main memory unit is fetched in units of the specific size, the cache line containing the data used in the previous operation processing is replaced with the fetched data of the specific size, and it is stored in the cache memory as a cache line; and when the cache line containing the object data is the same as the cache line containing the data used in the previous operation processing, the cache line, stored in the cache memory, that contains the data used in the previous operation processing is not replaced.
A control method of the present invention is characterized in that: data is stored in a main memory unit; the data stored in the main memory unit is read in as cache lines in units of a specific size and stored in a cache memory; data stored in the cache memory is received and operation processing is performed based on the received data; it is judged whether object data, which is the data used in the operation processing, gives a cache hit or a cache miss; when a cache miss is judged, the object data is fetched from the main memory unit in units of the specific size and stored in the cache memory as a cache line; and, further when a cache miss is judged, the data stored in the area of the main memory unit contiguous with the data corresponding to the cache line stored in the cache memory is fetched in units of the specific size, a cache line other than the cache line in which the object data fetched from the main memory unit in units of the specific size was stored is replaced with the fetched data of the specific size, and it is stored in the cache memory as a cache line.
A control method of the present invention is characterized in that: data is stored in a main memory unit; the data stored in the main memory unit is read in as cache lines in units of a specific size and stored in a cache memory, and each stored cache line is managed as valid or invalid; data stored in the cache memory is received and operation processing is performed based on the received data; before the operation processing, data stored in the main memory unit is fetched in units of the specific size, an invalid cache line is replaced with the fetched data of the specific size, and it is stored in the cache memory as a cache line; at least one of a reference value for the access frequency to the cache memory and a reference value for the number of valid cache lines stored in the cache memory is stored in a reference value storage unit; at least one of the access frequency to the cache memory and the number of valid cache lines stored in the cache memory is measured; and, in at least one of the case where the measured access frequency to the cache memory is at or below the access frequency reference value stored in the reference value storage unit and the case where the measured number of valid cache lines is at or above the valid cache line number reference value stored in the reference value storage unit, the fetching of data stored in the main memory unit in units of the specific size before the operation processing, the replacing of an invalid cache line with the fetched data of the specific size, and the storing of it in the cache memory as a cache line are inhibited.
A control method of the present invention is characterized in that: data is stored in a main memory unit; the data stored in the main memory unit is read in and stored in a cache memory; data is received from the cache memory; the received data is decoded; operation processing is performed based on the decoded data; and when data is received from the cache memory, the data stored at the address indicated by an address generated in the operation processing is fetched from the main memory unit in parallel with the operation processing and stored in the cache memory.
According to the present invention, a prefetch control circuit can be provided that is especially effective in, for example, systems containing much data whose probability of being referenced again after one reference is low, and that can be realized with fewer hardware resources.
Description of drawings
Fig. 1 is a block diagram showing the configuration of the prefetch control circuit 100 in Embodiment 1.
Fig. 2 is a flowchart showing the operation of the prefetch control circuit 100 in Embodiment 1.
Fig. 3 is a configuration diagram of the operation processing unit 1 of Embodiment 4.
Fig. 4 is a diagram showing the operation of the operation processing unit 1 in Embodiment 4.
Fig. 5 is a block diagram showing the configuration of the prefetch control circuit 100 in Embodiment 5.
Fig. 6 is a flowchart showing the operation of the measurement prefetch inhibit unit 8 in Embodiment 5.
Fig. 7 is a block diagram showing the configuration of the prefetch control circuit 100 in Embodiment 6.
Fig. 8 is a hardware block diagram of the prefetch control circuit 100 in the embodiments.
Embodiment 1
Fig. 1 is a block diagram showing the configuration of the prefetch control circuit 100 in Embodiment 1.
In Fig. 1, reference numeral 1 denotes an operation processing unit that accesses the cache memory 3 to read in data and performs operations on the read data.
Reference numeral 2 denotes a cache hit judgment unit that, when the operation processing unit 1 accesses the cache memory 3, judges whether the target data exists in the cache memory 3.
Reference numeral 3 denotes a cache memory that stores data in units of cache lines.
Reference numeral 4 denotes an invalid data judgment unit that sets cache lines stored in the cache memory 3 to invalid according to the access pattern to the cache memory 3.
Reference numeral 5 denotes a prefetch control unit that, when both valid cache lines and invalid cache lines exist in the cache memory 3, derives the main memory address of the prefetch target data from the address of a valid cache line, reads the prefetch target data from the main memory unit 7, and stores it in the cache memory 3.
Reference numeral 6 denotes a main memory control unit that reads data from the main memory unit 7 on a cache miss and when there is a prefetch request.
Reference numeral 7 denotes a main memory unit that stores various data such as instruction data and operand data. Instruction data is data representing the content of the operation instructions executed by the operation processing unit 1, and operand data is the data operated on by those instructions.
Here, a cache line means data of a specific size that the cache memory 3 stores and manages.
Generally, the cache memory 3, which is provided to compensate for the difference in processing speed between the CPU (central processing unit) and the main memory unit 7, manages cache lines in entries, and each entry has a valid bit, a tag address, and a cache line.
The valid bit indicates whether the entry is valid or invalid, and the tag address indicates the address, in the main memory unit 7, at which the cache line is stored.
The cache memory 3 then identifies and outputs the access target data contained in a cache line by searching the valid entries for the tag address corresponding to the main memory address of the access target data.
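As a rough illustration of the entry organization just described (a valid bit, a tag address, and the cache line data), the following C sketch models a small fully associative cache and its tag lookup. The sizes, type names, and the cache_lookup() helper are illustrative assumptions and are not taken from the patent itself.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE   32   /* bytes per cache line (assumed "specific size unit") */
#define NUM_ENTRIES 8    /* number of entries in the cache (assumed)            */

/* One entry: valid bit, tag address, and the cache line data itself. */
typedef struct {
    bool     valid;
    uint32_t tag;                 /* line-aligned main memory address of the data */
    uint8_t  line[LINE_SIZE];
} cache_entry_t;

typedef struct {
    cache_entry_t entry[NUM_ENTRIES];
} cache_t;

/* Search the valid entries for the tag matching the access address.
 * Returns the entry index on a hit, or -1 on a miss.                 */
static int cache_lookup(const cache_t *c, uint32_t addr)
{
    uint32_t tag = addr & ~(uint32_t)(LINE_SIZE - 1);
    for (int i = 0; i < NUM_ENTRIES; i++) {
        if (c->entry[i].valid && c->entry[i].tag == tag)
            return i;
    }
    return -1;
}
```

The later sketches in this description reuse these types and constants.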
The operation is explained below.
Fig. 2 is a flowchart showing the operation of the prefetch control circuit 100 in Embodiment 1.
Here, assume that the cache memory 3 holds data that is stored in an area of the main memory unit 7 indicated by contiguous addresses, and that this is the data stored at the previous access to the main memory unit 7. Also assume that the valid bit of each entry indicates valid, that is, the data has been read in and is in the valid state.
First, the operation processing unit 1 accesses the cache memory 3 and reads data (step S1).
Here, the cache hit judgment unit 2 outputs the main memory address of the access target data of the operation processing unit 1 to the cache memory 3 and, based on the output of the cache memory 3, judges whether the access target data exists in the cache memory 3 (step S2).
When the access target data exists in the cache memory 3, the cache hit judgment unit 2 judges a cache hit, takes the access target data out of the cache memory 3, and outputs it to the operation processing unit 1 (step S3).
Next, the invalid data judgment unit 4 judges whether the cache line containing the access target data of the operation processing unit 1 is the same cache line as that of the data accessed last time (step S4).
When the invalid data judgment unit 4 judges that the data of this access and the data of the previous access are in the same cache line, processing ends. Accordingly, no prefetch operation occurs (step S9).
When the invalid data judgment unit 4 judges that the access target data of this access and the data of the previous access are in different cache lines, the invalid data judgment unit 4 sets the valid bit of the cache line containing the previously accessed data to invalid, invalidating that entry. Here, the cache line containing the access target data of this access is assumed to be the cache line stored in the entry following the one that contains the previously accessed data (step S5).
Next, the prefetch control unit 5 judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
At this point, the cache line of the previous access was invalidated in step S5 and all other cache lines remain valid, so the prefetch control unit 5 detects that both a valid cache line and an invalid cache line exist.
Next, in order to read data from the main memory unit 7 into the invalid cache line, the prefetch control unit 5 generates the prefetch target address from the address of a valid cache line (step S7).
The prefetch control unit 5 replaces the data of the cache line invalidated at the previous access. To store in the cache memory 3 the data stored in the area indicated by contiguous addresses in the main memory unit 7, it takes as the prefetch target the address following the main memory address of the data corresponding to the valid cache line stored in the entry immediately preceding the invalid cache line.
Next, the prefetch control unit 5 sends an access request to the main memory control unit 6 in order to read the data corresponding to the prefetch target address generated in step S7. The main memory control unit 6 then reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of the stored cache line to valid (step S8).
By performing this prefetch, contiguous data corresponding to a new address can be stored in the cache memory 3.
After performing the prefetch of step S8, the prefetch control unit 5 again judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
Because the cache line containing the previously accessed data was replaced with a new cache line and became valid in step S8, no invalid cache line remains in the cache memory 3. Accordingly, no further prefetch is performed and processing ends (step S9).
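The hit-path flow just described (steps S4 to S9) can be summarized in the following C sketch. It reuses the cache_t type and constants from the earlier sketch; read_main_memory() is an assumed helper standing in for the main memory control unit 6, and the entry choices follow the assumption above that the new access falls in the entry following the previous one.

```c
/* Assumed helper: the main memory control unit 6 reads one line at `addr`. */
extern void read_main_memory(uint32_t addr, uint8_t line[LINE_SIZE]);

/* Steps S4-S9 on a cache hit: if this access moved to a new cache line,
 * invalidate the previously used line and prefetch the next contiguous
 * main memory line into it.                                               */
static void prefetch_on_hit(cache_t *c, int hit_entry, int prev_entry)
{
    if (hit_entry == prev_entry)          /* S4: same line as last time -> S9, no prefetch */
        return;

    c->entry[prev_entry].valid = false;   /* S5: invalidate the previously accessed line   */

    /* S6-S8: the single invalid entry is refilled with the data that follows,
     * in the main memory unit 7, the line held in the entry just before it.   */
    int inv = prev_entry;
    int before = (inv + NUM_ENTRIES - 1) % NUM_ENTRIES;
    if (c->entry[before].valid) {
        uint32_t prefetch_addr = c->entry[before].tag + LINE_SIZE;   /* S7 */
        read_main_memory(prefetch_addr, c->entry[inv].line);         /* S8 */
        c->entry[inv].tag   = prefetch_addr;
        c->entry[inv].valid = true;
    }
    /* S6 again: no invalid line remains, so prefetching ends (S9). */
}
```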
As described above, when the cache line of the data accessed by the operation processing unit 1 changes, the prefetch control circuit 100 in Embodiment 1 can store, in the cache line that no longer needs to be referenced, new data that may be referenced in the future, in the cache memory 3.
In addition, data used for rendering and displaying images has the characteristic that, once it has been read from the main memory unit 7 and displayed, the probability of it being referenced again is low. Therefore, in a system that renders and displays images, storing new data gives a higher cache hit rate than keeping already-referenced data in the cache memory 3.
Accordingly, by using the prefetch control circuit explained in Embodiment 1 in, for example, a system that renders and displays images, data access can be speeded up and the processing performance of the system can be improved.
Furthermore, by replacing an invalid cache line with the data located, in the main memory unit 7, at the position following the data corresponding to a valid cache line, the prefetch control circuit 100 in Embodiment 1 can store data from a contiguous area of the main memory unit 7 in the cache memory 3 as new data that may be referenced in the future.
In addition, in a system that renders and displays images, a series of data making up part of a picture is usually stored in a contiguous area of the main memory unit 7, so storing data from a contiguous area of the main memory unit 7 in the cache memory 3 can improve the cache hit rate.
Accordingly, by using the prefetch control circuit 100 explained in Embodiment 1 in, for example, a system that renders and displays images, data access can be speeded up and the processing performance of the system can be improved.
In addition, the prefetch control circuit 100 in Embodiment 1 needs no flag other than the valid bit of the cache memory 3 for prefetch control. Furthermore, because no extra memory is needed to store prefetched data outside the cache memory 3, a circuit that controls the cache memory 3 can be realized with fewer hardware resources.
However, a flag separate from the valid bit of the cache memory 3 may be provided, and a memory separate from the cache memory 3 may also be provided.
Embodiment 2
In Embodiment 1 above, the prefetch operation was explained for the case where the operation processing unit 1 accesses the cache memory 3 and a cache hit occurs.
In Embodiment 2, the prefetch operation in the case where the operation processing unit 1 accesses the cache memory 3 and a cache miss occurs is explained with reference to Fig. 2.
As in Embodiment 1 above, in Fig. 2 the operation processing unit 1 accesses the cache memory 3 (step S1), and the cache hit judgment unit 2 judges whether the access target data of the operation processing unit 1 exists in the cache memory 3 (step S2).
When the accessed data does not exist in the cache memory and a cache miss occurs, the invalid data judgment unit 4 judges all cache lines to be invalid and clears the valid bits of all cache lines (step S10).
Next, the cache hit judgment unit 2 sends an access request to the main memory control unit in order to read the data of the cache line corresponding to the missed address. The main memory control unit 6 reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of this cache line to valid (step S11).
In addition, after the data has been read from the main memory unit 7, the cache hit judgment unit 2 outputs the access target data read in step S11 to the operation processing unit 1 (step S12).
From here on, the same processing as in Embodiment 1 above is performed.
The prefetch control unit 5 judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
At this point, only the cache line read from the main memory unit 7 in step S11 after the cache miss is valid and all others are invalid, so the prefetch control unit 5 detects that both a valid cache line and an invalid cache line exist.
The prefetch control unit 5 generates the prefetch target address from the address of the valid cache line (step S7).
The prefetch control unit 5 generates the prefetch target address in order to read from the main memory unit 7 the data that follows the data corresponding to the valid cache line, replace an invalid cache line with it, and store it in the cache memory 3. Here, the cache line of the entry following the valid cache line is selected as the invalid cache line to be replaced, and the address of the data located, in the main memory unit 7, at the position following the data of the valid cache line is set as the prefetch target address.
Next, the prefetch control unit 5 sends an access request to the main memory control unit 6 in order to read the data corresponding to the prefetch target address generated in step S7. The main memory control unit 6 then reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of the stored cache line to valid (step S8).
By performing this prefetch, contiguous data corresponding to a new address can be stored in the cache memory 3.
After performing the prefetch of step S8, the prefetch control unit 5 again judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
At this point, only two cache lines are valid, namely the cache line read from the main memory unit 7 after the cache miss was judged and the cache line that was just prefetched, and the rest are invalid. Therefore, the prefetch control unit 5 detects that both a valid cache line and an invalid cache line exist.
The prefetch control unit 5 generates the prefetch target address in order to read from the main memory unit 7 the data that follows the data corresponding to the valid cache lines, replace an invalid cache line with it, and store it in the cache memory 3. Here, the cache line of the entry following the two valid cache lines is selected as the invalid cache line to be replaced, and the address of the data located, in the main memory unit 7, at the position following the data of the second valid cache line is set as the prefetch target address.
Next, the prefetch control unit 5 sends an access request to the main memory control unit 6 in order to read the data corresponding to the prefetch target address generated in step S7. The main memory control unit 6 then reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of the stored cache line to valid (step S8).
By performing this prefetch, contiguous data corresponding to a new address can be stored in the cache memory 3.
As described above, the series of prefetch steps from step S6 to step S8 is repeated until no invalid cache line remains. When no invalid cache line remains, no further prefetch operation is performed and processing ends (step S9).
By performing prefetch processing in this way, data stored in a contiguous area of the main memory unit 7 can be stored in the cache memory 3.
As described above, when the data requested by the operation processing unit 1 does not exist in the cache memory 3 and a cache miss occurs, it is judged that there is a high probability that the data stored in the area of the main memory unit 7 contiguous with the requested data is also absent from the cache memory 3, so all cache lines are set to invalid, and by performing prefetch the data stored in that contiguous area can be brought into the cache memory 3.
Thus, in a system that renders and displays images, for example, a series of data making up part of a picture is usually stored in a contiguous area of the main memory unit 7, so storing data from the contiguous area of the main memory unit 7 in the cache memory 3 improves the cache hit rate and speeds up the data access of the operation processing unit 1.
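As a rough software model of the miss-handling flow of this Embodiment 2 (steps S10 to S12 followed by the repeated steps S6 to S8), the following C sketch fills the cache sequentially; it reuses the types, constants, and the assumed read_main_memory() helper from the earlier sketches, and the entry ordering is an illustrative assumption.

```c
/* Cache miss handling (Embodiment 2): invalidate every line, fetch the missed
 * line, then keep prefetching the following contiguous lines until no invalid
 * entry remains (steps S10, S11, S12, then S6-S8 repeated).                   */
static void handle_miss(cache_t *c, uint32_t miss_addr)
{
    uint32_t line_addr = miss_addr & ~(uint32_t)(LINE_SIZE - 1);

    for (int i = 0; i < NUM_ENTRIES; i++)          /* S10: all lines invalid     */
        c->entry[i].valid = false;

    read_main_memory(line_addr, c->entry[0].line); /* S11: fetch the missed line */
    c->entry[0].tag   = line_addr;
    c->entry[0].valid = true;
    /* S12: the fetched data is output to the operation processing unit 1 here. */

    /* S6-S8 repeated: fill each following entry with the next contiguous line. */
    for (int i = 1; i < NUM_ENTRIES; i++) {
        uint32_t prefetch_addr = c->entry[i - 1].tag + LINE_SIZE;    /* S7 */
        read_main_memory(prefetch_addr, c->entry[i].line);           /* S8 */
        c->entry[i].tag   = prefetch_addr;
        c->entry[i].valid = true;
    }
}
```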
Embodiment 3
In Embodiment 1 above, the prefetch operation was explained for the case where the operation processing unit 1 accesses the data of the cache line in the entry following the cache line accessed last time.
In Embodiment 3, the prefetch operation is explained with reference to Fig. 2 for the case where the operation processing unit 1 accesses, with a cache hit, the data of a cache line that is several entries away from the cache line accessed last time, rather than the cache line of the next entry.
Here, assume that the cache memory 3 holds, in the valid state, data that is stored in the main memory unit 7 contiguously with the data accessed last time.
The operation processing unit 1 accesses the cache memory 3 (step S1), and the cache hit judgment unit 2 judges whether the access target data of the operation processing unit 1 exists in the cache memory 3 (step S2).
When the access target data does not exist in the cache memory and a cache miss occurs, the invalid data judgment unit 4 sets all cache lines to invalid (step S10), and the subsequent processing is the operation shown in Embodiment 2 above.
When the access target data exists in the cache memory 3, the cache hit judgment unit 2 takes the data out of the cache memory 3 and outputs it to the operation processing unit 1 (step S3).
Next, the invalid data judgment unit 4 judges whether the access target data of the operation processing unit 1 is in the same cache line as the data accessed last time (step S4).
When the invalid data judgment unit 4 judges that the data of this access is in the same cache line as the data of the previous access, processing ends. Accordingly, no prefetch operation occurs (step S9).
When the invalid data judgment unit 4 judges that the data of this access is in a cache line that differs from that of the previous access, and moreover is not in the cache line of the entry following the previously accessed cache line but in a cache line several entries away, the invalid data judgment unit 4 invalidates the cache lines from the previously accessed cache line up to the one immediately before the cache line of this access by clearing their valid bits. In Embodiment 1 above, the cache line following the previously accessed cache line was the access target (step S5), so only the previously accessed cache line was invalidated. In Embodiment 3, because a cache line several lines away from the previous access is the access target, the cache lines skipped by the operation processing unit 1 are invalidated together with the previously accessed cache line.
Next, the prefetch control unit 5 judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
Because the cache lines from the previously accessed cache line up to the one immediately before the cache line of this access are invalid and the rest are valid, the prefetch control unit 5 detects that both a valid cache line and an invalid cache line exist.
Next, the prefetch control unit 5 generates the prefetch target address from the address of a valid cache line (step S7).
The prefetch control unit 5 generates the prefetch target address in order to read from the main memory unit 7 the data that follows the data corresponding to a valid cache line, replace an invalid cache line with it, and store it in the cache memory 3. Here, the previously accessed cache line is selected as the invalid cache line to be replaced, and the address of the data located, in the main memory unit 7, at the position following the data of the valid cache line stored in the entry immediately preceding the invalid cache line being replaced is set as the prefetch target address.
Next, the prefetch control unit 5 sends an access request to the main memory control unit 6 in order to read from the main memory unit 7 the data corresponding to the prefetch target address generated in step S7. The main memory control unit 6 reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of this cache line to valid (step S8).
By performing this prefetch, contiguous data corresponding to a new address can be stored in the cache memory 3.
After performing the prefetch of step S8, the prefetch control unit 5 again judges whether both a valid cache line and an invalid cache line exist in the cache memory 3 (step S6).
Because the cache lines from the previously accessed cache line up to the one immediately before the cache line of this access were invalidated in step S5 and only one cache line was made valid in step S8, the prefetch control unit 5 detects that both a valid cache line and an invalid cache line exist.
Next, the prefetch control unit 5 generates the prefetch target address in order to read from the main memory unit 7 the data stored in the area contiguous with the data of a valid cache line, replace an invalid cache line with it, and store it in the cache memory 3. Here, the cache line of the entry following the cache line made valid in step S8 above is selected as the invalid cache line to be replaced, and the address of the data located, in the main memory unit 7, at the position following the data of the valid cache line stored in the entry immediately preceding the invalid cache line being replaced is set as the prefetch target address.
Next, the prefetch control unit 5 sends an access request to the main memory control unit 6 in order to read from the main memory unit 7 the data corresponding to the generated prefetch target address. The main memory control unit 6 reads the data from the main memory unit 7, stores it in the cache memory 3, and sets the valid bit of this cache line to valid (step S8).
By performing this prefetch, data that is stored in the main memory unit 7 contiguously with the data of the valid cache lines can be stored in the cache memory 3.
Thereafter, the series of prefetch steps from step S6 to step S8 described above is repeated until no invalid cache line remains. When no invalid cache line remains, the prefetch operation ends (step S9).
By performing prefetch processing in this way, data stored in a contiguous area of the main memory unit 7 can be stored in the cache memory 3.
When the operation processing unit 1 accesses not the cache line following the previously accessed cache line but data several cache lines away, and the cache hit judgment unit 2 judges a cache hit, the probability that the cache lines skipped by the operation processing unit 1 will be accessed is judged to be low. Accordingly, in this Embodiment 3, the cache lines from the previously accessed cache line up to the one immediately before the cache line of this access are invalidated, and prefetch is then performed so that the data stored in the main memory unit 7 in the area contiguous with the data of this access can be read into the cache memory 3.
For example, in a system that renders and displays images, a series of data making up part of a picture is usually stored in a contiguous area of the main memory unit 7, so the probability of accessing the contiguously stored data is high and the probability of accessing the skipped cache lines is low. Accordingly, by removing the skipped data from the cache memory 3 and storing in the cache memory 3 the data in the area contiguous with the newly accessed data, the hit rate can be improved and the operation processing unit 1 can perform data access at high speed.
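Under the same assumptions as the earlier sketches (entries laid out in access order, read_main_memory() standing in for the main memory control unit 6), the skipped-line handling of this Embodiment 3 might look like the following C sketch; the wrap-around indexing is an illustrative choice, not something fixed by the patent.

```c
/* Embodiment 3: a hit landed several entries past the previous access.
 * Invalidate the previously used entry and every skipped entry (step S5),
 * then refill each of them with the line that follows, in main memory,
 * the line held in the entry just before it (steps S6-S8 repeated).       */
static void prefetch_on_far_hit(cache_t *c, int hit_entry, int prev_entry)
{
    /* S5: invalidate from the previous entry up to the one before the hit. */
    for (int i = prev_entry; i != hit_entry; i = (i + 1) % NUM_ENTRIES)
        c->entry[i].valid = false;

    /* S6-S8 repeated: refill each invalid entry in order. */
    for (int i = prev_entry; i != hit_entry; i = (i + 1) % NUM_ENTRIES) {
        int before = (i + NUM_ENTRIES - 1) % NUM_ENTRIES;
        uint32_t prefetch_addr = c->entry[before].tag + LINE_SIZE;   /* S7 */
        read_main_memory(prefetch_addr, c->entry[i].line);           /* S8 */
        c->entry[i].tag   = prefetch_addr;
        c->entry[i].valid = true;
    }
}
```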
Embodiment 4
In the embodiments above, the prefetch was explained for the case where the operation processing unit 1 accesses the cache memory according to the result of operation processing.
In Embodiment 4, the prefetch operation is explained for the case where the operation processing unit 1 computes, during operation processing, the address of the data needed next from a branch instruction or from operation data and performs a pre-read access to the cache memory.
Fig. 3 is a configuration diagram of the operation processing unit 1 in this Embodiment 4.
In Fig. 3, reference numeral 11 denotes a cache access unit that reads data from the cache memory 3 and separates it into instruction data and operand data.
Reference numeral 12 denotes an instruction decoding unit that decodes the instruction data separated out by the cache access unit 11.
Reference numeral 13 denotes an arithmetic unit that performs the operation processing of operation instructions according to the instructions decoded by the instruction decoding unit 12.
Reference numeral 14 denotes a pre-read access unit that computes the address of the data needed next from a branch instruction or from operation data during operation processing and issues a pre-read access request to the cache access unit 11.
Fig. 4 is a diagram showing the operation of the operation processing unit 1 in Embodiment 4.
Regarding the operation of the operation processing unit 1 in Embodiment 4, the case where the operation processing unit 1 accesses the cache memory 3 normally and the case where it performs a pre-read access are explained with reference to Fig. 4.
First, the operation of the operation processing unit 1 when it performs a normal access to the cache memory 3 rather than a pre-read access is explained.
In the operation processing unit 1, the cache access unit 11 reads the data held in the cache memory 3 and outputs the read data to the instruction decoding unit 12 (step S101).
The instruction decoding unit 12 decodes the instruction data that has been read and outputs the decoded instruction data to the arithmetic unit. In addition, when the result of decoding shows that the instruction data is an instruction that needs operand data, the instruction decoding unit requests the cache access unit 11 to read the operand data (step S102).
After the operand data has been read, the cache access unit 11 reads the next (second) data from the cache memory 3 (step S103).
In addition, once the arithmetic unit 13 has read the instruction data and the operand data and can perform the operation, it carries out the operation processing (step S104).
During the operation processing of the arithmetic unit 13, the instruction decoding unit 12 decodes the next (second) instruction data. Then, when the instruction data is an instruction that needs operand data, it requests the cache access unit 11 to read the operand data (step S105).
After the operand data has been read, the cache access unit 11 reads the next (third) data from the cache memory 3 (step S106).
When the arithmetic unit 13 finishes the operation processing for the first instruction data, it receives the next (second) instruction decoded by the instruction decoding unit 12 and starts operation processing (step S107).
The instruction decoding unit 12 then decodes the next (third) data. In addition, when the instruction data is an instruction that needs operand data, it requests the cache access unit 11 to read the operand data (step S108).
After the operand data has been read, the cache access unit 11 reads the next (fourth) data from the cache memory 3.
In this way, the operation processing unit 1 continues processing while reading instruction data and operand data from the cache memory 3. When the operation processing unit 1 reads an end instruction or receives an end signal, it stops reading instruction data and operand data from the cache memory 3.
Next, the operation of the operation processing unit 1 when it performs a pre-read access to the cache memory 3 is explained.
Assume that a branch instruction is stored in the third instruction data of the normal access example described above.
In the normal access example described above, the third instruction data is read from the cache memory 3 into the cache access unit 11 while the arithmetic unit 13 is processing the first instruction (step S1061).
Thereafter, during the period until the arithmetic unit 13 finishes the operation for the first instruction, the pre-read access unit 14 recognizes the branch instruction in the third instruction data and issues a pre-read access request for the data at the branch address contained in the instruction data (step S1062).
For the branch address contained in the pre-read access request issued by the pre-read access unit 14 of the operation processing unit 1, when the cache hit judgment unit 2 judges a cache miss, the prefetch operation shown in Embodiment 2 above is performed and the data stored in the contiguous area starting at the branch address is stored in the cache memory 3 (step S1063).
When the arithmetic unit 13 finishes the operation processing for the first instruction, it starts operation processing on the second instruction data (step S107).
In parallel with this, the instruction decoding unit 12 decodes the next (third) data. As a result of the decoding, the instruction decoding unit 12 recognizes the branch instruction. It then requests the cache access unit 11 to read the data at the branch address and accesses the cache memory 3 (step S108).
Here, because the data corresponding to the branch address has already been stored in the cache memory 3 by the pre-read access unit 14, the cache hit judgment unit 2 obtains a cache hit and the data at the branch address can be taken into the cache access unit 11 of the operation processing unit 1 immediately.
The pre-read access example above reads the data at the branch address indicated by a branch instruction in advance, but the operation processing unit 1 may also perform a pre-read access to the cache memory 3 for the data at an address generated in the course of operation processing.
As described above, because the operation processing unit 1 computes, during operation processing, the address of the data needed next from a branch instruction or from operation data and performs a pre-read access to the cache memory 3, the data needed next can be brought into the cache memory 3 before the branch target data following the operation processing is read. Accordingly, delays caused by cache misses for accesses from the operation processing unit 1 can be reduced, and operation can be speeded up.
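A minimal software model of the decision made by the pre-read access unit 14 might look like the following; the instruction encoding (opcode and branch target fields) and the prefetch_range() helper are assumptions used only for illustration, since the patent does not fix an instruction format.

```c
#include <stdint.h>

/* Assumed toy instruction encoding: an opcode plus a branch target address. */
typedef struct {
    enum { OP_ALU, OP_BRANCH, OP_END } opcode;
    uint32_t branch_target;
} instruction_t;

/* Assumed helper: Embodiment-2 style prefetch of the lines starting at `addr`,
 * performed when the address misses the cache (steps S1062 and S1063).        */
extern void prefetch_range(uint32_t addr);

/* Pre-read access unit 14: while the arithmetic unit 13 is still busy with an
 * earlier instruction, look at an instruction fetched ahead of time and, if it
 * is a branch, request the branch target data before it is actually needed.    */
static void pre_read_access(const instruction_t *fetched_ahead)
{
    if (fetched_ahead->opcode == OP_BRANCH)
        prefetch_range(fetched_ahead->branch_target);
}
```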
Embodiment 5
In the embodiments above, the case was explained in which the prefetch control unit 5 performs the prefetch operation until no invalid cache line remains.
In this Embodiment 5, the case is explained in which the prefetch operation is inhibited when the frequency of read accesses from the operation processing unit 1 is low and the number of valid cache lines stored in the cache memory 3 is large.
Fig. 5 is a block diagram showing the configuration of the prefetch control circuit 100 in Embodiment 5.
In Fig. 5, reference numerals 1 to 7 are the same as explained in Embodiment 1 above. However, when the prefetch control unit 5 has received a prefetch inhibit request, it does not perform the prefetch operation even if a valid cache line and an invalid cache line exist.
Reference numeral 8 denotes a measurement prefetch inhibit unit that inhibits the prefetch operation when the frequency of read accesses from the operation processing unit 1 to the cache memory 3 is low and the number of valid cache lines is at or above a reference value.
Reference numeral 80 denotes a measurement unit comprising a frequency measurement unit 81 and a valid-count measurement unit 82.
Reference numeral 81 denotes a frequency measurement unit that measures the frequency of read accesses from the operation processing unit 1 to the cache memory 3.
Reference numeral 82 denotes a valid-count measurement unit that measures the number of valid cache lines in the cache memory 3.
Reference numeral 83 denotes a reference value storage unit that stores an access frequency reference value used to judge that the access frequency is low and a reference value for the number of valid cache lines.
Fig. 6 is a flowchart showing the operation of the measurement prefetch inhibit unit 8 in Embodiment 5.
The operation of the measurement prefetch inhibit unit 8 in this Embodiment 5 is explained with reference to Fig. 6.
Here, assume that the reference value storage unit 83 holds an access frequency reference value for judging that the frequency of read accesses from the operation processing unit 1 to the cache memory 3 is low, and a valid-count reference value indicating that a sufficient number of valid cache lines exists.
When the value measured by the frequency measurement unit 81 is larger than the access frequency reference value stored in the reference value storage unit 83, the measurement prefetch inhibit unit 8 does not issue a prefetch inhibit request to the prefetch control unit 5 (step S201).
When the value measured by the frequency measurement unit 81 is at or below the access frequency reference value, the measurement prefetch inhibit unit 8 judges whether the value measured by the valid-count measurement unit 82 is at or above the valid-count reference value stored in the reference value storage unit 83, and when it is below the valid-count reference value, does not issue a prefetch inhibit request to the prefetch control unit 5 (step S202).
When the value measured by the frequency measurement unit 81 is at or below the access frequency reference value and, in addition, the value measured by the valid-count measurement unit 82 is at or above the valid-count reference value, the measurement prefetch inhibit unit 8 issues a prefetch inhibit request to the prefetch control unit 5 (step S203).
When the prefetch control unit 5 has received a prefetch inhibit request from the measurement prefetch inhibit unit 8, it does not generate a prefetch address and does not perform the prefetch operation even if it detects a valid cache line and an invalid cache line in step S6 of Fig. 2.
In addition, after the measurement prefetch inhibit unit 8 has issued a prefetch inhibit request to the prefetch control unit 5, when the value measured by the frequency measurement unit 81 becomes larger than the access frequency reference value or the value measured by the valid-count measurement unit 82 becomes smaller than the valid-count reference value, the measurement prefetch inhibit unit 8 issues a prefetch inhibit release request to the prefetch control unit 5.
Here, the above access frequency reference value and valid-count reference value are arbitrary values.
In the above description, the case was explained in which prefetch is inhibited when the access frequency is at or below its reference value and the number of valid cache lines is at or above its reference value. However, prefetch may also be inhibited when only the access frequency is at or below its reference value, or when only the number of valid cache lines is at or above its reference value.
When the access frequency is low, the need to read multiple data items from the main memory unit 7 and store them in the cache memory 3 is low, so it is desirable to inhibit the accesses to the main memory unit 7 caused by the prefetch operation as described above.
For example, when a read of the data at the branch target address of a branch instruction causes a cache miss, all the data stored in the cache memory 3 becomes useless, so the wasteful prefetch fetches from the main memory unit 7 performed up to that point can be kept to a minimum.
As a result, accesses to the main memory unit 7 caused by the prefetch operation can be reduced, so the power consumed by the prefetch operation can be reduced.
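The decision of Fig. 6 (steps S201 to S203) amounts to a pair of threshold comparisons. The C sketch below assumes counter values supplied by the frequency measurement unit 81 and the valid-count measurement unit 82, with reference values held in the reference value storage unit 83; the struct and function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Reference value storage unit 83 (the values themselves are arbitrary). */
typedef struct {
    uint32_t access_freq_ref;   /* at or below this, the access frequency is "low" */
    uint32_t valid_count_ref;   /* at or above this, "enough" valid lines exist     */
} reference_values_t;

/* Measurement prefetch inhibit unit 8: returns true when a prefetch inhibit
 * request should be issued, i.e. the measured access frequency is at or below
 * its reference value AND the measured valid line count is at or above its
 * reference value (steps S201 to S203).                                        */
static bool prefetch_inhibited(uint32_t measured_freq, uint32_t measured_valid,
                               const reference_values_t *ref)
{
    if (measured_freq > ref->access_freq_ref)      /* S201: frequency high enough */
        return false;
    if (measured_valid < ref->valid_count_ref)     /* S202: too few valid lines   */
        return false;
    return true;                                   /* S203: issue inhibit request */
}
```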
Embodiment 6
In embodiment 5 above, the case was described in which the prefetch operation is prohibited when the frequency of read accesses from the operation processing unit 1 is low and the number of valid cache lines is large.
In embodiment 6, a case is described in which branch instructions and end instructions are detected in the data read into the cache memory 3 from the main memory unit 7, and the prefetch operation is prohibited when such an instruction is present.
Fig. 7 is a block diagram showing the configuration of the prefetch control circuit 100 in embodiment 6.
In Fig. 7, elements 1 to 7 are the same as those described in embodiment 1. However, when the prefetch control unit 5 has received a prefetch prohibition request, the prefetch operation is not performed even if a valid cache line and an invalid cache line exist.
Reference numeral 9 denotes an instruction prefetch prohibition unit that detects branch instructions and end instructions in the data read into the cache memory 3 from the main memory unit 7, prohibits the prefetch operation when such an instruction is present, and ends the prohibition control when the cache memory 3 stores an instruction that is neither a branch instruction nor an end instruction.
Reference numeral 91 denotes a branch/end detection unit that detects branch instructions and end instructions in the data read from the main memory unit 7.
The operation is described below.
When reading data from the main memory unit 7, the main memory control unit 6 sends the data to the cache memory 3 and to the instruction prefetch prohibition unit 9.
The branch/end detection unit 91 analyzes the data sent to the instruction prefetch prohibition unit 9 and identifies whether a branch instruction or an end instruction is present.
When neither a branch instruction nor an end instruction is present in the data sent to the instruction prefetch prohibition unit 9, the instruction prefetch prohibition unit 9 does not issue a prefetch prohibition request to the prefetch control unit 5.
When at least one of a branch instruction and an end instruction is present in the data sent to the instruction prefetch prohibition unit 9, the instruction prefetch prohibition unit 9 issues a prefetch prohibition request to the prefetch control unit 5.
When the prefetch control unit 5 has received a prefetch prohibition request from the instruction prefetch prohibition unit 9, it does not generate a prefetch address and does not perform the prefetch operation, even if a valid cache line and an invalid cache line are detected in step S6 of Fig. 2.
In addition, after the instruction prefetch prohibition unit 9 has issued a prefetch prohibition request to the prefetch control unit 5, when the cache memory 3 stores instruction data that is neither a branch instruction nor an end instruction, the instruction prefetch prohibition unit 9 issues a prefetch prohibition release request to the prefetch control unit 5.
When the cache memory 3 stores a branch instruction or an end instruction, even if the data at the addresses following it is also stored in the cache memory 3, the probability that it will be accessed by the operation processing unit 1 is low. By the above method, wasteful accesses to the main memory unit 7 caused by the prefetch operation can be prevented.
For example, when a cache miss occurs on a read of the data at the branch target address of a branch instruction, all of the data stored in the cache memory 3 becomes useless, so the wasteful fetching from the main memory unit 7 by the prefetch operation that would otherwise be performed up to that point can be kept to a minimum. Thus, since accesses to the main memory unit 7 caused by the prefetch operation can be reduced, the power consumed by the prefetch operation can be reduced.
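The check performed by the branch/end detection unit 91 can be sketched in C as follows. The opcode field width and the values OPCODE_BRANCH and OPCODE_END are hypothetical, since the patent does not specify an instruction encoding; this is a sketch under those assumptions, not the circuit itself.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical opcode values and field layout; the patent does not specify
 * an instruction encoding. */
#define OPCODE_BRANCH 0x2Au
#define OPCODE_END    0x3Fu

/* Scan one cache line's worth of instruction words, as the branch/end
 * detection unit 91 does, and report whether a prefetch prohibition request
 * should be issued for this line. */
static bool line_contains_branch_or_end(const uint32_t *words, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t opcode = words[i] >> 26;   /* assumed 6-bit opcode field */
        if (opcode == OPCODE_BRANCH || opcode == OPCODE_END)
            return true;                    /* request prohibition        */
    }
    return false;                           /* no prohibition / release   */
}
```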
Fig. 8 is a hardware configuration diagram of the prefetch control circuit 100 in each of the embodiments described above.
In Fig. 8, the prefetch control circuit 100 includes a CPU 911 that executes programs. The CPU 911 is connected to a ROM 913, a RAM 914, and a magnetic disk device 920 via a bus 912.
The RAM 914 is an example of a volatile memory. The ROM 913 and the magnetic disk device 920 are examples of nonvolatile memories. These are examples of a storage device or a storage unit.
The above-mentioned cache memory 3 mainly uses a static RAM as its storage medium, and the main memory unit 7 uses a dynamic RAM and the magnetic disk device 920 as its storage media.
The above-mentioned operation processing unit 1 performs arithmetic processing using a register 915 as a storage medium.
What is described as a "... unit" in the explanation of the embodiments described above may of course be realized by firmware stored in the ROM 913. Alternatively, it may be realized by software alone, by hardware alone, by a combination of software and hardware, or further by a combination with firmware.
In addition, a program implementing the embodiments described above may be stored using the magnetic disk device 920 or another recording medium such as an FD, an optical disc, a CD, an MD, or a DVD.

Claims (12)

1. A control circuit, characterized by comprising:
a main memory unit that stores data;
a cache memory that reads in data stored in the above-mentioned main memory unit in units of a specific size as cache lines and stores the data;
an operation processing unit that receives data stored in the above-mentioned cache memory and performs arithmetic processing based on the received data;
a cache hit determination unit that determines whether object data, being the data used in the arithmetic processing performed by the above-mentioned operation processing unit, results in a cache hit, indicating that the object data is stored in the above-mentioned cache memory, or in a cache miss, indicating that the object data is not stored in the above-mentioned cache memory, and that, when a cache miss is determined, obtains the object data from the above-mentioned main memory unit in units of the specific size and stores it in the above-mentioned cache memory as a cache line;
a data determination unit that, when the above-mentioned cache hit determination unit determines a cache hit, determines whether the cache line containing the object data is the same as the cache line containing the data used in the previous arithmetic processing; and
a control unit that, when the above-mentioned data determination unit determines that the cache line containing the object data differs from the cache line containing the data used in the previous arithmetic processing, performs control to obtain data stored in the above-mentioned main memory unit in units of the specific size, replace the cache line containing the data used in the previous arithmetic processing with the obtained data of the specific size, and store the obtained data in the above-mentioned cache memory as a cache line, and that, when the above-mentioned data determination unit determines that the cache line containing the object data is the same as the cache line containing the data used in the previous arithmetic processing, performs control not to replace the cache line, stored in the above-mentioned cache memory, containing the data used in the previous arithmetic processing.
2. The control circuit according to claim 1, characterized in that:
when replacing a cache line with the data stored in the above-mentioned main memory unit and storing it in the above-mentioned cache memory, the above-mentioned control unit obtains data stored in an area of the above-mentioned main memory unit contiguous with the data corresponding to that cache line, replaces a cache line other than the cache line replaced with the data stored in the above-mentioned main memory unit with the obtained data, and stores the obtained data in the above-mentioned cache memory.
3. The control circuit according to claim 1, characterized in that:
the above-mentioned cache memory repeatedly reads in, as cache lines in units of the specific size, data stored in a contiguous area of the above-mentioned main memory unit and stores the cache lines in consecutive entries, and
when the above-mentioned data determination unit determines that the cache line containing the object data differs from the cache line containing the data used in the previous arithmetic processing, the above-mentioned control unit repeatedly obtains data stored in the above-mentioned main memory unit in units of the specific size, replaces the cache lines in the entries from the entry of the cache line containing the data used in the previous arithmetic processing up to the entry immediately preceding the entry of the cache line containing the object data with the obtained plural pieces of data of the specific size, and stores them in the above-mentioned cache memory.
4. The control circuit according to claim 1, characterized by further comprising:
an instruction prohibition unit that analyzes the data contained in a cache line stored in the above-mentioned cache memory and, when at least one of data representing a branch instruction and data representing an end instruction is detected as a result of the analysis, prohibits the operation by which the above-mentioned control unit replaces a cache line already stored in the above-mentioned cache memory with data stored in the above-mentioned main memory unit and stores it in the above-mentioned cache memory.
5. The control circuit according to claim 1, characterized in that:
the above-mentioned operation processing unit comprises:
a cache access unit that receives data from the above-mentioned cache memory;
a decoding unit that decodes the data received by the above-mentioned cache access unit; and
an arithmetic unit that performs arithmetic processing based on the data decoded by the above-mentioned decoding unit, wherein
the above-mentioned cache access unit receives from the above-mentioned cache memory the data following the data it has already received, analyzes the received data, and, when the analyzed data is data representing a branch instruction, obtains, in parallel with the arithmetic processing of the above-mentioned arithmetic unit, the data stored at the branch target address indicated by the analyzed data from the above-mentioned main memory unit and stores it in the above-mentioned cache memory.
6. A control circuit, characterized by comprising:
a main memory unit that stores data;
a cache memory that reads in data stored in the above-mentioned main memory unit in units of a specific size as cache lines and stores the data;
an operation processing unit that receives data stored in the above-mentioned cache memory and performs arithmetic processing based on the received data;
a cache hit determination unit that determines whether object data, being the data used in the arithmetic processing performed by the above-mentioned operation processing unit, results in a cache hit, indicating that the object data is stored in the above-mentioned cache memory, or in a cache miss, indicating that the object data is not stored in the above-mentioned cache memory, and that, when a cache miss is determined, obtains the object data from the above-mentioned main memory unit in units of the specific size and stores it in the above-mentioned cache memory as a cache line;
wherein, when the above-mentioned cache hit determination unit determines a cache miss, the above-mentioned cache hit determination unit obtains, in units of the specific size, data stored in an area of the above-mentioned main memory unit contiguous with the data corresponding to the cache line stored in the above-mentioned cache memory, replaces a cache line other than the cache line stored by the above-mentioned cache hit determination unit with the obtained data of the specific size, and stores it in the above-mentioned cache memory as a cache line.
7. A control circuit, characterized by comprising:
a main memory unit that stores data;
a cache memory that reads in data stored in the above-mentioned main memory unit in units of a specific size as cache lines, stores the data, and manages whether each stored cache line is valid or invalid; an operation processing unit that receives data stored in the above-mentioned cache memory and performs arithmetic processing based on the received data;
a control unit that, before the arithmetic processing performed by the above-mentioned operation processing unit, performs control to obtain data stored in the above-mentioned main memory unit in units of the specific size, replace an invalid cache line with the obtained data of the specific size, and store it in the above-mentioned cache memory as a cache line;
a reference value storage unit that stores at least one of a reference value for the access frequency to the above-mentioned cache memory and a reference value for the number of valid cache lines stored in the above-mentioned cache memory;
a determination unit that measures at least one of the access frequency to the above-mentioned cache memory and the number of valid cache lines stored in the above-mentioned cache memory; and
a measurement prohibition unit that prohibits the control performed by the above-mentioned control unit in at least one of the case where the access frequency to the above-mentioned cache memory measured by the above-mentioned determination unit is less than or equal to the access-frequency reference value stored in the above-mentioned reference value storage unit and the case where the number of valid cache lines measured by the above-mentioned determination unit is greater than or equal to the valid cache line count reference value stored in the above-mentioned reference value storage unit.
8. A control circuit, characterized by comprising:
a main memory unit that stores data;
a cache memory that reads in and stores data stored in the above-mentioned main memory unit;
a cache access unit that receives data from the above-mentioned cache memory;
a decoding unit that decodes the data received by the above-mentioned cache access unit; and
an arithmetic unit that performs arithmetic processing based on the data decoded by the above-mentioned decoding unit, wherein
the above-mentioned cache access unit, in parallel with the arithmetic processing of the above-mentioned arithmetic unit, obtains from the above-mentioned main memory unit the data stored at the address generated in the arithmetic processing performed by the above-mentioned arithmetic unit, and stores it in the above-mentioned cache memory.
9. A control method, characterized by comprising:
storing data in a main memory unit;
reading in the data stored in the above-mentioned main memory unit in units of a specific size as cache lines and storing them in a cache memory;
receiving data stored in the above-mentioned cache memory and performing arithmetic processing based on the received data;
determining whether object data, being the data used in the arithmetic processing, results in a cache hit, indicating that the object data is stored in the above-mentioned cache memory, or in a cache miss, indicating that the object data is not stored in the above-mentioned cache memory, and, when a cache miss is determined, obtaining the object data from the above-mentioned main memory unit in units of the specific size and storing it in the above-mentioned cache memory as a cache line;
when a cache hit is determined, determining whether the cache line containing the object data is the same as the cache line containing the data used in the previous arithmetic processing; and
when it is determined that the cache line containing the object data differs from the cache line containing the data used in the previous arithmetic processing, performing control to obtain data stored in the above-mentioned main memory unit in units of the specific size, replace the cache line containing the data used in the previous arithmetic processing with the obtained data of the specific size, and store it in the above-mentioned cache memory as a cache line, and, when it is determined that the cache line containing the object data is the same as the cache line containing the data used in the previous arithmetic processing, performing control not to replace the cache line, stored in the above-mentioned cache memory, containing the data used in the previous arithmetic processing.
10. A control method, characterized by comprising:
storing data in a main memory unit;
reading in the data stored in the above-mentioned main memory unit in units of a specific size as cache lines and storing them in a cache memory;
receiving data stored in the above-mentioned cache memory and performing arithmetic processing based on the received data;
determining whether object data, being the data used in the above-mentioned arithmetic processing, results in a cache hit, indicating that the object data is stored in the above-mentioned cache memory, or in a cache miss, indicating that the object data is not stored in the above-mentioned cache memory, and, when a cache miss is determined, obtaining the object data from the above-mentioned main memory unit in units of the specific size and storing it in the above-mentioned cache memory as a cache line; and
when a cache miss is determined, obtaining, in units of the specific size, data stored in an area of the above-mentioned main memory unit contiguous with the data corresponding to the cache line stored in the above-mentioned cache memory, replacing a cache line other than the cache line in which the object data obtained from the above-mentioned main memory unit in units of the specific size has been stored with the obtained data of the specific size, and storing it in the above-mentioned cache memory as a cache line.
11. A control method, characterized by comprising:
storing data in a main memory unit;
reading in the data stored in the above-mentioned main memory unit in units of a specific size as cache lines, storing them in a cache memory, and managing whether each stored cache line is valid or invalid;
receiving data stored in the above-mentioned cache memory and performing arithmetic processing based on the received data;
before the arithmetic processing, performing control to obtain data stored in the above-mentioned main memory unit in units of the specific size, replace an invalid cache line with the obtained data of the specific size, and store it in the above-mentioned cache memory as a cache line;
storing, in a reference value storage unit, at least one of a reference value for the access frequency to the above-mentioned cache memory and a reference value for the number of valid cache lines stored in the above-mentioned cache memory;
measuring at least one of the access frequency to the above-mentioned cache memory and the number of valid cache lines stored in the above-mentioned cache memory; and
in at least one of the case where the measured access frequency to the above-mentioned cache memory is less than or equal to the access-frequency reference value stored in the above-mentioned reference value storage unit and the case where the measured number of valid cache lines is greater than or equal to the valid cache line count reference value stored in the above-mentioned reference value storage unit, prohibiting, before the arithmetic processing, the control of obtaining data stored in the above-mentioned main memory unit in units of the specific size, replacing an invalid cache line with the obtained data of the specific size, and storing it in the above-mentioned cache memory as a cache line.
12. A control method, characterized by comprising:
storing data in a main memory unit;
reading in data stored in the above-mentioned main memory unit and storing it in a cache memory;
receiving data from the above-mentioned cache memory;
decoding the received data;
performing arithmetic processing based on the decoded data; and,
when receiving data from the above-mentioned cache memory,
obtaining, in parallel with the arithmetic processing, from the above-mentioned main memory unit the data stored at the address indicated by the address generated in the arithmetic processing, and storing it in the above-mentioned cache memory.
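The control flow recited in claims 1 and 9, together with the contiguous prefetch of claim 2, can be modelled in software, for example as the following C sketch. The cache size, the fully associative lookup, the victim choice on a miss, and all identifier names are illustrative assumptions rather than details taken from the claims.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 16u                        /* the "specific size", in bytes      */
#define NUM_LINES  4u

static uint8_t  main_memory[1u << 16];       /* main memory unit                   */
static uint8_t  line_data[NUM_LINES][LINE_SIZE];
static uint32_t line_base[NUM_LINES];        /* line-aligned base address per line */
static bool     line_valid[NUM_LINES];

/* Copy one line of the specific size from main memory into cache entry idx. */
static void fill_line(unsigned idx, uint32_t base)
{
    memcpy(line_data[idx], &main_memory[base], LINE_SIZE);
    line_base[idx]  = base;
    line_valid[idx] = true;
}

/* Return the entry holding addr, or -1 on a cache miss. */
static int lookup(uint32_t addr)
{
    uint32_t base = addr & ~(LINE_SIZE - 1u);
    for (unsigned i = 0; i < NUM_LINES; i++)
        if (line_valid[i] && line_base[i] == base)
            return (int)i;
    return -1;
}

/* One read access by the operation processing unit; addr is assumed to lie
 * within main_memory.  prev_idx tracks the line that held the data used in
 * the previous arithmetic processing (-1 before the first access). */
static uint8_t access_data(uint32_t addr, int *prev_idx)
{
    int idx = lookup(addr);

    if (idx < 0) {                            /* cache miss: fetch the object data */
        idx = (int)((unsigned)(*prev_idx + 1) % NUM_LINES);  /* assumed victim     */
        fill_line((unsigned)idx, addr & ~(LINE_SIZE - 1u));
    } else if (idx != *prev_idx && *prev_idx >= 0) {
        /* Cache hit on a line other than the previously used one: that old line
         * is replaced by prefetching the data contiguous with the line just hit
         * (cf. claim 2).  Nothing is replaced when the two lines coincide.        */
        uint32_t next = line_base[idx] + LINE_SIZE;
        if (next + LINE_SIZE <= sizeof main_memory)
            fill_line((unsigned)*prev_idx, next);
    }

    *prev_idx = idx;
    return line_data[idx][addr % LINE_SIZE];
}
```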
CNB2004100821423A 2004-12-21 2004-12-21 Control circuit and its control method Expired - Fee Related CN100445944C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100821423A CN100445944C (en) 2004-12-21 2004-12-21 Control circuit and its control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2004100821423A CN100445944C (en) 2004-12-21 2004-12-21 Control circuit and its control method

Publications (2)

Publication Number Publication Date
CN1797326A true CN1797326A (en) 2006-07-05
CN100445944C CN100445944C (en) 2008-12-24

Family

ID=36818377

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100821423A Expired - Fee Related CN100445944C (en) 2004-12-21 2004-12-21 Control circuit and its control method

Country Status (1)

Country Link
CN (1) CN100445944C (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426261C (en) * 2006-09-08 2008-10-15 华为技术有限公司 High-speed memory pre-read method and device
CN101594420B (en) * 2008-05-26 2011-09-21 中兴通讯股份有限公司 Rapid access method of cellphone application module
CN102543187A (en) * 2011-12-30 2012-07-04 东莞市泰斗微电子科技有限公司 High efficiency reading serial Flash buffer control circuit
CN101652759B (en) * 2007-04-10 2012-11-28 国际商业机器公司 Programmable data prefetching method and system
CN102841778A (en) * 2011-06-22 2012-12-26 索尼公司 Memory management apparatus, memory management method, control program, and recording medium
CN104813293A (en) * 2012-11-28 2015-07-29 高通股份有限公司 Memory management using dynamically allocated dirty mask space
WO2015143658A1 (en) * 2014-03-27 2015-10-01 Intel Corporation Instruction and logic for filtering of software prefetching instructions
CN105144120A (en) * 2013-03-28 2015-12-09 惠普发展公司,有限责任合伙企业 Storing data from cache lines to main memory based on memory addresses
CN106575258A (en) * 2014-08-08 2017-04-19 三星电子株式会社 Electronic device, on-chip memory and method of operating the on-chip memory
CN111123841A (en) * 2018-10-31 2020-05-08 发那科株式会社 Numerical controller
CN114721726A (en) * 2022-06-10 2022-07-08 成都登临科技有限公司 Method for obtaining instructions in parallel by multithread group, processor and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0146059B1 (en) * 1995-04-11 1998-09-15 문정환 Command prefeth method and circuit using the non-referenced prefeth cache
JPH10105463A (en) * 1996-09-27 1998-04-24 Mitsubishi Electric Corp Cache system and replacement judgement method
JP3166827B2 (en) * 1997-03-28 2001-05-14 日本電気株式会社 External storage device and cache memory control method
US6016545A (en) * 1997-12-16 2000-01-18 Advanced Micro Devices, Inc. Reduced size storage apparatus for storing cache-line-related data in a high frequency microprocessor
JP2004240811A (en) * 2003-02-07 2004-08-26 Mitsubishi Electric Corp Information processor and prefetch method

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426261C (en) * 2006-09-08 2008-10-15 华为技术有限公司 High-speed memory pre-read method and device
CN101652759B (en) * 2007-04-10 2012-11-28 国际商业机器公司 Programmable data prefetching method and system
CN101594420B (en) * 2008-05-26 2011-09-21 中兴通讯股份有限公司 Rapid access method of cellphone application module
CN102841778A (en) * 2011-06-22 2012-12-26 索尼公司 Memory management apparatus, memory management method, control program, and recording medium
CN102543187A (en) * 2011-12-30 2012-07-04 东莞市泰斗微电子科技有限公司 High efficiency reading serial Flash buffer control circuit
CN102543187B (en) * 2011-12-30 2015-10-28 泰斗微电子科技有限公司 A kind of serial Flash buffer control circuit of efficient reading
CN104813293A (en) * 2012-11-28 2015-07-29 高通股份有限公司 Memory management using dynamically allocated dirty mask space
CN104813293B (en) * 2012-11-28 2017-10-31 高通股份有限公司 Use the memory management in the dirty mask space of dynamically distributes
CN105144120A (en) * 2013-03-28 2015-12-09 惠普发展公司,有限责任合伙企业 Storing data from cache lines to main memory based on memory addresses
CN105144120B (en) * 2013-03-28 2018-10-23 慧与发展有限责任合伙企业 The data from cache line are stored to main memory based on storage address
CN106030520A (en) * 2014-03-27 2016-10-12 英特尔公司 Instruction and logic for filtering of software prefetching instructions
WO2015143658A1 (en) * 2014-03-27 2015-10-01 Intel Corporation Instruction and logic for filtering of software prefetching instructions
CN106575258A (en) * 2014-08-08 2017-04-19 三星电子株式会社 Electronic device, on-chip memory and method of operating the on-chip memory
CN111123841A (en) * 2018-10-31 2020-05-08 发那科株式会社 Numerical controller
CN111123841B (en) * 2018-10-31 2023-08-29 发那科株式会社 Numerical controller
CN114721726A (en) * 2022-06-10 2022-07-08 成都登临科技有限公司 Method for obtaining instructions in parallel by multithread group, processor and electronic equipment
CN114721726B (en) * 2022-06-10 2022-08-12 成都登临科技有限公司 Method for multi-thread group to obtain instructions in parallel, processor and electronic equipment

Also Published As

Publication number Publication date
CN100445944C (en) 2008-12-24

Similar Documents

Publication Publication Date Title
CN1276358C (en) Memory
CN1934543A (en) Cache memory and control method thereof
CN1920952A (en) Information recording apparatus, information recording method and computer program
CN1797326A (en) Control circuit and its control method
CN101038554A (en) Software update method, update management program and information processing apparatus
CN1760836A (en) Information processing system, information processing method, and program
CN1313938C (en) Storage system, computer system and a method of establishing volume attribute
CN1993670A (en) Information processing device
CN1881183A (en) Information processing device, procedure control method and computer program
CN1690971A (en) Interrupt control apparatus
CN1532666A (en) Information processor, clock pulse control method and control program of said device
CN1591374A (en) Dma transfer controller
CN1947107A (en) Device for transmitting data between memories
CN1369094A (en) Disc drive for achieving improved audio and visual data transfer
CN1366633A (en) Disk memory device, data pre-head method, and recorded medium
CN1731402A (en) Method and apparatus for accelerating file system operation by using coprocessor
CN101065725A (en) Command supply device
CN1262122C (en) Encoding device and method
CN1506971A (en) Semiconductor device, image data processing apparatus and method
CN1898654A (en) Cache memory and its controlling method
CN1297905C (en) High speed buffer storage controller, its control method and computer system
CN1882923A (en) Cache memory and control method thereof
CN1649274A (en) Variable length decoding device and variable length decoding method and reproducing system
CN1940889A (en) Method, apparatus and program for management of access history, storage unit, and information processing apparatus
CN1786941A (en) Processing method of enhancing opening speed of word processing file

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081224

Termination date: 20161221

CF01 Termination of patent right due to non-payment of annual fee