CN101458665A - Second level cache and kinetic energy switch access method - Google Patents

Second level cache and kinetic energy switch access method

Info

Publication number
CN101458665A
Authority
CN
China
Prior art keywords
data
level cache
label
processing unit
central processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2007103021300A
Other languages
Chinese (zh)
Other versions
CN101458665B (en)
Inventor
黄启庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ali Corp
Original Assignee
Ali Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ali Corp filed Critical Ali Corp
Priority to CN2007103021300A priority Critical patent/CN101458665B/en
Publication of CN101458665A publication Critical patent/CN101458665A/en
Application granted granted Critical
Publication of CN101458665B publication Critical patent/CN101458665B/en
Expired - Fee Related
Anticipated expiration

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A method for dynamically switching second level cache access is applied to an application system, wherein the application system comprises a central processing unit having a first level cache. The method comprises: first adjusting the clock speed of the second level cache according to a power state of the central processing unit; then determining the ratio between the clock speeds of the second level cache and the central processing unit; and switching the access mode of the second level cache according to that ratio. By design, the clock speed of the second level cache is higher than the clock speed of the central processing unit. In this way, the access performance of the central processing unit is maintained while the number of second level cache accesses is reduced, which also achieves a power-saving effect.

Description

Second level cache and method for dynamically switching access
Technical field
The present invention relates to a method for dynamically switching memory access, and more particularly to a method and architecture for dynamically switching access to a second level cache (L2 Cache).
Background art
Please refer to Fig. 1, which is an architecture diagram of a prior-art bus application system. The bus application system 9 includes a system bus 90, a central processing unit 91, an image processor 92, an audio processor 93, and so on. Besides computer systems, other bus application systems 9, such as portable digital video devices, are also designed with a dynamic random access memory (DRAM) 94 for the on-board processors to hold temporary data, computation results, program code, and so on. However, this tends to cause the controllers on the bus application system 9 to contend for access to the DRAM 94, so the performance of the whole system drops significantly.
In addition, in current bus application systems 9 the clock of most processors is faster than the clock of the DRAM 94. To relieve the bottleneck each processor faces when accessing data in the DRAM 94, current processors all include a cache design, and some even adopt a multi-level cache architecture, so that the processor is not limited by the clock speed of the DRAM 94. This improves the efficiency of overall data transfer, and in practice the design can further operate at a clock faster than the clock (Bus Clock) of the system bus 90 supplied to each processor.
The first level cache (L1 Cache) 911 and second level cache (L2 Cache) 912 shown in the figure are exactly such a multi-level cache design. The first level cache 911 is built inside the central processing unit 91 and is very fast; it follows the so-called Harvard design, meaning that the space for instructions and the space for data are separated. In contrast, the second level cache 912 is placed outside the central processing unit 91 and usually does not distinguish between instruction space and data space; this is the so-called unified cache design.
At present, caches are mainly built from static random access memory (SRAM), so the power a cache consumes while operating is essentially the power consumed by its SRAM. Taking a first level cache designed with a two-way (2-way) architecture as an example, it is normally arranged as two pairs of SRAMs: one group of tag memories and one group of data memories. The tag memories are mainly used for tag comparison, while the data memories provide the temporary storage space for data access. When data is stored, after the tag memory has compared the tag, one of the data memories is selected to store it, and only the content held in that data memory is the data or instruction the processor actually needs.
However, under this architecture, all four SRAMs must be enabled simultaneously within one processor cycle (1 cycle) in order to return the data the processor needs; that is, comparing the tag and then selecting the correct data output must both be completed within one processor cycle. Although the data the processor needs is held in only one of the paired data memories, all four SRAMs are nevertheless accessed in every cycle.
Thus, for a first level cache under this architecture, the power consumed during operation can be expressed by the following relation:
(The original shows this relation as an equation image: roughly, the power of the four SRAMs enabled in every cycle, accumulated over the number of accesses.)
So the more memory accesses there are, the larger the power consumption required by the whole processor becomes.
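To make the power argument concrete, the following is a minimal behavioural sketch in C of the conventional two-way lookup described above, in which both tag SRAMs and both data SRAMs are read on every access and the hit way is only selected afterwards. All names and array sizes are illustrative assumptions, not taken from the patent; the access counter simply stands in for consumed power.

```c
#include <stdint.h>

#define WAYS 2
#define SETS 256

static unsigned long sram_accesses;   /* rough proxy for consumed power */

/* Conventional 2-way L1 lookup: all four SRAMs are enabled every cycle. */
static int l1_conventional_lookup(const uint32_t tag_sram[WAYS][SETS],
                                  const uint32_t data_sram[WAYS][SETS],
                                  uint32_t tag, uint32_t index,
                                  uint32_t *out)
{
    uint32_t tags[WAYS], data[WAYS];

    for (int way = 0; way < WAYS; way++) {
        tags[way] = tag_sram[way][index];    /* tag SRAM enabled  */
        data[way] = data_sram[way][index];   /* data SRAM enabled */
        sram_accesses += 2;                  /* four SRAM reads per lookup */
    }
    for (int way = 0; way < WAYS; way++) {
        if (tags[way] == tag) {              /* select the hit way */
            *out = data[way];
            return 1;                        /* hit */
        }
    }
    return 0;                                /* miss */
}
```

Because sram_accesses grows by four per lookup regardless of which way actually hits, the total grows in proportion to the access count, which is the relationship the formula above expresses.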
Moreover when the algorithm and the access scheduling of system applied to certain degree, and system effectiveness just has the generation of the design of aforementioned second level cache, with the speed and the efficient of further expedited data response can't rise to specific degrees relatively the time.But use second level cache will face the problem of power consumption stream equally, therefore to how to allow the processor that designs in the system that second level cache is arranged be continued to maintain higher access speed, can thereby not producing power consumption simultaneously and flow through big situation, is to be worth further place of improving at present.
Summary of the invention
In view of this, the technical problem the present invention seeks to solve is to improve the architecture of the second level cache so that the bus system can, according to the difference between the current processor clock and the second level cache clock, dynamically switch the way the second level cache is accessed at any time. In this way, besides maintaining the processor's access performance, the number of second level cache accesses can be reduced when appropriate, achieving the goal of saving power.
To achieve the above objective, one aspect of the invention provides a second level cache (L2 Cache) architecture that receives a data address packet output by a central processing unit. It comprises a pair of tag memories, a pair of data memories, and a comparison circuit unit. The tag memories store a plurality of temporary tag data and output a result tag that matches the data address packet. The data memories correspond to the tag memories, store a plurality of temporary data, and output a result data that matches the data address packet. The comparison circuit unit receives the data address packet, compares it with the result tag to produce a status signal, and then combines the status signal with the result data to form an output data. When the clock speed of the second level cache is higher than the clock speed of the central processing unit, the tag memories operate on a rising edge of the central processing unit clock, the data memories operate on a falling edge of the central processing unit clock, and, according to which tag memory the result tag belongs to, only the corresponding data memory is enabled to operate.
To achieve the above objective, another aspect of the invention provides a method for dynamically switching second level cache access, applied to an application system that comprises a central processing unit with a first level cache (L1 Cache). The steps of the method comprise: first, adjusting the clock speed of the second level cache according to the power state of the central processing unit; then determining the ratio between the clock speeds of the second level cache and the central processing unit; and then switching the access mode of the second level cache according to that ratio. The clock speed of the second level cache is, by design, higher than the clock speed of the central processing unit. In this way, a processor used with the present invention is not limited by the dynamic random access memory (DRAM), can maintain its own access performance, and can also reduce the number of second level cache accesses when appropriate, thereby saving power.
The above summary and the following detailed description and drawings are all intended to further explain the manner, means, and effects the present invention adopts to achieve its intended objectives. Other objectives and advantages of the invention will be set forth in the subsequent description and drawings.
Description of drawings
Fig. 1 is an architecture diagram of a prior-art bus application system;
Fig. 2A is a schematic diagram of the data address packet of the central processing unit;
Fig. 2B is a schematic circuit diagram of an embodiment of the second level cache architecture of the present invention;
Fig. 3 is a flowchart of an embodiment of the method for dynamically switching second level cache access of the present invention;
Fig. 4 is a flowchart of an operating embodiment of the second level cache of the present invention in the two-cycle access mode;
Fig. 5 is a flowchart of an operating embodiment of the second level cache of the present invention in the single-cycle access mode; and
Fig. 6 is a timing diagram of an operating embodiment of the second level cache of the present invention in the two-cycle access mode.
Description of reference numerals
Bus application system 9
System bus 90
Central processing unit 91
First level cache 911
Second level cache 912
Image processor 92
Audio processor 93
Second level cache 2
Tag data 201
Index data 202
Offset data 203
Output data 204
Tag memory 21, 21'
Data memory 22, 22'
Comparison circuit unit 23
Embodiment
Please refer to Fig. 2A and Fig. 2B, which are respectively a schematic diagram of the data address packet of the central processing unit and a schematic circuit diagram of an embodiment of the second level cache architecture of the present invention. In Fig. 2A, a 32-bit data address packet produced by a central processing unit is used for the following explanation, although it may be changed according to the bit width actually required. The data address packet comprises a tag data (20 bits) 201, an index data (8 bits) 202, and an offset data (4 bits) 203.
When the central processing unit processes data, it first looks in the cache; if the data is already temporarily stored there because it was read before, there is no need to spend time reading it from the large main memory. As shown in Fig. 2B, the invention provides a second level cache (L2 Cache) 2 architecture. Only when the desired data is not found in the first level cache (L1 Cache) (not shown) inside the central processing unit (not shown), i.e. on a so-called miss, does the lookup proceed to the second level cache 2. This embodiment explains the actions the second level cache 2 performs after receiving the data address packet that the central processing unit outputs to look for data, and the clock speed of the second level cache 2 of this embodiment must be higher than the clock speed of the central processing unit.
The architecture of the second level cache 2 comprises: a pair of tag memories 21 and 21', a corresponding pair of data memories 22 and 22', and a comparison circuit unit 23. The tag memories 21, 21' store a plurality of temporary tag data and, indexed by the index data 202 in the data address packet, output a matching result tag (likewise 20 bits). The data memories 22, 22' are provided corresponding to the tag memories 21, 21'; they store a plurality of temporary data and, indexed by the index data 202 and offset data 203 in the data address packet, output a matching result data (32 bits). The temporary data mentioned above refers to data that the central processing unit has previously accessed and that may be temporarily stored in the second level cache 2 of this embodiment; it is not limited here. In addition, those skilled in the art should understand that, as the processor stores data, each piece of temporary data is assigned its own corresponding tag data, which serves as a kind of keyword for that temporary data to make searching convenient, and these corresponding tag data are exactly the temporary tag data stored in the tag memories 21, 21'.
The offset data 203 is used, for example, by taking 2 of its 4 bits and combining them with the index data 202 to form 10 bits, which serve as the address into the data memories 22, 22'; of course this can be changed according to the actual memory design. In addition, in this embodiment, the tag memories 21, 21' can operate on a rising edge of the central processing unit clock, while the data memories 22, 22' operate on a falling edge of the central processing unit clock. And if the clock speed of the second level cache 2 in use is at least twice the clock speed of the central processing unit, the cache can be switched into the so-called two-cycle (2-cycle) access mode.
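A small sketch of the address decomposition just described, assuming the 20/8/4-bit split of Fig. 2A with the tag in the upper bits and the offset in the lowest bits; exactly which two offset bits are concatenated with the index is not fixed by the text, so the choice below is only illustrative.

```c
#include <stdint.h>

typedef struct {
    uint32_t tag;        /* tag data 201:    address[31:12], 20 bits */
    uint32_t index;      /* index data 202:  address[11:4],   8 bits */
    uint32_t offset;     /* offset data 203: address[3:0],    4 bits */
    uint32_t data_addr;  /* 10-bit address into each data memory 22, 22' */
} addr_fields_t;

static addr_fields_t decode_address(uint32_t addr)
{
    addr_fields_t f;

    f.tag    = (addr >> 12) & 0xFFFFFu;
    f.index  = (addr >> 4)  & 0xFFu;
    f.offset =  addr        & 0xFu;
    /* Combine 2 of the 4 offset bits (here: the upper 2, an assumption)
     * with the 8-bit index to form the 10-bit data-memory address.     */
    f.data_addr = (f.index << 2) | (f.offset >> 2);
    return f;
}
```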
The comparison circuit unit 23 receives the tag data 201 in the data address packet, compares the tag data 201 with the result tag to produce a status signal, and then combines the status signal with the result data to form an output data 204, which is provided to the central processing unit. The status signal uses a high (High) signal and a low (Low) signal for identification: if a low signal is produced, the comparison result indicates that the tag data 201 of the data address packet and the result tag output by tag memory 21 or 21' differ; if a high signal is produced, the comparison result indicates that the tag data 201 of the data address packet and the result tag output by tag memory 21 or 21' are identical, so the result data originally output by data memory 22 or 22' is then combined with this status signal (for example by an AND operation) to form the output data 204. The meanings represented by the high signal and the low signal can also be interchanged.
In other words, if a high signal is produced, the second level cache 2 holds the data the central processing unit needs, so the result data is ANDed with the high signal to form the output data 204 the central processing unit needs. Of course, as shown for the comparison circuit unit 23 in Fig. 2B, to handle the paired data memories 22 and 22', a further OR operation can be applied, so that only one of the two data memories 22 and 22' needs to hold the required output data 204 for it to be output to the central processing unit.
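Continuing the previous sketch, the AND/OR combination can be written as follows; hit_mask plays the role of the status signal. This models only the combinational selection path of the comparison circuit unit 23 and deliberately reads both data memories, which is exactly what the two-cycle scheduling described next avoids. Structure sizes and names are assumptions made here for illustration.

```c
typedef struct {
    uint32_t tag_mem[2][256];    /* tag memories 21, 21': indexed by index data 202 */
    uint32_t data_mem[2][1024];  /* data memories 22, 22': 10-bit data address      */
} l2_cache_t;

/* Form output data 204 by ANDing each way's result data with its status
 * signal and ORing the two ways together; returns 1 on a hit.           */
static int l2_output_path(const l2_cache_t *c, addr_fields_t f, uint32_t *out)
{
    uint32_t result = 0;
    int hit = 0;

    for (int way = 0; way < 2; way++) {
        /* Status signal: all-ones ("high") when the stored tag matches. */
        uint32_t hit_mask = (c->tag_mem[way][f.index] == f.tag) ? 0xFFFFFFFFu : 0u;
        result |= c->data_mem[way][f.data_addr] & hit_mask;  /* AND, then OR */
        hit |= (hit_mask != 0u);
    }
    *out = result;
    return hit;   /* 0 means the request must go on to DRAM */
}
```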
Moreover, another feature of the second level cache 2 of this embodiment is that the tag memories 21, 21' and the data memories 22 and 22' operate on different clock edges of the central processing unit, and the comparison performed on the tag memories 21 and 21' is used to decide which of the data memories 22 and 22' is enabled at the falling edge.
Therefore the data memories 22 and 22' can wait until the result tags output by the tag memories 21, 21' have been compared by the comparison circuit unit 23 to determine which tag memory 21 or 21' actually holds the required tag, and only then is the data memory 22 or 22' corresponding to that tag memory enabled to operate. That is, after the comparison circuit unit 23 finds that, say, tag memory 21 holds the required tag, only the data memory 22 corresponding to tag memory 21 is enabled to operate. Thus the data memories 22 and 22' do not both operate in the same cycle, and the power of one data memory's operation is saved each time.
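The two-phase enabling just described can be sketched behaviourally as follows, again building on the structures defined above; the two comments mark what happens on the rising and falling edge of the CPU clock. This is a sketch under the same illustrative assumptions, not the patent's circuit.

```c
/* Two-cycle read: tag memories on the rising edge, and only the matching
 * way's data memory on the falling edge, so the other data SRAM stays idle. */
static int l2_two_cycle_read(const l2_cache_t *c, addr_fields_t f, uint32_t *out)
{
    int hit_way = -1;

    /* Rising edge: enable both tag memories 21, 21' and compare the tags. */
    for (int way = 0; way < 2; way++)
        if (c->tag_mem[way][f.index] == f.tag)
            hit_way = way;

    if (hit_way < 0)
        return 0;                 /* miss: neither data memory is enabled */

    /* Falling edge: enable only data memory 22 or 22' of the hit way. */
    *out = c->data_mem[hit_way][f.data_addr];
    return 1;
}
```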
Refer again to Fig. 3, a flowchart of an embodiment of the method for dynamically switching second level cache access of the present invention. As shown in the figure, the invention provides a method for dynamically switching access to the second level cache 2, applied to an application system (not shown) that includes a central processing unit with a first level cache. The steps of the method comprise: first, the application system performs its system operation (S301), with the central processing unit running. Because the application system may be in different states, the operating power of the central processing unit is adjusted automatically to save power, so the operating clock speed of the central processing unit also varies. The method provided by this embodiment can determine, according to the current state of the application system, whether the operating clock speed of the central processing unit or of the second level cache 2 has been switched (S305).
If the result of step (S305) is yes, meaning the clock speed of the second level cache 2 or of the central processing unit has been adjusted or changed, the method further determines the ratio between the clock speeds of the second level cache 2 and the central processing unit (S307), in order to switch the access mode of the second level cache 2 according to that ratio. Otherwise, if the result of step (S305) is no, meaning the clock speed of the central processing unit or of the second level cache has not changed, the original access mode continues to be used for data access, and execution returns to step (S301) to carry on the normal system operation.
After the ratio is obtained in step (S307), the access mode of the second level cache 2 can be switched according to it. The ratio can be, as shown in the figure, one of three cases: the clock speed of the second level cache 2 equals the clock speed of the central processing unit (L2_Cache_Clock = CPU_Clock); the clock speed of the second level cache 2 is at least twice the clock speed of the central processing unit (L2_Cache_Clock ≥ 2*CPU_Clock); or the clock speed of the second level cache 2 is greater than the clock speed of the central processing unit but less than twice it (CPU_Clock < L2_Cache_Clock < 2*CPU_Clock).
If the ratio is such that the clock speed of the second level cache 2 equals the clock speed of the central processing unit, the second level cache 2 is switched to the single-cycle (1-cycle) access mode (S308). In the single-cycle access mode, the tag memories 21, 21' and the data memories 22, 22' in the second level cache 2 operate synchronously with the clock edge of the central processing unit.
If the ratio is such that the clock speed of the second level cache 2 is at least twice the clock speed of the central processing unit, the second level cache 2 is switched to the two-cycle access mode (S309).
If the ratio is such that the clock speed of the central processing unit is greater than one half of the clock speed of the second level cache 2 and less than the clock speed of the second level cache 2, the second level cache 2 is switched to the two-cycle access mode, and the central processing unit is further adjusted to an operating mode that waits one cycle (S311). In the two-cycle access mode, the tag memories 21, 21' operate on the rising edge of the central processing unit clock and the data memories 22, 22' operate on the falling edge of the central processing unit clock; by comparing the temporary tag data in the tag memories 21, 21', it is determined which tag memory 21, 21' holds the data to be accessed, so that only one corresponding data memory 22, 22' is enabled to operate on the falling edge of the central processing unit clock.
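The ratio test of steps S307 to S311 could look like the following sketch, with the clocks given in Hz; the enum values and the wait-state flag are names invented here for illustration, not taken from the patent.

```c
#include <stdint.h>

typedef enum { L2_MODE_SINGLE_CYCLE, L2_MODE_TWO_CYCLE } l2_mode_t;

typedef struct {
    l2_mode_t mode;
    int       cpu_waits_one_cycle;   /* wait-state mode of step S311 */
} l2_config_t;

static l2_config_t select_access_mode(uint64_t l2_clock_hz, uint64_t cpu_clock_hz)
{
    l2_config_t cfg = { L2_MODE_SINGLE_CYCLE, 0 };

    if (l2_clock_hz == cpu_clock_hz) {
        cfg.mode = L2_MODE_SINGLE_CYCLE;        /* step S308 */
    } else if (l2_clock_hz >= 2 * cpu_clock_hz) {
        cfg.mode = L2_MODE_TWO_CYCLE;           /* step S309 */
    } else {  /* cpu_clock_hz < l2_clock_hz < 2 * cpu_clock_hz */
        cfg.mode = L2_MODE_TWO_CYCLE;           /* step S311 */
        cfg.cpu_waits_one_cycle = 1;
    }
    return cfg;
}
```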
To further explain the operation of the two-cycle and single-cycle access modes, please also refer to Fig. 4 and Fig. 5, which are respectively flowcharts of operating embodiments of the second level cache of the present invention in the two-cycle access mode and the single-cycle access mode.
As shown in Fig. 4, the second level cache 2 has been switched into the two-cycle access mode. Initially, the second level cache 2 is in an idle state (S401), and it can determine at any time whether the first level cache in the central processing unit has missed and raised a request (S403). If the result is yes, the pair of tag memories 21 and 21' is enabled (S405) to obtain a result tag. Otherwise, if the result of step (S403) is no, the first level cache currently has a data hit, so the flow returns to step (S401) and the second level cache remains idle.
After step (S405), the result tags are compared to learn which data memory 22 or 22' the required data resides in, and that required data memory 22 or 22' is selected (S407). The data memory 22 or 22' selected in step (S407) is then enabled (S409). Finally, the enabled data memory 22 or 22' outputs the output data 204 to the first level cache of the central processing unit (S411). The second level cache 2 then returns to step (S401) and is idle again.
In other words, in the two-cycle access mode, the tag memories 21 and 21' operate on the rising edge of the central processing unit clock, and the comparison decides which of the data memories 22 and 22' is enabled at the falling edge of the central processing unit clock. That is, only one of the data memories 22 and 22' is enabled to operate at the falling edge of the central processing unit clock.
As shown in Fig. 5, the second level cache 2 has been switched into the single-cycle access mode. Initially the second level cache 2 is in an idle state (S501), and it determines whether the first level cache in the central processing unit has missed and raised a request (S503). If the result is yes, the tag memories 21, 21' and the data memories 22, 22' are enabled simultaneously (S505) to obtain a result tag. Otherwise, if the result of step (S503) is no, the first level cache currently has a data hit, so the flow returns to step (S501) and the second level cache remains idle.
After step (S505), the result tags are compared and the required data memory 22 or 22' is selected (S507). Because the data memories 22 and 22' are also enabled synchronously within the same cycle, the selected data memory 22 or 22' outputs an output data 204 to the first level cache of the central processing unit (S509). Finally, the second level cache 2 returns to step (S501) and is idle.
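In single-cycle mode the second level cache therefore behaves like the conventional lookup sketched in the background section: everything is enabled in the same CPU cycle, and the output is formed by the combinational path sketched earlier. Under those same illustrative assumptions:

```c
/* Single-cycle mode (L2 clock == CPU clock): tag memories 21, 21' and data
 * memories 22, 22' are all enabled in the same CPU cycle, and the output is
 * formed by the combinational path sketched earlier.                        */
static int l2_single_cycle_read(const l2_cache_t *c, addr_fields_t f, uint32_t *out)
{
    return l2_output_path(c, f, out);   /* all four SRAMs enabled this cycle */
}
```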
Finally, please refer to Fig. 6, a timing diagram of an operating embodiment of the second level cache of the present invention in the two-cycle access mode. As shown in the figure, the second level cache clock is illustrated at twice the processor clock. When the processor clock is at its rising edge, one full second level cache clock period is used to access the tag memories, completing the access of the temporary tag data and then comparing to find the required data memory. When the processor clock is at its falling edge, another full second level cache clock period is used to access the temporary data and output the output data. In this way, on the processor side a timing sequence for receiving the output data is formed.
It should be noted that at each falling edge of the processor clock, the present invention enables only one of the data memories to output the output data (D1 and D2 in the figure represent data in different data memories respectively), thereby saving, in every processor cycle, the power one data memory would otherwise consume.
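As a rough illustration of the saving, assume every SRAM bank in the two-way organisation costs the same access energy E per enabled cycle (an assumption made here for illustration only, not a figure from the patent): a conventional access enables 2 tag SRAMs and 2 data SRAMs, i.e. 4E, whereas the two-cycle access enables 2 tag SRAMs and 1 data SRAM, i.e. 3E, a saving of (4E - 3E)/4E = 25% per access.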
In summary, the method for dynamically switching second level cache access and the second level cache architecture of the present invention improve the architecture of the second level cache so that the application system can, according to the difference between the current clock speeds of the central processing unit and the second level cache, switch the way the second level cache is accessed in real time. In this way, besides maintaining the access performance the central processing unit should have, the power-saving effect is achieved by reducing the number of second level cache accesses when appropriate.
However, the above is merely a detailed description and drawings of specific embodiments of the present invention and is not intended to limit the present invention. The scope of the present invention shall be determined by the claims, and any variation or modification that a person skilled in the art of the present invention can readily conceive shall be encompassed by the claims defined by the present invention.

Claims (12)

1. A method for dynamically switching access to a second level cache (L2 Cache), applied to an application system, characterized in that: the second level cache comprises a pair of tag memories and a corresponding pair of data memories, and the application system comprises a central processing unit having a first level cache (L1 Cache), the steps of the method comprising:
adjusting the clock speed of the second level cache according to a power state of the central processing unit;
determining the ratio between the clock speeds of the second level cache and the central processing unit; and
switching the access mode of the second level cache according to the ratio,
wherein the clock speed of the second level cache is higher than the clock speed of the central processing unit.
2. The method for dynamically switching second level cache access as claimed in claim 1, characterized in that: when the ratio is such that the clock speed of the second level cache is at least twice the clock speed of the central processing unit, the second level cache is switched to a two-cycle (2-cycle) access mode.
3. The method for dynamically switching second level cache access as claimed in claim 2, characterized in that: when the second level cache is in the two-cycle access mode, the pair of tag memories operates on a rising edge of the central processing unit clock and the pair of data memories operates on a falling edge of the central processing unit clock, and by comparing the pair of tag memories it is determined that only one of the pair of data memories is enabled to operate at the falling edge of the central processing unit clock.
4. The method for dynamically switching second level cache access as claimed in claim 1, characterized in that: when the ratio is such that the clock speed of the second level cache is greater than the clock speed of the central processing unit but less than twice the clock speed of the central processing unit, the second level cache is switched to the two-cycle (2-cycle) access mode, and the central processing unit is further adjusted to an operating mode that waits one cycle.
5. The method for dynamically switching second level cache access as claimed in claim 4, characterized in that: when the second level cache is in the two-cycle access mode, the pair of tag memories operates on a rising edge of the central processing unit clock and the pair of data memories operates on a falling edge of the central processing unit clock, and by comparing the pair of tag memories it is determined that only one of the pair of data memories is enabled to operate at the falling edge of the central processing unit clock.
6. The method for dynamically switching second level cache access as claimed in claim 1, characterized in that: the clock of the second level cache and the clock of the central processing unit are synchronous signals.
7. A second level cache (L2 Cache), characterized in that it receives a data address packet output by a central processing unit, wherein the data address packet comprises a tag data, an index data, and an offset data, and the second level cache comprises:
a pair of tag memories, storing a plurality of temporary tag data and outputting a result tag according to the data address packet;
a pair of data memories, corresponding to the pair of tag memories, for storing a plurality of temporary data and outputting a result data according to the data address packet; and
a comparison circuit unit, receiving the data address packet to compare with the result tag and produce a status signal, the status signal then being combined with the result data to form an output data;
wherein, when the clock speed of the second level cache is higher than the clock speed of the central processing unit, the pair of tag memories operates on a rising edge of the central processing unit clock, the pair of data memories operates on a falling edge of the central processing unit clock, and, according to which tag memory the result tag belongs to, only the corresponding one of the data memories can be enabled to operate.
8. The second level cache as claimed in claim 7, characterized in that: the result tag is output by comparing, through the comparison circuit unit, the pair of tag memories against the index data of the data address packet.
9. The second level cache as claimed in claim 7, characterized in that: the result data is output by operating, through the comparison circuit unit, on the pair of data memories together with the index data and the offset data of the data address packet.
10. The second level cache as claimed in claim 7, characterized in that: the comparison circuit unit receives the tag data of the data address packet to compare with the result tag and produce the status signal.
11. The second level cache as claimed in claim 7, characterized in that: the status signal uses a high (High) signal or a low (Low) signal for identification; when the high signal appears, the tag data of the data address packet and the result tag of the tag memory are identical, and when the low signal appears, the tag data of the data address packet and the result tag of the tag memory differ.
12. The second level cache as claimed in claim 7, characterized in that: the status signal uses a high (High) signal or a low (Low) signal for identification; when the low signal appears, the tag data of the data address packet and the result tag of the tag memory are identical, and when the high signal appears, the tag data of the data address packet and the result tag of the tag memory differ.
CN2007103021300A 2007-12-14 2007-12-14 Second level cache and kinetic energy switch access method Expired - Fee Related CN101458665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007103021300A CN101458665B (en) 2007-12-14 2007-12-14 Second level cache and kinetic energy switch access method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2007103021300A CN101458665B (en) 2007-12-14 2007-12-14 Second level cache and kinetic energy switch access method

Publications (2)

Publication Number Publication Date
CN101458665A true CN101458665A (en) 2009-06-17
CN101458665B CN101458665B (en) 2011-03-23

Family

ID=40769534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007103021300A Expired - Fee Related CN101458665B (en) 2007-12-14 2007-12-14 Second level cache and kinetic energy switch access method

Country Status (1)

Country Link
CN (1) CN101458665B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101958834A (en) * 2010-09-27 2011-01-26 清华大学 On-chip network system supporting cache coherence and data request method
CN102063406A (en) * 2010-12-21 2011-05-18 清华大学 Network shared Cache for multi-core processor and directory control method thereof
CN110168497A (en) * 2017-02-22 2019-08-23 超威半导体公司 Variable wave surface size
CN111124951A (en) * 2018-10-31 2020-05-08 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing data access

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6865684B2 (en) * 1993-12-13 2005-03-08 Hewlett-Packard Development Company, L.P. Utilization-based power management of a clocked device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101958834A (en) * 2010-09-27 2011-01-26 清华大学 On-chip network system supporting cache coherence and data request method
CN101958834B (en) * 2010-09-27 2012-09-05 清华大学 On-chip network system supporting cache coherence and data request method
CN102063406A (en) * 2010-12-21 2011-05-18 清华大学 Network shared Cache for multi-core processor and directory control method thereof
CN102063406B (en) * 2010-12-21 2012-07-25 清华大学 Network shared Cache for multi-core processor and directory control method thereof
CN110168497A (en) * 2017-02-22 2019-08-23 超威半导体公司 Variable wave surface size
CN110168497B (en) * 2017-02-22 2023-10-13 超威半导体公司 Variable wavefront size
CN111124951A (en) * 2018-10-31 2020-05-08 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing data access
US11593272B2 (en) 2018-10-31 2023-02-28 EMC IP Holding Company LLC Method, apparatus and computer program product for managing data access
CN111124951B (en) * 2018-10-31 2023-09-15 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing data access

Also Published As

Publication number Publication date
CN101458665B (en) 2011-03-23

Similar Documents

Publication Publication Date Title
US9734056B2 (en) Cache structure and management method for use in implementing reconfigurable system configuration information storage
US6185657B1 (en) Multi-way cache apparatus and method
CN112486312B (en) Low-power-consumption processor
US20050108480A1 (en) Method and system for providing cache set selection which is power optimized
US9336164B2 (en) Scheduling memory banks based on memory access patterns
CN102870089A (en) System and method for storing data in virtualized high speed memory system
CN102356385B (en) Memory access controller, systems, and methods for optimizing memory access times
CN105378847A (en) DRAM sub-array level autonomic refresh memory controller optimization
US5860101A (en) Scalable symmetric multiprocessor data-processing system with data allocation among private caches and segments of system memory
EP3217406B1 (en) Memory management method and device, and memory controller
CN102576318A (en) Integrated circuit, computer system, and control method
CN101458665B (en) Second level cache and kinetic energy switch access method
EP3368989A1 (en) Intelligent coded memory architecture with enhanced access scheduler
CN103927268A (en) Storage access method and device
CN100458663C (en) Control method for low-power consumption RAM and RAM control module
US8484418B2 (en) Methods and apparatuses for idle-prioritized memory ranks
CN100377118C (en) Built-in file system realization based on SRAM
CN101316240A (en) Data reading and writing method and device
CN105487988B (en) The method for improving the effective access rate of SDRAM bus is multiplexed based on memory space
US11494120B2 (en) Adaptive memory transaction scheduling
CN105353865A (en) Multiprocessor based dynamic frequency adjustment method
JP2002351741A (en) Semiconductor integrated circuit device
CN104391676B (en) The microprocessor fetching method and its fetching structure of a kind of inexpensive high bandwidth
CN115757204A (en) NUCA architecture hardware performance optimization method, system and medium applied to automatic driving
KR100398954B1 (en) Multi-way set associative cache memory and data reading method therefrom

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110323

Termination date: 20151214

EXPY Termination of patent right or utility model