CN103207844B - Caching system and cache access method - Google Patents
Caching system and cache access method
- Publication number
- CN103207844B (Application CN201310136234.4A)
- Authority
- CN
- China
- Prior art keywords
- information
- cache
- read
- physical address
- subregion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a caching system and a cache access method. The caching system includes: a translation lookaside buffer; a comparison circuit; an auxiliary cache whose capacity is no larger than one page and which is directly connected to main memory; and a transfer buffer located between the auxiliary cache and the next-level memory. The auxiliary cache includes an information region with a first number of entries and a tag region with a first number of entries. The low bits, of a second number of bits, of the physical address of the information content stored in each information sub-region are the in-block address of the corresponding information content together with the index of the corresponding sub-region, and the tag stored in each tag sub-region is the high bits, of a third number of bits, of the physical address corresponding to the corresponding information content. The transfer buffer is used for, after the comparison circuit generates a miss signal, obtaining the information to be read from the next-level memory under the control of the main control unit of the processor and sending the information to be read to the processor pipeline. By adopting this scheme, cache access speed is improved while low power consumption is ensured.
Description
Technical field
The present invention relates to the field of information caching, and in particular to a caching system and a cache access method.
Background art
Because the gap between the speed at which a processor executes instructions and the speed at which it accesses main memory keeps growing, the processing speed of an electronic device suffers. To address this problem, a cache that is fast to access but relatively power-hungry is generally used to temporarily store the instructions and data the processor currently needs. When the instruction or data needed by the processor is stored in the cache, i.e. on a hit, the processor obtains it directly from the cache. When the currently needed instruction or data is not in the cache, a cache miss occurs: the processor pipeline stalls, and only after the transfer buffer between the cache and the next-level memory has fetched the corresponding instruction or data from the next-level memory does the pipeline resume execution.
For instruction caches in particular, a multi-way set-associative organization is often used to reduce the miss rate of a first-level instruction cache. A typical first-level instruction cache has a capacity of 32K bytes or 64K bytes and is 2-way or 4-way set-associative. Taking a 64K-byte, 4-way set-associative first-level instruction cache as an example, and assuming the accessed instruction hits, there are generally the following two methods of accessing the instruction cache:
Method 1: in the first cycle, the physical address corresponding to the virtual address of the instruction to be read is looked up in the TLB (Translation Look-Aside Buffer); in the second cycle, the tag portion of the instruction cache is accessed using a certain number of low bits of that physical address, and the tags of the 4 ways are read out; in the third cycle, a comparison circuit compares the 4 tags that were read with a certain number of high bits of the physical address to determine which way hits; in the fourth cycle, the corresponding instruction is read from the hitting way; in the fifth cycle, the instruction that was read is sent to the processor pipeline;
Method 2: in the first cycle, the physical address corresponding to the virtual address of the instruction to be read is looked up in the TLB; in the second cycle, both the tag portion and the data portion of the instruction cache are accessed using a certain number of low bits of that physical address, and the tags and instructions of all 4 ways are read out; in the third cycle, a comparison circuit compares the 4 tags that were read with a certain number of high bits of the physical address to determine which way hits, and the corresponding instruction of the hitting way is then sent to the processor pipeline as the instruction to be read.
Compared with method 2, method 1 takes longer but consumes less power; method 2 takes less time, but because the tags and instructions of all 4 ways are read simultaneously, its power consumption is considerable.
It can be seen that how to improve cache access speed while ensuring low power consumption is a problem demanding a prompt solution.
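As a rough illustration (not part of the patent itself), the two access methods can be sketched in Python. The cache geometry, the way an access's "energy" is counted, and all identifiers are assumptions chosen for the sketch; the point is only that the serial method reads one data way on a hit while the parallel method reads all four.

```python
# Sketch of serial (method 1) vs parallel (method 2) access to a
# 4-way set-associative cache. Reads of the tag/data arrays are
# counted as a crude proxy for dynamic power. Names are illustrative.

WAYS = 4

def serial_access(sets, index, tag):
    """Method 1: read 4 tags first, then only the hitting data way."""
    reads = WAYS                      # 4 tag-array reads
    for way in range(WAYS):
        stored_tag, data = sets[index][way]
        if stored_tag == tag:
            reads += 1                # 1 data-array read for the hit way
            return data, reads
    return None, reads

def parallel_access(sets, index, tag):
    """Method 2: read all 4 tags and all 4 data ways at once."""
    reads = WAYS * 2                  # 4 tag reads + 4 data reads
    for way in range(WAYS):
        stored_tag, data = sets[index][way]
        if stored_tag == tag:
            return data, reads
    return None, reads

# One set with 4 ways, each a (tag, data) pair.
sets = {5: [(0x1A, "i0"), (0x2B, "i1"), (0x3C, "i2"), (0x4D, "i3")]}
print(serial_access(sets, 5, 0x3C))    # ('i2', 5)
print(parallel_access(sets, 5, 0x3C))  # ('i2', 8)
```

On a hit, the serial sketch touches 5 arrays over more cycles, while the parallel sketch touches 8 in fewer cycles, which mirrors the time/power trade-off described above.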
Summary of the invention
To solve the above technical problem, embodiments of the present invention provide a caching system and a cache access method, so as to improve cache access speed while ensuring low power consumption. The technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a caching system including a translation lookaside buffer and a comparison circuit, and further including:
an auxiliary cache whose capacity is no larger than one page and which is directly connected to main memory, and a transfer buffer located between the auxiliary cache and the next-level memory;
the auxiliary cache includes an information region and a tag region, wherein the information region includes a first number of information sub-regions, the tag region includes a first number of tag sub-regions, and each information sub-region uniquely corresponds to one tag sub-region; wherein the low bits, of a second number of bits, of the physical address of the information content stored in each information sub-region are the in-block address of the corresponding information content together with the index of the corresponding information sub-region, and the tag stored in each tag sub-region is the high bits, of a third number of bits, of the physical address corresponding to the information content in the corresponding information sub-region;
the transfer buffer is used for, after the comparison circuit generates a miss signal, obtaining the information to be read from the next-level memory under the control of the main control unit of the processor, and then sending the obtained information to be read to the processor pipeline.
Wherein, each tag sub-region stores the high bits, of the third number of bits, of the physical address corresponding to the information content in the corresponding information sub-region, together with a valid bit;
accordingly, the comparison circuit is used for generating a hit signal when it judges that the tag obtained from the auxiliary cache by the main control unit of the processor matches the high bits, of the third number of bits, of the physical address obtained from the translation lookaside buffer and the valid bit stores a valid value; otherwise, it generates a miss signal.
Wherein, the second number of bits is determined according to the first number of entries;
accordingly, the third number of bits is determined according to the second number of bits.
Wherein, the auxiliary cache includes an auxiliary instruction cache or an auxiliary data cache.
Wherein, the caching system further includes:
a cache-maintenance processing module, used for, when a cache-maintenance instruction sent by the main control unit is received, performing corresponding processing on the information content in the auxiliary cache that corresponds to the physical address carried by the cache-maintenance instruction.
In a second aspect, an embodiment of the present invention further provides a cache access method based on the caching system provided by the embodiment of the present invention, the method including:
the main control unit of the processor reading, according to the low bits, of the second number of bits, of the target virtual address corresponding to the information to be read, the target information content and the target tag in the auxiliary cache of the caching system, and determining, in the translation lookaside buffer of the caching system, the target physical page number corresponding to the page number of the target virtual address, and then determining, within the determined target physical page, the target physical address corresponding to the target virtual address;
comparing, using the comparison circuit in the caching system, the target tag with the high bits, of the third number of bits, of the target physical address;
after the comparison circuit generates a hit signal, sending the target information content read out from the auxiliary cache to the processor pipeline as the information to be read;
after the comparison circuit generates a miss signal, controlling the transfer buffer of the caching system to obtain the information to be read from the next-level memory, and then sending the information to be read to the processor pipeline.
In the technical scheme provided by the embodiments of the present invention, the offset address of a virtual address within a page is identical to the offset address of its physical address, and the index of an information sub-region in the auxiliary cache together with the in-block address of the corresponding information content form the offset address of the physical address corresponding to that information content; therefore, the low bits, of the second number of bits, of the physical address corresponding to the information content are identical to the low bits, of the second number of bits, of the corresponding virtual address. As a result, the step of reading the target tag and the target information content from the auxiliary cache and the step of determining the target physical address from the translation lookaside buffer can be carried out simultaneously, which makes cache access faster; at the same time, because the capacity of the auxiliary cache is small, the power consumed in accessing it is low. It can be seen that, by adopting this scheme, cache access speed can be improved while low power consumption is ensured.
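The key observation above can be shown with a small numeric sketch (illustrative values and names, not from the patent): with a 4K-byte page, the low 12 bits of the virtual and physical address are the same page offset, so a cache no larger than one page can be indexed from the virtual address before translation finishes.

```python
# With 4KB pages, bits 0-11 of a virtual and its physical address are
# the same page offset; only the page number changes under translation.
PAGE_BITS = 12

def translate(vpn_to_ppn, vaddr):
    """Toy TLB lookup: swap the virtual page number for the physical one."""
    vpn = vaddr >> PAGE_BITS
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    return (vpn_to_ppn[vpn] << PAGE_BITS) | offset

vpn_to_ppn = {0x12345: 0x54321}          # one illustrative mapping
va = (0x12345 << PAGE_BITS) | 0xABC
pa = translate(vpn_to_ppn, va)

# The low 12 bits used to index a <=4KB cache are translation-invariant,
# so the cache read and the TLB lookup can proceed in parallel.
assert (va & 0xFFF) == (pa & 0xFFF) == 0xABC
print(hex(pa))  # 0x54321abc
```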
Brief description of the drawings
In order to illustrate more clearly about the embodiment of the present invention or technical scheme of the prior art, below will be to embodiment or existing
The accompanying drawing to be used needed for having technology description is briefly described, it should be apparent that, drawings in the following description are only this
Some embodiments of invention, for those of ordinary skill in the art, without having to pay creative labor, may be used also
Other accompanying drawings are obtained with according to these accompanying drawings.
Fig. 1 is a first schematic structural diagram of a caching system provided by an embodiment of the present invention;
Fig. 2 is a second schematic structural diagram of a caching system provided by an embodiment of the present invention;
Fig. 3 is a flow chart of a cache access method provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To improve cache access speed while ensuring low power consumption, embodiments of the present invention provide a caching system and a cache access method.
The caching system provided by an embodiment of the present invention is introduced first.
As shown in Fig. 1, a caching system 100 may include:
an auxiliary cache 110 whose capacity is no larger than one page and which is directly connected to main memory, a comparison circuit 120, a translation lookaside buffer 130, and a transfer buffer 140 located between the auxiliary cache 110 and the next-level memory;
wherein the auxiliary cache 110 includes an information region and a tag region; the information region includes a first number of information sub-regions, the tag region includes a first number of tag sub-regions, and each information sub-region uniquely corresponds to one tag sub-region; the low bits, of a second number of bits, of the physical address of the information content stored in each information sub-region are the in-block address of the corresponding information content together with the index of the corresponding information sub-region, and the tag stored in each tag sub-region is the high bits, of a third number of bits, of the physical address corresponding to the information content in the corresponding information sub-region;
the translation lookaside buffer 130 stores the mapping relation between virtual page numbers and the corresponding physical page numbers, wherein the offset address of a virtual address within its page is identical to the offset address of the corresponding physical address within its page;
the comparison circuit 120 is used for comparing the target tag that the main control unit 200 of the processor reads from the auxiliary cache 110 according to the low bits, of the second number of bits, of the target virtual address corresponding to the information to be read, with the high bits, of the third number of bits, of the corresponding target physical address determined from the translation lookaside buffer 130 according to the target virtual address; when the comparison result indicates that the two match, a hit signal is generated, so that the main control unit sends the target information content determined by the low bits, of the second number of bits, of the target virtual address to the processor pipeline as the information to be read; otherwise, a miss signal is generated;
the transfer buffer 140 is used for, after the comparison circuit 120 generates a miss signal, obtaining the information to be read from the next-level memory under the control of the main control unit 200, so that the main control unit 200 then sends the obtained information to be read to the processor pipeline.
It can be understood that, in the case where the auxiliary cache 110 does not store the information to be read, after the transfer buffer 140, under the control of the main control unit 200, obtains the information to be read from the next-level memory, the main control unit 200 may update the corresponding information portion of the auxiliary cache 110 with the information that was read, so that next time the information can be read directly from the auxiliary cache 110.
In the technical scheme provided by the embodiments of the present invention, the offset address of a virtual address within a page is identical to the offset address of its physical address, and the index of an information sub-region in the auxiliary cache together with the in-block address of the corresponding information content form the offset address of the physical address corresponding to that information content; therefore, the low bits, of the second number of bits, of the physical address corresponding to the information content are identical to the low bits, of the second number of bits, of the corresponding virtual address. As a result, the step of reading the target tag and the target information content from the auxiliary cache and the step of determining the target physical address from the translation lookaside buffer can be carried out simultaneously, which makes cache access faster; at the same time, because the capacity of the auxiliary cache is small, the power consumed in accessing it is low. It can be seen that, by adopting this scheme, cache access speed can be improved while low power consumption is ensured.
It should be noted that when the only cache the caching system includes is the auxiliary cache, the next-level memory is main memory; when the caching system also includes a first-level cache, a second-level cache, a third-level cache, and so on, each with a capacity larger than one page, the next-level memory is usually the first-level cache inside the processor, whose read speed is comparatively fast. A person skilled in the art can understand that corresponding transfer buffers are also arranged between caches of adjacent levels, and between main memory and the slowest cache. When the auxiliary cache does not store the information to be read, the transfer buffer arranged between the auxiliary cache and the first-level cache accesses the first-level cache first, and when the first-level cache does not store the information to be read either, the information is requested from the second-level cache through the transfer buffer between the first-level cache and the second-level cache, and so on. Each transfer buffer interacts directly only with the cache at the next level it is connected to; for example, the transfer buffer arranged between the auxiliary cache and the first-level cache interacts only with the first-level cache and does not interact directly with any other cache.
It can be understood that, because the information the processor needs to read includes both instructions and data, the auxiliary cache may include an auxiliary instruction cache or an auxiliary data cache; the auxiliary instruction cache is used for storing and delivering instructions of all kinds to the main control unit of the processor, the auxiliary data cache is used for storing and delivering the data needed for computation to the main control unit of the processor, and both can be accessed by the main control unit of the processor simultaneously.
Wherein, the second number of bits is determined according to the first number of entries; accordingly, the third number of bits is determined according to the second number of bits. Specifically, from the capacity of the auxiliary cache and the number of bytes per entry, the first number of entries the auxiliary cache can hold can be determined; and from the number of information blocks per entry together with the first number of entries, the second number of low address bits needed for selection can be determined, with the remaining high bits of the physical address, of the third number of bits, stored in the tag sub-region as the tag.
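The derivation above can be written down directly as a sketch (the helper name and the example geometry are illustrative assumptions, not from the patent): the index width comes from the number of entries, the in-block width from the blocks per entry, and the tag width is whatever remains of the physical address.

```python
# Derive index / in-block / tag widths from an assumed cache geometry.

def bit_widths(num_entries, blocks_per_entry, paddr_bits=32):
    """Return (index_bits, block_bits, tag_bits) for the given geometry."""
    index_bits = (num_entries - 1).bit_length()       # e.g. 64 entries -> 6
    block_bits = (blocks_per_entry - 1).bit_length()  # e.g. 64 blocks  -> 6
    low_bits = index_bits + block_bits                # the "second number"
    tag_bits = paddr_bits - low_bits                  # the "third number"
    return index_bits, block_bits, tag_bits

# The embodiment's example: 64 entries, 64 blocks per entry, 32-bit addresses.
print(bit_widths(64, 64))  # (6, 6, 20)
```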
Further, because the information content stored in the auxiliary cache 110 can be replaced, the information content corresponding to a tag that is read out may no longer be present in the auxiliary cache 110. Therefore, to ensure that the information content read out is valid, each tag sub-region stores the high bits, of the third number of bits, of the physical address corresponding to the information content in the corresponding information sub-region, together with a valid bit;
accordingly, the comparison circuit is used for generating a hit signal when it judges that the tag obtained from the auxiliary cache 110 by the main control unit of the processor matches the high bits, of the third number of bits, of the physical address obtained from the translation lookaside buffer 130 and the valid bit stores a valid value; and for generating a miss signal when the tag obtained from the auxiliary cache 110 does not match those high bits or the valid bit does not store a valid value.
Further, because the main control unit 200 of the processor may issue explicit cache-maintenance instructions to each cache, used for removing the information content of a certain address block from the cache, and the auxiliary cache 110 is a small cache whose contents can be invalidated, the auxiliary cache 110 needs to automatically remove the information content corresponding to the appropriate address block after such an instruction is received. Based on this, the caching system provided by the embodiment of the present invention may further include:
a cache-maintenance processing module, used for, when a cache-maintenance instruction sent by the main control unit is received, performing corresponding processing on the information content in the auxiliary cache that corresponds to the physical address carried by the cache-maintenance instruction.
The caching system provided by the embodiment of the present invention is introduced below with reference to a specific example.
The caching system may include:
a static random access memory SRAM 310 with a capacity of 4K bytes and directly connected to main memory, a comparison circuit 320, a TLB 330, a transfer buffer 340, and a first-level cache 350;
the SRAM 310 has an information region and a tag region, wherein the information region includes 64 information sub-regions and the tag region includes 64 tag sub-regions, each entry comprising 512 bits in total. Assuming each information sub-region can store 64 instructions, i.e. each information sub-region is divided into 64 blocks, a 6-bit index is provided to identify each information sub-region, and a 6-bit in-block address is provided to distinguish the blocks within each information sub-region; that is, bits 0-11 of the 32-bit physical address identify the in-block address and the sub-region index of the corresponding information content, and the remaining bits 12-31 can be used as the value filled into the tag sub-region.
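The 6/6/20 split in this example can be sketched as follows (a toy decomposition under the assumption of a 32-bit physical address; the field and function names are illustrative):

```python
# Split a 32-bit address into the fields used by the 4K-byte example:
# bits 0-5 in-block address, bits 6-11 sub-region index, bits 12-31 tag.

def split_address(addr):
    block = addr & 0x3F            # bits 0-5
    index = (addr >> 6) & 0x3F     # bits 6-11
    tag = (addr >> 12) & 0xFFFFF   # bits 12-31
    return tag, index, block

tag, index, block = split_address(0xDEADBEEF)
print(hex(tag), index, block)  # 0xdeadb 59 47
```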
The comparison circuit 320 is used for comparing the target tag that the main control unit of the processor determines from the SRAM 310 according to the 12 low bits of the target virtual address corresponding to the information to be read, with the 20 high bits of the target physical address determined from the TLB 330 according to the target virtual address, and for judging whether the valid bit of the determined target tag is valid, and then generating a hit signal or a miss signal;
the TLB 330 stores the mapping relation between virtual page numbers and the corresponding physical page numbers, wherein the offset address of a virtual address within its page is identical to the offset address of the corresponding physical address within its page;
the transfer buffer 340 is used for, after the comparison circuit 320 generates a miss signal, obtaining the information to be read through the first-level cache 350 under the control of the main control unit, so that the main control unit then sends the obtained information to be read to the processor pipeline;
the storage organization of the first-level cache 350 is the same as in the prior art and is not repeated here.
The main control unit of the processor can send the target virtual address corresponding to the information to be read to the SRAM 310 to read the target information content corresponding to the 12 low bits of the target virtual address together with the corresponding target tag; at the same time, the main control unit can send the same target virtual address to the TLB 330 to obtain the target physical address corresponding to the target virtual address.
A person skilled in the art can understand that, in the case where the SRAM 310 does not store the required information to be read, after the transfer buffer 340, under the control of the main control unit, obtains the information to be read from the first-level cache, the main control unit may update the corresponding information portion of the SRAM 310 with the information that was read, so that next time the information can be read directly from the SRAM 310.
It should be noted that the caching system may further include a transfer buffer arranged between the first-level cache and main memory, which is used for, when the first-level cache does not store the information to be read, fetching the information to be read from main memory under the control of the main control unit of the processor, and then sending the obtained information to be read to the processor pipeline.
In the scheme provided by this embodiment, because a typical program has a hit rate of more than 80%-90% in the directly connected 4K-byte SRAM 310, most information to be read can hit in the SRAM 310; that is, a hit needs only 2 cycles, and a miss needs only the time to access the first-level cache plus 2 cycles, which improves cache access efficiency. At the same time, the power consumed in accessing a 4K-byte cache is low. It can be seen that, by adopting the scheme provided by this embodiment, cache access speed can be improved while low power consumption is ensured.
Further, an embodiment of the present invention also provides a cache access method based on the above caching system of the embodiment of the present invention. As shown in Fig. 3, the method may include:
S301: the main control unit of the processor reads, according to the low bits, of the second number of bits, of the target virtual address corresponding to the information to be read, the target information content and the target tag in the auxiliary cache of the caching system;
S302: determining, in the translation lookaside buffer of the caching system, the target physical address corresponding to the target virtual address;
wherein the target physical page number corresponding to the page number of the target virtual address is determined in the translation lookaside buffer of the caching system, and the target physical address corresponding to the target virtual address is then determined within the determined target physical page.
It should be noted that, because the offset address of a virtual address within a page is identical to the offset address of its physical address, and because the index of an information sub-region in the auxiliary cache together with the in-block address of the corresponding information content form the offset address of the physical address corresponding to that information content, the low bits, of the second number of bits, of the physical address corresponding to the information content are identical to the low bits, of the second number of bits, of the virtual address of the information to be read; therefore, step S301 and step S302 can be carried out simultaneously.
S303: comparing, using the comparison circuit in the caching system, the target tag with the high bits, of the third number of bits, of the target physical address;
S304: judging whether the signal output by the comparison circuit is a hit signal; if so, performing step S305; otherwise, performing step S306;
S305: sending the target information content read out from the auxiliary cache to the processor pipeline as the information to be read;
S306: controlling the transfer buffer of the caching system to obtain the information to be read from the next-level memory, and then sending the information to be read to the processor pipeline.
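The steps S301-S306 above can be sketched end to end (a toy model with illustrative names; the geometry reuses the 6/6/20 example, pages are 4K bytes so the cache index equals the page offset, and the "next-level memory" is just a dictionary):

```python
# Toy end-to-end model of S301-S306: parallel auxiliary-cache read and
# TLB translation, tag compare with valid bit, then hit or miss handling.

PAGE_BITS = 12  # cache capacity equals one 4KB page in this sketch

def access(cache_tags, cache_data, tlb, next_level, vaddr):
    low = vaddr & 0xFFF                  # S301: index + in-block address
    index = low >> 6
    target_tag, valid = cache_tags[index]
    target_info = cache_data[low]
    paddr = (tlb[vaddr >> PAGE_BITS] << PAGE_BITS) | low   # S302 (parallel)
    if valid and target_tag == (paddr >> PAGE_BITS):       # S303/S304
        return target_info, "hit"                          # S305
    return next_level[paddr], "miss"                       # S306

tlb = {0x1: 0x7, 0x2: 0x8}
cache_tags = {0: (0x7, True)}            # entry 0 holds physical page 0x7
cache_data = {0x004: "insn-A", 0x010: "stale"}
next_level = {0x8010: "insn-B"}

print(access(cache_tags, cache_data, tlb, next_level, 0x1004))  # ('insn-A', 'hit')
print(access(cache_tags, cache_data, tlb, next_level, 0x2010))  # ('insn-B', 'miss')
```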
Wherein, the comparison circuit compares the tag that the main control unit of the processor reads from the auxiliary cache according to the low bits, of the second number of bits, of the target virtual address corresponding to the information to be read, with the high bits, of the third number of bits, of the corresponding target physical address determined from the translation lookaside buffer according to the target virtual address; when the comparison result indicates that the two match, it generates a hit signal, so that the main control unit sends the target information content determined by the low bits, of the second number of bits, of the target virtual address to the processor pipeline as the information to be read; otherwise, it generates a miss signal. And after the comparison circuit generates a miss signal, the transfer buffer, under the control of the main control unit of the processor, obtains the information to be read from the next-level memory, so that the main control unit then sends the obtained information to be read to the processor pipeline.
It can be understood that, to ensure that the information content read out is valid, each tag sub-region in the auxiliary cache of the caching system stores the high bits, of the third number of bits, of the physical address corresponding to the information content in the corresponding information sub-region, together with a valid bit; accordingly, the comparison circuit is used for generating a hit signal when it judges that the tag obtained from the auxiliary cache by the main control unit of the processor matches the high bits, of the third number of bits, of the physical address obtained from the translation lookaside buffer and the valid bit stores a valid value; otherwise, it generates a miss signal.
It should be noted that, in the case where the auxiliary cache does not store the information to be read, after the transfer buffer, under the control of the main control unit, obtains the information to be read from the next-level memory, the main control unit may update the corresponding information portion of the auxiliary cache with the information that was read, so that next time the information can be read directly from the auxiliary cache.
In the technical scheme provided by this embodiment of the present invention, the offset address of a virtual address within a page is identical to the offset address of its physical address within the corresponding page, and the index of an information sub-region in the auxiliary cache together with the intra-block address of the corresponding information content equals the offset address of the physical address of that information content. Therefore, the low-order second-quantity bits of the physical address corresponding to the information content are identical to the low-order second-quantity bits of the virtual address of the information to be read, so that the step of reading the target label and target information content from the auxiliary cache and the step of determining the target physical address from the translation lookaside buffer can proceed simultaneously, which raises cache access speed. At the same time, because the capacity of the auxiliary cache is small, the power consumed by accessing it is low. It can be seen that, by adopting this scheme, cache access speed is improved while low power consumption is ensured.
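The reason the two steps can overlap is that, when the cache capacity does not exceed one page, the index and intra-block address lie entirely within the page offset, which translation leaves unchanged. A sketch with illustrative bit widths (4 KiB pages, 16-byte blocks, 256 entries — these specific numbers are assumptions, not from the patent):

```python
# Illustrative widths: 256 blocks * 16 bytes = 4096 bytes = exactly one page.
PAGE_BITS, BLOCK_BITS, INDEX_BITS = 12, 4, 8
assert BLOCK_BITS + INDEX_BITS <= PAGE_BITS  # capacity not greater than a page

def split(addr):
    """Decompose an address into (tag, index, intra-block offset)."""
    block_off = addr & ((1 << BLOCK_BITS) - 1)
    index = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> PAGE_BITS  # the high-order "third quantity" bits
    return tag, index, block_off

# Index and block offset sit inside the page offset, so they are identical
# whether extracted from the virtual or the physical address; the cache
# lookup can therefore start before the TLB finishes translating.
```

Any physical address sharing the low 12 bits of the virtual address yields the same index and block offset, which is exactly the parallelism the scheme exploits.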
The above are merely specific embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
Claims (6)
1. A caching system, comprising a translation lookaside buffer, a comparison circuit, and a level-1 cache, a level-2 cache and a level-3 cache each with a capacity greater than the page size, characterized by further comprising:
an auxiliary cache whose capacity is not greater than one page and which is directly connected to the main memory, and a transfer buffer located between the auxiliary cache and the next-level memory;
wherein the auxiliary cache comprises an information region and a label region, the information region comprises a first quantity of information sub-regions, the label region comprises a first quantity of label sub-regions, and each information sub-region uniquely corresponds to one label sub-region; the low-order second-quantity bits of the physical address of the information content stored in each information sub-region are the intra-block address of the corresponding information content and the index of the corresponding information sub-region, and the label stored in each label sub-region is the high-order third-quantity bits of the physical address corresponding to the information content in the corresponding information sub-region;
the transfer buffer is configured to, after the comparison circuit generates a miss signal and under the control of the main control unit of the processor, obtain the information to be read from the next-level memory and send the acquired information to be read to the processor pipeline;
the translation lookaside buffer stores mappings between the page number of a virtual address and the page number of the corresponding physical address, wherein the offset address of the virtual address within its page is identical to the offset address of the corresponding physical address within its page; and
the next-level memory is the level-1 cache inside the processor.
2. The caching system according to claim 1, characterized in that the label stored in each label sub-region is the high-order third-quantity bits of the physical address corresponding to the information content in the corresponding information sub-region together with a validity bit;
accordingly, the comparison circuit is configured to generate a hit signal when it determines that the label obtained by the main control unit of the processor from the auxiliary cache matches the high-order third-quantity bits of the physical address obtained from the translation lookaside buffer and the validity bit holds a valid value, and to generate a miss signal otherwise.
3. The caching system according to claim 1, characterized in that the second quantity of bits is determined according to the first quantity of entries;
accordingly, the third quantity of bits is determined according to the second quantity of bits.
4. The caching system according to claim 1, characterized in that the auxiliary cache comprises an auxiliary instruction cache or an auxiliary data cache.
5. The caching system according to claim 1, characterized by further comprising:
a cache processing module, configured to, upon receiving a cache operation instruction sent by the main control unit, perform the corresponding processing on the information content in the auxiliary cache corresponding to the physical address carried in the cache operation instruction.
6. A cache access method, characterized in that, based on the caching system according to claim 1, the method comprises:
reading, by the main control unit of the processor according to the low-order second-quantity bits of the target virtual address corresponding to the information to be read, the target information content and the target label in the auxiliary cache of the caching system, and determining, in the translation lookaside buffer of the caching system, the target physical-address page number corresponding to the page number of the target virtual address, and then determining, within the determined target physical-address page, the target physical address corresponding to the target virtual address;
comparing, by the comparison circuit of the caching system, the target label with the high-order third-quantity bits of the target physical address;
after the comparison circuit generates a hit signal, taking the target information content read from the auxiliary cache as the information to be read and sending it to the processor pipeline; and
after the comparison circuit generates a miss signal, controlling the transfer buffer of the caching system to obtain the information to be read from the next-level memory and then sending the information to be read to the processor pipeline.
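The claimed access method can be summarized in a short sketch. The dictionary-based TLB, cache and memory structures and all field widths below are illustrative assumptions of mine, not the patent's implementation:

```python
def access(vaddr, tlb, aux_cache, next_level):
    """Sketch of the claimed method: the cache read and the TLB translation
    are independent (parallel in hardware, sequential here), and the
    comparison circuit then decides hit or miss."""
    PAGE_BITS, BLOCK_BITS, INDEX_BITS = 12, 4, 8
    index = (vaddr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
    # Step 1a: read target label and information content by the low-order bits.
    entry = aux_cache.get(index)            # (tag, valid, data) or None
    # Step 1b: translate the page number through the translation lookaside buffer.
    paddr_high = tlb[vaddr >> PAGE_BITS]    # high-order third-quantity bits
    # Step 2: comparison circuit checks tag and validity bit.
    if entry is not None and entry[1] and entry[0] == paddr_high:
        return entry[2]                     # hit: send content to the pipeline
    # Miss: the transfer buffer fetches from the next-level memory and refills.
    data = next_level[(paddr_high, index)]
    aux_cache[index] = (paddr_high, True, data)
    return data
```

The first access to a line misses and refills the auxiliary cache; subsequent accesses to the same line hit and bypass the next-level memory entirely.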
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310136234.4A CN103207844B (en) | 2013-04-18 | 2013-04-18 | Caching system and cache access method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103207844A CN103207844A (en) | 2013-07-17 |
CN103207844B true CN103207844B (en) | 2017-06-06 |
Family
ID=48755071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310136234.4A Active CN103207844B (en) | 2013-04-18 | 2013-04-18 | Caching system and cache access method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103207844B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112231246A (en) * | 2020-10-31 | 2021-01-15 | 王志平 | Method for realizing processor cache structure |
CN115394332B (en) * | 2022-09-09 | 2023-09-12 | 北京云脉芯联科技有限公司 | Cache simulation realization system, method, electronic equipment and computer storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100409203C (en) * | 2005-10-14 | 2008-08-06 | 杭州中天微系统有限公司 | Method of realizing low power consumption high speed buffer storying and high speed buffer storage thereof |
JP5300407B2 (en) * | 2008-10-20 | 2013-09-25 | 株式会社東芝 | Virtual address cache memory and virtual address cache method |
US8806101B2 (en) * | 2008-12-30 | 2014-08-12 | Intel Corporation | Metaphysical address space for holding lossy metadata in hardware |
CN102208966B (en) * | 2010-03-30 | 2014-04-09 | 中兴通讯股份有限公司 | Hybrid automatic repeat request (HARQ) combiner and HARQ data storage method |
CN102541510B (en) * | 2011-12-27 | 2014-07-02 | 中山大学 | Instruction cache system and its instruction acquiring method |
CN102541761B (en) * | 2012-01-17 | 2014-10-22 | 苏州国芯科技有限公司 | Read-only cache memory applying on embedded chips |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102792285B (en) | For the treatment of the apparatus and method of data | |
CN104133780B (en) | A kind of cross-page forecasting method, apparatus and system | |
CN104252425B (en) | The management method and processor of a kind of instruction buffer | |
CN102460400B (en) | Hypervisor-based management of local and remote virtual memory pages | |
CN104050089B (en) | System on chip and its operating method | |
CN105283855B (en) | A kind of addressing method and device | |
CN102792286B (en) | Address mapping in virtualization process system | |
CN109582214B (en) | Data access method and computer system | |
CN108459975B (en) | Techniques for efficient use of address translation caches | |
CN108139981B (en) | Access method for page table cache TLB table entry and processing chip | |
JP2020529656A (en) | Address translation cache | |
CN110196757A (en) | TLB filling method, device and the storage medium of virtual machine | |
CN104516822B (en) | A kind of memory pool access method and equipment | |
JP6478843B2 (en) | Semiconductor device and cache memory control method | |
EP3553665A1 (en) | Non-volatile memory access method, device, and system | |
CN107229576A (en) | It is a kind of to reduce the apparatus and method that on-chip system runs power consumption | |
CN112840331A (en) | Prefetch management in a hierarchical cache system | |
CN108959125B (en) | Storage access method and device supporting rapid data acquisition | |
CN103207844B (en) | Caching system and cache access method | |
US7562204B1 (en) | Identifying and relocating relocatable kernel memory allocations in kernel non-relocatable memory | |
CN104281545B (en) | A kind of method for reading data and equipment | |
CN114925001A (en) | Processor, page table prefetching method and electronic equipment | |
US20160217079A1 (en) | High-Performance Instruction Cache System and Method | |
CN114637700A (en) | Address translation method for target virtual address, processor and electronic equipment | |
CN104252423B (en) | Consistency processing method and device based on multi-core processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||