CN104572528A - Method and system for processing access requests by second-level Cache - Google Patents

Method and system for processing access requests by second-level Cache

Info

Publication number
CN104572528A
Authority
CN
China
Prior art keywords
request
data
read
cache
edma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510040704.6A
Other languages
Chinese (zh)
Inventor
李冰
姜伟
徐寅
刘勇
赵霞
王刚
董乾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201510040704.6A priority Critical patent/CN104572528A/en
Publication of CN104572528A publication Critical patent/CN104572528A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G06F13/30Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal with priority control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches

Abstract

The invention discloses a method for processing access requests by a second-level Cache. To remedy the shortcomings of existing second-level Cache access-request techniques, three parallel pipelines process five kinds of access request: L1D read misses, L1D write misses, L1P read misses, EDMA read requests, and EDMA write requests. A buffering mechanism merges multiple write requests with identical offset addresses into a single request to speed up the handling of multiple writes, and offset addresses are compared to reduce the number of snoop operations and thereby improve EDMA read performance. The invention further discloses a system for processing access requests by a second-level Cache. Compared with the prior art, the method and system process access requests more efficiently, have a simple structure and high portability, and can be applied to most microprocessor chips.

Description

Method and system for processing access requests by a second-level Cache
Technical field
The present invention relates to a method and system for processing access requests by a second-level cache, and belongs to the field of microprocessor technology.
Background technology
The widespread use of Cache (cache memory) technology has largely overcome the memory-wall problem that limits microprocessor performance, and advances in very-large-scale integration now make large on-chip Caches feasible, which greatly reduces the Cache miss rate. The first-level Cache is split into a program Cache (L1P) and a data Cache (L1D), while the second-level Cache (L2) is shared by data and program; a miss request is sent to the second-level Cache only when the first-level Cache misses. EDMA is a direct memory access controller connected between the second-level Cache and external memory, used to move large blocks of data between internal and external memory.
Because first-level Cache capacity is limited, and because core processing speed keeps rising and data paths keep widening, the first-level miss rate increases; processing these miss requests in a pipeline effectively reduces the long stalls that misses cause. EDMA read and write requests arrive in bursts of up to 8 requests at a time, so traditional serial processing would stall for many clock cycles.
Existing methods by which a second-level cache processes access requests are serial processing and single-pipeline processing. Serial processing wastes a great deal of time when multiple requests arrive in succession; a single pipeline relieves this blocking to some extent, but the whole pipeline is long and still cannot meet the processing demands of high-performance processors.
Summary of the invention
The technical problem to be solved by the present invention is the low efficiency with which prior-art second-level Caches process access requests; the invention provides a more efficient method and system for a second-level Cache to process access requests.
The present invention solves the above technical problem by the following technical solution:
A method for a second-level Cache to process access requests uses three parallel pipelines, the first through third pipelines, to process access requests in parallel. The first pipeline handles L1 data Cache read misses, L1 data Cache write misses, and L1 program Cache read misses; the second pipeline handles EDMA read requests; the third pipeline handles EDMA write requests. First through third buffer units are provided in one-to-one correspondence with the first through third pipelines: the first buffer unit buffers L1 data Cache write-miss requests and merges buffered write misses whose line-offset addresses are identical into one request; the second buffer unit buffers snooped data and, while the EDMA data is being sent, writes the newest snooped data back into the corresponding data body of the second-level Cache; the third buffer unit buffers EDMA write requests and merges buffered write requests whose line-offset addresses are identical into one.
Preferably, the first pipeline is a five-stage pipeline comprising: address resolution; reading the Tag bank; comparing the Tag value and status bit; reading the data body; and sending the data. The second pipeline is a three-stage pipeline comprising: snoop; return of snooped data or reading of the data body; and data send. The third pipeline is a three-stage pipeline comprising: snoop; data write-back; and EDMA data write.
Further, when processing an EDMA read request, the second pipeline snoops the first-level Cache according to the request address and stores the line-offset address of the snooped data. For each subsequent read request, the requested line-offset address is first compared with the stored one: if identical, the first-level Cache is not snooped again; if different, the first-level Cache is snooped and the new request's line-offset address is stored.
A system for a second-level Cache to process access requests comprises three parallel pipelines, the first through third pipelines, for processing access requests in parallel, and first through third buffer units in one-to-one correspondence with them. The first pipeline handles L1 data Cache read misses, write misses, and L1 program Cache read misses; the second pipeline handles EDMA read requests; the third pipeline handles EDMA write requests. The first buffer unit buffers L1 data Cache write misses and merges those with identical line-offset addresses into one request; the second buffer unit buffers snooped data and, while EDMA data is being sent, writes the newest snooped data back into the corresponding data body of the second-level Cache; the third buffer unit buffers EDMA write requests and merges those with identical line-offset addresses into one.
Preferably, the first pipeline is a five-stage pipeline comprising, in order: an address resolution module, a Tag-bank read module, a Tag and status-bit comparison module, a data-body read module, and a data send module. The second pipeline is a three-stage pipeline comprising, in order: a snoop module, a snooped-data-return-or-data-read module, and a data send module. The third pipeline is a three-stage pipeline comprising, in order: a snoop module, a data write-back module, and an EDMA data write module.
Further, the snoop module of the second pipeline includes a snoop-offset-address compare-and-store unit, which stores the line-offset address of snooped data and compares the line-offset address of each newly arriving read request with the stored one: if identical, the snoop module does not snoop the first-level Cache; if different, it snoops the first-level Cache and stores the new request's line-offset address in the compare-and-store unit.
Preferably, the system further comprises a priority arbitration module that decides the processing priority of miss requests arriving simultaneously; from high to low, the priority order is: L1 program Cache read miss, L1 data Cache write miss, L1 data Cache read miss.
Compared with the prior art, the present invention has the following beneficial effects:
The invention processes multiple requests with parallel pipelines, effectively improving the performance of the whole memory system and hence the processing speed of the whole microprocessor. The bypass mechanism and buffer units improve the performance of the whole pipeline. Because snooping consumes considerable access time, the invention compares offset addresses to reduce the number of snoops and so improves EDMA read performance. Write merging further accelerates the handling of multiple requests. The structure is simple and highly portable and can be applied in most microprocessor chips.
Brief Description of the Drawings
Fig. 1 is the overall block diagram of the system of the present invention for a second-level Cache to process access requests;
Fig. 2 is a schematic diagram of the buffering and merging principle of the write buffer in the embodiment;
Fig. 3 is a block diagram of the miss-request pipeline in the embodiment;
Fig. 4 is the processing flowchart of the miss-request pipeline in the embodiment;
Fig. 5 is a block diagram of the EDMA read-request pipeline in the embodiment;
Fig. 6 is a block diagram of the EDMA write-request pipeline in the embodiment.
Detailed Description of the Embodiments
The technical solution of the present invention is described in detail below with reference to the drawings:
Addressing the shortcomings of existing second-level Cache access-request techniques, the present invention uses three parallel pipelines to pipeline five kinds of access request: L1D read misses, L1D write misses, L1P read misses, EDMA read requests, and EDMA write requests. A buffering mechanism merges multiple write requests with identical offset addresses into one to accelerate the handling of multiple writes, and offset addresses are compared to reduce the number of snoops and so improve EDMA read performance.
Fig. 1 shows the overall structure of the system of the present invention for a second-level Cache to process access requests.
As shown in Fig. 1, the system comprises:
a write buffer, which buffers multiple write-miss requests and merges those with identical offset addresses into one;
a priority arbitration module, which decides the priority of miss requests arriving simultaneously;
a miss-request pipeline, which pipelines L1D read misses, L1D write misses, and L1P read misses;
a data-body storage module, which stores Tag values, status bits, and data;
an EDMA read-request pipeline, which processes EDMA read requests;
an EDMA write-request pipeline, which processes EDMA write requests; and
an EDMA write-merge module, which merges multiple EDMA write requests with identical offset addresses into a single request.
The write buffer caches write-miss requests that arrive in succession and merges those with identical offset addresses into one. Fig. 2 shows its buffering and merging principle. As shown in Fig. 2, the write buffer provides four 64-bit slots. When four write misses arrive in succession (say with offset addresses 100h, 101h, 100h, 100h), each request is parked in its 64-bit slot. Without merging, the write buffer could hold at most 4 write misses. With merging, all write requests whose offset address is 100h are combined into a single write request: because they all target the same line of the second-level Cache, a single write suffices instead of several. With the merge strategy, the write buffer can absorb up to 16 write misses.
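Although the patent describes hardware, the merge rule of the write buffer can be sketched behaviorally in Python. This is an illustrative model under assumed names (class, methods, and the mapping structure are not the patent's signals):

```python
class WriteMergeBuffer:
    """Behavioral model of the write buffer: pending write-miss requests
    to the same line-offset address collapse into one entry, with the
    newest data winning, since they all target the same L2 line."""

    def __init__(self, capacity=4):
        self.capacity = capacity   # the embodiment uses four 64-bit slots
        self.slots = {}            # line-offset address -> latest data

    def push(self, offset, data):
        """Buffer a write miss; merge it if the offset is already pending."""
        if offset in self.slots or len(self.slots) < self.capacity:
            self.slots[offset] = data   # merging = overwriting with newest data
            return True
        return False                    # buffer full: the request must wait

    def drain(self):
        """Issue one merged write per distinct offset and empty the buffer."""
        merged, self.slots = self.slots, {}
        return merged

buf = WriteMergeBuffer()
for offset, data in [(0x100, 0xA), (0x101, 0xB), (0x100, 0xC), (0x100, 0xD)]:
    buf.push(offset, data)

writes = buf.drain()
# The four requests of the text's example collapse into two line writes:
# offset 100h carries the newest data, offset 101h its own.
```

With merging, each distinct line needs only one slot, which is why the same four physical slots can absorb far more back-to-back write misses than a plain FIFO.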
The priority arbitration module judges the priority of miss requests that arrive simultaneously: the L1P read miss has the highest priority, the L1D read miss the lowest, and an L1D read miss is accepted only when the write buffer is empty. The procedure is as follows:
Step 1: receive multiple miss requests;
Step 2: first check whether a request is an L1D write miss; if so, go to step 3 and then continue with the next request; if not, go to step 4;
Step 3: park the write miss in the write buffer;
Step 4: check whether the request is an L1D read miss; if not, the request is an L1P read miss and is sent directly into the pipeline; if so, go to step 5;
Step 5: check whether the write buffer is empty, i.e. whether all write misses have been processed; if empty, process the L1D read miss; if not, delay it until the write buffer is empty.
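These arbitration steps can be sketched as a small Python function. The request tags ('L1P_READ_MISS' and so on) are illustrative names, not the patent's signals:

```python
def dispatch(requests, write_buffer):
    """Behavioral sketch of the arbitration steps: L1D write misses are
    parked in the write buffer (steps 2-3), L1P read misses enter the
    pipeline immediately (step 4), and an L1D read miss is held until
    the write buffer has drained (step 5)."""
    to_pipeline, held = [], []
    for req in requests:
        if req == 'L1D_WRITE_MISS':
            write_buffer.append(req)     # step 3: park in the write buffer
        elif req == 'L1P_READ_MISS':
            to_pipeline.append(req)      # step 4: straight into the pipeline
        elif req == 'L1D_READ_MISS':
            if write_buffer:             # step 5: writes still pending
                held.append(req)
            else:
                to_pipeline.append(req)
    return to_pipeline, held

write_buf = []
ready, held = dispatch(['L1P_READ_MISS', 'L1D_WRITE_MISS', 'L1D_READ_MISS'],
                       write_buf)
# The L1P read miss enters the pipeline, the write miss is parked,
# and the L1D read miss is held because the write buffer is now non-empty.
```

Holding the L1D read behind pending writes preserves ordering: a read must not bypass an earlier buffered write to the same line.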
The miss-request pipeline pipelines L1D read misses, L1D write misses, and L1P read misses. It has five stages: once a request advances from the first stage to the second, the next request can enter the first stage, so there is no need to wait for one request to finish completely before starting the next. Its basic structure, shown in Fig. 3, comprises five modules: address resolution, Tag-bank read, Tag and status-bit compare, data-body read, and data send.
The address resolution module parses the request address to determine whether the request targets the Cache, the SRAM, or external memory; each memory bank corresponds to a specific address range, so the target bank can be determined from the request address.
The Tag-bank read module reads the Tag value and status flag bit from the Tag bank according to the line index in the request.
The Tag and status-bit comparison module compares the read Tag with the request's Tag to judge whether there is a hit; on a hit, it checks whether the status flag bit is valid, and only if valid can data be read from the Cache body.
The data-body read module reads data from the corresponding storage via one of three paths: from the Cache body, from the SRAM, or via EDMA.
The data send module sends the read data back to L1.
Fig. 4 shows how the miss-request pipeline processes a miss request:
Step 1: the address resolution module parses the request address; the requested data may reside in one of three places: the Cache part of L2, the SRAM part, or external memory;
Step 2: the address resolution module checks whether the line index in the address belongs to the Cache space; if not, go to step 3; if so, go to step 4;
Step 3: the address resolution module checks whether the line index belongs to the SRAM space; if not, go to step 8; if so, go to step 9;
Step 4: the Tag-bank read module reads the Tag value and corresponding status flag bit from the Tag bank according to the line index;
Step 5: the Tag and status-bit comparison module compares the request's Tag with the Tag read from the Tag bank; if identical, go to step 6; if different, go to step 8;
Step 6: the comparison module checks whether the status flag bit is valid; if valid, go to step 7; if invalid, go to step 8;
Step 7: the data-body read module reads the hit data from the hit Cache body;
Step 8: the data-body read module fetches the required data from external memory via EDMA;
Step 9: the data-body read module reads the corresponding data from the on-chip SRAM;
Step 10: the data send module returns the read data to the requester.
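The decision flow of steps 1-10 can be sketched as a small Python function. The bank layouts and all names here are illustrative assumptions, not the patent's structures:

```python
def l2_lookup(index, tag, tag_bank, data_bank, sram_space):
    """Choose the data source for a miss request, mirroring steps 1-10:
    the Cache body on a valid tag hit, on-chip SRAM when the index falls
    in the SRAM space, otherwise an EDMA fetch from external memory."""
    if index in tag_bank:                       # step 2: index in the Cache space
        stored_tag, valid = tag_bank[index]     # step 4: read Tag value + status bit
        if stored_tag == tag and valid:         # steps 5-6: tag compare, valid check
            return ('CACHE', data_bank[index])  # step 7: read the hit data
        return ('EDMA', None)                   # step 8: fetch from external memory
    if index in sram_space:                     # step 3: index in the SRAM space
        return ('SRAM', sram_space[index])      # step 9: read on-chip SRAM
    return ('EDMA', None)                       # step 8: external memory via EDMA

# Illustrative bank contents (index -> (tag, valid) and index -> data).
tag_bank = {0x10: (0xABC, True), 0x11: (0x123, False)}
data_bank = {0x10: 42, 0x11: 99}
sram_space = {0x20: 7}
```

Note that a tag match with an invalid status bit (index 0x11 above) still falls through to the EDMA path, exactly as step 6 requires.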
The data-body storage module stores the corresponding data; it comprises the Tag bank, the Cache bank, and the SRAM bank. The Tag bank stores the Tag value and status flag bit of each Cache line.
The EDMA read-request pipeline pipelines EDMA burst read requests. Fig. 5 shows its structure: three stages comprising a snoop module, a snooped-data-return-or-data-read module, and a data send module, where the snoop module includes a snoop-offset-address compare-and-store unit. The pipeline processes read requests as follows:
The snoop module can accept a burst of up to 8 EDMA read requests at once. For the first request, it snoops L1 and the compare-and-store unit stores the snooped offset address; the dirty bit of the corresponding L1 line is then read: a dirty bit of 1 means the line has been modified and the newest data must be written back, while a dirty bit of 0 means the line is unmodified and no write-back is needed. For requests 2 through 8, the compare-and-store unit first compares the request's offset address with those in the scratch list; if the offset address matches, no further snoop is needed. When all 8 requests have been processed, the scratch list is cleared. For example, if the offset addresses of the 8 requests are 100h, 100h, 101h, 102h, 101h, 103h, 100h, 104h: the first request snoops the line at offset 100h in L1 and stores 100h; the second request matches the stored address and does not snoop; the third request's offset 101h differs from the scratch list, so a snoop is performed and 101h is stored; and so on. When the eighth request finishes, the scratch list is fully cleared.
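The snoop-reduction rule, applied to the burst in the example above, can be sketched in Python. This is a behavioral model, not the hardware; the function and variable names are illustrative:

```python
def filter_snoops(offsets):
    """Within one EDMA burst of up to 8 read requests, only offsets not
    yet in the scratch list trigger an L1 snoop; the scratch list is
    cleared when the burst ends (here, when the function returns)."""
    scratch, snooped = set(), []
    for off in offsets:
        if off not in scratch:    # new offset: snoop L1 and record it
            scratch.add(off)
            snooped.append(off)
        # an offset already in the scratch list skips the snoop entirely
    return snooped

burst = [0x100, 0x100, 0x101, 0x102, 0x101, 0x103, 0x100, 0x104]
snoops = filter_snoops(burst)
# Only 5 of the 8 requests actually snoop L1: 100h, 101h, 102h, 103h, 104h.
```

Since each snoop costs substantial access time, cutting the example's snoops from 8 to 5 is the source of the EDMA read-performance gain the patent claims.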
If a snoop is performed and the dirty bit is 1, the second-stage snooped-data-return-or-data-read module reads the newest data from the L1 data body. The third-stage data send module then writes the data returned by L1 into the send buffer; while the data is being sent, the newest L1 data is written back into the L2 data body.
If no snoop is needed, or the dirty bit is 0, the second stage instead reads the corresponding line from the L2 data body. The read data is passed to the data buffer in the third stage and then sent.
The EDMA write-request pipeline pipelines bursts of EDMA write requests. As shown in Fig. 6, it has three stages:
the snoop module, which snoops the corresponding L1 line according to the request address; if the dirty bit is 1, the request enters the second stage, and if 0, it enters the third stage;
the data write-back module, which writes the newest snooped L1 data back into the corresponding L2 data body;
the EDMA data write module, which writes the write request's data into L2; if the corresponding L2 line already holds data, that data must first be written back to external memory.
A write-merge module merges multiple EDMA write requests with identical offset addresses into a single request, following the same merging principle as the write buffer.
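The three stages above can be sketched behaviorally, with dicts standing in for the L2 data body and external memory. All names are illustrative, and the assumption that the freshly written-back L1 data (rather than older L2 contents) is what gets evicted to external memory is a modeling choice:

```python
def edma_write(line, dirty_l1_data, l2_body, ext_mem, new_data):
    """Behavioral sketch of the EDMA write pipeline. dirty_l1_data is
    the snooped L1 line's data when its dirty bit is 1, else None."""
    if dirty_l1_data is not None:      # stage 1 snoop hit a dirty L1 line;
        l2_body[line] = dirty_l1_data  # stage 2: write newest L1 data to L2
    if line in l2_body:                # stage 3: existing L2 data is first
        ext_mem[line] = l2_body[line]  # written back to external memory
    l2_body[line] = new_data           # stage 3: EDMA data written into L2

l2_body, ext_mem = {0x7: 1}, {}
edma_write(0x7, 5, l2_body, ext_mem, 9)
# The dirty L1 data (5) is written back to L2, evicted to external
# memory, and the EDMA data (9) ends up in the L2 line.
```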
The invention provides a pipelined method for a second-level Cache to process access requests, in which three pipelines handle five classes of request. The pipeline mechanism effectively improves processing performance, and the write buffer and write-merge mechanism further strengthen request handling. The invention also reduces the number of snoops to shorten the EDMA read-request processing time; the same approach can be applied to EDMA write requests, second-level Cache line replacement, data marking, and other areas.
The invention further provides a pipelined system for a second-level Cache to process access requests. The system can be applied to most microprocessor chips, improves the performance of the whole memory system, and is significant for raising the processing power of microprocessor chips.

Claims (7)

1. A method for a second-level Cache to process access requests, characterized in that three parallel pipelines, the first through third pipelines, process access requests in parallel: the first pipeline handles L1 data Cache read misses, L1 data Cache write misses, and L1 program Cache read misses; the second pipeline handles EDMA read requests; the third pipeline handles EDMA write requests; and first through third buffer units are provided in one-to-one correspondence with the pipelines, wherein the first buffer unit buffers L1 data Cache write misses and merges buffered write misses with identical line-offset addresses into one request, the second buffer unit buffers snooped data and, while EDMA data is being sent, writes the newest snooped data back into the corresponding data body of the second-level Cache, and the third buffer unit buffers EDMA write requests and merges buffered write requests with identical line-offset addresses into one.
2. The method for a second-level Cache to process access requests of claim 1, characterized in that the first pipeline is a five-stage pipeline comprising: address resolution, Tag-bank read, Tag and status-bit compare, data-body read, and data send; the second pipeline is a three-stage pipeline comprising: snoop, snooped-data return or data-body read, and data send; and the third pipeline is a three-stage pipeline comprising: snoop, data write-back, and EDMA data write.
3. The method for a second-level Cache to process access requests of claim 2, characterized in that, when processing an EDMA read request, the second pipeline snoops the first-level Cache according to the request address and stores the line-offset address of the snooped data; for each subsequent read request, the requested line-offset address is first compared with the stored one: if identical, the first-level Cache is not snooped; if different, the first-level Cache is snooped and the new request's line-offset address is stored.
4. A system for a second-level Cache to process access requests, characterized by comprising three parallel pipelines, the first through third pipelines, for processing access requests in parallel, and first through third buffer units in one-to-one correspondence with the pipelines; the first pipeline handles L1 data Cache read misses, write misses, and L1 program Cache read misses, the second pipeline handles EDMA read requests, and the third pipeline handles EDMA write requests; the first buffer unit buffers L1 data Cache write misses and merges those with identical line-offset addresses into one request, the second buffer unit buffers snooped data and, while EDMA data is being sent, writes the newest snooped data back into the corresponding data body of the second-level Cache, and the third buffer unit buffers EDMA write requests and merges those with identical line-offset addresses into one.
5. The system for a second-level Cache to process access requests of claim 4, characterized in that the first pipeline is a five-stage pipeline comprising, in order: an address resolution module, a Tag-bank read module, a Tag and status-bit comparison module, a data-body read module, and a data send module; the second pipeline is a three-stage pipeline comprising, in order: a snoop module, a snooped-data-return-or-data-read module, and a data send module; and the third pipeline is a three-stage pipeline comprising, in order: a snoop module, a data write-back module, and an EDMA data write module.
6. The system for a second-level Cache to process access requests of claim 5, characterized in that the snoop module of the second pipeline includes a snoop-offset-address compare-and-store unit, which stores the line-offset address of snooped data and compares the line-offset address of each newly arriving read request with the stored one: if identical, the snoop module does not snoop the first-level Cache; if different, it snoops the first-level Cache and stores the new request's line-offset address in the compare-and-store unit.
7. The system for a second-level Cache to process access requests of any one of claims 4 to 6, characterized in that the system further comprises a priority arbitration module that decides the processing priority of miss requests arriving simultaneously, the priority from high to low being: L1 program Cache read miss, L1 data Cache write miss, L1 data Cache read miss.
CN201510040704.6A 2015-01-27 2015-01-27 Method and system for processing access requests by second-level Cache Pending CN104572528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510040704.6A CN104572528A (en) 2015-01-27 2015-01-27 Method and system for processing access requests by second-level Cache


Publications (1)

Publication Number Publication Date
CN104572528A true CN104572528A (en) 2015-04-29

Family

ID=53088643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510040704.6A Pending CN104572528A (en) 2015-01-27 2015-01-27 Method and system for processing access requests by second-level Cache

Country Status (1)

Country Link
CN (1) CN104572528A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107430551A (en) * 2015-12-01 2017-12-01 华为技术有限公司 Data cache method, memory control device and storage device
CN107430551B (en) * 2015-12-01 2020-10-23 华为技术有限公司 Data caching method, storage control device and storage equipment
CN111881068A (en) * 2020-06-30 2020-11-03 北京思朗科技有限责任公司 Multi-entry fully associative cache memory and data management method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101008921A (en) * 2007-01-26 2007-08-01 浙江大学 Embedded heterogeneous polynuclear cache coherence method based on bus snooping
US20070180221A1 (en) * 2006-02-02 2007-08-02 Ibm Corporation Apparatus and method for handling data cache misses out-of-order for asynchronous pipelines
CN101334759A (en) * 2007-06-28 2008-12-31 国际商业机器公司 L2 cache/nest address translation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Sheng (刘胜): "Design and Implementation of an Efficient On-Chip Second-Level Cache Controller for a DSP", China Masters' Theses Full-text Database, Information Science and Technology *


Similar Documents

Publication Publication Date Title
CN103324585B (en) Cooperation in the processor of hierarchical cache prefetches process
TWI454909B (en) Memory device, method and system to reduce the power consumption of a memory device
CN102662868B (en) For the treatment of dynamic group associative cache device and the access method thereof of device
KR102404643B1 (en) Hbm with in-memory cache anager
JP2017033584A (en) Apparatus and methods to reduce castouts in multi-level cache hierarchy
CN112558889B (en) Stacked Cache system based on SEDRAM, control method and Cache device
US20120297256A1 (en) Large Ram Cache
US20130326145A1 (en) Methods and apparatus for efficient communication between caches in hierarchical caching design
CN101013402A (en) Methods and systems for processing multiple translation cache misses
CN105183662A (en) Cache consistency protocol-free distributed sharing on-chip storage framework
CN1815626A (en) Storage access controller and storage access method
CN102541510A (en) Instruction cache system and its instruction acquiring method
CN105095104B (en) Data buffer storage processing method and processing device
CN112559433B (en) Multi-core interconnection bus, inter-core communication method and multi-core processor
CN104572528A (en) Method and system for processing access requests by second-level Cache
CN101667159A (en) High speed cache system and method of trb
CN100414518C (en) Improved virtual address conversion and converter thereof
CN103870204B (en) Data write-in and read method, cache controllers in a kind of cache
CN104252423B (en) Consistency processing method and device based on multi-core processor
CN106126450B (en) A kind of the Cache design structures and method of reply multi-core processor snoop accesses conflict
CN102646071A (en) Device and method for executing write hit operation of high-speed buffer memory at single period
US20040153610A1 (en) Cache controller unit architecture and applied method
CN111158753A (en) Flash controller structure with data prefetching function and implementation method thereof
CN115357292A (en) Non-blocking data Cache structure
CN101667158A (en) High speed cache system for stream context

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150429

WD01 Invention patent application deemed withdrawn after publication