CN102103549A - Method for replacing cache - Google Patents

Method for replacing cache

Info

Publication number
CN102103549A
CN102103549A (application CN2009102013772A)
Authority
CN
China
Prior art keywords
cache
auxiliary
data
read
auxiliary cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2009102013772A
Other languages
Chinese (zh)
Inventor
王永流
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huahong Integrated Circuit Co Ltd
Original Assignee
Shanghai Huahong Integrated Circuit Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huahong Integrated Circuit Co Ltd filed Critical Shanghai Huahong Integrated Circuit Co Ltd
Priority to CN2009102013772A priority Critical patent/CN102103549A/en
Publication of CN102103549A publication Critical patent/CN102103549A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a cache replacement method. The method comprises the following steps: providing an auxiliary cache, wherein the auxiliary cache and the cache jointly participate in data storage; performing a read hit test on the cache; if a read cache miss occurs, issuing an external memory read request and temporarily storing the data to be written back in the auxiliary cache; after the read request has been issued, writing the data in the auxiliary cache back to the external memory; and if the read misses in the cache but hits in the auxiliary cache, copying the data from the auxiliary cache to the cache without issuing an external memory read request. By providing the auxiliary cache, useless waiting time in the data reading process is effectively reduced and the operating performance of the application system is improved, so that the power consumption of the system can be effectively reduced.

Description

A cache replacement method
Technical field
The present invention relates to a data caching method, and in particular to a replacement method for a cache used in integrated circuit applications.
Background art
Multimedia technology is now widely used in daily life. Many multimedia applications place increasingly high demands on the processing speed of image acceleration modules and CPUs, and the processing speed of these units is currently limited mainly by reads from and writes to external memory.
A cache can be adopted to meet the needs of such integrated circuit applications. By usage, current caches can be roughly divided into three kinds: read-only caches, write-only caches, and caches that are both readable and writable. The present invention concerns the last kind, the readable and writable cache. When a read cache miss ("miss" for short) occurs, the modified ("dirty") portion must first be written back to external memory before the external memory read request can be issued. The module requesting the data therefore has to wait for the memory write request time, the read request time, and the time for the data to be read back. Taking into account the additional processing time of the arbiter and the memory controller, a conservative estimate is that at least 50 clock cycles are needed to read back the required data; moreover, reads and writes may alternate, and the alternation frequency can be very high. Cache processing thus suffers from long elapsed times and from potentially incorrect data reads.
To address the above problems, a new caching solution is needed that reduces the data read waiting time and saves power while maintaining operating performance.
Summary of the invention
The object of the present invention is to provide a cache replacement method that reduces useless waiting time in the data read process, improves application system performance, and reduces system power consumption.
The present invention relates to a cache replacement method comprising the following steps:
(1) an auxiliary cache is provided; the auxiliary cache and the cache jointly participate in data storage;
(2) a read hit test is performed on the cache; on a read cache miss, an external memory read request is issued, and the data to be written back are temporarily stored in the auxiliary cache;
(3) after the read request has been issued, the data in the auxiliary cache are written back to external memory;
(4) if the read misses in the cache but hits in the auxiliary cache, no external memory read request is needed; the data only need to be copied from the auxiliary cache into the cache (see the sketch following these steps).
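As a concrete illustration only, the following is a minimal C sketch of steps (1) to (4), assuming a direct-mapped cache with one auxiliary line per position and a blocking memory interface; all names (line_t, cached_read, ext_mem_read, ext_mem_write, and the size constants) are hypothetical and not taken from the patent. On a miss in the cache with a hit in the auxiliary cache, the sketch swaps the two lines so that a dirty victim is not lost, which mirrors the write-back of the displaced line described in the embodiments.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_WORDS 8   /* words per line (illustrative) */
#define NUM_SETS   4   /* four positions: A, B, C, D    */

typedef struct {
    bool     valid, dirty;
    uint32_t tag;
    uint32_t data[LINE_WORDS];
} line_t;

static line_t cache[NUM_SETS];   /* main cache:      A,  B,  C,  D  */
static line_t aux[NUM_SETS];     /* auxiliary cache: A', B', C', D' */

/* Placeholders for the external memory / bus interface. */
extern void ext_mem_read(uint32_t tag, uint32_t set, uint32_t *buf);
extern void ext_mem_write(uint32_t tag, uint32_t set, const uint32_t *buf);

/* Read one word, following steps (1)-(4) of the method. */
uint32_t cached_read(uint32_t addr)
{
    uint32_t word = addr % LINE_WORDS;
    uint32_t set  = (addr / LINE_WORDS) % NUM_SETS;
    uint32_t tag  = addr / (LINE_WORDS * NUM_SETS);
    line_t *c = &cache[set], *a = &aux[set];

    if (c->valid && c->tag == tag)            /* read hit in the cache */
        return c->data[word];

    if (a->valid && a->tag == tag) {          /* step (4): miss in cache, hit in auxiliary */
        line_t tmp = *c;                      /* swap lines; no external read is issued    */
        *c = *a;
        *a = tmp;
        if (a->valid && a->dirty) {           /* displaced dirty line is written back      */
            ext_mem_write(a->tag, set, a->data);
            a->dirty = false;
        }
        return c->data[word];
    }

    /* Step (2): miss in both. Park the dirty victim in the auxiliary cache
     * so the external read request can be issued immediately.             */
    if (c->valid && c->dirty)
        *a = *c;
    ext_mem_read(tag, set, c->data);          /* read request goes out first */
    c->valid = true; c->dirty = false; c->tag = tag;

    /* Step (3): after the read request, write the parked line back. */
    if (a->valid && a->dirty) {
        ext_mem_write(a->tag, set, a->data);
        a->dirty = false;
    }
    return c->data[word];
}
```

In this sketch the calls are sequential, but the essential point is the ordering: the read request is issued before the parked line is written back.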
Because the auxiliary cache temporarily holds the data that need to be written back, the read request can be issued immediately. Without it, the corresponding data would first have to be written back to external memory so that the cache frees the corresponding position, and only then could the read request be sent. If the read request were sent before the position was freed, the data read back might not be placed in a suitable position, or might overwrite the data that still need to be written back.
Issuing the read request before the write request saves read waiting time; for writes, the waiting matters far less. One situation remains: when the read request misses in the cache (read cache miss, "miss" for short) but hits in the auxiliary cache (cache hit, "hit" for short), the content has to be copied from the auxiliary cache into the cache. To reduce this data movement further and lower the dynamic power consumption of the integrated circuit, the cache and the auxiliary cache can be allowed to exchange roles: when a miss occurs, the current cache becomes the auxiliary cache and the current auxiliary cache becomes the cache, which eliminates the data transfer between the two.
From the standpoint of integrated circuit implementation, two independent storage units cost more than a single storage unit in both area and power consumption. For this reason, the cache and the auxiliary cache can be merged into one storage unit whose capacity is the sum of the two, with the high address bit distinguishing the cache from the auxiliary cache, as sketched below.
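A minimal C sketch of this merged organization follows, assuming one role bit per position that plays the part of the high address bit; all names (ram, role_bit, cache_line, aux_line, swap_roles) are hypothetical illustrations, not the patented circuit.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS   4
#define LINE_WORDS 8

typedef struct {
    bool     valid, dirty;
    uint32_t tag;
    uint32_t data[LINE_WORDS];
} line_t;

/* One physical storage unit holding both halves: entries 0..3 and 4..7.
 * role_bit[set] records which half currently plays the "cache" role for
 * that position; the other half is its auxiliary line.                  */
static line_t  ram[2 * NUM_SETS];
static uint8_t role_bit[NUM_SETS];   /* 0 or 1: high address bit of the cache half */

static inline line_t *cache_line(uint32_t set) {
    return &ram[(uint32_t)role_bit[set] * NUM_SETS + set];
}

static inline line_t *aux_line(uint32_t set) {
    return &ram[(uint32_t)(role_bit[set] ^ 1u) * NUM_SETS + set];
}

/* Exchanging roles costs one bit flip instead of copying a whole line. */
static inline void swap_roles(uint32_t set) {
    role_bit[set] ^= 1u;
}
```

With this layout the capacity is the sum of the two caches, and the role exchange never moves data, only the meaning of the high address bit.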
The cache replacement method provided by the invention effectively reduces useless waiting time in the data read process and improves application system performance, thereby effectively reducing system power consumption.
Description of drawings
Fig. 1 is a schematic diagram of the structure in which the cache and the auxiliary cache are independent memories;
Fig. 2 is a schematic diagram of the structure in which the cache and the auxiliary cache are located in the same memory.
Embodiment
The above summary of the invention is described in detail below with reference to the accompanying drawings.
Fig. 1 shows the case in which the cache and the auxiliary cache are independent memories. Assume the cache is divided into four parts, A, B, C and D, and the auxiliary cache is likewise divided into four parts, A', B', C' and D'. Three cases arise:
1. A new read request hits B in the cache: the data are read directly from B.
2. A new read request misses in the cache, where the corresponding position is C, and also misses in the auxiliary cache, where the corresponding position is C':
(1) the data in C are read out and written into C';
(2) the read request is issued;
(3) the data in C' are written back to external memory (when the bus is idle; see the sketch after this list);
(4) the data read back from external memory are written into C;
(5) the required data are read from C.
3. A new read request misses in the cache, where the corresponding position is D, but hits in the auxiliary cache at position A':
(1) the data in A' are read out and written into D;
(2) the required data are read from D.
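The parenthetical "when the bus is idle" in case 2, step (3) can be modeled as a deferred write-back. The following C fragment is a hypothetical illustration of that idea only; bus_idle, aux_data and the other names are assumptions, not part of the patent.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical bus and auxiliary-cache hooks. */
extern bool      bus_idle(void);
extern void      ext_mem_write(uint32_t tag, uint32_t set, const uint32_t *buf);
extern uint32_t *aux_data(uint32_t set);

/* One pending write-back of the line parked in the auxiliary cache. */
typedef struct { bool pending; uint32_t tag, set; } writeback_t;
static writeback_t wb;

/* Called right after the external read request has been issued (step (2)). */
void schedule_writeback(uint32_t tag, uint32_t set)
{
    wb.pending = true;
    wb.tag = tag;
    wb.set = set;
}

/* Polled every cycle: drain the parked line only when the bus is free. */
void writeback_poll(void)
{
    if (wb.pending && bus_idle()) {
        ext_mem_write(wb.tag, wb.set, aux_data(wb.set));
        wb.pending = false;
    }
}
```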
Fig. 2 shows the case in which the cache and the auxiliary cache are located in the same memory and can exchange roles. Assume again that the cache is divided into four parts, A, B, C and D, and the auxiliary cache into four parts, A', B', C' and D'. The following cases arise (a sketch of this read path follows the list):
1. A new read request hits B in the cache: the data are read directly from B, and the roles of the cache and the auxiliary cache are unchanged.
2. A new read request misses in the cache, where the corresponding position is C, and also misses in the auxiliary cache, where the corresponding position is C':
(1) auxiliary cache position C' becomes the cache, and cache position C becomes the auxiliary cache;
(2) the read request is issued;
(3) the contents of C are written back to external memory;
(4) the data read back are written into C';
(5) the required data are read from C'.
3. A new read request misses in the cache, where the corresponding position is D, but hits in the auxiliary cache at position A'; then:
(1) auxiliary cache position A' becomes the cache, and cache position D becomes the auxiliary cache;
(2) the required data are read from A';
(3) the data in D are written back to external memory.
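Building on the merged-storage sketch above, the read path for this role-exchanging organization might look as follows; this is again a hypothetical illustration (cached_read_merged and the memory hooks are assumed names), not the patented circuit.

```c
/* Assumes line_t, LINE_WORDS, NUM_SETS, cache_line(), aux_line() and
 * swap_roles() from the merged-storage sketch above.                  */
extern void ext_mem_read(uint32_t tag, uint32_t set, uint32_t *buf);
extern void ext_mem_write(uint32_t tag, uint32_t set, const uint32_t *buf);

uint32_t cached_read_merged(uint32_t addr)
{
    uint32_t word = addr % LINE_WORDS;
    uint32_t set  = (addr / LINE_WORDS) % NUM_SETS;
    uint32_t tag  = addr / (LINE_WORDS * NUM_SETS);

    line_t *c = cache_line(set);
    if (c->valid && c->tag == tag)        /* case 1: hit, roles unchanged */
        return c->data[word];

    swap_roles(set);                      /* cases 2 and 3: exchange roles, no data moved */
    line_t *new_c = cache_line(set);      /* the former auxiliary line */
    line_t *new_a = aux_line(set);        /* the former cache line     */

    if (!(new_c->valid && new_c->tag == tag)) {   /* case 2: miss in both */
        ext_mem_read(tag, set, new_c->data);      /* issue the read first */
        new_c->valid = true;
        new_c->dirty = false;
        new_c->tag   = tag;
    }                                             /* case 3: data already here */

    if (new_a->valid && new_a->dirty) {           /* write the former cache line back */
        ext_mem_write(new_a->tag, set, new_a->data);
        new_a->dirty = false;
    }
    return new_c->data[word];
}
```

In case 3 the data are served directly from the former auxiliary line, now acting as the cache, and only the displaced line is written back, matching steps (1) to (3) above.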

Claims (4)

1. A cache replacement method, comprising:
(1) providing an auxiliary cache, the auxiliary cache and the cache jointly participating in data storage;
(2) performing a read hit test on the cache; on a read cache miss, issuing an external memory read request and temporarily storing the data to be written back in the auxiliary cache;
(3) after the read request has been issued, writing the data in the auxiliary cache back to external memory;
(4) if the read misses in the cache but hits in the auxiliary cache, copying the data directly from the auxiliary cache into the cache without issuing an external memory read request.
2. The cache replacement method of claim 1, wherein the cache and the auxiliary cache are stored in different memories.
3. The cache replacement method of claim 1, wherein the cache and the auxiliary cache are located in the same memory, whose number of storage bits is the sum of the two.
4. The cache replacement method of claim 1 or claim 3, wherein the cache and the auxiliary cache can exchange roles and are distinguished by the high address.
CN2009102013772A 2009-12-18 2009-12-18 Method for replacing cache Pending CN102103549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102013772A CN102103549A (en) 2009-12-18 2009-12-18 Method for replacing cache

Publications (1)

Publication Number Publication Date
CN102103549A true CN102103549A (en) 2011-06-22

Family

ID=44156336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102013772A Pending CN102103549A (en) 2009-12-18 2009-12-18 Method for replacing cache

Country Status (1)

Country Link
CN (1) CN102103549A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1898652A (en) * 2003-11-26 2007-01-17 英特尔公司 Method, system, and apparatus for memory compression with flexible in-memory cache
CN1652092A (en) * 2003-12-09 2005-08-10 国际商业机器公司 Multi-level cache having overlapping congruence groups of associativity sets in different cache levels
CN1779664A (en) * 2004-11-26 2006-05-31 富士通株式会社 Memory control device and memory control method
CN1851677A (en) * 2005-11-25 2006-10-25 华为技术有限公司 Embedded processor system and its data operating method
CN1851673A (en) * 2005-12-13 2006-10-25 华为技术有限公司 Processor system and its data operating method
CN101135993A (en) * 2007-09-20 2008-03-05 华为技术有限公司 Embedded system chip and data read-write processing method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104169892A (en) * 2012-03-28 2014-11-26 华为技术有限公司 Concurrently accessed set associative overflow cache
CN104780122A (en) * 2015-03-23 2015-07-15 中国人民解放军信息工程大学 Control method for hierarchical network-on-chip router based on cache redistribution
CN106250326A (en) * 2016-08-01 2016-12-21 浪潮(北京)电子信息产业有限公司 A kind of data capture method based on SSD and system
CN106250326B (en) * 2016-08-01 2019-05-10 浪潮(北京)电子信息产业有限公司 A kind of data capture method and system based on SSD
CN108733582A (en) * 2017-04-18 2018-11-02 腾讯科技(深圳)有限公司 A kind of data processing method and device
CN108733582B (en) * 2017-04-18 2021-10-29 腾讯科技(深圳)有限公司 Data processing method and device
CN112995693A (en) * 2021-03-04 2021-06-18 深圳市欧瑞博科技股份有限公司 Intelligent processing method of streaming media file, control panel and computer readable storage medium

Similar Documents

Publication Publication Date Title
US8615633B2 (en) Multi-core processor cache coherence for reduced off-chip traffic
EP2380084B1 (en) Method and apparatus for coherent memory copy with duplicated write request
EP2733617A1 (en) Data buffer device, data storage system and method
US7529955B2 (en) Dynamic bus parking
CN100419715C (en) Embedded processor system and its data operating method
TW502164B (en) Method and apparatus for reducing power in cache memories and a data processing system having cache
US20090055591A1 (en) Hierarchical cache memory system
JP2013509655A (en) Address translation unit containing multiple virtual queues
CN102103549A (en) Method for replacing cache
CN110058816B (en) DDR-based high-speed multi-user queue manager and method
CN111651396B (en) Optimized PCIE (peripheral component interface express) complete packet out-of-order management circuit implementation method
US8909862B2 (en) Processing out of order transactions for mirrored subsystems using a cache to track write operations
JP5428617B2 (en) Processor and arithmetic processing method
CN116257191B (en) Memory controller, memory component, electronic device and command scheduling method
CN117472815A (en) Storage module conversion interface under AXI protocol and conversion method thereof
JP2002351741A (en) Semiconductor integrated circuit device
CN102902631B (en) Multiprocessor inter-core transmission method for avoiding data back writing during read-miss
US20100115323A1 (en) Data store system, data restoration system, data store method, and data restoration method
CN111694777B (en) DMA transmission method based on PCIe interface
US8099560B2 (en) Synchronization mechanism for use with a snoop queue
US20160210234A1 (en) Memory system including virtual cache and management method thereof
US20190259448A1 (en) Save and restore scoreboard
CN116166606B (en) Cache control architecture based on shared tightly coupled memory
CN116932425A (en) Task execution method, device and medium of inline computing engine
CN213934866U (en) Adapter card with solid state disk interface and server

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110622