EP1387276A2 - Verfahren und Vorrichtung zur Speicherverwaltung - Google Patents
Verfahren und Vorrichtung zur Speicherverwaltung (Method and apparatus for memory management)
- Publication number
- EP1387276A2 (application number EP03291913A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- memory
- data
- stack
- cache
- cache line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0253—Garbage collection, i.e. reclamation of unreferenced memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0891—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
- G06F12/1036—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30098—Register arrangements
- G06F9/3012—Organisation of register space, e.g. banked or distributed register file
- G06F9/30134—Register stacks; shift registers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0875—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/50—Control mechanisms for virtual memory, cache or TLB
- G06F2212/502—Control mechanisms for virtual memory, cache or TLB using adaptive policy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/68—Details of translation look-aside buffer [TLB]
- G06F2212/681—Multi-level TLB, e.g. microTLB and main TLB
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates generally to processor based systems and more particularly to memory management techniques for the processor based system.
- multimedia functionality may include, without limitation, games, audio decoders, digital cameras, etc. It is thus desirable to implement such functionality in an electronic device in a way that, all else being equal, is fast, consumes as little power as possible and requires as little memory as possible. Improvements in this area are desirable.
- the apparatuses may include a processor, a memory coupled to the processor, a stack that exists in memory and contains stack data, and a memory controller coupled to the memory.
- the memory may further include multiple levels.
- the processor may issue data requests and the memory controller may adjust memory management policies between the various levels of memory based on whether the data requests refer to stack data. In this manner, data may be written to a first level of memory without allocating data from a second level of memory. Thus, memory access time may be reduced and overall power consumption may be reduced.
- the subject matter disclosed herein is directed to a processor based system comprising multiple levels of memory.
- the processor based system described herein may be used in a wide variety of electronic systems.
- One example comprises using the processor based system in a portable, battery-operated cell phone.
- data may be transferred between the processor and the multiple levels of memory, where the time associated with accessing each level of memory may vary depending on the type of memory used.
- the processor based system may implement one or more features that reduce the number of transfers among the multiple levels of memory. Consequently, the time spent transferring data between the multiple levels of memory may be reduced, as may the overall power consumed by the processor based system.
- FIG. 1 illustrates a system 10 comprising a processor 12 coupled to a first level or cache memory 14, a second level or main memory 16, and a disk array 17.
- the processor 12 comprises a register set 18, decode logic 20, an address generation unit (AGU) 22, an arithmetic logic unit (ALU) 24, and an optional micro-stack 25.
- Cache memory 14 comprises a cache controller 26 and an associated data storage space 28.
- the cache memory 14 may be implemented in accordance with the preferred embodiment described below and in copending applications entitled "Cache with multiple fill modes," filed June 9, 2000, serial no. 09/591,656; "Smart cache," filed June 9, 2000, serial no. 09/591,537; and publication no. 2002/0065990.
- Main memory 16 comprises a storage space 30, which may contain contiguous amounts of stored data.
- main memory 16 may include a stack 32.
- cache memory 14 also may contain portions of the stack 32.
- Stack 32 preferably contains data from the processor 12 in a last-in-first-out manner (LIFO).
- Register set 18 may include multiple registers such as general purpose registers, a program counter, and a stack pointer. The stack pointer preferably indicates the top of the stack 32. Data may be produced by system 10 and added to the stack by "pushing" data at the address indicated by the stack pointer. Likewise, data may be retrieved and consumed from the stack by "popping" data from the address indicated by the stack pointer.
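- For illustration only (this sketch is not part of the patent text), the LIFO push/pop behaviour described above can be expressed in C roughly as follows, assuming a word-sized stack that grows toward higher addresses:

```c
#include <stdint.h>

#define STACK_WORDS 256

static uint32_t stack_mem[STACK_WORDS];  /* backing storage for stack 32        */
static uint32_t *stack_ptr = stack_mem;  /* stack pointer from register set 18  */

/* Push: store the value at the address indicated by the stack pointer,
 * then advance the pointer (the stack grows toward higher addresses here). */
static void push(uint32_t value)
{
    *stack_ptr++ = value;
}

/* Pop: move the pointer back and return the most recently pushed value
 * (last-in-first-out). */
static uint32_t pop(void)
{
    return *--stack_ptr;
}
```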
- selected data from cache memory 14 and main memory 16 may exist in the micro-stack 25.
- the access times and cost associated with each memory level illustrated in Figure 1 may be adapted to achieve optimal system performance.
- the cache memory 14 may be part of the same integrated circuit as the processor 12, and main memory 16 may be external to the processor 12. In this manner, the cache memory 14 may have a relatively quick access time compared to main memory 16; however, the cost (on a per-bit basis) of cache memory 14 may be greater than the cost of main memory 16.
- internal caches, such as cache memory 14, are generally small compared to external memories, such as main memory 16, so that only a small part of the main memory 16 resides in cache memory 14 at a given time. Therefore, reducing data transfers between the cache memory 14 and the main memory 16 may be a key factor in reducing the latency and power consumption of the system.
- Software may be executed on the system 10, such as an operating system (OS) as well as various application programs.
- processor 12 may issue effective addresses along with read or write requests, and these requests may be satisfied by various system components (e.g., cache memory 14, main memory 16, or micro-stack 25) according to a memory mapping function.
- because various system components may satisfy read/write requests, the software may be unaware whether a given request is satisfied via cache memory 14, main memory 16, or the micro-stack 25.
- traffic to and from the processor 12 is in the form of words, where the size of the word may vary depending on the architecture of the system 10. Rather than access a single word from main memory 16, each entry in cache memory 14 preferably contains multiple words referred to as a "cache line".
- the principle of locality states that within a given period of time, programs tend to reference a relatively confined area of memory repeatedly.
- because of this locality, caching data in a small, fast memory (e.g., cache memory 14) reduces the number of accesses to the slower main memory 16; it is therefore desirable to reuse the cache lines in cache memory 14 as much as possible before replacing a cache line.
- Controller 26 may implement various memory management policies.
- Figure 2 illustrates an exemplary implementation of cache memory 14 including the controller 26 and the storage space 28. Although some of the Figures may illustrate controller 26 as part of cache memory 14, the location of controller 26, as well as its functional blocks, may be located anywhere within the system 10.
- Storage space 28 includes a tag memory 36, valid bits 38, and multiple data arrays 40.
- Data arrays 40 contain cache lines, such as CL0 and CL1, where each cache line includes multiple data words as shown.
- Tag memory 36 preferably contains the addresses of data stored in the data arrays 40, e.g., ADDR0 and ADDR1 correspond to cache lines CL0 and CL1, respectively.
- Valid bits 38 indicate whether the data stored in the data arrays 40 are valid. For example, cache line CL0 may be enabled and valid, whereas cache line CL1 may be disabled and invalid.
- Controller 26 includes compare logic 42 and word select logic 44.
- the controller 26 may receive an address request 45 from the AGU 22 via an address bus, and data may be transferred between the controller 26 and the ALU 24 via a data bus.
- the size of address request 45 may vary depending on the architecture of the system 10.
- Address request 45 may include an upper portion ADDR[H] that indicates which cache line the desired data is located in, and a lower portion ADDR[L] that indicates the desired word within the cache line.
- Compare logic 42 may compare a first part of ADDR[H] to the contents of tag memory 36, where the contents of the tag memory 36 that are compared correspond to the cache lines indicated by a second part of ADDR[H].
- If the requested data address is located in tag memory 36 and the valid bit 38 associated with the requested data address is enabled, then compare logic 42 generates a "cache hit" and the cache line may be provided to the word select logic 44.
- Word select logic 44 may determine the desired word from within the cache line based on the lower portion of the data address ADDR[L], and the requested data word may be provided to the processor 12 via the data bus. Otherwise, compare logic 42 generates a cache miss causing an access to the main memory 16.
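- To make the interplay of tag memory 36, valid bits 38, data arrays 40 and the ADDR[H]/ADDR[L] split concrete, the following simplified, direct-mapped C sketch is offered as an illustration only; the line size, line count, and direct mapping are assumptions, not the patent's implementation:

```c
#include <stdbool.h>
#include <stdint.h>

#define WORDS_PER_LINE 8   /* assumed number of words per cache line        */
#define NUM_LINES      64  /* assumed number of lines in storage space 28   */

struct cache_line {
    uint32_t tag;                   /* entry in tag memory 36                */
    bool     valid;                 /* valid bit 38                          */
    uint32_t words[WORDS_PER_LINE]; /* one cache line in data arrays 40      */
};

static struct cache_line cache[NUM_LINES];

/* Look up a single word: ADDR[L] selects the word within the line, the low
 * part of ADDR[H] selects the line, and the remaining bits form the tag.   */
static bool cache_lookup(uint32_t word_addr, uint32_t *out)
{
    uint32_t word  = word_addr % WORDS_PER_LINE;               /* ADDR[L]    */
    uint32_t index = (word_addr / WORDS_PER_LINE) % NUM_LINES; /* ADDR[H] lo */
    uint32_t tag   = word_addr / WORDS_PER_LINE / NUM_LINES;   /* ADDR[H] hi */

    struct cache_line *line = &cache[index];
    if (line->valid && line->tag == tag) { /* compare logic 42: "cache hit"  */
        *out = line->words[word];          /* word select logic 44           */
        return true;
    }
    return false;                          /* "cache miss": go to memory 16  */
}
```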
- Decode logic 20 may generate the address of the data request and may provide the controller 26 with additional information about the address request. For example, the decode logic 20 may indicate the type of data access, i.e., whether the requested data address belongs on the stack 32 (illustrated in Figure 1). Using this information, the controller 26 may implement cache management policies that are optimized for stack based operations as described below.
- FIG. 3 illustrates an exemplary cache management policy 48 that may be implemented by the controller 26.
- Block 50 illustrates a request for data.
- the AGU 22 may provide the address request 45 to the controller 26.
- Controller 26 then may determine whether the data is present in cache memory 14, as indicated by block 52. If the data is present in cache memory 14, a cache hit may be generated, and cache memory 14 may satisfy the data request as indicated in block 54. Alternatively, the controller 26 may determine that the requested address is not present in the cache memory 14 and a "cache miss" may be generated. Controller 26 may then determine whether the initial data request (block 50) refers to data that is part of the stack 32, sometimes called "stack data", as indicated by block 56.
- Decode logic 20, illustrated in Figure 2, may provide the controller 26 with information indicating whether the initial request for data was for stack data.
- If the initial data request does not refer to stack data, traditional read and write miss policies may be implemented, as indicated by block 58.
- one cache miss policy that may be implemented when the initial data request was a write operation is a "write allocate".
- Write allocating involves bringing a desired cache line into cache memory 14 from the main memory 16 and setting its valid bit 38.
- the data write then updates the data within the cache memory 14, either once the cache line has been loaded into cache memory 14 or while the cache line is being loaded.
- Another cache miss policy resulting from a write operation is called "write no-allocate".
- a write no-allocate operation involves updating data in main memory 16, but not bringing this data into the cache memory 14. Since no cache lines are transferred to cache memory 14, the valid bits 38 are not set or enabled.
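- Continuing the hypothetical sketch above (same cache structures; main_memory_read_line and main_memory_write_word are assumed helpers standing in for accesses to main memory 16), the two traditional write-miss policies might look like this:

```c
/* Assumed helpers for accesses to main memory 16 (declarations only). */
void main_memory_read_line(uint32_t word_addr, uint32_t *line_words);
void main_memory_write_word(uint32_t word_addr, uint32_t value);

/* "Write allocate": bring the whole cache line in from main memory 16,
 * set its valid bit 38, then perform the write within the cache.          */
void write_miss_allocate(uint32_t word_addr, uint32_t value)
{
    uint32_t word  = word_addr % WORDS_PER_LINE;
    uint32_t index = (word_addr / WORDS_PER_LINE) % NUM_LINES;
    struct cache_line *line = &cache[index];

    main_memory_read_line(word_addr, line->words);  /* costly line fill     */
    line->tag   = word_addr / WORDS_PER_LINE / NUM_LINES;
    line->valid = true;
    line->words[word] = value;
}

/* "Write no-allocate": update main memory 16 directly; the data is not
 * brought into the cache and no valid bit 38 is set.                      */
void write_miss_no_allocate(uint32_t word_addr, uint32_t value)
{
    main_memory_write_word(word_addr, value);
}
```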
- if the initial data request does refer to stack data, stack based cache management policies may be implemented instead of a traditional cache management policy.
- the stack based cache management policies may be further adapted depending on whether the initial request for data was a read request or a write request, as indicated in block 60.
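- The miss-handling decision flow of blocks 56-62 in Figure 3 could be sketched as a simple dispatch (again only an illustration; is_stack_access stands in for the hint from decode logic 20, and the handlers are the hypothetical routines sketched before and after this point):

```c
enum access_type { ACCESS_READ, ACCESS_WRITE };

/* Hypothetical policy handlers (sketched separately).                      */
void traditional_miss(uint32_t word_addr, enum access_type type,
                      uint32_t write_value);               /* block 58      */
void stack_write_miss(uint32_t word_addr, uint32_t value); /* block 62      */
uint32_t stack_read_miss(uint32_t word_addr);              /* no line fill  */

/* Once a cache miss has been detected (block 56), dispatch on whether the
 * request refers to stack data and on whether it is a read or a write.     */
void handle_miss(uint32_t word_addr, enum access_type type,
                 bool is_stack_access, uint32_t write_value)
{
    if (!is_stack_access) {
        traditional_miss(word_addr, type, write_value);
    } else if (type == ACCESS_WRITE) {
        stack_write_miss(word_addr, write_value);
    } else {
        (void)stack_read_miss(word_addr);
    }
}
```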
- the stack 32 expands and contracts. Data are pushed onto the stack and popped off of the top of the stack in a sequential manner; that is, stack data are not accessed at random addresses but at sequential addresses.
- when the system 10 is addressing stack data, the corresponding address in memory increases as the stack grows (e.g., when the system 10 pushes a value onto the stack).
- when stack data is written to cache memory 14 within a new cache line, it is always written to the first word of that cache line, and subsequent stack data are written to the following words of the cache line.
- for example, word W0 would be written to before word W1. Since data pushed from the processor 12 represents the most recent version of the data in the system 10, consulting main memory 16 on a cache miss is unnecessary.
- on a cache supporting a write-allocate policy, data may therefore be written to cache memory 14 and the associated line set to valid (using valid bit 38) on a cache miss, without fetching cache lines from main memory 16, as indicated by block 62.
- the system 10 may disregard fetching the data from memory 16 (since data from the processor 12 is the most recent version in the system 10).
- Valid bits 38 associated with the various cache lines then may be enabled so that subsequent words within the cache line may be written without fetching from main memory 16.
- the data write is performed only within the cache, and the write to main memory may be avoided. Accordingly, the time and power associated with accessing main memory 16 may be minimized. In addition, the bandwidth may be improved as a result of fewer transfers between cache memory 14 and main memory 16.
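- One possible reading of this stack write-miss policy in code (an assumption-laden sketch continuing the structures above): the line is simply claimed in the cache and marked valid, with no fetch from, and no write-through to, main memory 16, because the pushed data is already the most recent copy in the system:

```c
/* Stack write miss (block 62): write into cache memory 14 and set the line's
 * valid bit 38 without fetching the line from main memory 16; subsequent
 * pushes to the following words of the line then hit in the cache.          */
void stack_write_miss(uint32_t word_addr, uint32_t value)
{
    uint32_t word  = word_addr % WORDS_PER_LINE;
    uint32_t index = (word_addr / WORDS_PER_LINE) % NUM_LINES;
    struct cache_line *line = &cache[index];

    line->tag   = word_addr / WORDS_PER_LINE / NUM_LINES;
    line->valid = true;
    line->words[word] = value;   /* no access to main memory 16 at all       */
}
```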
- a cache miss that occurs when reading stack data may load a new line into the cache memory 14 unnecessarily. For example, when reading data from the stack 32, if the cache memory 14 is checked and the first word in a cache line generates a cache miss, then the subsequent words in that cache line will not generate cache hits either. Accordingly, preferred embodiments may avoid loading the cache memory 14 when stack data is being read. In this manner, if a cache miss occurs when reading stack data from the first word of a cache line, the system 10 may disregard fetching the rest of the cache line from main memory 16 and may forward only the single requested data word to the processor 12. Cache lines in cache memory 14 that are to be replaced are termed "victim lines". Since the requested data may be provided to the processor 12 directly from main memory 16, and a full cache line fetch from main memory 16 may be disregarded, data in the victim lines may be maintained so that useful data may remain in the cache.
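- A matching sketch of the stack read-miss behaviour (hypothetical; main_memory_read_word is an assumed single-word access to main memory 16): the requested word is forwarded without allocating a new line, so the victim line is left in place:

```c
/* Assumed single-word read from main memory 16 (declaration only). */
uint32_t main_memory_read_word(uint32_t word_addr);

/* Stack read miss: do not load a new line and do not evict the victim line;
 * forward only the one requested word from main memory 16 toward the
 * processor 12.                                                             */
uint32_t stack_read_miss(uint32_t word_addr)
{
    return main_memory_read_word(word_addr);
}
```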
- although the embodiments described refer to situations where the stack 32 is increasing, i.e., the stack pointer incrementing as data are pushed onto the stack, the above discussion applies equally to situations where the stack 32 is decreasing, i.e., the stack pointer decrementing as data are pushed onto the stack.
- in that case, checking is done against the last words of the cache line. For example, if the stack pointer refers to word WN of a cache line CL0, and a cache miss occurs from a read operation (e.g., as the result of popping multiple values from the stack 32), then subsequent words, i.e., WN-1, WN-2, may also generate cache misses.
- the micro-stack 25 may initiate the data stack transfer between system 10 and the cache memory 14. For example, in the event of an overflow or underflow operation, as is described in copending application entitled “A Processor with a Split Stack,” filed , serial no. (Atty. Docket No.: TI-35425), the micro-stack 25 may push and pop data from the stack 32.
- Stack operations also may be originated by a stack-management OS, which likewise may benefit from the disclosed cache management policies by indicating, prior to the data access, that the data belong to a stack and thus optimizing those accesses.
- some programming languages, such as Java, implement stack based operations and may benefit from the disclosed embodiments.
- system 10 may be implemented as a mobile cell phone such as that illustrated in Figure 4.
- a mobile communication device includes an integrated keypad 412 and display 414.
- the processor 12 and other components may be included in electronics package 410 connected to the keypad 412, display 414, and radio frequency ("RF") circuitry 416.
- the RF circuitry 416 may be connected to an antenna 418.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03291913A EP1387276A3 (de) | 2002-07-31 | 2003-07-30 | Verfahren und Vorrichtung zur Speicherverwaltung |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US40039102P | 2002-07-31 | 2002-07-31 | |
US400391P | 2002-07-31 | ||
EP03291913A EP1387276A3 (de) | 2002-07-31 | 2003-07-30 | Verfahren und Vorrichtung zur Speicherverwaltung |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1387276A2 true EP1387276A2 (de) | 2004-02-04 |
EP1387276A3 EP1387276A3 (de) | 2004-03-31 |
Family
ID=46123469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03291913A Withdrawn EP1387276A3 (de) | 2002-07-31 | 2003-07-30 | Verfahren und Vorrichtung zur Speicherverwaltung |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP1387276A3 (de) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007041145A1 (en) * | 2005-09-30 | 2007-04-12 | Intel Corporation | Instruction-assisted cache management for efficient use of cache and memory |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6151661A (en) * | 1994-03-03 | 2000-11-21 | International Business Machines Corporation | Cache memory storage space management system and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778422A (en) * | 1996-04-04 | 1998-07-07 | International Business Machines Corporation | Data processing system memory controller that selectively caches data associated with write requests |
-
2003
- 2003-07-30 EP EP03291913A patent/EP1387276A3/de not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6151661A (en) * | 1994-03-03 | 2000-11-21 | International Business Machines Corporation | Cache memory storage space management system and method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007041145A1 (en) * | 2005-09-30 | 2007-04-12 | Intel Corporation | Instruction-assisted cache management for efficient use of cache and memory |
US7437510B2 (en) | 2005-09-30 | 2008-10-14 | Intel Corporation | Instruction-assisted cache management for efficient use of cache and memory |
Also Published As
Publication number | Publication date |
---|---|
EP1387276A3 (de) | 2004-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11074190B2 (en) | Slot/sub-slot prefetch architecture for multiple memory requestors | |
EP1643506B1 (de) | System und Verfahren zum Datensichern bei Stromausfall | |
US7546437B2 (en) | Memory usable in cache mode or scratch pad mode to reduce the frequency of memory accesses | |
KR101456860B1 (ko) | 메모리 디바이스의 전력 소비를 감소시키기 위한 방법 및 시스템 | |
KR100339904B1 (ko) | 캐시 프로세스용 시스템 및 방법 | |
US7380070B2 (en) | Organization of dirty bits for a write-back cache | |
US20060004984A1 (en) | Virtual memory management system | |
JP2005115910A (ja) | シリアルフラッシュメモリにおけるxipのための優先順位に基づくフラッシュメモリ制御装置及びこれを用いたメモリ管理方法、これによるフラッシュメモリチップ | |
EP1581876B1 (de) | Speichersteuerung und verfahren zum schreiben in einen speicher | |
US7117306B2 (en) | Mitigating access penalty of a semiconductor nonvolatile memory | |
JP2000029789A (ja) | 多経路キャッシュ装置および方法 | |
US20210056030A1 (en) | Multi-level system memory with near memory capable of storing compressed cache lines | |
US6718439B1 (en) | Cache memory and method of operation | |
US7069415B2 (en) | System and method to automatically stack and unstack Java local variables | |
US8539159B2 (en) | Dirty cache line write back policy based on stack size trend information | |
US20040024969A1 (en) | Methods and apparatuses for managing memory | |
US20050246502A1 (en) | Dynamic memory mapping | |
EP1387278A2 (de) | Verfahren und Vorrichtungen zur Speicherverwaltung | |
US7203797B2 (en) | Memory management of local variables | |
EP1387276A2 (de) | Verfahren und Vorrichtung zur Speicherverwaltung | |
US7330937B2 (en) | Management of stack-based memory usage in a processor | |
US5749092A (en) | Method and apparatus for using a direct memory access unit and a data cache unit in a microprocessor | |
US8117393B2 (en) | Selectively performing lookups for cache lines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
| AX | Request for extension of the european patent | Extension state: AL LT LV MK |
| PUAL | Search report despatched | Free format text: ORIGINAL CODE: 0009013 |
| AK | Designated contracting states | Kind code of ref document: A3; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
| AX | Request for extension of the european patent | Extension state: AL LT LV MK |
2004-09-30 | 17P | Request for examination filed | Effective date: 20040930 |
| AKX | Designation fees paid | Designated state(s): DE FR GB |
2005-03-07 | 17Q | First examination report despatched | Effective date: 20050307 |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: TEXAS INSTRUMENTS FRANCE; Owner name: TEXAS INSTRUMENTS INCORPORATED |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
2009-10-06 | 18D | Application deemed to be withdrawn | Effective date: 20091006 |