CN113849455A - MCU based on hybrid memory and data caching method - Google Patents

MCU based on hybrid memory and data caching method

Info

Publication number
CN113849455A
CN113849455A
Authority
CN
China
Prior art keywords
data
cache
stt
mram
cached
Prior art date
Legal status
Granted
Application number
CN202111143598.6A
Other languages
Chinese (zh)
Other versions
CN113849455B (en)
Inventor
李月婷 (Li Yueting)
Current Assignee
Qingdao Haicun Microelectronics Co ltd
Original Assignee
Zhizhen Storage Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhizhen Storage Beijing Technology Co ltd
Priority to CN202111143598.6A
Publication of CN113849455A
Application granted
Publication of CN113849455B
Legal status: Active
Anticipated expiration


Classifications

    • G06F15/7807 System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/781 On-chip cache; Off-chip memory
    • G06F11/1448 Management of the data involved in backup or backup restore
    • G06F12/0811 Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G06F12/0871 Allocation or management of cache space
    • G06F12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list


Abstract

The application discloses a hybrid-memory-based MCU and a data caching method, relating to the field of hybrid-memory MCUs. A first aspect of the application provides a hybrid-cache MCU comprising a processor, a system bus and a memory, where the processor is connected to the memory through the system bus, the memory comprises STT-MRAM and SRAM, and the STT-MRAM serves as a backup of the SRAM. A second aspect of the application provides a caching method comprising: the processor accesses target data from the SRAM; when the target data is not cached in the SRAM, the processor accesses the target data from the STT-MRAM. By adding STT-MRAM to the MCU structure as a second-level cache and combining it with the caching method, effective management of the cache space is achieved and the hit rate of the cache architecture is improved.

Description

MCU based on hybrid memory and data caching method
Technical Field
Embodiments of the invention relate to the field of magnetic storage, and in particular to a hybrid-memory MCU and a data caching method.
Background
A Microcontroller Unit (MCU), also called a single-chip microcomputer, is a chip-level computer formed by appropriately reducing the frequency and specification of a central processing unit (CPU) and integrating peripheral interfaces such as memory, counters, a Universal Serial Bus (USB), an A/D converter, a Universal Asynchronous Receiver/Transmitter (UART), a Programmable Logic Controller (PLC) and Direct Memory Access (DMA), and even a liquid-crystal display (LCD) driver circuit, onto a single chip. Because the MCU is relatively small, it is now widely used in fields such as AI, cloud computing, 5G and smart vehicles.
At present, the market demands that MCUs develop toward low power consumption and high reliability, which requires the MCU to further improve its own anti-interference capability and reduce its own instability. The MCU's internal cache mainly uses static random-access memory (SRAM) to store data. However, SRAM occupies a large circuit area and its data is easily lost during storage, so the cache performance of the MCU suffers.
Disclosure of Invention
Embodiments of the invention provide a hybrid-memory-based MCU and a data caching method, which can optimize the caching performance of the MCU.
In order to solve the above problem, a first aspect of the present invention provides a hybrid cache MCU including: a processor 1, a system bus 2 and a memory 3, said processor 1 being connected to said memory 3 via said system bus 2,
wherein the Memory 3 includes Spin Transfer Torque-Magnetic Random Access Memory (STT-MRAM) and Static Random Access Memory (SRAM), and the STT-MRAM is used as a backup of the SRAM.
In some embodiments, the STT-MRAM comprises at least one cache region, the SRAM comprises at least one cache region, and the at least one cache region of the STT-MRAM corresponds to the at least one cache region of the SRAM in a one-to-one correspondence.
In some embodiments, each cache region of the STT-MRAM comprises a physical address and a virtual address, each cache region of the SRAM comprises a physical address and a virtual address, and at least one of the cache regions of the SRAM and at least one of the cache regions of the STT-MRAM establish a correspondence relationship by the respective virtual address.
In some embodiments, each cache region of the STT-MRAM and each cache region of the SRAM are provided with at least two sets.
In some embodiments, when the STT-MRAM is not caching data, the storage state of the STT-MRAM is identified as "0"; when the STT-MRAM is accessed, the storage state identification of the STT-MRAM is updated from a "0" to a "1".
In some embodiments, accessing the STT-MRAM includes: the STT-MRAM being accessed by the processor 1, or by the SRAM.
In some embodiments, each piece of data of the SRAM and STT-MRAM caches corresponds to at least one of the following cache information: the cache comprises group information of the cache region where the data is located, the length of the data and a physical address where the data is cached.
In some embodiments, the MCU is provided with an access subject identifier for indicating an access subject to the STT-MRAM, the access subject comprising a processor 1 and a system process.
In another aspect of the present invention, a caching method is provided, which is applied to an MCU including a processor, a static random access memory (SRAM) and a spin-transfer torque magnetic random access memory (STT-MRAM), and includes:
the processor accesses target data from the SRAM;
when the target data is not cached in the SRAM, the processor accesses the target data from the STT-MRAM, wherein the target data in the STT-MRAM is written by the SRAM after the processor writes into the SRAM.
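The two-level access flow above can be sketched in software; this is a hypothetical model for illustration only (the `HybridCache` class, the dictionary-based caches and the `miss_counter` are stand-ins for the hardware structures, not the patent's implementation):

```python
# Sketch of the lookup order described in the method: the processor first
# checks the SRAM (level-1) cache and falls back to the STT-MRAM (level-2)
# backup on a miss. The processor writes to SRAM, and SRAM backs the data
# up to STT-MRAM, matching the write path in the text.
class HybridCache:
    def __init__(self):
        self.sram = {}       # level-1 cache: address -> data
        self.stt_mram = {}   # level-2 backup: address -> data
        self.miss_counter = 0  # counts SRAM misses served by the STT-MRAM

    def write(self, addr, data):
        self.sram[addr] = data       # processor writes into SRAM
        self.stt_mram[addr] = data   # SRAM then writes the backup copy

    def read(self, addr):
        if addr in self.sram:        # SRAM hit
            return self.sram[addr]
        if addr in self.stt_mram:    # SRAM miss, STT-MRAM hit
            self.miss_counter += 1
            return self.stt_mram[addr]
        return None                  # miss in both levels

cache = HybridCache()
cache.write(0x10, "payload")
del cache.sram[0x10]                 # simulate SRAM data loss
assert cache.read(0x10) == "payload" # recovered from the STT-MRAM backup
```

Because the STT-MRAM is nonvolatile, the backup copy survives the simulated SRAM loss, which is the rationale the patent gives for the hybrid structure.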
In some embodiments, before the processor accesses the target data from the SRAM, the method further includes:
dividing the STT-MRAM to obtain at least one cache region, and dividing the SRAM to obtain at least one cache region;
configuring a virtual address corresponding to a physical address of each cache region of the STT-MRAM and configuring a virtual address corresponding to a physical address of each cache region of the SRAM;
respectively establishing a corresponding relation between a virtual address of each cache region in the STT-MRAM and a virtual address of each cache region in the SRAM, so that at least one cache region of the STT-MRAM corresponds to at least one cache region of the SRAM one by one.
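The three configuration steps above can be illustrated with a minimal sketch; the region sizes, physical base addresses and virtual-address values are assumptions made for illustration only:

```python
# Sketch of the setup described in the text: divide each memory into cache
# regions, configure a virtual address for each region's physical address,
# and pair the regions one-to-one by their virtual addresses.
REGION_SIZE = 64   # assumed region size in bytes
NUM_REGIONS = 4    # assumed number of regions per memory

def make_regions(base_phys, virt_base):
    # Region i spans REGION_SIZE bytes starting at its physical base and
    # is assigned a configured virtual address.
    return [{"phys": base_phys + i * REGION_SIZE,
             "virt": virt_base + i} for i in range(NUM_REGIONS)]

sram_regions = make_regions(base_phys=0x2000_0000, virt_base=0x100)
mram_regions = make_regions(base_phys=0x3000_0000, virt_base=0x200)

# One-to-one correspondence keyed by the regions' virtual addresses.
mapping = {s["virt"]: m["virt"] for s, m in zip(sram_regions, mram_regions)}
assert len(mapping) == NUM_REGIONS  # every SRAM region has exactly one partner
```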
In some embodiments, after the SRAM caches the data written by the processor, the MCU configures target cache information for the data, where the target cache information indicates group information of the data in the STT-MRAM and a physical address where the data is cached;
after caching the data in the STT-MRAM, detecting whether caching information of the data in the STT-MRAM matches the target caching information;
if so, the data is successfully cached in the STT-MRAM.
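The backup-and-verify flow above might look like the following sketch; the field names (`group`, `length`, `phys`) mirror the cache information listed in the text, while the function and the dictionary layout are illustrative assumptions:

```python
# Sketch of write-with-verification: the MCU records target cache
# information when the SRAM accepts a write, caches the data in the
# STT-MRAM, then checks the STT-MRAM copy's cache information against
# the target; a match means the backup succeeded.
def backup_with_verify(data, group, phys_addr, stt_mram):
    target_info = {"group": group, "length": len(data), "phys": phys_addr}
    # Cache the data in the STT-MRAM together with its cache information.
    stt_mram[phys_addr] = {"data": data, "group": group, "length": len(data)}
    entry = stt_mram[phys_addr]
    cached_info = {"group": entry["group"], "length": entry["length"],
                   "phys": phys_addr}
    return cached_info == target_info  # True -> data successfully cached

stt = {}
assert backup_with_verify(b"\x01\x02", group=1, phys_addr=0x40, stt_mram=stt)
```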
In some embodiments, the method further comprises: determining cache data meeting optimization conditions in the STT-MRAM; performing optimization operation on the cache data meeting the optimization condition, wherein the optimization operation comprises one of the following operations: clearing and replacing.
In some embodiments, the cache data satisfying the optimization condition comprises one of:
history data cached in the STT-MRAM at the beginning of any caching cycle;
data stored in the STT-MRAM when the STT-MRAM receives a backup-data request;
and data corresponding to a data-clearing request.
In some embodiments, after performing the optimization operation on the cached data meeting the optimization condition, the method further includes:
determining the capacity of the cache space after being cleared to obtain a first capacity;
judging whether the first capacity is matched with a pre-configured capacity, wherein the pre-configured capacity is a cache capacity configured according to data to be cleared before the MCU is optimized;
if the first capacity matches the preconfigured capacity, caching the data to be cached;
if the first capacity is larger than the preconfigured capacity, repeating the data-clearing action until the first capacity matches the preconfigured capacity.
In some embodiments, said replacing cache data in said STT-MRAM comprises:
determining data to be cached and the length of the data to be cached;
detecting whether the size of the residual cache space in the STT-MRAM is larger than or equal to the length of the data to be cached;
if the residual cache space in the STT-MRAM is smaller than the length of the data to be cached, determining at least one cache line meeting the length of the data to be cached from cache lines corresponding to cached data;
and replacing the cached data in any cache line by using the data to be cached.
In some embodiments, for any cache line, determining at least one cache line satisfying the length of the data to be cached from the cache lines corresponding to the cached data includes:
determining a starting point among at least one starting point of the cache line;
searching for the cache line that is closest to the starting point and satisfies the following condition as the cache line meeting the length of the data to be cached: D(i,s) ≥ {[D(i,k) - D(k,s)]^2}^(1/2), where D(i,s) is the distance of the cache line matching the length of the data to be cached, and D(i,k) and D(k,s) refer to the starting-point coordinates of two different cache lines of the same cached data.
Embodiments of the invention provide a hybrid-cache MCU and a cache-data replacement method, in which STT-MRAM is added to the MCU structure as a second-level cache to assist the first-level SRAM cache. An optimized data replacement algorithm is provided for the data-loss problem of SRAM: data replacement is screened against specific conditions and classified by case, which achieves effective management of the cache space, improves the hit rate of the cache architecture, and reduces write imbalance in the STT-MRAM.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
FIG. 1 is a diagram of the internal architecture of a hybrid MCU according to the present invention;
FIG. 2 is a schematic diagram of an MCU hybrid cache architecture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an STT-MRAM cache architecture according to an embodiment of the invention;
FIG. 4a is a schematic diagram of a cache data replacement process-data read according to an embodiment of the present invention;
FIG. 4b is a diagram illustrating a cache data replacement process-finding cache actions that satisfy a condition according to an embodiment of the present invention;
FIG. 4c is a diagram illustrating a cache data replacement process-replacing the cache data that satisfies the condition, according to an embodiment of the present invention;
Detailed Description
In order to make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood by those within the art that the terms "first", "second", etc. in this application are used only to distinguish one device, module, parameter, etc., from another, and do not denote any particular technical meaning or necessary order therebetween.
A Microcontroller Unit (MCU), also called a single-chip microcomputer, appropriately reduces the frequency and specification of a central processing unit (CPU) and integrates peripheral interfaces such as memory, counters, a Universal Serial Bus (USB), an A/D converter, a Universal Asynchronous Receiver/Transmitter (UART), a Programmable Logic Controller (PLC) and Direct Memory Access (DMA), and even a liquid-crystal display (LCD) driver circuit, onto a single chip to form a chip-level computer. It is used for different combined control in different applications and is widely applied in fields such as AI, cloud computing, 5G and smart vehicles.
At present, the market demands that MCUs develop toward low power consumption and high reliability, which requires the MCU to further improve its own anti-interference capability and reduce its own instability. The MCU's internal cache mainly uses static random-access memory (SRAM) to store data. However, SRAM occupies a large circuit area and its data is easily lost during storage, which is inconvenient in actual use and falls short of the expectation of stable data storage.
In an embodiment of the present application, as shown in fig. 1, a first aspect of the present invention provides a hybrid cache MCU, including: processor 1, system bus 2, memory 3,
wherein the memory 3 comprises SRAM and STT-MRAM, the STT-MRAM serving as the backup cache in the cache structure.
Generally, the cache memory inside an MCU operates mainly with SRAM, i.e. static random-access memory, a type of random-access memory. Here "static" means that the stored data is retained as long as power is supplied; once the power supply stops, the stored data is lost, i.e. the data stored in SRAM is volatile.
The STT-MRAM memory cell array is addressed through gates and bit lines and reads and writes information through bit-line currents. It is nonvolatile, simple in structure, low in fabrication cost, high in read/write endurance and low in power consumption, and is widely applied in the storage field.
Optionally, the processor 1 and the memory 3 are connected to the system bus 2.
In general, the Advanced High-performance Bus (AHB), also called the high-performance bus, is a bus interface, like USB. The AHB is mainly used for connections between high-performance modules (such as the CPU, DMA and DSP).
Optionally, the MCU further comprises an AXI 4 and a peripheral bus 5, and the system bus 2 is connected to the peripheral bus 5 through the AXI 4.
Conventionally, AXI (Advanced eXtensible Interface) is a bus protocol describing a high-performance, high-bandwidth, low-latency on-chip bus. Its address/control and data phases are separated, it supports unaligned data transfers, a burst transfer needs only its first address, the read and write data channels are separate, it supports outstanding and out-of-order accesses, and it makes timing closure easier. AXI enables the processor to obtain better performance with smaller area and lower power consumption. Meanwhile, AXI is a unidirectional-channel architecture, so the transmitted information flows in one direction only, which reduces latency and improves system performance.
In general, the Advanced Peripheral Bus (APB) is a standard on-chip bus architecture. The APB is mainly used for connections between low-bandwidth peripherals; unlike the AHB, which supports multiple masters, the only master on the APB is the APB bridge. Its characteristics are: two-clock-cycle transfers; no wait states and no response signals; and simple control logic with only four control signals. Transfers on the APB can be illustrated with a state diagram.
Optionally, the MCU further includes a peripheral device 6 including an external memory; the peripheral device 6 is connected to the peripheral bus 5.
Typically, the peripheral devices 6 include analog-to-digital converters, serial buses, voltage comparators, input and output interfaces, parallel buses, clocks, communication protocols, and the like.
Optionally, the MCU communicates with the system bus 2, the peripheral bus 5, the memory 3 and the peripheral device 6.
Optionally, the peripheral bus 5 is used for storing system low frequency access data, and the memory 3 is used for storing system high frequency access data.
In one embodiment of the present application, the operating state of the MCU hybrid cache architecture is as shown in fig. 2. When the STT-MRAM does not cache data, the storage state of the STT-MRAM is identified as "0"; when the STT-MRAM is accessed, the storage state identification of the STT-MRAM is updated from a "0" to a "1".
Optionally, accessing the STT-MRAM includes: the STT-MRAM being accessed by the processor 1, or by the SRAM.
Optionally, each piece of data cached by the SRAM and the STT-MRAM corresponds to at least one of the following caching information: the data cache comprises group information of a cache region where the data is located, the length of the data and a physical address where the data is cached.
Optionally, the MCU further comprises a Counter (Counter) inside to record the access state of STT-MRAM to the cache data, and the Counter count is changed from "0" to "1" when the system shifts to access STT-MRAM due to cache failure in accessing SRAM.
In one embodiment of the application, the MCU is provided with an access subject identifier for indicating an access subject to the STT-MRAM, the access subject including a processor 1 and a system process.
Optionally, a Valid flag is further set inside the MCU to record system-process access, where the Valid flag is two bits wide; the normal system value is set to "01", the Valid value is "10" when the processor 1 accesses the STT-MRAM, and the Valid value is "11" when a system process accesses the STT-MRAM.
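The storage-state identifier and the two-bit Valid encoding described above can be modeled as a small sketch; the numeric codes ("0"/"1", "01"/"10"/"11") follow the text, while the class and method names are illustrative assumptions:

```python
# Sketch of the flag encodings: a one-bit storage-state identifier for the
# STT-MRAM ("0" = no data cached, "1" = accessed/in use) and a two-bit
# Valid field recording which subject accessed the STT-MRAM.
VALID_NORMAL    = "01"  # system in its normal state
VALID_PROCESSOR = "10"  # the processor accessed the STT-MRAM
VALID_PROCESS   = "11"  # a system process accessed the STT-MRAM

class SttMramState:
    def __init__(self):
        self.storage_state = "0"   # "0": nothing cached yet
        self.valid = VALID_NORMAL

    def on_access(self, subject):
        # Any access updates the storage-state identifier from "0" to "1".
        self.storage_state = "1"
        self.valid = VALID_PROCESSOR if subject == "processor" else VALID_PROCESS

s = SttMramState()
s.on_access("processor")
assert (s.storage_state, s.valid) == ("1", "10")
```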
Optionally, the MCU further includes a cache controller, where the cache controller is configured to configure target cache information of the data to be cached, which is transmitted to the STT-MRAM by the SRAM, and verify whether the cache information of the data to be cached in the STT-MRAM matches the target cache information.
In an embodiment of the present application, a caching method is provided, which is applied to an MCU, and includes:
the processor accesses target data from the SRAM;
when the target data is not cached in the SRAM, the processor accesses the target data from the STT-MRAM, wherein the target data in the STT-MRAM is written by the SRAM after the processor writes into the SRAM.
Optionally, before the processor accesses the target data from the SRAM, the method further includes:
dividing the STT-MRAM to obtain at least one cache region, and dividing the SRAM to obtain at least one cache region;
configuring a virtual address corresponding to the physical address of each cache region of the STT-MRAM, and configuring a virtual address corresponding to the physical address of each cache region of the SRAM;
respectively establishing a corresponding relation between a virtual address of each cache region in the STT-MRAM and a virtual address of each cache region in the SRAM, so that at least one cache region of the STT-MRAM corresponds to at least one cache region of the SRAM one by one.
Optionally, after the SRAM caches the data written by the processor, the MCU configures target cache information for the data, where the target cache information indicates group information of the data in the STT-MRAM and a physical address where the data is cached, and a caching process is shown in fig. 3;
after caching the data in the STT-MRAM, detecting whether caching information of the data in the STT-MRAM matches the target caching information;
if so, the data is successfully cached in the STT-MRAM.
Optionally, the method further includes: determining cache data meeting optimization conditions in the STT-MRAM; performing optimization operation on the cache data meeting the optimization condition, wherein the optimization operation comprises one of the following operations: clearing and replacing.
Generally, commonly used data replacement methods include the random algorithm, the first-in first-out (FIFO) algorithm and the Least Recently Used (LRU) algorithm.
LRU is a page replacement algorithm widely used by operating systems to maximize the page hit rate. The idea is that when a page fault occurs, the page that has gone unused for the longest time is selected for replacement. In terms of how programs actually run, LRU is relatively close to the ideal page replacement algorithm: it makes full use of the history of page accesses in memory and correctly reflects the locality of the program.
Alternatively, the data replacement method may be implemented on the basis of an LRU algorithm.
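Since the replacement method builds on LRU, a generic LRU sketch (not the patent's exact algorithm) may help illustrate the idea of evicting the least recently used entry:

```python
# Generic LRU cache sketch: an ordered dict keeps entries in recency order,
# so the least recently used entry is always at the front and is the one
# evicted when the cache exceeds its capacity.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as most recently used
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

lru = LRUCache(capacity=2)
lru.access("a", 1)
lru.access("b", 2)
lru.access("a", 1)   # "a" becomes the most recently used entry
lru.access("c", 3)   # evicts "b", the least recently used entry
assert list(lru.entries) == ["a", "c"]
```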
Optionally, as shown in FIG. 4a, for the data A = {D11, Di1, Dj1, Dx1, … Dn1}, the counter is set from 0 to 1 upon a read access.
Optionally, the process of finding the data cache line satisfying the STT-MRAM is shown in fig. 4b, where the cache data satisfying the optimization condition includes one of:
history data cached in the STT-MRAM at the beginning of any caching cycle;
data stored in the STT-MRAM when the STT-MRAM receives a backup-data request;
and data corresponding to a data-clearing request.
Optionally, the process of replacing the data to be cleared is shown in FIG. 4c, and involves: data cached by the MCU when the STT-MRAM is in its initial state, cached data backed up after the STT-MRAM is updated, and data randomly cleared according to a request during a process.
Optionally, after performing the optimization operation on the cache data meeting the optimization condition, the method further includes:
determining the capacity of the cache space after being cleared to obtain a first capacity;
judging whether the first capacity is matched with a pre-configured capacity, wherein the pre-configured capacity is a cache capacity configured according to data to be cleared before the MCU is optimized;
if the first capacity matches the preconfigured capacity, caching the data to be cached;
if the first capacity is larger than the preconfigured capacity, repeating the data-clearing action until the first capacity matches the preconfigured capacity.
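The clear-and-recheck loop above can be sketched as follows, under the illustrative assumptions that entries are cleared in insertion order and that the freed space (the "first capacity") must reach the capacity preconfigured for the data to be cached:

```python
# Sketch of the capacity-matching loop: clear one entry at a time, measure
# the freed capacity, and repeat until it satisfies the preconfigured
# capacity. The entry sizes and eviction order are illustrative.
def clear_until_match(cached, preconfigured):
    # `cached` maps entry id -> size in bytes.
    freed = 0
    for key in list(cached):
        if freed >= preconfigured:
            break
        freed += cached.pop(key)  # clear one entry, add its size to freed space
    return freed

cached = {"a": 32, "b": 64, "c": 128}
freed = clear_until_match(cached, preconfigured=90)
assert freed >= 90 and "c" in cached  # "a" and "b" cleared, "c" retained
```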
Optionally, the matching of the first capacity and the preconfigured capacity size includes that the first capacity is smaller than or equal to the preconfigured capacity size.
Optionally, the replacing the cache data in the STT-MRAM includes:
determining data to be cached and the length of the data to be cached;
detecting whether the size of the residual cache space in the STT-MRAM is larger than or equal to the length of the data to be cached;
if the residual cache space in the STT-MRAM is smaller than the length of the data to be cached, determining at least one cache line meeting the length of the data to be cached from cache lines corresponding to cached data;
and replacing the cached data in any cache line by using the data to be cached.
Optionally, for any cache line, determining at least one cache line meeting the length of the data to be cached from the cache lines corresponding to the cached data includes:
determining a starting point among at least one starting point of the cache line;
searching for the cache line that is closest to the starting point and satisfies the following condition as the cache line meeting the length of the data to be cached: D(i,s) ≥ {[D(i,k) - D(k,s)]^2}^(1/2), where D(i,s) is the distance of the cache line matching the length of the data to be cached, and D(i,k) and D(k,s) refer to the starting-point coordinates of two different cache lines of the same cached data.
Optionally, the memory capacity M_A of the cache lines that can be eliminated is expressed as: M_A = D(i,s) × B, where B denotes the number of bytes in a cache line.
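The selection condition and the capacity formula can be checked with a short worked example; the concrete distance values and byte count below are illustrative, not taken from the patent:

```python
# Worked sketch: a cache line qualifies when D(i,s) >= {[D(i,k)-D(k,s)]^2}^(1/2),
# which is equivalent to D(i,s) >= |D(i,k) - D(k,s)|, and the evictable
# capacity is M_A = D(i,s) * B, with B the number of bytes per cache line.
def line_qualifies(d_is, d_ik, d_ks):
    return d_is >= ((d_ik - d_ks) ** 2) ** 0.5  # i.e. abs(d_ik - d_ks)

def evictable_capacity(d_is, bytes_per_line):
    return d_is * bytes_per_line  # M_A = D(i,s) * B

assert line_qualifies(d_is=5, d_ik=7, d_ks=4)      # 5 >= |7 - 4| = 3
assert not line_qualifies(d_is=2, d_ik=9, d_ks=3)  # 2 <  |9 - 3| = 6
assert evictable_capacity(d_is=5, bytes_per_line=64) == 320
```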
Embodiments of the invention provide a hybrid-cache MCU and a cache-data replacement method, in which STT-MRAM is added to the MCU structure as a second-level cache to assist the first-level SRAM cache. An optimized data replacement algorithm is provided for the data-loss problem of SRAM: data replacement is screened against specific conditions and classified by case, which achieves effective management of the cache space, improves the hit rate of the cache architecture, and reduces write imbalance in the STT-MRAM.

Claims (12)

1. A micro control unit (MCU) based on hybrid storage, characterized in that
the hybrid-storage-based MCU includes: a processor (1), a system bus (2) and a memory (3), the processor (1) being connected to the memory (3) via the system bus (2),
wherein the memory (3) comprises a spin-transfer torque random access memory (STT-MRAM) and a Static Random Access Memory (SRAM), the STT-MRAM serving as a backup for the SRAM.
2. The hybrid cache MCU of claim 1,
the STT-MRAM comprises at least one cache region, the SRAM comprises at least one cache region, the at least one cache region of the STT-MRAM is in one-to-one correspondence with the at least one cache region of the SRAM, and any cache region comprises at least two groups.
3. The hybrid cache MCU of claim 2,
each piece of data cached by the SRAM and the STT-MRAM corresponds to at least one of the following caching information: the cache comprises group information of the cache region where the data is located, the length of the data and a physical address where the data is cached.
4. The hybrid cache MCU of claim 1,
the MCU is provided with an access subject identification which is used for indicating an access subject accessing the STT-MRAM, and the access subject comprises a processor (1) and a system process.
5. A data caching method, characterized in that it is
applied to a micro control unit (MCU) comprising a processor, a static random access memory (SRAM) and a spin-transfer torque magnetic random access memory (STT-MRAM), the method comprising:
the processor accessing target data from the SRAM;
when the target data is not cached in the SRAM, the processor accessing the target data from the STT-MRAM, wherein the target data in the STT-MRAM is written there by the SRAM after the processor writes the data to the SRAM.
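The two-level lookup of claim 5 can be sketched as follows. This is an illustrative sketch, not part of the claims: the direct-mapped layout, the line count, and all identifiers (`access_target`, `write_through`, etc.) are assumptions for demonstration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative two-level lookup: try the L1 SRAM cache first and
 * fall back to the L2 STT-MRAM cache on a miss (claim 5). */

#define CACHE_LINES 8

typedef struct {
    uint32_t tag;
    bool     valid;
    uint32_t data;
} cache_line_t;

static cache_line_t sram[CACHE_LINES];      /* L1: volatile, fast      */
static cache_line_t stt_mram[CACHE_LINES];  /* L2: non-volatile backup */

static bool lookup(cache_line_t *cache, uint32_t addr, uint32_t *out)
{
    cache_line_t *line = &cache[addr % CACHE_LINES];
    if (line->valid && line->tag == addr) {
        *out = line->data;
        return true;
    }
    return false;
}

/* Returns true on a hit in either level; on an SRAM miss, data found
 * in the STT-MRAM is also refilled into the SRAM. */
bool access_target(uint32_t addr, uint32_t *out)
{
    if (lookup(sram, addr, out))
        return true;                        /* L1 hit */
    if (lookup(stt_mram, addr, out)) {      /* L2 hit: refill L1 */
        cache_line_t *line = &sram[addr % CACHE_LINES];
        line->tag   = addr;
        line->valid = true;
        line->data  = *out;
        return true;
    }
    return false;                           /* miss in both levels */
}

/* Writes go to the SRAM; the data is then written on to the STT-MRAM
 * so that the non-volatile level holds a backup copy. */
void write_through(uint32_t addr, uint32_t data)
{
    cache_line_t *l1 = &sram[addr % CACHE_LINES];
    l1->tag = addr; l1->valid = true; l1->data = data;
    cache_line_t *l2 = &stt_mram[addr % CACHE_LINES];
    l2->tag = addr; l2->valid = true; l2->data = data;
}
```

Because the STT-MRAM copy is non-volatile, data written through survives an SRAM loss and is served from the second level on the next access.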
6. The caching method of claim 5,
before the processor accesses the target data from the SRAM, the method further comprises the following steps:
dividing the STT-MRAM to obtain at least one cache region, and dividing the SRAM to obtain at least one cache region;
configuring a virtual address corresponding to a physical address of each of the cache regions of the STT-MRAM and configuring the virtual address corresponding to the physical address of each of the cache regions of the SRAM;
respectively establishing the corresponding relation between the virtual address of each cache region in the STT-MRAM and the virtual address of each cache region in the SRAM, so that at least one cache region of the STT-MRAM is in one-to-one correspondence with at least one cache region of the SRAM.
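The partitioning and address-mapping steps of claim 6 can be sketched as below. All region sizes, base addresses, and names (`region_map_t`, `virt_to_phys`, etc.) are assumptions for illustration; the claim does not specify them.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_REGIONS 4

/* One entry per cache region: a virtual base mapped onto a physical
 * base (claim 6). */
typedef struct {
    uint32_t virt_base;
    uint32_t phys_base;
    uint32_t size;
} region_map_t;

static region_map_t sram_map[NUM_REGIONS];
static region_map_t mram_map[NUM_REGIONS];

/* Partition each memory into equal regions and pair region i of the
 * STT-MRAM with region i of the SRAM by giving both the same virtual
 * base, establishing the one-to-one correspondence. */
void init_mappings(uint32_t sram_phys, uint32_t mram_phys,
                   uint32_t region_size, uint32_t virt_base)
{
    for (uint32_t i = 0; i < NUM_REGIONS; i++) {
        sram_map[i] = (region_map_t){ virt_base + i * region_size,
                                      sram_phys + i * region_size,
                                      region_size };
        mram_map[i] = (region_map_t){ virt_base + i * region_size,
                                      mram_phys + i * region_size,
                                      region_size };
    }
}

/* Translate a virtual address through a mapping table; returns the
 * physical address, or 0 if the address falls in no region. */
uint32_t virt_to_phys(const region_map_t *map, uint32_t vaddr)
{
    for (int i = 0; i < NUM_REGIONS; i++)
        if (vaddr >= map[i].virt_base &&
            vaddr < map[i].virt_base + map[i].size)
            return map[i].phys_base + (vaddr - map[i].virt_base);
    return 0;
}
```

With this layout, one virtual address resolves to the corresponding region in either memory, which is what lets the SRAM and STT-MRAM regions back each other.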
7. The caching method of claim 5, further comprising:
after the SRAM caches the data written by the processor, the MCU configures target cache information for the data, the target cache information indicating the group information of the data in the STT-MRAM and the physical address at which the data is to be cached;
after the data is cached in the STT-MRAM, detecting whether the cache information of the data in the STT-MRAM matches the target cache information;
if so, the data has been successfully cached in the STT-MRAM.
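The verification step of claim 7 amounts to comparing the cache information recorded after the write with the target cache information configured beforehand. A minimal sketch, with the structure layout and names (`cache_info_t`, `verify_cached`) assumed for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Cache information attached to one piece of data (claims 3 and 7):
 * the group (set) it belongs to and the physical address of the copy. */
typedef struct {
    uint16_t set_index;   /* group information */
    uint32_t phys_addr;   /* physical address of the cached copy */
} cache_info_t;

/* Claim 7: after writing to the STT-MRAM, compare the cache info
 * actually recorded for the data with the target cache info the MCU
 * configured in advance; a match means the write-back succeeded. */
bool verify_cached(const cache_info_t *recorded,
                   const cache_info_t *target)
{
    return recorded->set_index == target->set_index &&
           recorded->phys_addr == target->phys_addr;
}
```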
8. The caching method of claim 5, further comprising:
determining cache data meeting an optimization condition in the STT-MRAM;
performing an optimization operation on the cache data meeting the optimization condition, wherein the optimization operation comprises one of the following: clearing and replacement.
9. The caching method of claim 8,
the cache data meeting the optimization condition comprises one of the following:
historical data already cached in the STT-MRAM at the beginning of any caching cycle;
data stored in the STT-MRAM in response to a backup-data request received by the STT-MRAM;
and data corresponding to a data-clearing request.
10. The caching method of claim 8,
the replacing the cache data in the STT-MRAM comprises:
determining data to be cached and the length of the data to be cached;
detecting whether the size of the residual cache space in the STT-MRAM is larger than or equal to the length of the data to be cached;
if the remaining cache space in the STT-MRAM is smaller than the length of the data to be cached, determining, from the cache lines corresponding to cached data, at least one cache line meeting the length of the data to be cached;
and replacing the cached data in any cache line by using the data to be cached.
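The replacement procedure of claim 10 can be sketched as follows: check the remaining space first, and only when it is insufficient pick an already-occupied line whose capacity meets the incoming length. The fixed line size, bookkeeping, and names are illustrative assumptions, and eviction accounting is simplified.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define LINES 8
#define LINE_SIZE 32u

typedef struct {
    bool     used;
    uint32_t len;              /* length of the data in this line */
    uint8_t  bytes[LINE_SIZE];
} line_t;

/* Claim 10: if the remaining space is at least the incoming length,
 * use a free line; otherwise replace an occupied line that meets the
 * length of the data to be cached. Returns the line index, or -1. */
int cache_insert(line_t lines[LINES], uint32_t *free_bytes,
                 const uint8_t *data, uint32_t len)
{
    if (len > LINE_SIZE)
        return -1;
    if (*free_bytes >= len) {               /* enough free space */
        for (int i = 0; i < LINES; i++)
            if (!lines[i].used) {
                lines[i].used = true;
                lines[i].len  = len;
                for (uint32_t b = 0; b < len; b++)
                    lines[i].bytes[b] = data[b];
                *free_bytes -= len;
                return i;
            }
    }
    /* not enough free space: replace a line meeting the length */
    for (int i = 0; i < LINES; i++)
        if (lines[i].used && lines[i].len >= len) {
            lines[i].len = len;
            for (uint32_t b = 0; b < len; b++)
                lines[i].bytes[b] = data[b];
            return i;
        }
    return -1;
}
```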
11. The caching method of claim 10,
the determining, from the cache lines corresponding to cached data, of at least one cache line meeting the length of the data to be cached comprises:
determining a starting point among at least one starting point of the cache line;
searching for the cache line closest to the starting point that satisfies the following condition, as the cache line meeting the length of the data to be cached: D(i,s) ≥ {[D(i,k) − D(k,s)]²}^(1/2), wherein D(i,s) is the distance of the cache line matching the length of the data to be cached, and D(i,k) and D(k,s) refer to the starting-point coordinates of two different cache lines of the same cached data.
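Since {[D(i,k) − D(k,s)]²}^(1/2) is simply |D(i,k) − D(k,s)|, the screening condition of claim 11 reduces to an absolute-difference comparison. A sketch of the test and the closest-line selection, with all function names and the flat distance arrays assumed for illustration:

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

/* Claim 11 screening condition for a candidate cache line:
 *   D(i,s) >= sqrt((D(i,k) - D(k,s))^2), i.e. D(i,s) >= |D(i,k) - D(k,s)|
 * d_is: distance from candidate line i to the chosen starting point s;
 * d_ik, d_ks: starting-point coordinates of two different cache lines
 * of the same cached data, per the claim's definitions. */
bool meets_condition(double d_is, double d_ik, double d_ks)
{
    return d_is >= fabs(d_ik - d_ks);
}

/* Among candidates satisfying the condition, select the one closest
 * to the starting point. Returns the index, or -1 if none qualifies. */
int select_line(const double d_is[], int n, double d_ik, double d_ks)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (meets_condition(d_is[i], d_ik, d_ks) &&
            (best < 0 || d_is[i] < d_is[best]))
            best = i;
    return best;
}
```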
12. The caching method according to any one of claims 8 to 11, characterized in that
after the optimization operation is performed on the cache data meeting the optimization condition, the method further comprises:
determining the capacity of the cleared cache space to obtain a first capacity;
judging whether the first capacity matches a pre-configured capacity, the pre-configured capacity being the cache capacity configured for the data to be cleared before the MCU performs the optimization;
if the first capacity matches the pre-configured capacity, caching the data to be optimized;
if the first capacity is larger than the pre-configured capacity, repeating the data-clearing action until the first capacity matches the pre-configured capacity.
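The capacity-convergence loop of claim 12 can be sketched as below. The translation leaves the direction of the comparison ambiguous, so this sketch assumes clearing is repeated until the freed capacity reaches the pre-configured value; the per-round step size and all names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Claim 12 (sketch): after each clearing round, compare the freed
 * capacity with the capacity pre-configured for the data to be
 * cleared; repeat the clearing action until the two match. The
 * per-round `step` stands in for the MCU's real clearing action. */
uint32_t clear_until_match(uint32_t freed, uint32_t preconfigured,
                           uint32_t step)
{
    while (freed < preconfigured && step > 0) {
        uint32_t delta = preconfigured - freed;
        /* clear at most one step per round, never past the target */
        freed += (delta < step) ? delta : step;
    }
    return freed;   /* now matches the pre-configured capacity */
}
```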
CN202111143598.6A 2021-09-28 2021-09-28 MCU based on hybrid memory and data caching method Active CN113849455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111143598.6A CN113849455B (en) 2021-09-28 2021-09-28 MCU based on hybrid memory and data caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111143598.6A CN113849455B (en) 2021-09-28 2021-09-28 MCU based on hybrid memory and data caching method

Publications (2)

Publication Number Publication Date
CN113849455A true CN113849455A (en) 2021-12-28
CN113849455B CN113849455B (en) 2023-09-29

Family

ID=78980304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111143598.6A Active CN113849455B (en) 2021-09-28 2021-09-28 MCU based on hybrid memory and data caching method

Country Status (1)

Country Link
CN (1) CN113849455B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130046943A1 (en) * 2011-08-15 2013-02-21 Fujitsu Limited Storage control system and method, and replacing system and method
CN103744800A (en) * 2013-12-30 2014-04-23 龙芯中科技术有限公司 Cache operation method and device for replay mechanism
US9348752B1 (en) * 2012-12-19 2016-05-24 Amazon Technologies, Inc. Cached data replication for cache recovery
CN108108312A (en) * 2016-11-25 2018-06-01 华为技术有限公司 A kind of cache method for cleaning and processor
CN108984338A (en) * 2018-06-01 2018-12-11 暨南大学 A kind of offline optimal caching alternative and method towards the recovery of duplicate removal standby system data
CN109196473A (en) * 2017-02-28 2019-01-11 华为技术有限公司 Buffer memory management method, cache manager, shared buffer memory and terminal
JP2019046283A (en) * 2017-09-05 2019-03-22 富士通株式会社 Controller, backup processing method, and program
CN110471617A (en) * 2018-05-10 2019-11-19 Arm有限公司 For managing the technology of buffer structure in the system using transaction memory
CN111858404A (en) * 2019-04-26 2020-10-30 慧与发展有限责任合伙企业 Cache data positioning system
CN112650694A (en) * 2019-10-12 2021-04-13 北京达佳互联信息技术有限公司 Data reading method and device, cache proxy server and storage medium
CN113190473A (en) * 2021-04-30 2021-07-30 广州大学 Cache data management method and medium based on energy collection nonvolatile processor
CN113392043A (en) * 2021-07-06 2021-09-14 南京英锐创电子科技有限公司 Cache data replacement method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN HAO; XU GUANGPING; XUE YANBING; GAO ZAN; ZHANG HUA: "Energy Optimization and Evaluation of Hybrid Cache Based on Reinforcement Learning" (in Chinese), Journal of Computer Research and Development (计算机研究与发展), no. 06, pages 5-19 *

Also Published As

Publication number Publication date
CN113849455B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US6771526B2 (en) Method and apparatus for data transfer
US7171526B2 (en) Memory controller useable in a data processing system
US5564114A (en) Method and an arrangement for handshaking on a bus to transfer information between devices in a computer system
US5561783A (en) Dynamic cache coherency method and apparatus using both write-back and write-through operations
US20060190691A1 (en) Die-to-die interconnect interface and protocol for stacked semiconductor dies
US10739836B2 (en) System, apparatus and method for handshaking protocol for low power state transitions
CA2007690C (en) High speed bus with virtual memory data transfer capability
US11899612B2 (en) Online upgrading method and system for multi-core embedded system
CN112965924B (en) AHB-to-AXI bridge and aggressive processing method
US11768607B1 (en) Flash controller for ASIC and control method therefor
US20140068125A1 (en) Memory throughput improvement using address interleaving
CN111221759B (en) Data processing system and method based on DMA
KR20130009926A (en) Flexible flash commands
CN101436171B (en) Modular communication control system
CN104615386A (en) Off-core cache device
CN113093899B (en) Cross-power domain data transmission method
US6425071B1 (en) Subsystem bridge of AMBA's ASB bus to peripheral component interconnect (PCI) bus
WO2022095439A1 (en) Hardware acceleration system for data processing, and chip
EP4070204A1 (en) Data transfers between a memory and a distributed compute array
CN113849455B (en) MCU based on hybrid memory and data caching method
WO2009115058A1 (en) Mainboard for providing flash storage function and storage method thereof
CN210155650U (en) Solid state hard disk controller
US20060206644A1 (en) Method of hot switching data transfer rate on bus
CN109188986B (en) Dual-controller parallel bus communication device and method and communication equipment
EP4373038A1 (en) Processing system, related integrated circuit, device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231227

Address after: Room 1605, Building 1, No. 117 Yingshan Red Road, Huangdao District, Qingdao City, Shandong Province, 266400

Patentee after: Qingdao Haicun Microelectronics Co.,Ltd.

Address before: 100191 rooms 504a and 504b, 5th floor, 23 Zhichun Road, Haidian District, Beijing

Patentee before: Zhizhen storage (Beijing) Technology Co.,Ltd.
