CN109196473B - Cache management method, cache manager, shared cache and terminal - Google Patents


Info

Publication number
CN109196473B
CN109196473B · Application CN201780022195.1A
Authority
CN
China
Prior art keywords
data
volatile
thread
cache
volatile cache
Prior art date
Legal status
Active
Application number
CN201780022195.1A
Other languages
Chinese (zh)
Other versions
CN109196473A (en)
Inventor
宋昆鹏 (Song Kunpeng)
李艳华 (Li Yanhua)
李扬 (Li Yang)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN109196473A publication Critical patent/CN109196473A/en
Application granted granted Critical
Publication of CN109196473B publication Critical patent/CN109196473B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0855Overlapped cache accessing, e.g. pipeline
    • G06F12/0857Overlapped cache accessing, e.g. pipeline by multiple requestors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0842Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0846Cache with multiple tag or data arrays being simultaneously accessible
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A cache management method, a cache manager, a shared cache, and a terminal relate to the field of storage technologies. The method is applied to the cache manager (121) and includes the following steps: while a first thread accesses a first volatile cache partition (1211) in a volatile memory (122), if the first thread needs to execute a long-latency operation, backing up first data in the first volatile cache partition (1211) to a non-volatile memory (123), where the threads locked by any two volatile cache partitions (1221) are different and each volatile cache partition (1221) is prohibited from being accessed by any thread other than the thread it locks; and, while the first thread executes the long-latency operation, setting the first volatile cache partition (1211) to a state in which it can be accessed by threads other than the first thread. The method solves the problem of low cache utilization in a terminal, improves the cache utilization of the terminal, and can be applied to terminals.

Description

Cache management method, cache manager, shared cache and terminal
Technical Field
The present application relates to the field of storage technologies, and in particular, to a cache management method, a cache manager, a shared cache, and a terminal.
Background
A terminal is provided with a processor and a volatile memory. The processor includes a plurality of processor cores, each processor core includes a plurality of threads, and each thread accesses data in the volatile memory, for example by writing data into it. Multiple threads in the same processor core can share the volatile memory, so that they can access data in the volatile memory at the same time.
Currently, in a shared volatile memory, when a piece of data has not been accessed for a long time, it is replaced by other data. When a thread executes an operation that takes a long time (such as a data-loss operation), the data that the thread was accessing in the volatile memory is replaced by the data of other threads because it goes unaccessed for a long time, and data pollution between threads occurs. In the related art, to prevent such pollution, the shared volatile memory is divided into a plurality of volatile cache partitions and different threads are made to correspond to different partitions; that is, a thread can only access the volatile cache partition that corresponds to it. When a thread executes an operation that takes a long time, its volatile cache partition is prohibited from being accessed by other threads, so the data the thread accessed in that partition is not replaced by the data of other threads.
However, while such a thread executes an operation that takes a long time, its volatile cache partition is prohibited from being accessed by other threads, and the thread itself does not access the partition either. The partition therefore cannot be used effectively, and the cache utilization of the terminal is low.
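The related-art static partitioning described above can be pictured with a minimal sketch (hypothetical Python, not part of the patent; all names are invented): each partition is bound to one thread, so a partition whose owner is blocked on a long operation is simply wasted.

```python
# Hypothetical sketch of the related-art static partitioning; all names
# are invented for illustration.

class StaticPartitionedCache:
    """Shared volatile memory split into per-thread partitions."""

    def __init__(self, thread_ids):
        # One partition per thread; the binding never changes.
        self.owner = {tid: "partition-%d" % i
                      for i, tid in enumerate(thread_ids)}

    def can_access(self, thread_id, partition):
        # A thread may only touch its own partition, which prevents
        # cross-thread data pollution...
        return self.owner.get(thread_id) == partition

    def idle_partitions(self, busy_threads):
        # ...but every partition whose owner is blocked on a long
        # operation is unusable by anyone: low cache utilization.
        return [p for t, p in self.owner.items() if t in busy_threads]

cache = StaticPartitionedCache(["t0", "t1"])
assert cache.can_access("t0", "partition-0")
assert not cache.can_access("t1", "partition-0")
assert cache.idle_partitions({"t0"}) == ["partition-0"]
```

The last assertion is exactly the problem the application addresses: while `t0` is busy, `partition-0` is inaccessible to everyone.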
Disclosure of Invention
To solve the problem of low cache utilization in a terminal, this application provides a cache management method, a cache manager, a shared cache, and a terminal. The technical solutions are as follows:
In a first aspect, a cache management method is provided, where a shared cache includes a volatile memory and a non-volatile memory, and the volatile memory includes at least two volatile cache partitions. The method includes: allocating a first volatile cache partition to a first thread, where first data associated with the first thread is stored in the first volatile cache partition, the first volatile cache partition is one of the at least two volatile cache partitions, and other threads are not allowed to access the first volatile cache partition while the first thread occupies it; determining whether the first thread needs to execute a long-latency operation, where a long-latency operation is an operation whose duration exceeds a preset time threshold and during which the first thread does not access the first volatile cache partition; and, if the first thread needs to execute the long-latency operation, writing the first data in the first volatile cache partition into the non-volatile memory and releasing the first thread's occupation of the first volatile cache partition.
For example, the cache management method may be applied to a cache manager. Because other threads are not allowed to access the first volatile cache partition while the first thread occupies it, the data of different threads cannot pollute one another. When the first thread executes a long-latency operation, the first data is written into the non-volatile memory (backing it up) and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread executes the long-latency operation, the first volatile cache partition can be accessed by other threads, which improves the cache utilization of the terminal.
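As a rough illustration of the first aspect (a hypothetical Python sketch; the class and method names are invented and not from the patent), the manager backs the partition's data up to non-volatile storage and releases the partition only when the operation exceeds the preset threshold:

```python
# Hypothetical sketch of the method of the first aspect; the class and
# method names are invented for illustration and are not from the patent.

class CacheManagerSketch:
    def __init__(self, partitions):
        self.occupant = {p: None for p in partitions}  # partition -> occupying thread
        self.volatile = {p: {} for p in partitions}    # partition -> cached data
        self.nonvolatile = {}                          # thread -> backed-up first data

    def allocate(self, thread, partition):
        # Allocate a free volatile cache partition to the thread.
        assert self.occupant[partition] is None
        self.occupant[partition] = thread

    def write(self, thread, partition, key, value):
        # Other threads are not allowed to access an occupied partition.
        assert self.occupant[partition] == thread
        self.volatile[partition][key] = value

    def on_long_latency_op(self, thread, partition, threshold_ns, expected_ns):
        # Only an operation longer than the preset threshold triggers a backup.
        if expected_ns <= threshold_ns:
            return False
        # Back the first data up to non-volatile memory, then release the
        # partition so other threads can use it during the long operation.
        self.nonvolatile[thread] = dict(self.volatile[partition])
        self.volatile[partition].clear()
        self.occupant[partition] = None
        return True

mgr = CacheManagerSketch(["P0", "P1"])
mgr.allocate("t1", "P0")
mgr.write("t1", "P0", "addr0", "data0")
assert mgr.on_long_latency_op("t1", "P0", threshold_ns=100, expected_ns=5000)
assert mgr.occupant["P0"] is None            # partition released for other threads
assert mgr.nonvolatile["t1"] == {"addr0": "data0"}
```

Once `occupant["P0"]` is `None`, any other thread can be allocated the partition while `t1` is busy, which is the utilization gain the application claims.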
Optionally, the method further includes: after the first thread finishes executing the long-latency operation, allocating a second volatile cache partition to the first thread and writing the first data in the non-volatile memory into the second volatile cache partition, where the second volatile cache partition is the first volatile cache partition or another volatile cache partition. That is, after the first thread has performed the long-latency operation, the cache manager may restore the first data from the non-volatile memory either to the first volatile cache partition or to a second volatile cache partition different from it. Further, after restoring the first data to the second volatile cache partition, the cache manager may instruct the first thread to access the second volatile cache partition and continue accessing the first data there.
Optionally, the non-volatile memory includes at least two non-volatile cache partitions, and writing the first data in the first volatile cache partition into the non-volatile memory includes: writing the first data into a first non-volatile cache partition in the non-volatile memory, the first non-volatile cache partition being any one of the at least two non-volatile cache partitions. The method further includes: recording the association between the first thread and the first non-volatile cache partition. Writing the first data in the non-volatile memory into the second volatile cache partition then includes: writing the first data in the first non-volatile cache partition into the second volatile cache partition according to the association between the first thread and the first non-volatile cache partition. That is, the cache manager may write the first data from the first volatile cache partition into any partition of the non-volatile memory, and record the association between the first thread and that partition when writing, so that when restoring the first data from the non-volatile memory to the volatile memory it can determine both the first data and the first thread that needs it.
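The association relation can be pictured as a small table mapping each thread to the non-volatile partition holding its backup (a hypothetical sketch; all names are invented):

```python
# Hypothetical sketch of the thread -> non-volatile-partition association;
# names are invented for illustration.

association = {}  # thread -> non-volatile cache partition holding its backup

def backup(thread, data, nv_partitions, nv_store):
    # Write the data into any free non-volatile cache partition and
    # record which partition the thread's backup went to.
    part = next(p for p in nv_partitions if p not in nv_store)
    nv_store[part] = data
    association[thread] = part

def restore(thread, nv_store):
    # The association identifies both the data and the thread needing it.
    part = association.pop(thread)
    return nv_store.pop(part)

store = {}
backup("t1", {"line0": "first data"}, ["N0", "N1"], store)
assert association["t1"] == "N0"
assert restore("t1", store) == {"line0": "first data"}
```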
Optionally, each volatile cache partition locks one thread, the threads locked by any two volatile cache partitions are different, and each volatile cache partition does not allow access by any thread other than the one it locks. Allocating the first volatile cache partition to the first thread includes: setting the first volatile cache partition to lock the first thread. Releasing the first thread's occupation of the first volatile cache partition includes: releasing the locking relationship between the first volatile cache partition and the first thread; and/or setting the first volatile cache partition to lock a second thread that is to access the volatile memory.
That is, when the first thread's occupation of the first volatile cache partition is released, in the first aspect, the locking relationship between the first volatile cache partition and the first thread may simply be released; in the second aspect, after that locking relationship is released, the first volatile cache partition may be set to lock the second thread, and the second thread may be instructed to access it; in the third aspect, the first volatile cache partition may be set to lock the second thread directly, overriding the locking relationship with the first thread, and the second thread is instructed to access the partition. By locking the first volatile cache partition to the second thread, the partition becomes accessible to a thread other than the first thread, and the second thread can access it once the lock is transferred, which improves the cache utilization of the terminal.
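The three unlocking aspects can be sketched as follows (hypothetical Python; `handoff` corresponds to the second aspect, and calling `lock` on an already-locked partition to the third):

```python
# Hypothetical sketch of the three unlocking aspects; names are invented.

class PartitionLock:
    def __init__(self):
        self.locked_thread = None

    def lock(self, thread):
        # Third aspect: locking a new thread overrides any prior lock.
        self.locked_thread = thread

    def release(self):
        # First aspect: simply release the locking relationship.
        self.locked_thread = None

    def handoff(self, new_thread):
        # Second aspect: release first, then lock the second thread.
        self.release()
        self.lock(new_thread)

    def may_access(self, thread):
        # A partition does not allow access by any thread it does not lock.
        return self.locked_thread == thread

p = PartitionLock()
p.lock("t1")
assert p.may_access("t1") and not p.may_access("t2")
p.handoff("t2")   # t2 can now use the partition during t1's long operation
assert p.may_access("t2")
```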
Optionally, the non-volatile memory includes at least two non-volatile cache partitions, the at least two volatile cache partitions are coupled one-to-one to the at least two non-volatile cache partitions, and writing the first data in the first volatile cache partition into the non-volatile memory includes: writing the first data into a first non-volatile cache partition coupled with the first volatile cache partition. To further prevent data pollution between threads, the non-volatile memory also comprises a plurality of non-volatile cache partitions, so that the data of different threads backed up to the non-volatile memory cannot pollute one another.
Optionally, the method further includes: recording related information of the first data while writing the first data into the first non-volatile cache partition, where the related information of the first data includes: an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, the non-volatile storage identifier indicating the storage location of the first data within the non-volatile memory; and, after the first thread finishes the long-latency operation, allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into the first volatile cache partition. That is, while backing up the first data to the non-volatile memory, the cache manager records the related information of the first data so that the backed-up data can later be identified, and it restores the first data according to that information in the subsequent steps.
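The related information of the first data amounts to a three-field record; a hypothetical sketch (field and function names are invented) of how such a record could drive the restore:

```python
# Hypothetical sketch of the "related information of the first data";
# field and function names are invented for illustration.
from dataclasses import dataclass

@dataclass
class BackupRecord:
    volatile_partition_id: int   # identifier of the first volatile cache partition
    nonvolatile_storage_id: int  # where the first data sits in non-volatile memory
    thread_id: int               # identifier of the first thread

def restore_after_long_op(record, nvm, volatile_partitions):
    # Re-lock the recorded partition to the recorded thread and copy the
    # data back from the recorded non-volatile location.
    part = volatile_partitions[record.volatile_partition_id]
    part["locked_thread"] = record.thread_id
    part["data"] = nvm[record.nonvolatile_storage_id]
    return part

nvm = {7: "first data"}
parts = {0: {"locked_thread": None, "data": None}}
restored = restore_after_long_op(BackupRecord(0, 7, 42), nvm, parts)
assert restored == {"locked_thread": 42, "data": "first data"}
```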
Optionally, allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into the first volatile cache partition includes: setting the first volatile cache partition indicated by the identifier of the first volatile cache partition in the related information to lock the first thread indicated by the identifier of the first thread; writing the first data in the first non-volatile cache partition indicated by the non-volatile storage identifier in the related information into the first volatile cache partition indicated by the identifier of the first volatile cache partition; and instructing the first thread indicated by the identifier of the first thread in the related information to continue accessing the first volatile cache partition indicated by the identifier of the first volatile cache partition.
Optionally, before allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into the first volatile cache partition, the method further includes: determining whether the first volatile cache partition is being accessed. Allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into it then includes: when the first volatile cache partition is not being accessed, allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into the first volatile cache partition.
That is, when the first thread has finished executing the long-latency operation, the first volatile cache partition may be being accessed by other threads (for example, the second thread). To prevent the data of those threads from being lost, it is necessary to wait until the first volatile cache partition is idle (that is, not being accessed) before restoring the first thread's access.
Optionally, the method is applied to a cache manager, the first non-volatile cache partition includes a plurality of non-volatile cache sub-areas, and the capacity of each sub-area is greater than or equal to the capacity of the first volatile cache partition. Writing the first data to the first non-volatile cache partition coupled to the first volatile cache partition includes: writing the first data to a first non-volatile cache sub-area that is free among the plurality of sub-areas, where, before the first data is written, none of the related information of data recorded by the cache manager contains a non-volatile storage identifier indicating the first non-volatile cache sub-area (which is why that sub-area is free), and the non-volatile storage identifier in the related information of the first data indicates the first non-volatile cache sub-area;
recording the related information of the first data includes: recording the related information of the first data in a preset cache list, where the related information further includes: a first identifier indicating that the long-latency operation has not been completed, and the preset cache list records the related information of data written into the non-volatile memory;
the method further includes: after the first thread finishes executing the long-latency operation, changing the first identifier in the related information containing the identifier of the first thread in the preset cache list into a second identifier, where the second identifier indicates that the long-latency operation has been completed;
allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into it then includes: for each piece of related information in the preset cache list that contains the second identifier, in sequence, allocating the volatile cache partition indicated by the identifier of the volatile cache partition in that information to the thread indicated by the identifier of the thread in that information, and writing the data at the storage location indicated by the non-volatile storage identifier in that information into that volatile cache partition.
Because the capacity of the first non-volatile cache partition is greater than that of the first volatile cache partition, more threads that perform long-latency operations while accessing the first volatile cache partition can be accommodated: more data can be written into the non-volatile memory, and more threads can wait to have their access to the first volatile cache partition restored. The cache manager therefore needs to restore, in sequence, the access of the threads that performed long-latency operations.
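A hypothetical sketch of the preset cache list and the first/second identifiers (modeled here as `PENDING`/`DONE` flags; all names are invented), showing the sequential restore described above:

```python
# Hypothetical sketch of the preset cache list and the first/second
# identifiers (modeled as PENDING/DONE flags); all names are invented.

PENDING, DONE = 1, 2   # first identifier / second identifier

cache_list = []  # preset cache list: related info of data written to NVM

def record_backup(vp_id, nv_id, thread_id):
    cache_list.append({"vp": vp_id, "nv": nv_id,
                       "thread": thread_id, "flag": PENDING})

def finish_long_op(thread_id):
    # The long-latency operation finished: flip PENDING to DONE.
    for rec in cache_list:
        if rec["thread"] == thread_id and rec["flag"] == PENDING:
            rec["flag"] = DONE

def next_restore():
    # Restore threads one at a time, in list order, DONE entries only.
    for rec in cache_list:
        if rec["flag"] == DONE:
            cache_list.remove(rec)
            return rec
    return None

record_backup(0, 5, "t1")
assert next_restore() is None        # t1 is still inside its long operation
finish_long_op("t1")
assert next_restore()["thread"] == "t1"
assert next_restore() is None
```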
Optionally, the capacity of the first volatile cache partition is greater than or equal to the capacity of the first non-volatile cache partition.
Optionally, before allocating the first volatile cache partition to the first thread according to the related information of the first data and writing the first data in the first non-volatile cache partition into it, the method further includes: determining whether the write policy of the second thread is a write-back policy; and, if it is, writing the data carrying a modified tag in the first volatile cache partition into a memory or storage whose cache level is lower than that of the volatile memory.
It should be noted that, when the second thread writes data into the first volatile cache partition, it sets the tag of each written data block to a modified tag, and the cache in the terminal includes multiple levels of memory. Before restoring the first thread's access to the first volatile cache partition, the cache manager may determine whether the write policy of the second thread is a write-back policy, in order to prevent the second thread's data from being lost when the first thread's access is restored. If the write policy of the second thread is a write-back policy, the data carrying a modified tag in the first volatile cache partition is written into a memory or storage whose cache level is lower than that of the volatile memory.
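A minimal sketch of the write-back check described above (hypothetical Python; a real cache operates on hardware tag bits rather than dicts):

```python
# Hypothetical sketch of the write-back check; a real cache operates on
# hardware tag bits rather than Python dicts.

def flush_if_writeback(partition_blocks, write_policy, lower_level):
    # Under a write-back policy, modified blocks exist only in the cache,
    # so they must be written to a lower cache level before the partition
    # is handed back, or the second thread's data would be lost.
    if write_policy != "write-back":
        return
    for block in partition_blocks:
        if block["modified"]:
            lower_level[block["addr"]] = block["data"]
            block["modified"] = False

blocks = [{"addr": 1, "data": "x", "modified": True},
          {"addr": 2, "data": "y", "modified": False}]
lower = {}
flush_if_writeback(blocks, "write-back", lower)
assert lower == {1: "x"}
assert blocks[0]["modified"] is False
```

Under a write-through policy nothing needs flushing, since every write already reached the lower level.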
Optionally, the information related to the first data further includes: a third identifier indicating that the first data on the nonvolatile memory has not been written to the first volatile cache partition, wherein before the first thread indicated by the identifier of the first thread continues to access the first volatile cache partition indicated by the identifier of the first volatile cache partition in the relevant information indicating the first data, the method further comprises: changing the third identifier in the information related to the first data to a fourth identifier, wherein the fourth identifier is used for indicating that the first data on the nonvolatile memory is written into the first volatile cache partition.
It should be noted that, after the first data is restored to the first volatile cache partition, the cache manager may change the third identifier in the related information of the first data in the preset cache list into a fourth identifier, which indicates that the first data on the non-volatile memory has been written into the first volatile cache partition. In this way, after restoring the first thread's access to the first volatile cache partition, the cache manager can determine from the fourth identifier that the first data has been restored, and can then move on to restoring the access of the thread indicated in the next piece of related information containing the second identifier. This prevents the cache manager from writing the first data into the first volatile cache partition a second time.
Optionally, the long-latency operation is a data-loss operation, and before instructing the first thread indicated by the identifier of the first thread in the related information to continue accessing the first volatile cache partition indicated by the identifier of the first volatile cache partition, the method further includes: receiving lost data sent by the first thread; and writing the lost data into the first volatile cache partition. That is, when the long-latency operation performed by the first thread is a data-loss operation, before the first thread is instructed to continue accessing the first volatile cache partition, the cache manager receives the data that was lost while the first thread performed the operation and writes it into the first volatile cache partition, so that the first thread can access the partition normally after its access is restored.
Optionally, the first data includes a first data block and a second data block, where the data portion of the first data block is independent of the first thread and the data portion of the second data block depends on the first thread. Writing the first data of the first volatile cache partition into the non-volatile memory includes: writing the contents of the valid data bits in the first data block and the second data block into the non-volatile memory. After the first data in the first volatile cache partition is written into the non-volatile memory, the method further includes: clearing the contents of all valid data bits within the first volatile cache partition.
It should be noted that the data stored in the first volatile cache partition includes a plurality of data blocks, and each data block includes a valid data bit, a tag, and a data portion. When backing up the first data, only the contents of the valid data bits of the valid data block (the second data block) and of the invalid data block (the first data block) need to be backed up, and, to ensure that the next thread can access the first volatile cache partition normally, the contents of all valid data bits within the first volatile cache partition must be cleared.
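The data-block layout and the backup/clear steps can be sketched as follows (hypothetical Python; a real cache line's valid bit, tag, and data portion are hardware fields):

```python
# Hypothetical sketch of the data-block layout and the backup/clear steps;
# a real cache line's valid bit, tag, and data are hardware fields.
from dataclasses import dataclass

@dataclass
class CacheBlock:
    valid: bool   # valid data bit
    tag: int
    data: str     # data portion

def backup_partition(blocks):
    # Back up the contents of the valid data bits of every block, and the
    # payload of the blocks that are valid.
    valid_bits = [b.valid for b in blocks]
    payload = [(b.tag, b.data) for b in blocks if b.valid]
    # Clear all valid bits so the next thread starts with a clean partition.
    for b in blocks:
        b.valid = False
    return valid_bits, payload

blocks = [CacheBlock(valid=True, tag=4, data="aa"),
          CacheBlock(valid=False, tag=0, data="")]
bits, payload = backup_partition(blocks)
assert bits == [True, False]
assert payload == [(4, "aa")]
assert all(not b.valid for b in blocks)
```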
Optionally, the long-latency operation is a preset operation whose duration exceeds a preset duration threshold, where the preset operation includes at least one of a data-loss operation, an operation of accessing an input/output device, and a sleep operation. That is, the step of backing up the first data to the first non-volatile cache sub-area is executed only when the first thread executes a long-latency operation that is one of the preset operations; when the operation is not a preset operation, the backup step is not executed.
In a second aspect, a cache manager is provided, where the cache manager includes at least one module, and the at least one module is configured to implement the cache management method provided in the first aspect or any optional manner of the first aspect.
In a third aspect, a cache manager is provided, including: at least one transmitting module, at least one receiving module, at least one processing module, at least one storage module, and at least one bus, where the storage module is connected to the processing module through the bus; the processing module is configured to execute the instructions stored in the storage module; and, by executing the instructions, the processing module implements the cache management method provided in the first aspect or any one of its possible implementations.
In a fourth aspect, a shared cache is provided, the shared cache comprising: a cache manager, a volatile memory and a non-volatile memory, wherein the cache manager is the cache manager of the second aspect or the third aspect; the volatile memory includes at least two volatile cache partitions.
Optionally, the nonvolatile memory includes at least two nonvolatile cache partitions, and the at least two volatile cache partitions are coupled to the at least two nonvolatile cache partitions one by one.
In a fifth aspect, a terminal is provided, where the terminal includes: the system comprises a processor and a shared cache, wherein the processor comprises at least two threads; the shared cache is the shared cache of the fourth aspect.
The technical solutions provided in this application bring the following beneficial effects:
While the first thread occupies the first volatile cache partition, other threads are not allowed to access it, so they cannot access the partition while the first thread does, which prevents the data of different threads from polluting one another. When the first thread executes a long-latency operation, the first data is written into the non-volatile memory (backing it up) and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread executes the long-latency operation, the first volatile cache partition can be accessed by other threads, which improves the cache utilization of the terminal.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 2 is a schematic partial structure diagram of a terminal according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a shared cache provided in the related art;
fig. 4 is a schematic structural diagram of another shared cache provided in the related art;
fig. 5 is a schematic structural diagram of a cache manager according to an embodiment of the present invention;
fig. 6 is a flowchart of a method of cache management according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for a cache manager to restore access to a first volatile cache partition by a first thread according to an embodiment of the present invention;
FIG. 8 is a flowchart of another method for a cache manager to resume access to a first volatile cache partition by a first thread according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a ferroelectric nonvolatile flip-flop provided in the related art;
fig. 10 is a schematic structural diagram of a cache manager according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another cache manager according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of another cache manager according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of another cache manager according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a cache manager according to another embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and fig. 2 is a schematic partial structural diagram of a terminal according to an embodiment of the present invention. Referring to fig. 1 and 2, the terminal 1 includes a processor 10, a Cache 11, and a local Memory 12. The processor 10 has access to a cache 11 and a local memory 12, as well as to a storage of the terminal, which is not shown in fig. 1 and 2. Optionally, the terminal is a computer, and the memory of the terminal is a hard disk.
Processor 10 includes at least one processor core 101 (in FIG. 1, the example of a processor including two processor cores 101) and each processor core 101 includes at least one thread 1011 (in FIG. 1, the example of a processor including two threads 1011 per processor core). Note that a register is also included in each processor core 101. It should be noted that the processor 10 includes at least two threads 1011, and all the threads are hardware threads in the embodiment of the present invention.
The cache 11 is a multi-level cache. When a plurality of processor cores 101 share a certain level of cache, or when a plurality of threads 1011 in a certain processor core 101 share a certain level of cache (that is, they can access that level of cache at the same time), that level of cache is a shared cache A, and at least one shared cache A exists in the multi-level cache. For example, a multi-level cache includes a first-level cache, a second-level cache, and a third-level cache, where the first-level cache (Level 1 Cache; L1 Cache for short) is exclusive to a certain processor core, the second-level cache (Level 2 Cache; L2 Cache for short) is shared among a plurality of processor cores, and the third-level cache (Level 3 Cache; L3 Cache for short) is shared by all the processor cores. In a multithreaded processor, the L1 Cache can also be shared by multiple threads within the same processor core.
It should be noted that the cache 11 in the practical scenario of the embodiment of the present invention includes a shared cache a, and the number of threads accessing the shared cache a is greater than or equal to two. Thus, when processor 10 includes one processor core 101, the one processor core 101 includes a plurality of threads 1011; when each processor core 101 includes one thread 1011, the processor 10 includes at least two processor cores 101.
Referring to fig. 2, the shared cache a includes: a cache manager 121, a Volatile Memory 122 and a Non-Volatile Memory (NVM) 123. The cache manager 121 is connected to the volatile memory 122, the nonvolatile memory 123, and a plurality of threads sharing a shared cache in which the cache manager is located, and the cache manager 121 is configured to manage access to the shared cache by the plurality of threads.
For example, the volatile memory 122 is a Static Random Access Memory (SRAM), and the NVM 123 is a Flash EEPROM (Flash), a Phase Change Memory (PCM), a Spin-Transfer Torque Magnetoresistive RAM (STT-MRAM), or a Ferroelectric Random Access Memory (FRAM). The PCM uses the different conductive characteristics of a phase-change material in its crystalline and amorphous states to realize non-volatile storage; the STT-MRAM is a magnetic memory based on the spin-transfer torque and tunneling magnetoresistance effects, and has the characteristics of high density, short access time, low power consumption, and non-volatile storage; the FRAM uses the bistable polarization characteristic of a ferroelectric film to realize non-volatile storage.
In the related art, when certain data in the shared cache is not accessed for a long time, that data is replaced by other data. When a thread executes an operation that takes a long time (such as handling a cache miss), the data that the thread was accessing in the shared cache is replaced by data of other threads because it goes unaccessed for a long period, and data pollution between threads occurs. To prevent such pollution, the shared cache is divided into a plurality of volatile cache partitions, and different threads are assigned to different volatile cache partitions; that is, a thread can only access the volatile cache partition corresponding to it. When a thread executes an operation that takes a long time, the volatile cache partition corresponding to that thread is prohibited from being accessed by other threads, so the data accessed by the thread in its volatile cache partition is not replaced by the data of other threads.
Fig. 3 is a schematic structural diagram of a shared cache provided in the related art, and fig. 4 is a schematic structural diagram of another shared cache provided in the related art. In fig. 3, the shared cache B is divided into two volatile cache partitions B1 according to ways in the shared cache B: one volatile cache partition B1 includes the storage area corresponding to one way (way 1), and the other volatile cache partition B1 includes the storage areas corresponding to three ways (way 2, way 3, and way 4). In fig. 4, the shared cache B is divided into two volatile cache partitions B2 according to the lines in the shared cache B: one volatile cache partition B2 includes the storage areas corresponding to line 1 to line m, and the other volatile cache partition B2 includes the storage areas corresponding to line m+1 to line x, where m is an integer greater than 1 and less than x, and line x is the last line in the shared cache B. When thread 1 and thread 2 share the shared cache B, thread 1 can access the volatile cache partition B1 including the storage area corresponding to one way in fig. 3, or the volatile cache partition B2 including the storage areas corresponding to line 1 to line m in fig. 4; thread 2 can access the volatile cache partition B1 in fig. 3 that includes the storage areas corresponding to three ways, or the volatile cache partition B2 in fig. 4 that includes the storage areas corresponding to line m+1 to line x. However, when a thread executes an operation that takes a long time, its volatile cache partition is prohibited from being accessed by other threads, and the thread itself does not access the partition during this time; the volatile cache partition therefore cannot be effectively utilized, so the cache utilization rate of the processor is low and the performance of the processor is poor.
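For illustration only, the way-based and line-based partitioning described above can be sketched in Python as follows. The function names and the list representation are hypothetical, not part of the patent's embodiment:

```python
def partition_by_ways(num_ways, way_groups):
    """Split ways 1..num_ways into partitions; way_groups gives the
    number of ways each partition receives (e.g. [1, 3] as in Fig. 3)."""
    assert sum(way_groups) == num_ways
    partitions, start = [], 1
    for count in way_groups:
        partitions.append(list(range(start, start + count)))
        start += count
    return partitions

def partition_by_lines(x, m):
    """Split lines 1..x into [1..m] and [m+1..x], as in Fig. 4,
    where 1 < m < x and line x is the last line."""
    assert 1 < m < x
    return list(range(1, m + 1)), list(range(m + 1, x + 1))
```

For example, `partition_by_ways(4, [1, 3])` yields the two partitions of Fig. 3, one with way 1 and one with ways 2 to 4.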
Referring to fig. 2, in order to prevent pollution of data between threads, the volatile memory 122 in the embodiment of the present invention is also divided into a plurality of volatile cache partitions 1221 (fig. 2 takes a volatile memory including two volatile cache partitions 1221 as an example), for example obtained by dividing the volatile memory according to ways or lines. Each volatile cache partition 1221 is locked to one thread, any two volatile cache partitions 1221 are locked to different threads, and the capacities of any two volatile cache partitions 1221 may be the same or different. Optionally, the non-volatile memory 123 in the embodiment of the present invention is likewise divided into a plurality of non-volatile cache partitions 1231 coupled to the volatile cache partitions (fig. 2 takes a non-volatile memory including two non-volatile cache partitions as an example); each volatile cache partition is coupled to one non-volatile cache partition, and the non-volatile cache partitions coupled to any two volatile cache partitions are different.
Fig. 5 is a schematic structural diagram of a cache manager 121 according to an embodiment of the present invention, where the cache manager 121 includes: at least one transmitting module 1211, at least one receiving module 1212, at least one processing module 1213, at least one storage module 1214, and at least one bus 1215, through which the transmitting module, receiving module, processing module, and storage module are connected. The processing module 1213 is used to execute executable modules, such as computer programs, stored in the storage module 1214. In some embodiments, storage module 1214 stores program 12141, and program 12141 is executable by processing module 1213.
Fig. 6 is a flowchart of a method of cache management according to an embodiment of the present invention, where the cache management method is used for the cache manager 121 in fig. 2, and the cache management method can be implemented by the processing module 1213 in fig. 5 executing the program 12141.
It should be noted that the first volatile cache partition is any one of the at least two volatile cache partitions in the volatile memory, and the first non-volatile cache partition is the non-volatile cache partition coupled with the first volatile cache partition. In the embodiment shown in fig. 6, the first non-volatile cache partition includes a plurality of non-volatile cache sub-sections, each of which has a capacity greater than or equal to that of the first volatile cache partition, so the capacity of the first non-volatile cache partition is greater than that of the first volatile cache partition.
As shown in fig. 6, the cache management method includes:
step 601, allocating the first volatile cache partition to the first thread.
The cache manager manages access by the plurality of threads to the shared cache, the shared cache including a volatile memory and a non-volatile memory, and the volatile memory including a plurality of volatile cache partitions. When the number of threads that need to access the volatile memory exceeds the number of volatile cache partitions, the cache manager can screen out a subset of those threads and lock the volatile cache partitions to the screened threads one to one; that is, each volatile cache partition is locked to one thread, and any two volatile cache partitions are locked to different threads. In step 601, the cache manager allocates the first volatile cache partition to the first thread, i.e., locks the first volatile cache partition to the first thread.
The cache manager then instructs each of the plurality of threads to access the locked volatile cache partition. Such as: the cache manager is capable of sending an access indication to each thread, the access indication including an identification of the locked volatile cache partition for the thread, and the thread, upon receiving the access indication, accessing the locked volatile cache partition according to the access indication. At this time, each thread in the multiple threads can only access the locked volatile cache partition, but cannot access other volatile cache partitions, that is, during the period that the first thread occupies the first volatile cache partition, the cache manager does not allow other threads to access the first volatile cache partition, so that pollution of data among the threads is avoided when the multiple threads access the volatile memory.
Further, after determining the thread locked by each volatile cache partition, the cache manager can establish a lock list as shown in table 1, where the lock list is used to record the identity of each volatile cache partition and the identity of the thread locked by each volatile cache partition. For example, volatile cache partition C1 is locked with thread F1, volatile cache partition C2 is locked with thread F2, volatile cache partition C3 is locked with thread F3, volatile cache partition C4 is locked with thread F4, and volatile cache partition C5 is locked with thread F5. It should be noted that table 1 merely illustrates the lock list by way of example, and in practical applications, the lock list may be different from table 1, and the embodiment of the present invention does not limit this.
TABLE 1
Volatile cache partitioning Threading
C1 F1
C2 F2
C3 F3
C4 F4
C5 F5
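The lock list of Table 1 and its invariant (each partition locked to exactly one thread, any two partitions locked to different threads) can be sketched as follows. This is a minimal illustration; the class and method names are hypothetical:

```python
class LockList:
    """Records which thread each volatile cache partition is locked to,
    as in Table 1. Any two partitions must lock different threads."""

    def __init__(self):
        self._locks = {}                 # partition id -> thread id

    def lock(self, partition, thread):
        # enforce the invariant: a thread is locked by at most one partition
        assert thread not in self._locks.values(), "thread already locked"
        self._locks[partition] = thread

    def owner(self, partition):
        """Return the thread the partition is locked to, or None."""
        return self._locks.get(partition)
```

Populating it with the rows of Table 1 (`lock("C1", "F1")`, `lock("C2", "F2")`, and so on) then lets the cache manager look up the locking thread of any partition.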
Step 602, determining whether the first thread needs to execute a long time-consuming operation. If the first thread needs to execute the long time-consuming operation, execute step 603; if not, return to step 602.
During the process of accessing the first volatile cache partition, the first thread can read data stored in the first volatile cache partition or modify data stored in it. When the first thread needs to execute a long time-consuming operation, the cache manager receives a long time-consuming operation instruction sent by the first thread and determines from it that the first thread needs to execute the long time-consuming operation. Here, a long time-consuming operation is an operation whose duration is greater than a preset duration threshold, and the first thread does not access the first volatile cache partition while executing it. Optionally, the preset duration threshold is 100 clock periods of the terminal, and the long time-consuming operation is cache-miss handling (Cache Miss) or an operation of accessing an Input/Output (I/O) device.
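The threshold test in the paragraph above can be sketched as a one-line predicate. This is an illustration only; the constant and function names are hypothetical, and the 100-cycle value is taken from the optional embodiment:

```python
# Preset duration threshold: 100 clock periods of the terminal,
# per the optional embodiment described above.
PRESET_THRESHOLD_CYCLES = 100

def is_long_operation(duration_cycles, threshold=PRESET_THRESHOLD_CYCLES):
    """An operation counts as 'long time-consuming' when its duration
    exceeds the preset duration threshold."""
    return duration_cycles > threshold
```

Under this sketch, a cache-miss handler taking 150 cycles qualifies as long time-consuming, while a 100-cycle operation does not.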
Step 603, writing first data related to the first thread in the first volatile cache partition into the non-volatile memory.
When the cache manager determines that the first thread needs to execute the long time-consuming operation, it can conclude that the first thread will not access the first volatile cache partition for a relatively long period, and it backs up the first data, stored in the first volatile cache partition and related to the first thread, to the non-volatile memory. It should be noted that the non-volatile memory also includes a plurality of non-volatile cache partitions coupled to the plurality of volatile cache partitions, and the non-volatile cache partitions coupled to any two volatile cache partitions are different. When the cache manager needs to back up the first data to the non-volatile memory, it can directly determine the first non-volatile cache partition coupled with the first volatile cache partition and back up the first data to the first non-volatile cache partition.
In addition, in the process of backing up the first data to the non-volatile memory, in order to keep track of the backed-up first data, the cache manager may record related information of the first data. For example, the related information of the first data includes: an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, where the non-volatile storage identifier indicates the storage location of the first data within the non-volatile memory. Optionally, the related information of the first data further includes a long time-consuming operation execution state identifier and a data recovery identifier. The long time-consuming operation execution state identifier is initially a first identifier, indicating that the long time-consuming operation is not completed; the data recovery identifier is initially a third identifier, indicating that the data has not been written from the non-volatile memory back to the first volatile cache partition.
Further, a cache list is preset on the cache manager, and the cache list records the related information of the data written into the non-volatile memory. When the cache manager records the related information of the first data, it writes that information into the cache list. Optionally, the cache list is shown in Table 2, and the related information of the first data includes: an identifier of the first volatile cache partition (C1), a non-volatile storage identifier indicating a first non-volatile cache sub-section in the first non-volatile cache partition (F1M1), an identifier of the first thread (W1), a first identifier (0), and a third identifier (0). The cache list also records related information of third data, where the third data is data stored in a third volatile cache partition and related to a third thread; when the third thread performs a long time-consuming operation, the cache manager backs up the third data to a non-volatile cache sub-section in the third non-volatile cache partition. The related information of the third data includes: an identifier of the third volatile cache partition (C3), a non-volatile storage identifier indicating the non-volatile cache sub-section in the third non-volatile cache partition (F3M1), an identifier of the third thread (W3), a first identifier (0), and a third identifier (0).
TABLE 2
Volatile cache partition Non-volatile storage identifier Thread Execution state Data recovery
C1 F1M1 W1 0 0
C3 F3M1 W3 0 0
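A single entry of the preset cache list can be sketched as a small record. The field names and dictionary representation are hypothetical; the identifier values follow the convention in the text (first identifier 0 for an uncompleted operation, third identifier 0 for data not yet restored):

```python
def make_cache_entry(partition_id, nv_storage_id, thread_id):
    """One row of the preset cache list (cf. Table 2)."""
    return {
        "partition": partition_id,    # e.g. "C1"
        "nv_storage": nv_storage_id,  # e.g. "F1M1"
        "thread": thread_id,          # e.g. "W1"
        "exec_state": 0,              # first identifier: op not complete
        "recovered": 0,               # third identifier: not restored yet
    }
```

For instance, the first data's entry would be `make_cache_entry("C1", "F1M1", "W1")` and the third data's entry `make_cache_entry("C3", "F3M1", "W3")`.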
For example, the cache manager can select a free non-volatile cache sub-section from the plurality of non-volatile cache sub-sections of the first non-volatile cache partition as the first non-volatile cache sub-section (a sub-section is free when, before the first data is written into the non-volatile memory, no recorded related information contains a non-volatile storage identifier indicating it), write the first data into that free sub-section, and add the non-volatile storage identifier indicating the first non-volatile cache sub-section to the related information of the first data.
It should be noted that the data stored in the first volatile cache partition includes a plurality of data blocks (also called cache lines), and each data block includes valid data bits, a tag, and a data portion. As an example, the first data stored in the first volatile cache partition includes a first data block and a second data block. The data portion of the first data block is unrelated to the first thread, and its valid data bits indicate this, for example with the content "0"; the data portion of the second data block is related to the first thread, and its valid data bits indicate this, for example with the content "1". Optionally, when backing up the first data to the non-volatile memory, the cache manager backs up the content of the valid data bits of both the first data block and the second data block to the first non-volatile cache partition. That is, when backing up the first data, the cache manager needs to back up the content of the valid data bits of the valid data block (the second data block) and of the invalid data block (the first data block). After backing up the first data to the first non-volatile cache partition, the cache manager also needs to clear the content of all valid data bits within the first volatile cache partition, so that the next thread can access the first volatile cache partition normally.
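The backup-then-clear behavior described above can be sketched as follows. The block layout (valid bit, tag, data portion) follows the text; everything else, including the dictionary representation, is illustrative only:

```python
def backup_partition(blocks):
    """Snapshot every data block (including the content of its valid
    data bits), then clear all valid bits in the partition so the
    next thread starts from a clean state."""
    snapshot = [dict(block) for block in blocks]  # copy blocks as-is
    for block in blocks:
        block["valid"] = 0  # clear valid bit for the next occupant
    return snapshot
```

After the call, the snapshot preserves which blocks were valid, while every block left in the volatile partition is marked invalid.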
Optionally, the long time-consuming operation instruction sent by the first thread to the cache manager further includes an identifier of the long time-consuming operation. After receiving the instruction, the cache manager determines, according to this identifier, whether the long time-consuming operation is a preset operation; optionally, the preset operation includes at least one of a cache-miss operation, an operation of accessing the input/output device, and a sleep operation. Only when the long time-consuming operation is a preset operation does the cache manager release the first thread's occupation of the first volatile cache partition and write the first data into the first non-volatile cache sub-section; when it is not a preset operation, the cache manager performs neither step.
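The preset-operation check can be sketched as a set-membership test. The set contents mirror the optional preset operations named above; the identifiers themselves are hypothetical strings:

```python
# Hypothetical identifiers for the preset operation kinds named in
# the embodiment: cache miss, I/O-device access, and sleep.
PRESET_OPERATIONS = {"cache_miss", "io_access", "sleep"}

def should_release(op_identifier):
    """Backup and release of the partition happen only when the long
    time-consuming operation is one of the preset kinds."""
    return op_identifier in PRESET_OPERATIONS
```

A long operation whose identifier is not in the set (for example, an ordinary compute-bound loop) would leave the partition locked to its thread.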
In addition, in the embodiment of the present invention, the cache manager backs up the first data to the nonvolatile memory, and when the shared cache is suddenly powered off, the first data stored in the nonvolatile memory is not lost, so that the first data can be further prevented from being lost.
It should be noted that, in the embodiment of the present invention, in order to ensure that the first data on the first volatile cache partition can be successfully backed up to the first nonvolatile cache sub-area, it is required to ensure that the capacity of the first nonvolatile cache sub-area is greater than or equal to the capacity of the first volatile cache partition. Optionally, in order to make efficient use of the storage space in the shared cache, the capacity of the first non-volatile cache sub-section may be set equal to the capacity of the first volatile cache partition.
Step 604, the first volatile cache partition is allocated to a second thread that is to access the volatile memory.
Before step 604, the first volatile cache partition is locked to the first thread, that is, only the first thread is allowed to access it; and the first thread does not access it while executing the long time-consuming operation. Therefore, in step 604 the cache manager releases the first thread's occupation of the first volatile cache partition, setting it to a state in which threads other than the first thread can access it. It should be noted that, to do so, the cache manager may directly lock the first volatile cache partition to the second thread, overriding the locking relationship between the first volatile cache partition and the first thread (the manner of step 604). Optionally, the cache manager may instead simply release the locking relationship between the first volatile cache partition and the first thread; or it may release that locking relationship first, then lock the first volatile cache partition to the second thread, and instruct the second thread to access the first volatile cache partition.
For example, before step 604 the cache manager screened a number of threads needing to access the volatile memory, including the first thread. In step 604, the cache manager selects, from the threads that were not previously screened, one thread as the second thread, and locks the first volatile cache partition to it so that the second thread can access the first volatile cache partition.
When the first volatile cache partition is volatile cache partition C1, as shown in Table 1, and the first thread is thread F1, the cache manager locks the first volatile cache partition (volatile cache partition C1) to the second thread (thread F6) in step 604, as shown in Table 3.
TABLE 3
Volatile cache partitioning Threading
C1 F6
C2 F2
C3 F3
C4 F4
C5 F5
Further, after allocating the first volatile cache partition to the second thread, the cache manager can also instruct the second thread to access the first volatile cache partition.
It should be noted that during the process of accessing the first volatile cache partition by the second thread, the cache manager may also perform a method similar to the method in steps 602 to 604. That is, if the second thread needs to perform a long time consuming operation, the cache manager may also write the second data related to the second thread stored in the first volatile cache partition into a second non-volatile cache sub-area that is idle in the first non-volatile cache partition, and record the related information of the second data in the process of writing the second data into the second non-volatile cache sub-area. The cache manager is then further capable of locking the first volatile cache partition to another thread (neither the first thread nor the second thread) and instructing the other thread to access the first volatile cache partition, and so forth. The cache manager can record the relevant information of the second data in a preset cache list, as shown in table 4, the relevant information of the second data may include: an identification of the first volatile cache partition (C1), a non-volatile storage identification indicating a second non-volatile cache sub-section in the first non-volatile cache partition (F1M2), an identification of the second thread (W2), a first identification (0), and a third identification (0).
TABLE 4
Volatile cache partition Non-volatile storage identifier Thread Execution state Data recovery
C1 F1M1 W1 0 0
C3 F3M1 W3 0 0
C1 F1M2 W2 0 0
Step 605, after the first thread is executed and the long time consuming operation is completed, allocating the first volatile cache partition to the first thread, and writing the first data in the non-volatile memory into the first volatile cache partition.
After the first thread completes the long time-consuming operation, the cache manager needs to resume the first thread's access to the first volatile cache partition. In the embodiment of the present invention, the first non-volatile cache partition includes a plurality of non-volatile cache sub-sections, each with a capacity greater than or equal to that of the first volatile cache partition; therefore several threads may have performed long time-consuming operations while occupying the first volatile cache partition, several pieces of data may have been written into the non-volatile memory, and access to the first volatile cache partition may need to be restored for several threads. The cache manager therefore restores, in sequence, the access of the threads that have completed their long time-consuming operations.
It should be noted that, after the first thread completes the long time-consuming operation, the first thread sends a long time-consuming operation completion indication to the cache manager. According to this indication, the cache manager changes the first identifier, in the related information in the preset cache list containing the identifier of the first thread, to the second identifier. The second identifier indicates that the long time-consuming operation has been completed; at this point, the long time-consuming operation execution state identifier in the related information containing the identifier of the first thread is the second identifier. For example, as shown in Table 5, the cache manager changes the long time-consuming operation execution state identifier in the related information containing the identifier of the first thread (W1) from the first identifier (0) to the second identifier (1). Similarly, if the second thread has also completed a long time-consuming operation, the cache manager changes the execution state identifier in the related information containing the identifier of the second thread (W2) from the first identifier (0) to the second identifier (1).
TABLE 5
Volatile cache partition Non-volatile storage identifier Thread Execution state Data recovery
C1 F1M1 W1 1 0
C3 F3M1 W3 0 0
C1 F1M2 W2 1 0
For example, the cache manager walks the preset cache list and, for each piece of related information whose execution state is the second identifier (such as the related information of the first data and of the second data), sequentially: allocates the volatile cache partition indicated by the identifier of the volatile cache partition in that related information to the thread indicated by the identifier of the thread in that related information, and writes the data at the storage location indicated by the non-volatile storage identifier in that related information into that volatile cache partition. That is, for each entry containing the second identifier, the access of the indicated thread to the indicated volatile cache partition is resumed in sequence.
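The sequential-restore walk over the preset cache list can be sketched as follows, assuming entries shaped like the Table 2 rows. The function names and dictionary keys are hypothetical:

```python
def restore_completed(cache_list, restore_fn):
    """Walk the preset cache list in order; for every entry whose
    execution state is the second identifier (1) and whose recovery
    flag is still the third identifier (0), restore the data via
    restore_fn and set the fourth identifier (1)."""
    for entry in cache_list:
        if entry["exec_state"] == 1 and entry["recovered"] == 0:
            restore_fn(entry["partition"], entry["nv_storage"], entry["thread"])
            entry["recovered"] = 1  # fourth identifier: restored once
```

Because the recovery flag flips to 1 immediately after restoring, an entry can never be restored twice even if the list is walked again.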
For example, fig. 7 is a flowchart illustrating a method for the cache manager to restore the access of the first thread to the first volatile cache partition, where the method includes:
Step 6051a, determine whether the first volatile cache partition is being accessed. If the first volatile cache partition is being accessed, return to step 6051a; if it is not being accessed, perform step 6052a.
After the first thread finishes executing the long time consuming operation, the cache manager needs to first determine whether the first volatile cache partition is being accessed, and if the first volatile cache partition is being accessed, the cache manager continues to execute step 6051a to continuously determine whether the first volatile cache partition is being accessed. If the first volatile cache partition is not being accessed, the cache manager needs to perform step 6052 a.
That is, when the first thread has executed the long time consuming operation, the first volatile cache partition may be accessed by other threads (e.g., the second thread), and at this time, in order to prevent data loss of other threads, it is necessary to wait for the first volatile cache partition to be in an idle state (i.e., the first volatile cache partition is not accessed), so as to perform access recovery of the first thread. It should be noted that after the other threads have completed accessing the first volatile cache partition, the first volatile cache partition is in an idle state, or during the process of performing the long time consuming operation by the other threads, the first volatile cache partition is also in an idle state.
Step 6052a assigns the first volatile cache partition to the first thread based on the information related to the first data. Step 6053a is performed.
The cache manager can read the identifier of the first volatile cache partition and the identifier of the first thread from the related information of the first data, further determine the first volatile cache partition and the first thread, and set the first volatile cache partition to lock the first thread (as shown in table 1), that is, allocate the first volatile cache partition to the first thread.
Step 6053a: write the first data in the nonvolatile memory to the first volatile cache partition according to the related information of the first data.
The cache manager can read the nonvolatile storage identifier from the related information of the first data, determine the first nonvolatile cache sub-partition indicated by that identifier, obtain the first data stored in the first nonvolatile cache sub-partition, and write the first data into the first volatile cache partition. Further, when writing the first data into the first volatile cache partition, the cache manager may write only the contents of the valid data bits of the data blocks in the first data into the first volatile cache partition.
After writing the first data into the first volatile cache partition, the cache manager may further instruct the first thread indicated by the thread identifier in the related information of the first data to continue accessing the first volatile cache partition.
It should be noted that, after the first data is restored to the first volatile cache partition, the cache manager further changes the third identifier in the related information of the first data in the preset cache list to a fourth identifier, where the fourth identifier indicates that the first data in the nonvolatile memory has been written into the first volatile cache partition. Thus, after completing step 6053a, that is, after restoring the first thread's access to the first volatile cache partition, the cache manager can determine from the fourth identifier in the related information of the first data in the preset cache list that the first data has already been restored. It can then move on to the next piece of related information containing the second identifier, restoring the access of the thread indicated by that information's thread identifier to the volatile cache partition indicated by its volatile cache partition identifier. This prevents the cache manager from writing the first data into the first volatile cache partition a second time.
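To make the role of the third and fourth identifiers concrete, here is a hypothetical sketch (the string values and field names are assumptions for illustration only) of a restore step that flips the state flag so a second restore attempt becomes a no-op:

```python
# Hypothetical encoding of the restore-state flags in a preset-cache-list entry.
NOT_RESTORED = "third"   # first data has not yet been written back to the volatile partition
RESTORED = "fourth"      # first data has been written back to the volatile partition


def restore_entry(entry, volatile_partitions, nonvolatile_storage):
    """Restore one preset-cache-list entry, skipping entries already restored."""
    if entry["restore_flag"] == RESTORED:
        return False  # prevents writing the first data to the partition twice
    data = nonvolatile_storage[entry["nv_id"]]
    volatile_partitions[entry["partition_id"]] = data
    entry["restore_flag"] = RESTORED  # change the third identifier to the fourth
    return True


nv = {"NV0": b"first-data"}
vp = {"P0": None}
entry = {"partition_id": "P0", "nv_id": "NV0", "thread_id": "T1",
         "restore_flag": NOT_RESTORED}
assert restore_entry(entry, vp, nv) is True
assert vp["P0"] == b"first-data"
assert restore_entry(entry, vp, nv) is False  # second attempt is a no-op
```

The flag transition is what lets the cache manager scan the list again later without re-copying data it has already restored.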
Further, when the long time-consuming operation performed by the first thread is a data loss operation (cache miss), the cache manager can receive, before step 6053a, the lost data sent by the first thread while performing the long time-consuming operation, and write the lost data into the first volatile cache partition after step 6053a, so as to ensure that the first thread can access the first volatile cache partition normally after its access is restored.
It should be noted that, when the first nonvolatile cache partition does not include multiple nonvolatile cache sub-partitions and the capacity of the first nonvolatile cache partition is greater than or equal to the capacity of the first volatile cache partition in the embodiment shown in fig. 6, the embodiment shown in fig. 6 may be changed as follows:
First, in step 603, the cache manager writes the first data into a first nonvolatile cache partition in the nonvolatile memory, and the nonvolatile storage identifier in the related information of the first data recorded by the cache manager indicates the identifier of the first nonvolatile cache partition. Further, since the first nonvolatile cache partition does not include multiple nonvolatile cache sub-partitions, it can hold the data of only one thread stored on the first volatile cache partition at a time. As a result, at most one thread accessing the first volatile cache partition can have its data written to the nonvolatile memory, and the related information of the first data therefore does not include the long time-consuming operation execution state identifier.
Second, while the second thread is accessing the first volatile cache partition, the cache manager does not need to perform a method similar to steps 602 to 604; that is, the cache manager does not need to perform any action when the second thread also needs to perform a long time-consuming operation.
Finally, in step 605, after the first thread has completed the long time-consuming operation, the cache manager does not need to determine whether the first volatile cache partition is being accessed; instead, it directly stops the second thread's access to the first volatile cache partition, allocates the first volatile cache partition to the first thread, writes the first data in the nonvolatile memory into the first volatile cache partition, and restores the first thread's access to the first volatile cache partition.
Optionally, in order to effectively utilize the storage space in the shared cache, in the embodiment of the present invention, the capacity of the first nonvolatile cache partition is set to be equal to the capacity of the first volatile cache partition.
For example, fig. 8 is a flow chart of another method for a cache manager to restore access to a first volatile cache partition by a first thread, as shown in fig. 8, the method comprising:
Step 6051b: allocate the first volatile cache partition to the first thread according to the related information of the first data.
After determining that the first thread has completed the long time-consuming operation, the cache manager can directly determine, according to the identifier of the first thread, the related information of the first data that contains that identifier, read the identifier of the first volatile cache partition and the identifier of the first thread from the related information, determine the first volatile cache partition and the first thread accordingly, and set the first volatile cache partition to lock the first thread, thereby allocating the first volatile cache partition to the first thread. At this point, the second thread is no longer locked to the first volatile cache partition and cannot access it.
Further, when the second thread writes data into the first volatile cache partition, it marks each data block of that data with a modified tag, and the cache in the terminal includes multiple levels of memory. Before restoring the first thread's access to the first volatile cache partition, that is, before performing step 6051b, the cache manager further determines whether the write policy of the second thread is a write-back policy, in order to prevent the second thread's data from being lost when the first thread's access is restored. If the write policy of the second thread is a write-back policy, the cache manager backs up the data with the modified tag in the first volatile cache partition to memory, or to a memory whose cache level is lower than that of the volatile memory.
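A minimal sketch of this check, under the assumption (mine, not the patent's) that each cached block carries a tag field and that a write-through policy needs no backup because the backing store is already current:

```python
def backup_before_reassign(partition_blocks, write_policy, backing_store):
    """Before the partition is reassigned, back up the second thread's dirty data.

    Only a write-back policy leaves modified data that exists solely in the
    cache; under write-through the backing store is already up to date.
    (Field names here are illustrative, not taken from the patent.)
    """
    if write_policy != "write-back":
        return 0
    backed_up = 0
    for addr, block in partition_blocks.items():
        if block["tag"] == "modified":
            backing_store[addr] = block["data"]  # memory, or a lower-level cache
            backed_up += 1
    return backed_up


blocks = {0x10: {"tag": "modified", "data": b"a"},
          0x20: {"tag": "clean", "data": b"b"}}
store = {}
assert backup_before_reassign(blocks, "write-back", store) == 1
assert store == {0x10: b"a"}
assert backup_before_reassign(blocks, "write-through", {}) == 0
```

Only the single modified block is copied out; clean blocks can be safely discarded when the partition is handed back to the first thread.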
Step 6052b: write the first data in the nonvolatile memory to the first volatile cache partition according to the related information of the first data.
For the specific process by which the cache manager restores the first data in step 6052b, refer to step 6053a in the embodiment shown in fig. 7; details are not repeated in this embodiment of the present invention.
Optionally, in step 605, the cache manager may further allocate a second volatile cache partition to the first thread and write the first data in the nonvolatile memory into the second volatile cache partition, where the second volatile cache partition is the first volatile cache partition or another volatile cache partition other than the first volatile cache partition. That is, after the first thread has completed the long time-consuming operation, the cache manager may write the first data from the nonvolatile memory into either the first volatile cache partition or a second volatile cache partition different from the first volatile cache partition. Further, after writing the first data into the second volatile cache partition, the cache manager may instruct the first thread to access the second volatile cache partition and continue accessing the first data there.
Optionally, the nonvolatile memory includes at least two nonvolatile cache partitions. When the at least two nonvolatile cache partitions are not coupled one-to-one to the at least two volatile cache partitions, in step 602 the cache manager may write the first data in the first volatile cache partition into a first nonvolatile cache partition in the nonvolatile memory and record the association relationship between the first thread and the first nonvolatile cache partition, where the first nonvolatile cache partition is any one of the at least two nonvolatile cache partitions. In step 605, the cache manager may allocate the second volatile cache partition to the first thread and write the first data in the first nonvolatile cache partition into the second volatile cache partition according to the association relationship between the first thread and the first nonvolatile cache partition. That is, the cache manager may write the first data from the first volatile cache partition into any partition in the nonvolatile memory, and records the association relationship between the first thread and that partition at write time, so that when the first data is restored from the nonvolatile memory to the volatile memory, both the first data and the first thread that needs it can be determined.
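The association record described above could be sketched as follows; the class and field names are hypothetical, introduced only to show why the record is needed when partitions are not coupled one-to-one:

```python
# When nonvolatile cache partitions are not coupled one-to-one with the
# volatile ones, the cache manager must remember which nonvolatile partition
# received which thread's backed-up data.
class BackupMap:
    def __init__(self):
        self.thread_to_nv = {}  # thread id -> nonvolatile partition id

    def backup(self, thread_id, nv_partition_id, nv_storage, data):
        nv_storage[nv_partition_id] = data
        # Record the association between the thread and the partition it used.
        self.thread_to_nv[thread_id] = nv_partition_id

    def restore(self, thread_id, nv_storage):
        # The recorded association identifies both the data and its owner.
        nv_partition_id = self.thread_to_nv[thread_id]
        return nv_storage[nv_partition_id]


nv_storage = {}
m = BackupMap()
m.backup("T1", "NV3", nv_storage, b"first-data")  # any free partition may be used
assert m.restore("T1", nv_storage) == b"first-data"
```

Without the recorded association, the cache manager could not tell, at restore time, which arbitrary nonvolatile partition holds the first thread's data.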
In order to prevent data loss on the volatile memory in a burst situation, a ferroelectric nonvolatile flip-flop as shown in fig. 9 has been designed in the related art, which includes a ferroelectric nonvolatile part and a complementary metal oxide semiconductor (CMOS) volatile part. The ferroelectric nonvolatile part is provided with a signal input terminal Din, a signal output terminal Dout, an inverted signal output terminal, a clock signal input terminal Clk, and an inverted clock signal input terminal. When the ferroelectric nonvolatile flip-flop works normally, the CMOS volatile part in the flip-flop works; when a burst condition occurs, the flip-flop generates a first signal RW, a second signal PL, and a third signal PCH in a certain time sequence, so that the data on the CMOS volatile part is backed up to the ferroelectric nonvolatile part. However, in the related art, the nonvolatile part is not used to back up a thread's data when the thread needs to perform a long time-consuming operation.
In summary, in the cache management method provided in the embodiment of the present invention, during the period when the first thread occupies the first volatile cache partition, other threads are not allowed to access the first volatile cache partition, so that other threads cannot access the first volatile cache partition while the first thread is accessing it, thereby preventing the data of different threads from polluting each other. When the first thread performs a long time-consuming operation, the first data is written into the nonvolatile memory, so that the first data is backed up, and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads, which improves the cache utilization of the terminal.
Fig. 10 is a schematic structural diagram of a cache manager according to an embodiment of the present invention, where the cache manager may be the cache manager in fig. 2, and as shown in fig. 10, the cache manager 100 includes:
the allocating module 1001 is configured to allocate a first volatile cache partition to a first thread, where first data related to the first thread is stored in the first volatile cache partition, and other threads are not allowed to access the first volatile cache partition while the first thread occupies the first volatile cache partition, where the first volatile cache partition is any partition of at least two volatile cache partitions;
a first determining module 1002, configured to determine whether the first thread needs to perform a long time-consuming operation, where the long time-consuming operation is an operation whose operation duration is longer than a preset time threshold, and the first thread does not access the first volatile cache partition during the long time-consuming operation;
the first writing module 1003 is configured to, when the first thread needs to perform the long time-consuming operation, write the first data in the first volatile cache partition into the nonvolatile memory and release the first thread's occupation of the first volatile cache partition.
In summary, in the cache manager provided in the embodiments of the present invention, during the period when the first thread occupies the first volatile cache partition, other threads are not allowed to access the first volatile cache partition, so that when the first thread accesses the first volatile cache partition, other threads cannot access the first volatile cache partition, and data of different threads are prevented from being polluted by each other. When the first thread executes the long time-consuming operation, the first write module 1003 writes the first data into the nonvolatile memory, backs up the first data, and releases the occupation of the first thread on the first volatile cache partition, that is, when the first thread executes the long time-consuming operation, the first volatile cache partition can be accessed by other threads, so that the cache utilization rate of the terminal can be improved.
Optionally, each volatile cache partition locks one thread, the threads locked by any two volatile cache partitions are different, and no volatile cache partition allows access by a thread it has not locked.
the assignment module 1001 is further configured to: setting a first volatile cache partition to lock a first thread;
the first writing module 1003 is further configured to:
releasing the locking relation between the first volatile cache partition and the first thread;
and/or,
the first volatile cache partition is set to lock a second thread to access the volatile memory.
Optionally, the nonvolatile memory includes at least two nonvolatile cache partitions, the at least two volatile cache partitions are coupled one-to-one to the at least two nonvolatile cache partitions, and the first writing module 1003 is further configured to:
the first data is written to a first non-volatile cache partition coupled to the first volatile cache partition.
Fig. 11 is a schematic structural diagram of another cache manager according to an embodiment of the present invention, as shown in fig. 11, on the basis of fig. 10, the cache manager 100 further includes:
the recording module 1004 is configured to record relevant information of the first data during writing of the first data into the first nonvolatile cache partition, where the relevant information of the first data includes: an identifier of the first volatile cache partition, a nonvolatile storage identifier and an identifier of the first thread, the nonvolatile storage identifier indicating a storage location of the first data in the nonvolatile memory;
the second writing module 1005 is configured to, after the first thread has executed the long and time consuming operation, allocate the first volatile cache partition to the first thread according to the related information of the first data, and write the first data in the first nonvolatile cache partition into the first volatile cache partition.
Optionally, the second writing module 1005 is further configured to: setting a first volatile cache partition indicated by the identification of the first volatile cache partition in the related information of the first data, and locking a first thread indicated by the identification of the first thread; writing the first data in the first nonvolatile cache partition indicated by the first nonvolatile storage identifier in the related information of the first data into the first volatile cache partition indicated by the identifier of the first volatile cache partition; and in the related information indicating the first data, the first thread indicated by the identification of the first thread continues to access the first volatile cache partition indicated by the identification of the first volatile cache partition.
Fig. 12 is a schematic structural diagram of another cache manager according to an embodiment of the present invention, as shown in fig. 12, on the basis of fig. 11, the cache manager 100 further includes:
a second determining module 1006, configured to determine whether the first volatile cache partition is accessed;
the second write module 1005 is further configured to: and when the first volatile cache partition is not accessed, the first volatile cache partition is allocated to the first thread according to the related information of the first data, and the first data in the first nonvolatile cache partition is written into the first volatile cache partition.
Optionally, the first non-volatile cache partition comprises a plurality of non-volatile cache sub-partitions, each having a capacity greater than or equal to the capacity of the first volatile cache partition,
the first writing module 1003 is further configured to: writing the first data to a first non-volatile cache subregion of the plurality of non-volatile cache subregions that is free, the cache manager recording information about the data prior to writing the first data to a first non-volatile cache subregion coupled to the first volatile cache subregion excluding: a nonvolatile storage identifier used for indicating a free nonvolatile cache subarea, wherein the nonvolatile storage identifier in the related information of the first data is used for indicating the first nonvolatile cache subarea;
the recording module 1004 is further configured to: recording relevant information of first data in a preset cache list, wherein the relevant information of the first data further comprises: the first identification is used for indicating that long time consuming operation is not completed, and the preset cache list is used for recording relevant information of data written into the nonvolatile memory;
fig. 13 is a schematic structural diagram of another cache manager according to an embodiment of the present invention, as shown in fig. 13, on the basis of fig. 12, the cache manager 100 further includes: a first changing module 1007, configured to change, after the first thread completes execution of the long time consuming operation, a first identifier in the relevant information of the first data, where the first identifier includes an identifier of the first thread, in a preset cache list to a second identifier, where the second identifier is used to indicate that the long time consuming operation has been completed;
the second write module 1005 is further configured to: and sequentially distributing the volatile cache partition indicated by the identifier of the volatile cache partition in the relevant information containing the second identifier to the thread indicated by the identifier of the thread in the relevant information containing the second identifier according to the relevant information containing the second identifier in the preset cache list, and writing the data in the storage position indicated by the non-volatile storage identifier in the relevant information containing the second identifier into the volatile cache partition indicated by the identifier of the volatile cache partition in the relevant information containing the second identifier.
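The sequential processing performed by the second write module, walking the preset cache list and restoring every entry whose flag is the second identifier, can be sketched as below; the string values and dictionary fields are illustrative assumptions:

```python
DONE = "second"  # second identifier: the long time-consuming operation has completed


def restore_completed(cache_list, volatile_partitions, nv_storage):
    """Walk the preset cache list in order and restore every entry whose
    flag is the second identifier, writing the backed-up data from the
    nonvolatile storage location back into the indicated volatile partition."""
    restored = []
    for entry in cache_list:
        if entry["flag"] != DONE:
            continue  # first identifier: operation still in progress, skip
        volatile_partitions[entry["partition_id"]] = nv_storage[entry["nv_id"]]
        restored.append(entry["thread_id"])
    return restored


cache_list = [
    {"flag": DONE, "thread_id": "T1", "partition_id": "P0", "nv_id": "NV0"},
    {"flag": "first", "thread_id": "T2", "partition_id": "P1", "nv_id": "NV1"},
    {"flag": DONE, "thread_id": "T3", "partition_id": "P2", "nv_id": "NV2"},
]
nv = {"NV0": b"d0", "NV1": b"d1", "NV2": b"d2"}
vp = {}
assert restore_completed(cache_list, vp, nv) == ["T1", "T3"]
assert vp == {"P0": b"d0", "P2": b"d2"}
```

T2's entry still carries the first identifier, so its data stays in nonvolatile storage until its long time-consuming operation also completes.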
Optionally, fig. 14 is a schematic structural diagram of a cache manager according to another embodiment of the present invention, as shown in fig. 14, on the basis of fig. 11, the cache manager 100 further includes:
a third determining module 1008, configured to determine whether the write policy of the second thread is a write-back policy;
a third writing module 1009, configured to, when the write policy is a write-back policy, write the data with the modified tag in the first volatile cache partition into memory, or into a memory whose cache level is lower than that of the volatile memory.
Optionally, the capacity of the first volatile cache partition is greater than or equal to the capacity of the first non-volatile cache partition.
Optionally, the information related to the first data further includes: a third flag indicating that the first data on the nonvolatile memory has not been written to the first volatile cache partition, wherein the cache manager shown in fig. 11 further includes: the second changing module 10010 is configured to change the third identifier in the information related to the first data to a fourth identifier, where the fourth identifier is used to indicate that the first data on the nonvolatile memory has been written to the first volatile cache partition.
Optionally, the long time consuming operation is a data loss operation, and the cache manager shown in fig. 11 further includes: a receiving module 10011, configured to receive lost data sent by the first thread;
the fourth writing module 10012 is configured to write the missing data to the first volatile cache partition.
Optionally, the first data includes: a first data block and a second data block, wherein a data portion of the first data block is independent of the first thread, and a data portion of the second data block is associated with the first thread, the first writing module 1003 is further configured to: writing the contents of the valid data bits in the first data block and the second data block to the non-volatile memory;
the cache manager of any of fig. 11-14 further includes a flush module (none of fig. 11-14 shown) for flushing the contents of all valid data bits within the first volatile cache partition.
Optionally, the long time-consuming operation is a preset operation whose operation duration is greater than a preset duration threshold, and the preset operation includes at least one of: a data loss operation, an operation of accessing an input/output device, and a sleep operation.
In summary, in the cache manager provided in the embodiments of the present invention, during the period when the first thread occupies the first volatile cache partition, other threads are not allowed to access the first volatile cache partition, so that when the first thread accesses the first volatile cache partition, other threads cannot access the first volatile cache partition, and data of different threads are prevented from being polluted by each other. When the first thread executes the long time-consuming operation, the first writing module writes the first data into the nonvolatile memory, backups the first data, and releases the occupation of the first thread on the first volatile cache partition, that is, when the first thread executes the long time-consuming operation, the first volatile cache partition can be accessed by other threads, so that the cache utilization rate of the terminal can be improved.
It should be noted that the method embodiments and the corresponding apparatus embodiments provided in the present application may refer to each other, which is not limited in the present application.
The sequence of the steps of the cache management method provided by the present application can be appropriately adjusted, and the steps can be correspondingly increased or decreased according to the situation, and any method that can be easily conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application, and therefore, the details are not repeated.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (25)

1. A method of cache management, wherein a shared cache comprises a volatile memory and a non-volatile memory, the volatile memory comprising at least two volatile cache partitions, the method comprising:
allocating a first volatile cache partition to a first thread, the first volatile cache partition having first data associated with the first thread stored thereon, the first volatile cache partition being one of the at least two volatile cache partitions, and not allowing other threads to access the first volatile cache partition while the first thread occupies the first volatile cache partition;
judging whether the first thread needs to execute a long time-consuming operation, wherein the long time-consuming operation refers to an operation whose operation duration is greater than a preset time threshold, and the first thread does not access the first volatile cache partition during execution of the long time-consuming operation;
and if the first thread needs to execute long-time consuming operation, writing the first data in the first volatile cache partition into the nonvolatile memory, and releasing the first thread from occupying the first volatile cache partition.
2. The method of claim 1, wherein each of the volatile cache partitions locks one thread, and wherein the threads locked by any two of the volatile cache partitions are different, wherein each of the volatile cache partitions does not allow access by unlocked threads,
the allocating the first volatile cache partition to the first thread comprises: setting the first volatile cache partition to lock the first thread;
the releasing the occupation of the first volatile cache partition by the first thread comprises: releasing the locking relationship between the first volatile cache partition and the first thread; and/or setting the first volatile cache partition to lock a second thread that is to access the volatile memory.
3. The method of claim 2, wherein the non-volatile memory comprises at least two non-volatile cache partitions, the at least two volatile cache partitions are coupled to the at least two non-volatile cache partitions one-to-one, and the writing the first data in the first volatile cache partition to the non-volatile memory comprises:
writing the first data to a first non-volatile cache partition coupled with the first volatile cache partition.
4. The method of claim 3, further comprising:
recording relevant information of the first data in the process of writing the first data into the first nonvolatile cache partition, wherein the relevant information of the first data comprises: an identification of the first volatile cache partition, a non-volatile storage identification, and an identification of the first thread, the non-volatile storage identification to indicate a storage location of the first data within the non-volatile memory;
after the first thread finishes the long time consuming operation, the first volatile cache partition is allocated to the first thread according to the relevant information of the first data, and the first data in the first non-volatile cache partition is written into the first volatile cache partition.
5. The method of claim 4, wherein the allocating the first volatile cache partition to the first thread and writing the first data in the first non-volatile cache partition to the first volatile cache partition according to the information about the first data comprises:
setting the first volatile cache partition indicated by the identifier of the first volatile cache partition in the related information of the first data, and locking the first thread indicated by the identifier of the first thread;
writing first data in a first nonvolatile cache partition indicated by the first nonvolatile storage identifier in the related information of the first data into the first volatile cache partition indicated by the identifier of the first volatile cache partition;
and in the related information indicating the first data, the first thread indicated by the identification of the first thread continues to access the first volatile cache partition indicated by the identification of the first volatile cache partition.
6. The method of claim 5,
before the allocating the first volatile cache partition to the first thread according to the information related to the first data and writing the first data in the first non-volatile cache partition into the first volatile cache partition, the method further includes:
determining whether the first volatile cache partition is accessed;
the allocating the first volatile cache partition to the first thread according to the relevant information of the first data, and writing the first data in the first non-volatile cache partition into the first volatile cache partition, includes:
when the first volatile cache partition is not accessed, the first volatile cache partition is allocated to the first thread according to the relevant information of the first data, and the first data in the first non-volatile cache partition is written into the first volatile cache partition.
7. The method of claim 6, wherein the method is used in a cache manager, wherein the first non-volatile cache partition comprises a plurality of non-volatile cache subdivisions, each of which has a capacity greater than or equal to a capacity of the first volatile cache partition,
the writing the first data to a first non-volatile cache partition coupled with the first volatile cache partition comprises: writing the first data to a free first non-volatile cache sub-area of the plurality of non-volatile cache sub-areas, wherein, before the first data is written to the first non-volatile cache partition coupled to the first volatile cache partition, none of the related information of data recorded by the cache manager comprises a non-volatile storage identifier indicating the free non-volatile cache sub-area, and the non-volatile storage identifier in the related information of the first data indicates the first non-volatile cache sub-area;
the recording the related information of the first data comprises: recording the related information of the first data in a preset cache list, wherein the related information of the first data further comprises: a first identifier used to indicate that the long time-consuming operation has not been completed, and the preset cache list is used to record related information of data written into the non-volatile memory;
the method further comprises: after the first thread finishes executing the long time-consuming operation, changing, in the preset cache list, the first identifier in the related information that contains the identifier of the first thread to a second identifier, wherein the second identifier is used to indicate that the long time-consuming operation has been completed;
the allocating the first volatile cache partition to the first thread according to the related information of the first data, and writing the first data in the first non-volatile cache partition into the first volatile cache partition, comprises: for each entry of related information in the preset cache list that contains the second identifier, sequentially allocating the volatile cache partition indicated by the identifier of the volatile cache partition in that entry to the thread indicated by the identifier of the thread in that entry, and writing the data at the storage location indicated by the non-volatile storage identifier in that entry into the volatile cache partition indicated by the identifier of the volatile cache partition in that entry.
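The bookkeeping described in claims 6 and 7 can be sketched as a small simulation. This is a minimal illustrative sketch, not the patent's implementation; all names (`PresetCacheList`, `CacheEntry`, `PENDING`, `DONE`) are assumptions introduced here:

```python
# Sketch of the preset cache list: each entry records the volatile partition,
# the non-volatile sub-area chosen at write time, the thread, and a status
# identifier; entries flip from the "first identifier" (operation pending) to
# the "second identifier" (operation done), and only "done" entries are
# eligible for restoration. All names are illustrative, not from the patent.
from dataclasses import dataclass

PENDING = 1  # "first identifier": long time-consuming operation not completed
DONE = 2     # "second identifier": long time-consuming operation completed

@dataclass
class CacheEntry:
    partition_id: int  # identifier of the volatile cache partition
    subarea_id: int    # non-volatile storage identifier (free sub-area)
    thread_id: int     # identifier of the thread
    status: int        # PENDING or DONE

class PresetCacheList:
    def __init__(self, num_subareas):
        self.entries = []
        self.num_subareas = num_subareas

    def free_subarea(self):
        # A sub-area is free when no recorded entry's storage identifier
        # points at it (the "free" test described in claim 7).
        used = {e.subarea_id for e in self.entries}
        return next(i for i in range(self.num_subareas) if i not in used)

    def record(self, partition_id, thread_id):
        sub = self.free_subarea()
        self.entries.append(CacheEntry(partition_id, sub, thread_id, PENDING))
        return sub

    def mark_done(self, thread_id):
        # Change the first identifier to the second identifier for the
        # entry that contains this thread's identifier.
        for e in self.entries:
            if e.thread_id == thread_id:
                e.status = DONE

    def restorable(self):
        # Entries whose data may now be written back from the non-volatile
        # sub-area to the volatile partition.
        return [e for e in self.entries if e.status == DONE]
```

In this sketch, two concurrent writes land in different free sub-areas, and only the thread whose operation has completed shows up as restorable.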
8. The method as claimed in claim 4 or 5, wherein before said allocating the first volatile cache partition to the first thread according to the information related to the first data and writing the first data in the first non-volatile cache partition to the first volatile cache partition, the method further comprises:
judging whether the write policy of the second thread is a write-back policy;
if the write policy is a write-back policy, writing the data carrying a modified tag in the first volatile cache partition into a memory, or into a storage whose cache level is lower than that of the volatile memory.
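The write-back check in claim 8 can be sketched as follows. This is an illustrative simulation under assumed names (`release_partition`, a dict-based partition); the rationale is that only under a write-back policy can the cache hold data newer than the backing memory, so only then must modified lines be flushed before the partition is handed over:

```python
# Sketch of claim 8: before the partition changes hands, flush the second
# thread's dirty ("modified") lines to a lower-level memory, but only when
# the write policy is write-back. Names are illustrative.
WRITE_BACK = "write-back"
WRITE_THROUGH = "write-through"

def release_partition(partition, write_policy, lower_level):
    """partition: dict mapping line address -> (data, modified_flag)."""
    if write_policy == WRITE_BACK:
        # Under write-through the lower level is already up to date,
        # so nothing needs to be written out.
        for addr, (data, modified) in partition.items():
            if modified:
                lower_level[addr] = data
    partition.clear()
```

A line with the modified tag reaches the lower level; a clean line does not.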
9. The method of claim 5, wherein the related information of the first data further comprises: a third identifier used to indicate that the first data on the non-volatile memory has not been written to the first volatile cache partition,
before the first thread indicated by the identifier of the first thread in the related information of the first data continues to access the first volatile cache partition indicated by the identifier of the first volatile cache partition, the method further comprises:
changing the third identifier in the related information of the first data to a fourth identifier, wherein the fourth identifier is used to indicate that the first data on the non-volatile memory has been written to the first volatile cache partition.
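The third/fourth-identifier flag in claim 9 amounts to a guard that makes the restore happen at most once. A minimal sketch, with illustrative names (`restore_once`, `NOT_RESTORED`, `RESTORED`) that are not from the patent:

```python
# Sketch of claim 9: the third identifier marks data still only on the
# non-volatile memory; flipping it to the fourth identifier records that the
# data has been copied back, so a repeated restore becomes a no-op.
NOT_RESTORED = 3  # "third identifier"
RESTORED = 4      # "fourth identifier"

def restore_once(entry, nvm_data, partition):
    """entry: dict with a 'flag' field; returns True if a copy happened."""
    if entry["flag"] != NOT_RESTORED:
        return False
    partition.extend(nvm_data)  # write the data back into the partition
    entry["flag"] = RESTORED    # third identifier -> fourth identifier
    return True
```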
10. The method of claim 5, wherein the long time-consuming operation is a data loss operation, and wherein before the first thread indicated by the identifier of the first thread in the related information of the first data continues to access the first volatile cache partition indicated by the identifier of the first volatile cache partition, the method further comprises:
receiving lost data sent by the first thread;
writing the missing data to the first volatile cache partition.
11. The method according to any one of claims 1 to 7,
the first data includes: a first data block and a second data block, wherein a data portion in the first data block is independent of the first thread and a data portion in the second data block is dependent on the first thread,
the writing the first data in the first volatile cache partition to the non-volatile memory comprises:
writing the contents of the valid data bits in the first data block and the second data block to the non-volatile memory;
after the writing the first data in the first volatile cache partition to the non-volatile memory, the method further comprises:
clearing the contents of all valid data bits within the first volatile cache partition.
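The save-and-clear sequence in claim 11 can be sketched as a two-pass walk over the partition: copy out only cells whose valid bit is set, then invalidate every cell. The block layout and names (`save_and_clear`, `(valid, value)` cells) are illustrative assumptions, not the patent's data format:

```python
# Sketch of claim 11: only the contents of valid data bits in the data blocks
# are written to non-volatile memory; afterwards all valid bits in the
# volatile partition are cleared.
def save_and_clear(partition, nvm):
    """partition: list of blocks, each a list of (valid, value) cells;
    nvm: dict keyed by (block_index, cell_index)."""
    for b, block in enumerate(partition):
        for i, (valid, value) in enumerate(block):
            if valid:
                nvm[(b, i)] = value          # copy only valid cells out
    for block in partition:
        for i in range(len(block)):
            block[i] = (False, block[i][1])  # clear every valid bit
```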
12. A cache manager, wherein a shared cache comprises volatile memory and non-volatile memory, wherein the volatile memory comprises at least two volatile cache partitions, the cache manager comprising:
an allocation module, configured to allocate a first volatile cache partition to a first thread, wherein the first volatile cache partition is used to store first data related to the first thread, other threads are not allowed to access the first volatile cache partition while the first thread occupies it, and the first volatile cache partition is any one of the at least two volatile cache partitions;
a first judging module, configured to judge whether the first thread needs to execute a long time-consuming operation, wherein a long time-consuming operation is an operation whose duration exceeds a preset time threshold, and the first thread does not access the first volatile cache partition while executing the long time-consuming operation;
a first write module, configured to, when the first thread needs to execute a long time-consuming operation, write the first data in the first volatile cache partition into the non-volatile memory and release the first thread's occupation of the first volatile cache partition.
13. The cache manager of claim 12, wherein each volatile cache partition locks one thread, any two volatile cache partitions lock different threads, and each volatile cache partition disallows access by threads it does not lock,
the allocation module is further configured to: set the first volatile cache partition to lock the first thread;
the first write module is further configured to: release the locking relationship between the first volatile cache partition and the first thread; and/or set the first volatile cache partition to lock a second thread that is to access the volatile memory.
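The locking relationship in claims 12 and 13 is an exclusive partition-to-thread binding: a partition locks at most one thread, no two partitions lock the same thread, and access by an unlocked thread is rejected. A minimal sketch under assumed names (`PartitionedCache`, `owner`), not the patent's implementation:

```python
# Sketch of the claims 12-13 locking relationship between volatile cache
# partitions and threads. Names are illustrative.
class PartitionedCache:
    def __init__(self, num_partitions):
        self.owner = [None] * num_partitions  # partition -> locked thread

    def lock(self, partition, thread):
        # Any two partitions must lock different threads.
        assert thread not in self.owner
        self.owner[partition] = thread

    def unlock(self, partition):
        # Release the locking relationship (e.g. when the thread starts a
        # long time-consuming operation).
        self.owner[partition] = None

    def access(self, partition, thread):
        # Access is disallowed for threads the partition does not lock.
        return self.owner[partition] == thread
```

After `unlock`, the same partition can be set to lock a second thread that is to access the volatile memory, matching the "and/or" branch of claim 13.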
14. The cache manager of claim 13, wherein the non-volatile memory comprises at least two non-volatile cache partitions, the at least two volatile cache partitions are one-to-one coupled with the at least two non-volatile cache partitions, and wherein the first write module is further configured to:
writing the first data to a first non-volatile cache partition coupled with the first volatile cache partition.
15. The cache manager of claim 14, wherein the cache manager further comprises:
a recording module, configured to record related information of the first data in a process of writing the first data into the first non-volatile cache partition, wherein the related information of the first data comprises: an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, the non-volatile storage identifier being used to indicate a storage location of the first data within the non-volatile memory;
a second write module, configured to, after the first thread finishes the long time-consuming operation, allocate the first volatile cache partition to the first thread according to the related information of the first data, and write the first data in the first non-volatile cache partition into the first volatile cache partition.
16. The cache manager of claim 15, wherein the second write module is further configured to:
set the first volatile cache partition indicated by the identifier of the first volatile cache partition in the related information of the first data to lock the first thread indicated by the identifier of the first thread;
write the first data in the first non-volatile cache partition indicated by the non-volatile storage identifier in the related information of the first data into the first volatile cache partition indicated by the identifier of the first volatile cache partition;
allow the first thread indicated by the identifier of the first thread in the related information of the first data to continue accessing the first volatile cache partition indicated by the identifier of the first volatile cache partition.
17. The cache manager of claim 16,
the cache manager further comprises:
a second judging module, configured to judge whether the first volatile cache partition is being accessed;
the second write module is further to:
when the first volatile cache partition is not accessed, the first volatile cache partition is allocated to the first thread according to the relevant information of the first data, and the first data in the first non-volatile cache partition is written into the first volatile cache partition.
18. The cache manager of claim 17, wherein the first non-volatile cache partition comprises a plurality of non-volatile cache sub-areas, and each non-volatile cache sub-area has a capacity greater than or equal to the capacity of the first volatile cache partition,
the first write module is further configured to: write the first data to a first non-volatile cache sub-area that is free among the plurality of non-volatile cache sub-areas, wherein a non-volatile cache sub-area is free when, before the first data is written to the first non-volatile cache partition, none of the related information of data recorded by the cache manager includes a non-volatile storage identifier indicating that sub-area; and the non-volatile storage identifier in the related information of the first data indicates the first non-volatile cache sub-area;
the recording module is further configured to: record the related information of the first data in a preset cache list, wherein the related information of the first data further comprises: a first identifier used to indicate that the long time-consuming operation has not been completed, and the preset cache list is used to record related information of data written into the non-volatile memory;
the cache manager further comprises a first changing module, configured to, after the first thread finishes executing the long time-consuming operation, change, in the preset cache list, the first identifier in the related information that contains the identifier of the first thread to a second identifier, wherein the second identifier is used to indicate that the long time-consuming operation has been completed;
the second write module is further configured to: for each entry of related information in the preset cache list that contains the second identifier, sequentially allocate the volatile cache partition indicated by the identifier of the volatile cache partition in that entry to the thread indicated by the identifier of the thread in that entry, and write the data at the storage location indicated by the non-volatile storage identifier in that entry into the volatile cache partition indicated by the identifier of the volatile cache partition in that entry.
19. The cache manager according to claim 15 or 16, wherein the cache manager further comprises:
a third judging module, configured to judge whether the write policy of the second thread is a write-back policy;
a third write module, configured to, when the write policy is a write-back policy, write the data carrying a modified tag in the first volatile cache partition into a memory, or into a storage whose cache level is lower than that of the volatile memory.
20. The cache manager of claim 16, wherein the related information of the first data further comprises: a third identifier used to indicate that the first data on the non-volatile memory has not been written to the first volatile cache partition, the cache manager further comprising:
a second changing module, configured to change the third identifier in the related information of the first data to a fourth identifier, wherein the fourth identifier is used to indicate that the first data on the non-volatile memory has been written to the first volatile cache partition.
21. The cache manager of claim 16, wherein the long time-consuming operation is a data loss operation, the cache manager further comprising:
the receiving module is used for receiving the lost data sent by the first thread;
a fourth write module to write the missing data to the first volatile cache partition.
22. The cache manager according to any of claims 12 to 18,
the first data includes: a first data block and a second data block, wherein a data portion in the first data block is independent of the first thread and a data portion in the second data block is dependent on the first thread,
the first write module is further to: writing the contents of the valid data bits in the first data block and the second data block to the non-volatile memory;
the cache manager further includes a purge module to purge contents of all valid data bits within the first volatile cache partition.
23. A shared cache, the shared cache comprising: a cache manager, volatile memory and non-volatile memory,
the cache manager is as claimed in any one of claims 12 to 22; the volatile memory includes at least two volatile cache partitions.
24. The shared cache of claim 23,
the non-volatile memory includes at least two non-volatile cache partitions, the at least two volatile cache partitions being coupled to the at least two non-volatile cache partitions one-to-one.
25. A terminal, characterized in that the terminal comprises: a processor and a shared cache, wherein
the processor comprises at least two threads;
the shared cache is the shared cache of claim 23 or 24.
CN201780022195.1A 2017-02-28 2017-02-28 Cache management method, cache manager, shared cache and terminal Active CN109196473B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/075132 WO2018157278A1 (en) 2017-02-28 2017-02-28 Cache management method, cache manager, shared cache and terminal

Publications (2)

Publication Number Publication Date
CN109196473A CN109196473A (en) 2019-01-11
CN109196473B true CN109196473B (en) 2021-10-01

Family

ID=63369730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780022195.1A Active CN109196473B (en) 2017-02-28 2017-02-28 Cache management method, cache manager, shared cache and terminal

Country Status (2)

Country Link
CN (1) CN109196473B (en)
WO (1) WO2018157278A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941449A (en) * 2019-11-15 2020-03-31 新华三半导体技术有限公司 Cache block processing method and device and processor chip
CN113596038B (en) * 2021-08-02 2023-04-07 武汉绿色网络信息服务有限责任公司 Data packet parsing method and server
CN113849455B (en) * 2021-09-28 2023-09-29 致真存储(北京)科技有限公司 MCU based on hybrid memory and data caching method
CN114629748B (en) * 2022-04-01 2023-08-15 日立楼宇技术(广州)有限公司 Building data processing method, building edge gateway and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728959B1 (en) * 1995-08-08 2004-04-27 Novell, Inc. Method and apparatus for strong affinity multiprocessor scheduling
CN101499028A (en) * 2009-03-18 2009-08-05 成都市华为赛门铁克科技有限公司 Data protection method and apparatus based on non-volatile memory
CN101697198A (en) * 2009-10-28 2010-04-21 浪潮电子信息产业股份有限公司 Method for dynamically regulating number of active processors in single computer system
CN103744623A (en) * 2014-01-10 2014-04-23 浪潮电子信息产业股份有限公司 Method for realizing intelligent degradation of data cached in SSD (Solid State Disk) of storage system
CN104881324A (en) * 2014-09-28 2015-09-02 北京匡恩网络科技有限责任公司 Memory management method in multi-thread environment


Also Published As

Publication number Publication date
WO2018157278A1 (en) 2018-09-07
CN109196473A (en) 2019-01-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant