WO2018157278A1 - Cache management method, cache manager, shared cache and terminal - Google Patents

Cache management method, cache manager, shared cache and terminal

Info

Publication number
WO2018157278A1
WO2018157278A1 (PCT application No. PCT/CN2017/075132)
Authority
WO
WIPO (PCT)
Prior art keywords
volatile
data
cache
thread
partition
Prior art date
Application number
PCT/CN2017/075132
Other languages
English (en)
French (fr)
Inventor
宋昆鹏
李艳华
李扬
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201780022195.1A (CN109196473B)
Priority to PCT/CN2017/075132
Publication of WO2018157278A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0844: Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F 12/0855: Overlapped cache accessing, e.g. pipeline
    • G06F 12/0857: Overlapped cache accessing, e.g. pipeline by multiple requestors
    • G06F 12/0806: Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0842: Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
    • G06F 12/0846: Cache with multiple tag or data arrays being simultaneously accessible
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • the present application relates to the field of storage technologies, and in particular, to a cache management method, a cache manager, a shared cache, and a terminal.
  • a processor and a volatile memory are provided in the terminal.
  • the processor includes a plurality of processor cores, and each processor core includes a plurality of threads. Each thread accesses data in the volatile memory, for example by writing data into the volatile memory. Multiple threads in the same processor core can share the volatile memory, enabling the multiple threads to access data in the volatile memory simultaneously.
  • the shared volatile memory is divided into a number of volatile cache partitions, and different threads are set to correspond to different volatile cache partitions; that is, one thread can only access the volatile cache partition corresponding to that thread.
  • the volatile cache partition corresponding to a thread is prohibited from being accessed by other threads, so the data accessed by the thread in its corresponding volatile cache partition is not replaced by the data of other threads.
  • however, when a thread performs an operation during which it does not access its volatile cache partition, that partition is still prohibited from being accessed by other threads and therefore cannot be effectively utilized; as a result, the cache utilization of the terminal is low.
  • the present application provides a cache management method, a cache manager, a shared cache, and a terminal.
  • the technical solution is as follows:
  • in a first aspect, a cache management method is provided. The shared cache includes a volatile memory and a non-volatile memory, and the volatile memory includes at least two volatile cache partitions. The method includes: allocating a first volatile cache partition to a first thread, where the first volatile cache partition stores first data related to the first thread, other threads are not allowed to access the first volatile cache partition while the first thread occupies it, and the first volatile cache partition is any one of the at least two volatile cache partitions; determining whether the first thread needs to perform a long time-consuming operation, where a long time-consuming operation is an operation whose duration is greater than a preset duration threshold and during which the first thread does not access the first volatile cache partition; and, if the first thread needs to perform a long time-consuming operation, writing the first data in the first volatile cache partition into the non-volatile memory and releasing the occupation of the first volatile cache partition by the first thread.
  • the cache management method can be used by a cache manager. Because other threads are not allowed to access the first volatile cache partition while the first thread occupies it, other threads cannot access the first volatile cache partition while the first thread is accessing it, which prevents the data of different threads from polluting each other.
  • when the first thread performs a long time-consuming operation, the first data is written to the non-volatile memory to back it up, and the occupation of the first volatile cache partition by the first thread is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads. Therefore, the cache utilization of the terminal can be improved.
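  • the overall sequence described above can be illustrated with a minimal C sketch. All structure and function names below (volatile_partition, nvm_partition, begin_long_operation, and so on), the partition size, and the single-partition setup are illustrative assumptions rather than part of the claimed method; the sketch only shows the order of operations: allocate a partition to a thread, back the partition's data up to the non-volatile memory when the thread starts a long time-consuming operation, release the partition so another thread can use it, and restore the data when the operation completes.

      #include <stdio.h>
      #include <string.h>

      #define PARTITION_SIZE 64          /* bytes per volatile cache partition (illustrative) */
      #define NO_THREAD      (-1)

      /* One volatile cache partition, locked to at most one hardware thread. */
      struct volatile_partition {
          int  owner_thread;             /* thread the partition is locked to, or NO_THREAD  */
          char data[PARTITION_SIZE];
      };

      /* Non-volatile backup area coupled to the partition. */
      struct nvm_partition {
          int  backed_up_thread;         /* thread whose data is stored here, or NO_THREAD   */
          char data[PARTITION_SIZE];
      };

      static void allocate_partition(struct volatile_partition *vp, int thread) {
          vp->owner_thread = thread;     /* only this thread may access the partition now    */
      }

      static void begin_long_operation(struct volatile_partition *vp, struct nvm_partition *nvm) {
          /* Back up the thread's data, then release the partition for other threads. */
          memcpy(nvm->data, vp->data, PARTITION_SIZE);
          nvm->backed_up_thread = vp->owner_thread;
          vp->owner_thread = NO_THREAD;
      }

      static void end_long_operation(struct volatile_partition *vp, struct nvm_partition *nvm) {
          /* Restore the backed-up data and give the partition back to the original thread. */
          memcpy(vp->data, nvm->data, PARTITION_SIZE);
          vp->owner_thread = nvm->backed_up_thread;
          nvm->backed_up_thread = NO_THREAD;
      }

      int main(void) {
          struct volatile_partition vp  = { NO_THREAD, {0} };
          struct nvm_partition      nvm = { NO_THREAD, {0} };

          allocate_partition(&vp, /*thread=*/1);
          strcpy(vp.data, "data of thread 1");

          begin_long_operation(&vp, &nvm);        /* thread 1 starts e.g. a cache-miss refill  */
          allocate_partition(&vp, /*thread=*/2);  /* partition can now serve another thread    */

          allocate_partition(&vp, NO_THREAD);     /* thread 2 done; partition idle again       */
          end_long_operation(&vp, &nvm);          /* thread 1 resumes with its data restored   */
          printf("owner=%d data=%s\n", vp.owner_thread, vp.data);
          return 0;
      }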
  • the method further includes: after the first thread finishes executing the long time-consuming operation, allocating a second volatile cache partition to the first thread and writing the first data in the non-volatile memory to the second volatile cache partition, where the second volatile cache partition is the first volatile cache partition or another volatile cache partition other than the first volatile cache partition. That is, after the first thread finishes the long time-consuming operation, the cache manager may restore the first data from the non-volatile memory either to the first volatile cache partition or to a second volatile cache partition different from the first volatile cache partition. Further, after restoring the first data to the second volatile cache partition, the cache manager may instruct the first thread to access the second volatile cache partition and continue to access the first data there.
  • the non-volatile memory includes at least two non-volatile cache partitions. Writing the first data in the first volatile cache partition to the non-volatile memory includes writing the first data in the first volatile cache partition to a first non-volatile cache partition in the non-volatile memory, where the first non-volatile cache partition is any one of the at least two non-volatile cache partitions. The method further includes: recording an association relationship between the first thread and the first non-volatile cache partition.
  • writing the first data in the non-volatile memory to the second volatile cache partition includes: writing the first data in the first non-volatile cache partition to the second volatile cache partition according to the association relationship between the first thread and the first non-volatile cache partition.
  • in this way, the cache manager can write the first data from the first volatile cache partition to any partition in the non-volatile memory and, when writing to a certain partition, record the association between the first thread and the first non-volatile cache partition, so that when the first data is restored from the non-volatile memory to the volatile memory, the first data and the first thread that needs to use the first data can be determined.
  • each of the volatile cache partitions is locked to one thread, any two of the volatile cache partitions are locked to different threads, and each volatile cache partition is not allowed to be accessed by a thread it is not locked to.
  • allocating the first volatile cache partition to the first thread includes: setting the first volatile cache partition to lock the first thread. Releasing the occupation of the first volatile cache partition by the first thread includes: releasing the locking relationship between the first volatile cache partition and the first thread; and/or setting the first volatile cache partition to lock a second thread that is to access the volatile memory.
  • that is, when the first thread needs to perform a long time-consuming operation, the locking relationship between the first volatile cache partition and the first thread may be released directly; after the locking relationship is released, the first volatile cache partition may be locked to the second thread, and the second thread is instructed to access the first volatile cache partition.
  • alternatively, the first volatile cache partition may be directly set to lock the second thread, so that the locking relationship with the first thread is overwritten, and the second thread is instructed to access the first volatile cache partition.
  • in either case, the first volatile cache partition becomes able to be accessed by a second thread different from the first thread, and after the first volatile cache partition is locked to the second thread, the second thread can access the first volatile cache partition, thereby improving the cache utilization of the terminal.
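  • as a hedged illustration of the two release options above, the following C fragment models the lock record of a single volatile cache partition; the names release_lock and overwrite_lock are invented for this sketch and are not the patent's terminology.

      #include <stdio.h>

      #define NO_THREAD (-1)

      /* Minimal lock record for one volatile cache partition (names are illustrative). */
      struct partition_lock {
          int locked_thread;   /* identifier of the thread the partition is locked to */
      };

      /* Option 1: simply release the locking relationship with the first thread. */
      static void release_lock(struct partition_lock *p) {
          p->locked_thread = NO_THREAD;
      }

      /* Option 2: overwrite the lock so the partition is directly locked to the second thread. */
      static void overwrite_lock(struct partition_lock *p, int second_thread) {
          p->locked_thread = second_thread;   /* relationship with the first thread is replaced */
      }

      int main(void) {
          struct partition_lock p = { .locked_thread = 1 };

          release_lock(&p);                /* partition becomes free for any thread       */
          overwrite_lock(&p, 2);           /* or: hand it straight to the second thread   */
          printf("partition now locked to thread %d\n", p.locked_thread);
          return 0;
      }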
  • the non-volatile memory includes at least two non-volatile cache partitions, and the at least two volatile cache partitions are coupled to the at least two non-volatile cache partitions one to one.
  • writing the first data in the first volatile cache partition to the non-volatile memory includes writing the first data to a first non-volatile cache partition coupled to the first volatile cache partition.
  • that is, the non-volatile memory is also divided into a plurality of non-volatile cache partitions, so that the data of multiple threads backed up to the non-volatile memory does not pollute each other.
  • the method further includes: recording related information of the first data in the process of writing the first data to the first non-volatile cache partition, where the related information of the first data includes an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, and the non-volatile storage identifier is used to indicate the storage location of the first data in the non-volatile memory; and, after the first thread finishes the long time-consuming operation, allocating the first volatile cache partition to the first thread according to the related information of the first data, and writing the first data in the first non-volatile cache partition to the first volatile cache partition.
  • that is, in the process of writing the first data to the non-volatile memory, the cache manager needs to record the related information of the first data, and in subsequent steps the first data is restored according to that related information.
  • allocating the first volatile cache partition to the first thread according to the related information of the first data, and writing the first data in the first non-volatile cache partition to the first volatile cache partition, includes: setting the first volatile cache partition indicated by the identifier of the first volatile cache partition in the related information of the first data to lock the first thread indicated by the identifier of the first thread; writing the first data in the first non-volatile cache partition indicated by the non-volatile storage identifier in the related information of the first data to the first volatile cache partition indicated by the identifier of the first volatile cache partition; and indicating, according to the related information of the first data, that the first thread indicated by the identifier of the first thread is to continue to access the first volatile cache partition indicated by the identifier of the first volatile cache partition.
  • the method further includes: determining whether the first volatile cache partition is being accessed. Allocating the first volatile cache partition to the first thread according to the related information of the first data, and writing the first data in the first non-volatile cache partition to the first volatile cache partition, includes: when the first volatile cache partition is not being accessed, allocating the first volatile cache partition to the first thread according to the related information of the first data, and writing the first data in the first non-volatile cache partition to the first volatile cache partition.
  • because the first volatile cache partition may be being accessed by another thread (such as the second thread), in order to prevent the data of that other thread from being lost, it is necessary to wait until the first volatile cache partition is in an idle state (that is, the first volatile cache partition is not being accessed) before the access of the first thread can be restored.
  • the method is used by a cache manager, and the first non-volatile cache partition includes a plurality of non-volatile cache sub-regions, each of which has a capacity greater than or equal to the capacity of the first volatile cache partition. Writing the first data to the first non-volatile cache partition coupled to the first volatile cache partition includes: writing the first data to a first non-volatile cache sub-region that is idle among the plurality of non-volatile cache sub-regions, where, before the first data is written to the first non-volatile cache partition, the related information of data recorded by the cache manager does not include a non-volatile storage identifier indicating that idle non-volatile cache sub-region, and the non-volatile storage identifier in the related information of the first data is used to indicate the first non-volatile cache sub-region.
  • recording the related information of the first data includes: recording the related information of the first data in a preset cache list, where the related information of the first data further includes a first identifier, the first identifier is used to indicate that the long time-consuming operation has not been completed, and the preset cache list is used to record related information of data written to the non-volatile memory.
  • the method further includes: after the first thread completes the long time-consuming operation, changing the first identifier in the related information that includes the identifier of the first thread in the preset cache list to a second identifier, where the second identifier is used to indicate that the long time-consuming operation has been completed.
  • then, according to each piece of related information that includes the second identifier in the preset cache list, the volatile cache partition indicated by the identifier of the volatile cache partition in that related information is allocated to the thread indicated by the identifier of the thread in that related information, and the data at the storage location indicated by the non-volatile storage identifier in that related information is written to the volatile cache partition indicated by the identifier of the volatile cache partition in that related information.
  • because the capacity of the first non-volatile cache partition is larger than that of the first volatile cache partition, the data of more threads that perform long time-consuming operations while occupying the first volatile cache partition can be written to the non-volatile memory.
  • optionally, the capacity of the first non-volatile cache partition is greater than or equal to the capacity of the first volatile cache partition.
  • the method further includes: determining whether the write policy of the second thread is a write-back policy; and, if the write policy is a write-back policy, writing the data with the modified tag in the first volatile cache partition to the main memory, or to a memory whose cache level is lower than the cache level of the volatile memory.
  • when the second thread writes data to the first volatile cache partition, the second thread can set the tag in each data block of that data to the modified tag, and the cache in the terminal includes multiple levels of memory.
  • therefore, before restoring the access of the first thread to the first volatile cache partition, in order to prevent the data of the second thread from being lost when that access is restored, the cache manager can further judge whether the write policy of the second thread is a write-back policy. If the write policy of the second thread is a write-back policy, the data with the modified tag in the first volatile cache partition is written to the main memory, or to a memory whose cache level is lower than the cache level of the volatile memory.
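  • the following C sketch illustrates the write-back check described above, under assumed names and a simplified cache-line layout: if the second thread used a write-back policy, every line it left with the modified tag is flushed to the main memory or a lower cache level before the partition is handed back to the first thread.

      #include <stdbool.h>
      #include <stdio.h>

      #define LINES_PER_PARTITION 4
      #define LINE_SIZE           16

      enum write_policy { WRITE_THROUGH, WRITE_BACK };

      /* One cache line of the volatile partition; field names are illustrative. */
      struct cache_line {
          bool     valid;
          bool     modified;               /* "modified" tag set by a write-back write       */
          unsigned tag;
          char     data[LINE_SIZE];
      };

      /* Stand-in for writing a line to main memory or a lower cache level. */
      static void write_to_lower_level(const struct cache_line *line) {
          printf("flushing modified line, tag=%u\n", line->tag);
      }

      /* Before restoring the first thread, flush the second thread's dirty lines if needed. */
      static void flush_if_write_back(struct cache_line part[], enum write_policy policy) {
          if (policy != WRITE_BACK)
              return;                       /* write-through data is already in lower levels */
          for (int i = 0; i < LINES_PER_PARTITION; i++) {
              if (part[i].valid && part[i].modified) {
                  write_to_lower_level(&part[i]);
                  part[i].modified = false;
              }
          }
      }

      int main(void) {
          struct cache_line partition[LINES_PER_PARTITION] = {0};
          partition[1].valid = true;
          partition[1].modified = true;
          partition[1].tag = 0x2a;

          flush_if_write_back(partition, WRITE_BACK);
          return 0;
      }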
  • the related information of the first data further includes a third identifier, where the third identifier is used to indicate that the first data in the non-volatile memory has not yet been written back to the first volatile cache partition.
  • after indicating, according to the related information of the first data, that the first thread indicated by the identifier of the first thread is to continue to access the first volatile cache partition, the method further includes: changing the third identifier in the related information of the first data to a fourth identifier, where the fourth identifier is used to indicate that the first data in the non-volatile memory has been written to the first volatile cache partition.
  • that is, the cache manager can change the third identifier in the related information of the first data in the preset cache list to the fourth identifier, and the fourth identifier is used to indicate that the first data in the non-volatile memory has been written to the first volatile cache partition.
  • in this way, before restoring the access of the first thread to the first volatile cache partition, the cache manager can determine, based on the fourth identifier in the related information of the first data in the preset cache list, that the first data has already been restored to the first volatile cache partition and that the thread indicated by the identifier of the thread is accessing the volatile cache partition indicated by the identifier of the volatile cache partition.
  • this prevents the cache manager from writing the first data to the first volatile cache partition again after the first data has already been restored to it.
  • the long time-consuming operation is a data loss operation
  • the method further includes: receiving the lost data sent by the first thread, and writing the lost data to the first volatile cache partition. That is, when the long time-consuming operation performed by the first thread is a data loss operation, before instructing the first thread to continue to access the first volatile cache partition, the cache manager can receive the lost data that the first thread found while performing the long time-consuming operation, and write the lost data to the first volatile cache partition, to ensure that the first thread can access the first volatile cache partition normally after its access is restored.
  • the first data includes a first data block and a second data block, where the data portion in the first data block is unrelated to the first thread and the data portion in the second data block is related to the first thread. Writing the first data in the first volatile cache partition to the non-volatile memory includes: writing the content of the valid data bits in the first data block, and the second data block, to the non-volatile memory. After the first data in the first volatile cache partition is written to the non-volatile memory, the method further includes: clearing the content of all valid data bits within the first volatile cache partition.
  • the data stored in the first volatile cache partition includes a plurality of data blocks, and each data block includes valid data bits, a tag, and a data portion.
  • the long time-consuming operation is a preset operation whose duration is greater than the preset duration threshold, where the preset operation includes at least one of a data loss operation, an operation of accessing an input/output device, and a sleep operation. That is, the step of backing up the first data to the first non-volatile cache sub-region is performed only when the first thread performs a long time-consuming operation and that long time-consuming operation is a preset operation; when the long time-consuming operation is not a preset operation, the step of backing up the first data to the first non-volatile cache sub-region is not performed.
  • in a second aspect, a cache manager is provided, comprising at least one module, where the at least one module is configured to implement the cache management method provided by the first aspect or any optional implementation of the first aspect.
  • in a third aspect, a cache manager is provided, comprising: at least one transmitting module, at least one receiving module, at least one processing module, at least one storage module, and at least one bus, where the storage module is connected to the processing module through the bus.
  • the processing module is configured to execute the instructions stored in the storage module; by executing the instructions, the processing module implements the cache management method provided by the first aspect or any possible implementation of the first aspect.
  • in a fourth aspect, a shared cache is provided, where the shared cache includes a cache manager, a volatile memory, and a non-volatile memory, and the cache manager is the cache manager according to the second aspect or the third aspect.
  • the volatile memory includes at least two volatile cache partitions.
  • the non-volatile memory includes at least two non-volatile cache partitions, and the at least two volatile cache partitions are coupled to the at least two non-volatile cache partitions one to one.
  • a fifth aspect provides a terminal, where the terminal includes: a processor and a shared cache, the processor includes at least two threads; and the shared cache is the shared cache according to the fourth aspect.
  • because other threads are not allowed to access the first volatile cache partition while the first thread occupies it, data of different threads does not pollute each other. And when the first thread performs a long time-consuming operation, the first data is written to the non-volatile memory to back it up, and the occupation of the first volatile cache partition by the first thread is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads. Therefore, the cache utilization of the terminal can be improved.
  • FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure
  • FIG. 2 is a schematic partial structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a shared cache provided by the related art;
  • FIG. 4 is a schematic structural diagram of another shared cache provided by the related art;
  • FIG. 5 is a schematic structural diagram of a cache manager according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart of a cache management method according to an embodiment of the present invention;
  • FIG. 7 is a flowchart of a method for a cache manager to restore access of a first thread to a first volatile cache partition according to an embodiment of the present invention
  • FIG. 8 is a flowchart of a method for restoring access of a first thread to a first volatile cache partition by another cache manager according to an embodiment of the present invention
  • FIG. 9 is a schematic structural diagram of a ferroelectric nonvolatile flip-flop provided by the related art.
  • FIG. 10 is a schematic structural diagram of a cache manager according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of another cache manager according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of still another cache manager according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of still another cache manager according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of a cache manager according to another embodiment of the present invention.
  • referring to FIG. 1 and FIG. 2, which are schematic structural diagrams of a terminal according to an embodiment of the present invention, the terminal 1 includes a processor 10, a cache (English: Cache) 11, and a main memory (English: Main Memory) 12.
  • the processor 10 is capable of accessing the cache 11 and the main memory 12, as well as the storage (English: storage) of the terminal.
  • the storage of the terminal is not shown in FIG. 1 and FIG. 2.
  • for example, the terminal is a computer, and the storage of the terminal is a hard disk.
  • the processor 10 includes at least one processor core 101 (in FIG. 1, the processor including two processor cores 101 is taken as an example). Each processor core 101 includes at least one thread 1011 (in FIG. 1, each processor core including two threads 1011 is taken as an example). It should be noted that each processor core 101 also includes a register, that the processor 10 includes at least two threads 1011 in total, and that the threads in the embodiments of the present invention are hardware threads.
  • the cache 11 is a multi-level cache. When multiple processor cores 101 share a certain level of cache, or when multiple threads 1011 in a certain processor core 101 share a certain level of cache (that is, access that level of cache simultaneously), that level of cache is referred to as shared cache A, and at least one shared cache A exists in the multi-level cache.
  • for example, the multi-level cache includes a level 1 cache, a level 2 cache, and a level 3 cache.
  • the level 1 cache (English: level 1 Cache; abbreviation: L1 Cache) is exclusively occupied by one processor core, and the level 2 cache (English: level 2 Cache; abbreviation: L2 Cache) is shared among multiple processor cores.
  • the level 3 cache (English: level 3 Cache; abbreviation: L3 Cache) is shared by all processor cores. In a multi-threaded processor, the L1 Cache can also be shared by multiple threads within the same processor core.
  • the cache 11 in the scenario used in the embodiments of the present invention includes the shared cache A, and the number of threads accessing the shared cache A is greater than or equal to two.
  • when the processor 10 includes one processor core 101, that processor core 101 includes a plurality of threads 1011; when each processor core 101 includes one thread 1011, the processor 10 includes at least two processor cores 101.
  • the shared cache A includes a cache manager 121, a volatile memory 122, and a non-volatile memory (Non-Volatile Memory; NVM) 123.
  • the cache manager 121 is coupled to the volatile memory 122, the non-volatile memory 123, and a plurality of threads sharing the shared cache in which the cache manager is located.
  • the cache manager 121 is configured to manage multiple threads to access the shared cache.
  • the volatile memory 122 is a static random access memory (English: Static Random Access Memory; abbreviation: SRAM), and the NVM 123 is a flash memory (English: Flash EEPROM; abbreviation: Flash), a phase change memory (English: Phase Change Memory; abbreviation: PCM), a spin transfer torque magnetoresistive random access memory (English: Spin Transfer Torque Magnetoresistive Random Access Memory; abbreviation: STT-MRAM), a ferroelectric random access memory (English: Ferroelectric Random Access Memory; abbreviation: FeRAM), or the like.
  • PCM is a non-volatile memory that utilizes the different conductivity characteristics of phase change materials in the crystalline and amorphous states.
  • STT-MRAM realizes magnetic storage by using the magnetic tunneling effect; it has the features of high density, short access time, low power consumption, and non-volatility. FeRAM is a non-volatile memory that utilizes the bistable polarization characteristics of ferroelectric thin films.
  • in the related art, the shared cache is divided into several volatile cache partitions, and different threads are set to correspond to different volatile cache partitions; that is, one thread can only access the volatile cache partition corresponding to that thread.
  • the volatile cache partition corresponding to a thread is prohibited from being accessed by other threads, so the data accessed by the thread in its corresponding volatile cache partition is not replaced by the data of other threads.
  • FIG. 3 is a schematic structural diagram of a shared cache provided by the related art, and FIG. 4 is a schematic structural diagram of another shared cache provided by the related art.
  • in FIG. 3, the shared cache B is divided into two volatile cache partitions B1 according to the ways (English: way) in the shared cache B: one volatile cache partition B1 includes the storage area corresponding to one way (way 1), and the other volatile cache partition B1 includes the storage areas corresponding to three ways (way 2, way 3, and way 4).
  • in FIG. 4, the shared cache B is divided into two volatile cache partitions B2 according to the rows (English: row) in the shared cache B: one volatile cache partition B2 includes the storage areas corresponding to row 1 to row m, and the other volatile cache partition B2 includes the storage areas corresponding to row m+1 to row x, where m is an integer greater than 1 and less than x, and row x is the last row in the shared cache B.
  • thread 1 can access the volatile cache partition B1 corresponding to one way in FIG. 3, or the volatile cache partition B2 corresponding to row 1 to row m in FIG. 4; thread 2 can access the volatile cache partition B1 corresponding to three ways in FIG. 3, or the volatile cache partition B2 corresponding to row m+1 to row x in FIG. 4.
  • however, when a certain thread performs an operation during which it does not access its corresponding volatile cache partition, that volatile cache partition is still prohibited from being accessed by other threads. This makes the volatile cache partition unusable during that period, so the cache utilization of the processor is low and the performance of the processor is poor.
  • the volatile memory 122 in the embodiment of the present invention is also divided into a plurality of volatile cache partitions 1221 (in FIG. 2, the volatile memory including two volatile cache partitions 1221 is taken as an example).
  • the plurality of volatile cache partitions 1221 are obtained by dividing the volatile memory according to its ways or rows.
  • each of the plurality of volatile cache partitions 1221 is locked to one thread, any two volatile cache partitions 1221 are locked to different threads, and the capacities of any two volatile cache partitions 1221 can be the same or different.
  • the non-volatile memory 123 in the embodiment of the present invention is also divided into a plurality of non-volatile cache partitions 1231 coupled to the volatile cache partitions (in FIG. 2, the non-volatile memory including two non-volatile cache partitions is taken as an example). Each volatile cache partition is coupled to one non-volatile cache partition, and the non-volatile cache partitions coupled to any two volatile cache partitions are different.
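  • the partitioning shown in FIG. 2 can be pictured, purely as an illustrative sketch with assumed sizes and counts, as nested C structures in which each volatile cache partition is coupled one to one with a non-volatile cache partition, and each non-volatile cache partition in turn contains several sub-regions whose capacity is at least that of the volatile cache partition.

      #include <stddef.h>
      #include <stdio.h>

      #define NUM_PARTITIONS     2     /* volatile cache partitions (FIG. 2 uses two)       */
      #define SUBREGIONS_PER_NVM 3     /* sub-regions per non-volatile partition (assumed)  */
      #define PARTITION_BYTES    64    /* illustrative partition capacity                   */

      struct volatile_partition {
          int  locked_thread;                      /* thread the partition is locked to     */
          char data[PARTITION_BYTES];
      };

      struct nvm_subregion {
          int  in_use;                             /* 1 if it currently holds a backup      */
          char data[PARTITION_BYTES];              /* capacity >= the volatile partition    */
      };

      struct nvm_partition {
          struct nvm_subregion sub[SUBREGIONS_PER_NVM];
      };

      /* Shared cache: volatile partition i is coupled to non-volatile partition i. */
      struct shared_cache {
          struct volatile_partition vol[NUM_PARTITIONS];
          struct nvm_partition      nvm[NUM_PARTITIONS];
      };

      int main(void) {
          struct shared_cache sc = {0};
          printf("volatile partition 0 is coupled to nvm partition 0 (%zu sub-regions)\n",
                 (size_t)SUBREGIONS_PER_NVM);
          (void)sc;
          return 0;
      }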
  • FIG. 5 is a schematic structural diagram of a cache manager 121 according to an embodiment of the present invention.
  • the cache manager 121 includes: at least one transmitting module 1211, at least one receiving module 1212, at least one processing module 1213, and at least one storage module 1214. And at least one bus 1215, the transmitting module, the receiving module, the processing module, and the storage module are connected by a bus.
  • the processing module 1213 is configured to execute an executable module, such as a computer program, stored in the storage module 1214.
  • the storage module 1214 stores a program 12141 that can be executed by the processing module 1213.
  • FIG. 6 is a flowchart of a cache management method according to an embodiment of the present invention. The cache management method is used in the cache manager 121 of FIG. 2 and can be implemented by the processing module 1213 in FIG. 5.
  • in the embodiment shown in FIG. 6, the first volatile cache partition is any one of the at least two volatile cache partitions in the volatile memory, and the first non-volatile cache partition in the non-volatile memory is coupled to the first volatile cache partition. The first non-volatile cache partition includes a plurality of non-volatile cache sub-regions, each of which has a capacity greater than or equal to the capacity of the first volatile cache partition; therefore, the capacity of the first non-volatile cache partition is greater than the capacity of the first volatile cache partition.
  • the cache management method includes:
  • Step 601 Assign the first volatile cache partition to the first thread.
  • the Cache Manager is used to manage access to the shared cache by multiple threads.
  • the shared cache includes volatile memory and non-volatile memory.
  • the volatile memory includes multiple volatile cache partitions. When there are many threads that need to access the volatile memory, the cache manager can select multiple threads from them and lock the multiple volatile cache partitions one by one to the selected threads; that is, each volatile cache partition is locked to one thread, and any two volatile cache partitions are locked to different threads.
  • the cache manager may assign the first volatile cache partition to the first thread, that is, lock the first volatile cache partition to the first thread.
  • the Cache Manager then instructs each of the multiple threads to access the locked volatile cache partition.
  • for example, the cache manager can send an access indication to each thread, where the access indication includes the identifier of the volatile cache partition locked to that thread; after receiving the access indication, the thread accesses that volatile cache partition according to the access indication.
  • each of the multiple threads can only access the locked volatile cache partition and cannot access other volatile cache partitions, that is, during the first thread occupying the first volatile cache partition, the cache The manager does not allow other threads to access the first volatile cache partition, thus ensuring that no data contamination between threads occurs when multiple threads access volatile memory.
  • the cache manager can establish a lock list as shown in Table 1, which is used to record the identifier of each volatile cache partition and the identifier of the thread that each volatile cache partition is locked to. For example, volatile cache partition C1 is locked with thread F1, volatile cache partition C2 is locked with thread F2, volatile cache partition C3 is locked with thread F3, volatile cache partition C4 is locked with thread F4, and volatile cache partition C5 is locked with thread F5.
  • Table 1 merely gives an example of the lock list; in actual applications, the lock list may be different from Table 1, which is not limited by the embodiments of the present invention.
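  • a minimal sketch of such a lock list, using the example identifiers of Table 1 and invented helper names, could look as follows in C; the actual lock list in a cache manager would be a hardware structure, so this is only an analogy.

      #include <stdio.h>
      #include <string.h>

      #define NUM_PARTITIONS 5

      /* One row of the lock list: which thread a volatile cache partition is locked to. */
      struct lock_entry {
          char partition_id[8];     /* e.g. "C1" */
          char thread_id[8];        /* e.g. "F1", or "" when the partition is free */
      };

      static struct lock_entry lock_list[NUM_PARTITIONS] = {
          {"C1", "F1"}, {"C2", "F2"}, {"C3", "F3"}, {"C4", "F4"}, {"C5", "F5"},
      };

      /* Lock a partition to a thread (assigning the partition as in step 601). */
      static void lock_partition(const char *partition, const char *thread) {
          for (int i = 0; i < NUM_PARTITIONS; i++) {
              if (strcmp(lock_list[i].partition_id, partition) == 0) {
                  strcpy(lock_list[i].thread_id, thread);
                  return;
              }
          }
      }

      int main(void) {
          lock_partition("C1", "F6");   /* re-lock C1 to another thread, as in step 604 */
          for (int i = 0; i < NUM_PARTITIONS; i++)
              printf("%s -> %s\n", lock_list[i].partition_id, lock_list[i].thread_id);
          return 0;
      }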
  • Step 602 Determine whether the first thread needs to perform a long time-consuming operation. If the first thread needs to perform a long time-consuming operation, step 603 is performed; if the first thread does not need to perform a long time-consuming operation, step 602 is performed.
  • the first thread can read the data stored on the first volatile cache partition or modify the data stored on the first volatile cache partition.
  • for example, the cache manager can receive a long time-consuming operation indication sent by the first thread, and determine, according to that indication, that the first thread needs to perform a long time-consuming operation.
  • a long time-consuming operation is an operation whose duration is greater than a preset duration threshold, and the first thread does not access the first volatile cache partition during the execution of the long time-consuming operation. Optionally, the preset duration threshold is 100 clock cycles of the terminal, and the long time-consuming operation is a data loss handling operation (English: Cache Missing; that is, a cache miss) or an operation of accessing an input/output (English: In/Out; abbreviation: I/O) device.
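  • step 602 can be sketched as a simple check on an operation indication sent by the thread; the structure, the field names, and the use of an expected-cycle count below are assumptions made for illustration, with the 100-cycle threshold and the cache-miss / I/O / sleep operation types taken from the text.

      #include <stdbool.h>
      #include <stdio.h>

      #define LONG_OP_THRESHOLD_CYCLES 100   /* preset duration threshold (text: ~100 clock cycles) */

      enum op_type { OP_CACHE_MISS, OP_IO_ACCESS, OP_SLEEP, OP_OTHER };

      /* Indication a thread sends to the cache manager before starting an operation. */
      struct op_indication {
          enum op_type  type;
          unsigned long expected_cycles;
      };

      /* Step 602: is this a long time-consuming operation? */
      static bool is_long_operation(const struct op_indication *op) {
          return op->expected_cycles > LONG_OP_THRESHOLD_CYCLES;
      }

      /* Optional extra check: only preset operation types trigger the backup (step 603). */
      static bool is_preset_operation(const struct op_indication *op) {
          return op->type == OP_CACHE_MISS || op->type == OP_IO_ACCESS || op->type == OP_SLEEP;
      }

      int main(void) {
          struct op_indication op = { OP_CACHE_MISS, 500 };
          if (is_long_operation(&op) && is_preset_operation(&op))
              printf("back up the partition and release it\n");
          else
              printf("keep the partition locked to the thread\n");
          return 0;
      }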
  • Step 603 Write first data related to the first thread in the first volatile cache partition to the non-volatile memory.
  • after determining that the first thread needs to perform a long time-consuming operation, the cache manager can determine that the first thread will not access the first volatile cache partition for a long period of time, and can therefore back up the first data related to the first thread that is stored in the first volatile cache partition to the non-volatile memory.
  • as described above, the non-volatile memory also includes a plurality of non-volatile cache partitions, the plurality of non-volatile cache partitions are coupled to the plurality of volatile cache partitions, and the non-volatile cache partitions coupled to any two volatile cache partitions are different.
  • therefore, when the cache manager needs to back up the first data to the non-volatile memory, it can directly determine the first non-volatile cache partition coupled to the first volatile cache partition and back up the first data to the first non-volatile cache partition.
  • in the process of backing up the first data, the cache manager can record the related information of the first data in order to keep track of the backed-up first data.
  • the related information of the first data includes: an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, where the non-volatile storage identifier is used to indicate the storage location of the first data in the non-volatile memory.
  • optionally, the related information of the first data further includes a long time-consuming operation execution state identifier and a data recovery identifier.
  • at this point, the long time-consuming operation execution state identifier is a first identifier used to indicate that the long time-consuming operation has not been completed, and the data recovery identifier is a third identifier used to indicate that the data has not been written from the non-volatile memory to the first volatile cache partition.
  • a cache list is pre-configured on the cache manager, and the cache list is used to record related information of data written to the nonvolatile memory.
  • when the cache manager records the related information of the first data, the cache manager writes the related information of the first data to the cache list.
  • the cache list is as shown in Table 2.
  • for example, the related information of the first data includes: the identifier (C1) of the first volatile cache partition, a non-volatile storage identifier (F1M1) used to indicate the first non-volatile cache sub-region in the first non-volatile cache partition, the identifier (W1) of the first thread, a first identifier (0), and a third identifier (0).
  • the cache list further includes related information of third data, where the third data is data related to a third thread that was stored in a third volatile cache partition; when the third thread performed a long time-consuming operation, the cache manager backed up the third data to a non-volatile cache sub-region in the third non-volatile cache partition.
  • the related information of the third data includes: the identifier (C3) of the third volatile cache partition, a non-volatile storage identifier (F3M1) used to indicate a non-volatile cache sub-region in the third non-volatile cache partition, the identifier (W3) of the third thread, a first identifier (0), and a third identifier (0).
  • according to the recorded related information of data, the cache manager can select an idle non-volatile cache sub-region among the plurality of non-volatile cache sub-regions of the first non-volatile cache partition (that is, a sub-region whose non-volatile storage identifier does not appear in the related information of data recorded by the cache manager before the first data is written to the non-volatile memory) as the first non-volatile cache sub-region, and write the first data to it.
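  • the related information recorded in the preset cache list (Table 2) maps naturally onto a small record, and selecting an idle sub-region amounts to picking a sub-region identifier that appears in no existing record; the following C sketch uses invented names (cache_list_entry, pick_idle_subregion) and the example identifiers C1, W1 and F1Mx, and is only an illustration of the bookkeeping.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      #define MAX_ENTRIES 8
      #define SUBREGIONS  3

      /* One row of the preset cache list (Table 2); names are illustrative. */
      struct cache_list_entry {
          bool in_use;
          char partition_id[8];   /* e.g. "C1"                                      */
          char nvm_id[8];         /* e.g. "F1M1": sub-region 1 of NVM partition 1   */
          char thread_id[8];      /* e.g. "W1"                                      */
          int  op_done;           /* 0 = first identifier (op not finished),
                                     1 = second identifier (op finished)            */
          int  restored;          /* 0 = third identifier (not written back),
                                     1 = fourth identifier (written back)           */
      };

      static struct cache_list_entry cache_list[MAX_ENTRIES];

      /* Find an idle sub-region of NVM partition 1: one whose id appears in no record. */
      static int pick_idle_subregion(char out_id[8]) {
          for (int s = 1; s <= SUBREGIONS; s++) {
              char cand[8];
              snprintf(cand, sizeof cand, "F1M%d", s);
              bool taken = false;
              for (int i = 0; i < MAX_ENTRIES; i++)
                  if (cache_list[i].in_use && strcmp(cache_list[i].nvm_id, cand) == 0)
                      taken = true;
              if (!taken) { strcpy(out_id, cand); return 0; }
          }
          return -1;              /* no idle sub-region */
      }

      int main(void) {
          char sub[8];
          if (pick_idle_subregion(sub) == 0) {
              /* Record the related information of the first data (step 603). */
              cache_list[0] = (struct cache_list_entry){ true, "C1", "", "W1", 0, 0 };
              strcpy(cache_list[0].nvm_id, sub);
              printf("backed up C1/W1 into %s\n", cache_list[0].nvm_id);
          }
          return 0;
      }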
  • the data stored on the first volatile cache partition includes a plurality of data blocks, and each data block includes: valid data bits, tags, and data portions.
  • for example, the first data stored in the first volatile cache partition includes a first data block and a second data block, where a data block is also referred to as a cache line (English: Cache Line). The data portion in the first data block is unrelated to the first thread, and the valid data bits in the first data block are used to indicate that the data portion of the first data block is unrelated to the first thread; the data portion in the second data block is related to the first thread, and the valid data bits in the second data block are used to indicate that the data portion of the second data block is related to the first thread.
  • for example, the content of the valid data bits in the first data block is "0", and the content of the valid data bits in the second data block is "1".
  • when backing up the first data to the first non-volatile cache partition, the cache manager may back up the content of the valid data bits in the first data block, and the second data block, to the first non-volatile cache partition. After backing up the first data to the first non-volatile cache partition, the cache manager also needs to clear the contents of all valid data bits within the first volatile cache partition.
  • that is, when backing up the first data, the cache manager only backs up the valid data block (the second data block) and the content of the valid data bits in the invalid data block (the first data block); and, in order to ensure that the next thread can access the first volatile cache partition normally, the cache manager needs to clear the contents of all valid data bits in the first volatile cache partition.
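  • the backup and clearing of valid data bits described above can be sketched as follows in C; the line count, line size, and function name backup_and_clear are illustrative assumptions. The full content of a valid line is copied to the non-volatile sub-region, only the valid bit of an invalid line is copied, and every valid bit in the volatile partition is then cleared so the next thread starts from a clean partition.

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      #define LINES     4
      #define LINE_SIZE 8

      /* Cache line layout described in the text: valid bit, tag, data portion. */
      struct cache_line {
          bool     valid;
          unsigned tag;
          char     data[LINE_SIZE];
      };

      /* Back up a partition into an NVM sub-region: full content of valid lines,
       * only the valid bit of invalid lines, then clear all valid bits. */
      static void backup_and_clear(struct cache_line part[LINES], struct cache_line nvm[LINES]) {
          for (int i = 0; i < LINES; i++) {
              nvm[i].valid = part[i].valid;        /* valid bit is always backed up         */
              if (part[i].valid) {                 /* only valid lines carry thread data    */
                  nvm[i].tag = part[i].tag;
                  memcpy(nvm[i].data, part[i].data, LINE_SIZE);
              }
              part[i].valid = false;               /* clear so the next thread starts clean */
          }
      }

      int main(void) {
          struct cache_line part[LINES] = {0}, nvm[LINES] = {0};
          part[2].valid = true;
          part[2].tag = 7;
          memcpy(part[2].data, "thread1", 8);

          backup_and_clear(part, nvm);
          printf("nvm line 2: valid=%d tag=%u data=%s\n", nvm[2].valid, nvm[2].tag, nvm[2].data);
          return 0;
      }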
  • optionally, the long time-consuming operation indication sent by the first thread to the cache manager further includes an identifier of the long time-consuming operation, and, after receiving the long time-consuming operation indication, the cache manager can determine, according to the identifier of the long time-consuming operation, whether the long time-consuming operation is a preset operation.
  • the preset operation includes at least one of a data loss operation, an operation of accessing an input/output device, and a sleep operation.
  • if the long time-consuming operation is a preset operation, the cache manager performs the steps of releasing the occupation of the first volatile cache partition by the first thread and writing the first data to the first non-volatile cache sub-region.
  • if the long time-consuming operation is not a preset operation, the cache manager does not perform the steps of releasing the occupation of the first volatile cache partition by the first thread and writing the first data to the first non-volatile cache sub-region.
  • in addition, because the cache manager backs up the first data to the non-volatile memory, the first data stored in the non-volatile memory is not lost even when the shared cache is suddenly powered off; therefore, loss of the first data can be further prevented.
  • in order to ensure that the first data in the first volatile cache partition can be successfully backed up to the first non-volatile cache sub-region, the capacity of the first non-volatile cache sub-region needs to be greater than or equal to the capacity of the first volatile cache partition.
  • optionally, the capacity of the first non-volatile cache sub-region may be set equal to the capacity of the first volatile cache partition.
  • Step 604 Assign the first volatile cache partition to a second thread that is to access the volatile memory.
  • prior to step 604, the first volatile cache partition is always locked to the first thread, that is, only the first thread is allowed to access it; and during the execution of the long time-consuming operation, the first thread does not access the first volatile cache partition. Therefore, in step 604, the cache manager can release the occupation of the first volatile cache partition by the first thread and set the first volatile cache partition to a state in which it can be accessed by a thread different from the first thread, that is, give the first volatile cache partition the ability to be accessed by a thread different from the first thread.
  • when setting the first volatile cache partition to be accessible by another thread, the cache manager can directly set the first volatile cache partition to lock the second thread, so that the locking relationship between the first volatile cache partition and the first thread is overwritten (that is, the manner of step 604).
  • alternatively, the cache manager may first release the locking relationship between the first volatile cache partition and the first thread, and then set the first volatile cache partition to lock the second thread and instruct the second thread to access the first volatile cache partition.
  • it should be noted that, before step 604, the cache manager selects a plurality of threads from the threads that need to access the volatile memory, and the plurality of threads include the first thread. In step 604, the cache manager selects, from the threads that were not previously selected, one thread as the second thread.
  • the cache manager can then set the first volatile cache partition to lock the second thread so that the second thread can access the first volatile cache partition; in the lock list of Table 1, the thread locked with the first volatile cache partition is updated to the identifier of the second thread.
  • the cache manager can further instruct the second thread to access the first volatile cache partition.
  • for the second thread, the cache manager may also perform a method similar to steps 602 to 604. That is, if the second thread also needs to perform a long time-consuming operation, the cache manager can write the second data related to the second thread that is stored in the first volatile cache partition to a second non-volatile cache sub-region that is free in the first non-volatile cache partition, and record the related information of the second data in the process of writing the second data to the second non-volatile cache sub-region. The cache manager can then lock the first volatile cache partition to yet another thread (neither the first thread nor the second thread) and instruct that thread to access the first volatile cache partition, and so on in this cyclic manner.
  • the cache manager can record the related information of the second data in the preset cache list.
  • the related information of the second data may include: an identifier (C1) of the first volatile cache partition, used for a nonvolatile storage identifier (F1M2) indicating a second nonvolatile cache subregion in the first nonvolatile cache partition, an identifier (W2) of the second thread, a first identifier (0), and a third identifier (0) .
  • Step 605 After the first thread finishes performing the long time-consuming operation, assign the first volatile cache partition to the first thread, and write the first data in the non-volatile memory to the first volatile cache partition.
  • after the first thread has completed the long time-consuming operation, the cache manager needs to restore the access of the first thread to the first volatile cache partition.
  • in the embodiment of the present invention, the first non-volatile cache partition includes multiple non-volatile cache sub-regions, and the capacity of each non-volatile cache sub-region is greater than or equal to the capacity of the first volatile cache partition. The more threads that perform long time-consuming operations, the more data is written to the non-volatile memory, and the more threads whose access to the first volatile cache partition needs to be restored; therefore, the cache manager needs to restore, in turn, the access of each thread that has finished its long time-consuming operation to the first volatile cache partition.
  • for example, after the first thread finishes performing the long time-consuming operation, the first thread sends a long time-consuming operation completion indication to the cache manager, so that the cache manager, according to that completion indication, changes the first identifier in the related information that includes the identifier of the first thread in the preset cache list to the second identifier, where the second identifier is used to indicate that the long time-consuming operation has been completed.
  • at this point, the long time-consuming operation execution state identifier in the related information that includes the identifier of the first thread is the second identifier.
  • for example, the cache manager can change the long time-consuming operation execution state identifier in the related information that includes the identifier (W1) of the first thread from the first identifier (0) to the second identifier (1). Further, if the second thread also happens to have finished a long time-consuming operation at this time, the cache manager can likewise change the long time-consuming operation execution state identifier in the related information that includes the identifier (W2) of the second thread from the first identifier (0) to the second identifier (1).
  • then, according to the pieces of related information that include the second identifier in the preset cache list, such as the related information of the first data and the related information of the second data, the cache manager sequentially allocates the volatile cache partition indicated by the identifier of the volatile cache partition in each such piece of related information to the thread indicated by the identifier of the thread in that piece of related information, and writes the data at the storage location indicated by the non-volatile storage identifier in that piece of related information to that volatile cache partition. That is, the data whose related information includes the second identifier is restored in turn, and the thread indicated by the identifier of the thread accesses the volatile cache partition indicated by the identifier of the volatile cache partition.
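  • step 605 can be sketched as a loop over the preset cache list that handles every record whose execution state is the second identifier; the record layout and helper names below repeat the assumptions of the earlier cache-list sketch (they are not the patent's terminology), and the busy check corresponds to step 6051a described next.

      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_ENTRIES 8

      struct cache_list_entry {
          bool in_use;
          char partition_id[8];
          char nvm_id[8];
          char thread_id[8];
          int  op_done;        /* 1 = second identifier: long operation finished   */
          int  restored;       /* 1 = fourth identifier: data already written back */
      };

      /* Stand-ins for the actual data moves and lock updates (illustrative). */
      static bool partition_is_busy(const char *partition)              { (void)partition; return false; }
      static void lock_partition(const char *partition, const char *thr){ printf("lock %s to %s\n", partition, thr); }
      static void copy_nvm_to_partition(const char *nvm, const char *p) { printf("copy %s -> %s\n", nvm, p); }
      static void notify_thread(const char *thr)                        { printf("thread %s may resume\n", thr); }

      /* Step 605: restore every entry whose long operation has completed. */
      static void restore_completed(struct cache_list_entry list[MAX_ENTRIES]) {
          for (int i = 0; i < MAX_ENTRIES; i++) {
              struct cache_list_entry *e = &list[i];
              if (!e->in_use || !e->op_done || e->restored)
                  continue;
              if (partition_is_busy(e->partition_id))
                  continue;                         /* still busy: retried later (step 6051a)  */
              lock_partition(e->partition_id, e->thread_id);       /* step 6052a */
              copy_nvm_to_partition(e->nvm_id, e->partition_id);   /* step 6053a */
              e->restored = 1;                      /* third identifier -> fourth identifier   */
              notify_thread(e->thread_id);
          }
      }

      int main(void) {
          struct cache_list_entry list[MAX_ENTRIES] = {
              { true, "C1", "F1M1", "W1", 1, 0 },
          };
          restore_completed(list);
          return 0;
      }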
  • FIG. 7 shows a flowchart of a method for the cache manager to restore the access of the first thread to the first volatile cache partition. As shown in FIG. 7, the method includes:
  • Step 6051a Determine whether the first volatile cache partition is being accessed. If the first volatile cache partition is being accessed, step 6051a is performed again; if the first volatile cache partition is not being accessed, step 6052a is performed.
  • before restoring the access of the first thread, the cache manager needs to first determine whether the first volatile cache partition is being accessed. If the first volatile cache partition is being accessed, the cache manager returns to step 6051a and continues to determine whether the first volatile cache partition is being accessed; if the first volatile cache partition is not being accessed, the cache manager performs step 6052a.
  • this is because the first volatile cache partition may be being accessed by another thread (such as the second thread); in order to prevent the data of that other thread from being lost, it is necessary to wait until the first volatile cache partition is in an idle state (that is, the first volatile cache partition is not being accessed) before the access of the first thread can be restored.
  • it should be noted that the first volatile cache partition is in an idle state after other threads have completed their access to it, or while those other threads are themselves performing long time-consuming operations.
  • Step 6052a Assign the first volatile cache partition to the first thread according to the related information of the first data. Go to step 6053a.
  • the cache manager can read the identifier of the first volatile cache partition and the identifier of the first thread from the related information of the first data, thereby determining the first volatile cache partition and the first thread, and can then set the first volatile cache partition to lock the first thread (as shown in Table 1), which also allocates the first volatile cache partition to the first thread.
  • Step 6053a Write first data on the non-volatile memory to the first volatile cache partition according to the related information of the first data.
  • the cache manager reads the non-volatile storage identifier from the related information of the first data, determines the first non-volatile cache sub-region indicated by the non-volatile storage identifier, acquires the first data stored in that sub-region, and writes the first data to the first volatile cache partition. Further, when writing the first data to the first volatile cache partition, the cache manager can write only the valid data block and the content of the valid data bits in the invalid data block to the first volatile cache partition.
  • the cache manager may further indicate that the first thread indicated by the identifier of the thread continues to access the first volatile cache partition in the related information of the first data. .
  • It should be noted that, after the first data is restored to the first volatile cache partition, the cache manager also changes the third identifier in the related information of the first data in the preset cache list to the fourth identifier. The fourth identifier is used to indicate that the first data on the non-volatile memory has been written to the first volatile cache partition. In this way, after restoring the first thread's access to the first volatile cache partition, the cache manager can determine, according to the fourth identifier in the related information of the first data in the preset cache list, that the first data has already been restored to the first volatile cache partition, and can then move on to the next piece of related information containing the second identifier and restore the access of the thread indicated by its thread identifier to the volatile cache partition indicated by its volatile cache partition identifier. This prevents the cache manager from writing the first data to the first volatile cache partition again after the first data has already been written to the first volatile cache partition.
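The state carried by each entry of the preset cache list can be pictured as in the sketch below; the field names and the encoding of the first/second and third/fourth identifiers as enum values are assumptions made for illustration, not the patent's representation.

```c
#include <stdbool.h>

enum op_status      { OP_RUNNING = 0, OP_DONE = 1 };    /* first / second identifier */
enum restore_status { NOT_RESTORED = 0, RESTORED = 1 };  /* third / fourth identifier */

typedef struct {
    int partition_id;             /* identifier of the volatile cache partition */
    int nvm_location;             /* non-volatile storage identifier */
    int thread_id;                /* identifier of the thread */
    enum op_status      op;       /* has the long time-consuming operation finished? */
    enum restore_status restored; /* has the data been written back yet? */
} cache_list_entry_t;

/* Returns true if this entry still needs its data restored; checked before
 * step 6053a so the same data is never copied back twice. */
bool needs_restore(const cache_list_entry_t *e)
{
    return e->op == OP_DONE && e->restored == NOT_RESTORED;
}

/* Called right after the data has been written to the volatile partition:
 * change the third identifier to the fourth identifier. */
void mark_restored(cache_list_entry_t *e)
{
    e->restored = RESTORED;
}
```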
  • Further, when the long time-consuming operation performed by the first thread is a data loss (English: Cache missing) operation, the cache manager can, before step 6053a, receive the lost data that the first thread found while performing the long time-consuming operation, and, after step 6053a, write the lost data to the first volatile cache partition, so that the first thread can access the first volatile cache partition normally after its access to the first volatile cache partition is restored.
  • It should be noted that, when the first non-volatile cache partition in the embodiment shown in FIG. 6 does not include multiple non-volatile cache sub-regions, and the capacity of the first non-volatile cache partition is greater than or equal to the capacity of the first volatile cache partition, the embodiment shown in FIG. 6 changes as follows:
  • First, in step 603 the cache manager writes the first data to the first non-volatile cache partition in the non-volatile memory, and the non-volatile storage identifier in the related information of the first data recorded by the cache manager is used to indicate the identifier of the first non-volatile cache partition. Further, since the first non-volatile cache partition does not include multiple non-volatile cache sub-regions, the first non-volatile cache partition can only hold the data, related to a single thread, that is stored on the first volatile cache partition; the number of threads whose data needs to be written to the non-volatile memory while accessing the first volatile cache partition is therefore not more than one, and the related information of the first data does not include a long time-consuming operation execution status identifier.
  • Second, during the second thread's access to the first volatile cache partition, the cache manager does not need to perform a method similar to that in steps 602 to 604; that is, when the second thread also needs to perform a long time-consuming operation, the cache manager does not need to do anything.
  • Finally, when performing step 605, after the first thread finishes the long time-consuming operation, the cache manager does not need to determine whether the first volatile cache partition is being accessed; instead, it directly stops the second thread's access to the first volatile cache partition, allocates the first volatile cache partition to the first thread, writes the first data in the non-volatile memory to the first volatile cache partition, and restores the first thread's access to the first volatile cache partition.
  • Optionally, in order to make effective use of the storage space in the shared cache, the capacity of the first non-volatile cache partition is set equal to the capacity of the first volatile cache partition in this embodiment of the present invention.
  • FIG. 8 shows a flow chart of another method for the cache manager to restore access of the first thread to the first volatile cache partition. As shown in FIG. 8, the method includes:
  • Step 6051b: Allocate the first volatile cache partition to the first thread according to the related information of the first data.
  • After determining that the first thread has finished the long time-consuming operation, the cache manager can directly determine, according to the identifier of the first thread, the related information of the first data that contains the identifier of the first thread, and read the identifier of the first volatile cache partition and the identifier of the first thread from the related information of the first data, thereby determining the first volatile cache partition and the first thread. It then sets the first volatile cache partition to lock the first thread, that is, allocates the first volatile cache partition to the first thread. At this point, the second thread is no longer locked to the first volatile cache partition and cannot access the first volatile cache partition.
  • Further, when the second thread writes data to the first volatile cache partition, the second thread marks the tag in each data block of that data as a modified tag, and the cache in the terminal includes multiple levels of memory. Before restoring the first thread's access to the first volatile cache partition, that is, before performing step 6051b, in order to prevent the second thread's data from being lost when the first thread's access to the first volatile cache partition is restored, the cache manager also determines whether the write strategy of the second thread is a write-back (English: Write Back) policy. If the write strategy of the second thread is a write-back policy, the cache manager backs up the data with the modified tag in the first volatile cache partition to: main memory (English: Main Memory) or a memory whose cache level is lower than the cache level of the volatile memory.
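A minimal sketch of this write-back check follows, assuming a simple line layout with a valid bit and a modified flag; the structure, sizes, and helper names are illustrative assumptions and are not taken from the patent.

```c
#include <stdbool.h>
#include <stddef.h>

#define LINES_PER_PARTITION 64
#define LINE_SIZE           64

typedef struct {
    bool          valid;      /* valid data bit */
    bool          modified;   /* tag marked "modified" by the second thread */
    unsigned long tag;
    unsigned char data[LINE_SIZE];
} cache_line_t;

typedef struct {
    cache_line_t line[LINES_PER_PARTITION];
} volatile_partition_t;

extern bool thread_uses_write_back(int thread_id);
extern void write_to_lower_level(unsigned long tag, const void *data, size_t len);

/* Before the first thread's access is restored, save the second thread's
 * dirty lines to main memory or to a lower cache level. */
void flush_second_thread_data(volatile_partition_t *p, int second_thread_id)
{
    if (!thread_uses_write_back(second_thread_id))
        return;                      /* write-through: lower levels are already current */

    for (int i = 0; i < LINES_PER_PARTITION; i++) {
        cache_line_t *l = &p->line[i];
        if (l->valid && l->modified) {
            write_to_lower_level(l->tag, l->data, LINE_SIZE);
            l->modified = false;
        }
    }
}
```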
  • Step 6052b: Write the first data on the non-volatile memory to the first volatile cache partition according to the related information of the first data.
  • For the specific steps by which the cache manager restores the first data in step 6052b, refer to the specific steps of step 6053a in the embodiment shown in FIG. 7; they are not described again herein.
  • Optionally, in step 605 the cache manager may instead allocate a second volatile cache partition to the first thread and write the first data in the non-volatile memory to the second volatile cache partition, where the second volatile cache partition is the first volatile cache partition or another volatile cache partition other than the first volatile cache partition. That is, after the first thread finishes performing the long delay operation, the cache manager may write the first data from the non-volatile memory to: the first volatile cache partition, or a second volatile cache partition different from the first volatile cache partition. Further, after the first data is written to the second volatile cache partition, the cache manager may further instruct the first thread to access the second volatile cache partition and continue to access the first data on the second volatile cache partition.
  • Optionally, the non-volatile memory includes at least two non-volatile cache partitions. When the at least two non-volatile cache partitions are not coupled one-to-one with the at least two volatile cache partitions, in step 602 the cache manager may write the first data in the first volatile cache partition to a first non-volatile cache partition in the non-volatile memory and record the association relationship between the first thread and the first non-volatile cache partition, where the first non-volatile cache partition is any one of the at least two non-volatile cache partitions. In step 605, the cache manager may allocate the second volatile cache partition to the first thread and, according to the association relationship between the first thread and the first non-volatile cache partition, write the first data in the first non-volatile cache partition to the second volatile cache partition. That is, the cache manager can write the first data from the first volatile cache partition to any partition in the non-volatile memory and, when writing to a certain partition, record the association relationship between the first thread and the first non-volatile cache partition, so that when the first data is restored from the non-volatile memory to the volatile memory, the first data and the first thread that needs to use the first data can be determined.
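The association relationship can be kept in a small table, as in the following sketch; the array-based representation, the sizes, and the function names are assumptions made for illustration only.

```c
#define MAX_THREADS 16

static int nvm_partition_of_thread[MAX_THREADS];   /* -1 means no backup recorded */

void init_associations(void)
{
    for (int i = 0; i < MAX_THREADS; i++)
        nvm_partition_of_thread[i] = -1;
}

/* Step 602 variant: remember which non-volatile partition received the first data. */
void record_association(int thread_id, int nvm_partition_id)
{
    nvm_partition_of_thread[thread_id] = nvm_partition_id;
}

/* Step 605 variant: look the backup up again so it can be written into the
 * second volatile cache partition allocated to the thread. */
int lookup_association(int thread_id)
{
    return nvm_partition_of_thread[thread_id];
}
```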
  • In order to prevent data on the volatile memory from being lost in an emergency, the related art designed a ferroelectric non-volatile flip-flop as shown in FIG. 9. The ferroelectric non-volatile flip-flop includes: a ferroelectric non-volatile part and a complementary metal oxide semiconductor (English: Complementary Metal Oxide Semiconductor; CMOS for short) volatile part. The ferroelectric non-volatile part is provided with a signal input terminal Din, a signal output terminal Dout, an inverted signal output terminal, a clock signal input terminal Clk, and an inverted clock signal input terminal. When the ferroelectric non-volatile flip-flop works normally, the CMOS volatile part works; when an emergency occurs, the flip-flop generates a first signal RW, a second signal PL, and a third signal PCH in a certain sequence, so that the data on the CMOS volatile part is backed up to the ferroelectric non-volatile part. However, the related art does not use this non-volatile part to back up a thread's data when the thread needs to perform a long time-consuming operation.
  • In summary, in the cache management method provided by this embodiment of the present invention, other threads are not allowed to access the first volatile cache partition during the period in which the first thread occupies the first volatile cache partition, so other threads cannot access the first volatile cache partition while the first thread is accessing it, which prevents the data of different threads from polluting each other. Moreover, when the first thread performs a long time-consuming operation, the first data is written to the non-volatile memory, so the first data is backed up, and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads, and therefore the cache utilization of the terminal can be improved.
  • FIG. 10 is a schematic structural diagram of a cache manager according to an embodiment of the present disclosure.
  • the cache manager may be the cache manager in FIG. 2.
  • the cache manager 100 includes:
  • An allocating module 1001, configured to allocate a first volatile cache partition to the first thread, where the first volatile cache partition stores first data related to the first thread, other threads are not allowed to access the first volatile cache partition while the first thread occupies the first volatile cache partition, and the first volatile cache partition is any one of the at least two volatile cache partitions;
  • A first judging module 1002, configured to determine whether the first thread needs to perform a long delay operation, where the long delay operation is an operation whose duration is greater than a preset time threshold, and the first thread does not access the first volatile cache partition while performing the long delay operation;
  • A first writing module 1003, configured to: when the first thread needs to perform a long delay operation, write the first data in the first volatile cache partition to the non-volatile memory, and release the first thread's occupation of the first volatile cache partition.
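Viewed purely as software, the three modules above could be grouped as in the following sketch; the function-pointer layout and the call sequence are assumptions for illustration and do not describe the patent's hardware implementation.

```c
#include <stdbool.h>

typedef struct cache_manager {
    /* allocating module 1001 */
    void (*allocate)(struct cache_manager *cm, int partition_id, int thread_id);
    /* first judging module 1002 */
    bool (*is_long_latency_op)(struct cache_manager *cm, int thread_id);
    /* first writing module 1003: back up to NVM and release the partition */
    void (*write_out_and_release)(struct cache_manager *cm, int partition_id, int thread_id);
} cache_manager_t;

/* Typical call sequence corresponding to steps 601 to 604. */
void on_thread_event(cache_manager_t *cm, int partition_id, int first_thread, int second_thread)
{
    cm->allocate(cm, partition_id, first_thread);
    if (cm->is_long_latency_op(cm, first_thread)) {
        cm->write_out_and_release(cm, partition_id, first_thread);
        cm->allocate(cm, partition_id, second_thread);  /* partition is now reusable */
    }
}
```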
  • In summary, this embodiment of the present invention provides a cache manager. Because other threads are not allowed to access the first volatile cache partition during the period in which the first thread occupies the first volatile cache partition, other threads cannot access the first volatile cache partition while the first thread is accessing it, which prevents the data of different threads from polluting each other. Moreover, when the first thread performs a long time-consuming operation, the first writing module 1003 writes the first data to the non-volatile memory, so the first data is backed up, and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads, thereby improving the cache utilization of the terminal.
  • Optionally, each volatile cache partition locks one thread, any two volatile cache partitions lock different threads, and each volatile cache partition is not allowed to be accessed by a thread it has not locked.
  • The allocating module 1001 is further configured to: set the first volatile cache partition to lock the first thread.
  • The first writing module 1003 is further configured to: release the locking relationship between the first volatile cache partition and the first thread; and/or set the first volatile cache partition to lock a second thread that is to access the volatile memory.
  • Optionally, the non-volatile memory includes at least two non-volatile cache partitions, the at least two volatile cache partitions are coupled one-to-one with the at least two non-volatile cache partitions, and the first writing module 1003 is further configured to:
  • write the first data to a first non-volatile cache partition coupled to the first volatile cache partition.
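For illustration, a one-to-one coupling between volatile and non-volatile cache partitions can be represented by a fixed lookup table, as in the minimal sketch below; the table size and its contents are assumptions.

```c
#define NUM_PARTITIONS 4

/* coupled_nvm[i] is the non-volatile partition coupled to volatile partition i;
 * any two volatile partitions are coupled to different non-volatile partitions. */
static const int coupled_nvm[NUM_PARTITIONS] = { 0, 1, 2, 3 };

int nvm_partition_for(int volatile_partition_id)
{
    return coupled_nvm[volatile_partition_id];
}
```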
  • FIG. 11 is a schematic structural diagram of another cache manager according to an embodiment of the present invention. As shown in FIG. 11, on the basis of FIG. 10, the cache manager 100 further includes:
  • A recording module 1004, configured to record related information of the first data in the process of writing the first data to the first non-volatile cache partition, where the related information of the first data includes: an identifier of the first volatile cache partition, a non-volatile storage identifier, and an identifier of the first thread, and the non-volatile storage identifier is used to indicate a storage location of the first data in the non-volatile memory;
  • A second writing module 1005, configured to: after the first thread finishes the long time-consuming operation, allocate the first volatile cache partition to the first thread according to the related information of the first data, and write the first data in the first non-volatile cache partition to the first volatile cache partition.
  • Optionally, the second writing module 1005 is further configured to: set the first volatile cache partition indicated by the identifier of the first volatile cache partition in the related information of the first data to lock the first thread indicated by the identifier of the first thread; write the first data in the first non-volatile cache partition indicated by the non-volatile storage identifier in the related information of the first data to the first volatile cache partition indicated by the identifier of the first volatile cache partition; and instruct the first thread indicated by the identifier of the first thread in the related information of the first data to continue accessing the first volatile cache partition indicated by the identifier of the first volatile cache partition.
  • FIG. 12 is a schematic structural diagram of another cache manager according to an embodiment of the present invention. As shown in FIG. 12, on the basis of FIG. 11, the cache manager 100 further includes:
  • the second determining module 1006 is configured to determine whether the first volatile cache partition is accessed
  • The second writing module 1005 is further configured to: when the first volatile cache partition is not accessed, allocate the first volatile cache partition to the first thread according to the related information of the first data, and write the first data in the first non-volatile cache partition to the first volatile cache partition.
  • the first non-volatile cache partition includes a plurality of non-volatile cache sub-regions, and each non-volatile cache sub-region has a capacity greater than or equal to a capacity of the first volatile cache partition.
  • The first writing module 1003 is further configured to: write the first data to a first non-volatile cache sub-region that is free among the plurality of non-volatile cache sub-regions, where, before the first data is written to the first non-volatile cache partition coupled to the first volatile cache partition, the related information of the data recorded by the cache manager does not include a non-volatile storage identifier indicating that free non-volatile cache sub-region, and the non-volatile storage identifier in the related information of the first data is used to indicate the first non-volatile cache sub-region;
  • The recording module 1004 is further configured to: record the related information of the first data in a preset cache list, where the related information of the first data further includes a first identifier, the first identifier is used to indicate that the long time-consuming operation has not been completed, and the preset cache list is used to record related information of data written to the non-volatile memory.
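A free sub-region can be found by scanning the recorded entries, as in the following sketch: a sub-region counts as free when no entry's non-volatile storage identifier points at it. The list layout, the sizes, and the encoding of the identifier are assumptions for illustration.

```c
#include <stdbool.h>

#define SUBREGIONS_PER_NVM_PARTITION 4
#define MAX_ENTRIES                  32

typedef struct {
    bool in_use;
    int  partition_id;   /* volatile cache partition identifier */
    int  nvm_location;   /* which sub-region holds the backup */
    int  thread_id;
    int  op_done;        /* first identifier (0) / second identifier (1) */
} record_t;

static record_t cache_list[MAX_ENTRIES];

/* Returns a sub-region index of nvm_partition not referenced by any recorded
 * entry, or -1 if every sub-region already holds a backup. */
int find_free_subregion(int nvm_partition)
{
    for (int s = 0; s < SUBREGIONS_PER_NVM_PARTITION; s++) {
        int candidate = nvm_partition * SUBREGIONS_PER_NVM_PARTITION + s;
        bool taken = false;
        for (int i = 0; i < MAX_ENTRIES; i++) {
            if (cache_list[i].in_use && cache_list[i].nvm_location == candidate)
                taken = true;
        }
        if (!taken)
            return candidate;
    }
    return -1;
}
```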
  • FIG. 13 is a schematic structural diagram of another cache manager according to an embodiment of the present invention. As shown in FIG. 13, on the basis of FIG. 12, the cache manager 100 further includes: a first change module 1007, configured to, after the first thread finishes the long time-consuming operation, change the first identifier in the related information of the first data that contains the identifier of the first thread in the preset cache list to a second identifier, where the second identifier is used to indicate that the long time-consuming operation has been completed;
  • The second writing module 1005 is further configured to: for each piece of related information in the preset cache list that contains the second identifier, in turn, allocate the volatile cache partition indicated by the identifier of the volatile cache partition in that related information to the thread indicated by the identifier of the thread in that related information, and write the data at the storage location indicated by the non-volatile storage identifier in that related information to the volatile cache partition indicated by the identifier of the volatile cache partition in that related information.
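The sequential restore pass might look like the following sketch; the entry layout is assumed, and restore_one stands in for the per-entry steps 6051a to 6053a (it is an assumed helper, not a function named in the patent).

```c
#include <stdbool.h>

typedef struct {
    bool in_use;
    int  partition_id;
    int  nvm_location;
    int  thread_id;
    int  op_done;    /* 0 = first identifier, 1 = second identifier */
    int  restored;   /* 0 = third identifier, 1 = fourth identifier */
} entry_t;

extern void restore_one(const entry_t *e);  /* allocate partition, copy data, resume thread */

/* Handle, one after another, every entry whose long operation has completed. */
void restore_completed_threads(entry_t *list, int count)
{
    for (int i = 0; i < count; i++) {
        entry_t *e = &list[i];
        if (e->in_use && e->op_done == 1 && e->restored == 0) {
            restore_one(e);    /* steps 6051a to 6053a for this entry */
            e->restored = 1;   /* fourth identifier: do not restore again */
        }
    }
}
```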
  • FIG. 14 is a schematic structural diagram of a cache manager according to another embodiment of the present invention. As shown in FIG. 14, on the basis of FIG. 11, the cache manager 100 further includes:
  • the third determining module 1008 is configured to determine whether the write policy of the second thread is a writeback policy
  • A third writing module 1009, configured to: when the write policy is a write-back policy, write the data with the modified tag in the first volatile cache partition to: the main memory, or a memory whose cache level is lower than the cache level of the volatile memory.
  • the capacity of the first volatile cache partition is greater than or equal to the capacity of the first non-volatile cache partition.
  • Optionally, the related information of the first data further includes a third identifier, where the third identifier is used to indicate that the first data on the non-volatile memory has not yet been written to the first volatile cache partition. The cache manager shown in FIG. 11 further includes: a second change module 10010, configured to change the third identifier in the related information of the first data to a fourth identifier, where the fourth identifier is used to indicate that the first data on the non-volatile memory has been written to the first volatile cache partition.
  • Optionally, when the long time-consuming operation is a data loss operation, the cache manager shown in FIG. 11 further includes: a receiving module 10011, configured to receive the lost data sent by the first thread;
  • the fourth writing module 10012 is configured to write the lost data to the first volatile cache partition.
  • Optionally, the first data includes a first data block and a second data block, where the data portion in the first data block is unrelated to the first thread and the data portion in the second data block is related to the first thread, and the first writing module 1003 is further configured to: write the contents of the valid data bits in the first data block and the second data block to the non-volatile memory;
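The selective backup, together with the subsequent clearing of the valid data bits by the clearing module described next, can be sketched as follows; the line layout and the nvm_append helper are assumptions for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

#define LINES     64
#define LINE_SIZE 64

typedef struct {
    bool          valid;   /* valid data bit: data portion related to the first thread? */
    unsigned long tag;
    unsigned char data[LINE_SIZE];
} cache_line_t;

extern void nvm_append(int nvm_location, const void *buf, size_t len);

/* Back up only what is needed: every block's valid data bit, plus the full
 * content of the valid blocks (the second data blocks). */
void backup_first_data(const cache_line_t *part, int nvm_location)
{
    for (int i = 0; i < LINES; i++) {
        nvm_append(nvm_location, &part[i].valid, sizeof(bool));
        if (part[i].valid)
            nvm_append(nvm_location, &part[i], sizeof(cache_line_t));
    }
}

/* After the backup, all valid data bits in the partition are cleared so the
 * next thread sees an empty partition. */
void clear_valid_bits(cache_line_t *part)
{
    for (int i = 0; i < LINES; i++)
        part[i].valid = false;
}
```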
  • The cache manager shown in any of FIG. 11 to FIG. 14 further includes a clearing module (not shown in FIG. 11 to FIG. 14), configured to clear the contents of all valid data bits within the first volatile cache partition.
  • Optionally, the long time-consuming operation is a preset operation whose operation duration is greater than a preset duration threshold, and the preset operation includes at least one of a data loss operation, an operation of accessing an input/output device, and a sleep operation.
  • In summary, this embodiment of the present invention provides a cache manager. Because other threads are not allowed to access the first volatile cache partition during the period in which the first thread occupies the first volatile cache partition, other threads cannot access the first volatile cache partition while the first thread is accessing it, which prevents the data of different threads from polluting each other. Moreover, when the first thread performs a long time-consuming operation, the first writing module writes the first data to the non-volatile memory, so the first data is backed up, and the first thread's occupation of the first volatile cache partition is released; that is, while the first thread performs the long time-consuming operation, the first volatile cache partition can be accessed by other threads, thereby improving the cache utilization of the terminal.
  • It should be noted that the method embodiments and the corresponding apparatus embodiments provided by the present application can refer to each other; this is not limited in the present application.
  • The order of the steps of the method embodiments provided by the present application can be adjusted appropriately, and steps can also be added or removed according to circumstances. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
  • A person of ordinary skill in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

一种缓存管理方法、缓存管理器、共享缓存和终端,涉及存储技术领域,该方法用于缓存管理器(121),该方法包括:在第一线程访问易失性存储器(122)中的第一易失性缓存分区(1221)的过程中,若第一线程需要执行长耗时操作,则将第一易失性缓存分区(1221)中的第一数据备份至非易失性存储器(123),任意两个易失性缓存分区(1221)锁定的线程不同,每个易失性缓存分区(1221)禁止被与易失性缓存分区(1221)锁定的线程不同的线程访问;在第一线程执行长耗时操作的过程中,将第一易失性缓存分区(1221)设置为可被与第一线程不同的线程访问的状态。该方法解决了终端的缓存利用率较低的问题,提高了终端的缓存利用率,可用于终端。

Description

缓存管理方法、缓存管理器、共享缓存和终端 技术领域
本申请涉及存储技术领域,特别涉及一种缓存管理方法、缓存管理器、共享缓存和终端。
背景技术
终端中设置有处理器和易失性存储器。处理器包括多个处理器核,每个处理器核包括多个线程,每个线程用于访问易失性存储器中的数据,如在易失性存储器中写入数据。同一处理器核中的多个线程能够共享易失性存储器,使得该多个线程能够同时访问易失性存储器中的数据。
目前,共享的易失性存储器中,在某一数据长时间未被访问时,该数据就会被其他数据替换。当某一线程在执行需要耗时较长的操作(如数据丢失操作)时,该线程原先在易失性存储器中访问的数据会由于长时间没有被访问,而被其他线程的数据替换,发生线程间的数据污染。相关技术中,为了防止线程间的数据污染,将共享的易失性存储器划分成若干个易失性缓存分区,并且设置不同的线程对应不同的易失性缓存分区,也即,一个线程只能访问该线程对应的易失性缓存分区。当某一线程执行耗时较长的操作时,该线程对应的易失性缓存分区禁止被其他线程访问,该线程在对应的易失性缓存分区中访问的数据并不会被其他线程的数据替换。
但是,在某一线程执行耗时较长的操作时,该某一线程对应的易失性缓存分区禁止被其他线程访问,且此时该某一线程也并未访问该易失性缓存分区,使得该易失性缓存分区无法被有效利用,因此,终端的缓存利用率较低。
发明内容
为了解决终端的缓存利用率较低的问题,本申请提供了一种缓存管理方法、缓存管理器、共享缓存和终端。所述技术方案如下:
第一方面,提供了一种缓存管理方法,共享缓存包括易失性存储器和非易失存储器,所述易失性存储器包括至少两个易失性缓存分区,所述方法包括:将第一易失性缓存分区分配给第一线程,所述第一易失性缓存分区上存储有所述第一线程相关的第一数据,在所述第一线程占用所述第一易失性缓存分区期间不允许其他线程访问所述第一易失性缓存分区,所述第一易失性缓存分区为所述至少两个易失性缓存分区中的任一分区;判断第一线程是否需要执行长延时操作,所述长延时操作是指操作时长大于预设时间阈值的操作,且所述第一线程在执行所述长延时操作期间不访问所述第一易失性缓存分区;若所述第一线程需要执行长延时操作,则将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,并释放所述第一线程对所述第一易失性缓存分区的占用。
示例的,该缓存管理方法可以用于缓存管理器,由于在第一线程占用第一易失性缓存分区期间,不允许其他线程访问第一易失性缓存分区,从而使得其他线程无法在第一线程访问第一易失性缓存分区时,其他线程无法访问该第一易失性缓存分区,防止了不同线程 的数据之间互相污染。且在第一线程执行长耗时操作时,将第一数据写入到非易失性存储器,对第一数据进行了备份,并释放第一线程对第一易失性缓存分区的占用,也即在第一线程执行长耗时操作时,第一易失性缓存分区能够被其他线程访问,因此,能够提高终端的缓存利用率。
可选的,所述方法还包括:在所述第一线程将所述长延时操作执行完毕后,为第二易失性缓存分区分配给所述第一线程,并将所述非易失性缓存器中的所述第一数据写入所述第二易失性缓存分区,所述第二易失性缓存分区为所述第一易失性缓存区分或所述第一易失性缓存分区之外的其他易失性缓存分区。也即,在第一线程执行完毕长延时操作后,缓存管理器可以将该第一数据从非易失性存储器上恢复至:第一易失性缓存分区,或者与第一易失性缓存分区不同的第二易失性缓存分区。进一步的,在将第一数据恢复至第二易失性缓存分区后,缓存管理器还可以指示该第一线程访问第二易失性缓存分区,并在第二易失性缓存分区上继续访问第一数据。
可选的,所述非易失存储器包含至少两个非易失性缓存分区,所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器包括:将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器中的第一非易失性缓存分区,第一非易失性缓存分区为所述至少两个非易失性缓存分区中的任一分区;所述方法还包括:记录所述第一线程与所述第一非易失性缓存分区的关联关系;所述将所述非易失性缓存器中的所述第一数据写入所述第二易失性缓存分区,包括:根据所述第一线程与所述第一非易失性缓存分区的关联关系,将所述第一非易失性缓存分区中的所述第一数据写入所述第二易失性缓存分区。也即是,该缓存管理器可以将第一数据从第一易失缓存分区写入非易失性存储器中的任一分区,并在写入某一分区时,记录第一线程与第一非易失性缓存分区的关联关系,以便于在将第一数据从非易失性存储器上恢复至易失性存储器上时,能够确定该第一数据以及需要使用该第一数据的第一线程。
可选的,每个所述易失性缓存分区锁定一个线程,且任意两个所述易失性缓存分区锁定的线程不同,每个所述易失性缓存分区不允许被未锁定的线程访问,所述将第一易失性缓存分区分配给第一线程,包括:设置所述第一易失性缓存分区锁定所述第一线程;所述释放所述第一线程对所述第一易失性缓存分区的占用,包括:解除所述第一易失性缓存分区与所述第一线程的锁定关系;和/或,设置所述第一易失性缓存分区锁定待访问所述易失性存储器的第二线程。
也即,在释放所述第一线程对所述第一易失性缓存分区的占用时,第一方面,可以直接解除第一易失性缓存分区与第一线程的锁定关系;第二方面,可以在解除第一易失性缓存分区与第一线程的锁定关系后,设置第一易失性缓存分区锁定第二线程,并指示第二线程访问第一易失性缓存分区;第三方面,可以直接设置第一易失性缓存分区锁定第二线程,将第一易失性缓存分区与第一线程的锁定关系覆盖掉,并指示第二线程访问第一易失性缓存分区。其中,通过将第一易失性缓存分区锁定第二线程,使得第一易失性缓存分区具有被与第一线程不同的第二线程访问的能力,并且,在将第一易失性缓存分区锁定第二线程后,第二线程能够访问第一易失性缓存分区,从而实现了提高终端的缓存利用率的效果。
可选的,所述非易失性存储器包括至少两个非易失性缓存分区,所述至少两个易失性缓存分区与所述至少两个非易失性缓存分区一一耦合,所述将所述第一易失性缓存分区中 的所述第一数据写入所述非易失性存储器,包括:将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区。为了进一步的防止线程间的数据污染,设置非易失性存储器也包括多个非易失性缓存分区,这样一来,备份至非易失性存储器上的多个线程的数据就不会发生污染。
可选的,所述方法还包括:在将所述第一数据写入至所述第一非易失性缓存分区的过程中,记录所述第一数据的相关信息,所述第一数据的相关信息包括:所述第一易失性缓存分区的标识、非易失存储标识和所述第一线程的标识,所述非易失存储标识用于指示所述第一数据在所述非易失性存储器内的存储位置;在所述第一线程执行完毕所述长耗时操作后,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。也即,在将第一数据备份至非易失性存储器的过程中,缓存管理器为了便于了解进行备份的第一数据的来龙去脉,缓存管理器需要将第一数据的相关信息进行记录,并在后续的步骤中根据该第一数据的相关信息对第一数据进行恢复。
可选的,所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:设置所述第一数据的相关信息中,所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区,锁定所述第一线程的标识所指示的所述第一线程;将所述第一数据的相关信息中,所述第一非易失存储标识所指示的第一非易失性缓存分区内的第一数据,写入至所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区;指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区。
可选的,在所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区之前,所述方法还包括:判断所述第一易失性缓存分区是否被访问;所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:在所述第一易失性缓存分区未被访问时,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。
也即,在第一线程执行完毕长耗时操作时,该第一易失性缓存分区上可能正被其他线程(如第二线程)访问,此时,为了防止其他线程的数据丢失,需要等待该第一易失性缓存分区处于空闲状态(也即第一易失性缓存分区未被访问)时,才能进行第一线程的访问恢复。
可选的,所述方法用于缓存管理器,所述第一非易失性缓存分区包括多个非易失缓存子区,每个所述非易失缓存子区的容量均大于或等于所述第一易失性缓存分区的容量,所述将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区,包括:将所述第一数据写入至所述多个非易失缓存子区中空闲的第一非易失缓存子区,在将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区之前,所述缓存管理器记录的数据的相关信息不包括:用于指示空闲的非易失缓存子区的非易失存 储标识,所述第一数据的相关信息中的所述非易失存储标识用于指示所述第一非易失缓存子区;
所述记录所述第一数据的相关信息,包括:在预设的缓存列表中记录所述第一数据的相关信息,所述第一数据的相关信息还包括:第一标识,所述第一标识用于指示所述长耗时操作未执行完毕,所述预设的缓存列表用于记录写入至非易失存储器的数据的相关信息;
所述方法还包括:在所述第一线程将所述长耗时操作执行完毕后,将所述预设的缓存列表中包含所述第一线程的标识的所述第一数据的相关信息中的所述第一标识,更改为第二标识,所述第二标识用于指示所述长耗时操作已执行完毕;
所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:依次根据所述预设的缓存列表中包含第二标识的相关信息,将所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区,分配给所述包含第二标识的相关信息中线程的标识所指示的线程,并将所述包含第二标识的相关信息中非易失存储标识所指示的存储位置上的数据,写入所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区。
由于第一非易失性缓存分区的容量大于第一易失性缓存分区,因此,在访问第一易失性缓存分区的过程中能够允许执行长耗时操作的线程较多,写入至非易失存储器的数据也较多,待恢复对第一易失性缓存分区的访问的线程也较多,所以缓存管理器需要依次恢复执行完毕长耗时操作的线程对第一易失性缓存分区的访问。
可选的,所述第一易失性缓存分区的容量大于或等于所述第一非易失性缓存分区的容量。
可选的,在所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区之前,所述方法还包括:判断所述第二线程的写策略是否为写回策略;若所述写策略为写回策略,则将所述第一易失性缓存分区中具有已修改标签的数据写入:内存或者缓存级别低于所述易失性存储器的缓存级别的存储器。
需要说明的是,第二线程在向第一易失性缓存分区中写入数据时,第二线程能够将数据中的每个数据块中的标签写为已修改标签,终端中的缓存包括多个级别的存储器。在恢复第一线程对第一易失性缓存分区的访问之前,缓存管理器为了防止在恢复第一线程对第一易失性缓存分区的访问时,丢失第二线程的数据,缓存管理器还能够判断第二线程的写策略是否为写回策略。若第二线程的写策略为写回策略,则将第一易失性缓存分区中具有已修改标签的数据写入至:内存或低于易失性存储器的缓存级别的存储器。
可选的,所述第一数据的相关信息还包括:第三标识,所述第三标识用于指示所述非易失存储器上的所述第一数据还未写入至所述第一易失性缓存分区,在所述指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区之前,所述方法还包括:将所述第一数据的相关信息中的所述第三标识更改为第四标识,所述第四标识用于指示所述非易失存储器上的所述第一数据已写入至所述第一易失性缓存分区。
需要说明的是,在将第一数据恢复至第一易失性缓存分区后,缓存管理器还能够将预 设的缓存列表中,第一数据的相关信息中的第三标识更改为第四标识,第四标识用于指示非易失存储器上的第一数据已写入至第一易失性缓存分区。这样一来,缓存管理器在恢复第一线程对第一易失性缓存分区的访问后,缓存管理器就能够根据该预设的缓存列表中的第一数据的相关信息中的第四标识,确定第一数据已恢复至第一易失性缓存分区。进而执行恢复下一个包含第二标识的数据的相关信息中,线程的标识所指示的线程对易失性缓存分区的标识所指示的易失性缓存分区的访问。从而防止了缓存管理器在将第一数据恢复写入至第一易失性缓存分区后,再次将第一数据写入至第一易失性缓存分区。
可选的,所述长耗时操作为数据丢失操作,在所述指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区之前,所述方法还包括:接收所述第一线程发送的丢失数据;将所述丢失数据写入所述第一易失性缓存分区。也即,在第一线程执行的长耗时操作为数据丢失操作时,在指示第一线程继续访问第一易失性缓存分区之前,该缓存管理器能够接收到第一线程发送的第一线程在执行长耗时操作时找到的丢失数据,并将丢失数据写入第一易失性缓存分区,以保证在第一线程恢复对第一易失性缓存分区的访问后,第一线程能够正常访问第一易失性缓存分区。
可选的,所述第一数据包括:第一数据块和第二数据块,其中,所述第一数据块中的数据部分与所述第一线程无关,所述第二数据块中的数据部分与所述第一线程相关,所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,包括:将所述第一数据块中的有效数据位的内容和所述第二数据块写入至所述非易失性存储器;在所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器之后,所述方法还包括:清除所述第一易失性缓存分区内的全部有效数据位的内容。
需要说明的是,第一易失性缓存分区上存储的数据包括多个数据块,每个数据块包括:有效数据位、标签和数据部分。在备份第一数据时,仅仅备份有效数据块(第二数据块)和无效数据块(也即第一数据块)中的有效数据位的内容即可,且为了保证下一线程能够正常访问第一易失性缓存分区,需要清除第一易失性缓存分区内的全部有效数据位的内容。
可选的,所述长耗时操作为操作时长大于预设时长阈值的预设操作,所述预设操作包括:数据丢失操作、访问输入输出设备操作和休眠操作中的至少一种操作。也即,只有在第一线程执行长耗时操作,且该长耗时操作为预设操作时,才执行将第一数据备份至第一非易失缓存子区上的步骤。在该长耗时操作并不是预设操作时,并不执行将第一数据备份至第一非易失缓存子区上的步骤。
第二方面,提供了一种缓存管理器,该缓存管理器包括至少一个模块,该至少一个模块用于实现上述第一方面或第一方面的任一可选方式所提供的缓存管理方法。
第三方面,提供了一种缓存管理器,该缓存管理器包括:至少一个发射模块、至少一个接收模块、至少一个处理模块、至少一个存储模块以及至少一个总线,存储模块通过总线与处理模块相连;处理模块被配置为执行存储模块中存储的指令;处理模块通过执行指令来实现:上述第一方面或第一方面中任意一种可能的实现方式所提供的缓存管理方法。
第四方面,提供了一种共享缓存,所述共享缓存包括:缓存管理器、易失性存储器和非易失性存储器,所述缓存管理器为第二方面或第三方面所述的缓存管理器;所述易失性存储器包括至少两个易失性缓存分区。
可选的,所述非易失性存储器包括至少两个非易失性缓存分区,所述至少两个易失性缓存分区与所述至少两个非易失性缓存分区一一耦合。
第五方面,提供了一种终端,所述终端包括:处理器和共享缓存,所述处理器包括至少两个线程;所述共享缓存为第四方面所述的共享缓存。
本申请提供的技术方案带来的有益效果是:
在第一线程占用第一易失性缓存分区期间,不允许其他线程访问第一易失性缓存分区,从而使得其他线程无法在第一线程访问第一易失性缓存分区时,其他线程无法访问该第一易失性缓存分区,防止了不同线程的数据之间互相污染。且在第一线程执行长耗时操作时,将第一数据写入到非易失性存储器,对第一数据进行了备份,并释放第一线程对第一易失性缓存分区的占用,也即在第一线程执行长耗时操作时,第一易失性缓存分区能够被其他线程访问,因此,能够提高终端的缓存利用率。
附图说明
图1为本发明实施例提供的一种终端的结构示意图;
图2为本发明实施例提供的一种终端的局部结构示意图;
图3为相关技术提供的一种共享缓存的结构示意图;
图4为相关技术提供的另一种共享缓存的结构示意图;
图5为本发明实施例提供的一种缓存管理器的结构示意图;
图6为本发明实施例提供的一种缓存管理方法的方法流程图;
图7为本发明实施例提供的一种缓存管理器在恢复第一线程对第一易失性缓存分区的访问的方法流程图;
图8为本发明实施例提供的另一种缓存管理器在恢复第一线程对第一易失性缓存分区的访问的方法流程图;
图9为相关技术提供的一种铁电非易失触发器的结构示意图;
图10为本发明实施例提供的一种缓存管理器的结构示意图;
图11为本发明实施例提供的另一种缓存管理器的结构示意图;
图12为本发明实施例提供的又一种缓存管理器的结构示意图;
图13为本发明实施例提供的又一种缓存管理器的结构示意图;
图14为本发明另一实施例提供的一种缓存管理器的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
图1为本发明实施例提供的一种终端的结构示意图,图2为本发明实施例提供的一种终端的局部结构示意图。请结合图1和图2,终端1包括处理器10、缓存(英文:Cache)11、本地内存(英文:Main Memory)12。处理器10能够访问缓存11和本地内存12,以及终端的存储器(英文:storage),需要说明的是,图1和图2中并未示出终端的存储器。可选的,终端为计算机,终端的存储器为硬盘。
处理器10包括至少一个处理器核101(图1中以处理器包括两个处理器核101为例), 每个处理器核101包括至少一个线程1011(图1中一每个处理器核包括两个线程1011为例)。需要说明的是,每个处理器核101内还包括寄存器。需要说明的是,处理器10包括至少两个线程1011,且本发明实施例中所说的线程均为硬件线程。
缓存11为多级缓存,当多个处理器核101共享某一级缓存,或者当某一处理器核101中的多个线程1011共享某一级缓存(同时访问该某一级缓存)时,该级缓存为共享缓存A,多级缓存中存在至少一个共享缓存A。例如,多级缓存包括:一级缓存、二级缓存和三级缓存,一级缓存(英文:level 1 Cache;简称:L1 Cache)由某个处理器核独占,二级缓存(英文:level 2 Cache;简称:L2 Cache)在多个处理器核之间共享,三级缓存(英文:level3 Cache;简称:L3 Cache)由所有处理器核共享。在多线程处理器中,L1 Cache也能够由同一个处理器核内的多个线程共享。
需要说明的是,本发明实施例所实用的场景中的缓存11包括共享缓存A,访问该共享缓存A的线程的个数大于或等于二。因此,当处理器10包括一个处理器核101时,该一个处理器核101包括多个线程1011;当每个处理器核101包括一个线程1011时,该处理器10包括至少两个处理器核101。
请参考图2,共享缓存A包括:缓存管理器121、易失性存储器122和非易失性存储器(英文:Non-Volatile Memory;简称:NVM)123。缓存管理器121与易失性存储器122、非易失性存储器123以及共享该缓存管理器所在的共享缓存的多个线程相连接,缓存管理器121用于管理多个线程访问共享缓存。
示例的,易失性存储器122为静态随机存取存储器(英文:Static RandomAccess Memory;简称:SRAM),NVM 123为闪存(英文:Flash EEPROM;简称:Flash)、相变存储器(英文:Phase Change Memory;简称:PCM)、自旋扭矩转换磁阻存储记忆体(英文:Spin Transfer Torque Magnetoresistive Random Access Memory;简称:STT-MRAM)或铁电随机存取记忆体(英文:Ferroelectric RandomAccess Memory;简称:FeRAM)等。其中,PCM是利用相变材料在晶态和非晶态的不同导电特性实现存储的非易失性,STT-MRAM是利用放大了的隧道效应实现磁存储器,具有密度高、访问时间短、耗电低和非易失的特点,FRAM是利用铁电薄膜的双稳态极化特性实现的非易失存储特性。
相关技术中,在共享缓存内的某一数据长时间未被访问时,该数据就会被其他数据替换。当某一线程在执行需要耗时较长的操作(如数据丢失操作)时,该线程原先在共享缓存中访问的数据会由于长时间没有被访问,而被其他线程的数据替换,发生线程间的数据污染。为了防止线程间的数据污染,将共享缓存划分成若干个易失性缓存分区,并且设置不同的线程对应不同的易失性缓存分区,也即,一个线程只能访问该线程对应的易失性缓存分区。当某一线程执行耗时较长的操作时,该线程对应的易失性缓存分区禁止被其他线程访问,该线程在对应的易失性缓存分区中访问的数据并不会被其他线程的数据替换。
图3为相关技术提供的一种共享缓存的结构示意图,图4为相关技术提供的另一种共享缓存的结构示意图,其中,图3中按照共享缓存B中的路(英文:way)将共享缓存11划分为两个易失性缓存分区B1,其中的一个易失性缓存分区B1包括一个路(路1)对应的存储区域,另一个易失性缓存分区B1包括三个路(路2、路3和路4)对应的存储区域。 图4中按照共享缓存B中的行(英文:row)将共享缓存B划分为两个易失性缓存分区B2,其中的一个易失性缓存分区B2包括行1至行m对应的存储区域,另一个易失性缓存分区B2包括行m+1至行x对应的存储区域,m为大于1且小于x的整数,行x为该共享缓存B中的最后一个row。当线程1和线程2共享该共享缓存B时,该线程1能够访问图3中包括一个路对应的存储区域的易失性缓存分区B1,或图4中包括行1至行m对应的存储区域的易失性缓存分区B2;线程2能够访问图3中包括三个路对应的存储区域的易失性缓存分区B1,或图4中包括行m+1至行x对应的存储区域的易失性缓存分区B2。但是,在某一线程执行耗时较长的操作时,该某一线程对应的易失性缓存分区禁止被其他线程访问,且此时该某一线程也并未访问该易失性缓存分区,使得该易失性缓存分区无法被有效利用,因此,处理器的缓存利用率较低,处理器的性能较差。
请继续参考图2,为了防止线程间数据的污染,本发明实施例中的易失性存储器122也划分为多个易失性缓存分区1221(图2中以易失性存储器包括两个易失性缓存分区1221为例),如根据易失性存储器中的路或行划分得到多个易失性缓存分区1221。该多个易失性缓存分区1221中的每个易失性缓存分区1221锁定一个线程,且任意两个易失性缓存分区1221锁定的线程不同,任意两个易失性缓存分区1221的容量可以相同,也可以不同。可选的,本发明实施例中的非易失性存储器123也划分为与易失性缓存分区耦合的多个非易失性缓存分区1231(图2中以非易失性存储器包括两个非易失性缓存分区为例),且每个非易失性缓存分区与一个非易失性缓存分区耦合,任意两个易失性缓存分区耦合的非易失性缓存分区不同。
图5为本发明实施例提供的一种缓存管理器121的结构示意图,该缓存管理器121包括:至少一个发射模块1211,至少一个接收模块1212,至少一个处理模块1213,至少一个存储模块1214,以及至少一个总线1215,发射模块、接收模块、处理模块、存储模块通过总线相连接。处理模块1213用于执行存储模块1214中存储的可执行模块,例如计算机程序。在一些实施方式中,存储模块1214存储了程序12141,程序12141能够被处理模块1213执行。
图6为本发明实施例提供的一种缓存管理方法的方法流程图,该缓存管理方法用于图2中的缓存管理器121,该缓存管理方法能够被图5中的处理模块1213执行程序12141来实现。
需要说明的是,第一易失性缓存分区为易失性存储器中的至少两个易失性缓存分区中的任一分区,第一非易失性缓存分区与非易失性存储器中的第一易失性缓存分区耦合,且在图6所示的实施例中,第一非易失性缓存分区包括多个非易失缓存子区,每个非易失缓存子区的容量均大于或等于第一易失性缓存分区的容量,第一非易失性缓存分区的容量大于第一易失性缓存分区的容量。
如图6所示,该缓存管理方法包括:
步骤601、将第一易失性缓存分区分配给第一线程。
缓存管理器用于管理多个线程对共享缓存的访问,共享缓存包括易失性存储器和非易失性存储器,易失性存储器包括多个易失性缓存分区。在较多线程均需要访问易失性存储 器时,缓存管理器能够在该较多线程中筛选出多个线程,并将该多个易失性缓存分区一一锁定至筛选出来的多个线程,也即每个易失性缓存分区锁定一个线程,且任意两个易失性缓存分区锁定的线程不同。在步骤601中,缓存管理器可以将第一易失性缓存分区分配给第一线程,也即将第一易失性缓存分区锁定第一线程。
然后,缓存管理器就指示多个线程中的每个线程访问锁定的易失性缓存分区。如:缓存管理器能够向每个线程发送访问指示,该访问指示包括该线程锁定的易失性缓存分区的标识,线程在接收到访问指示后,就根据该访问指示对锁定的易失性缓存分区进行访问。此时,该多个线程中的每个线程仅仅能够访问锁定的易失性缓存分区,而无法访问其他易失性缓存分区,也即在第一线程占用第一易失性缓存分区期间,缓存管理器不允许其他线程访问第一易失性缓存分区,因此,保证了在多个线程访问易失性存储器时,不会发生线程间的数据的污染。
进一步的,缓存管理器在确定每个易失性缓存分区锁定的线程后,能够建立如表1所示的锁定列表,该锁定列表用于记录每个易失性缓存分区的标识,以及每个易失性缓存分区锁定的线程的标识。如易失性缓存分区C1与线程F1锁定,易失性缓存分区C2与线程F2锁定,易失性缓存分区C3与线程F3锁定,易失性缓存分区C4与线程F4锁定,易失性缓存分区C5与线程F5锁定。需要说明的是,表1中仅仅是示例性的对锁定列表进行了举例说明,实际应用中,锁定列表可以与表1不同,本发明实施例对此不作限定。
表1
易失性缓存分区 线程
C1 F1
C2 F2
C3 F3
C4 F4
C5 F5
步骤602、判断所述第一线程是否需要执行长耗时操作。若第一线程需要执行长耗时操作,则执行步骤603;若第一线程不需要执行长耗时操作,则执行步骤602。
在第一线程访问第一易失性缓存分区的过程中,第一线程能够读取第一易失性缓存分区上存储的数据,或者,修改第一易失性缓存分区上存储的数据。当第一线程需要执行长耗时操作时,该缓存管理器能够接收到第一线程发送的长耗时操作指示,缓存管理器能够根据该长耗时操作指示确定该第一线程需要执行长耗时操作。示例的,长耗时操作为操作时长大于预设时长阈值的操作,且第一线程在执行长延时操作期间不访问第一易失性缓存分区;可选的,该预设时长阈值为终端的时钟周期的100倍,长耗时操作为数据丢失处理(英文:Cache Missing)或访问输入输出(英文:In/Out;简称:I/O)设备的操作。
步骤603、将第一易失性缓存分区中的与第一线程相关的第一数据写入非易失性存储器。
缓存管理器在确定第一线程需要执行长耗时操作时,缓存管理器就能够确定此时第一线程会在较长时间段内不访问第一易失性缓存分区,缓存管理器还能够将第一易失性缓存分区上存储的与第一线程相关的第一数据备份至非易失性存储器。需要说明的是,非易失 性存储器也包括多个非易失性缓存分区,且多个非易失性缓存分区与多个易失性缓存分区耦合,任意两个易失性缓存分区耦合的非易失性缓存分区不同。在缓存管理器需要将第一数据备份至非易失性存储器时,该缓存管理器能够直接确定与该第一易失性缓存分区相耦合的第一非易失性缓存分区,并将该第一数据备份至第一非易失性缓存分区。
需要说明的是,在将第一数据备份至非易失性存储器的过程中,缓存管理器为了便于了解进行备份的第一数据的来龙去脉,缓存管理器能够将第一数据的相关信息进行记录。示例的,第一数据的相关信息包括:第一易失性缓存分区的标识、非易失存储标识和第一线程的标识,非易失存储标识用于指示第一数据在非易失性存储器内的存储位置。可选的,该第一数据的相关信息还包括:长耗时操作执行状态标识和数据恢复标识,且该长耗时操作执行状态标识为:用于指示长耗时操作未执行完毕的第一标识,该数据恢复标识为:用于指示数据未从非易失性存储器上写入至第一易失性缓存分区的第三标识。
进一步的,缓存管理器上预设有缓存列表,该缓存列表用于记录写入至非易失存储器的数据的相关信息。缓存管理器在记录第一数据的相关信息时,缓存管理器将该第一数据的相关信息写入该缓存列表。可选的,该缓存列表如表2所示,该第一数据的相关信息包括:第一易失性缓存分区的标识(C1)、用于指示第一非易失性缓存分区中第一非易失缓存子区的非易失存储标识(F1M1)、第一线程的标识(W1)、第一标识(0)以及第二标识(0)。该缓存列表上还记载有第三数据的相关信息,该第三数据为存储在第三易失性缓存分区上与第三线程相关的数据,且在第三线程执行长耗时操作时,缓存管理器将第三数据备份至第三非易失性缓存分区中的非易失缓存子区。该第三数据的相关信息包括:第三易失性缓存分区的标识(C3)、用于指示第三非易失性缓存分区中非易失缓存子区的非易失存储标识(F3M1)、第三线程的标识(W3)、第一标识(0)以及第二标识(0)。
表2
Figure PCTCN2017075132-appb-000001
示例的,缓存管理器能够根据记录的数据的相关信息,在第一非易失性缓存分区的多个非易失缓存子区中选取一个空闲的非易失缓存子区(在将第一数据写入职非易失性存储器之前,缓存管理器记录的数据的相关信息中并不包含用于指示该空闲的非易失缓存子区的非易失存储标识)作为第一非易失缓存子区,并将第一数据写入至第一非易失性缓存分区的多个非易失缓存子区中空闲的第一非易失缓存子区,并在第一数据的相关信息中增加用于指示第一非易失缓存子区的非易失存储标识。
需要说明的是,第一易失性缓存分区上存储的数据包括多个数据块,每个数据块包括:有效数据位、标签和数据部分。示例的,第一易失性缓存分区上存储的第一数据包括:第一数据块和第二数据块,其中,数据块也称为Cache Line,第一数据块中的数据部分与第一线程无关,第一数据块中的有效数据位用于指示该第一数据块的数据部分与第一线程无关;第二数据块中的数据部分与第一线程相关,第二数据块中的有效数据位用于指示该第二数据块中的数据部分与第一线程相关。如:第一数据块中的有效数据位的内容为“0”,第二 数据块中的有效数据位的内容为“1”。可选的,缓存管理器在将第一数据备份至非易失性存储器时,缓存管理器可以将第一数据块中的有效数据位的内容和第二数据块备份至第一非易失性缓存分区。在将第一数据备份至第一非易失性缓存分区之后,缓存管理器还需要清除第一易失性缓存分区内的全部有效数据位的内容。也即,在备份第一数据时,缓存管理器仅仅备份有效数据块(第二数据块)和无效数据块(也即第一数据块)中的有效数据位的内容即可,且为了保证下一线程能够正常访问第一易失性缓存分区,缓存管理器需要清除第一易失性缓存分区内的全部有效数据位的内容。
可选的,第一线程发送给缓存管理器的长耗时操作指示中还包含:长耗时操作的标识,缓存管理器在接收到长耗时操作指示后,能够根据该长耗时操作指示中的长耗时操作的标识确定该长耗时操作是否为预设操作,可选的,预设操作包括数据丢失操作、访问输入输出设备操作和休眠操作中的至少一种操作。在该长耗时操作为预设操作时,缓存管理器才执行释放第一线程对第一易失性缓存分区的占用,以及将第一数据写入至第一非易失缓存子区上的步骤。在该长耗时操作并不是预设操作时,缓存管理器并不执行释放第一线程对第一易失性缓存分区的占用,以及将第一数据写入至第一非易失缓存子区上的步骤的步骤。
另外,本发明实施例中,缓存管理器将第一数据备份至非易失性存储器,当共享缓存突然断电时,存储在非易失性存储器上的第一数据也不会丢失,因此,能够进一步的防止第一数据的丢失。
需要说明的是,本发明实施例中,为了保证第一易失性缓存分区上的第一数据能够成功备份至第一非易失缓存子区,则需要保证第一非易失缓存子区的容量大于或等于第一易失性缓存分区的容量。可选的,为了对共享缓存内的存储空间进行有效的利用,可以设置第一非易失缓存子区的容量等于第一易失性缓存分区的容量。
步骤604、将第一易失性缓存分区分配给待访问易失性存储器的第二线程。
在步骤604之前,第一易失性缓存分区一直锁定第一线程,也即在步骤604之前该第一易失性缓存分区仅仅允许被第一线程访问,在第一线程执行长耗时操作的过程中,该第一线程并未访问第一易失性缓存分区。因此,在步骤604中缓存管理器能够释放第一线程对第一易失性缓存分区的占用,将第一易失性缓存分区设置为可被与第一线程不同的线程访问的状态,也即设置第一易失性缓存分区具有被与第一线程不同的线程访问的能力。需要说明的是,缓存管理器在设置该第一易失性缓存分区为可被其他线程访问的状态时,缓存管理器可以直接设置第一易失性缓存分区锁定第二线程,将第一易失性缓存分区与第一线程的锁定关系覆盖掉(也即步骤604的方式)。可选的,缓存管理器还可以直接解除第一易失性缓存分区与第一线程的锁定关系;或者,缓存管理器还可以直接解除第一易失性缓存分区与第一线程的锁定关系,并在解除第一易失性缓存分区与第一线程的锁定关系后,设置第一易失性缓存分区锁定第二线程,并指示第二线程访问第一易失性缓存分区。
示例的,缓存管理器在步骤604之前在该较多需要访问易失性存储器的线程中筛选了多个线程,该多个线程包括第一线程。在步骤604中,缓存管理器在该较多线程中未被筛选上的线程中再筛选一个线程作为第二线程。缓存管理器能够设置第一易失性缓存分区锁定第二线程,使得第二线程能够访问该第一易失性缓存分区。
如表1所示,当第一易失性缓存分区为易失性缓存分区C1,第一线程为线程F1时,如表3所示,在步骤604中缓存管理器将第一易失性缓存分区(易失性缓存分区F1)锁定 第二线程(C6)。
表3
易失性缓存分区 线程
C6 F1
C2 F2
C3 F3
C4 F4
C5 F5
进一步的,在将第一易失性缓存分区分配给第二线程后,缓存管理器还能够指示待访问易失性存储器的第二线程访问第一易失性缓存分区。
需要说明的是,在第二线程访问第一易失性缓存分区的过程中,缓存管理器也可以执行如步骤602至步骤604中的方法相似的方法。也即,若第二线程也需要执行长耗时操作,则缓存管理器也能够将第一易失性缓存分区上存储的与第二线程相关的第二数据,写入至第一非易失性缓存分区中空闲的第二非易失缓存子区,并在将第二数据写入至第二非易失缓存子区的过程中,记录第二数据的相关信息。然后,缓存管理器还能够将第一易失性缓存分区锁定至另一个线程(不是第一线程,也不是第二线程),并指示该另一个线程访问第一易失性缓存分区,如此循环往复。缓存管理器能够在预设的缓存列表中记录第二数据的相关信息,如表4所示,该第二数据的相关信息可以包括:第一易失性缓存分区的标识(C1)、用于指示第一非易失性缓存分区中第二非易失缓存子区的非易失存储标识(F1M2)、第二线程的标识(W2)、第一标识(0)和第三标识(0)。
表4
Figure PCTCN2017075132-appb-000002
步骤605、在第一线程执行完毕长耗时操作后,将第一易失性缓存分区分配给第一线程,并将非易失性存储器中的第一数据写入第一易失性缓存分区。
在第一线程执行完毕长耗时操作后,缓存管理器需要恢复第一线程对第一易失性缓存分区的访问。但是,由于本发明实施例中第一非易失性缓存分区包括多个非易失缓存子区,且每个非易失缓存子区的容量均大于或等于第一易失性缓存分区,因此,在访问第一易失性缓存分区的过程中执行长耗时操作的线程较多,写入至非易失存储器的数据较多,待恢复对第一易失性缓存分区的访问的线程也较多,所以缓存管理器需要依次恢复执行完毕长耗时操作的线程对第一易失性缓存分区的访问。
需要说明的是,在第一线程执行完毕长耗时操作后,该第一线程向缓存管理器发送长耗时操作执行完毕指示,以便于缓存管理器根据该长耗时操作执行完毕指示,将预设的缓存列表中包含第一线程的标识的相关信息中的第一标识更改为第二标识。其中,该第二标 识用于指示长耗时操作已执行完毕,此时,该包含第一线程的标识的相关信息中的长耗时操作执行状态标识为第二标识。示例的,如表5所示,缓存管理器能够将包含第一线程的标识(W1)的相关信息中的长耗时操作执行状态标识,由第一标识(0)更改为第二标识(1)。进一步的,若此时第二线程也恰巧执行完毕长耗时操作,则缓存管理器也能够将包含第二线程的标识(W2)的相关信息中的长耗时操作执行状态标识,由第一标识(0)更改为第二标识(1)。
表5
Figure PCTCN2017075132-appb-000003
示例的,缓存管理器根据预设的缓存列表,依次根据预设的缓存列表中包含第二标识的相关信息(如第一数据的相关信息、第二数据的相关信息),将包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区,分配给包含第二标识的相关信息中线程的标识所指示的线程,并将包含第二标识的相关信息中非易失存储标识所指示的存储位置上的数据,写入包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区。也即依次恢复包含第二标识的缓存数据中,线程的标识所指示的线程对易失性缓存分区的标识所指示的易失性缓存分区的访问。
示例的,图7示出了一种缓存管理器在恢复第一访问模块对第一易失存储单元的访问的方法流程图,如图7所示,该方法包括:
步骤6051a、判断第一易失性缓存分区是否被访问。若第一易失性缓存分区被访问,则执行步骤6051a;若第一易失性缓存分区未被访问,则执行步骤6052a。
在第一线程执行完毕长耗时操作后,缓存管理器需要首先判断第一易失性缓存分区是否正在被访问,若第一易失性缓存分区正在被访问,则缓存管理器继续执行步骤6051a,继续判断第一易失性缓存分区是否正在被访问。若第一易失性缓存分区并不是正在被访问,则缓存管理器需要执行步骤6052a。
也即,在第一线程执行完毕长耗时操作时,该第一易失性缓存分区上可能正被其他线程(如第二线程)访问,此时,为了防止其他线程的数据丢失,需要等待该第一易失性缓存分区处于空闲状态(也即第一易失性缓存分区未被访问)时,才能进行第一线程的访问恢复。需要说明的是,在其他线程已完成对第一易失性缓存分区的访问后,第一易失性缓存分区处于空闲状态,或者,在其他线程执行长耗时操作的过程中,第一易失性缓存分区也处于空闲状态。
步骤6052a、根据第一数据的相关信息,将第一易失性缓存分区分配给第一线程。执行步骤6053a。
缓存管理器能够从第一数据的相关信息中,读取第一易失性缓存分区的标识和第一线程的标识,进而确定第一易失性缓存分区和第一线程,并设置第一易失性缓存分区锁定第一线程(如表1所示),也即将第一易失性缓存分区分配给第一线程。
步骤6053a、根据第一数据的相关信息,将非易失性存储器上的第一数据写入至第一易失性缓存分区。
缓存管理器能够从第一数据的相关信息中,读取非易失存储标识,进而确定非易失存储标识所指示的第一非易失缓存子区,并获取第一非易失缓存子区上存储的第一数据,以及将该第一数据写入至第一易失性缓存分区。进一步的,在将第一数据写入第一易失性缓存分区时,缓存管理器也能够仅仅将无效数据块中的有效数据位的内容和有效数据块写入第一易失性缓存分区即可。
在将第一数据写入至第一易失性缓存分区后,该缓存管理器还可以指示第一数据的相关信息中,线程的标识所指示的第一线程继续访问第一易失性缓存分区。
需要说明的是,在将第一数据恢复至第一易失性缓存分区后,缓存管理器还将预设的缓存列表中,第一数据的相关信息中的第三标识更改为第四标识,该第四标识用于指示非易失存储器上的第一数据已写入至第一易失性缓存分区。这样一来,缓存管理器在执行完毕步骤6053a,也即恢复第一线程对第一易失性缓存分区的访问后,缓存管理器就能够根据该预设的缓存列表中的第一数据的相关信息中的第四标识,确定第一数据已恢复至第一易失性缓存分区。进而执行恢复下一个包含第二标识的数据的相关信息中,线程的标识所指示的线程对易失性缓存分区的标识所指示的易失性缓存分区的访问。从而防止了缓存管理器在将第一数据写入至第一易失性缓存分区后,再次将第一数据写入至第一易失性缓存分区。
进一步的,当第一线程执行的长耗时操作为数据丢失操作(英文:Cache missing)时,在步骤6053a之前,该缓存管理器能够接收到第一线程发送的第一线程在执行长耗时操作时找到的丢失数据,并在步骤6053a之后,将丢失数据写入第一易失性缓存分区,以保证在第一线程恢复对第一易失性缓存分区的访问后,第一线程能够正常访问第一易失性缓存分区。
需要说明的是,当图6所示的实施例中,第一非易失性缓存分区并不包括多个非易失缓存子区,且第一非易失性缓存分区的容量大于或等于第一易失性缓存分区的容量时,图6所示的实施例会发生如下变化:
首先,步骤603中缓存管理器将第一数据写入至非易失性存储器中的第一非易失性缓存分区,缓存管理器记录的第一数据的相关信息中非易失存储标识用于指示第一非易失性缓存分区的标识。进一步的,由于第一非易失性缓存分区并不包括多个非易失缓存子区,该第一非易失性缓存分区仅仅能够写入第一易失性缓存分区上存储的与一个线程相关的数据,因此,在访问第一易失性缓存分区时需要将数据写入至非易失性存储器的线程的个数并不是多个,所以第一数据的相关信息并不包括长耗时操作执行状态标识。
其次,在第二线程访问第一易失性缓存分区的过程中,缓存管理器无需执行步骤602至步骤604中的方法相似的方法,也即在第二线程也需要执行长耗时操作时,缓存管理器无需执行任何动作。
最后,在执行步骤605时,在第一线程执行完毕长耗时操作后,缓存管理器无需判断第一易失性缓存分区是否正在被访问,而是直接停止第二线程对第一易失性缓存分区的访问,并将第一易失性缓存分区分配给第一线程,并将非易失性存储器中的第一数据写入第一易失性缓存分区,恢复第一线程对第一易失性缓存分区的访问。
可选的,为了对共享缓存内的存储空间进行有效的利用,本发明实施例中设置第一非易失性缓存分区的容量等于第一易失性缓存分区的容量。
示例的,图8示出了另一种缓存管理器在恢复第一线程对第一易失性缓存分区的访问的方法流程图,如图8所示,该方法包括:
步骤6051b、根据第一数据的相关信息,将第一易失性缓存分区分配给第一线程。
缓存管理器在确定第一线程执行完毕长耗时操作后,缓存管理器就能够直接根据第一线程的标识,确定包含该第一线程的第一数据的相关信息,并从第一数据的相关信息中,读取第一易失性缓存分区的标识和第一线程的标识,进而确定第一易失性缓存分区和第一线程,并设置第一易失性缓存分区锁定第一线程,将第一易失性缓存分区分配给第一线程。此时,第二线程并未与第一易失性缓存分区锁定,第二线程无法访问第一易失性缓存分区。
进一步的,第二线程在向第一易失性缓存分区中写入数据时,第二线程将数据中的每个数据块中的标签写为已修改标签,终端中的缓存包括多个级别的存储器。在恢复第一线程对第一易失性缓存分区的访问之前,也即在执行步骤6051b之前,缓存管理器为了防止在恢复第一线程对第一易失性缓存分区的访问时,丢失第二线程的数据,缓存管理器还判断第二线程的写策略是否为写回(英文:Write Back)策略。若第二线程的写策略为写回策略,则将第一易失性缓存分区中具有已修改标签的数据备份至:内存(英文:Main Memory)或低于易失性存储器的缓存级别的存储器。
步骤6052b、根据第一数据的相关信息,将非易失性存储器上的第一数据写入至第一易失性缓存分区。
步骤6052b中缓存管理器恢复第一数据的具体步骤,参考图7所示实施例中的步骤6053a中的具体步骤,本发明实施例对此不做赘述。
可选的,在步骤605中,缓存管理器还可以将第二易失性缓存分区分配给第一线程,并将非易失性缓存器中的第一数据写入第二易失性缓存分区,第二易失性缓存分区为第一易失性缓存区分或第一易失性缓存分区之外的其他易失性缓存分区。也即,在第一线程执行完毕长延时操作后,缓存管理器可以将该第一数据从非易失性存储器上写入至:第一易失性缓存分区或者与第一易失性缓存分区不同的第二易失性缓存分区。进一步的,在将第一数据写入至第二易失性缓存分区后,缓存管理器还可以指示该第一线程访问第二易失性缓存分区,并在第二易失性缓存分区上继续访问第一数据。
可选的,非易失存储器包含至少两个非易失性缓存分区,当至少两个非易失性缓存分区并未与至少两个易失性缓存分区一一耦合时,在步骤602中缓存管理器可以将第一易失性缓存分区中的第一数据写入非易失性存储器中的第一非易失性缓存分区,记录第一线程与第一非易失性缓存分区的关联关系,第一非易失性缓存分区为至少两个非易失性缓存分区中的任一分区。在步骤605中缓存管理器可以将第二易失性缓存分区分配给第一线程,并根据第一线程与第一非易失性缓存分区的关联关系,将第一非易失性缓存分区中的第一数据写入第二易失性缓存分区。也即是,该缓存管理器可以将第一数据从第一易失缓存分区写入非易失性存储器中的任一分区,并在写入某一分区时,记录第一线程与第一非易失性缓存分区的关联关系,以便于在将第一数据从非易失性存储器上恢复至易失性存储器上时,能够确定该第一数据以及需要使用该第一数据的第一线程。
为了防止在突发状况下,易失性存储器上的数据丢失,相关技术中设计了如图9所示的铁电非易失触发器,该铁电非易失触发器包括:铁电非易失部分和互补金属氧化物半导体(英文:Complementary Metal Oxide Semiconductor;简称:CMOS)易失部分。铁电非易失部分上设置有信号输入端Din、信号输出端Dout、反相信号输出端
Figure PCTCN2017075132-appb-000004
时钟信号输入端Clk以及反相时钟信号输出端
Figure PCTCN2017075132-appb-000005
当铁电非易失触发器正常工作时,铁电非易失触发器中的COMS易失部分工作,当出现突发状况时,铁电非易失触发器会按照一定的时序产生第一信号RW、第二信号PL和第三信号PCH,使COMS易失部分上的数据备份到铁电非易失部分。但是相关技术中并未在线程需要执行长耗时操作时,使用该非易失部分对线程的数据进行备份。
综上所述,由于本发明实施例提供的缓存管理方法中,在第一线程占用第一易失性缓存分区期间,不允许其他线程访问第一易失性缓存分区,从而使得其他线程无法在第一线程访问第一易失性缓存分区时,其他线程无法访问该第一易失性缓存分区,防止了不同线程的数据之间互相污染。且在第一线程执行长耗时操作时,将第一数据写入到非易失性存储器,对第一数据进行了备份,并释放第一线程对第一易失性缓存分区的占用,也即在第一线程执行长耗时操作时,第一易失性缓存分区能够被其他线程访问,因此,能够提高终端的缓存利用率。
图10为本发明实施例提供的一种缓存管理器的结构示意图,该缓存管理器可以为图2中的缓存管理器,如图10所示,该缓存管理器100包括:
分配模块1001,用于将第一易失性缓存分区分配给第一线程,第一易失性缓存分区上存储有与第一线程相关的第一数据,在第一线程占用第一易失性缓存分区期间不允许其他线程访问第一易失性缓存分区,第一易失性缓存分区为至少两个易失性缓存分区中的任一分区;
第一判断模块1002,用于判断第一线程是否需要执行长延时操作,长延时操作是指操作时长大于预设时间阈值的操作,且第一线程在执行长延时操作期间不访问第一易失性缓存分区;
第一写入模块1003,用于在第一线程需要执行长延时操作时,将第一易失性缓存分区中的第一数据写入非易失性存储器,并释放第一线程对第一易失性缓存分区的占用。
综上所述,本发明实施例提供的了一种缓存管理器,由于在第一线程占用第一易失性缓存分区期间,不允许其他线程访问第一易失性缓存分区,从而使得其他线程无法在第一线程访问第一易失性缓存分区时,其他线程无法访问该第一易失性缓存分区,防止了不同线程的数据之间互相污染。且在第一线程执行长耗时操作时,第一写入模块1003将第一数据写入到非易失性存储器,对第一数据进行了备份,并释放第一线程对第一易失性缓存分区的占用,也即在第一线程执行长耗时操作时,第一易失性缓存分区能够被其他线程访问,因此,能够提高终端的缓存利用率。
可选的,每个易失性缓存分区锁定一个线程,且任意两个易失性缓存分区锁定的线程不同,每个易失性缓存分区不允许被未锁定的线程访问,
分配模块1001还用于:设置第一易失性缓存分区锁定第一线程;
第一写入模块1003还用于:
解除第一易失性缓存分区与第一线程的锁定关系;
和/或,
设置第一易失性缓存分区锁定待访问易失性存储器的第二线程。
可选的,非易失性存储器包括至少两个非易失性缓存分区,至少两个易失性缓存分区与至少两个非易失性缓存分区一一耦合,第一写入模块1003还用于:
将第一数据写入至与第一易失性缓存分区相耦合的第一非易失性缓存分区。
图11为本发明实施例提供的另一种缓存管理器的结构示意图,如图11所示,在图10的基础上,该缓存管理器100还包括:
记录模块1004,用于在将第一数据写入至第一非易失性缓存分区的过程中,记录第一数据的相关信息,第一数据的相关信息包括:第一易失性缓存分区的标识、非易失存储标识和第一线程的标识,非易失存储标识用于指示第一数据在非易失性存储器内的存储位置;
第二写入模块1005,用于在第一线程执行完毕长耗时操作后,根据第一数据的相关信息,将第一易失性缓存分区分配给第一线程,并将第一非易失性缓存分区中的第一数据写入第一易失性缓存分区。
可选的,第二写入模块1005还用于:设置第一数据的相关信息中,第一易失性缓存分区的标识所指示的第一易失性缓存分区,锁定第一线程的标识所指示的第一线程;将第一数据的相关信息中,第一非易失存储标识所指示的第一非易失性缓存分区内的第一数据,写入至第一易失性缓存分区的标识所指示的第一易失性缓存分区;指示第一数据的相关信息中,第一线程的标识所指示的第一线程,继续访问第一易失性缓存分区的标识所指示的第一易失性缓存分区。
图12为本发明实施例提供的又一种缓存管理器的结构示意图,如图12所示,在图11的基础上,该缓存管理器100还包括:
第二判断模块1006,用于判断第一易失性缓存分区是否被访问;
第二写入模块1005还用于:在第一易失性缓存分区未被访问时,根据第一数据的相关信息,将第一易失性缓存分区分配给第一线程,并将第一非易失性缓存分区中的第一数据写入第一易失性缓存分区。
可选的,第一非易失性缓存分区包括多个非易失缓存子区,每个非易失缓存子区的容量均大于或等于第一易失性缓存分区的容量,
第一写入模块1003还用于:将第一数据写入至多个非易失缓存子区中空闲的第一非易失缓存子区,在将第一数据写入至与第一易失性缓存分区相耦合的第一非易失性缓存分区之前,缓存管理器记录的数据的相关信息不包括:用于指示空闲的非易失缓存子区的非易失存储标识,第一数据的相关信息中的非易失存储标识用于指示第一非易失缓存子区;
记录模块1004还用于:在预设的缓存列表中记录第一数据的相关信息,第一数据的相关信息还包括:第一标识,第一标识用于指示长耗时操作未执行完毕,预设的缓存列表用于记录写入至非易失存储器的数据的相关信息;
图13为本发明实施例提供的又一种缓存管理器的结构示意图,如图13所示,在图12的基础上,该缓存管理器100还包括:第一更改模块1007,用于在第一线程将长耗时操作执行完毕后,将预设的缓存列表中包含第一线程的标识的第一数据的相关信息中的第一标识,更改为第二标识,第二标识用于指示长耗时操作已执行完毕;
第二写入模块1005还用于:依次根据预设的缓存列表中包含第二标识的相关信息,将包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区,分配给包含第二标识的相关信息中线程的标识所指示的线程,并将包含第二标识的相关信息中非易失存储标识所指示的存储位置上的数据,写入包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区。
可选的,图14为本发明另一实施例提供的一种缓存管理器的结构示意图,如图14所示,在图11的基础上,该缓存管理器100还包括:
第三判断模块1008,用于判断第二线程的写策略是否为写回策略;
第三写入模块1009,用于在写策略为写回策略时,将第一易失性缓存分区中具有已修改标签的数据写入:内存或者缓存级别低于易失性存储器的缓存级别的存储器。
可选的,第一易失性缓存分区的容量大于或等于第一非易失性缓存分区的容量。
可选的,第一数据的相关信息还包括:第三标识,第三标识用于指示非易失存储器上的第一数据还未写入至第一易失性缓存分区,图11所示的缓存管理器还包括:第二更改模块10010,用于将第一数据的相关信息中的第三标识更改为第四标识,第四标识用于指示非易失存储器上的第一数据已写入至第一易失性缓存分区。
可选的,长耗时操作为数据丢失操作,图11所示的缓存管理器还包括:接收模块10011,用于接收第一线程发送的丢失数据;
第四写入模块10012,用于将丢失数据写入第一易失性缓存分区。
可选的,第一数据包括:第一数据块和第二数据块,其中,第一数据块中的数据部分与第一线程无关,第二数据块中的数据部分与第一线程相关,第一写入模块1003还用于:将第一数据块中的有效数据位的内容和第二数据块写入至非易失性存储器;
图11至图14任一所示的缓存管理器还包括清除模块(图11至图14均未示出),用于清除第一易失性缓存分区内的全部有效数据位的内容。
可选的,长耗时操作为操作时长大于预设时长阈值的预设操作,预设操作包括:数据丢失操作、访问输入输出设备操作和休眠操作中的至少一种操作。
综上所述,本发明实施例提供的了一种缓存管理器,由于在第一线程占用第一易失性缓存分区期间,不允许其他线程访问第一易失性缓存分区,从而使得其他线程无法在第一线程访问第一易失性缓存分区时,其他线程无法访问该第一易失性缓存分区,防止了不同线程的数据之间互相污染。且在第一线程执行长耗时操作时,第一写入模块将第一数据写入到非易失性存储器,对第一数据进行了备份,并释放第一线程对第一易失性缓存分区的占用,也即在第一线程执行长耗时操作时,第一易失性缓存分区能够被其他线程访问,因此,能够提高终端的缓存利用率。
需要说明的是,本申请提供的方法实施例能够与相应的设备实施例相互参考,本申请对此不做限定。本申请提供的方法实施例步骤的先后顺序能够进行适当调整,步骤也能够根据情况进行相应增减,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化的方法,都应涵盖在本申请的保护范围之内,因此不再赘述。
本申请提供的缓存管理方法步骤的先后顺序可以进行适当调整,步骤也可以根据情况进行相应增减,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到 变化的方法,都应涵盖在本申请的保护范围之内,因此不再赘述。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (25)

  1. 一种缓存管理方法,其特征在于,共享缓存包括易失性存储器和非易失存储器,所述易失性存储器包括至少两个易失性缓存分区,所述方法包括:
    将第一易失性缓存分区分配给第一线程,所述第一易失性缓存分区上存储有所述第一线程相关的第一数据,在所述第一线程占用所述第一易失性缓存分区期间不允许其他线程访问所述第一易失性缓存分区,所述第一易失性缓存分区为所述至少两个易失性缓存分区中的任一分区;
    判断第一线程是否需要执行长延时操作,所述长延时操作是指操作时长大于预设时间阈值的操作,且所述第一线程在执行所述长延时操作期间不访问所述第一易失性缓存分区;
    若所述第一线程需要执行长延时操作,则将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,并释放所述第一线程对所述第一易失性缓存分区的占用。
  2. 根据权利要求1所述的方法,其特征在于,每个所述易失性缓存分区锁定一个线程,且任意两个所述易失性缓存分区锁定的线程不同,每个所述易失性缓存分区不允许被未锁定的线程访问,
    所述将第一易失性缓存分区分配给第一线程,包括:设置所述第一易失性缓存分区锁定所述第一线程;
    所述释放所述第一线程对所述第一易失性缓存分区的占用,包括:解除所述第一易失性缓存分区与所述第一线程的锁定关系;和/或,设置所述第一易失性缓存分区锁定待访问所述易失性存储器的第二线程。
  3. 根据权利要求2所述的方法,其特征在于,所述非易失性存储器包括至少两个非易失性缓存分区,所述至少两个易失性缓存分区与所述至少两个非易失性缓存分区一一耦合,所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,包括:
    将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    在将所述第一数据写入至所述第一非易失性缓存分区的过程中,记录所述第一数据的相关信息,所述第一数据的相关信息包括:所述第一易失性缓存分区的标识、非易失存储标识和所述第一线程的标识,所述非易失存储标识用于指示所述第一数据在所述非易失性存储器内的存储位置;
    在所述第一线程执行完毕所述长耗时操作后,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:
    设置所述第一数据的相关信息中,所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区,锁定所述第一线程的标识所指示的所述第一线程;
    将所述第一数据的相关信息中,所述第一非易失存储标识所指示的第一非易失性缓存分区内的第一数据,写入至所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区;
    指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区。
  6. 根据权利要求5所述的方法,其特征在于,
    在所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区之前,所述方法还包括:
    判断所述第一易失性缓存分区是否被访问;
    所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:
    在所述第一易失性缓存分区未被访问时,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。
  7. 根据权利要求6所述的方法,其特征在于,所述方法用于缓存管理器,所述第一非易失性缓存分区包括多个非易失缓存子区,每个所述非易失缓存子区的容量均大于或等于所述第一易失性缓存分区的容量,
    所述将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区,包括:将所述第一数据写入至所述多个非易失缓存子区中空闲的第一非易失缓存子区,在将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区之前,所述缓存管理器记录的数据的相关信息不包括:用于指示空闲的非易失缓存子区的非易失存储标识,所述第一数据的相关信息中的所述非易失存储标识用于指示所述第一非易失缓存子区;
    所述记录所述第一数据的相关信息,包括:在预设的缓存列表中记录所述第一数据的相关信息,所述第一数据的相关信息还包括:第一标识,所述第一标识用于指示所述长耗时操作未执行完毕,所述预设的缓存列表用于记录写入至非易失存储器的数据的相关信息;
    所述方法还包括:在所述第一线程将所述长耗时操作执行完毕后,将所述预设的缓存列表中包含所述第一线程的标识的所述第一数据的相关信息中的所述第一标识,更改为第二标识,所述第二标识用于指示所述长耗时操作已执行完毕;
    所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区,包括:依次根据所述预设的缓存列表中包含第二标识的相关信息,将所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区,分配给所述包含第二标识的相关信息中线程的标识所指示的线程,并将所述包含第二标识的相关信息中非易失存储标识所指示的存储位置上的数据,写入所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性 缓存分区。
  8. 根据权利要求4或5所述的方法,其特征在于,在所述根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区之前,所述方法还包括:
    判断所述第二线程的写策略是否为写回策略;
    若所述写策略为写回策略,则将所述第一易失性缓存分区中具有已修改标签的数据写入:内存或者缓存级别低于所述易失性存储器的缓存级别的存储器。
  9. 根据权利要求5所述的方法,其特征在于,所述第一数据的相关信息还包括:第三标识,所述第三标识用于指示所述非易失存储器上的所述第一数据还未写入至所述第一易失性缓存分区,
    在所述指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区之前,所述方法还包括:
    将所述第一数据的相关信息中的所述第三标识更改为第四标识,所述第四标识用于指示所述非易失存储器上的所述第一数据已写入至所述第一易失性缓存分区。
  10. 根据权利要求5所述的方法,其特征在于,所述长耗时操作为数据丢失操作,在所述指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区之前,所述方法还包括:
    接收所述第一线程发送的丢失数据;
    将所述丢失数据写入所述第一易失性缓存分区。
  11. 根据权利要求1至7任一所述的方法,其特征在于,
    所述第一数据包括:第一数据块和第二数据块,其中,所述第一数据块中的数据部分与所述第一线程无关,所述第二数据块中的数据部分与所述第一线程相关,
    所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,包括:
    将所述第一数据块中的有效数据位的内容和所述第二数据块写入至所述非易失性存储器;
    在所述将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器之后,所述方法还包括:
    清除所述第一易失性缓存分区内的全部有效数据位的内容。
  12. 一种缓存管理器,其特征在于,共享缓存包括易失性存储器和非易失存储器,所述易失性存储器包括至少两个易失性缓存分区,所述缓存管理器包括:
    分配模块,用于将第一易失性缓存分区分配给第一线程,所述第一易失性缓存分区上存储有所述第一线程相关的第一数据,在所述第一线程占用所述第一易失性缓存分区期间不允许其他线程访问所述第一易失性缓存分区,所述第一易失性缓存分区为所述至少两个易失性 缓存分区中的任一分区;
    第一判断模块,用于判断第一线程是否需要执行长延时操作,所述长延时操作是指操作时长大于预设时间阈值的操作,且所述第一线程在执行所述长延时操作期间不访问所述第一易失性缓存分区;
    第一写入模块,用于在所述第一线程需要执行长延时操作时,将所述第一易失性缓存分区中的所述第一数据写入所述非易失性存储器,并释放所述第一线程对所述第一易失性缓存分区的占用。
  13. 根据权利要求12所述的缓存管理器,其特征在于,每个所述易失性缓存分区锁定一个线程,且任意两个所述易失性缓存分区锁定的线程不同,每个所述易失性缓存分区不允许被未锁定的线程访问,
    所述分配模块还用于:设置所述第一易失性缓存分区锁定所述第一线程;
    所述第一写入模块还用于:解除所述第一易失性缓存分区与所述第一线程的锁定关系;和/或,设置所述第一易失性缓存分区锁定待访问所述易失性存储器的第二线程。
  14. 根据权利要求13所述的缓存管理器,其特征在于,所述非易失性存储器包括至少两个非易失性缓存分区,所述至少两个易失性缓存分区与所述至少两个非易失性缓存分区一一耦合,所述第一写入模块还用于:
    将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区。
  15. 根据权利要求14所述的缓存管理器,其特征在于,所述缓存管理器还包括:
    记录模块,用于在将所述第一数据写入至所述第一非易失性缓存分区的过程中,记录所述第一数据的相关信息,所述第一数据的相关信息包括:所述第一易失性缓存分区的标识、非易失存储标识和所述第一线程的标识,所述非易失存储标识用于指示所述第一数据在所述非易失性存储器内的存储位置;
    第二写入模块,用于在所述第一线程执行完毕所述长耗时操作后,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。
  16. 根据权利要求15所述的缓存管理器,其特征在于,所述第二写入模块还用于:
    设置所述第一数据的相关信息中,所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区,锁定所述第一线程的标识所指示的所述第一线程;
    将所述第一数据的相关信息中,所述第一非易失存储标识所指示的第一非易失性缓存分区内的第一数据,写入至所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区;
    指示所述第一数据的相关信息中,所述第一线程的标识所指示的所述第一线程,继续访问所述第一易失性缓存分区的标识所指示的所述第一易失性缓存分区。
  17. 根据权利要求16所述的缓存管理器,其特征在于,
    所述缓存管理器还包括:
    第二判断模块,用于判断所述第一易失性缓存分区是否被访问;
    所述第二写入模块还用于:
    在所述第一易失性缓存分区未被访问时,根据所述第一数据的相关信息,将所述第一易失性缓存分区分配给所述第一线程,并将所述第一非易失性缓存分区中的所述第一数据写入所述第一易失性缓存分区。
  18. 根据权利要求17所述的缓存管理器,其特征在于,所述第一非易失性缓存分区包括多个非易失缓存子区,每个所述非易失缓存子区的容量均大于或等于所述第一易失性缓存分区的容量,
    所述第一写入模块还用于:将所述第一数据写入至所述多个非易失缓存子区中空闲的第一非易失缓存子区,在将所述第一数据写入至与所述第一易失性缓存分区相耦合的第一非易失性缓存分区之前,所述缓存管理器记录的数据的相关信息不包括:用于指示空闲的非易失缓存子区的非易失存储标识,所述第一数据的相关信息中的所述非易失存储标识用于指示所述第一非易失缓存子区;
    所述记录模块还用于:在预设的缓存列表中记录所述第一数据的相关信息,所述第一数据的相关信息还包括:第一标识,所述第一标识用于指示所述长耗时操作未执行完毕,所述预设的缓存列表用于记录写入至非易失存储器的数据的相关信息;
    所述缓存管理器还包括第一更改模块,用于在所述第一线程将所述长耗时操作执行完毕后,将所述预设的缓存列表中包含所述第一线程的标识的所述第一数据的相关信息中的所述第一标识,更改为第二标识,所述第二标识用于指示所述长耗时操作已执行完毕;
    所述第二写入模块还用于:依次根据所述预设的缓存列表中包含第二标识的相关信息,将所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区,分配给所述包含第二标识的相关信息中线程的标识所指示的线程,并将所述包含第二标识的相关信息中非易失存储标识所指示的存储位置上的数据,写入所述包含第二标识的相关信息中易失性缓存分区的标识所指示的易失性缓存分区。
  19. 根据权利要求15或16所述的缓存管理器,其特征在于,所述缓存管理器还包括:
    第三判断模块,用于判断所述第二线程的写策略是否为写回策略;
    第三写入模块,用于在所述写策略为写回策略时,将所述第一易失性缓存分区中具有已修改标签的数据写入:内存或者缓存级别低于所述易失性存储器的缓存级别的存储器。
  20. 根据权利要求16所述的缓存管理器,其特征在于,所述第一数据的相关信息还包括:第三标识,所述第三标识用于指示所述非易失存储器上的所述第一数据还未写入至所述第一易失性缓存分区,所述缓存管理器还包括:
    第二更改模块,用于将所述第一数据的相关信息中的所述第三标识更改为第四标识,所述第四标识用于指示所述非易失存储器上的所述第一数据已写入至所述第一易失性缓存分区。
  21. 根据权利要求16所述的缓存管理器,其特征在于,所述长耗时操作为数据丢失操作, 所述缓存管理器还包括:
    接收模块,用于接收所述第一线程发送的丢失数据;
    第四写入模块,用于将所述丢失数据写入所述第一易失性缓存分区。
  22. 根据权利要求12至18任一所述的缓存管理器,其特征在于,
    所述第一数据包括:第一数据块和第二数据块,其中,所述第一数据块中的数据部分与所述第一线程无关,所述第二数据块中的数据部分与所述第一线程相关,
    所述第一写入模块还用于:将所述第一数据块中的有效数据位的内容和所述第二数据块写入至所述非易失性存储器;
    所述缓存管理器还包括清除模块,用于清除所述第一易失性缓存分区内的全部有效数据位的内容。
  23. 一种共享缓存,其特征在于,所述共享缓存包括:缓存管理器、易失性存储器和非易失性存储器,
    所述缓存管理器为权利要求12至22任一所述的缓存管理器;所述易失性存储器包括至少两个易失性缓存分区。
  24. 根据权利要求23所述的共享缓存,其特征在于,
    所述非易失性存储器包括至少两个非易失性缓存分区,所述至少两个易失性缓存分区与所述至少两个非易失性缓存分区一一耦合。
  25. 一种终端,其特征在于,所述终端包括:处理器和共享缓存,
    所述处理器包括至少两个线程;
    所述共享缓存为权利要求23或24所述的共享缓存。
PCT/CN2017/075132 2017-02-28 2017-02-28 缓存管理方法、缓存管理器、共享缓存和终端 WO2018157278A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780022195.1A CN109196473B (zh) 2017-02-28 2017-02-28 缓存管理方法、缓存管理器、共享缓存和终端
PCT/CN2017/075132 WO2018157278A1 (zh) 2017-02-28 2017-02-28 缓存管理方法、缓存管理器、共享缓存和终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/075132 WO2018157278A1 (zh) 2017-02-28 2017-02-28 缓存管理方法、缓存管理器、共享缓存和终端

Publications (1)

Publication Number Publication Date
WO2018157278A1 true WO2018157278A1 (zh) 2018-09-07

Family

ID=63369730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075132 WO2018157278A1 (zh) 2017-02-28 2017-02-28 缓存管理方法、缓存管理器、共享缓存和终端

Country Status (2)

Country Link
CN (1) CN109196473B (zh)
WO (1) WO2018157278A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941449A (zh) * 2019-11-15 2020-03-31 新华三半导体技术有限公司 Cache块处理方法、装置及处理器芯片
CN113849455B (zh) * 2021-09-28 2023-09-29 致真存储(北京)科技有限公司 一种基于混合式存储器的mcu及缓存数据的方法
CN114629748B (zh) * 2022-04-01 2023-08-15 日立楼宇技术(广州)有限公司 一种楼宇数据的处理方法、楼宇的边缘网关及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499028A (zh) * 2009-03-18 2009-08-05 成都市华为赛门铁克科技有限公司 一种基于非易失性存储器的数据保护方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728959B1 (en) * 1995-08-08 2004-04-27 Novell, Inc. Method and apparatus for strong affinity multiprocessor scheduling
CN101697198A (zh) * 2009-10-28 2010-04-21 浪潮电子信息产业股份有限公司 一种动态调整单一计算机系统内活动处理器数量的方法
CN103744623A (zh) * 2014-01-10 2014-04-23 浪潮电子信息产业股份有限公司 一种实现存储系统ssd缓存的数据智能降级的方法
CN104881324A (zh) * 2014-09-28 2015-09-02 北京匡恩网络科技有限责任公司 一种多线程下的内存管理方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596038A (zh) * 2021-08-02 2021-11-02 武汉绿色网络信息服务有限责任公司 数据包解析的方法和服务器
CN113596038B (zh) * 2021-08-02 2023-04-07 武汉绿色网络信息服务有限责任公司 数据包解析的方法和服务器

Also Published As

Publication number Publication date
CN109196473B (zh) 2021-10-01
CN109196473A (zh) 2019-01-11

Similar Documents

Publication Publication Date Title
CN108874701B (zh) 用于混合存储器中的写入和刷新支持的系统和方法
US9645895B2 (en) Data storage device and flash memory control method
US10606513B2 (en) Volatility management for non-volatile memory device
US7603525B2 (en) Flash memory management method that is resistant to data corruption by power loss
WO2018157278A1 (zh) 缓存管理方法、缓存管理器、共享缓存和终端
US8909853B2 (en) Methods and apparatus to share a thread to reclaim memory space in a non-volatile memory file system
US11347417B2 (en) Locking structures in flash memory
US20140129758A1 (en) Wear leveling in flash memory devices with trim commands
JP4808275B2 (ja) ネットワークブートシステム
US10387275B2 (en) Resume host access based on transaction logs
US10534551B1 (en) Managing write operations during a power loss
US9123443B2 (en) Memory device, memory management device, and memory management method
US10733101B2 (en) Processing node, computer system, and transaction conflict detection method
KR20200032527A (ko) 메모리 시스템의 동작 방법 및 메모리 시스템
US11074113B1 (en) Method and apparatus for performing atomic operations on local cache slots of a shared global memory
KR102462048B1 (ko) 이중 slc/qlc 프로그래밍 및 리소스 해제
CN111462790B (zh) 在存储服务器中进行基于管线的存取管理的方法及设备
US11579770B2 (en) Volatility management for memory device
US10452312B2 (en) Apparatus, system, and method to determine a demarcation voltage to use to read a non-volatile memory
US10872008B2 (en) Data recovery after storage failure in a memory system
CN110874273B (zh) 一种数据处理方法及装置
US10656846B2 (en) Operating method of memory system
KR102264757B1 (ko) 데이터 저장 장치 및 그것의 동작 방법
KR20090047880A (ko) 비휘발성 메모리의 관리 방법 및 관리 시스템
US11726669B2 (en) Coherency locking schemes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17898642

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17898642

Country of ref document: EP

Kind code of ref document: A1