US10922236B2 - Cascade cache refreshing - Google Patents


Info

Publication number
US10922236B2
Authority
US
United States
Prior art keywords
cache
determining
refreshing
refreshed
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/811,590
Other versions
US20200320011A1 (en)
Inventor
Yangyang Zhao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910269207.1A external-priority patent/CN110059023B/en
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Assigned to ALIBABA GROUP HOLDING LIMITED reassignment ALIBABA GROUP HOLDING LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZHAO, Yangyang
Assigned to ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD. reassignment ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD. ASSIGNMENT OF ASSIGNOR'S INTEREST Assignors: ALIBABA GROUP HOLDING LIMITED
Assigned to Advanced New Technologies Co., Ltd. reassignment Advanced New Technologies Co., Ltd. ASSIGNMENT OF ASSIGNOR'S INTEREST Assignors: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.
Publication of US20200320011A1
Application granted
Publication of US10922236B2
Legal status: Active
Anticipated expiration


Classifications

    • G06F 12/0891: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
    • G06F 16/172: Caching, prefetching or hoarding of files
    • G06F 12/0811: Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G06F 12/0815: Cache consistency protocols
    • H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/2885: Hierarchically arranged intermediate devices, e.g. for hierarchical caching
    • G06F 12/0868: Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F 12/0893: Caches characterised by their organisation or structure
    • G06F 12/0897: Caches characterised by their organisation or structure with two or more cache hierarchy levels

Definitions

  • the present specification relates to the field of computer technologies, and in particular, to cascade cache refreshing methods, systems, and devices.
  • a cache is a storage that can exchange data at a high speed.
  • a cache is a memory chip on a hard disk controller, and has a very high access rate.
  • the cache is a buffer between internal storage of the hard disk and an external interface. Because an internal data transmission rate of the hard disk is different from a transmission rate of the external interface, the cache serves as a buffer.
  • a size and a rate of the cache are important factors that directly affect the transmission rate of the hard disk, and can greatly improve overall performance of the hard disk.
  • when the hard disk accesses fragmentary data, data needs to be continuously exchanged between the hard disk and a memory. If there is a large cache, the fragmentary data can be temporarily stored in the cache, to reduce system load and improve the data transmission rate.
  • Embodiments of the present specification provide cascade cache refreshing methods, systems, and devices, to alleviate a problem in a cascade cache refreshing process in the existing technology.
  • An embodiment of the present specification provides a cascade cache refreshing method, where the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
  • sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed comprises determining whether each cache in the cascade cache needs to be refreshed.
  • the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refreshing when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
  • the method further includes: determining a cache priority based on the dependency relationship, and determining the cache refreshing sequence based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
  • the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining, based on a cache that the cache depends on, whether the cache needs to be refreshed; and/or determining whether the cache is externally triggered to be refreshed.
  • the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining whether the cache is externally triggered to be refreshed, and when the cache is not externally triggered to be refreshed, determining, based on a cache that the cache depends on, whether the cache needs to be refreshed.
  • the method further includes: adding a refreshing tag to a refreshing target when cache refreshing is externally triggered, where the determining whether the cache is externally triggered to be refreshed includes determining whether the cache has the refreshing tag; and canceling the refreshing tag after a cache with the refreshing tag is refreshed.
  • An embodiment of the present specification further proposes a cascade cache refreshing system, where the system includes: a refreshing sorting module, configured to determine a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and a refreshing module, configured to sequentially determine, based on the cache refreshing sequence, whether the caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
  • the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refresh when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
  • An embodiment of the present specification further proposes a device for processing information at user equipment, where the device includes a memory configured to store a computer program instruction and a processor configured to execute a program instruction, and when the computer program instruction is executed by the processor, the device is triggered to execute the method in the embodiments of the present specification.
  • the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache.
  • the following problem can be effectively alleviated: Data resource pressure is caused because cache data resources are repeatedly and centrally invoked, and cached data is inconsistent because data is changed during cache refreshing.
  • FIG. 1 and FIG. 5 are flowcharts illustrating an application program running method, according to an embodiment of the present specification
  • FIG. 2 and FIG. 3 are partial flowcharts illustrating an application program running method, according to an embodiment of the present specification
  • FIG. 4 is a schematic diagram illustrating a cache dependency relationship tree of a cascade cache, according to an embodiment of the present specification.
  • FIG. 6 and FIG. 7 are structural block diagrams of a system, according to an embodiment of the present specification.
  • cache refreshing is usually managed through isolation, and caches are refreshed and controlled independently of each other.
  • the present specification proposes a cascade cache refreshing method.
  • an application scenario in the existing technology is first analyzed.
  • a cascade cache is characterized by the following two features: (1) there are a plurality of caches; and (2) there is a dependency relationship between caches.
  • when cache 1 is refreshed, cache 2 that depends on cache 1 also needs to be refreshed; and after cache 2 is refreshed, cache 3 that depends on cache 2 also needs to be refreshed. That is, the cascade cache is refreshed in an association way.
  • each cache in the cascade cache is managed through isolation, it is equivalent to ignoring the dependency relationship between the caches and considering each cache as an independent cache. Consequently, after a cache is refreshed, an associated cache having a dependency relationship with the cache may not be refreshed synchronously. If data is changed during cache refreshing, a difference between basic data resources of the caches can be caused, and cached data can be inconsistent. In addition, the caches are refreshed and controlled independently of each other, and the cached data is also isolated from each other. Consequently, cluster refreshing data resources are repeatedly and centrally invoked, and data resource performance is affected.
  • caches having a dependency relationship with each other can be refreshed synchronously
  • caches having a dependency relationship in the cascade cache are used as a whole, and are refreshed synchronously during refreshing.
  • the cascade cache is refreshed in an association way. As such, if a cache refreshing sequence is not considered when all the caches are refreshed synchronously, a cache that a certain cache depends on may be refreshed only after the cache is refreshed. As a result, the cache needs to be refreshed again to keep consistent with data cached by the cache that the cache depends on.
  • the dependency relationship between the caches in the cascade cache is often not a simple dependency chain, when different caches are triggered to be refreshed, caches that need to be refreshed synchronously and that have a dependency relationship are also different. If all caches are refreshed each time refreshing is triggered, a refreshing operation may be performed on a cache that does not need to be refreshed, and processing resources are wasted.
  • the caches in the cascade cache are not refreshed randomly. Instead, a fixed refreshing sequence is determined based on the dependency relationship between the caches, and the caches are refreshed in a refreshing round based on the refreshing sequence. In addition, in a refreshing process, it is first determined whether the cache needs to be refreshed in a current refreshing round. If the cache does not need to be refreshed, the cache is not refreshed in the current refreshing round.
  • the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
  • the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache.
  • the following problem can be effectively alleviated: data resource pressure that is caused because cache data resources are repeatedly and centrally invoked, and inconsistent cached data that is caused because data is changed during cache refreshing.
  • the method includes the following steps:
  • the method further includes: obtaining a cache definition of the cascade cache, and determining the dependency relationship between the caches in the cascade cache based on the cache definition.
  • the cache refreshing sequence needs to be determined only when the cascade cache is initialized, and the cache refreshing sequence is stored after it is obtained. In a subsequent cache refreshing process, the stored cache refreshing sequence only needs to be invoked, without repeating the step of generating the cache refreshing sequence each time cache refreshing is performed.
  • the cache refreshing sequence is determined for all caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between all the caches in the cascade cache, and the cache refreshing sequence includes each cache in the cascade cache.
  • the cache refreshing sequence is determined for a part of caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between a part of caches in the cascade cache, and the cache refreshing sequence includes a part of caches in the cascade cache.
  • whether the caches need to be refreshed is determined for each cache in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a cache ranking first in the cache refreshing sequence, whether each cache in the cache refreshing sequence needs to be refreshed. If the cache refreshing sequence includes all the caches in the cascade cache, in a process of determining whether the caches need to be refreshed, it is determined whether each cache in the cascade cache needs to be refreshed.
  • whether the caches need to be refreshed is determined for a part of caches in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a certain cache in the cache refreshing sequence, whether the cache and each cache or certain caches following the cache need to be refreshed.
  • the cache refreshing sequence is stored after the cache refreshing sequence is obtained. At the same time, it is monitored whether the cache definition of the cascade cache is updated. If the cache definition of the cascade cache is updated, a new cache refreshing sequence is generated based on an updated cache definition, and the originally stored cache refreshing sequence is updated.
  • S 233 is performed to refresh the cache, and the method proceeds to step S 234 .
  • If the target cache does not need to be refreshed, the method directly jumps to step S 234 .
  • In step S 232 , refreshing determining is performed.
  • If no, the method jumps to step S 231 .
  • the cascade cache is refreshed in an association way.
  • For example, cache 2 depends on cache 1, and cache 3 depends on cache 2. If cache 1 in turn depends on cache 3, a circular dependency relationship chain (cache 1-cache 2-cache 3-cache 1) is formed. If any cache on the circular dependency relationship chain is refreshed, an infinite refreshing sequence is formed. Consequently, refreshing may not be stopped, and an execution conflict may be generated.
  • the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refresh when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
  • the method includes the following steps:
  • S 321 is performed to stop cache refreshing and output alarm information.
  • S 330 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
  • the following steps are performed: obtaining each cache dependency sequence chain based on the cache dependency relationship; and determining whether a same cache identifier exists in the cache dependency sequence chain; and if yes, determining that a circular dependency conflict exists in the cache cascade relationship.
  • a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
  • formed cache dependency sequence chains are as follows: cache D-cache B-cache A-cache C; cache D-cache B-cache A-cache E; cache B-cache A-cache C; cache B-cache A-cache E; cache B-cache F; cache A-cache C; and cache A-cache E.
  • For example, if cache C in turn depends on cache B, the following cache dependency sequence chain is formed: cache D-cache B-cache A-cache C-cache B. Because cache B appears twice on the cache dependency sequence chain, a circular dependency conflict is caused.
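The duplicate-identifier check on cache dependency sequence chains described above can be sketched as follows. This is a minimal Python sketch; the function name and the dictionary encoding of the dependency relationship are illustrative assumptions, not part of the specification. Each chain is expanded from a start cache, and a conflict is reported when the same cache identifier appears twice on one chain.

```python
def find_cycle(deps):
    """deps maps a cache identifier to the caches it depends on."""
    def expand(cache, chain):
        if cache in chain:
            return chain + [cache]          # same identifier twice: conflict
        for dep in deps.get(cache, []):
            bad = expand(dep, chain + [cache])
            if bad:
                return bad
        return None

    for start in deps:                      # use every cache as a start point
        bad = expand(start, [])
        if bad:
            return bad                      # the offending dependency chain
    return None

# Acyclic example from the text: A depends on C,E; B on A,F; D on B,A.
acyclic = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
assert find_cycle(acyclic) is None

# Adding "C depends on B" forms the chain D-B-A-C-B described above.
cyclic = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"], "C": ["B"]}
assert find_cycle(cyclic) is not None
```

When a chain is returned, refreshing can be terminated and the chain emitted as alarm information, matching the termination behavior the method describes.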
  • a cache priority is determined based on the dependency relationship, and the cache refreshing sequence is determined based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
  • the cache refreshing sequence is formed in descending order of priorities.
  • a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
  • a formed cache dependency sequence tree is shown in FIG. 4 .
  • the following can be obtained based on the cache dependency sequence tree:
  • cache C, cache E, and cache F do not depend on another cache
  • cache C, cache E, and cache F have a highest priority (which is set to 1); because cache A depends on cache C and cache E, cache A has a lower priority than cache C and cache E, and because the priorities of both cache C and cache E are 1, the priority of cache A is 2; because cache B depends on cache A and cache F, cache B has a lower priority than cache A and cache F, and because the priority of cache A is 2 and the priority of cache F is 1, to ensure that cache B has a lower priority than cache A and cache F, the priority of cache B is 3; and because cache D depends on cache B and cache A, cache D has a lower priority than cache B and cache A, and because the priority of cache A is 2 and the priority of cache B is 3, to ensure that cache D has a lower priority than cache A and cache B, the priority of cache D is 4.
  • The cache refreshing sequence is formed in descending order of priorities: cache C-cache E-cache F-cache A-cache B-cache D.
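The priority rule above can be sketched as follows. This is a hedged Python sketch (the function name and data encoding are assumptions): a cache with no dependencies gets priority 1 (the highest), every other cache gets a priority one greater than the maximum priority among the caches it depends on, and the sequence lists caches from highest priority (smallest number) to lowest.

```python
def refresh_sequence(caches, deps):
    """Order caches so every cache refreshes after all caches it depends on."""
    priority = {}

    def prio(cache):
        if cache not in priority:
            needs = deps.get(cache, [])
            # No dependencies: highest priority 1; otherwise one below the
            # lowest-priority (largest-numbered) dependency.
            priority[cache] = 1 if not needs else 1 + max(prio(d) for d in needs)
        return priority[cache]

    # Python's sort is stable, so equal-priority caches keep their input order.
    return sorted(caches, key=prio)

caches = ["A", "B", "C", "D", "E", "F"]
deps = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
print(refresh_sequence(caches, deps))
# -> ['C', 'E', 'F', 'A', 'B', 'D']
```

This reproduces the sequence stated in the text (priorities C=E=F=1, A=2, B=3, D=4); it is one way to realize the rule, equivalent to a topological ordering of the dependency graph.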
  • When it is determined whether a cache needs to be refreshed, it is determined whether the cache is externally triggered to be refreshed. When the cache is not externally triggered to be refreshed, it is determined, based on a cache that the cache depends on, whether the cache needs to be refreshed.
  • a refreshing tag is added to a refreshing target when cache refreshing is externally triggered.
  • When it is determined whether the cache is externally triggered to be refreshed, it is determined whether the cache has the refreshing tag. Further, the refreshing tag is canceled after a cache with the refreshing tag is refreshed.
  • When it is determined, based on the cache that the cache depends on, whether the cache needs to be refreshed, a first cache needs to be refreshed when there is a cache whose refreshing time is later than a refreshing time of the first cache among all caches that the first cache depends on.
  • Before it is sequentially determined, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed, it is first determined whether refreshing determining needs to be performed for the cascade cache. If no refreshing determining needs to be performed for the cascade cache, there is no need to sequentially determine, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed.
  • When the cascade cache is externally triggered to be refreshed, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if any cache in the cascade cache needs to be refreshed based on an external command, it is determined that refreshing determining needs to be performed for the cascade cache.
  • a refreshing time point is predetermined for the cascade cache. If a current moment satisfies the refreshing time point predetermined for the cascade cache, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if it is predetermined that one round of refreshing is performed for the cascade cache at an interval of 10 minutes, refreshing determining needs to be performed for the cascade cache at an interval of 10 minutes.
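The two trigger conditions above (an external trigger, or a predetermined refreshing time point being reached) can be sketched as follows. This is a minimal Python sketch; the function name and the second-based interval are illustrative assumptions, with the 10-minute interval taken from the example.

```python
REFRESH_INTERVAL = 10 * 60          # seconds; the 10-minute interval from the example

def needs_refresh_round(externally_triggered, now, last_round_time):
    """Decide whether refreshing determining must run for the cascade cache."""
    # Either condition alone is sufficient to start a refreshing round.
    return externally_triggered or (now - last_round_time >= REFRESH_INTERVAL)

assert needs_refresh_round(True, 0, 0)         # external trigger: always run
assert not needs_refresh_round(False, 300, 0)  # 5 minutes elapsed: too soon
assert needs_refresh_round(False, 600, 0)      # 10 minutes elapsed: run a round
```

In practice this check would gate the per-cache traversal, so that invoking the stored cache refreshing sequence is skipped entirely when neither condition holds.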
  • the method includes the following steps:
  • S 521 is performed to stop cache refreshing and output alarm information.
  • S 530 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
  • If yes, the method jumps to step S 540 . If no, the method returns to step S 560 .
  • S 552 is performed to refresh the cache and cancel the refreshing tag.
  • S 555 is performed to traverse a refreshing time of a cache that the cache depends on and determine whether the refreshing time of the cache that the cache depends on is later than the refreshing time of the cache.
  • S 556 is performed to refresh the cache and jump to step S 554 .
  • If no, the method directly jumps to step S 541 .
  • If yes, this round of cache refreshing ends, and the method jumps to step S 560 .
  • If no, the method jumps to step S 540 .
  • In step S 560 , it is monitored whether cache refreshing is externally triggered and/or whether the current moment satisfies the refreshing time point of the cascade cache. If cache refreshing is externally triggered and/or the current moment satisfies the refreshing time point of the cascade cache, it is determined that the cascade cache needs to be refreshed.
  • In step S 540 , a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
  • a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
  • The cache refreshing sequence is: cache C-cache E-cache F-cache A-cache B-cache D.
  • The cache refreshing sequence is invoked, all the caches are traversed based on the cache refreshing sequence, it is sequentially determined whether all the caches need to be refreshed, and a cache that needs to be refreshed is refreshed.
  • Cache C has a refreshing tag, so cache C needs to be refreshed; cache C is refreshed, and a refreshing time of cache C is updated.
  • Cache E has no refreshing tag and has no dependent cache, and cache E is not refreshed.
  • Cache F has no refreshing tag and has no dependent cache, and cache F is not refreshed.
  • Cache A has no refreshing tag, and cache A depends on cache C and cache E, where because the refreshing time of cache C is later than a refreshing time of cache A, it is determined that cache A needs to be refreshed, cache A is refreshed, and the refreshing time of cache A is updated.
  • Cache B has no refreshing tag, and cache B depends on cache A and cache F, where because the refreshing time of cache A is later than a refreshing time of cache B, it is determined that cache B needs to be refreshed, and cache B is refreshed.
  • Cache D has no refreshing tag, and cache D depends on cache B and cache A, where because both the refreshing time of cache B and the refreshing time of cache A are later than a refreshing time of cache D, it is determined that cache D needs to be refreshed, and cache D is refreshed.
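The refreshing round walked through above can be sketched as follows. This is a hedged Python sketch (the data structures and function name are assumptions): a cache is refreshed when it carries a refreshing tag, or when any cache it depends on has a refreshing time later than its own; refreshing updates the cache's refreshing time and cancels its tag.

```python
import itertools

clock = itertools.count(1)                  # monotonically increasing "time"

deps = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
refresh_time = {c: 0 for c in "ABCDEF"}     # no cache refreshed yet this round
tagged = {"C"}                              # cache C was externally triggered

def refresh_round(sequence):
    refreshed = []
    for cache in sequence:
        needs = cache in tagged or any(
            refresh_time[d] > refresh_time[cache] for d in deps.get(cache, [])
        )
        if needs:
            refresh_time[cache] = next(clock)   # refresh; update refreshing time
            tagged.discard(cache)               # cancel the refreshing tag
            refreshed.append(cache)
    return refreshed

result = refresh_round(["C", "E", "F", "A", "B", "D"])
print(result)
# -> ['C', 'A', 'B', 'D']: E and F are skipped, matching the walkthrough
```

Because the traversal follows the refreshing sequence, each cache sees up-to-date refreshing times for everything it depends on, so no cache ever needs a second refresh within the round.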
  • In step S 540 , when cache refreshing is externally triggered, a target object (a cache to which a refreshing tag is added) for which cache refreshing is externally triggered is used as a cache on which refreshing determining is initially performed, to finally determine whether the target object for which cache refreshing is externally triggered and all caches following the target object in the cache refreshing sequence need to be refreshed.
  • a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
  • a cache on which refreshing determining is initially performed is determined based on a refreshing trigger method.
  • the embodiments of the present specification further propose a cascade cache refreshing system.
  • the system includes: a refreshing sorting module 620 , configured to determine a cache refreshing sequence based on a dependency relationship; and a refreshing module 630 , configured to sequentially determine, based on the cache refreshing sequence, whether caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a cache needs to be refreshed, it is determined whether a cache following the cache in the cache refreshing sequence needs to be refreshed after the cache is refreshed.
  • the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refreshing when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
  • the system further includes a dependency relationship determining module 710 .
  • the dependency relationship determining module 710 is configured to obtain a cache definition, and determine the dependency relationship between caches in a cascade cache based on the cache definition.
  • the dependency relationship determining module 710 includes a circular dependency detection unit 711 .
  • The present disclosure further proposes a device for processing information at user equipment, where the device includes a memory configured to store computer program instructions and a processor configured to execute the program instructions, and when the computer program instructions are executed by the processor, the device is triggered to execute the method in the present disclosure.
  • Whether a technical improvement is a hardware improvement (for example, an improvement to a circuit structure, such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method procedure) can be clearly distinguished.
  • a designer usually programs an improved method procedure into a hardware circuit, to obtain a corresponding hardware circuit structure. Therefore, a method procedure can be improved by using a hardware entity module.
  • For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by a user through device programming.
  • The designer performs programming to “integrate” a digital system into a PLD without requesting a chip manufacturer to design and produce an application-specific integrated circuit chip.
  • programming is mostly implemented by using “logic compiler” software.
  • the logic compiler software is similar to a software compiler used to develop and write a program. Original code needs to be written in a particular programming language for compilation. The language is referred to as a hardware description language (HDL).
  • HDLs such as the Advanced Boolean Expression Language (ABEL), the Altera Hardware Description Language (AHDL), Confluence, the Cornell University Programming Language (CUPL), HDCal, the Java Hardware Description Language (JHDL), Lava, Lola, MyHDL, PALASM, and the Ruby Hardware Description Language (RHDL).
  • Currently, the very-high-speed integrated circuit hardware description language (VHDL) and Verilog are most commonly used.
  • a controller can be implemented by using any appropriate method.
  • the controller can be a microprocessor or a processor, or a computer-readable medium that stores computer readable program code (such as software or firmware) that can be executed by the microprocessor or the processor, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, or a built-in microprocessor.
  • Examples of the controller include but are not limited to the following microprocessors: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320.
  • the memory controller can also be implemented as a part of the control logic of the memory.
  • The controller can be considered as a hardware component, and an apparatus configured to implement various functions in the controller can also be considered as a structure in the hardware component. Alternatively, the apparatus configured to implement various functions can even be considered as both a software module implementing the method and a structure in the hardware component.
  • the system, apparatus, module, or unit illustrated in the previous embodiments can be implemented by using a computer chip or an entity, or can be implemented by using a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, or a wearable device, or a combination of any of these devices.
  • an embodiment of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure can use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. Moreover, the present disclosure can use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, etc.) that include computer-usable program code.
  • These computer program instructions can be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so the instructions executed by the computer or the processor of the another programmable data processing device generate a device for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions can be stored in a computer readable memory that can instruct the computer or the another programmable data processing device to work in a specific way, so the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus.
  • the instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • a computing device includes one or more processors (CPU), an input/output interface, a network interface, and a memory.
  • The memory can include a non-persistent memory, a random access memory (RAM), a non-volatile memory, and/or another form of computer readable medium, for example, a read-only memory (ROM) or a flash memory (flash RAM).
  • the computer readable medium includes persistent, non-persistent, movable, and unmovable media that can store information by using any method or technology.
  • the information can be a computer readable instruction, a data structure, a program module, or other data.
  • Examples of a computer storage medium include but are not limited to a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), another type of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or another memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or another optical storage, a cassette magnetic tape, a magnetic tape/magnetic disk storage or another magnetic storage device.
  • The computer storage medium can be used to store information accessible by the computing device. Based on the definition in the present specification, the computer readable medium does not include transitory media such as a modulated data signal and a carrier.
  • the present application can be described in the general context of computer executable instructions executed by a computer, for example, a program module.
  • the program module includes a routine, a program, an object, a component, a data structure, etc. executing a specific task or implementing a specific abstract data type.
  • The present application can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, the program module can be located in both local and remote computer storage media including storage devices.
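The tag-and-timestamp walkthrough in the bullets above (cache C carries a refreshing tag, and caches A, B, and D follow in cascade because a cache they depend on was refreshed later than they were) can be sketched in a few lines. This is a minimal illustration under assumptions, not the patented implementation: the `Cache` class, the logical clock, and all field names are invented for the example.

```python
# Sketch of one refreshing round over the sequence C, E, F, A, B, D.
# A cache is refreshed if it carries a refreshing tag, or if any cache it
# depends on has a refreshing time later than its own refreshing time.

class Cache:
    def __init__(self, name, deps=(), tagged=False):
        self.name = name
        self.deps = list(deps)    # caches this cache depends on
        self.tagged = tagged      # set when refreshing is externally triggered
        self.refresh_time = 0     # logical timestamp of the last refresh

_now = 0

def refresh(cache):
    global _now
    _now += 1
    cache.refresh_time = _now     # update the refreshing time
    cache.tagged = False          # cancel the refreshing tag after refreshing

def needs_refresh(cache):
    return cache.tagged or any(d.refresh_time > cache.refresh_time
                               for d in cache.deps)

C, E, F = Cache("C", tagged=True), Cache("E"), Cache("F")
A = Cache("A", deps=[C, E])
B = Cache("B", deps=[A, F])
D = Cache("D", deps=[B, A])

refreshed = []
for cache in [C, E, F, A, B, D]:  # the stored cache refreshing sequence
    if needs_refresh(cache):
        refresh(cache)
        refreshed.append(cache.name)

print(refreshed)                  # C is tagged; A, B, D follow in cascade
```

Because each cache is visited after every cache it depends on, one pass over the sequence is enough: by the time cache D is examined, the refreshing times of caches B and A are already final.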


Abstract

The present application discloses a cascade cache refreshing method, system, and device. The method in an embodiment of the present specification includes: determining a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of PCT Application No. PCT/CN2020/071159, filed on Jan. 9, 2020, which claims priority to Chinese Patent Application No. 201910269207.1, filed on Apr. 4, 2019, and each application is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
The present specification relates to the field of computer technologies, and in particular, to cascade cache refreshing methods, systems, and devices.
BACKGROUND
A cache is a storage that can exchange data at a high speed. In an example of an application scenario of a hard disk, a cache is a memory chip on a hard disk controller, and has a very high access rate. The cache is a buffer between internal storage of the hard disk and an external interface. Because an internal data transmission rate of the hard disk is different from a transmission rate of the external interface, the cache serves as a buffer. A size and a rate of the cache are important factors that directly affect the transmission rate of the hard disk, and can greatly improve overall performance of the hard disk. When the hard disk accesses fragmentary data, data needs to be continuously exchanged between the hard disk and a memory. If there is a large cache, the fragmentary data can be temporarily stored in the cache, to reduce system load and improve a data transmission rate.
SUMMARY
Embodiments of the present specification provide cascade cache refreshing methods, systems, and devices, to alleviate a problem in a cascade cache refreshing process in the existing technology.
The following technical solutions are used in the embodiments of the present specification.
An embodiment of the present specification provides a cascade cache refreshing method, where the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
In an embodiment of the present specification, sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed comprises determining whether each cache in the cascade cache needs to be refreshed.
In an embodiment of the present specification, the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refreshing when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
In an embodiment of the present specification, the method further includes: determining a cache priority based on the dependency relationship, and determining the cache refreshing sequence based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
In an embodiment of the present specification, the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining, based on a cache that the cache depends on, whether the cache needs to be refreshed; and/or determining whether the cache is externally triggered to be refreshed.
In an embodiment of the present specification, the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining whether the cache is externally triggered to be refreshed, and when the cache is not externally triggered to be refreshed, determining, based on a cache that the cache depends on, whether the cache needs to be refreshed.
In an embodiment of the present specification, the method further includes: adding a refreshing tag to a refreshing target when cache refreshing is externally triggered; and determining whether the cache is externally triggered to be refreshed, where it is determined whether the cache has the refreshing tag; and the method further includes: canceling the refreshing tag after a cache with the refreshing tag is refreshed.
In an embodiment of the present specification, it is determined, based on the cache that the cache depends on, whether the cache needs to be refreshed, where the first cache needs to be refreshed when there is a cache whose refreshing time is later than a refreshing time of the first cache among all caches that the first cache depends on.
An embodiment of the present specification further proposes a cascade cache refreshing system, where the system includes: a refreshing sorting module, configured to determine a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and a refreshing module, configured to sequentially determine, based on the cache refreshing sequence, whether the caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
In an embodiment of the present specification, the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refresh when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
An embodiment of the present specification further proposes a device for processing information at user equipment, where the device includes a memory configured to store computer program instructions and a processor configured to execute the program instructions, and when the computer program instructions are executed by the processor, the device is triggered to execute the method in the embodiments of the present specification.
The previous at least one technical solution used in the embodiments of the present specification can achieve the following beneficial effects: Based on the method in the embodiments of the present specification, the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache. Compared with the existing technology, when the cascade cache is refreshed based on the method in the embodiments of the present specification, the following problem can be effectively alleviated: Data resource pressure is caused because cache data resources are repeatedly and centrally invoked, and cached data is inconsistent because data is changed during cache refreshing.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings described here are intended to provide a further understanding of the present application, and constitute a part of the present application. The illustrative embodiments of the present application and descriptions thereof are intended to describe the present application, and do not constitute limitations on the present application. In the accompanying drawings:
FIG. 1 and FIG. 5 are flowcharts illustrating an application program running method, according to an embodiment of the present specification;
FIG. 2 and FIG. 3 are partial flowcharts illustrating an application program running method, according to an embodiment of the present specification;
FIG. 4 is a schematic diagram illustrating a cache dependency relationship tree of a cascade cache, according to an embodiment of the present specification; and
FIG. 6 and FIG. 7 are structural block diagrams of a system, according to an embodiment of the present specification.
DESCRIPTION OF EMBODIMENTS
To make the objectives, technical solutions, and advantages of the present application clearer, the following clearly and comprehensively describes the technical solutions of the present application with reference to specific embodiments and accompanying drawings of the present application. Clearly, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative efforts shall fall within the protection scope of the present application.
In the existing technology, cache refreshing is usually managed through isolation, and caches are refreshed and controlled independently of each other. However, because there is a dependency relationship between caches in a cascade cache, if refreshing is managed through isolation, cluster refreshing data resources are repeatedly and centrally invoked, and data resource performance can be affected. In addition, if data is changed during cache refreshing, a difference between basic data resources of the caches can be caused, and cached data can be inconsistent.
To alleviate the previous problem, the present specification proposes a cascade cache refreshing method. In the embodiments of the present specification, an application scenario in the existing technology is first analyzed. In an application scenario, a cascade cache is characterized by the following two features: (1) there are a plurality of caches; and (2) there is a dependency relationship between caches. In an application scenario, to ensure consistency between data cached by caches having a dependency relationship in the cascade cache, when cache 1 is refreshed, cache 2 that depends on cache 1 also needs to be refreshed. Further, after cache 2 is refreshed, cache 3 that depends on cache 2 also needs to be refreshed. That is, the cascade cache is refreshed in an association way.
If refreshing of each cache in the cascade cache is managed through isolation, it is equivalent to ignoring the dependency relationship between the caches and considering each cache as an independent cache. Consequently, after a cache is refreshed, an associated cache having a dependency relationship with the cache may not be refreshed synchronously. If data is changed during cache refreshing, a difference between basic data resources of the caches can be caused, and cached data can be inconsistent. In addition, the caches are refreshed and controlled independently of each other, and the cached data is also isolated from each other. Consequently, cluster refreshing data resources are repeatedly and centrally invoked, and data resource performance is affected.
As such, in an embodiment of the present specification, to ensure that caches having a dependency relationship with each other can be refreshed synchronously, caches having a dependency relationship in the cascade cache are used as a whole, and are refreshed synchronously during refreshing.
Further, the cascade cache is refreshed in an association way. As such, if a cache refreshing sequence is not considered when all the caches are refreshed synchronously, a cache that a certain cache depends on may be refreshed only after the cache is refreshed. As a result, the cache needs to be refreshed again to keep consistent with data cached by the cache that the cache depends on.
Further, because the dependency relationship between the caches in the cascade cache is often not a simple dependency chain, when different caches are triggered to be refreshed, caches that need to be refreshed synchronously and that have a dependency relationship are also different. If all caches are refreshed each time refreshing is triggered, a refreshing operation may be performed on a cache that does not need to be refreshed, and processing resources are wasted.
As such, in an embodiment of the present specification, the caches in the cascade cache are not refreshed randomly. Instead, a fixed refreshing sequence is determined based on the dependency relationship between the caches, and the caches are refreshed in a refreshing round based on the refreshing sequence. In addition, in a refreshing process, it is first determined whether the cache needs to be refreshed in a current refreshing round. If the cache does not need to be refreshed, the cache is not refreshed in the current refreshing round.
In an embodiment of the present specification, the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
Based on the method in the embodiments of the present specification, the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache. Compared with the existing technology, when the cascade cache is refreshed based on the method in the embodiments of the present specification, the following problems can be effectively alleviated: data resource pressure that is caused because cache data resources are repeatedly and centrally invoked, and inconsistent cached data that is caused because data is changed during cache refreshing.
The technical solutions provided in the embodiments of the present specification are described in detail below with reference to the accompanying drawings.
In an embodiment of the present specification, as shown in FIG. 1, the method includes the following steps:
S120. Determine a cache refreshing sequence based on a dependency relationship between caches in a cascade cache.
S130. Sequentially determine, based on the cache refreshing sequence, whether the caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
Further, in an embodiment of the present specification, the method further includes: obtaining a cache definition of the cascade cache, and determining the dependency relationship between the caches in the cascade cache based on the cache definition.
Further, in an embodiment of the present specification, the cache refreshing sequence needs to be determined only in an initial process of initializing the cascade cache, and the cache refreshing sequence is stored after the cache refreshing sequence is obtained. In a subsequent cache refreshing process, the stored cache refreshing sequence only needs to be invoked, without repeating a step of generating the cache refreshing sequence each time cache refreshing is performed.
Further, in an embodiment of the present specification, the cache refreshing sequence is determined for all caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between all the caches in the cascade cache, and the cache refreshing sequence includes each cache in the cascade cache.
Further, in an embodiment of the present specification, the cache refreshing sequence is determined for a part of caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between a part of caches in the cascade cache, and the cache refreshing sequence includes a part of caches in the cascade cache.
Further, in an embodiment of the present specification, whether the caches need to be refreshed is determined for each cache in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a cache ranking first in the cache refreshing sequence, whether each cache in the cache refreshing sequence needs to be refreshed. If the cache refreshing sequence includes all the caches in the cascade cache, in a process of determining whether the caches need to be refreshed, it is determined whether each cache in the cascade cache needs to be refreshed.
Further, in an embodiment of the present specification, whether the caches need to be refreshed is determined for a part of caches in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a certain cache in the cache refreshing sequence, whether the cache and each cache or certain caches following the cache need to be refreshed.
Further, in an embodiment of the present specification, the cache refreshing sequence is stored after the cache refreshing sequence is obtained. At the same time, it is monitored whether the cache definition of the cascade cache is updated. If the cache definition of the cascade cache is updated, a new cache refreshing sequence is generated based on an updated cache definition, and the originally stored cache refreshing sequence is updated.
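The store-and-regenerate behavior described above can be sketched as follows. The `RefreshSequenceStore` class, the `sort_fn` callback, and version-number change detection are assumptions for illustration; the embodiment does not prescribe a concrete storage or monitoring mechanism.

```python
# Sketch: the refreshing sequence is generated once and stored; it is only
# regenerated when the cache definition is updated, so the sorting step is
# not repeated on every refreshing round.

class RefreshSequenceStore:
    def __init__(self, sort_fn):
        self.sort_fn = sort_fn       # e.g. a priority-based topological sort
        self.version = None          # last cache definition version seen
        self.sequence = None         # stored cache refreshing sequence

    def get(self, definition, version):
        if version != self.version:  # cache definition was updated
            self.sequence = self.sort_fn(definition)
            self.version = version
        return self.sequence

calls = []
def fake_sort(definition):
    calls.append(list(definition))
    return sorted(definition)        # stand-in for the real sorting step

store = RefreshSequenceStore(fake_sort)
store.get(["B", "A"], version=1)
store.get(["B", "A"], version=1)     # same version: stored sequence reused
store.get(["B", "A", "C"], version=2)
print(len(calls))                    # sorting ran only twice
```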
In an embodiment of the present specification, as shown in FIG. 2, in a process of sequentially determining, based on the cache refreshing sequence, whether the caches need to be refreshed, and refreshing a cache that needs to be refreshed, the following steps are performed:
S231. Determine a target cache based on the cache refreshing sequence.
S232. Determine whether the target cache needs to be refreshed.
If the target cache needs to be refreshed, S233 is performed to refresh the cache, and the method proceeds to step S234.
If the target cache does not need to be refreshed, the method directly jumps to step S234.
S234. Determine whether step S232 (refreshing determining) is performed for all the caches in the cache refreshing sequence.
If yes, this round of cache refreshing ends.
If no, the method jumps to step S231.
Further, the cascade cache is refreshed in an association way. As such, assume that cache 2 depends on cache 1 and cache 3 depends on cache 2. On this assumption, further, if cache 1 depends on cache 3, a circular dependency relationship chain (cache 1-cache 2-cache 3-cache 1) is formed. If any cache on the circular dependency relationship chain is refreshed, an infinite refreshing sequence is formed. Consequently, refreshing may not be stopped, and an execution conflict may be generated.
As such, in an embodiment of the present specification, the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refresh when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
In an embodiment of the present specification, in a process of determining the dependency relationship between the caches in the cascade cache based on the cache definition, it is detected whether a circular dependency conflict exists.
In an embodiment of the present specification, as shown in FIG. 3, the method includes the following steps:
S310. Obtain the cache definition of the cascade cache, and determine the dependency relationship between the caches in the cascade cache based on the cache definition.
S320. Determine, based on the dependency relationship between the caches in the cascade cache, whether a circular dependency conflict exists.
If yes, S321 is performed to stop cache refreshing and output alarm information.
If no, S330 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
S340. Sequentially determine, based on the cache refreshing sequence, whether the caches need to be refreshed, and refresh the cache that needs to be refreshed.
In an embodiment of the present specification, in a process of determining the dependency relationship between the caches in the cascade cache based on the cache definition, the following steps are performed: obtaining each cache dependency sequence chain based on the cache dependency relationship; determining whether a same cache identifier exists in a cache dependency sequence chain; and if yes, determining that a circular dependency conflict exists in the cache cascade relationship.
In an example of an application scenario, a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
As such, formed cache dependency sequence chains are as follows: cache D-cache B-cache A-cache C; cache D-cache B-cache A-cache E; cache B-cache A-cache C; cache B-cache A-cache E; cache B-cache F; cache A-cache C; and cache A-cache E.
In the previous cache dependency sequence chains, because no cache dependency sequence chain includes two or more identical cache identifiers, no circular dependency conflict exists.
In the previous application scenario, if a dependency relationship, that is, cache C depends on cache B, is added for the cascade cache, the following cache dependency sequence chain is formed: cache D-cache B-cache A-cache C-cache B. Because cache B appears twice on the cache dependency sequence chain, a circular dependency conflict is caused.
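By way of illustration, the circular-dependency check described above can be sketched in a few lines. This is a minimal sketch, not the claimed implementation; the function name and the dependency-map representation (cache identifier mapped to the caches it depends on) are illustrative assumptions. A chain that revisits a cache identifier it has already passed through corresponds to a chain containing the same identifier twice.

```python
# Hypothetical sketch: walk every cache dependency sequence chain depth-first;
# a circular dependency conflict exists when any chain revisits an identifier.

def has_circular_conflict(depends_on):
    def walk(cache, seen):
        for dep in depends_on.get(cache, []):
            if dep in seen or walk(dep, seen | {dep}):
                return True
        return False
    return any(walk(c, {c}) for c in depends_on)

# The scenario above: cache A depends on C and E; B on A and F; D on B and A.
deps = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
print(has_circular_conflict(deps))   # → False: no chain repeats an identifier

# Adding "cache C depends on cache B" forms the chain D-B-A-C-B.
deps["C"] = ["B"]
print(has_circular_conflict(deps))   # → True: a circular conflict exists
```

In practice the check can equally be run while the chains are being assembled from the cache definition, as the specification describes.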
Further, in an embodiment of the present specification, in a process of determining the cache refreshing sequence, a cache priority is determined based on the dependency relationship, and the cache refreshing sequence is determined based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
In an embodiment of the present specification, the cache refreshing sequence is formed in descending order of priorities.
In an example of an application scenario, a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
As such, a formed cache dependency sequence tree is shown in FIG. 4. The following can be obtained based on the cache dependency sequence tree:
Because cache C, cache E, and cache F do not depend on another cache, cache C, cache E, and cache F have a highest priority (which is set to 1); because cache A depends on cache C and cache E, cache A has a lower priority than cache C and cache E, and because priorities of both cache C and cache E are 1, the priority of cache A is 2; because cache B depends on cache A and cache F, cache B has a lower priority than cache A and cache F, and because the priority of cache A is 2 and the priority of cache F is 1, to ensure that cache B has a lower priority than cache A and cache F, the priority of cache B is 3; and because cache D depends on cache B and cache A, cache D has a lower priority than cache B and cache A, and because the priority of cache A is 2 and the priority of cache B is 3, to ensure that cache D has a lower priority than cache A and cache B, the priority of cache D is 4.
Finally, the following cache refreshing sequence is formed in descending order of priorities: cache C-cache E-cache F-cache A-cache B-cache D.
In the previous cache refreshing sequence, because cache C, cache E, and cache F have a same priority, rankings of the three caches can be randomly changed.
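The priority assignment walked through above can be sketched as follows. This is an illustrative sketch under the stated rule only (a cache with no dependencies has priority 1; otherwise its priority is one more than the highest priority among its dependencies); the function and variable names are assumptions, not terms of the specification.

```python
# Hypothetical sketch of priority-based ordering: lower priority numbers
# are refreshed first, and caches sharing a priority may appear in any order.

def refresh_order(depends_on, caches):
    prio = {}
    def priority(c):
        if c not in prio:
            deps = depends_on.get(c, [])
            prio[c] = 1 if not deps else 1 + max(priority(d) for d in deps)
        return prio[c]
    return sorted(caches, key=priority)   # stable sort: ties keep input order

deps = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
order = refresh_order(deps, ["A", "B", "C", "D", "E", "F"])
print(order)   # → ['C', 'E', 'F', 'A', 'B', 'D']
```

The output matches the cache refreshing sequence derived above; because cache C, cache E, and cache F share priority 1, any permutation of those three at the front would be equally valid.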
Further, in an embodiment of the present specification, when it is determined whether a cache needs to be refreshed, it is determined, based on a cache that the cache depends on, whether the cache needs to be refreshed.
Further, in an embodiment of the present specification, when it is determined whether a cache needs to be refreshed, it is determined whether the cache is externally triggered to be refreshed.
Further, in an embodiment of the present specification, when it is determined whether a cache needs to be refreshed, it is determined whether the cache is externally triggered to be refreshed. When the cache is not externally triggered to be refreshed, it is determined, based on a cache that the cache depends on, whether the cache needs to be refreshed.
In an embodiment of the present specification, a refreshing tag is added to a refreshing target when cache refreshing is externally triggered. When it is determined whether the cache is externally triggered to be refreshed, it is determined whether the cache has the refreshing tag. Further, the refreshing tag is canceled after a cache with the refreshing tag is refreshed.
Further, in an embodiment of the present specification, when it is determined, based on the cache that the cache depends on, whether the cache needs to be refreshed, the first cache needs to be refreshed when there is a cache whose refreshing time is later than a refreshing time of the first cache among all caches that the first cache depends on.
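The time-based condition above reduces to a small predicate. The sketch below is an assumption-laden illustration: the timestamps are arbitrary integers standing in for refreshing times, and the names are not drawn from the specification.

```python
# Hypothetical predicate: a cache is to be refreshed when any cache it
# depends on carries a refreshing time later than the cache's own.

def needs_refresh(cache, depends_on, refresh_time):
    return any(refresh_time[d] > refresh_time[cache]
               for d in depends_on.get(cache, []))

times = {"A": 10, "C": 12, "E": 8}     # illustrative refreshing times
deps = {"A": ["C", "E"]}
print(needs_refresh("A", deps, times))  # → True: cache C is newer than cache A
```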
Further, in an embodiment of the present specification, before it is sequentially determined, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed, it is first determined whether refreshing determining needs to be performed for the cascade cache. If no refreshing determining needs to be performed for the cascade cache, there is no need to sequentially determine, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed.
In an embodiment of the present specification, when the cascade cache is externally triggered to be refreshed, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if any cache in the cascade cache needs to be refreshed based on an external command, it is determined that refreshing determining needs to be performed for the cascade cache.
In an embodiment of the present specification, a refreshing time point is predetermined for the cascade cache. If a current moment satisfies the refreshing time point predetermined for the cascade cache, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if it is predetermined that one round of refreshing is performed for the cascade cache at an interval of 10 minutes, refreshing determining needs to be performed for the cascade cache at an interval of 10 minutes.
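The predetermined refreshing time point in the 10-minute example amounts to an interval check. The following sketch is a hypothetical rendering of that check; the second-based clock and function name are assumptions made for illustration.

```python
# Hypothetical interval check: refreshing determining is performed for the
# cascade cache once the predetermined interval has elapsed.

INTERVAL = 600  # seconds; the 10-minute interval from the example

def due_for_refresh(now, last_round, interval=INTERVAL):
    """True when the current moment satisfies the refreshing time point."""
    return now - last_round >= interval

print(due_for_refresh(now=1200, last_round=500))   # → True  (700 s elapsed)
print(due_for_refresh(now=1200, last_round=700))   # → False (500 s elapsed)
```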
In an embodiment of the present specification, as shown in FIG. 5, the method includes the following steps:
S510. Obtain the cache definition of the cascade cache, and determine the dependency relationship between the caches in the cascade cache based on the cache definition.
S520. Determine, based on the dependency relationship between the caches in the cascade cache, whether a circular dependency conflict exists.
If yes, S521 is performed to stop cache refreshing and output alarm information.
If no, S530 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
S560. Determine whether to perform refreshing determining for the cascade cache.
If yes, the method jumps to step S540. If no, the method returns to step S560.
S540. Determine a target cache based on the cache refreshing sequence.
S551. Determine whether the target cache has a refreshing tag.
If the target cache has a refreshing tag, S552 is performed to refresh the cache and cancel the refreshing tag.
S554. Update a refreshing time, and jump to step S541.
If the target cache does not have a refreshing tag, S555 is performed to traverse the refreshing times of the caches that the target cache depends on and determine whether any of those refreshing times is later than the refreshing time of the target cache.
If yes, S556 is performed to refresh the cache and jump to step S554.
If no, the method directly jumps to step S541.
S541. Determine whether refreshing determining is performed for all the caches in the cache refreshing sequence.
If yes, this round of cache refreshing ends, and the method jumps to step S560.
If no, the method jumps to step S540.
Further, in an embodiment of the present specification, in step S560, it is monitored whether cache refreshing is externally triggered and/or whether the current moment satisfies the refreshing time point of the cascade cache. If cache refreshing is externally triggered and/or the current moment satisfies the refreshing time point of the cascade cache, it is determined that refreshing determining needs to be performed for the cascade cache.
Further, in an embodiment of the present specification, in step S540, a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
In an example of an application scenario, a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
Based on the method in an embodiment of the present specification, it is determined that no circular dependency conflict exists, to obtain the following cache refreshing sequence: cache C-cache E-cache F-cache A-cache B-cache D.
When cache C is externally triggered to be refreshed, a refreshing tag is added to cache C. The cache refreshing sequence is invoked, all the caches are traversed based on the cache refreshing sequence, and it is sequentially determined whether all the caches need to be refreshed, and a cache that needs to be refreshed is refreshed. An execution sequence is as follows:
(1) It is determined, based on the refreshing tag, that cache C needs to be refreshed, cache C is refreshed, and a refreshing time of cache C is updated.
(2) Cache E has no refreshing tag and does not depend on any cache, and therefore cache E is not refreshed.
(3) Cache F has no refreshing tag and does not depend on any cache, and therefore cache F is not refreshed.
(4) Cache A has no refreshing tag, and cache A depends on cache C and cache E, where because the refreshing time of cache C is later than a refreshing time of cache A, it is determined that cache A needs to be refreshed, cache A is refreshed, and the refreshing time of cache A is updated.
(5) Cache B has no refreshing tag, and cache B depends on cache A and cache F, where because the refreshing time of cache A is later than a refreshing time of cache B, it is determined that cache B needs to be refreshed, and cache B is refreshed.
(6) Cache D has no refreshing tag, and cache D depends on cache B and cache A, where because both the refreshing time of cache B and the refreshing time of cache A are later than a refreshing time of cache D, it is determined that cache D needs to be refreshed, and cache D is refreshed.
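One round of refreshing as walked through in steps (1) through (6) can be sketched end to end. This is a minimal sketch under stated assumptions, not the claimed implementation: the integer clock stands in for refreshing times, and the function and variable names are illustrative.

```python
# Hypothetical sketch of one refreshing round: a tagged cache is refreshed
# unconditionally and its refreshing tag is canceled; an untagged cache is
# refreshed when a cache it depends on carries a later refreshing time.

def refresh_round(order, depends_on, refresh_time, tagged, clock):
    refreshed = []
    for cache in order:
        if cache in tagged:
            tagged.discard(cache)            # cancel the refreshing tag
        elif not any(refresh_time[d] > refresh_time[cache]
                     for d in depends_on.get(cache, [])):
            continue                         # no refresh needed
        refresh_time[cache] = clock          # refresh and update the time
        refreshed.append(cache)
    return refreshed

deps = {"A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
times = dict.fromkeys("ABCDEF", 0)
# Cache C is externally triggered to be refreshed, so it carries a tag.
print(refresh_round(list("CEFABD"), deps, times, {"C"}, clock=1))
# → ['C', 'A', 'B', 'D']
```

The result reproduces the execution sequence above: cache C is refreshed by its tag, cache E and cache F are skipped, and the refresh then cascades through cache A, cache B, and cache D.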
Further, in an embodiment of the present specification, in step S540, when cache refreshing is externally triggered, a target object (a cache to which a refreshing tag is added) for which cache refreshing is externally triggered is used as a cache on which refreshing determining is initially performed, to finally determine whether the target object for which cache refreshing is externally triggered and all caches following the target object in the cache refreshing sequence need to be refreshed. When the current moment satisfies the refreshing time point of the cascade cache, a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
Further, in an embodiment of the present specification, in step S540, a cache on which refreshing determining is initially performed is determined based on a refreshing trigger method. When cache refreshing is externally triggered, a target object (a cache to which a refreshing tag is added) for which cache refreshing is externally triggered is used as the cache on which refreshing determining is initially performed, to finally determine whether the target object for which cache refreshing is externally triggered and all caches following the target object in the cache refreshing sequence need to be refreshed.
Based on the method in the embodiments of the present specification, the embodiments of the present specification further propose a cascade cache refreshing system. As shown in FIG. 6, the system includes: a refreshing sorting module 620, configured to determine a cache refreshing sequence based on a dependency relationship; and a refreshing module 630, configured to sequentially determine, based on the cache refreshing sequence, whether caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a cache needs to be refreshed, it is determined whether a cache following the cache in the cache refreshing sequence needs to be refreshed after the cache is refreshed.
Further, in an embodiment of the present specification, the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refreshing when a circular dependency conflict exists, where when a cache used as a start point can be traced through cache tracing based on the dependency relationship by using any cache as a start point, the circular dependency conflict exists.
Specifically, in an embodiment of the present specification, as shown in FIG. 7, the system further includes a dependency relationship determining module 710. The dependency relationship determining module 710 is configured to obtain a cache definition, and determine the dependency relationship between caches in a cascade cache based on the cache definition. The dependency relationship determining module 710 includes a circular dependency detection unit 711.
Further, based on the method in the present disclosure, the present disclosure further proposes a device for processing information at user equipment, where the device includes a memory configured to store a computer program instruction and a processor configured to execute the computer program instruction, and when the computer program instruction is executed by the processor, the device is triggered to execute the method in the present disclosure.
In the 1990s, whether a technical improvement is a hardware improvement (for example, an improvement to a circuit structure, such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method procedure) can be clearly distinguished. However, as technologies develop, current improvements to many method procedures can be considered as direct improvements to hardware circuit structures. A designer usually programs an improved method procedure into a hardware circuit, to obtain a corresponding hardware circuit structure. Therefore, a method procedure can be improved by using a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit, and a logical function of the PLD is determined by a user through device programming. The designer performs programming to “integrate” a digital system to a PLD without requesting a chip manufacturer to design and produce an application-specific integrated circuit chip. In addition, at present, instead of manually manufacturing an integrated circuit chip, such programming is mostly implemented by using “logic compiler” software. The logic compiler software is similar to a software compiler used to develop and write a program. Original code needs to be written in a particular programming language for compilation. The language is referred to as a hardware description language (HDL). There are many HDLs, such as the Advanced Boolean Expression Language (ABEL), the Altera Hardware Description Language (AHDL), Confluence, the Cornell University Programming Language (CUPL), HDCal, the Java Hardware Description Language (JHDL), Lava, Lola, MyHDL, PALASM, and the Ruby Hardware Description Language (RHDL). The very-high-speed integrated circuit hardware description language (VHDL) and Verilog are most commonly used. 
A person skilled in the art should also understand that a hardware circuit that implements a logical method procedure can be readily obtained once the method procedure is logically programmed by using the several described hardware description languages and is programmed into an integrated circuit.
A controller can be implemented by using any appropriate method. For example, the controller can be a microprocessor or a processor, or a computer-readable medium that stores computer readable program code (such as software or firmware) that can be executed by the microprocessor or the processor, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, or a built-in microprocessor. Examples of the controller include but are not limited to the following microprocessors: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. The memory controller can also be implemented as a part of the control logic of the memory. A person skilled in the art also knows that, in addition to implementing the controller by using the computer readable program code, logic programming can be performed on the method steps to allow the controller to implement the same function in forms of the logic gate, the switch, the application-specific integrated circuit, the programmable logic controller, and the built-in microcontroller. Therefore, the controller can be considered as a hardware component, and an apparatus configured to implement various functions in the controller can also be considered as a structure in the hardware component. Alternatively, the apparatus configured to implement various functions can even be considered as both a software module implementing the method and a structure in the hardware component.
The system, apparatus, module, or unit illustrated in the previous embodiments can be implemented by using a computer chip or an entity, or can be implemented by using a product having a certain function. A typical implementation device is a computer. The computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, or a wearable device, or a combination of any of these devices.
For ease of description, the apparatus above is described by dividing functions into various units. Certainly, when the present application is implemented, a function of each unit can be implemented in one or more pieces of software and/or hardware.
A person skilled in the art should understand that an embodiment of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure can use a form of hardware-only embodiments, software-only embodiments, or embodiments with a combination of software and hardware. Moreover, the present disclosure can use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, etc.) that include computer-usable program code.
The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product based on the embodiments of the present disclosure. It is worthwhile to note that computer program instructions can be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions can be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so the instructions executed by the computer or the processor of the another programmable data processing device generate a device for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
These computer program instructions can be stored in a computer readable memory that can instruct the computer or the another programmable data processing device to work in a specific way, so the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
These computer program instructions can be loaded onto the computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPU), an input/output interface, a network interface, and a memory.
The memory can include a non-persistent memory, a random access memory (RAM), a non-volatile memory, and/or another form of computer readable medium, for example, a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of the computer readable medium.
The computer readable medium includes persistent, non-persistent, movable, and unmovable media that can store information by using any method or technology. The information can be a computer readable instruction, a data structure, a program module, or other data. Examples of a computer storage medium include but are not limited to a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), another type of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or another memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or another optical storage, a cassette magnetic tape, or a magnetic tape/magnetic disk storage or another magnetic storage device. The computer storage medium can be used to store information accessible by the computing device. Based on the definition in the present specification, the computer readable medium does not include transitory media such as a modulated data signal and a carrier wave.
It is worthwhile to further note that the terms “include”, “contain”, or any of their other variants are intended to cover a non-exclusive inclusion, so that a process, a method, a product, or a device that includes a list of elements not only includes those elements but also includes other elements that are not expressly listed, or further includes elements inherent to such a process, method, product, or device. Without more constraints, an element preceded by “includes a . . . ” does not preclude the existence of additional identical elements in the process, method, product, or device that includes the element.
The present application can be described in the general context of computer executable instructions executed by a computer, for example, a program module. Generally, the program module includes a routine, a program, an object, a component, a data structure, etc. executing a specific task or implementing a specific abstract data type. The present application can also be practiced in distributed computing environments. In the distributed computing environments, tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, the program module can be located in both local and remote computer storage media including storage devices.
The embodiments in the present specification are described in a progressive way. For same or similar parts of the embodiments, references can be made to the embodiments mutually. Each embodiment focuses on a difference from other embodiments. Particularly, a system embodiment is similar to a method embodiment, and therefore is described briefly. For related parts, references can be made to related descriptions in the method embodiment.
The previous embodiments are embodiments of the present application, and are not intended to limit the present application. A person skilled in the art can make various modifications and changes to the present application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of the present application shall fall within the scope of the claims in the present application.

Claims (27)

What is claimed is:
1. A computer-implemented method for refreshing a cascade cache, the method comprising:
obtaining a dependency relationship between a plurality of caches in the cascade cache;
determining, based on the dependency relationship, one or more cache priorities, wherein a particular cache that does not depend on any other cache has a higher priority than another cache that depends on a different cache;
determining, based on the cache priorities, a cache refreshing sequence associated with the plurality of caches in the cascade cache;
determining, based on the cache refreshing sequence, that a first cache of the plurality of caches is to be refreshed; and
responsive to determining that the first cache is to be refreshed, refreshing the first cache; and determining whether a second cache of the plurality of caches, the second cache following the first cache in the cache refreshing sequence, is to be refreshed after the first cache is refreshed.
2. The computer-implemented method of claim 1, further comprising sequentially determining whether each cache in the cascade cache is to be refreshed.
3. The computer-implemented method of claim 1, wherein the method further comprises:
detecting whether a circular dependency conflict exists, wherein the dependency relationship is used to detect whether the second cache is dependent upon itself;
responsive to detecting that the circular dependency conflict exists, terminating cache refreshing; and
responsive to detecting that the circular dependency conflict does not exist, continuing refreshing the cascade cache in accordance with the dependency relationship.
4. The computer-implemented method of claim 1, wherein determining the first cache is to be refreshed comprises:
determining the first cache is not externally triggered to be refreshed.
5. The computer-implemented method of claim 4 further comprising:
responsive to determining the first cache is not externally triggered to be refreshed, determining, based on a cache that the first cache depends on, whether the first cache is to be refreshed.
6. The computer-implemented method of claim 1, wherein determining the first cache is to be refreshed comprises:
determining the first cache is externally triggered to be refreshed.
7. The computer-implemented method of claim 6 wherein determining the first cache is externally triggered to be refreshed comprises determining the first cache has a refreshing tag corresponding to an external trigger, and the method further comprises:
responsive to determining the first cache has the refreshing tag corresponding to the external trigger, refreshing the first cache, and
canceling the refreshing tag after refreshing the first cache.
8. The computer-implemented method of claim 1, wherein determining the first cache is to be refreshed comprises:
determining, based on a third cache that the first cache depends on, the first cache is to be refreshed.
9. The computer-implemented method of claim 8, wherein determining, based on the third cache that the first cache depends on, the first cache is to be refreshed comprises:
determining that a refreshing time of the third cache is later than a refreshing time of the first cache;
determining, based on the dependency relationship, that the first cache depends on the third cache; and
determining, based on determining that the refreshing time of the third cache is later than the refreshing time of the first cache and, based on the dependency relationship, that the first cache depends on the third cache, that the first cache is to be refreshed.
10. A non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising:
obtaining a dependency relationship between a plurality of caches in a cascade cache;
determining, based on the dependency relationship, one or more cache priorities, wherein a particular cache that does not depend on any other cache has a higher priority than another cache that depends on a different cache;
determining, based on the cache priorities, a cache refreshing sequence associated with the plurality of caches in the cascade cache;
determining, based on the cache refreshing sequence, that a first cache of the plurality of caches is to be refreshed; and
responsive to determining that the first cache is to be refreshed, refreshing the first cache; and determining whether a second cache of the plurality of caches, the second cache following the first cache in the cache refreshing sequence, is to be refreshed after the first cache is refreshed.
11. The non-transitory, computer-readable medium of claim 10, further comprising sequentially determining whether each cache in the cascade cache is to be refreshed.
12. The non-transitory, computer-readable medium of claim 10, wherein the operations further comprise:
detecting whether a circular dependency conflict exists, wherein the dependency relationship is used to detect whether the second cache is dependent upon itself;
responsive to detecting that the circular dependency conflict exists, terminating cache refreshing; and
responsive to detecting that the circular dependency conflict does not exist, continuing refreshing the cascade cache in accordance with the dependency relationship.
13. The non-transitory, computer-readable medium of claim 10, wherein determining the first cache is to be refreshed comprises:
determining the first cache is not externally triggered to be refreshed.
14. The non-transitory, computer-readable medium of claim 13 further comprising:
responsive to determining the first cache is not externally triggered to be refreshed, determining, based on a cache that the first cache depends on, whether the first cache is to be refreshed.
15. The non-transitory, computer-readable medium of claim 10, wherein determining the first cache is to be refreshed comprises:
determining the first cache is externally triggered to be refreshed.
16. The non-transitory, computer-readable medium of claim 15 wherein determining the first cache is externally triggered to be refreshed comprises determining the first cache has a refreshing tag corresponding to an external trigger, and the operations further comprise:
responsive to determining the first cache has the refreshing tag corresponding to the external trigger, refreshing the first cache, and
canceling the refreshing tag after refreshing the first cache.
17. The non-transitory, computer-readable medium of claim 10, wherein determining the first cache is to be refreshed comprises:
determining, based on a third cache that the first cache depends on, the first cache is to be refreshed.
18. The non-transitory, computer-readable medium of claim 17, wherein determining, based on the third cache that the first cache depends on, the first cache is to be refreshed comprises:
determining that a refreshing time of the third cache is later than a refreshing time of the first cache;
determining, based on the dependency relationship, that the first cache depends on the third cache; and
determining, based on determining that the refreshing time of the third cache is later than the refreshing time of the first cache and, based on the dependency relationship, that the first cache depends on the third cache, that the first cache is to be refreshed.
19. A computer-implemented system, comprising:
one or more computers; and
one or more computer memory devices interoperably coupled with the one or more computers and having tangible, non-transitory, machine-readable media storing one or more instructions that, when executed by the one or more computers, perform one or more operations comprising:
obtaining a dependency relationship between a plurality of caches in a cascade cache;
determining, based on the dependency relationship, one or more cache priorities, wherein a particular cache that does not depend on any other cache has a higher priority than another cache that depends on a different cache;
determining, based on the cache priorities, a cache refreshing sequence associated with the plurality of caches in the cascade cache;
determining, based on the cache refreshing sequence, that a first cache of the plurality of caches is to be refreshed; and
responsive to determining that the first cache is to be refreshed, refreshing the first cache; and
determining whether a second cache of the plurality of caches, the second cache following the first cache in the cache refreshing sequence, is to be refreshed after the first cache is refreshed.
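The priority rule of claim 19, under which a cache that does not depend on any other cache is refreshed before caches that do, amounts to a topological ordering of the dependency graph. One possible sketch uses Kahn's algorithm; the `refresh_sequence` name and the dict-based representation are assumptions, not the patent's implementation:

```python
from collections import deque

def refresh_sequence(deps):
    """Order caches so that a cache with no dependencies comes before
    caches that depend on it. `deps[c]` lists the caches c depends on."""
    caches = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {c: len(deps.get(c, ())) for c in caches}
    dependents = {c: [] for c in caches}
    for c, ds in deps.items():
        for d in ds:
            dependents[d].append(c)  # refreshing d may require refreshing c
    order = []
    queue = deque(sorted(c for c in caches if indegree[c] == 0))
    while queue:
        c = queue.popleft()
        order.append(c)
        for n in sorted(dependents[c]):
            indegree[n] -= 1
            if indegree[n] == 0:
                queue.append(n)
    return order  # the cache refreshing sequence
```

For example, `refresh_sequence({"B": ["A"], "C": ["B"]})` yields `["A", "B", "C"]`: "A" depends on nothing, so it has the highest priority and is considered first.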
20. The computer-implemented system of claim 19, further comprising sequentially determining whether each cache in the cascade cache is to be refreshed.
21. The computer-implemented system of claim 19, wherein the operations further comprise:
detecting whether a circular dependency conflict exists, wherein the dependency relationship is used to detect whether the second cache is dependent upon itself;
responsive to detecting that the circular dependency conflict exists, terminating cache refreshing; and
responsive to detecting that the circular dependency conflict does not exist, continuing refreshing the cascade cache in accordance with the dependency relationship.
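Claim 21's check for a circular dependency conflict, i.e. whether a cache is directly or transitively dependent upon itself, can be sketched as a graph search over the dependency relationship. The function name and dict format are illustrative assumptions:

```python
def has_circular_dependency(deps, start):
    """Return True if `start` can reach itself by following dependencies.
    `deps[c]` lists the caches that c depends on (hypothetical format)."""
    seen = set()
    stack = list(deps.get(start, ()))
    while stack:
        node = stack.pop()
        if node == start:
            return True  # cache is dependent upon itself: conflict
        if node not in seen:
            seen.add(node)
            stack.extend(deps.get(node, ()))
    return False
```

On a detected conflict the cache refreshing would be terminated; otherwise refreshing continues in accordance with the dependency relationship.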
22. The computer-implemented system of claim 19, wherein determining the first cache is to be refreshed comprises:
determining the first cache is not externally triggered to be refreshed.
23. The computer-implemented system of claim 22, further comprising:
responsive to determining the first cache is not externally triggered to be refreshed, determining, based on a cache that the first cache depends on, whether the first cache is to be refreshed.
24. The computer-implemented system of claim 19, wherein determining the first cache is to be refreshed comprises:
determining the first cache is externally triggered to be refreshed.
25. The computer-implemented system of claim 24, wherein determining the first cache is externally triggered to be refreshed comprises determining the first cache has a refreshing tag corresponding to an external trigger, and the operations further comprise:
responsive to determining the first cache has the refreshing tag corresponding to the external trigger, refreshing the first cache, and
canceling the refreshing tag after refreshing the first cache.
26. The computer-implemented system of claim 19, wherein determining the first cache is to be refreshed comprises:
determining, based on a third cache that the first cache depends on, the first cache is to be refreshed.
27. The computer-implemented system of claim 26, wherein determining, based on the third cache that the first cache depends on, the first cache is to be refreshed comprises:
determining that a refreshing time of the third cache is later than a refreshing time of the first cache;
determining, based on the dependency relationship, that the first cache depends on the third cache; and
determining, based on determining that the refreshing time of the third cache is later than the refreshing time of the first cache and, based on the dependency relationship, that the first cache depends on the third cache, that the first cache is to be refreshed.
US16/811,590 2019-04-04 2020-03-06 Cascade cache refreshing Active US10922236B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201910269207.1 2019-04-04
CN201910269207.1A CN110059023B (en) 2019-04-04 2019-04-04 Method, system and equipment for refreshing cascade cache
PCT/CN2020/071159 WO2020199709A1 (en) 2019-04-04 2020-01-09 Method and system for refreshing cascaded cache, and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071159 Continuation WO2020199709A1 (en) 2019-04-04 2020-01-09 Method and system for refreshing cascaded cache, and device

Publications (2)

Publication Number Publication Date
US20200320011A1 US20200320011A1 (en) 2020-10-08
US10922236B2 true US10922236B2 (en) 2021-02-16

Family

ID=72663467

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/811,590 Active US10922236B2 (en) 2019-04-04 2020-03-06 Cascade cache refreshing

Country Status (1)

Country Link
US (1) US10922236B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118656166B (en) * 2024-08-19 2024-11-26 广州佳新智能科技有限公司 Scenario-based finance big data processing method and device

Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030233493A1 (en) * 2002-06-15 2003-12-18 Boldon John L. Firmware installation methods and apparatus
US6681293B1 (en) * 2000-08-25 2004-01-20 Silicon Graphics, Inc. Method and cache-coherence system allowing purging of mid-level cache entries without purging lower-level cache entries
CN1484155A 2002-08-13 2004-03-24 International Business Machines Corp. System and method for refreshing web proxy cache server objects
US20040162943A1 (en) 1999-04-22 2004-08-19 International Business Machines Corporation System and method for managing cachable entities
US20050015463A1 (en) * 2003-06-23 2005-01-20 Microsoft Corporation General dependency model for invalidating cache entries
US20090106495A1 (en) * 2007-10-23 2009-04-23 Sun Microsystems, Inc. Fast inter-strand data communication for processors with write-through l1 caches
US20090193187A1 (en) * 2008-01-25 2009-07-30 International Business Machines Corporation Design structure for an embedded dram having multi-use refresh cycles
CN102156735A (en) 2011-04-11 2011-08-17 中国有色矿业集团有限公司 Implementation method and device of business method based on database transaction
US20120198164A1 (en) * 2010-09-28 2012-08-02 Texas Instruments Incorporated Programmable Address-Based Write-Through Cache Control
CN102902741A (en) 2012-09-13 2013-01-30 国网电力科学研究院 Resource data caching and state cascading updating method for communication network
US8806138B1 (en) * 2007-02-20 2014-08-12 Pixar Dynamic dependencies and parameterizations for execution and caching
US20150143046A1 (en) 2013-11-21 2015-05-21 Green Cache AB Systems and methods for reducing first level cache energy by eliminating cache address tags
US20150293964A1 (en) * 2012-05-18 2015-10-15 Oracle International Corporation Applications of automated discovery of template patterns based on received requests
US20150378780A1 (en) 2014-06-30 2015-12-31 International Business Machines Corporation Prefetching of discontiguous storage locations in anticipation of transactional execution
US20160004639A1 (en) * 2009-06-09 2016-01-07 Hyperion Core Inc. System and method for a cache in a multi-core processor
CN106126568A 2016-06-17 2016-11-16 杭州财人汇网络股份有限公司 Active-push serialized cache management method and system
CN106202112A (en) 2015-05-06 2016-12-07 阿里巴巴集团控股有限公司 CACHE DIRECTORY method for refreshing and device
US20170168941A1 (en) * 2015-12-11 2017-06-15 Oracle International Corporation Power saving for reverse directory
US20170255557A1 (en) 2016-03-07 2017-09-07 Qualcomm Incorporated Self-healing coarse-grained snoop filter
US20170344481A1 (en) * 2016-05-31 2017-11-30 Salesforce.Com, Inc. Invalidation and refresh of multi-tier distributed caches
US20180095895A1 (en) * 2016-09-30 2018-04-05 Intel Corporation System and Method for Cache Replacement Using Conservative Set Dueling
CN108600320A 2018-03-23 2018-09-28 阿里巴巴集团控股有限公司 Data cache method, apparatus and system
CN108897495A 2018-06-28 2018-11-27 北京五八信息技术有限公司 Cache updating method, device, cache device and storage medium
US10152422B1 (en) * 2017-06-13 2018-12-11 Seagate Technology Llc Page-based method for optimizing cache metadata updates
CN109154910A (en) 2016-05-31 2019-01-04 超威半导体公司 Cache coherency for in-memory processing
US10176241B2 (en) * 2016-04-26 2019-01-08 Servicenow, Inc. Identification and reconciliation of network resource information
CN109240946A (en) 2018-09-06 2019-01-18 平安科技(深圳)有限公司 The multi-level buffer method and terminal device of data
CN109446448A (en) 2018-09-10 2019-03-08 平安科技(深圳)有限公司 Data processing method and system
CN110059023A 2019-04-04 2019-07-26 阿里巴巴集团控股有限公司 Method, system and device for refreshing a cascade cache

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Crosby et al., "BlockChain Technology: Beyond Bitcoin," Sutardja Center for Entrepreneurship & Technology Technical Report, Oct. 16, 2015, 35 pages.
Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System," www.bitcoin.org, 2008, 9 pages.
PCT International Search Report and Written Opinion in International Application No. PCT/CN2020/071159, dated Apr. 15, 2020, 22 pages (with machine translation).

Also Published As

Publication number Publication date
US20200320011A1 (en) 2020-10-08

Similar Documents

Publication Publication Date Title
CA3048740C (en) Blockchain-based data processing method and device
CA3048739C (en) Blockchain-based data processing method and equipment
US20200293494A1 (en) Data synchronization methods, apparatuses, and devices
CA3049831C (en) Database state determining method and device, and consistency verifying method and device
CA3047884C (en) Method and device for sending transaction information and for consensus verification
US11281661B2 (en) Blockchain-based data processing method and device
US9317339B2 (en) Systems and methods for implementing work stealing using a configurable separation of stealable and non-stealable work items
CN110059023B (en) Method, system and equipment for refreshing cascade cache
EP3606010A1 (en) Method, apparatus and device for processing web application package
JP2023502296A (en) Method and apparatus for processing machine learning models within a web browser environment
US12443355B2 (en) Data storage methods, apparatuses, devices, and storage media
CN117312394B (en) A data access method, device, storage medium and electronic equipment
US12488003B2 (en) Data query methods and apparatuses, storage media, and electronic devices
CN116822657B (en) Method and device for accelerating model training, storage medium and electronic equipment
CN109376189A Batch data operation processing method, device and equipment
CN115774552A Configurable algorithm design method and device, electronic equipment and readable storage medium
US10929191B2 (en) Loading models on nodes having multiple model service frameworks
US10922236B2 (en) Cascade cache refreshing
TWI723535B (en) Data calculation method and engine
CN113032119A (en) Task scheduling method and device, storage medium and electronic equipment
CN116126750B (en) A method and device for data processing based on hardware characteristics
CN111339117B (en) Data processing method, device and equipment
US11474837B2 (en) Method and apparatus for efficient programming of electronic payment processing
US9348621B2 (en) Using hardware transactional memory for implementation of queue operations
CN110175020B (en) Frame property information extension method and device, frame loading method and device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ALIBABA GROUP HOLDING LIMITED, CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZHAO, YANGYANG;REEL/FRAME:052261/0951

Effective date: 20200309

AS Assignment

Owner name: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALIBABA GROUP HOLDING LIMITED;REEL/FRAME:053743/0464

Effective date: 20200826

AS Assignment

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.;REEL/FRAME:053754/0625

Effective date: 20200910

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4