US10922236B2 - Cascade cache refreshing - Google Patents
- Publication number
- US10922236B2 (application US 16/811,590)
- Authority
- US
- United States
- Prior art keywords
- cache
- determining
- refreshing
- refreshed
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0891—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/2885—Hierarchically arranged intermediate devices, e.g. for hierarchical caching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0897—Caches characterised by their organisation or structure with two or more cache hierarchy levels
Definitions
- the present specification relates to the field of computer technologies, and in particular, to cascade cache refreshing methods, systems, and devices.
- a cache is a storage that can exchange data at a high speed.
- a cache is a memory chip on a hard disk controller, and has a very high access rate.
- the cache is a buffer between internal storage of the hard disk and an external interface. Because an internal data transmission rate of the hard disk is different from a transmission rate of the external interface, the cache serves as a buffer.
- a size and a rate of the cache are important factors that directly affect the transmission rate of the hard disk, and can greatly improve overall performance of the hard disk.
- when the hard disk accesses fragmentary data, data needs to be continuously exchanged between the hard disk and a memory. If there is a large cache, the fragmentary data can be temporarily stored in the cache, to reduce system load and improve the data transmission rate.
- Embodiments of the present specification provide cascade cache refreshing methods, systems, and devices, to alleviate a problem in a cascade cache refreshing process in the existing technology.
- An embodiment of the present specification provides a cascade cache refreshing method, where the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
- sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed comprises determining whether each cache in the cascade cache needs to be refreshed.
- the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refreshing when a circular dependency conflict exists, where a circular dependency conflict exists when any cache, used as a start point, can be traced back to itself through cache tracing based on the dependency relationship.
- the method further includes: determining a cache priority based on the dependency relationship, and determining the cache refreshing sequence based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
- the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining, based on a cache that the cache depends on, whether the cache needs to be refreshed; and/or determining whether the cache is externally triggered to be refreshed.
- the sequentially determining, based on the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed includes: determining whether the cache is externally triggered to be refreshed, and when the cache is not externally triggered to be refreshed, determining, based on a cache that the cache depends on, whether the cache needs to be refreshed.
- the method further includes: adding a refreshing tag to a refreshing target when cache refreshing is externally triggered, where when it is determined whether the cache is externally triggered to be refreshed, it is determined whether the cache has the refreshing tag; and the method further includes: canceling the refreshing tag after a cache with the refreshing tag is refreshed.
- An embodiment of the present specification further proposes a cascade cache refreshing system, where the system includes: a refreshing sorting module, configured to determine a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and a refreshing module, configured to sequentially determine, based on the cache refreshing sequence, whether the caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
- the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refreshing when a circular dependency conflict exists, where a circular dependency conflict exists when any cache, used as a start point, can be traced back to itself through cache tracing based on the dependency relationship.
- An embodiment of the present specification further proposes a device for processing information at user equipment, where the device includes a memory configured to store computer program instructions and a processor configured to execute the program instructions, and when the computer program instructions are executed by the processor, the device is triggered to execute the method in the embodiments of the present specification.
- the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache.
- the following problem can be effectively alleviated: Data resource pressure is caused because cache data resources are repeatedly and centrally invoked, and cached data is inconsistent because data is changed during cache refreshing.
- FIG. 1 and FIG. 5 are flowcharts illustrating a cascade cache refreshing method, according to an embodiment of the present specification;
- FIG. 2 and FIG. 3 are partial flowcharts illustrating a cascade cache refreshing method, according to an embodiment of the present specification;
- FIG. 4 is a schematic diagram illustrating a cache dependency relationship tree of a cascade cache, according to an embodiment of the present specification.
- FIG. 6 and FIG. 7 are structural block diagrams of a system, according to an embodiment of the present specification.
- cache refreshing is usually managed through isolation, and caches are refreshed and controlled independently of each other.
- the present specification proposes a cascade cache refreshing method.
- an application scenario in the existing technology is first analyzed.
- a cascade cache is characterized by the following two features: (1) there are a plurality of caches; and (2) there is a dependency relationship between caches.
- cache 1 is refreshed
- cache 2 that depends on cache 1 also needs to be refreshed.
- cache 3 that depends on cache 2 also needs to be refreshed. That is, the cascade cache is refreshed in an association way.
- each cache in the cascade cache is managed through isolation, it is equivalent to ignoring the dependency relationship between the caches and considering each cache as an independent cache. Consequently, after a cache is refreshed, an associated cache having a dependency relationship with the cache may not be refreshed synchronously. If data is changed during cache refreshing, a difference between basic data resources of the caches can be caused, and cached data can be inconsistent. In addition, the caches are refreshed and controlled independently of each other, and the cached data is also isolated from each other. Consequently, cluster refreshing data resources are repeatedly and centrally invoked, and data resource performance is affected.
- caches having a dependency relationship with each other can be refreshed synchronously
- caches having a dependency relationship in the cascade cache are used as a whole, and are refreshed synchronously during refreshing.
- the cascade cache is refreshed in an association way. As such, if a cache refreshing sequence is not considered when all the caches are refreshed synchronously, a cache that a certain cache depends on may be refreshed only after the cache is refreshed. As a result, the cache needs to be refreshed again to keep consistent with data cached by the cache that the cache depends on.
- the dependency relationship between the caches in the cascade cache is often not a simple dependency chain, when different caches are triggered to be refreshed, caches that need to be refreshed synchronously and that have a dependency relationship are also different. If all caches are refreshed each time refreshing is triggered, a refreshing operation may be performed on a cache that does not need to be refreshed, and processing resources are wasted.
- the caches in the cascade cache are not refreshed randomly. Instead, a fixed refreshing sequence is determined based on the dependency relationship between the caches, and the caches are refreshed in a refreshing round based on the refreshing sequence. In addition, in a refreshing process, it is first determined whether the cache needs to be refreshed in a current refreshing round. If the cache does not need to be refreshed, the cache is not refreshed in the current refreshing round.
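The idea above can be sketched in Python. This is an illustrative toy model, not code from the patent: `Cache`, `needs_refresh`, and `refresh_round` are assumed names, and a logical clock stands in for real refreshing times.

```python
class Cache:
    """Toy cache node in a cascade cache (illustrative, not from the patent)."""
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)  # caches this cache depends on
        self.refresh_time = 0               # logical time of the last refresh
        self.tagged = False                 # externally triggered to refresh?

def needs_refresh(cache):
    # A cache needs refreshing if it was externally tagged, or if any
    # cache it depends on was refreshed more recently than it was.
    if cache.tagged:
        return True
    return any(dep.refresh_time > cache.refresh_time
               for dep in cache.depends_on)

def refresh_round(ordered_caches, clock):
    """Run one refreshing round over the fixed refreshing sequence."""
    refreshed = []
    for cache in ordered_caches:            # fixed dependency-first order
        if needs_refresh(cache):
            clock += 1
            cache.refresh_time = clock      # "refresh" = bump its time
            cache.tagged = False            # cancel the refreshing tag
            refreshed.append(cache.name)
    return refreshed, clock
```

For a chain where cache 2 depends on cache 1 and cache 3 depends on cache 2, externally tagging cache 1 makes a single round refresh all three in order, matching the cascade behavior described above.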
- the method includes: sequentially determining, based on a cache refreshing sequence, whether caches in a cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where the cache refreshing sequence is determined based on a dependency relationship between the caches in the cascade cache; and when it is determined that a current cache needs to be refreshed, determining whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
- the caches in the cascade cache are sequentially refreshed based on the cache dependency relationship, to manage refreshing of the cascade cache.
- the following problems can be effectively alleviated: data resource pressure caused because cache data resources are repeatedly and centrally invoked, and inconsistent cached data caused because data is changed during cache refreshing.
- the method includes the following steps:
- the method further includes: obtaining a cache definition of the cascade cache, and determining the dependency relationship between the caches in the cascade cache based on the cache definition.
- the cache refreshing sequence needs to be determined only in an initial process of initializing the cascade cache, and the cache refreshing sequence is stored after the cache refreshing sequence is obtained. In a subsequent cache refreshing process, the stored cache refreshing sequence only needs to be invoked, without repeating a step of generating the cache refreshing sequence each time cache refreshing is performed.
- the cache refreshing sequence is determined for all caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between all the caches in the cascade cache, and the cache refreshing sequence includes each cache in the cascade cache.
- the cache refreshing sequence is determined for a part of caches in the cascade cache. That is, the cache refreshing sequence is determined based on a dependency relationship between a part of caches in the cascade cache, and the cache refreshing sequence includes a part of caches in the cascade cache.
- whether the caches need to be refreshed is determined for each cache in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a cache ranking first in the cache refreshing sequence, whether each cache in the cache refreshing sequence needs to be refreshed. If the cache refreshing sequence includes all the caches in the cascade cache, in a process of determining whether the caches need to be refreshed, it is determined whether each cache in the cascade cache needs to be refreshed.
- whether the caches need to be refreshed is determined for a part of caches in the cache refreshing sequence. That is, when it is sequentially determined, based on the cache refreshing sequence, whether the caches need to be refreshed, it is sequentially determined, starting from a certain cache in the cache refreshing sequence, whether the cache and each cache or certain caches following the cache need to be refreshed.
- the cache refreshing sequence is stored after the cache refreshing sequence is obtained. At the same time, it is monitored whether the cache definition of the cascade cache is updated. If the cache definition of the cascade cache is updated, a new cache refreshing sequence is generated based on an updated cache definition, and the originally stored cache refreshing sequence is updated.
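This store-and-regenerate behavior can be sketched as follows. Class and method names are assumptions, and a Kahn-style topological sort stands in for whatever ordering the implementation actually uses; every cache is assumed to appear as a key of the definition dict.

```python
from collections import deque

class RefreshSequenceStore:
    """Stores the cache refreshing sequence; rebuilds it only when the
    cache definition is updated (illustrative sketch)."""

    def __init__(self, definition):
        # definition: cache id -> list of cache ids it depends on
        self.definition = dict(definition)
        self._sequence = None

    def sequence(self):
        if self._sequence is None:      # generated once, then reused
            self._sequence = self._build()
        return self._sequence

    def update_definition(self, definition):
        self.definition = dict(definition)
        self._sequence = None           # stored sequence is now stale

    def _build(self):
        # Kahn-style topological sort: dependencies come first.
        indegree = {c: len(deps) for c, deps in self.definition.items()}
        dependents = {c: [] for c in self.definition}
        for c, deps in self.definition.items():
            for d in deps:
                dependents[d].append(c)
        queue = deque(sorted(c for c, n in indegree.items() if n == 0))
        order = []
        while queue:
            c = queue.popleft()
            order.append(c)
            for nxt in dependents[c]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    queue.append(nxt)
        return order
```

The sequence is computed at most once per definition; refreshing rounds only read the stored list.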
- S 233 is performed to refresh the cache, and the method proceeds to step S 234 .
- If the target cache does not need to be refreshed, the method directly jumps to step S 234.
- refreshing determining is performed in step S 232.
- If no, the method jumps to step S 231.
- the cascade cache is refreshed in an association way.
- cache 2 depends on cache 1
- cache 3 depends on cache 2.
- cache 1 depends on cache 3
- a circular dependency relationship chain (cache 1-cache 2-cache 3-cache 1) is formed. If any cache on the circular dependency relationship chain is refreshed, an infinite refreshing sequence is formed. Consequently, refreshing may never stop, and an execution conflict may be generated.
- the method further includes: detecting whether a circular dependency conflict exists, and terminating cache refreshing when a circular dependency conflict exists, where a circular dependency conflict exists when any cache, used as a start point, can be traced back to itself through cache tracing based on the dependency relationship.
- the method includes the following steps:
- S 321 is performed to stop cache refreshing and output alarm information.
- S 330 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
- the following steps are performed: obtaining each cache dependency sequence chain based on the cache dependency relationship; determining whether the same cache identifier appears more than once on any cache dependency sequence chain; and if yes, determining that a circular dependency conflict exists in the cache cascade relationship.
- a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
- formed cache dependency sequence chains are as follows: cache D-cache B-cache A-cache C; cache D-cache B-cache A-cache E; cache B-cache A-cache C; cache B-cache A-cache E; cache B-cache F; cache A-cache C; and cache A-cache E.
- if cache C further depends on cache B, the following cache dependency sequence chain is formed: cache D-cache B-cache A-cache C-cache B. Because cache B appears twice on this cache dependency sequence chain, a circular dependency conflict is caused.
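A minimal sketch of this duplicate-identifier check (the function name and dict representation are assumptions): it walks every dependency chain and reports a conflict as soon as a cache identifier repeats on one chain.

```python
def has_circular_dependency(deps):
    """deps: cache id -> list of cache ids it depends on.
    Returns True if some dependency chain revisits a cache id."""
    def walk(cache, on_chain):
        if cache in on_chain:
            return True                     # same id twice on one chain
        on_chain = on_chain | {cache}
        return any(walk(d, on_chain) for d in deps.get(cache, []))
    return any(walk(c, set()) for c in deps)
```

With the dependency relationship above (cache A on C and E, cache B on A and F, cache D on B and A) there is no conflict; adding "cache C depends on cache B" creates the chain cache D-cache B-cache A-cache C-cache B, and the check fires.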
- a cache priority is determined based on the dependency relationship, and the cache refreshing sequence is determined based on all cache priorities, where a cache that does not depend on any cache has a highest priority; and when a first cache depends on a second cache, a priority of the first cache is lower than a priority of the second cache.
- the cache refreshing sequence is formed in descending order of priorities.
- a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
- a formed cache dependency sequence tree is shown in FIG. 4 .
- the following can be obtained based on the cache dependency sequence tree:
- cache C, cache E, and cache F do not depend on another cache
- cache C, cache E, and cache F have a highest priority (which is set to 1). Because cache A depends on cache C and cache E, cache A has a lower priority than cache C and cache E; because the priorities of both cache C and cache E are 1, the priority of cache A is 2. Because cache B depends on cache A and cache F, cache B has a lower priority than cache A and cache F; because the priority of cache A is 2 and the priority of cache F is 1, the priority of cache B is 3. Because cache D depends on cache B and cache A, cache D has a lower priority than cache B and cache A; because the priority of cache A is 2 and the priority of cache B is 3, the priority of cache D is 4.
- the cache refreshing sequence is formed in descending order of priorities: cache C-cache E-cache F-cache A-cache B-cache D.
- when it is determined whether a cache needs to be refreshed, it is determined whether the cache is externally triggered to be refreshed. When the cache is not externally triggered to be refreshed, it is determined, based on a cache that the cache depends on, whether the cache needs to be refreshed.
- a refreshing tag is added to a refreshing target when cache refreshing is externally triggered.
- when it is determined whether the cache is externally triggered to be refreshed, it is determined whether the cache has the refreshing tag. Further, the refreshing tag is canceled after a cache with the refreshing tag is refreshed.
- when it is determined, based on the caches that a first cache depends on, whether the first cache needs to be refreshed, the first cache needs to be refreshed when there is a cache whose refreshing time is later than a refreshing time of the first cache among all caches that the first cache depends on.
- before it is sequentially determined, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed, it is first determined whether refreshing determining needs to be performed for the cascade cache. If no refreshing determining needs to be performed for the cascade cache, there is no need to sequentially determine, by invoking the cache refreshing sequence, whether each cache in the cascade cache needs to be refreshed.
- when the cascade cache is externally triggered to be refreshed, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if any cache in the cascade cache needs to be refreshed based on an external command, it is determined that refreshing determining needs to be performed for the cascade cache.
- a refreshing time point is predetermined for the cascade cache. If a current moment satisfies the refreshing time point predetermined for the cascade cache, it is determined that refreshing determining needs to be performed for the cascade cache. For example, if it is predetermined that one round of refreshing is performed for the cascade cache at an interval of 10 minutes, refreshing determining needs to be performed for the cascade cache at an interval of 10 minutes.
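Both triggers can be folded into one check, sketched here with assumed names; the 600-second interval mirrors the 10-minute example above.

```python
REFRESH_INTERVAL_SECONDS = 600  # the "interval of 10 minutes" example

def should_run_refresh_round(externally_triggered, now, last_round_time,
                             interval=REFRESH_INTERVAL_SECONDS):
    # Refreshing determining runs when refreshing is externally
    # triggered and/or the predetermined refreshing time point arrives.
    if externally_triggered:
        return True
    return now - last_round_time >= interval
```

A monitoring loop would evaluate this check and, only when it returns true, invoke the stored cache refreshing sequence for a round of refreshing determining.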
- the method includes the following steps:
- S 521 is performed to stop cache refreshing and output alarm information.
- S 530 is performed to determine the cache refreshing sequence based on the dependency relationship between the caches in the cascade cache.
- If yes, the method jumps to step S 540. If no, the method returns to step S 560.
- S 552 is performed to refresh the cache and cancel the refreshing tag.
- S 555 is performed to traverse a refreshing time of a cache that the cache depends on and determine whether the refreshing time of the cache that the cache depends on is later than the refreshing time of the cache.
- S 556 is performed to refresh the cache and jump to step S 554 .
- If no, the method directly jumps to step S 541.
- If yes, this round of cache refreshing ends, and the method jumps to step S 560.
- If no, the method jumps to step S 540.
- in step S 560, it is monitored whether cache refreshing is externally triggered and/or whether the current moment satisfies the refreshing time point of the cascade cache. If cache refreshing is externally triggered and/or the current moment satisfies the refreshing time point of the cascade cache, it is determined that the cascade cache needs to be refreshed.
- in step S 540, a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
- a dependency relationship of a certain cascade cache is as follows: cache A depends on cache C and cache E; cache B depends on cache A and cache F; and cache D depends on cache B and cache A.
- the cache refreshing sequence is: cache C-cache E-cache F-cache A-cache B-cache D.
- the cache refreshing sequence is invoked, all the caches are traversed based on the cache refreshing sequence, it is sequentially determined whether all the caches need to be refreshed, and a cache that needs to be refreshed is refreshed.
- Cache C has a refreshing tag, so cache C needs to be refreshed: cache C is refreshed, and the refreshing time of cache C is updated.
- Cache E has no refreshing tag and does not depend on any cache, so cache E is not refreshed.
- Cache F has no refreshing tag and does not depend on any cache, so cache F is not refreshed.
- Cache A has no refreshing tag, and cache A depends on cache C and cache E. Because the refreshing time of cache C is later than the refreshing time of cache A, it is determined that cache A needs to be refreshed: cache A is refreshed, and the refreshing time of cache A is updated.
- Cache B has no refreshing tag, and cache B depends on cache A and cache F. Because the refreshing time of cache A is later than the refreshing time of cache B, it is determined that cache B needs to be refreshed, and cache B is refreshed.
- Cache D has no refreshing tag, and cache D depends on cache B and cache A. Because both the refreshing time of cache B and the refreshing time of cache A are later than the refreshing time of cache D, it is determined that cache D needs to be refreshed, and cache D is refreshed.
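The walkthrough above can be replayed as a self-contained script. The tag and refreshing-time bookkeeping here is an illustrative model, not the patent's implementation: cache C carries a refreshing tag, and one round over the sequence refreshes caches C, A, B, and D while skipping E and F.

```python
deps = {"C": [], "E": [], "F": [],
        "A": ["C", "E"], "B": ["A", "F"], "D": ["B", "A"]}
sequence = ["C", "E", "F", "A", "B", "D"]    # the refreshing sequence above

refresh_time = {c: 0 for c in deps}          # logical time of last refresh
tagged = {c: False for c in deps}
tagged["C"] = True                           # cache C externally triggered

clock = 0
refreshed = []
for cache in sequence:
    # a cache is refreshed if it is tagged or if a dependency is newer
    stale = any(refresh_time[d] > refresh_time[cache] for d in deps[cache])
    if tagged[cache] or stale:
        clock += 1
        refresh_time[cache] = clock          # refresh and update its time
        tagged[cache] = False                # cancel the refreshing tag
        refreshed.append(cache)

# refreshed == ["C", "A", "B", "D"]; cache E and cache F are skipped
```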
- in step S 540, when cache refreshing is externally triggered, a target object (a cache to which a refreshing tag is added) for which cache refreshing is externally triggered is used as a cache on which refreshing determining is initially performed, to finally determine whether the target object for which cache refreshing is externally triggered and all caches following the target object in the cache refreshing sequence need to be refreshed.
- a cache ranking first in the cache refreshing sequence is used as a cache on which refreshing determining is initially performed, to finally determine whether all the caches in the cache refreshing sequence need to be refreshed.
- a cache on which refreshing determining is initially performed is determined based on a refreshing trigger method.
- the embodiments of the present specification further propose a cascade cache refreshing system.
- the system includes: a refreshing sorting module 620 , configured to determine a cache refreshing sequence based on a dependency relationship; and a refreshing module 630 , configured to sequentially determine, based on the cache refreshing sequence, whether caches need to be refreshed, and refresh a cache that needs to be refreshed, where when it is determined that a cache needs to be refreshed, it is determined whether a cache following the cache in the cache refreshing sequence needs to be refreshed after the cache is refreshed.
- the system further includes: a circular dependency detection unit, configured to detect whether a circular dependency conflict exists, and terminate cache refreshing when a circular dependency conflict exists, where a circular dependency conflict exists when any cache, used as a start point, can be traced back to itself through cache tracing based on the dependency relationship.
- the system further includes a dependency relationship determining module 710 .
- the dependency relationship determining module 710 is configured to obtain a cache definition, and determine the dependency relationship between caches in a cascade cache based on the cache definition.
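The dependency relationship determining module (710) turns cache definitions into the dependency relationship. The patent does not specify a definition format; the dict-based format below is purely an assumption for illustration:

```python
# Illustrative sketch of the dependency relationship determining module:
# cache definitions are assumed to be simple dicts naming each cache and
# the caches it depends on. The real definition format is not specified
# in this description.

def build_dependency_relationship(cache_definitions):
    """Map each defined cache to the list of caches it depends on."""
    relationship = {}
    for definition in cache_definitions:
        name = definition["name"]
        relationship[name] = list(definition.get("depends_on", []))
    return relationship

definitions = [
    {"name": "A"},
    {"name": "B", "depends_on": ["A"]},
    {"name": "C", "depends_on": ["B"]},
]
print(build_dependency_relationship(definitions))
# {'A': [], 'B': ['A'], 'C': ['B']}
```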
- the dependency relationship determining module 710 includes a circular dependency detection unit 711 .
- the present disclosure further proposes a device for processing information at user equipment, where the device includes a memory configured to store computer program instructions and a processor configured to execute the program instructions, and when the computer program instructions are executed by the processor, the device is triggered to execute the method in the present disclosure.
- whether a technical improvement is a hardware improvement (for example, an improvement to a circuit structure, such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method procedure) can be clearly distinguished.
- a designer usually programs an improved method procedure into a hardware circuit, to obtain a corresponding hardware circuit structure. Therefore, a method procedure can be improved by using a hardware entity module.
- a programmable logic device (PLD), for example, a field programmable gate array (FPGA), is an integrated circuit whose logical function is determined by a user through device programming.
- the designer performs programming to "integrate" a digital system into a PLD without requesting a chip manufacturer to design and produce an application-specific integrated circuit chip.
- programming is mostly implemented by using “logic compiler” software.
- the logic compiler software is similar to a software compiler used to develop and write a program. Original code needs to be written in a particular programming language for compilation. The language is referred to as a hardware description language (HDL).
- there are many HDLs, such as the Advanced Boolean Expression Language (ABEL), the Altera Hardware Description Language (AHDL), Confluence, the Cornell University Programming Language (CUPL), HDCal, the Java Hardware Description Language (JHDL), Lava, Lola, MyHDL, PALASM, and the Ruby Hardware Description Language (RHDL).
- at present, the very-high-speed integrated circuit hardware description language (VHDL) and Verilog are most commonly used.
- a controller can be implemented by using any appropriate method.
- the controller can be a microprocessor or a processor, or a computer-readable medium that stores computer readable program code (such as software or firmware) that can be executed by the microprocessor or the processor, a logic gate, a switch, an application-specific integrated circuit (ASIC), a programmable logic controller, or a built-in microprocessor.
- Examples of the controller include but are not limited to the following microprocessors: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320.
- the memory controller can also be implemented as a part of the control logic of the memory.
- a controller can be considered as a hardware component, and an apparatus configured to implement various functions in the controller can also be considered as a structure in the hardware component. Or the apparatus configured to implement various functions can even be considered as both a software module implementing the method and a structure in the hardware component.
- the system, apparatus, module, or unit illustrated in the previous embodiments can be implemented by using a computer chip or an entity, or can be implemented by using a product having a certain function.
- a typical implementation device is a computer.
- the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, or a wearable device, or a combination of any of these devices.
- an embodiment of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure can use a form of hardware-only embodiments, software-only embodiments, or embodiments with a combination of software and hardware. Moreover, the present disclosure can use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, etc.) that include computer-usable program code.
- These computer program instructions can be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so the instructions executed by the computer or the processor of the another programmable data processing device generate a device for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
- These computer program instructions can be stored in a computer readable memory that can instruct the computer or the another programmable data processing device to work in a specific way, so the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus.
- the instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
- a computing device includes one or more processors (CPU), an input/output interface, a network interface, and a memory.
- the memory can include a non-persistent memory, a random access memory (RAM), a non-volatile memory, and/or other forms in a computer readable medium, for example, a read-only memory (ROM) or a flash memory (flash RAM).
- the computer readable medium includes persistent, non-persistent, removable, and non-removable media that can store information by using any method or technology.
- the information can be a computer readable instruction, a data structure, a program module, or other data.
- Examples of a computer storage medium include but are not limited to a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), another type of random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or another memory technology, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or another optical storage, a cassette magnetic tape, a magnetic tape/magnetic disk storage or another magnetic storage device.
- the computer storage medium can be used to store information accessible by the computing device. Based on the definition in the present specification, the computer readable medium does not include transitory media such as a modulated data signal and a carrier wave.
- the present application can be described in the general context of computer executable instructions executed by a computer, for example, a program module.
- the program module includes a routine, a program, an object, a component, a data structure, etc., for executing a specific task or implementing a specific abstract data type.
- the present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, the program module can be located in both local and remote computer storage media, including storage devices.
Claims (27)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910269207.1 | 2019-04-04 | ||
| CN201910269207.1A CN110059023B (en) | 2019-04-04 | 2019-04-04 | Method, system and equipment for refreshing cascade cache |
| PCT/CN2020/071159 WO2020199709A1 (en) | 2020-01-09 | Method and system for refreshing cascaded cache, and device |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/071159 Continuation WO2020199709A1 (en) | 2019-04-04 | 2020-01-09 | Method and system for refreshing cascaded cache, and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200320011A1 US20200320011A1 (en) | 2020-10-08 |
| US10922236B2 true US10922236B2 (en) | 2021-02-16 |
Family
ID=72663467
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/811,590 Active US10922236B2 (en) | 2019-04-04 | 2020-03-06 | Cascade cache refreshing |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10922236B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118656166B (en) * | 2024-08-19 | 2024-11-26 | 广州佳新智能科技有限公司 | Scene finance big data processing method and device |
Citations (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030233493A1 (en) * | 2002-06-15 | 2003-12-18 | Boldon John L. | Firmware installation methods and apparatus |
| US6681293B1 (en) * | 2000-08-25 | 2004-01-20 | Silicon Graphics, Inc. | Method and cache-coherence system allowing purging of mid-level cache entries without purging lower-level cache entries |
| CN1484155A (en) | 2002-08-13 | 2004-03-24 | IBM | System and method for refreshing web proxy cache server objects |
| US20040162943A1 (en) | 1999-04-22 | 2004-08-19 | International Business Machines Corporation | System and method for managing cachable entities |
| US20050015463A1 (en) * | 2003-06-23 | 2005-01-20 | Microsoft Corporation | General dependency model for invalidating cache entries |
| US20090106495A1 (en) * | 2007-10-23 | 2009-04-23 | Sun Microsystems, Inc. | Fast inter-strand data communication for processors with write-through l1 caches |
| US20090193187A1 (en) * | 2008-01-25 | 2009-07-30 | International Business Machines Corporation | Design structure for an embedded dram having multi-use refresh cycles |
| CN102156735A (en) | 2011-04-11 | 2011-08-17 | 中国有色矿业集团有限公司 | Implementation method and device of business method based on database transaction |
| US20120198164A1 (en) * | 2010-09-28 | 2012-08-02 | Texas Instruments Incorporated | Programmable Address-Based Write-Through Cache Control |
| CN102902741A (en) | 2012-09-13 | 2013-01-30 | 国网电力科学研究院 | Resource data caching and state cascading updating method for communication network |
| US8806138B1 (en) * | 2007-02-20 | 2014-08-12 | Pixar | Dynamic dependencies and parameterizations for execution and caching |
| US20150143046A1 (en) | 2013-11-21 | 2015-05-21 | Green Cache AB | Systems and methods for reducing first level cache energy by eliminating cache address tags |
| US20150293964A1 (en) * | 2012-05-18 | 2015-10-15 | Oracle International Corporation | Applications of automated discovery of template patterns based on received requests |
| US20150378780A1 (en) | 2014-06-30 | 2015-12-31 | International Business Machines Corporation | Prefetching of discontiguous storage locations in anticipation of transactional execution |
| US20160004639A1 (en) * | 2009-06-09 | 2016-01-07 | Hyperion Core Inc. | System and method for a cache in a multi-core processor |
| CN106126568A (en) | 2016-06-17 | 2016-11-16 | 杭州财人汇网络股份有限公司 | One promotes mainly formula serializing buffer memory management method and system |
| CN106202112A (en) | 2015-05-06 | 2016-12-07 | 阿里巴巴集团控股有限公司 | CACHE DIRECTORY method for refreshing and device |
| US20170168941A1 (en) * | 2015-12-11 | 2017-06-15 | Oracle International Corporation | Power saving for reverse directory |
| US20170255557A1 (en) | 2016-03-07 | 2017-09-07 | Qualcomm Incorporated | Self-healing coarse-grained snoop filter |
| US20170344481A1 (en) * | 2016-05-31 | 2017-11-30 | Salesforce.Com, Inc. | Invalidation and refresh of multi-tier distributed caches |
| US20180095895A1 (en) * | 2016-09-30 | 2018-04-05 | Intel Corporation | System and Method for Cache Replacement Using Conservative Set Dueling |
| CN108600320A (en) | 2018-03-23 | 2018-09-28 | 阿里巴巴集团控股有限公司 | A kind of data cache method, apparatus and system |
| CN108897495A (en) | 2018-06-28 | 2018-11-27 | 北京五八信息技术有限公司 | Buffering updating method, device, buffer memory device and storage medium |
| US10152422B1 (en) * | 2017-06-13 | 2018-12-11 | Seagate Technology Llc | Page-based method for optimizing cache metadata updates |
| CN109154910A (en) | 2016-05-31 | 2019-01-04 | 超威半导体公司 | Cache coherency for in-memory processing |
| US10176241B2 (en) * | 2016-04-26 | 2019-01-08 | Servicenow, Inc. | Identification and reconciliation of network resource information |
| CN109240946A (en) | 2018-09-06 | 2019-01-18 | 平安科技(深圳)有限公司 | The multi-level buffer method and terminal device of data |
| CN109446448A (en) | 2018-09-10 | 2019-03-08 | 平安科技(深圳)有限公司 | Data processing method and system |
| CN110059023A (en) | 2019-04-04 | 2019-07-26 | 阿里巴巴集团控股有限公司 | A kind of method, system and equipment refreshing cascade caching |
- 2020-03-06: US application US 16/811,590 filed; granted as US10922236B2 (status: active)
Non-Patent Citations (3)
| Title |
|---|
| Crosby et al., "BlockChain Technology: Beyond Bitcoin," Sutardja Center for Entrepreneurship & Technology Technical Report, Oct. 16, 2015, 35 pages. |
| Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System," www.bitcoin.org, 2005, 9 pages. |
| PCT International Search Report and Written Opinion in International Application No. PCT/CN2020/071159, dated Apr. 15, 2020, 22 pages (with machine translation). |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200320011A1 (en) | 2020-10-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CA3048740C (en) | Blockchain-based data processing method and device | |
| CA3048739C (en) | Blockchain-based data processing method and equipment | |
| US20200293494A1 (en) | Data synchronization methods, apparatuses, and devices | |
| CA3049831C (en) | Database state determining method and device, and consistency verifying method and device | |
| CA3047884C (en) | Method and device for sending transaction information and for consensus verification | |
| US11281661B2 (en) | Blockchain-based data processing method and device | |
| US9317339B2 (en) | Systems and methods for implementing work stealing using a configurable separation of stealable and non-stealable work items | |
| CN110059023B (en) | Method, system and equipment for refreshing cascade cache | |
| EP3606010A1 (en) | Method, apparatus and device for processing web application package | |
| JP2023502296A (en) | Method and apparatus for processing machine learning models within a web browser environment | |
| US12443355B2 (en) | Data storage methods, apparatuses, devices, and storage media | |
| CN117312394B (en) | A data access method, device, storage medium and electronic equipment | |
| US12488003B2 (en) | Data query methods and apparatuses, storage media, and electronic devices | |
| CN116822657B (en) | Method and device for accelerating model training, storage medium and electronic equipment | |
| CN109376189A (en) | Processing method, device and the equipment of batch data operation | |
| CN115774552A (en) | Configurated algorithm design method and device, electronic equipment and readable storage medium | |
| US10929191B2 (en) | Loading models on nodes having multiple model service frameworks | |
| US10922236B2 (en) | Cascade cache refreshing | |
| TWI723535B (en) | Data calculation method and engine | |
| CN113032119A (en) | Task scheduling method and device, storage medium and electronic equipment | |
| CN116126750B (en) | A method and device for data processing based on hardware characteristics | |
| CN111339117B (en) | Data processing method, device and equipment | |
| US11474837B2 (en) | Method and apparatus for efficient programming of electronic payment processing | |
| US9348621B2 (en) | Using hardware transactional memory for implementation of queue operations | |
| CN110175020B (en) | Frame property information extension method and device, frame loading method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: ALIBABA GROUP HOLDING LIMITED, CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ZHAO, YANGYANG;REEL/FRAME:052261/0951 Effective date: 20200309 |
|
| AS | Assignment |
Owner name: ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALIBABA GROUP HOLDING LIMITED;REEL/FRAME:053743/0464 Effective date: 20200826 |
|
| AS | Assignment |
Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD., CAYMAN ISLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ADVANTAGEOUS NEW TECHNOLOGIES CO., LTD.;REEL/FRAME:053754/0625 Effective date: 20200910 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |