CN110989939A - Data cache processing method, device and equipment and cache component - Google Patents

Data cache processing method, device and equipment and cache component

Info

Publication number
CN110989939A
CN110989939A (Application CN201911292361.7A)
Authority
CN
China
Prior art keywords
cache
data
level
level cache
target function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911292361.7A
Other languages
Chinese (zh)
Inventor
张鹏鹏
Current Assignee
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date
Filing date
Publication date
Application filed by Bank of China Ltd
Priority to CN201911292361.7A
Publication of CN110989939A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present specification provides a data cache processing method, apparatus, device and cache component, wherein the method comprises: an annotation is added in advance to a target function whose results need to be cached; the expiration time and refresh time of the cached data are set in the annotation, and a first-level cache and a second-level cache are set up in a cache component. The data synchronization time of the first-level and second-level caches is determined from the annotated expiration time and refresh time. When the data synchronization time is reached, the target function is automatically triggered through the second-level cache so that the cached data is automatically refreshed, and at the same time the newly obtained cache data of the second-level cache is synchronized into the first-level cache, realizing automatic data synchronization between the first-level and second-level caches. This ensures the accuracy and validity of the cached data in the first-level cache and provides an accurate data basis for subsequent data processing.

Description

Data cache processing method, device and equipment and cache component
Technical Field
The present specification belongs to the field of computer technologies, and in particular, to a data cache processing method, apparatus, device, and cache component.
Background
Data caching, which can be understood as a technology that temporarily holds data for reading and rereading, is a key technology in the computer and internet fields. As application concurrency grows, distributed and centralized caches (such as Redis) cannot keep up with peak load, so applications have begun to use local (in-JVM, hereinafter "local") caches. However, local caches have the drawback that data expiration and refresh are either unsupported or not performed actively, which easily leads to cache breakdown and cache avalanche.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a data cache processing method, apparatus, device, and cache component, which implement timely refreshing and synchronization of cache data, and improve accuracy and validity of the cache data.
In one aspect, an embodiment of the present specification provides a data caching processing method, including:
adding an annotation in advance to a target function whose results need to be cached, wherein the annotation includes: a cache expiration time and a cache refresh time;
determining a data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, wherein the data synchronization time is greater than the cache refresh time and less than the cache expiration time, the first-level cache does not support expiration and refresh of cached data, and the second-level cache does;
and when the data synchronization time is reached, triggering the second-level cache to call and execute the target function, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
Further, in some embodiments of the present specification, the read performance of the first level cache is greater than the read performance of the second level cache, and the method further includes:
when the target function is called, cache data corresponding to the target function is obtained from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
Further, in some embodiments of the present description, the method further comprises:
if no cache data corresponding to the target function exists in the second-level cache, or the cache data corresponding to the target function has expired, executing the target function, caching the execution result in the second-level cache, and synchronizing the execution result in the second-level cache into the first-level cache.
Further, in some embodiments of the present specification, the first-level cache uses a Map cache, and the second-level cache uses a Guava cache.
Further, in some embodiments of the present specification, the annotation further includes a cache name, and the method further includes:
when a cache instance is to be created, querying whether a cache instance corresponding to the cache name already exists in the cache component; if so, returning the existing cache instance, and if not, creating the cache instance.
Further, in some embodiments of the present specification, the annotation further includes at least one of the following parameters: the cache initialization capacity, the maximum cache capacity, the concurrency level, a condition the input parameters must satisfy for caching, and a condition the return value must satisfy for caching.
In another aspect, the present specification provides a data cache processing apparatus, including:
an annotation adding module, configured to add an annotation to a target function that needs to be cached, where the annotation includes: cache expiration time, cache refresh time;
a synchronization time calculation module, configured to determine data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, where the data synchronization time is greater than the cache refresh time and less than the cache expiration time, the first-level cache does not support expiration and refresh of cache data, and the second-level cache supports expiration and refresh of cache data;
and the cache data synchronization module is used for triggering the second-level cache to call and execute the target function when the data synchronization time is reached, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
Further, in some embodiments of the present specification, a read performance of the first-level cache is greater than a read performance of the second-level cache, and the apparatus further includes a cache data obtaining module, configured to:
when the target function is called, cache data corresponding to the target function is obtained from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
Further, in some embodiments of the present specification, the cache data obtaining module is further configured to:
if no cache data corresponding to the target function exists in the second-level cache, or the cache data corresponding to the target function has expired, executing the target function, caching the execution result in the second-level cache, and synchronizing the execution result in the second-level cache into the first-level cache.
Further, in some embodiments of the present specification, the annotation further includes a cache name, and the apparatus further includes a cache instance creating module, configured to:
when a cache instance is to be created, querying whether a cache instance corresponding to the cache name already exists in the cache component; if so, returning the existing cache instance, and if not, creating the cache instance.
In yet another aspect, the present specification provides a cache component, including a first-level cache, a second-level cache, and a synchronization thread, wherein:
the first-level cache adopts Map cache, and the second-level cache adopts Guava cache;
the synchronization thread is used for refreshing the cache data of the target function according to the annotation in the target function and synchronizing the cache data of the target function in the first-level cache and the second-level cache according to the annotation in the target function.
In another aspect, the present specification provides a data cache processing device, including at least one processor and a memory storing processor-executable instructions, wherein the processor implements the above data cache processing method when executing the instructions.
In yet another aspect, the present specification provides a computer-readable storage medium, on which computer instructions are stored, and when executed, the instructions implement the data caching processing method.
In the data cache processing method, apparatus, device, storage medium, and cache component provided in this specification, an annotation is added to a target function whose results need to be cached, the expiration time and refresh time of the cached data are set in the annotation, and a first-level cache and a second-level cache are set up in the cache component. The data synchronization time of the two cache levels is determined from the annotated expiration and refresh times; when it is reached, the target function is automatically triggered through the second-level cache, realizing automatic refresh of the cached data, and the newly obtained cache data of the second-level cache is simultaneously synchronized into the first-level cache. Adding annotations to the target function thus realizes automatic data synchronization between the first-level and second-level caches, ensures the accuracy and validity of the cached data in the first-level cache, and provides an accurate data basis for subsequent data processing.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present specification, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a data caching processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a data caching method according to an embodiment of the present disclosure;
fig. 3 is a schematic block diagram of an embodiment of a data cache processing apparatus provided in this specification;
FIG. 4 is a schematic structural diagram of a data cache processing apparatus according to another embodiment of the present disclosure;
fig. 5 is a block diagram of a hardware configuration of a data cache processing server in one embodiment of the present specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without inventive effort shall fall within the scope of protection of the present specification.
The caching of data is a key technology in the computer field, yet existing local caching technologies either do not support expiration and refresh of cached data, or support them but do not refresh actively, i.e., they use a lazy refresh mechanism.
The embodiment of the present specification provides a data cache processing method applicable to local caching. A first-level cache and a second-level cache are set up in a cache component, an annotation is added to each function whose data needs caching, the cache expiration time and cache refresh time are set in the annotation, and active refresh and data synchronization of the function's data across the first-level and second-level caches are performed based on the annotation. This keeps cached data up to date and ensures its accuracy and validity.
The data caching processing method in the specification can be applied to a client or a server, and the client can be an electronic device such as a smart phone, a tablet computer, a smart wearable device (smart watch and the like), a smart vehicle-mounted device and the like.
Fig. 1 is a schematic flow chart of a data caching processing method in an embodiment of the present specification, and as shown in fig. 1, the data caching processing method provided in an embodiment of the present specification may include:
step 102, adding an annotation in a target function needing data caching in advance, wherein the annotation comprises the following steps: cache expiration time, cache refresh time.
In a specific implementation process, the data caching processing method of this embodiment may be applied to an application program on a client. When the application starts, the client's local cache component may be scanned and initialized, and an annotation is added to each target function that needs data caching. A target function can be understood as a method in a computer language and may be selected according to the actual application; for example, it may be a data query function. Whether a function needs data caching can be determined in advance from its usage requirements, and an annotation is added if so. An annotation can be understood as a Java language feature, such as Java's @Override annotation marking a method that overrides a parent-class method; adding an annotation to a target function means attaching the annotation to the target function's definition. In the embodiments of the present specification, the annotation characterizes the call-and-execute handling applied to the annotated target function.
The annotations in the embodiments of the present specification may include a cache expiration time and a cache refresh time. The cache expiration time can be understood as the expiration time of the annotated target function's cached data, and the cache refresh time as how often the cached data is refreshed. The cache refresh time is generally less than the cache expiration time.
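As a concrete illustration, the annotation described above might be declared as follows. This is a hypothetical sketch: the member names mirror the @BocCacheable example given later in this document, but the exact declaration is an assumption.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationSketch {

    // Hypothetical shape of the cache annotation; RUNTIME retention lets the
    // cache component read it back via reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface BocCacheable {
        String cacheName();
        long expireAfterWriteMs();   // cache expiration time
        long refreshAfterWriteMs();  // cache refresh time (generally < expiration)
    }

    @BocCacheable(cacheName = "demo", expireAfterWriteMs = 60_000, refreshAfterWriteMs = 40_000)
    static String query() {
        return "result";
    }

    // Read the annotation back via reflection, as a cache component would.
    static String describe() {
        try {
            Method m = AnnotationSketch.class.getDeclaredMethod("query");
            BocCacheable a = m.getAnnotation(BocCacheable.class);
            return a.cacheName() + " " + a.expireAfterWriteMs() + " " + a.refreshAfterWriteMs();
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(describe());
    }
}
```

The cache component only needs the two time values and the name; everything else about the target function stays unchanged.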
And step 104, determining the data synchronization time of a first-level cache and a second-level cache in the cache assembly according to the cache expiration time and the cache refreshing time, wherein the data synchronization time is longer than the cache refreshing time and is shorter than the cache expiration time.
In a specific implementation process, the data synchronization time of the first-level and second-level caches can be calculated from the cache expiration time and cache refresh time in the annotation. The data synchronization time characterizes how often cached data in the second-level cache is synchronized into the first-level cache. An appropriate value may be determined by statistical analysis of experimental data or computed with a machine learning model. The data synchronization time is generally greater than the cache refresh time and less than the cache expiration time: keeping the synchronization interval above the cache refresh time ensures that the value taken from the second-level cache during synchronization is the latest one, and keeping it below the cache expiration time ensures that the value held in the first-level cache has not expired.
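The text only constrains the synchronization time to lie strictly between the refresh time and the expiration time. One plausible rule — an assumption on my part, though it matches the 40 s / 60 s / 50 s example given later — is the midpoint of the two:

```java
public class SyncTime {
    // One plausible choice (an assumption; the text only requires
    // refresh < sync < expire): take the midpoint of the two times.
    static long dataSyncTimeMs(long refreshMs, long expireMs) {
        if (refreshMs >= expireMs) {
            throw new IllegalArgumentException("refresh time must be less than expiration time");
        }
        return (refreshMs + expireMs) / 2;
    }

    public static void main(String[] args) {
        // 40 s refresh, 60 s expiration -> 50 s synchronization interval
        System.out.println(dataSyncTimeMs(40_000, 60_000));
    }
}
```

Any rule satisfying the two inequalities would do; the document also allows the value to come from experiment statistics or a learned model.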
The first-level cache generally has better query performance; generally the first-level cache does not support expiration and refresh of cached data, while the second-level cache does. In some embodiments of the present specification, the first-level cache may be a Map cache and the second-level cache a Guava cache. A Map is a Java data structure that does not support expiration or refresh; used as a read-only cache, it has better query performance. The Guava cache comes from Google's open-source Guava framework, whose built-in cache module supports expiration and refresh and can be both read and written.
The cache component in some embodiments of the present description may be a pluggable plug-in that is non-intrusive to the application. By adopting two-level cache of Map and Guava and combining notes, the defects that Guava cannot actively refresh and Map does not support expiration and refresh can be solved, and a local cache function is provided in a plug-in mode.
And 106, when the data synchronization time is reached, triggering the second-level cache to call and execute the target function, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
In a specific implementation process, because the first-level cache has better read performance but does not support expiration and refresh, data in it may become stale, making the cached data inaccurate or unusable. After the data synchronization time of the two cache levels is determined, the refresh function of the second-level cache is triggered whenever the data synchronization time is reached: the second-level cache automatically calls and executes the annotated target function, caches the execution result as the target function's cache data in the second-level cache, and simultaneously synchronizes that execution result into the first-level cache, ensuring that the data in the first-level cache is the latest.
For example: an annotation is added to target function A, with a cache expiration time of 60 seconds and a cache refresh time of 40 seconds set in the annotation; based on these, the data synchronization time of the first-level and second-level caches is determined empirically as 50 seconds. The second-level cache may then call and execute target function A once every 50 seconds, obtain the latest execution result and cache it in the second-level cache, while synchronizing that result into the first-level cache. That is, the refresh mechanism of the second-level cache is triggered once per data synchronization interval to obtain the latest cache data, which is then synchronized into the first-level cache, so that the first-level cache always holds the latest data.
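The periodic refresh-and-sync step above can be sketched with the standard library alone. This is a minimal approximation of the assumed structure — the patent's second level is a Guava cache, which is not in the JDK, so both levels are plain concurrent maps here:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class TwoLevelSync {
    // Stand-ins for the two cache levels (the real second level is Guava).
    final Map<String, Object> level1 = new ConcurrentHashMap<>();
    final Map<String, Object> level2 = new ConcurrentHashMap<>();
    final ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();

    // One refresh step: re-execute the target function, cache the result in
    // the second level, then synchronize it into the first level.
    void syncOnce(String key, Supplier<Object> targetFunction) {
        Object fresh = targetFunction.get();
        level2.put(key, fresh);
        level1.put(key, fresh);
    }

    // Trigger the step once per data synchronization interval
    // (50 s in the example above).
    void schedule(String key, Supplier<Object> targetFunction, long syncMs) {
        pool.scheduleAtFixedRate(() -> syncOnce(key, targetFunction),
                syncMs, syncMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        TwoLevelSync s = new TwoLevelSync();
        s.syncOnce("user:42", () -> "fresh-value"); // one manual refresh step
        System.out.println(s.level1.get("user:42"));
        s.pool.shutdown();
    }
}
```

The production version described in the document uses a global asynchronous thread pool for this scheduling rather than a per-cache executor.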
In the data caching processing method provided in the embodiments of the present specification, an annotation is added to a target function whose results need to be cached, the expiration time and refresh time of the cached data are set in the annotation, and a first-level cache and a second-level cache are set up in a cache component. The data synchronization time of the two cache levels is determined from the annotated expiration and refresh times; when it is reached, the target function is automatically triggered through the second-level cache so that the cached data is automatically refreshed, and the newly obtained cache data of the second-level cache is simultaneously synchronized into the first-level cache, realizing automatic data synchronization between the first-level and second-level caches. This ensures the accuracy and validity of the cached data in the first-level cache and provides an accurate data basis for subsequent data processing.
On the basis of the foregoing embodiments, in some embodiments of this specification, the read performance of the first-level cache is greater than the read performance of the second-level cache, and the method may further include:
when the target function is called, cache data corresponding to the target function is obtained from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
In a specific implementation process, when the annotated target function is called for the first time, its execution result may be cached directly in the second-level cache. On subsequent calls, the cache data corresponding to the target function is first fetched from the first-level cache and, if found, returned directly. If the first-level cache has no cache data for the target function, or the cached value is null, the value is taken from the second-level cache; if the second-level cache has a value, that value is returned and also synchronized into the first-level cache.
The data cache processing method provided by the embodiments of the specification caches the return value of the annotated target function when it is called for the first time, and on subsequent calls queries the target function's cache data from the first-level cache, whose read performance is higher, improving the query rate of cached data. When the cache data is not in the first-level cache, it is queried from the second-level cache; once found there, the queried data is returned and simultaneously synchronized into the first-level cache, so that the return value can be obtained quickly the next time the function is called, improving data processing speed.
On the basis of the foregoing embodiments, in some embodiments of the present specification, the method may further include:
if no cache data corresponding to the target function exists in the second-level cache, or the cache data corresponding to the target function has expired, executing the target function, caching the execution result in the second-level cache, and synchronizing the execution result in the second-level cache into the first-level cache.
In a specific implementation process, when the target function is not called for the first time, no cache data for it is found in the first-level cache, and the second-level cache has no value (or the value has expired), the annotated target function is executed, its execution result is cached in the second-level cache, and that result is synchronized into the first-level cache, so that an accurate return value can be obtained quickly from the first-level cache the next time the target function is called, improving data processing speed.
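The full lookup path — first level, then second level, then executing the target function on a miss — can be sketched as follows. A minimal stdlib sketch under the same assumption as before (plain maps standing in for the Map/Guava pair, with no expiration logic):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class TwoLevelRead {
    final Map<String, Object> level1 = new ConcurrentHashMap<>(); // fast, read-only style
    final Map<String, Object> level2 = new ConcurrentHashMap<>(); // refreshable level

    Object get(String key, Supplier<Object> targetFunction) {
        Object v = level1.get(key);
        if (v != null) {
            return v;                      // hit in the first level: return directly
        }
        v = level2.get(key);
        if (v != null) {
            level1.put(key, v);            // second-level hit: synchronize into level 1
            return v;
        }
        v = targetFunction.get();          // miss (or expired): execute the target function
        level2.put(key, v);                // cache the execution result in level 2 ...
        level1.put(key, v);                // ... and synchronize it into level 1
        return v;
    }

    public static void main(String[] args) {
        TwoLevelRead c = new TwoLevelRead();
        System.out.println(c.get("k", () -> "computed")); // first call executes the function
        System.out.println(c.get("k", () -> "never"));    // second call is served from level 1
    }
}
```

Note that expiration of the second level is omitted here; in the document it is handled by the Guava cache itself.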
On the basis of the foregoing embodiments, in some embodiments of the present specification, the annotation further includes a cache name, and the method further includes:
when a cache instance is created, inquiring whether the cache instance corresponding to the cache name of the created cache instance is already in the cache assembly, if so, returning the cache instance in the cache assembly, and if not, creating the cache instance.
In a specific implementation process, a cache name usually corresponds to one cache class instance. To avoid repeatedly creating the same class instance, a lock can be taken before creation and a check made for whether the instance already exists; only if it does not is a new instance created. When setting up a cache, whether a cache instance corresponding to the cache name (the cache name being a unique identifier) already exists in the JVM is checked first, and if it exists it is returned directly, avoiding the memory leak caused by repeatedly creating cache entities.
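The check-then-create step above can be sketched with a name-keyed registry. In this sketch (class and method names are my own), `ConcurrentHashMap.computeIfAbsent` plays the role of the lock described in the text, making the existence check and the creation a single atomic step:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheRegistry {
    // cacheName is the unique identifier; this map stands in for the
    // JVM-wide instance table the text describes.
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    // computeIfAbsent performs "lock, check, create only if absent"
    // atomically, so the same name never yields two instances.
    static Object getOrCreate(String cacheName) {
        return INSTANCES.computeIfAbsent(cacheName, name -> new Object());
    }

    public static void main(String[] args) {
        // The same name returns the same instance; no duplicate entities leak.
        System.out.println(getOrCreate("channelAuthorization") == getOrCreate("channelAuthorization"));
    }
}
```

A real cache component would store cache objects rather than bare `Object`s, but the reuse-by-name guarantee is the same.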
On the basis of the above embodiments, in some embodiments of the present specification, the annotation may further include at least one of the following parameters: the cache initialization capacity, the cache maximum capacity, the parallelism, the parameter-in cache satisfying conditions and the parameter-out return value cache satisfying conditions.
The cache initialization capacity sets the initial size of the cache, pre-allocated at initialization. Because cached data is stored in memory and the server's memory is fixed, the maximum cache capacity can be set to bound the cache's size and keep the server running normally. The concurrency level can be understood as a parameter of the Guava second-level cache: since that cache is read-write, writes must be locked for thread safety, and the concurrency level controls the granularity of the lock segments, with a higher concurrency level meaning more segments. The condition on the input parameters is a precondition for caching the return value of the method (i.e., the target function): for example, if the condition excludes the argument value a = 2, then a call with a = 2 is not cached while other calls are. Similarly, the condition on the return value is a precondition on the method's execution result. Adding these cache parameters to the annotation allows flexible caching of cache data, and users can customize the various cache parameters to meet different requirements.
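The two condition parameters can be illustrated as predicates gating the cache write. The semantics here are my reading of the (garbled) example in the text — cache only when both the argument condition and the return-value condition hold:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;

public class ConditionalCaching {
    // Assumed semantics: the execution result is stored only when both the
    // input-parameter condition and the return-value condition are satisfied;
    // otherwise the result is returned without being cached.
    static <A, R> R call(Map<A, R> cache, A arg, Function<A, R> targetFunction,
                         Predicate<A> argCondition, Predicate<R> resultCondition) {
        R cached = cache.get(arg);
        if (cached != null) {
            return cached;
        }
        R result = targetFunction.apply(arg);
        if (argCondition.test(arg) && resultCondition.test(result)) {
            cache.put(arg, result);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = new ConcurrentHashMap<>();
        // Mirrors the text's example: results are not cached when a == 2.
        call(cache, 2, a -> "r" + a, a -> a != 2, r -> true);
        call(cache, 1, a -> "r" + a, a -> a != 2, r -> true);
        System.out.println(cache.keySet()); // only the a = 1 result was cached
    }
}
```

In the real component these predicates would be parsed from annotation attributes (e.g. SpEL-style expressions) rather than passed as lambdas.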
Fig. 2 is a schematic diagram of a data caching method in an embodiment of the present disclosure. As shown in Fig. 2, the cache component in this embodiment contains two main data structures: a Map (first-level cache, read-only) and Guava (second-level cache, read-write). The annotated method (i.e., the annotated target function) is handled with aspect-oriented programming via Spring AOP: first, the cache expiration time and cache refresh time defined in the annotation are obtained; then the data synchronization interval (period) between the first-level Map cache and the second-level Guava cache is calculated from them; finally, a global asynchronous thread pool synchronizes the data in the first-level and second-level caches at that period. Aspect-oriented programming can be understood as a technique that achieves unified maintenance of program functionality through pre-compilation and run-time dynamic proxies.
When the annotated method is called, the value is first taken from the first-level cache and returned directly if present. If it is absent or null, the value is taken from the second-level cache; if present there, it is returned and synchronized into the first-level cache. If the second-level cache also has no value (or the value has expired), the annotated method is executed, and the execution result is cached in the second-level cache and synchronized into the first-level cache.
A developer needs only the following two steps to use the cache component of the embodiments of the present specification:
1. Add the dependency to the pom file:
<dependency>
<groupId>com.bocsoft.bocop</groupId>
<artifactId>cache</artifactId>
<version>${cache.version}</version>
</dependency>
2. Add the @BocCacheable annotation to a method requiring caching, and configure the cache name, cache expiration time, and cache refresh time as needed; see the following example:
@BocCacheable(cacheName = "channelAuthorization", key = "#p0+#p1",
        expireAfterWriteMs = "600000", refreshAfterWriteMs = "540000")
public Map<String, String> authenticationByChannelAndServiceId(String serviceId, String channel) {
    log.debug("channelAuthorizationMapper queries channel interface rights by {} - {}", serviceId, channel);
    Map<String, String> result =
            channelAuthorizationMapper.authenticationByChannelAndServiceId(serviceId, channel);
    return result;
}
Referring to the description of the above embodiments, some embodiments of the present disclosure may provide a cache component, and the cache component may include: a first-level cache, a second-level cache, and a synchronization thread, wherein:
the first-level cache adopts Map cache, and the second-level cache adopts Guava cache; namely, the first-level cache is of a Map data structure and has high reading performance, and the second-level cache Guava mainly manages the expiration of cache data.
The synchronization thread is used for refreshing the cache data of the target function and synchronizing the cache data of the target function between the first-level cache and the second-level cache according to the annotation in the target function. The methods for data synchronization and data refresh can refer to the description of the above embodiments and are not repeated here.
The cache component in the embodiment of the present specification mainly uses Spring AOP, Java reflection, dynamic proxy, and asynchronous thread pool technologies: it calls the annotated method by reflection inside the aspect to cache the method's return value, and periodically synchronizes the data in the first-level and second-level caches using the asynchronous thread pool. The data caching method provided by the embodiment of the specification can realize data caching, automatic synchronization of cached data, and query and acquisition of cached data simply by adding annotations to functions requiring caching, providing an accurate data basis for subsequent data processing and improving data processing efficiency.
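The reflective interception idea can be illustrated with a JDK dynamic proxy; the real component does this in a Spring AOP aspect, so this self-contained stand-in, including the class name and the demo interface, is an assumption for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative interception sketch: the proxy caches a method's return value,
// keyed by method name and arguments, and only invokes the real implementation
// on a miss — the same shape as the aspect's reflective call described above.
public class CachingProxy implements InvocationHandler {

    // Hypothetical demo interface for the usage example below.
    public interface Service {
        String lookup(String id);
    }

    private final Object target;
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    private CachingProxy(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, new CachingProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        String key = method.getName()
                + Arrays.deepToString(args == null ? new Object[0] : args);
        Object cached = cache.get(key);
        if (cached != null) {
            return cached;                           // cache hit: skip the real call
        }
        Object result = method.invoke(target, args); // reflective call, as in the aspect
        if (result != null) {
            cache.put(key, result);
        }
        return result;
    }
}
```

Wrapping a service this way means repeated calls with the same arguments hit the cache instead of the underlying implementation.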
In the present specification, each embodiment of the method is described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The relevant points can be obtained by referring to the partial description of the method embodiment.
Based on the data cache processing method, one or more embodiments of the present specification further provide a data cache processing apparatus. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of the present specification in conjunction with any necessary apparatus to implement the hardware. Based on the same innovative conception, embodiments of the present specification provide an apparatus as described in the following embodiments. Since the implementation scheme of the apparatus for solving the problem is similar to that of the method, the specific apparatus implementation in the embodiment of the present specification may refer to the implementation of the foregoing method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Specifically, fig. 3 is a schematic block structure diagram of an embodiment of a data cache processing apparatus provided in this specification, and as shown in fig. 3, the data cache processing apparatus provided in this specification may include: an annotation adding module 31, a synchronization time calculating module 32, and a cache data synchronizing module 33, wherein:
the annotation adding module 31 may be configured to add an annotation to the target function that needs to be cached, where the annotation includes: cache expiration time, cache refresh time;
the synchronization time calculation module 32 may be configured to determine data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, where the data synchronization time is greater than the cache refresh time and is less than the cache expiration time, where the first-level cache does not support expiration and refresh of cache data, and the second-level cache supports expiration and refresh of cache data;
the cache data synchronization module 33 may be configured to trigger the second-level cache to call and execute the target function when the data synchronization time is reached, cache an execution result in the second-level cache as cache data corresponding to the target function, and synchronize the execution result in the second-level cache to the first-level cache.
The data cache processing device provided in the embodiments of the present specification adds an annotation to a target function that needs to be cached, sets expiration time and refresh time of cache data in the annotation, and sets a first-level cache and a second-level cache in a cache component. And determining the data synchronization time of the first-level cache and the second-level cache according to the annotated expiration time and the refresh time, automatically triggering a target function when the data synchronization time is reached, realizing the automatic refresh of the cache data, and simultaneously synchronizing the latest obtained cache data into the first-level cache, thereby realizing the automatic data synchronization of the first-level cache and the second-level cache. The accuracy and the effectiveness of the cache data in the first-level cache are ensured, and an accurate data basis is provided for subsequent data processing.
Fig. 4 is a schematic structural diagram of a data cache processing apparatus in another embodiment of this specification, and as shown in fig. 4, on the basis of the foregoing embodiment, in some embodiments of this specification, the read performance of the first-level cache is greater than the read performance of the second-level cache, and the apparatus further includes a cache data obtaining module 41, configured to:
when the target function is called, cache data corresponding to the target function is obtained from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
The data cache processing device provided in the embodiments of the present specification caches the return value of an annotated target function when the function is first called; on subsequent calls, it first queries the cache data of the target function from the first-level cache, which has higher read performance, thereby increasing the query rate. When the cache data does not exist in the first-level cache, it is queried from the second-level cache; if found there, the queried data is returned and simultaneously synchronized into the first-level cache, so that a return value can be obtained quickly the next time the function is called, improving the data processing rate.
On the basis of the foregoing embodiments, in some embodiments of the present specification, the cache data obtaining module 41 is further configured to:
if the cache data corresponding to the target function does not exist in the second-level cache or the cache data corresponding to the target function has expired, executing the target function, caching the execution result into the second-level cache, and synchronizing the execution result in the second-level cache into the first-level cache.
In the embodiment of the present specification, if the cache data corresponding to the target function is not found in the first-level cache, and no value exists in the second-level cache (or the value has expired), the annotated target function is executed and its execution result is cached in the second-level cache; meanwhile, the execution result in the second-level cache is synchronized to the first-level cache, so that a return value can be obtained quickly the next time the function is called, improving the data processing rate.
On the basis of the foregoing embodiments, in some embodiments of the present specification, the annotation further includes a cache name, and the apparatus further includes a cache instance creating module, configured to:
when a cache instance is created, inquiring whether the cache instance corresponding to the cache name of the created cache instance is already in the cache assembly, if so, returning the cache instance in the cache assembly, and if not, creating the cache instance.
In the embodiments of the present description, before a cache instance is created, it is first checked whether a cache instance corresponding to the cache name already exists in the JVM (the cache name is a unique identifier); if so, that instance is returned directly, avoiding the memory leak caused by repeatedly creating cache instances.
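The name-keyed instance reuse described above can be sketched as follows (class and method names are illustrative; `computeIfAbsent` makes the check-then-create step atomic per cache name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative registry of cache instances keyed by cache name: an existing
// instance is returned instead of creating a new one, which prevents the leak
// from repeatedly created instances mentioned above. The value type is a
// placeholder; the real component would store its cache objects here.
public class CacheRegistry {
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    public static Object getOrCreate(String cacheName) {
        // Atomically: return the existing instance, or create exactly one.
        return INSTANCES.computeIfAbsent(cacheName, name -> new Object());
    }
}
```

Two lookups under the same name therefore return the same instance, matching the uniqueness guarantee described above.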
It should be noted that the above-described apparatus may also include other embodiments according to the description of the method embodiment. The specific implementation manner may refer to the description of the above corresponding method embodiment, and is not described in detail herein.
An embodiment of the present specification further provides a data cache processing device, including: at least one processor and a memory for storing processor-executable instructions, where the processor executes the instructions to implement the data caching method in the foregoing embodiments, for example:
adding an annotation in advance to a target function requiring data caching, wherein the annotation comprises: a cache expiration time and a cache refresh time;
determining a data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, wherein the data synchronization time is longer than the cache refresh time and shorter than the cache expiration time, the first-level cache does not support expiration and refresh of cache data, and the second-level cache supports expiration and refresh of cache data;
and when the data synchronization time is reached, executing the target function, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
It should be noted that the above-mentioned processing device may also include other implementations according to the description of the method embodiment. The specific implementation manner may refer to the description of the above corresponding method embodiment, and is not described in detail herein.
The data cache processing device or processing equipment provided by the specification can also be applied to various data analysis processing systems. Such a system, apparatus, or processing device may include any one of the data cache processing apparatuses in the above embodiments. The system, apparatus, or processing device may be a single server, or may include a server cluster, a system (including a distributed system), software (applications), an actual operation device, a logic gate device, a quantum computer, etc. using one or more of the methods or one or more of the embodiments of the present disclosure, together with the necessary terminal devices and hardware. Such a system may comprise at least one processor and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of the method of any one or more of the embodiments described above.
The method embodiments provided by the embodiments of the present specification can be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking the data caching method as an example, fig. 5 is a block diagram of the hardware structure of a data caching processing server in an embodiment of the present disclosure, where the server may be the data caching processing device or system in the foregoing embodiments. As shown in fig. 5, the server 10 may include one or more (only one shown) processors 100 (the processors 100 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 200 for storing data, and a transmission module 300 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 5 is merely illustrative and is not intended to limit the structure of the electronic device. For example, the server 10 may also include more or fewer components than shown in fig. 5, may include other processing hardware such as a database, a multi-level cache, or a GPU, or may have a different configuration than shown in fig. 5.
The memory 200 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the data caching processing method in the embodiment of the present specification, and the processor 100 executes various functional applications and resource data updates by executing the software programs and modules stored in the memory 200. Memory 200 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 200 may further include memory located remotely from processor 100, which may be connected to a computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission module 300 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission module 300 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The method or apparatus provided by the present specification and described in the foregoing embodiments may implement service logic through a computer program and record the service logic on a storage medium, where the storage medium may be read and executed by a computer, so as to implement the effect of the solution described in the embodiments of the present specification.
The embodiment of the present application further provides a computer storage medium of a data caching processing method, where the computer storage medium stores computer program instructions, and when the computer program instructions are executed, the computer storage medium may implement:
adding an annotation in advance to a target function requiring data caching, wherein the annotation comprises: a cache expiration time and a cache refresh time;
determining a data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, wherein the data synchronization time is longer than the cache refresh time and shorter than the cache expiration time, the first-level cache does not support expiration and refresh of cache data, and the second-level cache supports expiration and refresh of cache data;
and when the data synchronization time is reached, executing the target function, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
The storage medium may include a physical device for storing information, and typically, the information is digitized and then stored using an electrical, magnetic, or optical media. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM, ROM, etc.; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and usb disks; devices that store information optically, such as CDs or DVDs. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth.
The data caching method and apparatus provided in the embodiments of the present specification may be implemented in a computer by a processor executing corresponding program instructions, for example, on a PC using the C++ language under the Windows operating system, under a Linux system, on an intelligent terminal using the Android or iOS programming languages, in processing logic based on a quantum computer, or the like.
It should be noted that descriptions of the apparatus, the computer storage medium, and the system described above according to the related method embodiments may also include other embodiments, and specific implementations may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to only the partial description of the method embodiment.
The embodiments of the present description are not limited to what must be consistent with industry communications standards, standard computer resource data updating and data storage rules, or what is described in one or more embodiments of the present description. Certain industry standards, or implementations modified slightly from those described using custom modes or examples, may also achieve the same, equivalent, or similar, or other, contemplated implementations of the above-described examples. The embodiments using the modified or transformed data acquisition, storage, judgment, processing and the like can still fall within the scope of the alternative embodiments of the embodiments in this specification.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a process flow). However, as technology advances, many of today's process-flow improvements can be seen as direct improvements in hardware circuit architecture. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into the hardware circuit. Thus, it cannot be said that an improvement in a process flow cannot be realized by hardware physical modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user programming the device. A digital system is "integrated" on a PLD by the designer's own programming, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually making an integrated circuit chip, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code before compiling must also be written in a specific programming language, called a Hardware Description Language (HDL). There is not only one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered structures within the hardware component. Or even the means for performing the functions may be regarded as both software modules for performing the method and structures within a hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When the device or the end product in practice executes, it can execute sequentially or in parallel according to the method shown in the embodiment or the figures (for example, in the environment of parallel processors or multi-thread processing, even in the environment of distributed resource data update). The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. The terms first, second, etc. are used to denote names, but not any particular order.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable resource data updating apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable resource data updating apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable resource data update apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable resource data update apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, and the relevant points can be referred to only part of the description of the method embodiments. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims.

Claims (13)

1. A data caching method, the method comprising:
adding, in advance, an annotation to a target function whose results need to be cached, wherein the annotation comprises: a cache expiration time and a cache refresh time;
determining a data synchronization time for a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, wherein the data synchronization time is greater than the cache refresh time and less than the cache expiration time, the first-level cache does not support expiration and refreshing of cache data, and the second-level cache supports expiration and refreshing of cache data;
and when the data synchronization time is reached, triggering the second-level cache to call and execute the target function, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
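Stated as code, the annotation of claim 1 and the constraint cache refresh time &lt; data synchronization time &lt; cache expiration time might look like the following stdlib-only Java sketch. The annotation name `Cached`, the attribute names, and the midpoint choice for the synchronization time are illustrative assumptions, not part of the claim (any value strictly inside the open interval satisfies it):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation carrying the two times from claim 1.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cached {
    long expireSeconds();   // cache expiration time
    long refreshSeconds();  // cache refresh time
}

class SyncTimeCalculator {
    // Pick a synchronization time strictly between refresh and expiration;
    // the midpoint is one simple choice (assumes expire - refresh >= 2s).
    static long syncSeconds(long refreshSeconds, long expireSeconds) {
        if (refreshSeconds >= expireSeconds) {
            throw new IllegalArgumentException("refresh must be less than expire");
        }
        return refreshSeconds + (expireSeconds - refreshSeconds) / 2;
    }
}
```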
2. The method of claim 1, wherein the read performance of the first-level cache is higher than that of the second-level cache, the method further comprising:
obtaining, when the target function is called, cache data corresponding to the target function from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
3. The method of claim 2, wherein the method further comprises:
if the cache data corresponding to the target function does not exist in the second-level cache, or the cache data corresponding to the target function has expired, executing the target function, caching the execution result in the second-level cache, and synchronizing the execution result from the second-level cache into the first-level cache.
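Outside the claim language, the read path of claims 2 and 3 can be sketched as a single lookup routine. The sketch below is stdlib-only Java: the Guava second-level cache is stood in for by a plain map with per-entry write timestamps, and all names (`TwoLevelCache`, etc.) are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of the read path in claims 2 and 3. The real component
// uses a Map as L1 and a Guava cache as L2; here L2 is modelled with a
// plain map plus per-entry timestamps so the example stays stdlib-only.
class TwoLevelCache {
    private final Map<String, Object> l1 = new ConcurrentHashMap<>();
    private final Map<String, Object> l2 = new ConcurrentHashMap<>();
    private final Map<String, Long> l2WriteTime = new ConcurrentHashMap<>();
    private final long expireMillis;

    TwoLevelCache(long expireMillis) { this.expireMillis = expireMillis; }

    Object get(String key, Supplier<Object> targetFunction) {
        Object value = l1.get(key);               // 1. try the fast L1 cache
        if (value != null) return value;
        value = l2.get(key);                      // 2. fall back to L2
        boolean expired = value != null
            && System.currentTimeMillis() - l2WriteTime.get(key) > expireMillis;
        if (value == null || expired) {           // 3. miss or expired: run the target function
            value = targetFunction.get();
            l2.put(key, value);
            l2WriteTime.put(key, System.currentTimeMillis());
        }
        l1.put(key, value);                       // 4. backfill L1 in every fallback case
        return value;
    }
}
```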
4. The method of claim 1, wherein the first level cache is a Map cache and the second level cache is a Guava cache.
5. The method of claim 1, wherein the annotation further comprises a cache name, the method further comprising:
when creating a cache instance, querying whether a cache instance corresponding to the cache name already exists in the cache component; if so, returning the existing cache instance; if not, creating a new cache instance.
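Claim 5's name-keyed reuse of cache instances is the classic register-or-return pattern; a minimal stdlib-only Java sketch (class and method names illustrative) using `ConcurrentHashMap.computeIfAbsent`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of claim 5: cache instances are registered under the cache name
// from the annotation, and creating an instance with an existing name
// returns the registered one instead of building a duplicate.
class CacheRegistry {
    private final Map<String, Object> instances = new ConcurrentHashMap<>();

    Object getOrCreate(String cacheName) {
        // computeIfAbsent runs the factory only when the name is unseen,
        // so repeated calls with one name always yield the same instance.
        return instances.computeIfAbsent(cacheName, name -> new Object());
    }
}
```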
6. The method of claim 1, wherein the annotation further comprises at least one of: a cache initialization capacity, a maximum cache capacity, a parallelism level, a condition that the input parameters must satisfy for caching, and a condition that the return value must satisfy for caching.
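For reference, the optional attributes of claim 6 correspond closely to options on Guava's `CacheBuilder` API. The configuration sketch below (which requires the third-party Guava library on the classpath, and whose concrete values are placeholders) shows one plausible mapping; the two conditional-caching attributes have no direct `CacheBuilder` equivalent and would be checked by the component itself:

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Illustrative mapping of the claim-6 annotation attributes onto Guava
// CacheBuilder options for the second-level cache.
class SecondLevelCacheFactory {
    static LoadingCache<String, Object> build(CacheLoader<String, Object> loader) {
        return CacheBuilder.newBuilder()
                .initialCapacity(16)                      // cache initialization capacity
                .maximumSize(10_000)                      // cache maximum capacity
                .concurrencyLevel(4)                      // parallelism
                .expireAfterWrite(120, TimeUnit.SECONDS)  // cache expiration time
                .refreshAfterWrite(60, TimeUnit.SECONDS)  // cache refresh time
                .build(loader);
    }
}
```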
7. A data cache processing apparatus, the apparatus comprising:
an annotation adding module, configured to add an annotation to a target function whose results need to be cached, wherein the annotation comprises: a cache expiration time and a cache refresh time;
a synchronization time calculation module, configured to determine data synchronization time of a first-level cache and a second-level cache in a cache component according to the cache expiration time and the cache refresh time, where the data synchronization time is greater than the cache refresh time and less than the cache expiration time, the first-level cache does not support expiration and refresh of cache data, and the second-level cache supports expiration and refresh of cache data;
and the cache data synchronization module is used for triggering the second-level cache to call and execute the target function when the data synchronization time is reached, caching an execution result into the second-level cache as cache data corresponding to the target function, and synchronizing the execution result in the second-level cache into the first-level cache.
8. The apparatus of claim 7, wherein the read performance of the first-level cache is higher than that of the second-level cache, the apparatus further comprising a cache data acquisition module configured for:
obtaining, when the target function is called, cache data corresponding to the target function from the first-level cache;
if the first-level cache does not have cache data corresponding to the target function, obtaining the cache data corresponding to the target function from the second-level cache;
if the cache data corresponding to the target function exists in the second-level cache, returning the cache data, and synchronizing the cache data to the first-level cache.
9. The apparatus of claim 8, wherein the cache data acquisition module is further to:
if the cache data corresponding to the target function does not exist in the second-level cache, or the cache data corresponding to the target function has expired, executing the target function, caching the execution result in the second-level cache, and synchronizing the execution result from the second-level cache into the first-level cache.
10. The apparatus of claim 7, wherein the annotation further comprises a cache name, the apparatus further comprising a cache instance creation module to:
when creating a cache instance, querying whether a cache instance corresponding to the cache name already exists in the cache component; if so, returning the existing cache instance; if not, creating a new cache instance.
11. A cache component, comprising: a first-level cache, a second-level cache, and a synchronization thread, wherein:
the first-level cache is a Map cache, and the second-level cache is a Guava cache;
the synchronization thread is configured to refresh the cache data of a target function according to an annotation in the target function, and to synchronize the cache data of the target function between the first-level cache and the second-level cache.
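The behaviour of the synchronization thread in claim 11 can be sketched in stdlib-only Java. One refresh cycle (re-run the target function, write the second-level cache, copy the result into the first-level cache) is factored into `refreshOnce()` so a scheduler can drive it at the computed data synchronization time; a plain map again stands in for the Guava cache, and all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of the synchronization thread in claim 11.
class CacheSynchronizer {
    final Map<String, Object> l1 = new ConcurrentHashMap<>(); // stands in for the Map cache
    final Map<String, Object> l2 = new ConcurrentHashMap<>(); // stands in for the Guava cache

    // One synchronization cycle: call and execute the target function,
    // cache the result in L2, then synchronize L2 into L1.
    void refreshOnce(String key, Supplier<Object> targetFunction) {
        Object result = targetFunction.get();
        l2.put(key, result);
        l1.put(key, l2.get(key));
    }

    // Drive the cycle periodically at the data synchronization time.
    ScheduledExecutorService start(String key, Supplier<Object> fn, long syncSeconds) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(() -> refreshOnce(key, fn),
                                 syncSeconds, syncSeconds, TimeUnit.SECONDS);
        return pool;
    }
}
```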
12. A data cache processing apparatus, comprising: at least one processor and a memory for storing processor-executable instructions, the processor implementing the method of any one of claims 1-6 when executing the instructions.
13. A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 6.
CN201911292361.7A 2019-12-16 2019-12-16 Data cache processing method, device and equipment and cache component Pending CN110989939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911292361.7A CN110989939A (en) 2019-12-16 2019-12-16 Data cache processing method, device and equipment and cache component

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911292361.7A CN110989939A (en) 2019-12-16 2019-12-16 Data cache processing method, device and equipment and cache component

Publications (1)

Publication Number Publication Date
CN110989939A true CN110989939A (en) 2020-04-10

Family

ID=70093957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911292361.7A Pending CN110989939A (en) 2019-12-16 2019-12-16 Data cache processing method, device and equipment and cache component

Country Status (1)

Country Link
CN (1) CN110989939A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684358A (en) * 2017-10-18 2019-04-26 北京京东尚科信息技术有限公司 The method and apparatus of data query
CN109947668A (en) * 2017-12-21 2019-06-28 北京京东尚科信息技术有限公司 The method and apparatus of storing data
WO2019157929A1 (en) * 2018-02-13 2019-08-22 阿里巴巴集团控股有限公司 File processing method, device, and equipment
CN109614404A (en) * 2018-11-01 2019-04-12 阿里巴巴集团控股有限公司 A kind of data buffering system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Yunpeng; Ma Yanfang: "Research on Data Caching Technology Based on Hibernate" *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522879A (en) * 2020-04-16 2020-08-11 北京雷石天地电子技术有限公司 Data distribution method based on cache and electronic equipment
CN111522879B (en) * 2020-04-16 2023-09-29 北京雷石天地电子技术有限公司 Data distribution method based on cache and electronic equipment
CN111782698A (en) * 2020-07-03 2020-10-16 广州探途网络技术有限公司 Cache updating method and device and electronic equipment
CN112131260A (en) * 2020-09-30 2020-12-25 中国民航信息网络股份有限公司 Data query method and device
CN113010560A (en) * 2021-03-30 2021-06-22 建信金融科技有限责任公司 Redis cache refreshing method and device
CN113613044A (en) * 2021-07-20 2021-11-05 深圳Tcl新技术有限公司 Video playing method and device, storage medium and electronic equipment
CN113613044B (en) * 2021-07-20 2023-08-01 深圳Tcl新技术有限公司 Video playing method and device, storage medium and electronic equipment
CN113742290A (en) * 2021-11-04 2021-12-03 上海闪马智能科技有限公司 Data storage method and device, storage medium and electronic device
CN115878666A (en) * 2022-10-31 2023-03-31 四川川大智胜系统集成有限公司 Management method, system, electronic device and medium for cache dependency relationship
CN115878666B (en) * 2022-10-31 2023-09-12 四川川大智胜系统集成有限公司 Management method, system, electronic equipment and medium for cache dependency relationship
CN116894412A (en) * 2023-07-20 2023-10-17 北京云枢创新软件技术有限公司 On-demand loading method for constructing SystemVerilog object multilayer structure, electronic equipment and medium
CN116894412B (en) * 2023-07-20 2024-02-20 北京云枢创新软件技术有限公司 On-demand loading method for constructing SystemVerilog object multilayer structure, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN110989939A (en) Data cache processing method, device and equipment and cache component
CN110008224B (en) Database transaction processing method and device
CN108959341B (en) Data synchronization method, device and equipment
CN109614404B (en) Data caching system and method
CN108628688B (en) Message processing method, device and equipment
CN110245279B (en) Dependency tree generation method, device, equipment and storage medium
CN109947643B (en) A/B test-based experimental scheme configuration method, device and equipment
CN103608809A (en) Recommending data enrichments
CN110765165B (en) Method, device and system for synchronously processing cross-system data
CN110162573B (en) Distributed sequence generation method, device and system
CN116305298B (en) Method and device for managing computing power resources, storage medium and electronic equipment
CN109597678B (en) Task processing method and device
CN111273965B (en) Container application starting method, system and device and electronic equipment
CN109213691B (en) Method and apparatus for cache management
CN111190655A (en) Processing method, device, equipment and system for application cache data
CN111324803B (en) Query request processing method and device of search engine and client
CN110007935A (en) A kind of processing method, device and the equipment of program upgrading
CN116048977B (en) Test method and device based on data reduction
CN110874322B (en) Test method and test server for application program
CN106990944B (en) Code resource management method, device and system
CN107645541B (en) Data storage method and device and server
CN113672470A (en) Interface monitoring method, device, equipment and medium
CN112286572A (en) Configuration method and device of business process
CN108733789B (en) Method, device and equipment for evolution of execution plan of database operation instruction
US20080278198A1 (en) Buffer for Object Information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination