CN109614404B - Data caching system and method - Google Patents

Data caching system and method

Info

Publication number
CN109614404B
CN109614404B (application CN201811294935.XA)
Authority
CN
China
Prior art keywords
cache
data
client
management platform
cache data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811294935.XA
Other languages
Chinese (zh)
Other versions
CN109614404A (en)
Inventor
魏保子
Current Assignee
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201811294935.XA priority Critical patent/CN109614404B/en
Publication of CN109614404A publication Critical patent/CN109614404A/en
Application granted granted Critical
Publication of CN109614404B publication Critical patent/CN109614404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The data caching system and method can cache data of different service types and update the data in real time. When data is updated, the updated cache data is sent to each cache client according to that client's configuration policy, so that the cache data of all cache clients remains consistent, data query performance is improved, and the accuracy of the cached data is ensured. Each cache client can select a configuration policy of active loading or platform pushing according to its actual needs, and cache data is pushed according to that policy. This avoids the heavy data concurrency that would arise if all data were updated through active loading, i.e., on data query, and thus reduces concurrency when the cache is updated.

Description

Data caching system and method
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data caching system and method.
Background
In daily business scenarios, some data, such as the commodity categories and hot commodities shown during shopping promotions, has high query-performance requirements while changing relatively infrequently; merchant contract-signing data is similar. How to store and update such data so as to provide better query performance is a technical problem to be solved in the art.
Disclosure of Invention
The purpose of the present specification is to provide a data caching system and method that cache data in real time and ensure the consistency of the data.
In one aspect, an embodiment of the present disclosure provides a data caching system, including: the system comprises a cache management platform, a cache client and a cache pushing module;
the cache management platform is used for receiving cache data and sending the cache data to the cache client according to a configuration strategy of the cache client, wherein the configuration strategy comprises active loading and platform pushing;
the cache pushing module is used for receiving the cache data sent by the cache management platform and pushing the cache data;
the cache client is used for managing cache data.
Further, in another embodiment of the system, if the configuration policy of the cache client is active loading, the cache client is specifically configured to:
receiving a query request;
querying the local cache and the secondary cache of the cache client for the corresponding cache data according to the query request, and, if the cache data does not exist, marking the cache data as loading and sending the query request to the cache management platform;
if the cache data corresponding to the query request is found to be marked as loading, returning the original value of the cache data;
and returning the loaded cache data after the cache data corresponding to the query request is loaded.
Further, in another embodiment of the system, the cache management platform is further configured to perform local caching and/or secondary caching on the cached data.
Further, in another embodiment of the system, the cache client includes a heartbeat monitoring module, and the cache management platform includes a service monitoring module;
the heartbeat monitoring module is used for sending the cache data in the cache client to the service monitoring module at preset time intervals;
and the service monitoring module is used for comparing the received cache data from the cache client with the cache data in the cache management platform and, if the two are inconsistent, sending the cache data to the cache client again.
Further, in another embodiment of the system, the cache management platform further includes a near-end monitoring module, and the heartbeat monitoring module is further configured to send the status of the cache client to the near-end monitoring module;
the near-end monitoring module is used for monitoring the cache client according to the received status of the cache client.
Further, in another embodiment of the system, the cache client is configured to manage cache data, including:
and managing the cache data according to the cache domains, wherein the cache size, the expiration time and the capacity limit value of each cache domain are mutually independent.
Further, in another embodiment of the system, the cache pushing module includes a data pushing interface and a multicast pushing interface.
Further, in another embodiment of the system, the cache client includes a cache change monitor, configured to receive cache data sent by the cache push module.
In another aspect, the present disclosure provides a data caching method, including:
the cache management platform receives cache data;
the cache management platform sends the cache data to the cache client according to a configuration strategy of the cache client, wherein the configuration strategy comprises active loading and platform pushing;
and the cache client saves the cache data to a corresponding cache domain.
Further, in another embodiment of the method, the method further comprises:
the cache client sends cache data to the cache management platform at preset time intervals;
the cache management platform judges whether the received cache data is consistent with the cache data in the cache management platform;
and if the received cache data is inconsistent with the cache data in the cache management platform, the cache management platform resends the cache data to the cache client.
Further, in another embodiment of the method, if the configuration policy is active loading, the cache management platform sends the cache data to the cache client according to the configuration policy of the cache client, including:
the cache client receives a query request, and queries corresponding cache data in a local cache and a secondary cache of the cache client according to the query request;
if the cache data does not exist, marking the cache data as in-loading, and sending the query request to the cache management platform;
if the cache data corresponding to the query request is found to be marked as loading, returning the original value of the cache data;
and returning the loaded cache data after the cache data corresponding to the query request is loaded.
In still another aspect, the present specification provides a data cache processing apparatus, including: at least one processor and a memory for storing processor-executable instructions that when executed by the processor implement the data caching method described above.
The data caching system, method, and processing device provided in the present specification can cache data of different service types and update the data in real time. When data is updated, the updated cache data is sent to each cache client according to that client's configuration policy, so that the cache data of all cache clients remains consistent, data query performance is improved, and the accuracy of the cached data is ensured. Each cache client can select a configuration policy of active loading or platform pushing according to its actual needs, and cache data is pushed according to that policy. This avoids the heavy data concurrency that would arise if all data were updated through active loading, i.e., on data query, and thus reduces concurrency when the cache is updated.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a data caching system in one embodiment of the present disclosure;
FIG. 2 is a schematic diagram of cache management of a cache client according to one embodiment of the present disclosure;
FIG. 3 is a flow diagram of a cache data query in one embodiment of the present disclosure;
FIG. 4 is a diagram illustrating the principles of data caching system startup operation in one embodiment of the present disclosure;
FIG. 5 is a flow diagram of a data caching method in one embodiment of the present disclosure;
FIG. 6 is a block diagram of the hardware structure of a data cache server to which an embodiment of the present disclosure is applied.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
With the development of computer and internet technologies, the data volume of each system keeps growing, and data processing has become a key technology. Some business scenarios need to ensure that data is updated in time and, especially in a distributed system, that the data in every server is consistent.
The working principle of a cache can be understood as follows: when the CPU (Central Processing Unit) is to read a piece of data, it first searches the cache; if the data is found, it is read immediately and sent to the CPU for processing. If it is not found, the data is read from memory at a relatively slow speed and sent to the CPU for processing, and at the same time the data block in which the data resides is transferred into the cache, so that subsequent reads of that data can be served entirely from the cache without accessing memory again.
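The read-through principle just described can be sketched in a few lines. This is an illustrative sketch only, not part of the patent's system; the class and variable names are invented.

```python
# Minimal read-through cache sketch: look in the cache first; on a miss, read
# from the slower backing store and populate the cache so later reads hit.
class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # slower source of truth (analogous to memory)
        self.cache = {}                     # fast store (analogous to the CPU cache)

    def read(self, key):
        if key in self.cache:               # hit: return immediately
            return self.cache[key]
        value = self.backing_store[key]     # miss: slow read from the backing store
        self.cache[key] = value             # transfer the data into the cache
        return value

store = {"a": 1}
c = ReadThroughCache(store)
assert c.read("a") == 1      # first read goes to the backing store
store["a"] = 2
assert c.read("a") == 1      # second read is served from the cache
```

The second assertion also shows the flip side of caching that the rest of the document addresses: once a value is cached, it stays stale until something updates or invalidates it.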
The embodiments of the present specification can provide a high-performance cache system supporting real-time synchronization. After cache data is received, for example when the cache data is updated, the cache data can be pushed to each cache client according to the cache client's configuration policy, so that the data in all cache clients remains consistent and is updated in time. In addition, the embodiments of the present specification can satisfy cache updates for different types of data, improving the applicability of the data cache.
The data caching method in the present specification can be applied to a client or a server. The client can be an electronic device such as a smartphone, a tablet computer, a smart wearable device (a smart watch, virtual reality glasses, a virtual reality helmet, etc.), or a smart vehicle-mounted device.
One or more embodiments of the present specification provide a data caching apparatus. The apparatus may include a system (including a distributed system), software (applications), modules, components, servers, clients, etc. that employ the methods described in the embodiments of the present specification in combination with the necessary apparatus to implement the hardware. Based on the same innovative concepts, the embodiments of the present description provide means in one or more embodiments as described in the following embodiments. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements the intended function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Specifically, fig. 1 is a schematic structural diagram of a data cache system according to an embodiment of the present disclosure, as shown in fig. 1, where the schematic structural diagram of the data cache system provided in one embodiment of the present disclosure may include: the system comprises a cache management platform, a cache client and a cache pushing module;
the cache management platform is used for receiving cache data and sending the cache data to the cache client according to a configuration strategy of the cache client, wherein the configuration strategy comprises active loading and platform pushing;
the cache pushing module is used for receiving the cache data sent by the cache management platform and pushing the cache data;
the cache client is used for managing cache data.
In a specific implementation process, the cache data may represent data to be cached subsequently. As shown in fig. 1, a near-end access module may be disposed in the cache management platform, through which different service data may be accessed. In the embodiments of the present specification, different types of data may be received, such as commodity category data, hot commodity data, and other business data. Data from different service scenarios can be accessed into the cache management platform through the data interface, so that cache data can be received, and thus updated, in real time. Of course, the cache data may also be received in other manners, which the embodiments of the present disclosure do not specifically limit.
As shown in fig. 1, the cache management platform sends the cache data to a cache client according to that client's configuration policy. The configuration policy in fig. 1 may be used to configure the loading manner of the cache data and may mainly include two types: in one, the cache management platform actively pushes the cache data; in the other, the corresponding cache data is returned when the cache client queries it, i.e., data is returned after the client's query, which may be understood as active loading. As shown in fig. 1, the cache in the cache client may be updated either by active pushing from the cache management platform or by active loading through the remote data loading module. When the configuration policy of a cache client is active loading, the cache management platform returns the cache data when that client sends a query request. When the configuration policy is platform pushing, the cache data is sent to the client through the cache pushing module after the cache data is updated. For example, when cache data is updated or pushed, if the client's configuration policy is platform pushing, the cache management platform can push the cache data to the cache pushing module through the data pushing interface, the cache pushing module pushes it on to the cache client, and the cache client manages the cache data. Besides active loading and platform pushing, the configuration policy of a cache client may include the data caching manner, as well as the loading frequency, pushing frequency, loading time, pushing time, etc. of the cache data; these may be set according to actual needs, and the embodiments of the present disclosure do not specifically limit them.
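The difference between the two policies can be sketched as follows. This is an illustrative sketch under invented names, not the patent's implementation: "platform push" clients receive an update immediately, while "active loading" clients fetch the value only when they query it.

```python
# Hypothetical dispatch of a cache update according to each client's policy.
PUSH, ACTIVE = "platform_push", "active_loading"

class CacheClient:
    def __init__(self, policy):
        self.policy = policy
        self.data = {}

def publish(clients, key, value, source):
    """Record the new value and push it only to platform-push clients."""
    source[key] = value
    for client in clients:
        if client.policy == PUSH:          # pushed via the cache push module
            client.data[key] = value

def query(client, key, source):
    """Active loading: fetch from the platform on demand, then cache locally."""
    if key not in client.data:
        client.data[key] = source[key]
    return client.data[key]

source = {}
pushed, lazy = CacheClient(PUSH), CacheClient(ACTIVE)
publish([pushed, lazy], "item", "v1", source)
assert pushed.data == {"item": "v1"}       # updated immediately
assert lazy.data == {}                     # untouched until queried
assert query(lazy, "item", source) == "v1"
```

The sketch also shows why mixing the two policies reduces concurrency: only clients that actually query the data generate load on the platform.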
It should be noted that there may be one or more cache clients, which are not specifically shown in fig. 1. After the cache management platform receives the cache data, it may push the received cache data to each cache client according to that client's configuration policy.
For example, when a merchant adds a new commodity and needs to update the commodity information, the commodity information data can be sent to the cache management platform. After receiving the new commodity information sent by the merchant, the cache management platform may send the updated commodity information to each cache client, such as different commodity platforms and different application servers, according to each client's configuration policy, so as to ensure data consistency, making the commodity information queried by users on different platforms or different servers consistent.
The data caching system provided by the embodiments of the present specification can cache data of different service types and update the data in real time. When data is updated, the updated cache data is sent to each cache client according to that client's configuration policy, so that the cache data of all cache clients remains consistent, data query performance is improved, and the accuracy of the cached data is ensured. Each cache client can select a configuration policy of active loading or platform pushing according to its actual needs, and cache data is pushed according to that policy, which avoids the heavy data concurrency that would arise if all data were updated through active loading, i.e., on data query, and thus reduces concurrency when the cache is updated.
On the basis of the above embodiments, as shown in fig. 1, in one embodiment of the present disclosure, the cache client includes a heartbeat monitoring module, and the cache management platform includes a service monitoring module;
the heartbeat monitoring module is used for sending cache data in the cache client to the service monitoring module at intervals of preset time;
and the service monitoring module is used for comparing the received cache data in the cache client with the cache data in the cache management platform, and if the cache data in the cache client is inconsistent with the cache data in the cache management platform, the service monitoring module is used for sending the cache data to the cache client again.
In a specific implementation process, the cache client may send the latest version of the cache data to the service monitoring module in the cache management platform at preset time intervals (e.g., every 30 seconds) through the heartbeat monitoring module. The cache management platform compares the received cache data of the cache client with the cache data in the cache management platform to judge whether the two are consistent. If they are not, the cache data in the cache client is not the latest version, has not been updated in time, or is wrong, and the cache management platform can send the latest version of the cache data to the cache client again. Each cache client accessing the cache management platform can connect to the platform through the heartbeat monitoring module and report the version of its cache data in real time, so as to ensure that the data is updated in time.
For example, suppose 3 cache clients A, B, and C access the cache management platform, and each sends the pushing result of its cache data to the platform every 30 seconds. Say the cache clients A, B, and C all cache data a, and the platform receives the latest push result for data a from each of them every 30 seconds. The cache management platform compares the received pushing results with its local data and discovers that data a of cache client A is inconsistent with the local data, i.e., data a in client A may not have been updated in time; it then sends the latest version of data a to client A, so as to ensure that the data of every cache client can be updated in time and remains consistent.
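The heartbeat comparison in this example can be sketched as a version diff. This is an illustrative sketch assuming each client reports a version tag per cached item; the function and variable names are invented, not taken from the patent.

```python
# Hypothetical heartbeat check: compare each client's reported versions with the
# platform's, and re-send any item whose reported version differs or is missing.
def heartbeat_check(platform_versions, client_report):
    """Return the items (key -> platform version) that must be re-sent."""
    return {key: ver for key, ver in platform_versions.items()
            if client_report.get(key) != ver}

platform = {"a": "v2", "b": "v1"}
client_a = {"a": "v1", "b": "v1"}   # client A's data a is stale
client_b = {"a": "v2", "b": "v1"}   # client B is up to date

assert heartbeat_check(platform, client_a) == {"a": "v2"}  # re-send a to A
assert heartbeat_check(platform, client_b) == {}           # nothing to re-send
```

Comparing version tags rather than full payloads keeps each 30-second heartbeat small, which is one plausible reason to report versions instead of the data itself.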
In the data caching system provided by the embodiments of the present specification, the cache clients connect to the cache management platform through the heartbeat monitoring module, and the data pushing results of the cache clients are monitored in real time, so that the data of the cache clients and that of the cache management platform remain consistent and can be updated in time, improving the real-time performance and accuracy of the data cache.
As shown in fig. 1, in one embodiment of the present disclosure, the cache management platform further includes a near-end monitoring module, and the heartbeat monitoring module is further configured to send the status of the cache client to the near-end monitoring module;
the near-end monitoring module is used for monitoring the cache client according to the received state of the cache client.
In a specific implementation process, the heartbeat monitoring module in the cache client may also report the status of the cache client to the near-end monitoring module of the cache management platform in real time or at specified time intervals; the status may include whether the cache client is working normally, and so on. The cache management platform can monitor the status of all cache clients through the near-end monitoring module and raise an alarm in time if a cache client is in an abnormal condition, improving the stability of the system. In addition, if the status reported by a cache client is abnormal, the cache management platform can temporarily stop pushing cache data to that client, so as to improve the utilization of the system and reduce its load.
Fig. 2 is a schematic diagram of the cache management of a cache client according to an embodiment of the present disclosure. As shown in fig. 2, each cylinder may represent a cache domain, and one cache domain may be used to cache one type of service data, so as to implement classified management and updating of data; in fig. 2, the first cylinder may be used to cache the contract data of merchants. In one embodiment of the present disclosure, the cache client may manage cache data in blocks according to cache domains. Each cache domain may have an independent cache size, expiration time, capacity limit, etc.; the cache size, expiration time (which may also represent a refresh time, e.g., how often the data in the domain is updated), initial time, and capacity limit, i.e., the maximum capacity of each cache domain, may be configured according to actual needs. Cache data can also be distinguished and updated per cache domain, and managed and subscribed to at fine granularity, which facilitates data queries.
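Per-domain management with independent settings can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the class, field names, and the sample domains ("contracts", "hot items") are invented.

```python
# Hypothetical cache domain: each domain has its own expiration time and
# capacity limit, independent of every other domain.
class CacheDomain:
    def __init__(self, expire_seconds, capacity):
        self.expire_seconds = expire_seconds
        self.capacity = capacity
        self.entries = {}            # key -> (value, stored_at)

    def put(self, key, value, now):
        if key not in self.entries and len(self.entries) >= self.capacity:
            raise OverflowError("domain capacity limit reached")
        self.entries[key] = (value, now)

    def get(self, key, now):
        value, stored = self.entries[key]
        if now - stored > self.expire_seconds:
            del self.entries[key]    # expired entries are dropped on access
            raise KeyError(key)
        return value

# Two domains with independent settings: long-lived contract data vs. fast-
# refreshing hot-commodity data.
contracts = CacheDomain(expire_seconds=3600, capacity=1000)
hot_items = CacheDomain(expire_seconds=5, capacity=10)
contracts.put("c1", "signed", now=0)
hot_items.put("h1", "promo", now=0)
assert contracts.get("c1", now=10) == "signed"   # still fresh in this domain
```

Passing `now` explicitly keeps the sketch deterministic; a real implementation would read the clock and would likely also evict on a schedule rather than only on access.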
Fig. 3 is a schematic flow chart of a cache data query in an embodiment of the present disclosure, as shown in fig. 3, in the embodiment of the present disclosure, if a configuration policy of the cache client is active loading, the cache client is specifically configured to:
receiving a query request;
inquiring corresponding cache data in a local cache and a secondary cache of the cache client according to the inquiry request, and if the cache data does not exist, marking the cache data as loaded, and sending the inquiry request to the cache management platform;
if the cache data corresponding to the query request is queried to be marked in loading, returning to the original value of the cache data;
and returning the loaded cache data after the cache data corresponding to the query request is loaded.
As shown in fig. 1, the data caching system provided in the embodiments of the present disclosure supports data queries: a user may query specified data through a cache client. As shown in fig. 3, after the cache client receives a query request, it may parse the cache domain corresponding to the request to obtain the cache route, that is, the path where the cache data is stored, and parse the cache configuration of the cache client. The cache configuration can represent the configuration policy of the cache data, that is, whether the cache data is actively loaded or pushed by the platform. The client judges whether to query the cache data directly; if not, for example, if the query is to be performed after a specified time, the query process ends. If the query is direct, the corresponding cache data is looked up in the local cache and the secondary cache of the cache client. If the cache data does not exist in the cache client, the cache data has not been pushed to the client, and remote data loading can be performed, that is, a query request is sent to the cache management platform to request the cache data.
It should be noted that, if the query request is the first request for the cache data and the cache client does not yet have it, the cache data may be marked as loading, and the query request is sent to the cache management platform through the remote data loading module. As shown in fig. 1, the cache client includes a remote data loading module, and the data query portal in the cache management platform may represent a data interface used to establish a connection with the data loading module in the cache client and to receive the query requests it sends. After receiving a query request from the cache client, the cache management platform can return the corresponding cache data, and the cache client finishes loading it.
After the cache client finishes loading the cache data, the latest version of the cache data can be returned to the thread that sent the query request and then to the user. This realizes real-time updating of the cache data and prevents an outdated version from being returned to the user. In addition, in one embodiment of the present disclosure, when a subsequent query request identifies the marker that the cache data is being loaded, the original value of the cache data is returned to the user; the original value may represent the cache data before the update, and if the cache data is being loaded for the first time, the original value may be understood as empty.
For example, if the cache data corresponding to a query request does not exist in the cache client, the first query request marks the cache data as loading, and subsequent query requests that recognize the marker directly return the original value of the cache data. This prevents a large number of loading requests from imposing excessive query pressure on the system and improves system performance. When the cache update is completed, the updated cache data is returned to the thread that sent the query request, and later query requests for that cache data directly receive the updated value. Of course, the first query request may also return the original value of the cache data, with the updated cache data returned after loading completes.
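The loading-marker behavior can be sketched as below. This is an illustrative sketch, not the patent's implementation; the class and method names are invented, and the remote load is modeled as an explicit callback so the in-flight state is visible.

```python
# Hypothetical "loading" marker: the first miss marks the key as loading and
# triggers a remote load; while the marker is set, later queries return the
# original (possibly empty) value instead of piling more load requests onto
# the cache management platform.
class ActiveLoadingClient:
    def __init__(self):
        self.cache = {}
        self.loading = set()

    def query(self, key):
        if key in self.loading:           # load in flight: return original value
            return self.cache.get(key)    # None when this is the first load
        if key not in self.cache:
            self.loading.add(key)         # first miss: mark and start remote load
            return self.cache.get(key)    # caller gets the original (empty) value
        return self.cache[key]

    def on_loaded(self, key, value):      # platform returned the data
        self.cache[key] = value
        self.loading.discard(key)

client = ActiveLoadingClient()
assert client.query("sku") is None        # first query triggers the load
assert "sku" in client.loading            # marked as loading
assert client.query("sku") is None        # later query sees the marker, no reload
client.on_loaded("sku", "v1")
assert client.query("sku") == "v1"        # updated value after loading completes
```

The key property is that between the mark and `on_loaded`, any number of queries cost only a dictionary lookup, which is what keeps a burst of requests for a cold key from overwhelming the platform.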
The data caching system provided by the embodiments of the present specification provides a data query function, i.e., active loading of data. When the cache has no content for a request, the data can be marked as loading, and subsequent query requests that find the loading marker directly return the original value, which avoids the server-side pressure caused by a large number of query requests and improves the performance of the system. After the data has been updated and loaded, the updated data is returned in time, ensuring timely updates and data consistency.
As shown in fig. 1, in one embodiment of the present disclosure, the cache management platform may perform local caching and secondary caching of the cache data. Local caching may refer to dividing part of the client's local physical memory into a space for buffering data that the client writes back to the server; it is also commonly called local write-back. A CPU cache can be understood as a temporary storage located between the CPU and memory, smaller in capacity than memory but faster in exchange speed. The data in the cache is a small part of the data in memory, but it is the part the CPU is about to access; when the CPU reads a large amount of data, it can bypass memory and read directly from the cache, speeding up reading. The secondary cache can be used to improve the working efficiency of the CPU and can be understood as a data transfer station between memory and the CPU; it coordinates the speed between the first-level cache and memory, being slower but larger in capacity than the first-level cache.
According to the embodiments of the present specification, the cache management platform performs local caching and secondary caching of the cache data, which can improve the query and update performance of the cache data, so that cache data can be quickly returned to the cache client when the client actively queries, i.e., actively loads, the cache data.
As shown in fig. 1, in the embodiments of the present disclosure, the cache pushing module includes a data pushing interface and a multicast pushing interface. The multicast pushing interface pushes data using multicast transmission, which can represent a point-to-multipoint network connection between the sender and the receivers: when a sender transmits the same data to multiple receivers at the same time, only one copy of each data packet is needed. This improves data transmission efficiency and reduces the possibility of backbone network congestion. The data pushing interface can be understood as a point-to-point network connection between the data sender and a receiver; compared with the multicast pushing interface, its data transmission efficiency is relatively low. When pushing data, the multicast pushing interface or the data pushing interface can be selected according to the usage frequency of the cache data. For example, if multiple items of data need to be pushed to multiple cache clients, the multicast pushing interface may be employed; or the multicast pushing interface may be used for high-frequency data and the data pushing interface for low-frequency data. Of course, a data pushing rule may also be set and the interface selected according to it, for example: when the request volume of the system is larger than a certain threshold, the multicast pushing interface is used, otherwise the data pushing interface. Alternatively, the pushing interface may be selected in the configuration policy of the cache client. The choice can be made according to actual needs, and the embodiments of the present disclosure do not specifically limit it.
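One possible selection rule of the kind described above can be sketched as follows. This is an illustrative sketch under invented names and thresholds, not a rule specified by the patent.

```python
# Hypothetical push-interface selection: use multicast when an update fans out
# to many clients or the system request volume passes a threshold; otherwise
# use the point-to-point data pushing interface. Thresholds are assumptions.
def choose_push_interface(num_clients, request_rate,
                          fanout_threshold=3, rate_threshold=1000):
    if num_clients >= fanout_threshold or request_rate > rate_threshold:
        return "multicast"       # one packet copy reaches many receivers
    return "point_to_point"      # low fan-out: a direct push is enough

assert choose_push_interface(num_clients=10, request_rate=50) == "multicast"
assert choose_push_interface(num_clients=1, request_rate=50) == "point_to_point"
assert choose_push_interface(num_clients=1, request_rate=5000) == "multicast"
```

Either condition alone is enough to prefer multicast, matching the text's two independent criteria: high fan-out and high request volume.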
According to this embodiment of the specification, the appropriate data push interface can be selected according to the usage frequency of the data or the bearing capacity of the system, which improves the update efficiency of the cached data and safeguards system performance.
As shown in fig. 1, in one embodiment of the present disclosure, a cache client may include a cache change monitor. The cache change monitor monitors, in real time, the cache data pushed by the cache push module and, after receiving cache data from the cache push module, applies the update to its cache, thereby keeping the cache data up to date in real time.
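A minimal sketch of such a cache change monitor is shown below; the versioned push callback, class name, and method name are assumptions for illustration, not details from this embodiment:

```python
class CacheChangeMonitor:
    """Receives cache data pushed by the cache push module and applies it
    to the client's local cache, ignoring out-of-date pushes."""

    def __init__(self, local_cache: dict):
        self.local_cache = local_cache  # key -> (value, version)

    def on_push(self, key, value, version: int):
        current = self.local_cache.get(key)
        # Apply the update only if it is newer than what is already cached.
        if current is None or version > current[1]:
            self.local_cache[key] = (value, version)


cache = {}
monitor = CacheChangeMonitor(cache)
monitor.on_push("hot_items", ["a", "b"], version=2)
monitor.on_push("hot_items", ["stale"], version=1)  # older push, ignored
```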
The following specifically describes the functions of each module in the data cache system in the embodiment of the present specification with reference to fig. 1:
the data caching system in the embodiment of the present disclosure mainly includes a cache management platform, a cache client, and a cache push module, where:
and (3) a cache management platform: real-time management of each cache client, real-time viewing of states and real-time alarm of abnormal conditions can be provided. The data source of the cache client can be used for pushing data through the cache management platform when the cache data is updated and pushed, and the pushing and updating results of the cache data can be monitored in real time. Meanwhile, the caching platform can also provide a second-level cache for improving the performance of cache inquiry updating.
Cache client: represents a client of a business system; it manages the cache data of each business system, and the cache update and query strategy can be adjusted in real time through the configuration policy of the cache data. The configuration policy may include the data caching mode, such as active loading or platform pushing, as well as the loading frequency. Meanwhile, the heartbeat monitoring module in the cache client maintains a connection with the cache management platform in real time to report the cache client's state.
Cache push module: when the server side, i.e., the cache management platform, has a cache data update, the cache push module updates the data to each cache client in real time; a low-frequency data push interface and a high-frequency multicast push mode are both supported. Once a push event is issued, the cache push module ensures that the data reaches the cache client. The heartbeat monitoring module of the cache client sends the version of its cache data to the cache management platform every 30 s; if the versions are inconsistent, the cache management platform pushes the latest data to the cache client again, ensuring data consistency.
Fig. 4 is a schematic diagram of the start-up working principle of the data cache system in an embodiment of the present disclosure. As shown in fig. 4, during start-up the data cache system may perform configuration initialization, cache data initialization, cache change monitor initialization, and a final start-up correctness check, ensuring that the data cache system can be used normally. The open circle in the figure indicates the start of the flow, and the filled circle indicates its end.
This embodiment of the specification provides and designs a cache system supporting high-concurrency distributed real-time synchronization: it realizes real-time caching of data, ensures data consistency, and, by allowing an active-loading configuration policy, reduces concurrency when the cache is updated.
In the present specification, all system embodiments are described in a progressive manner; identical and similar parts of the embodiments may refer to each other, and each embodiment focuses on its differences from the other embodiments. For related parts, see the description of the method embodiments.
Fig. 5 is a schematic flow chart of a data caching method in an embodiment of the present disclosure, as shown in fig. 5, the data caching method in the embodiment of the present disclosure may include:
step 501, the cache management platform receives cache data.
The cache data represents the data to be cached subsequently. In this embodiment of the present disclosure, different types of data may be received, for example different business data such as commodity category data and hot commodity data. Data from different business scenarios can be connected to the cache management platform through a data interface so that cache data is received, and can be updated, in real time. Of course, other manners of receiving the cache data may be used; the embodiments of the present disclosure are not specifically limited in this respect.
Step 502, the cache management platform sends the cache data to the cache client through a pushing module according to a configuration strategy of the cache client, wherein the configuration strategy comprises active loading and platform pushing.
In a specific implementation process, the cache client may preset a configuration policy. The configuration policy may include the data caching mode, for example active loading or platform pushing, and may also include the timing of data caching, such as how often to update the cached data; see the description of the above embodiments for details, which are not repeated here. The cache management platform sends cache data to the cache client according to the cache client's configuration policy. For example, if the configuration policy of the cache client is platform pushing, the cache data is pushed to that cache client whenever it is updated; the push interfaces may include a multicast push interface and a data push interface, as described in the above embodiments. If the configuration policy of the cache client is active loading, the cache client actively sends a data loading request, i.e., a query request, to the cache management platform; after receiving the loading request, the cache management platform sends the cache data to the cache client, and the cache client receives the cache data sent by the cache management platform.
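Step 502 can be sketched as follows; the class names, attribute names, and policy strings are illustrative assumptions, not terms fixed by this embodiment:

```python
# Sketch of step 502: the platform delivers updates per each client's
# configuration policy ("platform_push" vs. "active_load").

class CacheClient:
    def __init__(self, policy: str):
        self.policy = policy  # "platform_push" or "active_load"
        self.store = {}

    def receive(self, key, value):
        self.store[key] = value


class CacheManagementPlatform:
    def __init__(self, clients):
        self.data = {}
        self.clients = clients

    def update(self, key, value):
        self.data[key] = value
        # Push-mode clients are updated immediately; active-load clients
        # fetch the data themselves via load() when they next query.
        for client in self.clients:
            if client.policy == "platform_push":
                client.receive(key, value)

    def load(self, key):
        return self.data.get(key)
```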
Step 503, the cache client saves the cache data to a corresponding cache domain.
The cache client may manage the cache data of each business system. For example, the cache data can be managed by cache domain: each cache domain can have its own cache size, expiration time, capacity limit, and so on, each of which can be configured according to actual needs. Cache data can also be distinguished and updated by cache domain, enabling fine-grained management and subscription of cache data and facilitating data queries.
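A cache domain with its own capacity limit and expiration time might look like the following sketch; the oldest-entry eviction policy and the parameter names are assumptions for illustration:

```python
import time


class CacheDomain:
    """A cache domain with an independent capacity limit and expiration time."""

    def __init__(self, name: str, max_entries: int, ttl_seconds: float):
        self.name = name
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, stored_at)

    def put(self, key, value):
        if len(self.entries) >= self.max_entries and key not in self.entries:
            # Evict the oldest entry when the domain is at capacity.
            oldest = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[oldest]
        self.entries[key] = (value, time.time())

    def get(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, stored_at = item
        if time.time() - stored_at > self.ttl:
            del self.entries[key]  # entry has expired
            return None
        return value
```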
The data caching method provided by this embodiment of the specification can cache data of different business types and update it in real time. When data is updated, the updated cache data is sent to each cache client according to that client's configuration policy, ensuring that the cache data of all cache clients remains consistent, improving query performance, and guaranteeing the accuracy of the cache data. Moreover, a cache client can select an active-loading or platform-pushing configuration policy according to actual needs, and cache pushing is performed according to that policy; this avoids updating all data through active loading, i.e., through data queries, and thereby reduces data concurrency when the cache is updated. By selecting the appropriate data push interface according to the usage frequency of the data or the bearing capacity of the system, the update efficiency of the cache data is improved and system performance is safeguarded.
On the basis of the above embodiment, the method further includes:
the cache client sends cache data to the cache management platform at preset time intervals;
the cache management platform judges whether the received cache data is consistent with the cache data in the cache management platform;
and if the received cache data is inconsistent with the cache data in the cache management platform, the cache management platform resends the cache data to the cache client.
In a specific implementation process, each cache client may, at preset intervals, for example every 30 seconds, send the push result of its cache data, that is, the most recently received cache data, to the cache management platform. The cache management platform compares the cache data received from each cache client with its own cache data, judges whether they are consistent, and, if not, sends the cache data to that cache client again.
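The consistency check can be sketched as below; the 30-second reporting interval comes from the description above, while the function name and the dictionary shape of the client report are assumptions:

```python
# Periodic consistency check: the client reports its cache-data version and
# the platform re-pushes the latest data whenever the versions differ.
REPORT_INTERVAL_SECONDS = 30  # stated reporting interval


def check_and_repair(platform_data, platform_version, client):
    """client is a dict with 'data' and 'version' keys (an assumed shape).

    Returns True if the client was out of date and had to be repaired."""
    if client["version"] != platform_version:
        client["data"] = platform_data  # re-push the latest cache data
        client["version"] = platform_version
        return True
    return False
```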
By monitoring the data push results of the cache clients in real time, this embodiment of the specification keeps the data of each cache client consistent with that of the cache management platform, allows data to be updated in time, and improves the real-time performance and accuracy of data caching.
Based on the foregoing embodiments, in one embodiment of the present disclosure, if the configuration policy is active loading, the cache management platform sends the cache data to the cache client through a push module according to the configuration policy of the cache client, including:
the cache client receives a query request, and queries corresponding cache data in a local cache and a secondary cache of the cache client according to the query request;
if the cache data does not exist, marking the cache data as loading, and sending the query request to the cache management platform;
if cache data corresponding to the query request is found to be marked as loading, returning the original value of the cache data;
and returning the loaded cache data after the cache data corresponding to the query request is loaded.
In a specific implementation process, a user can query specified data through the cache client. After receiving the query request, the cache client queries the corresponding cache data in its local cache and second-level cache. If the cache data does not exist in the cache client, the cache data has not yet been pushed to the cache client: the cache data is marked as loading, and the query request is sent to the cache management platform. After receiving the query request of the cache client, the cache management platform returns the corresponding cache data to the cache client, which completes the loading of the cache data.
After the cache client finishes loading the cache data, the latest version of the cache data is returned to the thread that sent the query request. This realizes real-time updating of the cache data and avoids returning a stale version to the caller. In addition, in one embodiment of the present disclosure, when a subsequent query request sees the mark indicating that the cache data is being loaded, the original value of the cache data is returned to the caller; the original value represents the cache data before the update, and if the cache data is being loaded for the first time, the original value can be understood as empty.
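The query path just described — mark a miss as loading, let concurrent queries fall back to the original value, and return the loaded value once loading completes — can be sketched as follows; the class name, locking scheme, and `platform_fetch` callable are assumptions for illustration:

```python
import threading


class ActiveLoadingClient:
    """The first miss marks a key as loading and fetches it from the
    platform; queries that see the loading mark get the original
    (pre-update) value instead of also hitting the server."""

    def __init__(self, platform_fetch):
        self.platform_fetch = platform_fetch  # callable: key -> value
        self.cache = {}      # key -> current value
        self.loading = set()  # keys currently marked as loading
        self.lock = threading.Lock()

    def query(self, key):
        with self.lock:
            if key in self.loading:
                return self.cache.get(key)  # original value (None on first load)
            if key in self.cache:
                return self.cache[key]
            self.loading.add(key)  # mark the cache data as loading
        try:
            value = self.platform_fetch(key)  # load from the management platform
            with self.lock:
                self.cache[key] = value
            return value  # loaded data is returned to the querying thread
        finally:
            with self.lock:
                self.loading.discard(key)
```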
The data caching method provided by this embodiment of the specification offers a data query function: when no content exists in the cache, the data can be marked as loading, and when a subsequent query request encounters the loading mark, the original value is returned directly. This avoids the pressure that a large number of requests would otherwise bring to the server and improves system performance. After the data update and loading are completed, the updated data is returned promptly, ensuring timely updates and data consistency.
It should be noted that the above description of the method according to the system embodiment may also include other implementations. Specific implementation may refer to the description of related system embodiments, which are not described herein in detail.
The embodiment of the present disclosure further provides a data cache processing device, including: at least one processor and a memory for storing processor-executable instructions, which when executed by the processor implement the data caching method of the above embodiment, such as:
the cache management platform receives cache data;
the cache management platform sends the cache data to the cache client through a pushing module according to a configuration strategy of the cache client, wherein the configuration strategy comprises active loading and platform pushing;
and the cache client saves the cache data to a corresponding cache domain.
The storage medium may include physical means for storing information, typically by digitizing the information and then storing it in media that use electrical, magnetic, or optical means. The storage medium may include: devices that store information using electrical energy, such as various memories, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memory, bubble memory, and USB flash disks; and devices that store information optically, such as CDs or DVDs. Of course, there are other forms of readable storage media, such as quantum memory, graphene memory, etc.
It should be noted that the description of the processing apparatus according to the method embodiment described above may also include other implementations. Specific implementation may refer to descriptions of related method embodiments, which are not described herein in detail.
The data caching system provided in the specification can be a stand-alone data caching system or can be applied to various data analysis and processing systems. The system may comprise any of the data caching apparatus of the above embodiments. The system may be a stand-alone server, or may include a server cluster, a system (including a distributed system), software (an application), an actual operating device, a logic gate device, a quantum computer, etc., using one or more of the methods or one or more of the embodiment apparatus of the present specification, in combination with a terminal device implementing the necessary hardware. The system may comprise at least one processor and a memory storing computer-executable instructions that, when executed by the processor, perform the steps of the method described in any one or more of the embodiments above.
The method embodiments provided in the embodiments of the present specification may be performed in a mobile terminal, a computer terminal, a server, or similar computing device. Taking the operation on a server as an example, fig. 6 is a block diagram of a hardware structure of a data cache server to which the embodiment of the present application is applied. As shown in fig. 6, the server 10 may include one or more (only one is shown in the figure) processors 100 (the processors 100 may include, but are not limited to, a microprocessor MCU, a processing device such as a programmable logic device FPGA), a memory 200 for storing data, and a transmission module 300 for communication functions. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 6 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, server 10 may also include more or fewer components than shown in FIG. 6, for example, may also include other processing hardware such as a database or multi-level cache, a GPU, or have a different configuration than that shown in FIG. 6.
The memory 200 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the data caching method in the embodiment of the present disclosure, and the processor 100 executes the software programs and modules stored in the memory 200 to perform various functional applications and data processing. Memory 200 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 200 may further include memory located remotely from processor 100, which may be connected to server 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 10. In one example, the transmission module 300 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission module 300 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The method or system according to the above embodiments provided in the present specification may implement service logic through a computer program and record the service logic on a storage medium, where the storage medium may be read and executed by a computer, so as to implement the effects of the schemes described in the embodiments of the present specification.
The data caching method or system provided in the embodiments of the present disclosure may be implemented in a computer by a processor executing corresponding program instructions, for example implemented on a PC using the C++ language on a Windows operating system, implemented on a Linux system, implemented on an intelligent terminal using, for example, Android or iOS programming languages, or implemented on the processing logic of a quantum computer.
It should be noted that, the description of the computer storage medium and the system according to the related method embodiments described above in the specification may further include other implementations, and specific implementation manners may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a hardware+program class embodiment, the description is relatively simple, as it is substantially similar to the method embodiment, as relevant see the partial description of the method embodiment.
Embodiments of the present description are not limited to situations in which industry communication standards, standard computer data processing and data storage rules are required or described in one or more embodiments of the present description. Some industry standards or embodiments modified slightly based on the implementation described by the custom manner or examples can also realize the same, equivalent or similar or predictable implementation effect after modification of the above examples. Examples of data acquisition, storage, judgment, processing, etc., using these modifications or variations may still fall within the scope of alternative implementations of the examples of this specification.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows can now be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code before compiling must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in pure computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component, or even as both software modules implementing the method and structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a car-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one way of performing the order of steps and does not represent a unique order of execution. When implemented in an actual device or end product, the instructions may be executed sequentially or in parallel (e.g., in a parallel processor or multi-threaded processing environment, or even in a distributed data processing environment) as illustrated by the embodiments or by the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, it is not excluded that additional identical or equivalent elements may be present in a process, method, article, or apparatus that comprises a described element. The terms first, second, etc. are used to denote a name, but not any particular order.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, when one or more of the present description is implemented, the functions of each module may be implemented in the same piece or pieces of software and/or hardware, or a module that implements the same function may be implemented by a plurality of sub-modules or a combination of sub-units, or the like. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, read only compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments. In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this specification. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict one another.
The foregoing is merely an example of one or more embodiments of the present specification and is not intended to limit them. Various modifications and alterations to one or more embodiments of this specification will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present specification shall be included in the scope of the claims.

Claims (12)

1. A data caching system, comprising: a cache management platform, a cache client and a cache pushing module;
the cache management platform is used for receiving cache data and sending the cache data to the cache client according to a configuration strategy of the cache client so as to reduce data concurrency when updating the cache, wherein the configuration strategy comprises active loading and platform pushing;
the cache pushing module is used for receiving the cache data sent by the cache management platform and pushing the cache data;
the cache client is used for managing cache data;
if the configuration policy of the cache client is active loading and the cache data corresponding to a query request does not exist in the cache client, the cache client is specifically configured to: mark the cache data as being loaded, and send the query request to the cache management platform; if the cache data corresponding to the query request is found to be marked as being loaded, return the original value of the cache data; after the cache data corresponding to the query request is loaded, return the loaded cache data; wherein the original value of the cache data is the cache data before updating, and if the cache data is loaded for the first time, the original value of the cache data is empty.
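The active-loading flow recited in claim 1 (mark the entry as being loaded, forward the query to the platform, and serve the original value while loading is in flight) can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name `ActiveLoadingCache` and the callable standing in for the cache management platform are assumptions.

```python
import threading


class ActiveLoadingCache:
    """Sketch of claim 1's active-loading flow: on a miss the key is marked
    as 'being loaded' and the query goes to the management platform; while
    the mark is set, readers receive the original (pre-update) value, which
    is empty (None) on a first load."""

    def __init__(self, platform_loader):
        self._platform_loader = platform_loader  # stands in for the cache management platform
        self._values = {}        # key -> cached value
        self._loading = set()    # keys currently marked as being loaded
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            if key in self._loading:
                # A load is already in flight: return the original value
                # (None if this is the first load of the key).
                return self._values.get(key)
            if key in self._values:
                return self._values[key]
            # Miss: mark the key as being loaded before querying the platform.
            self._loading.add(key)
        value = self._platform_loader(key)  # query the cache management platform
        with self._lock:
            self._values[key] = value
            self._loading.discard(key)
        return value
```

Under this sketch, a request that arrives while a load is in flight reads the pre-update value instead of triggering a second load, which is how the claim reduces data concurrency during cache updates.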
2. The system of claim 1, wherein if the configuration policy of the cache client is active loading, the cache client is specifically configured to:
receiving a query request;
and according to the query request, querying corresponding cache data in a local cache and a second-level cache of the cache client.
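Claim 2's lookup order (the local cache first, then the client's second-level cache) can be sketched as below. The dictionaries and the promotion of second-level hits into the local tier are illustrative assumptions, since the claim does not prescribe data structures.

```python
class TwoTierCacheClient:
    """Sketch of claim 2's query path: consult the in-process local cache
    first, then fall back to the client's second-level cache."""

    def __init__(self):
        self.local_cache = {}   # fast, in-process tier
        self.second_level = {}  # larger, slower tier

    def query(self, key):
        if key in self.local_cache:
            return self.local_cache[key]
        if key in self.second_level:
            value = self.second_level[key]
            self.local_cache[key] = value  # promote to the local tier (assumption)
            return value
        return None  # miss in both tiers; active loading would take over here
```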
3. The system of claim 1, wherein the cache client comprises a heartbeat monitoring module, and the cache management platform comprises a service monitoring module;
the heartbeat monitoring module is used for sending the cache data in the cache client to the service monitoring module at preset time intervals;
and the service monitoring module is used for comparing the received cache data from the cache client with the cache data in the cache management platform, and resending the cache data to the cache client if the two are inconsistent.
4. The system of claim 3, wherein the cache management platform further comprises a proximal monitoring module, the heartbeat monitoring module further configured to send the status of the cache client to the proximal monitoring module;
the near-end monitoring module is used for monitoring the cache client according to the received state of the cache client.
5. The system of claim 1, wherein the management of the cache data by the cache client comprises:
and managing the cache data according to the cache domains, wherein the cache size, the expiration time and the capacity limit value of each cache domain are mutually independent.
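Claim 5's cache domains, each with its own expiration time and capacity limit, might look like the following sketch. The oldest-entry eviction policy is an assumption made for illustration; the claim only requires that the parameters be mutually independent per domain.

```python
import time


class CacheDomain:
    """Sketch of claim 5: each cache domain keeps its own expiration time
    and capacity limit, independent of every other domain."""

    def __init__(self, ttl_seconds, capacity):
        self.ttl = ttl_seconds
        self.capacity = capacity
        self._entries = {}  # key -> (value, inserted_at)

    def put(self, key, value):
        if len(self._entries) >= self.capacity and key not in self._entries:
            # Evict the oldest insertion (an assumed policy, not from the patent).
            oldest = min(self._entries, key=lambda k: self._entries[k][1])
            del self._entries[oldest]
        self._entries[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, inserted = entry
        if time.monotonic() - inserted > self.ttl:
            del self._entries[key]  # expired per this domain's own TTL
            return None
        return value


# Independent domains for different service types (hypothetical parameters).
domains = {
    "user": CacheDomain(ttl_seconds=60, capacity=1000),
    "order": CacheDomain(ttl_seconds=5, capacity=100),
}
```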
6. The system of claim 1, wherein the cache management platform is further configured to locally cache and/or secondary cache the cached data.
7. The system of claim 1, wherein the cache push module comprises a data push interface and a multicast push interface.
8. The system of claim 1, wherein the cache client comprises a cache change monitor configured to receive cache data sent by the cache push module.
9. A data caching method, comprising:
the cache management platform receives cache data;
the cache management platform sends the cache data to the cache client according to a configuration strategy of the cache client so as to reduce data concurrency when updating the cache, wherein the configuration strategy comprises active loading and platform pushing;
the cache client saves the cache data to a corresponding cache domain;
if the configuration policy of the cache client is active loading and cache data corresponding to a query request does not exist in the cache client, the cache client marks the cache data as being loaded and sends the query request to the cache management platform; if the cache data corresponding to the query request is found to be marked as being loaded, the original value of the cache data is returned; after the cache data corresponding to the query request is loaded, the loaded cache data is returned; wherein the original value of the cache data is the cache data before updating, and if the cache data is loaded for the first time, the original value of the cache data is empty.
10. The method of claim 9, the method further comprising:
the cache client sends cache data to the cache management platform at preset time intervals;
the cache management platform judges whether the received cache data is consistent with the cache data in the cache management platform;
and if the received cache data is inconsistent with the cache data in the cache management platform, the cache management platform resends the cache data to the cache client.
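The heartbeat-driven consistency check of claim 10 can be sketched as follows. Comparing digests rather than full cache payloads is an assumption made for brevity; the claim only requires that the platform compare the received cache data with its own copy and resend it on a mismatch. All names here (`digest`, `ManagementPlatform`, `on_heartbeat`) are illustrative.

```python
import hashlib
import json


def digest(cache_snapshot):
    """Stable digest of a cache's contents, used as the heartbeat payload
    (hashing instead of shipping the full data is an assumption)."""
    blob = json.dumps(cache_snapshot, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()


class ManagementPlatform:
    def __init__(self, authoritative_data):
        self.data = authoritative_data
        self.pushes = []  # record of resends, kept for illustration

    def on_heartbeat(self, client_id, client_digest):
        """Compare the client's heartbeat digest with the platform's copy
        and resend the authoritative cache data when they differ."""
        if client_digest != digest(self.data):
            self.pushes.append(client_id)
            return self.data   # inconsistent: resend the cache data
        return None            # consistent: nothing to push
```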
11. The method of claim 9, wherein if the configuration policy is active loading, the cache management platform sending the cache data to the cache client according to the configuration policy of the cache client comprises:
and the cache client receives the query request and queries corresponding cache data in a local cache and a secondary cache of the cache client according to the query request.
12. A data cache processing apparatus comprising: at least one processor and a memory for storing processor-executable instructions, which when executed by the processor implement the method of any one of claims 9-11.
CN201811294935.XA 2018-11-01 2018-11-01 Data caching system and method Active CN109614404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811294935.XA CN109614404B (en) 2018-11-01 2018-11-01 Data caching system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811294935.XA CN109614404B (en) 2018-11-01 2018-11-01 Data caching system and method

Publications (2)

Publication Number Publication Date
CN109614404A CN109614404A (en) 2019-04-12
CN109614404B true CN109614404B (en) 2023-08-01

Family

ID=66002108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811294935.XA Active CN109614404B (en) 2018-11-01 2018-11-01 Data caching system and method

Country Status (1)

Country Link
CN (1) CN109614404B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110336851B (en) * 2019-05-06 2021-09-24 腾讯科技(深圳)有限公司 Content access processing method and device, computer equipment and storage medium
CN110287217A (en) * 2019-06-10 2019-09-27 天翼电子商务有限公司 Buffer control method, system and electronic equipment based on distributed business system
CN110989939A (en) * 2019-12-16 2020-04-10 中国银行股份有限公司 Data cache processing method, device and equipment and cache component
CN111859109A (en) * 2020-06-10 2020-10-30 广东省安心加科技有限公司 Control method and device for state query of Internet of things equipment
CN112163001A (en) * 2020-09-25 2021-01-01 同程网络科技股份有限公司 High-concurrency query method, intelligent terminal and storage medium
CN114465896A (en) * 2022-03-30 2022-05-10 深信服科技股份有限公司 Configuration information processing method, device, equipment and readable storage medium
CN114722046A (en) * 2022-04-18 2022-07-08 聚好看科技股份有限公司 Server and home page cache data version generation method
CN117951044B (en) * 2024-03-27 2024-05-31 江西曼荼罗软件有限公司 Cache identification and updating method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6119203A (en) * 1998-08-03 2000-09-12 Motorola, Inc. Mechanism for sharing data cache resources between data prefetch operations and normal load/store operations in a data processing system
CN101335923A (en) * 2008-08-01 2008-12-31 中兴通讯股份有限公司 Message type service access number management method and apparatus
CN104965717A (en) * 2014-06-05 2015-10-07 腾讯科技(深圳)有限公司 Method and apparatus for loading page

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9183145B2 (en) * 2009-04-27 2015-11-10 Intel Corporation Data caching in a network communications processor architecture
CN103259683B (en) * 2013-05-16 2016-06-01 烽火通信科技股份有限公司 Based on the Web network management system L2 cache method for pushing of HTML5
CN104202424B (en) * 2014-09-19 2016-01-27 中国人民财产保险股份有限公司 A kind of method using software architecture to expand buffer memory
CN104281668A (en) * 2014-09-28 2015-01-14 墨仕(厦门)电子商务有限公司 Data processing method
CN107943594B (en) * 2016-10-13 2021-11-12 北京京东尚科信息技术有限公司 Data acquisition method and device



Similar Documents

Publication Publication Date Title
CN109614404B (en) Data caching system and method
US10204114B2 (en) Replicating data across data centers
CN110765165B (en) Method, device and system for synchronously processing cross-system data
CN103019960B (en) Distributed caching method and system
CN111767143A (en) Transaction data processing method, device, equipment and system
CN108683695A (en) Hot spot access processing method, cache access agent equipment and distributed cache system
CN109344348B (en) Resource updating method and device
CN110989939A (en) Data cache processing method, device and equipment and cache component
CN104111804A (en) Distributed file system
CN108459913B (en) Data parallel processing method and device and server
CN111078723B (en) Data processing method and device for block chain browser
CN109344157A (en) Read and write abruption method, apparatus, computer equipment and storage medium
CN111355816B (en) Server selection method, device, equipment and distributed service system
CN105335170A (en) Distributed system and incremental data updating method
CN111131079B (en) Policy query method and device
CN111190655B (en) Processing method, device, equipment and system for application cache data
CN111324533A (en) A/B test method and device and electronic equipment
CN104657435A (en) Storage management method for application data and network management system
CN104423982A (en) Request processing method and device
CN111784468B (en) Account association method and device and electronic equipment
CN110471629A (en) A kind of method, apparatus of dynamic capacity-expanding, storage medium, equipment and system
CN115617799A (en) Data storage method, device, equipment and storage medium
CN112003922A (en) Data transmission method and device
CN112433921A (en) Method and apparatus for dynamic point burying
CN106156050B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Greater Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20201012

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

GR01 Patent grant