CN118250333A - Cache processing method and device, storage medium and electronic equipment - Google Patents

Cache processing method and device, storage medium and electronic equipment

Info

Publication number
CN118250333A
CN118250333A
Authority
CN
China
Prior art keywords
nodes
cache
event
target
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410361071.8A
Other languages
Chinese (zh)
Inventor
马昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN202410361071.8A priority Critical patent/CN118250333A/en
Publication of CN118250333A publication Critical patent/CN118250333A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cache processing method and device, a storage medium, and electronic equipment. The method includes: determining a plurality of application nodes whose initial data storage states match, together with a predetermined configuration center server that interacts with each of the application nodes, where the application nodes are interaction-isolated distributed nodes; when a cache change event occurs at a target node among the application nodes, publishing the cache change event to the other application nodes in response to the target node's interaction with the configuration center server; and updating the initial data storage states of the other nodes based on the cache change event, so that the other nodes are refreshed to a target data storage state. The invention solves the technical problem in the related art that cache refreshing is highly restricted for interaction-isolated distributed nodes.

Description

Cache processing method and device, storage medium and electronic equipment
Technical Field
The invention relates to the technical fields of the internet and caching, and in particular to a cache processing method and device, a storage medium, and electronic equipment.
Background
In current microservice architectures, multi-machine-room parallel computing systems, such as the newer unitized architecture systems, are widely deployed, and some traditional development techniques run into problems under the multi-machine-room parallel computing model. The local cache is a high-performance caching approach, but once cached content must be changed, the local caches across the application server nodes can fall out of sync, leading to data inconsistencies between them.
In the related art, an event is typically published through Redis (Remote Dictionary Server, an open-source in-memory data structure store), and other nodes listen for that event to refresh their cache data, keeping all nodes consistent. Under the multi-machine-room parallel computing model, however, the Redis instances in different machine rooms cannot be combined into a single cluster and are deployed independently, so an application cannot connect to the Redis cluster of another machine room. This limits data consistency in interaction-isolated scenarios and cannot satisfy the requirement of consistent cache refreshing between distributed nodes in multi-machine-room parallel computing.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a cache processing method and device, a storage medium, and electronic equipment, to at least solve the technical problem in the related art that cache refreshing is highly restricted for interaction-isolated distributed nodes.
According to one aspect of an embodiment of the present invention, a cache processing method is provided, including: determining a plurality of application nodes whose initial data storage states match, together with a predetermined configuration center server that interacts with each of the application nodes, where the application nodes are interaction-isolated distributed nodes; when a cache change event occurs at a target node among the application nodes, publishing the cache change event to the other application nodes in response to the target node's interaction with the configuration center server; and updating the initial data storage states of the other nodes based on the cache change event, so that the other nodes are refreshed to a target data storage state, where a first query result obtained by performing a predetermined query against the other nodes in the target data storage state matches a second query result obtained from the target node based on the cache change event.
Optionally, each of the application nodes is provided with an event publisher for interacting with the configuration center server, and the method further includes: sending, through the event publisher of each application node, the cache identifier corresponding to that node, and generating node cache information at the configuration center server.
Optionally, publishing the cache change event to the other application nodes in response to the target node's interaction with the configuration center server includes: publishing, through the event publisher of the target node, the target parameter indicated by the cache change event to the configuration center server, this publication constituting the interaction between the target node and the configuration center server; in response to that interaction, changing the node cache information based on the target parameter to obtain updated cache information at the configuration center server; and, when the event publisher of another node detects the updated cache information, publishing the cache change event to that node.
Optionally, after the event publisher of the target node publishes the target parameter indicated by the cache change event to the configuration center server, the method further includes: determining the publication result of the publication performed by the event publisher of the target node to the configuration center server; if the publication result indicates failure, retrying the publication to the configuration center server through the event publisher of the target node up to a preset number of times to obtain a retry result; and, if the retry result indicates that the retries have failed, sending alarm information indicating publication failure to a predetermined receiving end.
Optionally, each of the application nodes is further provided with an event receiver for receiving the cache change events published by the event publisher of the corresponding application node; publishing the cache change event to the other nodes then includes pushing the cache change event to the event publishers of the other nodes through the configuration center server.
Optionally, updating the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state includes: notifying, based on the target parameter indicated by the cache change event, the event receivers of the other nodes through the event publishers of those nodes; and, in response to an event receiver receiving the notification of the cache change event, invoking a predetermined cleaning function through that event receiver to clear the data of the node's initial data storage state, thereby obtaining the other nodes refreshed to the target data storage state.
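The receiver-side refresh just described, in which a notification arrives and a cleaning function evicts the stale entry, can be sketched as follows. The class and method names are illustrative, and the local cache is simplified to a map; this is not the patent's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the receiver side: on notification of a changed key, a cleanup
// step evicts the stale entry from this node's local cache, so the next read
// falls through to the shared source of truth (e.g. the database).
public class CacheEvictingReceiver {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();

    void put(String key, String value) { localCache.put(key, value); }

    String get(String key) { return localCache.get(key); }

    // Called by the event publisher when a cache change event arrives;
    // plays the role of the "predetermined cleaning function" above.
    void notified(String key) {
        localCache.remove(key);
    }
}
```

After eviction, a lookup on this node misses, which is exactly the state the next optional embodiment relies on.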
Optionally, each of the application nodes interacts with a predetermined database, and the cache change event consists of a first data change of a target parameter applied to the predetermined database and a second data change of the target parameter applied to the local cache of the target node, where the first data change value matches the second data change value. After the other nodes are refreshed to the target data storage state, the method further includes: when the target data storage state is a state in which the target parameter has been cleared, and in response to a query request for performing the predetermined query on another node, performing the predetermined query on that node so that a third query result returns no parameter value; when the third query result returns no parameter value, performing the predetermined query against the predetermined database to obtain the first data change value as the first query result; and, in response to a query request for performing the predetermined query on the target node, obtaining the second data change value as the second query result.
According to another aspect of an embodiment of the present invention, a cache processing apparatus is provided, including: an application-node determining module, configured to determine a plurality of application nodes whose initial data storage states match, together with a predetermined configuration center server that interacts with each of the application nodes, where the application nodes are interaction-isolated distributed nodes; a cache-change publishing module, configured to publish, when a cache change event occurs at a target node among the application nodes, the cache change event to the other application nodes in response to the target node's interaction with the configuration center server; and a cache processing module, configured to update the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to a target data storage state, where a first query result obtained by performing a predetermined query against the other nodes in the target data storage state matches a second query result obtained from the target node based on the cache change event.
According to another aspect of an embodiment of the present invention, a nonvolatile storage medium is provided, storing a plurality of instructions adapted to be loaded by a processor to perform any one of the cache processing methods above.
According to another aspect of an embodiment of the present invention, there is provided an electronic apparatus including: one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the cache processing methods.
In the embodiments of the invention, a plurality of application nodes whose initial data storage states match are determined, together with a predetermined configuration center server that interacts with each of the application nodes, where the application nodes are interaction-isolated distributed nodes; when a cache change event occurs at a target node among the application nodes, the cache change event is published to the other application nodes in response to the target node's interaction with the configuration center server; and the initial data storage states of the other nodes are updated based on the cache change event, so that the other nodes are refreshed to a target data storage state, where a first query result obtained by performing a predetermined query against the other nodes in the target data storage state matches a second query result obtained from the target node based on the cache change event. The embodiments thereby use the interaction between the distributed application nodes and the configuration center server to improve the flexibility of cache processing, achieve the technical effect of easing the restrictions on consistent cache refreshing for interaction-isolated distributed nodes, and in turn solve the technical problem in the related art that cache refreshing is highly restricted for such nodes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flowchart of an alternative cache processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative cache processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative cache processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of description, the following will describe some terms or terminology involved in the embodiments of the present application:
Spring Cache is an abstraction layer provided by the Spring framework, an open-source Java application framework, for simplifying the use of caching. Spring Cache provides abstract support for various caching solutions. By using caching through annotations, caching behavior can be declared conveniently at the method level without deep knowledge of the specific cache implementation details.
Nacos is a high-performance, highly available configuration center that can be used to publish configuration; each application node can fetch and listen to Nacos configuration and publication events. In microservice architectures in particular, Nacos also serves as a service registration and discovery center, allowing service providers and consumers to discover and communicate with each other easily.
A cache is a technique that temporarily stores the results of computation for direct reuse by subsequent requests. In applications, caches are used to store frequently accessed data to improve access speed and performance. When an application needs certain data, it first checks whether the data exists in the cache; if so, it returns the cached data directly; if not, it performs the computation, query, or other operation and stores the result in the cache for the next use.
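The check-then-load flow just described is the classic cache-aside pattern. A minimal, dependency-free sketch in Java follows; the class and method names are illustrative, not taken from the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: check the local cache first, fall back to the
// loader (e.g. a database query) on a miss, and store the result for next time.
public class LocalCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    // Returns the cached value, invoking the loader only on a miss.
    public V getOrLoad(K key, Function<K, V> loader) {
        return store.computeIfAbsent(key, loader);
    }

    // Evicts an entry, e.g. after a cache change event arrives from elsewhere.
    public void invalidate(K key) {
        store.remove(key);
    }
}
```

With `invalidate` standing in for the refresh step, a subsequent `getOrLoad` on the same key reloads fresh data from the backing source.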
Dubbo is a high-performance, lightweight RPC (remote procedure call) framework widely used to build microservices and inter-service communication.
In related-art distributed systems, events are published through Redis and other nodes listen for them to refresh their cache data; this mechanism keeps the data of every node consistent. In a multi-machine-room parallel computing environment, however, this strategy no longer applies, because each machine room typically has an independent Redis deployment, so the Redis of one machine room cannot publish events to the Redis of other machine rooms. Moreover, for reasons of security, network latency, and bandwidth, applications generally do not connect across machine rooms to the Redis clusters of other rooms.
In another related art, publishing an MQ (Message Queue) message and listening to it as a broadcast can solve the data consistency problem to some extent, but under some architectures this approach is not feasible. For example, if each machine room has its own MQ server, broadcasting messages across machine rooms becomes very difficult. In addition, some custom MQ solutions may not implement broadcast listening at all, or may not implement it in a way that satisfies specific business requirements.
Spring Cache is a framework that provides a cache abstraction layer for Java applications and can be integrated with a variety of cache solutions (such as Redis). However, the cache frameworks that Spring Cache natively supports do not usually come with a cross-node cache refresh mechanism, which means that when an application updates cached data, the refresh must be triggered manually, leading to data consistency problems, particularly in high-concurrency scenarios where this adapts poorly.
In view of the foregoing, embodiments of the present invention provide a cache processing method. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one illustrated here.
FIG. 1 is a flowchart of a cache processing method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
Step S102: determine a plurality of application nodes whose initial data storage states match, together with a predetermined configuration center server that interacts with each of the application nodes, where the application nodes are interaction-isolated distributed nodes.
It will be appreciated that in a distributed system, the initial data storage states of the distributed application nodes it contains are the same, or match. Interaction isolation between the application nodes means that data cannot be directly communicated or shared among them. Each application node interacts with the predetermined configuration center server. Determining the application nodes whose initial data storage states match, and the configuration center server with which each of them interacts, provides the basis for the data consistency and synchronization that follows in the distributed system.
Step S104: when a cache change event occurs at a target node among the application nodes, publish the cache change event to the other application nodes in response to the target node's interaction with the configuration center server.
It will be appreciated that when a cache change event occurs at one of the application nodes, that target node interacts with the configuration center server to notify it that the cache state has changed. After receiving the target node's notification, the configuration center server publishes the cache change event to the other application nodes. In this way the other nodes learn that the cached data has changed, and the configuration center server can publish the event rapidly once notified. Because the application nodes maintain their interaction with the configuration center server, they receive change-event notifications promptly and can update their own cached data accordingly. Using the configuration center server for publication and notification removes the related art's dependence on Redis as the propagation mechanism and improves scalability.
Optionally, using a cache reduces the frequency of access to a database or other time-consuming operations, improving the response speed and concurrency capability of the system. In Spring framework applications, Spring Cache can be used to simplify the use and management of caches and to improve the performance and scalability of the application.
In an alternative embodiment, each of the application nodes is provided with an event publisher for interacting with the configuration center server, and the method further includes: sending, through the event publisher of each application node, the cache identifier corresponding to that node, and generating node cache information at the configuration center server.
It will be appreciated that each application node is provided with an event publisher responsible for interacting both with its application node and with the configuration center server. The event publisher gives the application node a standardized interface through which information about cache change events is exchanged with the configuration center server. By receiving the cache identifier sent from each node, the configuration center server obtains node cache information indicating which caches the application nodes each need to refresh in parallel.
Optionally, the event publisher may take the form of an interface: a Notifier event publisher interface is defined, whose job is to publish cache change events to the configuration center server (such as Nacos). Defining it as an interface means the event publisher can be provided by different implementation classes. The interface defines a publish(Key key) method that accepts a cache key as its parameter: publish is the publishing method, Key is the parameter type, and key identifies the cached object.
The event publisher also defines a register(Notified notified) method for registering an object or entity; it accepts an object of the Notified (receiver) type. After the event publisher publishes a cache change event, it calls the event receiver interfaces registered with it to notify them of the content of the cache change event.
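The Notifier/Notified contract described in the two paragraphs above can be sketched as plain Java interfaces. The in-memory implementation below is only a stand-in to illustrate the register/publish relationship; the key type is simplified to String, and none of this is the patent's actual code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the receiver interface: called when a cache change event arrives.
interface Notified {
    void notified(String key);
}

// Sketch of the publisher interface: publishes a cache change and lets
// receivers register themselves, as described in the text above.
interface Notifier {
    void publish(String key);
    void register(Notified receiver);
}

// An in-memory stand-in for a concrete Notifier, used only to illustrate the
// contract; a NacosNotifier would talk to the Nacos configuration center instead.
class InMemoryNotifier implements Notifier {
    private final List<Notified> receivers = new CopyOnWriteArrayList<>();

    @Override public void register(Notified receiver) { receivers.add(receiver); }

    @Override public void publish(String key) {
        // Fan the event out to every registered receiver.
        for (Notified r : receivers) r.notified(key);
    }
}
```

Because callers depend only on the interfaces, the in-memory class can later be swapped for an implementation backed by a real configuration center without changing caller code.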
It should be noted that an interface defines a set of methods without implementing them; an interface can be regarded as a declaration of which methods a class implementing it must provide. An implementation class is a class that concretely implements an interface or abstract class, providing concrete implementations of all the methods defined there.
Optionally, the publish method of the Notifier event publisher writes the list of changed cache keys, in CAS (Compare-And-Swap) fashion, under a designated dataId (configuration set) of the Nacos configuration center server. The dataId format is set to notifyCache.xxx, where xxx is the name of a particular local cache; for example, notifyCache.userInfo can serve as the identifier of the user-information cache. When writing the configuration, the method serializes the list of cache keys into a JSON string; JSON (JavaScript Object Notation) is a lightweight data interchange format. The JSON string is stored as configuration content in the form of an array list, where each array element is the JSON string representation of one key, i.e. each cache key is serialized into string form.
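As a rough illustration of the serialization convention just described — a notifyCache.xxx dataId plus a JSON array of stringified keys — the following sketch hand-rolls the JSON only to stay dependency-free; a real implementation would use a JSON library and the Nacos client's publish API, and the class and method names here are assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the payload convention: the dataId names the cache, and the
// configuration content is a JSON array whose elements are stringified keys.
public class CacheChangePayload {

    // Builds the dataId for a given local cache name, e.g. notifyCache.userInfo.
    static String dataIdFor(String cacheName) {
        return "notifyCache." + cacheName;
    }

    // Serializes the changed keys into a JSON array of strings.
    // Hand-rolled (no escaping) purely for illustration.
    static String toJsonArray(List<String> keys) {
        return keys.stream()
                   .map(k -> "\"" + k + "\"")
                   .collect(Collectors.joining(",", "[", "]"));
    }
}
```

The resulting string would be written as the configuration content under the dataId, where listening nodes deserialize each element back into a key.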
Optionally, after receiving the cache identifiers from the different application nodes, the configuration center server consolidates the identifier information into a node cache information table or a similar data structure. The table contains the cache identifiers of all application nodes and possibly other related information, such as node state and last update time.
In an alternative embodiment, publishing a cache change event to the other application nodes in response to the target node's interaction with the configuration center server includes: publishing, through the event publisher of the target node, the target parameter indicated by the cache change event to the configuration center server, this publication serving as the interaction between the target node and the configuration center server; in response to that interaction, changing the node cache information based on the target parameter to obtain updated cache information at the configuration center server; and, when the event publisher of another node detects the updated cache information, publishing the cache change event to that node.
It will be appreciated that when a cache change event occurs at a target node, its event publisher constructs a message containing a specific indication of the event and the associated target parameters and sends it to the configuration center server. After receiving the message, the configuration center server parses the target parameters and updates its node cache information accordingly, so that it knows which parameter has changed. Once the node cache information is updated, the configuration center server broadcasts the updated cache information to all other application nodes. Each node's event publisher not only publishes events but also watches the configuration center server for updates in real time; when it detects updated cache information published by the server, it in turn notifies the event receiver of its own node. Through this processing, the distributed system maintains cache consistency across all application nodes efficiently, while the configuration center server ensures that every node receives the notification and updates its state in time whenever the cache changes.
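The publish-then-broadcast flow above can be illustrated with a toy in-memory stand-in for the configuration center. In production, Nacos provides this listen/publish behavior; the class below is purely illustrative, and its names are assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Toy stand-in for the configuration center: nodes register listeners on a
// dataId; when the target node writes new content, every listener is pushed
// the update. This mirrors the watch mechanism a real config center offers.
public class ConfigCenter {
    private final Map<String, String> configs = new HashMap<>();
    private final Map<String, List<BiConsumer<String, String>>> listeners = new HashMap<>();

    // A node's event publisher registers interest in a dataId.
    public synchronized void listen(String dataId, BiConsumer<String, String> onChange) {
        listeners.computeIfAbsent(dataId, k -> new ArrayList<>()).add(onChange);
    }

    // The target node publishes new content; the center stores it (the node
    // cache information update) and pushes it to all watching nodes.
    public synchronized void publish(String dataId, String content) {
        configs.put(dataId, content);
        for (BiConsumer<String, String> l : listeners.getOrDefault(dataId, List.of())) {
            l.accept(dataId, content);
        }
    }
}
```

A listener here would parse the pushed content back into cache keys and hand them to the node's event receiver for eviction.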
Alternatively, the above-described cache change event may be, for example, a data update, deletion, or addition.
Alternatively, the target parameter may be, for example, a key and a value of the changed cache item.
In an alternative embodiment, after the event publisher of the target node publishes the target parameter indicated by the cache change event to the configuration center server, the method further includes: determining the publication result of the publication performed by the event publisher of the target node to the configuration center server; if the publication result indicates failure, retrying the publication to the configuration center server through the event publisher of the target node up to a preset number of times to obtain a retry result; and, if the retry result indicates that the retries have failed, sending alarm information indicating publication failure to a predetermined receiving end.
It will be appreciated that the publication is performed to the configuration center server through the event publisher of the target node, and determining the publication result involves checking whether the publish operation succeeded. If the result indicates failure, the publish operation is retried. The number of retries is preset, which gives the publication a chance to succeed in the event of transient network problems or short-lived unavailability of the configuration center server. The result of each retry is determined during the retry process; if the publish operation still fails after the predetermined number of retries, an alarm message indicating publication failure is generated and sent to a predetermined receiving end. This makes the publication of cache change events to the configuration center server reliable and provides a corresponding alarm mechanism on failure, so that problems can be discovered and handled promptly, improving the availability and stability of the system.
Optionally, alongside sending the alarm information, other recovery strategies may be attempted, such as generating a local alarm log.
Optionally, the alarm information includes the severity level and the exception type of the problem, so that the receiving end can identify the problem and resolve it with corresponding measures.
Optionally, a delay of a predetermined length is inserted between retries to avoid placing excessive pressure on the configuration center server.
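Putting the retry and alarm behavior of the preceding paragraphs together, a minimal sketch might look as follows. The retry count, the alarm hook, and the omitted inter-retry delay are all illustrative assumptions, not the patent's concrete implementation.

```java
import java.util.function.BooleanSupplier;

// Sketch of the publish-with-retry flow: attempt the publish, retry a preset
// number of times on failure, and raise an alarm if every attempt fails.
public class RetryingPublisher {

    // publish  - the publish attempt, returning true on success
    // maxRetries - how many additional attempts to make after the first
    // alarm    - invoked when all attempts have failed (e.g. notify a receiver)
    static boolean publishWithRetry(BooleanSupplier publish, int maxRetries, Runnable alarm) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (publish.getAsBoolean()) {
                return true; // publication succeeded
            }
            // A real implementation would sleep a predetermined delay here
            // to avoid putting excessive pressure on the config center.
        }
        alarm.run(); // all attempts failed: alert the predetermined receiving end
        return false;
    }
}
```

The caller supplies the actual publish call and the alarm action, keeping the retry policy independent of the transport.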
In an optional embodiment, each of the application nodes is further provided with an event receiver for receiving the cache change events published by the event publisher of the corresponding application node; publishing the cache change event to the other nodes then includes pushing the cache change event to the event publishers of the other nodes through the configuration center server.
It will be appreciated that the application nodes receive cache change events through event receivers: each node is provided with an event receiver for receiving the cache change events published by that node's event publisher. When a cache change occurs, the configuration center server pushes the cache change event to the event publishers of the other nodes rather than directly to their event receivers; the event receiver on each of those nodes then receives the cache change event from its own event publisher. Separating the roles of event publisher and event receiver in this way lets the application work against the abstractions and plug in virtually any subclass of them, a mechanism that helps achieve decoupling, scalability, and high availability.
An interface is an abstract type that declares methods without containing implementation details, allowing different classes to implement the same methods and thereby achieving polymorphism. Abstracting the event publisher and the event receiver as interfaces means that any class implementing the declared interface can be used to notify other nodes of cache change events.
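The interface abstraction can be sketched as follows; the String-typed key and the exact method signatures are assumptions for illustration (the patent elsewhere uses a Key class and a notified(Key key) method):

```java
// Notifier: the event publisher abstraction.
interface Notifier {
    void publish(String key);            // publish a cache change for the given key
}

// Notified: the event receiver abstraction.
interface Notified {
    void notified(String key);           // receive a cache change for the given key
}

// A minimal in-process stand-in for the configuration center: the publisher
// directly invokes every registered receiver. Real implementations
// (e.g. a Nacos- or Dubbo-backed Notifier) would cross process boundaries.
class LocalNotifier implements Notifier {
    private final java.util.List<Notified> receivers = new java.util.ArrayList<>();
    void register(Notified receiver) { receivers.add(receiver); }
    @Override public void publish(String key) {
        for (Notified n : receivers) n.notified(key);
    }
}
```

Because both interfaces have a single method, any lambda or class implementing them can be plugged in, which is exactly the polymorphism the text describes.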
NacosNotifier is a specific implementation of the Notifier interface that pushes the cache change event using Nacos as the configuration center server. To support a different notification mechanism or to integrate with other systems, DubboNotifier, for example, may be another implementation of the Notifier interface that pushes the cache change event using Dubbo's broadcast call mode: when a cache change occurs, the event is sent to the Dubbo service provider, which then broadcasts it to all consumers subscribed to the event, so that all relevant application nodes receive the cache change notification in time and update their local state accordingly.
By using different Notifier implementations (e.g., NacosNotifier or DubboNotifier), flexibility and scalability can be maintained, allowing an appropriate notification mechanism to be selected according to specific needs and the technology stack without modifying the core business logic or event handling flow.
Optionally, after construction, NacosNotifier needs to detect content changes of the Nacos cache dataId (for example, notifyCache.userInfo) in real time. When the content changes, each Key is parsed according to the agreed format and deserialized into a Java object. Once a Key is successfully deserialized, NacosNotifier calls the notified method, for example notified(Key key), of each previously registered Notified object to report the changed Key information, so that other parts depending on the dataId know that the content has been updated and can process it accordingly.
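The parse-each-Key step might look like the following sketch. The "type:value" line format, the Key fields, and the class names are all assumptions: the patent only says the content is parsed "according to the agreed format", without specifying that format:

```java
import java.util.ArrayList;
import java.util.List;

// A Key object deserialized from the pushed dataId content.
class Key {
    final String type;
    final String value;
    Key(String type, String value) { this.type = type; this.value = value; }
}

// Parses the changed dataId content into Key objects, one per line,
// assuming an agreed "type:value" format. Each successfully parsed Key
// would then be handed to the registered Notified objects.
class KeyChangeParser {
    static List<Key> parse(String configContent) {
        List<Key> keys = new ArrayList<>();
        for (String line : configContent.split("\n")) {
            line = line.trim();
            if (line.isEmpty()) continue;
            String[] parts = line.split(":", 2);   // agreed "type:value" format (assumed)
            if (parts.length == 2) keys.add(new Key(parts[0], parts[1]));
        }
        return keys;
    }
}
```

In a real NacosNotifier this parsing would run inside the Nacos configuration listener callback that fires whenever the dataId content changes.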
Step S106: updating the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state, wherein a first query result of the other nodes performing a predetermined query based on the target data storage state matches a second query result of the target node, at which the cache change event occurred, performing the predetermined query.
It will be appreciated that upon receiving the cache change event, the other nodes update their own data storage states in response. The first query result of the predetermined query performed by the other nodes based on the updated target data storage state matches the second query result of the predetermined query performed by the target node at which the cache change event occurred, which means that whichever distributed node executes the query, the result obtained is the same, ensuring consistency of cache data across the nodes.
In an alternative embodiment, updating the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state includes: based on the target parameter indicated by the cache change event, notifying the event receivers of the other nodes through the event publishers of the other nodes; and in response to the event receivers of the other nodes receiving the notification of the cache change event, calling a predetermined cleaning function through the event receivers to clean the data of the initial data storage states of the other nodes, obtaining the other nodes refreshed to the target data storage state.
It can be understood that the cache change event contains information about the target parameter. The target node uses its event publisher to notify the event receivers of the other nodes of the cache change event, ensuring that all relevant nodes know a data change has occurred; receipt of the notification by the event receivers of the other nodes marks the start of the data cleaning and updating process. After receiving the notification, the event receiver of each other node invokes a predetermined cleaning function, which is responsible for cleaning that node's initial data storage state in preparation for the data update. The predetermined cleaning function may perform a series of operations, such as deleting old data, marking data as invalid, or performing any other necessary cleaning task. Through this processing, the data storage states of the other nodes are refreshed to the target data storage state; based on the event notification and the predetermined cleaning function, the data storage states of all relevant nodes can be updated timely and accurately when data changes occur, maintaining data consistency and accuracy.
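A minimal sketch of the receiver side described above, with assumed names: on notification, the predetermined cleaning function simply evicts the changed key from the node's local cache, refreshing it toward the target state. Here eviction stands in for whatever cleaning (deletion, invalidation marking) a real implementation performs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// An application node's event receiver holding a local cache.
class LocalCacheReceiver {
    private final Map<String, Object> localCache = new ConcurrentHashMap<>();

    void put(String key, Object value) { localCache.put(key, value); }
    Object get(String key) { return localCache.get(key); }

    /** Entry point invoked when a cache change notification arrives. */
    void notified(String key) { clean(key); }

    /** Predetermined cleaning function: here, evict the stale entry. */
    private void clean(String key) { localCache.remove(key); }
}
```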
In an alternative embodiment, the plurality of application nodes each interact with a predetermined database, the cache change event is the execution of a first data change value of the target parameter in the predetermined database and the execution of a second data change value of the target parameter in the local cache of the target node, and the first data change value matches the second data change value. After obtaining the other nodes refreshed to the target data storage state, the method further includes: when the target data storage state is a state in which the target parameter has been emptied, in response to a query request to perform the predetermined query on the other nodes, performing the predetermined query on the other nodes and obtaining a third query result of no returned parameter value; when the third query result is no returned parameter value, performing the predetermined query against the predetermined database and obtaining the first data change value as the first query result; and in response to a query request to perform the predetermined query on the target node, performing the predetermined query on the target node and obtaining the second data change value as the second query result.
It will be appreciated that when the plurality of application nodes interact with the predetermined database and a cache change event causes a data change of the target parameter, the first data change value of the target parameter is written to the predetermined database and the corresponding second data change value is written to the local cache of the target node, and the two values are ensured to match so as to maintain data consistency between the database and the cache. It should be noted that the predetermined database holds the latest data state, while the cache inconsistency problem arises because the local caches of the other distributed application nodes have not been synchronously refreshed. After receiving the cache change event, the other nodes refresh to the target data storage state and remain consistent. If the target data storage state is a state in which the target parameter has been emptied, each node accordingly empties the data about the target parameter from its local cache.
When the predetermined query is performed on another node, if the data about the target parameter in that node's local cache has been emptied (i.e., the target data storage state is the emptied state), the query returns no parameter value, i.e., the third query result is no returned parameter value. In this case, the system falls back to the predetermined database to perform the predetermined query and obtain the latest data change value (i.e., the first data change value).
When the predetermined query is performed on the target node, since the latest data change value (i.e., the second data change value) is already contained in the target node's local cache, this value is returned directly, as the second query result, to the client that initiated the query.
Through this processing, the latest, matching query results are returned even after the cached data is emptied, so data consistency is maintained. Furthermore, since all nodes remain synchronized through the configuration center server or another mechanism, the query results remain consistent throughout the distributed system.
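The read path above can be sketched as a simple read-through node; the class names and the Function-typed database stand-in are assumptions for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A node that answers the predetermined query from its local cache,
// falling back to the predetermined database when the entry was emptied.
class ReadThroughNode {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final Function<String, String> database;   // stands in for the predetermined database

    ReadThroughNode(Function<String, String> database) { this.database = database; }

    void cachePut(String key, String value) { localCache.put(key, value); }
    void cacheEvict(String key) { localCache.remove(key); }

    /** Local cache first; a null hit (emptied state) falls back to the database. */
    String query(String key) {
        String cached = localCache.get(key);           // null corresponds to the third query result
        if (cached != null) return cached;
        String fromDb = database.apply(key);           // the first data change value
        if (fromDb != null) localCache.put(key, fromDb);
        return fromDb;
    }
}
```

With this shape, the target node serves the second data change value from its cache while an emptied node fetches the matching first data change value from the database, so both return the same result.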
It should be noted that the embodiment of the present invention avoids the inconsistency that would arise if, because the caches of the other nodes were not updated, they continued to store the non-updated data of the initial data storage state, so that a query result obtained from the updated target node differed from a query result obtained from the non-updated other nodes. The other nodes are not required to have exactly the same data storage state as the target node: even if a node's cached data has been emptied, the system can still use the configuration center server or another shared database to ensure that the latest query value is returned, thereby ensuring that the query results match. This effectively improves the availability and data consistency of the system.
It should be noted that a custom Cache interface, denoted SpringNotifyCache, is defined. The custom Cache interface inherits the Cache interface provided by Spring Cache, and also inherits the Notifier interface (event publisher) and the Notified interface (event receiver), which means the custom cache can be used with the Spring Cache system, can receive cache change events, can publish cache change events, and retains the access functions of the cache itself. If it does not need to be used with the Spring Cache hierarchy, the custom Cache interface need not inherit the Cache interface provided by Spring Cache; inheriting only the Notifier and Notified interfaces means the custom cache need not fit the Spring Cache system but still has the capability to receive cache change events, to publish cache change events, and to provide the cache's own access functions.
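The combined contract can be sketched as follows. Signatures are assumptions; in the Spring-based variant the interface would additionally extend Spring's Cache interface, which is omitted here to keep the example self-contained:

```java
// Event publisher and event receiver abstractions (assumed signatures).
interface Notifier { void publish(String key); }
interface Notified { void notified(String key); }

// The custom cache contract: cache access plus the ability to both
// publish and receive cache change events.
interface NotifyCacheContract extends Notifier, Notified {
    Object get(String key);
    void put(String key, Object value);
}

// A minimal in-memory implementation: publish records the outgoing event,
// notified evicts the changed key from the local store.
class SimpleNotifyCache implements NotifyCacheContract {
    private final java.util.Map<String, Object> store = new java.util.HashMap<>();
    final java.util.List<String> published = new java.util.ArrayList<>();
    @Override public void publish(String key) { published.add(key); }
    @Override public void notified(String key) { store.remove(key); }
    @Override public Object get(String key) { return store.get(key); }
    @Override public void put(String key, Object value) { store.put(key, value); }
}
```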
Optionally, the method further includes: defining a custom cache decorator class SpringNotifyCacheDecorator. Constructing the custom cache decorator requires a specific implementation of Notifier (event publisher) and of Spring Cache, such as Spring's built-in ConcurrentMapCache (a Cache implementation that safely accesses and modifies key-value pairs in a multithreaded environment without race conditions); the specific Spring Cache implementation implements the Spring Cache interface, which may be the same Spring Cache interface inherited by SpringNotifyCache.
Optionally, the publish method of the custom cache decorator for the Notifier interface is implemented by publishing the event directly through the Notifier instance provided at construction.
Optionally, the notified method of the custom cache decorator for the Notified interface is implemented to clean up the cache content in the local cache object according to the key in the parameter.
Optionally, the evict (for clearing a specific cache entry) and clear (for clearing all cache entries) methods of the Spring Cache interface are implemented by the cache decorator to clear the content in the cache according to the key in the method parameters and then call the publish method to publish a cache change event.
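A dependency-free sketch of this decorator behavior: evict and clear first remove the local entry, then publish a cache change event through the Notifier supplied at construction. In the patent's Spring-based variant the class would implement the Spring Cache interface and wrap an implementation such as ConcurrentMapCache; here a plain map stands in for the underlying cache, and the "*" clear-all marker is an assumption:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

class NotifyCacheDecoratorSketch {
    private final Map<String, Object> delegate = new ConcurrentHashMap<>();
    private final Consumer<String> notifier;   // Notifier.publish, supplied at construction

    NotifyCacheDecoratorSketch(Consumer<String> notifier) { this.notifier = notifier; }

    void put(String key, Object value) { delegate.put(key, value); }
    Object get(String key) { return delegate.get(key); }

    /** Clear one entry locally, then publish the change to other nodes. */
    void evict(String key) {
        delegate.remove(key);
        notifier.accept(key);
    }

    /** Clear all entries locally, then publish a clear-all change. */
    void clear() {
        delegate.clear();
        notifier.accept("*");   // assumed clear-all marker
    }
}
```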
In an alternative embodiment, the method further includes: defining a custom cache decorator class, denoted NotifyCacheDecorator. If the cache decorator does not need to be used with the Spring Cache system, it can be constructed from just a Notifier and a specific Cache implementation, i.e., the specific implementation of the underlying cache can be replaced at will.
In an alternative embodiment, the method further includes: creating a cache factory to create cache objects, which are instantiated based on the defined cache decorator type; a usable cache object can be created by passing in a pre-created NacosNotifier and a ConcurrentMapCache implementation.
Optionally, the created cache object can be used in the Spring Cache manner via annotations; if a certain cached parameter then needs to be cleaned, the Spring framework automatically invokes the evict method, which automatically publishes the cache change event to the other nodes.
Through step S102, a plurality of application nodes with matched initial data storage states, and a predetermined configuration center server interacting with each of the plurality of application nodes, are determined, wherein the plurality of application nodes are distributed nodes with interaction isolation; through step S104, when a cache change event occurs at a target node among the plurality of application nodes, the cache change event is published to the other nodes among the plurality of application nodes in response to the interaction between the target node and the configuration center server; and through step S106, the initial data storage states of the other nodes are updated based on the cache change event to obtain the other nodes refreshed to the target data storage state, wherein a first query result of the other nodes performing the predetermined query based on the target data storage state matches a second query result of the target node, at which the cache change event occurred, performing the predetermined query. This method uses the interaction between the distributed application nodes and the configuration center server to improve the flexibility of cache processing and achieves the technical effect of lifting the cache consistency refresh restriction on interactively isolated distributed nodes, thereby solving the technical problem in the related art of a high cache refresh restriction on interactively isolated distributed nodes.
Based on the foregoing embodiments and optional embodiments, an optional implementation is provided. Fig. 2 is a schematic diagram of an optional cache processing method according to an embodiment of the present invention. As shown in Fig. 2, a Nacos configuration center server interacts with a plurality of NotifyCache cache containers, where each NotifyCache cache container corresponds to its own NacosNotifier event publisher. Each application node is correspondingly provided with a Notifier event publisher interface, a Notified event receiver interface, and a Cache container.
Assume a NacosNotifier event publisher named "node A". Node A performs active notification through its corresponding Notifier event publisher, sending the changed Key object (target parameter) to the Nacos configuration center server; the publish method is implemented by designating a dataId (notifyCache.userInfo, as a user information cache) in the Nacos configuration center server and writing the list of changed Key classes in a CAS manner.
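The CAS-style append of a changed Key to the shared list can be sketched in-process with AtomicReference; in the actual scheme the compare-and-swap would be performed against the Nacos dataId content rather than local memory, so the class below is only an illustrative assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Appends changed keys to a shared list using compare-and-swap:
// read the current list, build an updated copy, and retry on a race.
class KeyListCas {
    private final AtomicReference<List<String>> keys =
            new AtomicReference<>(List.of());

    void appendKey(String key) {
        while (true) {
            List<String> current = keys.get();
            List<String> updated = new ArrayList<>(current);
            updated.add(key);
            if (keys.compareAndSet(current, List.copyOf(updated))) return;  // retry if raced
        }
    }

    List<String> snapshot() { return keys.get(); }
}
```

The retry loop is what makes concurrent publishers safe: if another node wrote the list between the read and the swap, the swap fails and the append is retried against the fresh value.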
If the NacosNotifier event publisher named "node A" fails to send to the Nacos configuration center server, it can retry; if the retries are exhausted, it prints a log alarm so that the event can be manually published to the Nacos configuration center server.
After publish writes the changed Key sequence to the dataId of the Nacos configuration center server, the Nacos configuration center server pushes the Key change event over a long connection to, say, a NacosNotifier event publisher named "node B". A Notified event receiver interface registered in the "node B" NacosNotifier event publisher then calls notified(Key key) to determine which Key class object should be modified, cleaning up the stored Key in "node B".
Taking the Spring framework as an example, the NotifyCache custom cache interface and the SpringNotifyCacheDecorator custom cache decorator are used with NotifyCache as the cache container. The custom cache decorator implements the publish method of the Notifier interface by publishing the event directly through the Notifier instance provided at construction. The notified method of the Notified interface is implemented to clean up the cache content in the local cache object according to the key in the parameter. The evict (for clearing a specific cache entry) and clear (for clearing all cache entries) methods of the Spring Cache interface are implemented to clear the cache according to the key in the method parameters and then call the publish method to publish a cache change event.
At least the following effects are achieved by the above alternative embodiments: the post-change refresh function for the cache contents of a custom Spring Cache is implemented using the configuration publishing and detection capabilities of Nacos. The Notifier interface is abstracted, removing restrictions on the implementation class, and a specific user-facing cache is realized by assembling the event publisher, the event processor, and the underlying cache memory. Supporting the individual use and on-demand replacement of these components increases the flexibility of cache processing and avoids the limitations of an integrated framework or system.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a cache processing device, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the terms "module," "apparatus" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
According to an embodiment of the present invention, there is further provided an embodiment of an apparatus for implementing the cache processing method. Fig. 3 is a schematic diagram of a cache processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the cache processing apparatus includes: an application node determining module 302, a publish cache change module 304, and a cache processing module 306, which are described below.
An application node determining module 302, configured to determine a plurality of application nodes whose initial data storage states are matched, and a predetermined configuration center server having interactions with the plurality of application nodes, where the plurality of application nodes are distributed nodes having interaction isolation;
the publish cache change module 304 is connected to the application node determining module 302 and is configured to, when a cache change event occurs at a target node among the plurality of application nodes, publish the cache change event to the other nodes among the plurality of application nodes in response to the interaction between the target node and the configuration center server;
the cache processing module 306 is connected to the publish cache change module 304 and is configured to update the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state, wherein a first query result of the other nodes performing the predetermined query based on the target data storage state matches a second query result of the target node, at which the cache change event occurred, performing the predetermined query.
In the cache processing apparatus provided by the embodiment of the present invention, the application node determining module 302 is configured to determine a plurality of application nodes with matched initial data storage states, and a predetermined configuration center server interacting with each of the plurality of application nodes, wherein the plurality of application nodes are distributed nodes with interaction isolation; the publish cache change module 304 is connected to the application node determining module 302 and is configured to, when a cache change event occurs at a target node among the plurality of application nodes, publish the cache change event to the other nodes among the plurality of application nodes in response to the interaction between the target node and the configuration center server; and the cache processing module 306 is connected to the publish cache change module 304 and is configured to update the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state, wherein a first query result of the other nodes performing the predetermined query based on the target data storage state matches a second query result of the target node, at which the cache change event occurred, performing the predetermined query. This achieves the purpose of improving cache processing flexibility by using the interaction between the distributed application nodes and the configuration center server, achieves the technical effect of lifting the cache consistency refresh restriction on interactively isolated distributed nodes, and thereby solves the technical problem in the related art of a high cache refresh restriction on interactively isolated distributed nodes.
It should be noted that each of the above modules may be implemented by software or hardware, for example, in the latter case, it may be implemented by: the above modules may be located in the same processor; or the various modules described above may be located in different processors in any combination.
Here, the application node determining module 302, the publish cache change module 304, and the cache processing module 306 correspond to steps S102 to S106 in the foregoing embodiment; the modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of that embodiment. It should be noted that the above modules may run in a computer terminal as part of the apparatus.
It should be noted that, the optional or preferred implementation manner of this embodiment may be referred to the related description in the embodiment, and will not be repeated herein.
The above-mentioned cache processing apparatus may further include a processor and a memory, where the application node determining module 302, the publish cache change module 304, the cache processing module 306, and the like are stored in the memory as program units, and the processor executes the above program units stored in the memory to implement the corresponding functions.
The processor includes one or more kernels, and each kernel fetches the corresponding program units from the memory. The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a nonvolatile storage medium, on which a program is stored, which when executed by a processor, implements a cache processing method.
The embodiment of the invention provides an electronic device, which comprises a processor, a memory and a program stored on the memory and capable of running on the processor, wherein the following steps are realized when the processor executes the program: determining a plurality of application nodes with matched initial data storage states and a preset configuration center server with interaction with the plurality of application nodes respectively, wherein the plurality of application nodes are distributed nodes with interaction isolation; under the condition that a cache change event occurs to a target node in a plurality of application nodes, responding to interaction between the target node and a configuration center server, and issuing the cache change event to other nodes in the plurality of application nodes; updating the initial data storage state of other nodes based on the cache change event to obtain other nodes refreshed to the target data storage state, wherein a first query result of the other nodes based on the target data storage state for performing the predetermined query is matched with a second query result of the target nodes based on the cache change event. The device herein may be a server, a PC, etc.
The invention also provides a computer program product adapted to perform, when executed on a data processing device, a program initialized with the method steps of: determining a plurality of application nodes with matched initial data storage states and a preset configuration center server with interaction with the plurality of application nodes respectively, wherein the plurality of application nodes are distributed nodes with interaction isolation; under the condition that a cache change event occurs to a target node in a plurality of application nodes, responding to interaction between the target node and a configuration center server, and issuing the cache change event to other nodes in the plurality of application nodes; updating the initial data storage state of other nodes based on the cache change event to obtain other nodes refreshed to the target data storage state, wherein a first query result of the other nodes based on the target data storage state for performing the predetermined query is matched with a second query result of the target nodes based on the cache change event.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present invention and is not intended to limit the present invention. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are to be included in the scope of the claims of the present invention.

Claims (10)

1. A cache processing method, characterized by comprising:
determining a plurality of application nodes whose initial data storage states match, and a predetermined configuration center server that interacts with each of the plurality of application nodes, wherein the plurality of application nodes are distributed nodes that are isolated from interacting with one another;
in a case where a cache change event occurs at a target node among the plurality of application nodes, issuing the cache change event to the other nodes of the plurality of application nodes in response to interaction between the target node and the configuration center server; and
updating the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to a target data storage state, wherein a first query result obtained by the other nodes performing a predetermined query based on the target data storage state matches a second query result obtained by the target node based on the cache change event.
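The refresh flow of claim 1 can be sketched as a small in-process pub/sub model, with the configuration center server relaying the cache change event from the target node to the interaction-isolated nodes. All names below (ConfigCenter, Node, and the "rate" parameter) are illustrative, not from the patent; a real deployment would use an actual configuration center service rather than in-memory calls.

```python
class ConfigCenter:
    """Stand-in for the predetermined configuration center server:
    relays cache change events between registered application nodes."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        self.nodes.append(node)

    def publish(self, source, event):
        # Issue the cache change event to every node except the target
        # node that originated it.
        for node in self.nodes:
            if node is not source:
                node.on_cache_change(event)


class Node:
    """Interaction-isolated application node holding a local cache."""
    def __init__(self, center, initial):
        self.cache = dict(initial)  # initial data storage state
        self.center = center
        center.register(self)

    def change(self, key, value):
        # A cache change event occurs at this (target) node.
        self.cache[key] = value
        self.center.publish(self, (key, value))

    def on_cache_change(self, event):
        # Refresh this node to the target data storage state.
        key, value = event
        self.cache[key] = value

    def query(self, key):
        # The predetermined query against the local cache.
        return self.cache.get(key)
```

After a change at one node, a query on any other node returns the same result as a query on the target node, which is the matching condition the claim states.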
2. The method of claim 1, wherein the plurality of application nodes are each provided with an event publisher for interacting with the configuration center server, and the method further comprises:
sending, by the event publishers respectively corresponding to the plurality of application nodes, the cache identifiers respectively corresponding to the plurality of application nodes, and generating node cache information at the configuration center server.
3. The method of claim 2, wherein issuing the cache change event to the other nodes of the plurality of application nodes in response to the target node interacting with the configuration center server comprises:
issuing, by the event publisher of the target node, a target parameter indicated by the cache change event to the configuration center server, wherein the issuing of the target parameter serves as the interaction between the target node and the configuration center server;
in response to the interaction between the target node and the configuration center server, changing the node cache information based on the target parameter to obtain updated cache information in the configuration center server; and
issuing the cache change event to the other nodes when the event publishers of the other nodes detect the updated cache information.
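The register-then-watch mechanics of claims 2 and 3 can be sketched as a store of node cache information keyed by cache identifier, with each node's event publisher acting as a watcher for updates made by other nodes. The class and identifier names here are illustrative assumptions, not the patent's terminology.

```python
class ConfigCenterStore:
    """Holds node cache information at the configuration center;
    event publishers watch it for updated cache information."""
    def __init__(self):
        self.cache_info = {}   # cache identifier -> {parameter: value}
        self.watchers = []     # (cache identifier, callback) pairs

    def register(self, cache_id, watcher):
        # Each node's event publisher sends its cache identifier,
        # generating node cache information at the center (claim 2).
        self.cache_info.setdefault(cache_id, {})
        self.watchers.append((cache_id, watcher))

    def publish(self, source_id, param, value):
        # The target node's publisher issues the target parameter
        # (claim 3); the center changes the node cache information.
        self.cache_info[source_id][param] = value
        # Publishers of the other nodes detect the updated cache
        # information and receive the cache change event.
        for cache_id, watcher in self.watchers:
            if cache_id != source_id:
                watcher(param, value)
```

A real configuration center would deliver the update via long polling or a watch API rather than a direct callback; the callback stands in for that notification path.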
4. The method of claim 3, wherein after issuing, by the event publisher of the target node, the target parameter indicated by the cache change event to the configuration center server, the method further comprises:
determining a publishing result of the publishing performed by the event publisher of the target node to the configuration center server;
in a case where the publishing result indicates a publishing failure, performing, by the event publisher of the target node, retry publishing to the configuration center server for a predetermined number of retries to obtain a retry result; and
in a case where the retry result indicates that the retries have failed, sending alarm information indicating the publishing failure to a predetermined receiving end.
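The failure handling in claim 4 amounts to a bounded retry loop with an alarm on final failure. A minimal sketch, assuming `publish` returns a truthy publishing result and `alarm` delivers the alarm information to the predetermined receiving end (both are hypothetical callables, not a named API):

```python
def publish_with_retry(publish, event, max_retries=3, alarm=print):
    """Publish a cache change event; on failure, retry a predetermined
    number of times, then send alarm information if all retries fail."""
    if publish(event):            # initial publishing attempt
        return True
    for _ in range(max_retries):  # retry publishing on failure
        if publish(event):
            return True
    # Retry result indicates failure: notify the receiving end.
    alarm(f"publish failed after {max_retries} retries: {event!r}")
    return False
```

Keeping the retry count predetermined (rather than retrying indefinitely) bounds the time a stale cache can persist before an operator is alerted.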
5. The method of claim 3, wherein the plurality of application nodes are each further provided with an event receiver, the event receiver being configured to receive the cache change event issued by the event publisher of the corresponding application node; and issuing the cache change event to the other nodes comprises:
pushing, by the configuration center server, the cache change event to the event publishers of the other nodes.
6. The method of claim 5, wherein updating the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to the target data storage state comprises:
notifying, based on the target parameter indicated by the cache change event, the event receivers of the other nodes by means of the event publishers of the other nodes; and
in response to the event receivers of the other nodes receiving the notification of the cache change event, invoking a predetermined cleaning function through the event receivers to clean the data of the initial data storage states of the other nodes, thereby obtaining the other nodes refreshed to the target data storage state.
7. The method of any one of claims 1 to 6, wherein the plurality of application nodes each interact with a predetermined database, the cache change event comprises executing a first data change value for a target parameter in the predetermined database and a second data change value for the target parameter in a local cache of the target node, the first data change value matching the second data change value; and after obtaining the other nodes refreshed to the target data storage state, the method further comprises:
in a case where the target data storage state is a state in which the target parameter has been cleared, and in response to a query request for performing the predetermined query on the other nodes, performing the predetermined query on the other nodes such that a third query result is an unreturned parameter value;
in a case where the third query result is the unreturned parameter value, performing the predetermined query on the predetermined database to obtain the first data change value as the first query result; and
in response to a query request for performing the predetermined query on the target node, obtaining the second data change value as the second query result.
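The query path in claim 7 is a cache-aside read: a cleared target parameter yields an unreturned value locally, so the node falls through to the predetermined database and picks up the first data change value. A minimal sketch, with the dictionaries standing in for the local cache and the database:

```python
def query_with_fallback(local_cache, database, key):
    """Predetermined query on a node whose cache entry may have been
    cleared: an unreturned parameter value falls back to the database."""
    value = local_cache.get(key)      # third query result
    if value is None:                 # unreturned parameter value
        value = database.get(key)     # first data change value from DB
        if value is not None:
            local_cache[key] = value  # repopulate the cleared cache
    return value
```

Because the first data change value (database) and the second data change value (target node's local cache) match, the other nodes' fallback reads agree with queries served directly by the target node.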
8. A cache processing apparatus, characterized by comprising:
an application node determining module, configured to determine a plurality of application nodes whose initial data storage states match, and a predetermined configuration center server that interacts with each of the plurality of application nodes, wherein the plurality of application nodes are distributed nodes that are isolated from interacting with one another;
a cache change publishing module, configured to, in a case where a cache change event occurs at a target node among the plurality of application nodes, issue the cache change event to the other nodes of the plurality of application nodes in response to interaction between the target node and the configuration center server; and
a cache processing module, configured to update the initial data storage states of the other nodes based on the cache change event to obtain the other nodes refreshed to a target data storage state, wherein a first query result obtained by the other nodes performing a predetermined query based on the target data storage state matches a second query result obtained by the target node based on the cache change event.
9. A non-volatile storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor to perform the cache processing method of any one of claims 1 to 7.
10. An electronic device, comprising: one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache processing method of any of claims 1 to 7.
CN202410361071.8A 2024-03-27 2024-03-27 Cache processing method and device, storage medium and electronic equipment Pending CN118250333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410361071.8A CN118250333A (en) 2024-03-27 2024-03-27 Cache processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN118250333A true CN118250333A (en) 2024-06-25

Family

ID=91564305


Similar Documents

Publication Publication Date Title
US10218809B2 (en) Dynamic configuration of service communication
CN109981716B (en) Micro-service calling method and device
CN102868736B (en) A kind of cloud computing Monitoring framework design basis ground motion method and cloud computing treatment facility
US20200226185A1 (en) Publishing rest api changes based on subscriber's customized request
CN111078504A (en) Distributed call chain tracking method and device, computer equipment and storage medium
US20170064027A1 (en) Data caching in a collaborative file sharing system
US11546233B2 (en) Virtual network function bus-based auto-registration
US10521263B2 (en) Generic communication architecture for cloud microservice infrastructure
CN107786527B (en) Method and equipment for realizing service discovery
US9374417B1 (en) Dynamic specification auditing for a distributed system
CN110968603B (en) Data access method and device
CN107172214B (en) Service node discovery method and device with load balancing function
CN111510330B (en) Interface management device, method and storage medium
US20200012545A1 (en) Event to serverless function workflow instance mapping mechanism
KR20150083938A (en) System for interoperation between dds and dbms
CN116708266A (en) Cloud service topological graph real-time updating method, device, equipment and medium
Rotter et al. Telecom strategies for service discovery in microservice environments
US20230283695A1 (en) Communication Protocol for Knative Eventing's Kafka components
Korontanis et al. Real-time monitoring and analysis of edge and cloud resources
CN118250333A (en) Cache processing method and device, storage medium and electronic equipment
CN112445851A (en) Plug-in ORM framework implementation method and device, electronic equipment and storage medium
US9288177B2 (en) Inventory updating of an internet protocol (IP) alias within a highly available computing cluster
CN116647552A (en) Service processing method and system in heterogeneous micro-service cluster, terminal and storage medium
CN115373757A (en) Solving method and device for cluster monitoring data loss in Promethues fragmentation mode
CN114860432A (en) Method and device for determining information of memory fault

Legal Events

Date Code Title Description
PB01 Publication