CN116361309B - Data query system and method for updating cache data
- Publication number: CN116361309B (application number CN202310639558.3A)
- Authority: CN (China)
- Prior art keywords: data, update, cache, target data, updating
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/2365 — Information retrieval of structured data; updating; ensuring data consistency and integrity
- G06F16/24552 — Query processing; query execution; database cache management
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present disclosure provides a data query system and a method for updating cache data. The method includes: receiving, through a service instance, a data query request sent by a client; acquiring target data corresponding to the data query request; if the target data is acquired from the cache area, detecting, based on pre-stored update information corresponding to the target data, whether the target data meets a preset cache data update condition, and if so, updating the target data in the cache area and sending the updated target data to the client; and re-determining the update information corresponding to the target data and sending the re-determined update information to the communication middleware, so that the update information stored by the service instances other than the service instance that received the data query request is updated through the communication middleware.
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a data query system and a method for updating cache data.
Background
With the rapid development of internet technology, more and more data appear on the internet, and in the process of data query, a cache system is often adopted to cache the data so as to avoid frequent query of the data in a database, thereby improving the data query performance.
In the related art, when a data query system with a cache function is used to query data, a fixed validity period is often set for different data so that the data can be cached and reused within that period. However, caching data with a fixed validity period may lead to abnormal conditions such as cache breakdown, cache penetration, and cache avalanche. How to improve the reliability of the data query system has therefore become a technical problem to be solved in the field.
Disclosure of Invention
The embodiments of the present disclosure provide at least a data query system and a method for updating cache data.
In a first aspect, an embodiment of the present disclosure provides a data query system, including a service cluster and a communication middleware, the service cluster including a plurality of service instances, wherein:
the service instance is used for receiving a data query request sent by a client and acquiring target data corresponding to the data query request; if the target data is acquired from the cache area, based on the update information stored by the service instance and corresponding to the target data, detecting whether the target data meets a preset cache data update condition, if so, updating the target data in the cache area, and sending the updated target data to the client; and re-determining update information corresponding to the target data and transmitting the re-determined update information to the communication middleware;
And the communication middleware is used for, after receiving update information sent by any service instance, sending the received update information to the service instances other than that service instance, so as to update the update information stored by those other service instances.
In one possible implementation manner, the service instance comprises an application process and a cache management component;
the application process is used for determining a query keyword corresponding to the data query request according to the received data query request;
the cache management component is used for acquiring target data matched with the query keyword from a database or the cache area corresponding to the database.
In one possible implementation, the service instance includes an update time management component;
the updating time management component is used for storing updating information corresponding to each data in the cache area; wherein the update information includes a next update time.
In one possible implementation, the service instance includes a cache asynchronous update thread;
the service instance, when updating the target data in the cache area, is configured to:
Starting an asynchronous thread through the cache asynchronous update thread, and acquiring update data corresponding to the target data from a database;
updating the target data based on the update data.
In a possible implementation manner, the update information includes a next update time, and the service instance is configured to, when redetermining the update information corresponding to the target data:
and determining the next updating time corresponding to the target data based on the preset updating time interval and the current time.
In a possible implementation manner, the service instance is further used for:
receiving first update information sent by other service instances based on the communication middleware;
determining second update information corresponding to the query keyword in update information stored by an update time management component based on the query keyword carried in the first update information;
updating the second update information based on the first update information.
In a second aspect, an embodiment of the present disclosure provides a method for updating cache data, including:
receiving, through a service instance, a data query request sent by a client;
acquiring target data corresponding to the data query request;
If the target data is acquired from the cache area, detecting whether the target data meets a preset cache data updating condition or not based on prestored updating information corresponding to the target data, if so, updating the target data in the cache area, and sending the updated target data to the client; and re-determining the update information corresponding to the target data, and sending the re-determined update information to the communication middleware so as to update the update information stored by other service instances except the service instance receiving the data query request through the communication middleware.
In a third aspect, embodiments of the present disclosure further provide a computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps in the second aspect described above.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the steps of the second aspect described above.
According to the data query system and the method for updating cache data provided by the embodiments of the present disclosure, on the one hand, a service cluster comprising a plurality of service instances is arranged to update the cache data in the cache area, which can reduce the risk of cache breakdown of the data query system caused by a large amount of cache data not being updated in time, thereby improving the stability of the data query system; on the other hand, after the cache data is updated, each service instance can also re-determine the update information corresponding to the cache data and synchronize the re-determined update information to the other service instances through the communication middleware, so that the update information is kept synchronized. This avoids the waste of computing resources caused by the same cache data being repeatedly updated within a short period because the update information is out of sync, and improves the reliability of the data query system.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope, and that a person of ordinary skill in the art may obtain other related drawings from them without inventive effort.
FIG. 1 illustrates a schematic architecture of a data query system provided by embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a service instance 13 in a data query system according to an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method for updating cache data according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of the method for updating cache data provided by an embodiment of the present disclosure as applied in actual use;
FIG. 5 illustrates a flow chart of a data query and a cache data update provided by an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a computer device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of a plurality of items; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenario, etc. of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the operation the user requests to perform will require the user's personal information to be obtained and used. In this way, the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, an application program, a server, or a storage medium, that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry a selection control for the user to choose "consent" or "disagree" to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
According to research, when a data query system with a cache function is used to query data, a fixed validity period is often set for different data so that the data can be cached and reused within that period; however, caching data with a fixed validity period may lead to abnormal conditions such as cache breakdown, cache penetration, and cache avalanche.
If the cache data is not updated in time and a large number of related query requests hit hot-spot data after it has expired (i.e., after it needs to be updated), the database receives a large number of access and query requests within a short time, so that the requests to the database are blocked and the database may even crash, resulting in the abnormal condition of "cache breakdown";
In addition, if the cache data is not updated in time and a large amount of cache data expires at the same moment, the database likewise receives a large number of access and query requests within a short time, so that the database is blocked and may even crash, resulting in the abnormal condition of "cache avalanche".
Based on the above research, the present disclosure provides a data query system, and a method and an apparatus for updating cache data. On the one hand, by arranging a service cluster including a plurality of service instances, after a plurality of data query requests are received, the plurality of service instances can process the data query requests respectively, so that a plurality of pieces of cache data in the cache area can be updated at the same time; this reduces the risk of cache breakdown of the data query system caused by a large amount of cache data not being updated in time, and improves the stability of the data query system. On the other hand, after the cache data is updated, each service instance can also re-determine the update information corresponding to the cache data and synchronize the re-determined update information to the other service instances through the communication middleware, so that the update information is synchronized among the plurality of service instances, the timeliness of the cache data seen by each service instance is ensured, and the reliability of the data query system is improved.
For the sake of understanding the present embodiment, first, the architecture of the data query system disclosed in the embodiments of the present disclosure is described in detail, where the data query system is composed of a service cluster and communication middleware.
Referring to fig. 1, an architecture diagram of a data query system according to an embodiment of the present disclosure includes a service cluster 11 and a communication middleware 12, where the service cluster 11 includes a plurality of service instances 13, and the service instances 13 are as follows:
the service instance 13 is configured to receive a data query request sent by the client 14, and obtain target data corresponding to the data query request; if the target data is obtained from the cache area 15, based on the update information corresponding to the target data stored in the service instance 13, detecting whether the target data meets a preset cache data update condition, if so, updating the target data in the cache area 15, and sending the updated target data to the client 14; and, re-determining update information corresponding to the target data and transmitting the re-determined update information to the communication middleware 12;
In the case of distributed deployment, each service instance 13 in the service cluster 11 may be deployed in a distributed server cluster, and different service instances may be deployed on different servers. In such a scenario, if different service instances stored the update information corresponding to different pieces of cache data, then when a server fails and cannot operate normally, the update information of the cache data corresponding to the service instance deployed on that server would be lost, causing abnormal situations such as that cache data no longer being updated normally. Therefore, in the case of distributed deployment, each service instance 13 in the service cluster 11 may store the update information corresponding to every piece of cache data (i.e., every piece of data cached in the cache area 15), and the update information stored by each service instance is the same, so that the update information stored by the plurality of service instances serves as mutual backups, improving the stability of the data query system. The cache area 15 may be the cache area 15 corresponding to the database 16, where the database 16 may include a local database and/or a third-party database; the third-party database may transmit data to the cache area 15 through a third-party data interface, and during a data query, if the corresponding data cannot be found in the local database, the data may be searched for through the third-party data interface and the found data cached in the cache area 15, so as to improve the hit rate of data query requests. After being queried, data in the database 16 can be cached in the cache area 15 corresponding to the database 16, so that the corresponding data can be found directly in the cache area 15 during subsequent queries, and every service instance 13 can query and acquire data from the same cache area. Detecting whether the target data meets the preset cache data update condition may be detecting whether the current time has reached the next update time corresponding to the target data; if so, it can be determined that the target data meets the preset cache data update condition, and the target data in the cache area is updated.
In addition, when judging whether the target data needs to be updated, it may also be judged whether the other cache data in the cache area, apart from the target data, needs to be updated.
Specifically, it may be detected whether the other cache data apart from the target data also meets the cache data update condition, i.e., whether the current time has reached the next update time corresponding to any piece of the other cache data; when the current time reaches the next update time corresponding to a piece of cache data, that cache data can be updated. In this way, the computing resources of each service instance in the data query system provided by the embodiments of the present disclosure can be fully utilized, and the cache data in the cache area can be updated in time.
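As a non-limiting illustration of the update-condition check described above (not part of the claimed solution), the following Java sketch shows one way a service instance could test whether the current time has reached the stored next update time for the requested key and for the other cached keys; the class and member names (e.g. nextUpdateTimes, needsUpdate) are assumptions introduced only for this example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UpdateConditionCheck {
    // query keyword -> next update time (epoch milliseconds), kept per service instance
    private final Map<String, Long> nextUpdateTimes = new ConcurrentHashMap<>();

    boolean needsUpdate(String queryKey) {
        Long next = nextUpdateTimes.get(queryKey);
        // Entries with no recorded next update time are treated here as not due;
        // the description above leaves this policy to the implementation.
        return next != null && System.currentTimeMillis() >= next;
    }

    // Besides the requested target key, the other cached keys can be checked the same way.
    void checkOthers(Iterable<String> cachedKeys, String targetKey) {
        for (String key : cachedKeys) {
            if (!key.equals(targetKey) && needsUpdate(key)) {
                // trigger the same asynchronous refresh that is used for the target data
            }
        }
    }
}
```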
In addition, the service cluster 11 may also actively update the cache data in the cache area 15 according to other cache update manners; the cache update manner may include at least one of the following modes:
Mode 1, detecting, at a preset time interval, whether the cache data in the cache area meets the cache data update condition.
Different time intervals may be set for the service instances 13 included in the service cluster 11, so that the service instances in the service cluster 11 take turns detecting the cache data in the cache area 15.
For example, taking a service cluster 11 that includes instance 1 and instance 2, the time interval for instance 1 may be set to 5 s and the time interval for instance 2 to 8 s. Instance 1 then detects the cache data in the cache area 15 at the 5th, 10th, 15th second, and so on, while instance 2 detects it at the 8th, 16th, 24th second, and so on. By setting different time intervals for the service instances, the load of the different service instances can be balanced, the detection waiting time of the cache area can be shortened, and the cache data update efficiency of the cache area can be improved.
Mode 2, determining a target instance for detection according to the real-time load of each service instance, and using the target instance to detect whether the cache data in the cache area meets the cache data update condition.
Here, at a preset detection moment, according to real-time load conditions corresponding to each service instance 13, a target instance meeting a preset real-time load requirement may be determined, and whether the cache data in the cache area meet a cache data update condition is detected by using the target instance, so as to complete cache data update of the cache area, and meanwhile, improve utilization efficiency of each service instance.
In both modes, regardless of whether a data query request from a client has been received, the cache data in the cache area is actively updated when the corresponding condition is met, which ensures the timeliness of the cache data in the cache area.
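For illustration only, the periodic scan of Mode 1 could be sketched in Java as below; the interval values and the scanCache callback are assumptions for the example, with each service instance configured with its own interval so their scans interleave.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicCacheScan {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // intervalSeconds differs per service instance (e.g. 5 s on instance 1, 8 s on instance 2),
    // so the instances take turns detecting whether cached entries meet the update condition.
    void start(long intervalSeconds, Runnable scanCache) {
        scheduler.scheduleAtFixedRate(scanCache, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
```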
The communication middleware 12 is configured to, after receiving update information sent by any service instance 13, send the received update information to other service instances 13 except the service instance 13, so as to update the update information stored by the other service instances 13.
Here, it should be noted that, the client 14 may include a plurality of clients, each of the plurality of clients may send a data query request to the service cluster 11, each service instance 13 included in the service cluster 11 may have an equal opportunity to receive the data query request (or may be distributed according to a load balancing manner), and perform a data query according to the received data query request; the communication middleware 12 is configured to provide communication services for each service instance 13 in a distributed deployment environment, and each service instance 13 may issue information to the communication middleware 12 to implement data transmission.
Specifically, each service instance 13 may subscribe to the broadcast data of the communication middleware 12, so that data published by other service instances 13 can be received through the broadcast function of the communication middleware 12. Because, in the distributed deployment environment, each service instance 13 needs to store the update information corresponding to every piece of cache data, when the communication middleware 12 sends broadcast data it may send broadcast data with the same content to the other service instances 13, thereby achieving synchronous data update of the other service instances 13 (i.e., synchronously updating the update information). In addition, the communication middleware may also synchronize data between the service instances 13 in other manners, for example by using a message queue, shared memory, or the like.
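As a minimal sketch of the broadcast pattern described above, the following Java example uses a plain in-process stand-in for the communication middleware; an actual deployment would rely on the broadcast, message-queue, or shared-memory facilities of real middleware, and all names here are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class UpdateInfoChannel {
    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    // Each service instance subscribes so that it receives update information published by the others.
    public void subscribe(Consumer<String> onUpdateInfo) {
        subscribers.add(onUpdateInfo);
    }

    // The middleware forwards the same payload to every subscriber except the sender,
    // so the stored next update times stay consistent across the service instances.
    public void publish(String updateInfo, Consumer<String> sender) {
        for (Consumer<String> subscriber : subscribers) {
            if (subscriber != sender) {
                subscriber.accept(updateInfo);
            }
        }
    }
}
```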
In a possible implementation manner, the service instance 13 includes an application process 131 and a cache management component 132, where:
the application process 131 is configured to determine, according to a received data query request, a query keyword corresponding to the data query request; the cache management component 132 is configured to obtain target data matching the query keyword from the database 16 or the cache area 15 corresponding to the database 16.
Here, the cache area 15 may be established based on a remote dictionary service (Remote Dictionary Server, Redis), and when data is queried from the cache data, a data value matching the query keyword may be found in the cache data according to the mapping relationship between the query keyword corresponding to the data query request and a pre-configured key-value pair.
The query keyword may be determined based on a request parameter included in the data query request. When the query keyword is determined based on the request parameter, a preset prefix and the request parameter may be combined according to a preset keyword combination method to obtain the query keyword corresponding to the request parameter. For different requirements in practical applications, the preset prefixes used in the keyword combination method may differ, and the embodiments of the present disclosure do not limit how the query keyword is specifically generated from the request parameter, as long as it can be implemented in practical applications.
Specifically, when a data query is performed, the query may first be made against the cache area 15. If the corresponding data cannot be found in the cache area 15, the query is made against the database 16. In the case that the corresponding data is found in the database 16, the update information corresponding to that data is determined according to a preset update time interval, the found data and its update information are stored in the cache area 15, and the data is sent to the corresponding client 14 as the data query result.
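The lookup order described above can be illustrated with the following hedged Java sketch; the in-memory maps stand in for the Redis cache area and the database, and the prefix "cache:data:" as well as the method names are assumptions introduced for this example only.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheFirstLookup {
    private final Map<String, String> cache = new ConcurrentHashMap<>();    // stand-in for the cache area
    private final Map<String, String> database = new ConcurrentHashMap<>(); // stand-in for the database

    String buildQueryKey(String requestParam) {
        return "cache:data:" + requestParam; // preset prefix combined with the request parameter
    }

    String query(String requestParam) {
        String key = buildQueryKey(requestParam);
        String value = cache.get(key);
        if (value != null) {
            return value;                    // cache hit: return the cached data directly
        }
        value = database.get(key);           // cache miss: query the database
        if (value != null) {
            cache.put(key, value);           // back-fill the cache so later queries hit it
        }
        return value;
    }
}
```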
In a possible implementation, the service instance 13 includes an update time management component 133;
the update time management component 133 is configured to store update information corresponding to each data in the cache area 15; wherein the update information includes a next update time.
The data in the cache area 15 and the corresponding update information may be pre-configured; for example, when the data query system is started, a data processing method such as cache warm-up (preheating) may be adopted to store part of the data in the database 16 into the cache area 15 in advance. Alternatively, the update time management component 133 may dynamically add data according to the data used in real-time data queries; for example, the corresponding data and update information may be added to the cache area 15 according to real-time data query requests.
Specifically, the update time management component 133 may further search the next update time next_update_time of the data Value from the application memory based on the query key.
In this way, the embodiment of the present disclosure may trigger updating of the cache data stored in the cache area 15 based on the stored next update time and the data request, thereby ensuring the validity of the data in the cache area 15.
In one possible implementation, the service instance 13 includes a cache asynchronous update thread 134;
any service instance 13, when updating the target data in the cache area 15, is configured to:
Starting an asynchronous thread through the cache asynchronous update thread 134, acquiring update data corresponding to the target data from the database 16, and updating the target data based on the update data.
Here, in the case where it is detected that the target data satisfies the preset cache data update condition, an asynchronous thread may be acquired from a pre-configured thread pool by the cache asynchronous update thread 134 and started to acquire update data corresponding to the target data from the database 16 by way of asynchronous processing, and update the target data based on the update data.
Specifically, when the target data is updated based on the update data, the update data may be used to replace the target data.
In this way, the process of updating the target data and other tasks are performed asynchronously, so that the service instance 13 can realize asynchronous updating of the cache data in the process of normally performing data query, thereby improving the data processing efficiency of the service instance 13 in the actual use process.
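For illustration, the asynchronous refresh described above might look like the following Java sketch, where a task submitted to a pre-configured thread pool fetches fresh data from the database without blocking the query path; the pool size and the loadFromDatabase callback are assumptions, not details prescribed by the present disclosure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

public class AsyncCacheRefresher {
    private final ExecutorService pool = Executors.newFixedThreadPool(4); // pre-configured thread pool
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    void refreshAsync(String queryKey, Function<String, String> loadFromDatabase) {
        pool.submit(() -> {
            String fresh = loadFromDatabase.apply(queryKey); // acquire update data from the database
            if (fresh != null) {
                cache.put(queryKey, fresh);                  // replace the stale cached value
            }
        });
    }
}
```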
In a possible implementation manner, the update information includes a next update time, and the service instance 13 is configured, when determining the update information corresponding to the target data again, to:
and determining the next updating time corresponding to the target data based on the preset updating time interval and the current time.
The preset update time interval may be configured manually, or may be determined in real time by combining relevant parameters such as the amount of data currently cached in the cache area 15, the data processing capability of the cache area 15, and the amount of data currently stored in the database 16, so that it reflects the real-time and/or expected data processing pressure on at least one of the cache area 15, the service instance 13, and the database 16; the embodiments of the present disclosure do not limit how the interval is determined.
Specifically, the step of re-determining the update information corresponding to the target data may be performed by the update time management component 133 in the service instance 13. The update time management component 133 may determine the next update time next_update_time corresponding to the target data according to the current time and a preset update time interval update_interval. After the next update time corresponding to the target data is calculated, the query keyword corresponding to the target data and that next update time are stored locally (i.e., in the application process memory). When the query keyword corresponding to the target data and the next update time (hereinafter abbreviated as n) are stored, the query keyword and the next update time n may be combined to generate a k-n key-value pair (where k and n correspond uniquely, i.e., one query keyword corresponds to one next update time n), and the k-n key-value pair is stored in a storage map, so that the next update time n matching the query keyword corresponding to the target data can be found by querying the k-n key-value pair; the storage map may be, for example, a hash map (Hash Map).
In this way, by setting the update time interval, the next update time corresponding to the cache data can be calculated, so that the cache data can be updated according to the determined next update time, avoiding the various abnormal conditions caused by cache data not being updated in time. In addition, by arranging a plurality of service instances to continuously and iteratively update the cache data, the time for which valid cache data remains in the cache area can be prolonged, thereby improving the cache hit rate and the service performance.
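A minimal sketch of re-determining the update information, assuming a fixed update interval: the next update time n is the current time plus the preset interval, and the k-n pair is stored in a local map; the class and field names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UpdateTimeManager {
    private final Map<String, Long> keyToNextUpdateTime = new ConcurrentHashMap<>(); // the k-n pairs
    private final long updateIntervalMillis;                                         // preset update interval

    public UpdateTimeManager(long updateIntervalMillis) {
        this.updateIntervalMillis = updateIntervalMillis;
    }

    long redetermine(String queryKey) {
        long next = System.currentTimeMillis() + updateIntervalMillis; // current time + preset interval
        keyToNextUpdateTime.put(queryKey, next);                       // store the k-n pair locally
        return next;                                                   // later sent to the middleware for broadcast
    }
}
```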
In a possible implementation manner, the service instance 13 includes a communication component 136, where the communication component 136 is configured to send the redetermined update information to the communication middleware 12, so that the redetermined update information can be synchronized to other service instances 13, so as to avoid a waste of computing resources caused by repeated updating of the target data by other service instances in a short time.
Specifically, when the redetermined update information is sent to the communication middleware 12, the query keyword corresponding to the target data and the next update time may be sent to the cache data update channel cache_update_channel of the communication middleware 12 together, and the query keyword corresponding to the data and the next update time may be broadcast to other service instances subscribed to the cache data update channel cache_update_channel through the communication middleware 12, so that the redetermined update information may be synchronized to the other service instances 13.
In practical applications, any service instance 13 may not only update the cached data according to the next update time corresponding to the cached data when performing the data query, but also receive update information sent by other service instances 13 through the communication middleware 12.
In a possible implementation, the service instance 13 is further configured to: receiving first update information sent by other service instances 13 based on the communication middleware 12; determining second update information corresponding to the query keyword in the update information stored in the update time management component 133 based on the query keyword carried in the first update information; updating the second update information based on the first update information.
The second update information includes the next update time to be updated, and the first update information includes the next update time re-determined after the cache data was updated. Updating the second update information with the first update information means that the previously stored next update time is updated according to the next update time re-determined for the cache data, which avoids abnormal conditions such as the waste of resources caused by the same cache data being repeatedly updated within a short time.
In this way, through the above steps, the service instances can be deployed in a distributed manner while the communication middleware synchronizes the update information to the distributed service instances, so that the update times are kept synchronized and repeated operations such as data queries and cache data updates are avoided.
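The receiving side can be sketched as follows, again only as an illustration under assumptions: the first update information is assumed here to be encoded as "queryKey|nextUpdateTimeMillis", and overwriting the stored entry plays the role of updating the second update information.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UpdateInfoReceiver {
    private final Map<String, Long> keyToNextUpdateTime = new ConcurrentHashMap<>();

    // Called when first update information arrives from another service instance via the middleware.
    void onFirstUpdateInfo(String firstUpdateInfo) {
        String[] parts = firstUpdateInfo.split("\\|", 2);
        if (parts.length != 2) {
            return;
        }
        String queryKey = parts[0];
        long newNextUpdateTime = Long.parseLong(parts[1]);
        // Overwrite the second update information so this instance does not refresh the same
        // cache data again shortly after another instance has already done so.
        keyToNextUpdateTime.put(queryKey, newNextUpdateTime);
    }
}
```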
In addition, by automatically refreshing the cache data in the above cache update manners and notifying the other service instances while updating the cache data in time, the risk of cache breakdown can be avoided, thereby eliminating this hidden danger to system stability.
Referring to fig. 2, a schematic architecture diagram of a service instance 13 in a data query system according to an embodiment of the present disclosure is provided, where the service instance 13 includes an application process 131, a cache management component 132, an update time management component 133, a cache asynchronous update thread 134, a data management component 135, and a communication component 136, where:
the application process 131 is configured to determine, according to a received data query request, a query keyword corresponding to the data query request;
the cache management component 132 is configured to obtain target data matched with the query keyword from the database 16 or the cache area 15 corresponding to the database 16;
The update time management component 133 is configured to store update information corresponding to each data in the cache area 15; wherein the update information comprises next update time;
the cache asynchronous update thread 134 initiates an asynchronous thread for acquiring update data corresponding to the target data from the database 16 and updating the target data based on the update data;
the data management component 135 is configured to query the database 16 for data;
In the case that the corresponding data cannot be found from the cached data according to the data query request, the data management component 135 may be used to query the database 16 for the data corresponding to the data query request.
The communication component 136 is configured to send the redetermined update information to the communication middleware 12.
In particular, the detailed description of the above components may refer to the relevant content above, and will not be repeated herein.
In the following, the method for updating cache data provided by the embodiments of the present disclosure is described. For ease of understanding the present embodiment, the execution body of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the cache data update method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 3, a flowchart of a method for updating cache data according to an embodiment of the disclosure is shown, where the method includes S301 to S303, where:
s301: and receiving a data query request sent by the client through the service instance.
S302: and acquiring target data corresponding to the data query request.
S303: if the target data is acquired from the cache area, detecting whether the target data meets a preset cache data updating condition or not based on prestored updating information corresponding to the target data, if so, updating the target data in the cache area, and sending the updated target data to the client; and re-determining the update information corresponding to the target data, and sending the re-determined update information to the communication middleware so as to update the update information stored by other service instances except the service instance receiving the data query request through the communication middleware.
The pre-stored update information corresponding to the target data may include a next update time corresponding to the target data, where the next update time is used to determine whether the target data needs to be updated at the current moment; the update information corresponding to the target data may be stored in an update time management component of the service instance.
For example, FIG. 4 is a flowchart of the method for updating cache data provided by the embodiment of the present disclosure as applied in actual use. As shown in FIG. 4, the application process 131 determines, according to a received data query request, the query keyword corresponding to the data query request; the cache management component 132 obtains the target data matching the query keyword from the cache area 15 corresponding to the database 16; in the case that the corresponding data cannot be found from the cached data according to the data query request, the data management component 135 queries the database 16 for the data corresponding to the data query request; the update time management component 133 stores the update information corresponding to each piece of data in the cache area 15 and determines the next update time corresponding to the target data based on a preset update time interval and the current time; and the cache asynchronous update thread 134 starts an asynchronous thread, acquires the update data corresponding to the target data from the database 16, and updates the target data based on the update data. During this process, service instance A may send the re-determined update information to the communication middleware 12 through its communication component 136, so that the re-determined update information is sent to the other service instances 13 through the communication middleware 12, and the update time management components 133 of the other service instances 13 update the stored next update time corresponding to the target data based on the received next update time, thereby updating the update information of the target data. On the other hand, after the update of the target data is completed, the updated target data may be sent to the client 14.
In a possible implementation manner, when acquiring the target data corresponding to the data query request, a query keyword corresponding to the data query request may be determined according to the received data query request, and the target data matched with the query keyword may be acquired from a database or the cache region corresponding to the database.
In a possible implementation manner, the update information corresponding to each data in the cache area stored in the update time management component can be obtained; wherein the update information includes a next update time.
In a possible implementation manner, when updating the target data in the cache area, an asynchronous thread can be started through the cache asynchronous updating thread, and the updating data corresponding to the target data is obtained from a database; updating the target data based on the update data.
In a possible implementation manner, when the update information corresponding to the target data is redetermined, the next update time corresponding to the target data may be determined based on a preset update time interval and the current time.
In a possible implementation manner, the update information may be updated through the following steps A1 to A3:
A1: and receiving first update information sent by other service instances based on the communication middleware.
A2: and determining second update information corresponding to the query keyword in the update information stored in the update time management component based on the query keyword carried in the first update information.
A3: updating the second update information based on the first update information.
Specifically, the details of the foregoing embodiments are referred to the relevant descriptions in the foregoing system, and are not repeated herein.
Referring to fig. 5, a flowchart of a data query process (i.e., a first stage in the figure) and a data update process (i.e., a second stage in the figure) provided in an embodiment of the disclosure is shown, and includes the following steps:
Step 1, receiving a data query request.
Step 2, generating the key of the target data V based on the request parameters.
Step 3, searching for the target data V in the cache area based on the key.
Step 4, querying the next update time corresponding to V based on the key.
Step 5, judging whether the current time has reached the next update time.
Here, if the current time has reached the next update time, step 6 is executed; if the current time has not reached the next update time, step 7 is executed.
Step 6, asynchronously updating the cache area data.
Step 7, returning the data.
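For illustration only, the first stage above (steps 1 to 7) might be sketched in Java as follows; the key prefix and the asyncUpdate callback are assumptions, and the maps stand in for the cache area and the locally stored next update times.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueryFlow {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, Long> keyToNextUpdateTime = new ConcurrentHashMap<>();

    String handleQuery(String requestParam, Runnable asyncUpdate) {
        String key = "cache:data:" + requestParam;              // step 2: key from the request parameters
        String v = cache.get(key);                              // step 3: target data V from the cache area
        Long n = keyToNextUpdateTime.get(key);                  // step 4: next update time corresponding to V
        if (n != null && System.currentTimeMillis() >= n) {     // step 5: has the next update time been reached?
            asyncUpdate.run();                                  // step 6: asynchronously update the cache area data
        }
        return v;                                               // step 7: return the data
    }
}
```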
The step 6 of asynchronously updating the cache area data may comprise the following steps:
step 61, starting an asynchronous thread.
Step 62, querying the latest data Vn corresponding to the key.
Step 63, storing Vn in the cache area to replace the old data V.
Step 64, calculating the next update time n corresponding to Vn.
Step 65, storing the key corresponding to Vn and n locally as a record k-n.
Step 66, synchronizing the k-n record to the other service instances.
Step 67, ending the asynchronous thread.
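An end-to-end sketch of steps 61 to 67 above, for illustration under assumptions (the collaborator interfaces queryLatest and broadcast are placeholders for the database query and the middleware publication):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.BiConsumer;
import java.util.function.Function;

public class AsyncUpdateFlow {
    private final ExecutorService pool = Executors.newCachedThreadPool();   // step 61: asynchronous threads
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, Long> kToN = new ConcurrentHashMap<>();
    private final long updateIntervalMillis;

    public AsyncUpdateFlow(long updateIntervalMillis) {
        this.updateIntervalMillis = updateIntervalMillis;
    }

    void update(String key, Function<String, String> queryLatest, BiConsumer<String, Long> broadcast) {
        pool.submit(() -> {
            String vn = queryLatest.apply(key);                             // step 62: latest data Vn for the key
            if (vn == null) {
                return;
            }
            cache.put(key, vn);                                             // step 63: replace the old data V
            long n = System.currentTimeMillis() + updateIntervalMillis;     // step 64: next update time n
            kToN.put(key, n);                                               // step 65: store the k-n record locally
            broadcast.accept(key, n);                                       // step 66: synchronize k-n to other instances
        });                                                                 // step 67: the asynchronous thread ends
    }
}
```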
Specifically, the details of the above steps are referred to the related descriptions in the above system, and are not repeated herein.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same technical concept, the embodiments of the present disclosure also provide a computer device. Referring to FIG. 6, which is a schematic structural diagram of a computer device 600 provided by an embodiment of the present disclosure, the computer device 600 includes a processor 601, a memory 602, and a bus 603. The memory 602 is used for storing execution instructions and includes an internal memory 6021 and an external memory 6022. The internal memory 6021 is used for temporarily storing operational data in the processor 601 and data exchanged with the external memory 6022, such as a hard disk; the processor 601 exchanges data with the external memory 6022 through the internal memory 6021. When the computer device 600 is running, the processor 601 and the memory 602 communicate through the bus 603, so that the processor 601 executes the following instructions:
receiving, through a service instance, a data query request sent by a client;
acquiring target data corresponding to the data query request;
if the target data is acquired from the cache area, detecting whether the target data meets a preset cache data updating condition or not based on prestored updating information corresponding to the target data, if so, updating the target data in the cache area, and sending the updated target data to the client; and re-determining the update information corresponding to the target data, and sending the re-determined update information to the communication middleware so as to update the update information stored by other service instances except the service instance receiving the data query request through the communication middleware.
In a possible implementation manner, in an instruction of the processor 601, the acquiring target data corresponding to the data query request includes:
determining a query keyword corresponding to a received data query request according to the data query request;
and acquiring target data matched with the query keyword from the database or the cache area corresponding to the database.
In a possible implementation manner, in the instruction of the processor 601, the updating the target data in the cache area includes:
starting an asynchronous thread through the cache asynchronous update thread, and acquiring update data corresponding to the target data from a database;
updating the target data based on the update data.
In a possible implementation manner, in an instruction of the processor 601, the redefining update information corresponding to the target data includes:
and determining the next updating time corresponding to the target data based on the preset updating time interval and the current time.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the cache data updating method described in the above-described method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, where instructions included in the program code may be used to perform the steps of the cache data updating method described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (9)
1. A data query system comprising a service cluster and a communication middleware, the service cluster comprising a plurality of service instances, wherein:
the service instance is used for receiving a data query request sent by a client and acquiring target data corresponding to the data query request; if the target data is acquired from the cache area, based on the update information stored by the service instance and corresponding to the target data, detecting whether the target data meets a preset cache data update condition, if so, updating the target data in the cache area, and sending the updated target data to the client; and re-determining update information corresponding to the target data and transmitting the re-determined update information to the communication middleware, wherein the update information comprises next update time;
And the communication middleware is used for, after receiving update information sent by any service instance, sending the received update information to the service instances other than that service instance, so as to update the update information stored by those other service instances.
2. The system of claim 1, wherein the service instance includes an application process and a cache management component;
the application process is used for determining a query keyword corresponding to the data query request according to the received data query request;
the cache management component is configured to acquire the target data matching the query keyword from a database or from the cache area corresponding to the database.
3. The system according to claim 1 or 2, wherein the service instance comprises an update time management component;
the update time management component is configured to store update information corresponding to each piece of data in the cache area, wherein the update information includes a next update time.
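A minimal sketch, assuming plain dictionaries, of how the three components named in claims 2 and 3 could divide the work; the class and function names are hypothetical, not the patent's.

```python
class UpdateTimeManager:
    """Stores, per query keyword, the update information (here only the
    next update time) for the corresponding entry in the cache area."""

    def __init__(self):
        self._info = {}

    def get(self, keyword):
        return self._info.get(keyword)

    def set(self, keyword, next_update_time):
        self._info[keyword] = {"next_update_time": next_update_time}


class CacheManager:
    """Returns data matching a query keyword from the cache area when present,
    otherwise from the backing database, filling the cache on the way out."""

    def __init__(self, database):
        self.database = database
        self.cache = {}

    def get(self, keyword):
        if keyword in self.cache:
            return self.cache[keyword], True    # True: served from the cache area
        value = self.database[keyword]
        self.cache[keyword] = value
        return value, False                     # False: served from the database


def extract_keyword(request):
    """Application-process step: derive the query keyword from the request."""
    return request["key"]
```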
4. The system of claim 3, wherein the service instance includes a cache asynchronous update thread;
the service instance, when updating the target data in the cache area, is configured to:
start an asynchronous thread through the cache asynchronous update thread, and acquire update data corresponding to the target data from a database;
update the target data based on the update data.
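One way (a sketch under assumed names, not the patented mechanism) to realise the asynchronous refresh of claim 4 is a standard background thread, so the request that triggered the update can return the currently cached value without waiting on the database.

```python
import threading


def async_refresh(cache, database, keyword):
    """Fetch the update data for one keyword in a background thread and
    overwrite the cached entry once the fetch completes."""

    def worker():
        cache[keyword] = database[keyword]   # acquire update data, then update the cache

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread                            # caller may join() in tests, or ignore it
```

The request path stays fast because it returns the existing cached value immediately; the refreshed value becomes visible to subsequent queries.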
5. The system of claim 1, wherein the service instance, when re-determining the update information corresponding to the target data, is configured to:
determine the next update time corresponding to the target data based on a preset update time interval and the current time.
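The re-determination in claim 5 reduces to a one-line calculation; the interval value in this sketch is an arbitrary assumption.

```python
import time

UPDATE_INTERVAL_SECONDS = 60      # assumed preset update time interval


def next_update_time(now=None):
    """Next update time = current time + preset update time interval."""
    current = time.time() if now is None else now
    return current + UPDATE_INTERVAL_SECONDS
```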
6. The system of claim 1, wherein the service instance is further configured to:
receive first update information sent by other service instances through the communication middleware;
determine, based on a query keyword carried in the first update information, second update information corresponding to that query keyword among the update information stored by an update time management component; and
update the second update information based on the first update information.
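A sketch of the receiving side described in claim 6: a handler the service instance could register with the communication middleware looks up its locally stored (second) update information by the query keyword carried in the incoming (first) update information and overwrites it. The field names are assumptions.

```python
def on_update_info(stored_update_info, first_update_info):
    """stored_update_info maps query keywords to the locally held (second)
    update information; the incoming (first) update information replaces the
    matching entry so every instance agrees on the next update time."""
    keyword = first_update_info["keyword"]
    stored_update_info[keyword] = {
        "next_update_time": first_update_info["next_update_time"]
    }
```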
7. A method for updating cache data, comprising:
receiving, by a service instance, a data query request sent by a client;
acquiring target data corresponding to the data query request;
if the target data is acquired from a cache area, detecting, based on pre-stored update information corresponding to the target data, whether the target data meets a preset cache data update condition and, if so, updating the target data in the cache area and sending the updated target data to the client; and re-determining the update information corresponding to the target data and sending the re-determined update information to a communication middleware, so that the update information stored by service instances other than the service instance that received the data query request is updated through the communication middleware, wherein the update information comprises a next update time.
8. A computer device, comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate over the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the method for updating cache data according to claim 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the method for updating cache data according to claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310639558.3A CN116361309B (en) | 2023-05-31 | 2023-05-31 | Data query system and method for updating cache data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116361309A CN116361309A (en) | 2023-06-30 |
CN116361309B true CN116361309B (en) | 2023-09-05 |
Family
ID=86910947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310639558.3A Active CN116361309B (en) | 2023-05-31 | 2023-05-31 | Data query system and method for updating cache data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116361309B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102281332A (en) * | 2011-08-31 | 2011-12-14 | 上海西本网络科技有限公司 | Distributed cache array and data updating method thereof |
CN110647318A (en) * | 2019-09-29 | 2020-01-03 | 星环信息科技(上海)有限公司 | Method, device, equipment and medium for creating instance of stateful application |
CN110825772A (en) * | 2019-10-28 | 2020-02-21 | 爱钱进(北京)信息科技有限公司 | Method and device for synchronizing memory data of multiple service instances and storage medium |
CN111767314A (en) * | 2020-06-29 | 2020-10-13 | 中国平安财产保险股份有限公司 | Data caching and querying method and device, lazy caching system and storage medium |
CN114036195A (en) * | 2021-11-11 | 2022-02-11 | 深圳乐信软件技术有限公司 | Data request processing method, device, server and storage medium |
CN114675987A (en) * | 2022-04-18 | 2022-06-28 | 北京高途云集教育科技有限公司 | Cache data processing method and device, computer equipment and storage medium |
CN116192956A (en) * | 2023-01-04 | 2023-05-30 | 北京皮尔布莱尼软件有限公司 | Cache data updating method, system, computing device and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9569193B2 (en) * | 2012-09-07 | 2017-02-14 | Oracle International Corporation | System and method for patching java cloud services for use with a cloud computing environment |
2023-05-31: CN application CN202310639558.3A granted as patent CN116361309B (en), legal status: Active
Also Published As
Publication number | Publication date |
---|---|
CN116361309A (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107943594B (en) | Data acquisition method and device | |
CN110489417B (en) | Data processing method and related equipment | |
KR100791628B1 (en) | Method for active controlling cache in mobile network system, Recording medium and System thereof | |
US10489476B2 (en) | Methods and devices for preloading webpages | |
US20050235019A1 (en) | Method and system for transmitting data for data synchronization between server and client when data synchronization session was abnormally terminated | |
US20110219093A1 (en) | Synchronizing services across network nodes | |
EP2579167A1 (en) | Method for active information push and server therefor | |
US9590947B2 (en) | IP management method, client and server | |
US20210158310A1 (en) | Blockchain-based transaction processing methods and apparatuses and electronic devices | |
CN106657433B (en) | Naming method and device for physical network card in multi-network snap ring environment | |
CN111209349A (en) | Method and device for updating session time | |
CN112069169A (en) | Block data storage method and device, electronic equipment and readable storage medium | |
CN114064668A (en) | Method, electronic device and computer program product for storage management | |
CN115421764A (en) | Method, device, equipment and storage medium for identifying module to be upgraded | |
US20240073291A1 (en) | Identifying outdated cloud computing services | |
US20150278364A1 (en) | Method and system for second-degree friend query | |
CN116361309B (en) | Data query system and method for updating cache data | |
WO2018050055A1 (en) | Data request processing method and system, access device, and storage device therefor | |
CN117473011A (en) | Data synchronization method, device and hybrid cache system | |
CN101616002B (en) | User identity authentication method and device thereof | |
US8281000B1 (en) | Variable-length nonce generation | |
CN109254853B (en) | Data sharing method, data sharing system and computer readable storage medium | |
CN113420241A (en) | Page access method and device, electronic equipment and storage medium | |
KR101298852B1 (en) | Method of restoring file and system for the same | |
CN113379542B (en) | Block chain transaction query method, device, medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |