CN117390068A - Data query method, device, equipment and storage medium - Google Patents

Data query method, device, equipment and storage medium

Info

Publication number
CN117390068A
Authority
CN
China
Prior art keywords
user
query
preset
request
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311288987.7A
Other languages
Chinese (zh)
Inventor
胡峥
李江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Bank Co Ltd
Original Assignee
China Merchants Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Bank Co Ltd filed Critical China Merchants Bank Co Ltd
Priority to CN202311288987.7A priority Critical patent/CN117390068A/en
Publication of CN117390068A publication Critical patent/CN117390068A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data query method, apparatus, device and storage medium, relating to the field of computer technology. The method comprises: in response to a request sent by a user, sending a query to the server side for the preset rate-limiting parameters corresponding to that user; receiving the rate-limiting parameters returned by the server side; performing a rate-limit check on the user's request based on those parameters; and, if the traffic of the user's requests has not reached the preset traffic threshold, performing a multi-level cache query on the request content according to a preset multi-level cache query strategy and returning the query result. The invention can provide a highly available data query service.

Description

Data query method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data query method, apparatus, device, and storage medium.
Background
Query services for hotspot data that changes infrequently are widely used in a variety of scenarios. For example, a user management system must frequently query data such as account information for operations such as login, registration and viewing personal data. In such scenarios the hotspot data changes rarely, but query requests are very frequent. To handle a large number of concurrent peripheral requests, reduce the burden on the database and ensure continuity of service, corresponding measures must be taken.
In the existing scheme, cache middleware is added at the server side and the server loads hotspot data into it, which reduces frequent access to the database. However, when the middleware fails and becomes unavailable, peripheral requests penetrate directly to the database layer for retrieval, and the database bears greater pressure and load without the buffering and optimization of the middleware. Frequent direct database accesses may crash the database, affecting all peripheral users, so high availability cannot be achieved for individual peripheral systems.
Disclosure of Invention
The main object of the present invention is to provide a data query method, apparatus, device and storage medium, aiming to solve the problem that existing hotspot-data query schemes lack high availability.
To achieve the above object, the present invention provides a data query method applied to a client, the method comprising:
in response to a request sent by a user, sending a query to the server side for the preset rate-limiting parameters corresponding to the user;
receiving the rate-limiting parameters returned by the server side;
performing a rate-limit check on the request sent by the user based on the rate-limiting parameters;
and, if the traffic of the requests sent by the user has not reached the preset traffic threshold, performing a multi-level cache query on the request content based on a preset multi-level cache query strategy, and returning a query result.
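The four steps above can be sketched as a small client-side handler. All function and parameter names below are illustrative assumptions, not identifiers from the patent:

```python
def handle_request(user, key, fetch_limit_params, rate_limit_check, multilevel_query):
    """Client-side flow sketch: fetch the user's preset rate-limiting
    parameters from the server, check the request against them, and only
    then run the multi-level cache query. All names are illustrative."""
    params = fetch_limit_params(user)        # query the server for this user's limits
    if not rate_limit_check(user, params):   # traffic reached the preset threshold
        return None                          # reject the request
    return multilevel_query(key)             # multi-level cache query, result returned
```

The callables are injected so the sketch stays independent of any concrete transport (SDK call, RPC, gateway).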
Optionally, the step of performing a multi-level cache query on the request content based on the preset multi-level cache query strategy and returning a query result comprises:
querying the client local cache for the request content;
if the client local cache query time does not exceed the preset client local-cache query-time threshold and the target data is found, returning the target data to the user;
if the client local cache query time exceeds that threshold, or the target data is not found, forwarding the user's request to the server side;
receiving the target data returned by the server side;
and updating the client local cache based on the target data, and returning the target data to the user.
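A minimal sketch of this client-side lookup, assuming a dict-like local cache and a `query_server` callable standing in for the forwarded request; the names and the default time threshold are assumptions, not values from the patent:

```python
import time

def query_with_local_cache(key, local_cache, query_server, timeout_s=0.05):
    """Try the client local cache within a time budget; on timeout or miss,
    forward to the server and refresh the local cache with the result."""
    start = time.monotonic()
    value = local_cache.get(key)             # query the client local cache
    elapsed = time.monotonic() - start
    if value is not None and elapsed <= timeout_s:
        return value                         # hit within the query-time threshold
    value = query_server(key)                # forward the user's request to the server side
    local_cache[key] = value                 # update the client local cache
    return value
```

With a plain dict the cache query is effectively instantaneous; the time check matters when the "local cache" is backed by something slower, such as an embedded store.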
Optionally, the step of sending, in response to the request sent by the user, a query to the server side for the preset rate-limiting parameters corresponding to the user comprises:
generating a preset number of tokens for the user;
in response to the request sent by the user, judging the admission condition of the request based on the tokens;
and, if the tokens satisfy the preset admission condition, sending the query for the preset rate-limiting parameters corresponding to the user to the server side.
Optionally, before the step of generating a preset number of tokens for the user, the method comprises:
in response to a registration instruction of the user, setting the rate-limiting parameters corresponding to the user and sending them to the server side.
Optionally, an embodiment of the present invention provides a data query method applied to a server side, the method comprising:
receiving a request sent by a client for querying the preset rate-limiting parameters corresponding to the user;
and obtaining the rate-limiting parameters based on that request and sending them to the client, so that the client performs a rate-limit check on the corresponding user's request based on the rate-limiting parameters and, if the traffic of that user's requests has not reached the preset traffic threshold, performs a multi-level cache query on the request content based on a preset multi-level cache query strategy and returns a query result.
Optionally, the step of obtaining the rate-limiting parameters based on the request for querying the preset rate-limiting parameters corresponding to the user and sending them to the client is followed by:
receiving the user's request forwarded by the client;
querying the server local cache for the request content;
if the server local cache query time does not exceed the preset server local-cache query-time threshold and the target data is found, returning the target data to the client;
if the server local cache query time exceeds that threshold, or the target data is not found, querying the middleware cache for the request content;
if the middleware cache query time does not exceed the preset middleware-cache query-time threshold and the target data is found, returning the target data to the client and updating the server local cache based on the target data;
if the middleware cache query time exceeds that threshold, or the target data is not found, querying the database for the request content and triggering an alarm function;
obtaining the target data queried from the database;
and returning the target data to the client, and updating the server local cache and the middleware cache based on the target data.
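The server-side cascade can be sketched as a three-level lookup with backfill; dict-like caches, the `db_query` callable and the `alarm` callback are all illustrative assumptions:

```python
def server_lookup(key, local_cache, middleware_cache, db_query, alarm):
    """Server-side lookup sketch: server local cache, then middleware
    cache (e.g. Redis), then the database with an alarm on fallthrough.
    Faster tiers are backfilled on every lower-tier hit."""
    value = local_cache.get(key)
    if value is not None:
        return value                         # server local cache hit
    value = middleware_cache.get(key)
    if value is not None:
        local_cache[key] = value             # backfill the server local cache
        return value
    alarm(key)                               # both cache tiers failed: trigger the alarm
    value = db_query(key)                    # last resort: query the database
    middleware_cache[key] = value            # update the middleware cache
    local_cache[key] = value                 # update the server local cache
    return value
```

For brevity the sketch treats a miss and a query timeout the same way; the patent's scheme distinguishes them with per-tier query-time thresholds.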
Optionally, the step of receiving the request sent by the client for querying the preset rate-limiting parameters corresponding to the user is followed by:
synchronously refreshing the middleware cache at a preset refresh frequency, taking the database as the reference copy.
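A periodic refresh with the database as the source of truth might look like the following sketch; the interval, the dict-based cache and the loader callable are assumptions:

```python
import threading

def start_cache_refresher(middleware_cache, load_from_db, interval_s=60.0):
    """Periodically overwrite the middleware cache with the database
    contents (the database is the reference copy). Illustrative only."""
    def refresh():
        snapshot = load_from_db()            # read the authoritative data
        middleware_cache.clear()
        middleware_cache.update(snapshot)    # replace the cached copy
        timer = threading.Timer(interval_s, refresh)
        timer.daemon = True                  # do not keep the process alive
        timer.start()
    refresh()                                # first refresh runs immediately
```

In production this full overwrite would typically be replaced by an incremental or keyed update to avoid briefly emptying the cache.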
An embodiment of the present invention further provides a data query apparatus, comprising:
a sending module, configured to send, in response to a request sent by a user, a query to the server side for the preset rate-limiting parameters corresponding to the user;
a receiving module, configured to receive the rate-limiting parameters returned by the server side;
a checking module, configured to perform a rate-limit check on the request sent by the user based on the rate-limiting parameters;
and a query module, configured to perform, if the traffic of the requests sent by the user has not reached the preset traffic threshold, a multi-level cache query on the request content based on a preset multi-level cache query strategy, and to return a query result.
An embodiment of the present invention further provides a terminal device, comprising a memory, a processor, and a data query program stored in the memory and executable on the processor, wherein the data query program, when executed by the processor, implements the data query method described above.
An embodiment of the present invention further provides a computer-readable storage medium storing a data query program which, when executed by a processor, implements the data query method described above.
The data query method, apparatus, device and storage medium provided by the embodiments of the present invention respond to a request sent by a user by sending a query to the server side for the preset rate-limiting parameters corresponding to the user; receive the rate-limiting parameters returned by the server side; perform a rate-limit check on the user's request based on those parameters; and, if the traffic of the user's requests has not reached the preset traffic threshold, perform a multi-level cache query on the request content based on a preset multi-level cache query strategy and return the query result. Because requests are checked against the rate-limiting parameters and only requests below the preset traffic threshold are allowed through, the load on the server and the pressure on the database are reduced, the probability of server and database failure decreases, and the query service offers high availability and continuity. In addition, the embodiments query the request content through multiple cache levels according to a preset multi-level cache query strategy: when one storage tier fails, the data can still be queried from another tier, so service continuity is not affected and a highly available query service is provided.
Drawings
FIG. 1 is a schematic diagram of functional modules of a terminal device to which a data query device of the present invention belongs;
FIG. 2 is a flow chart of an exemplary embodiment of a data query method according to the present invention;
FIG. 3 is a schematic diagram of the overall architecture of the data query method of the present invention;
FIG. 4 is a schematic diagram of a functional module of the data query method of the present invention;
FIG. 5 is a schematic diagram of a client data query flow in an embodiment of the present invention;
FIG. 6 is a flow chart of another exemplary embodiment of a data query method of the present invention;
FIG. 7 is a flow chart of another exemplary embodiment of a data query method of the present invention;
FIG. 8 is a flow chart of another exemplary embodiment of a data query method of the present invention;
FIG. 9 is a flowchart of another exemplary embodiment of a data query method of the present invention;
FIG. 10 is a flowchart of another exemplary embodiment of a data query method of the present invention;
FIG. 11 is a schematic diagram of a server-side data query flow in an embodiment of the present invention;
FIG. 12 is a flowchart of another exemplary embodiment of a data query method of the present invention;
fig. 13 is a schematic diagram of an interaction flow between a client and a server in the data query method of the present invention.
The objects, functional features and advantages of the present invention are further described below with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solutions of the embodiments of the present invention are: in response to a request sent by a user, sending a query to the server side for the preset rate-limiting parameters corresponding to the user; receiving the rate-limiting parameters returned by the server side; performing a rate-limit check on the user's request based on those parameters; and, if the traffic of the user's requests has not reached the preset traffic threshold, performing a multi-level cache query on the request content based on a preset multi-level cache query strategy and returning the query result. Because requests are checked against the rate-limiting parameters and only requests below the preset traffic threshold are allowed through, the load on the server and the pressure on the database are reduced, the probability of server and database failure decreases, and the query service offers high availability and continuity. In addition, the embodiments query the request content through multiple cache levels according to a preset multi-level cache query strategy: when one storage tier fails, the data can still be queried from another tier, so service continuity is not affected and a highly available query service is provided.
Technical terms related to the embodiment of the invention:
redis: redis is an open-source high-performance key-value storage system, and is also widely used as a middleware. It stores data in the form of an in-memory database and provides persistence and disk storage functions.
SDK: the SDK (Software Development Kit ) is intended to assist developers in creating, testing, and deploying software for a particular platform or application. SDKs are typically composed of a series of libraries, APIs, example code, documents, and tools to simplify the development process and provide the necessary resources and guidance. By using the SDK, a developer can more efficiently develop software, accelerate the development period, and construct a stable and reliable application program with rich functions by utilizing tools and resources provided by the SDK.
RPC: RPC (Remote Procedure Call ) is a computer communication protocol for making remote calls (i.e., calling functions or methods on other computers) between different computers on a network, making it behave like a local call. In a conventional local procedure call, the function or method caller and callee are in the same process and can exchange data directly through memory. In a distributed system, the memory cannot be directly shared between different computers, so that RPC is required for remote communication. By using RPC, function or method call can be conveniently carried out between different computers in the distributed system, cooperation and communication process between the systems are simplified, and expandability and flexibility of the system are improved. Common RPC frameworks are gRPC, apache thread, dubbo, etc.
Token bucket: token buckets are a common algorithm and data structure for controlling access rates or limiting request frequencies. It may be used to limit the amount of requests for a certain system or service to protect the system from overload or abuse. By using token buckets, the rate of requests can be effectively controlled to prevent the system from being overwhelmed by too many requests. The token bucket algorithm also has certain flexibility, and can be adjusted according to actual requirements, for example, different token generation rates and bucket sizes can be set according to different interfaces or user types.
Leaky bucket algorithm: the leaky bucket is a flow-control algorithm in which arriving requests are queued in a bucket of fixed capacity and drained, that is, processed, at a constant rate; requests that arrive when the bucket is full are rejected. The leaky bucket therefore enforces a fixed request-processing rate and smooths out bursts, whereas the token bucket allows short bursts up to the bucket size.
Apache Ignite: apache Ignit is an open source memory computing platform, providing distributed caching functionality. It may store data in memory to speed up access and response times.
MongoDB: mongoDB is a document-oriented NoSQL database with built-in caching function. It can improve the performance of data access by configuration and indexing.
Query services for hotspot data that changes infrequently are widely used in a variety of scenarios. For example, a user management system must frequently query data such as account information for operations such as login, registration and viewing personal data. In such scenarios the hotspot data changes rarely, but query requests are very frequent. To handle a large number of concurrent peripheral requests, reduce the burden on the database and ensure continuity of service, corresponding measures must be taken.
The main existing solutions are read-write separation and adding a first-level cache.
In the read-write separation scheme, peripheral requests penetrate directly to the read replicas of the database. Throughput is then limited by database performance and is clearly insufficient; moreover, when the database is unavailable, the whole query service becomes unavailable, affecting the processing of peripheral business and causing losses.
In the first-level-cache scheme, the server loads hotspot data into cache middleware. When the middleware fails and becomes unavailable, peripheral requests penetrate directly to the database layer, putting a shock load on the server's database and potentially affecting all peripheral users, so high availability for each peripheral cannot be achieved.
Both solutions depend too heavily on the server side to deliver both performance and reliability: read-write separation is bounded by database performance, and the first-level cache relies entirely on the server side, so practical high availability cannot be realized.
The embodiments of the present invention observe the following: in the existing scheme, cache middleware is added at the server side and the server loads hotspot data into it, which reduces frequent access to the database. However, when the middleware fails and becomes unavailable, peripheral requests penetrate directly to the database layer for retrieval, and the database bears greater pressure and load without the buffering and optimization of the middleware. Frequent direct database accesses may crash the database, affecting all peripheral users, so high availability cannot be achieved for individual peripheral systems.
Therefore, the embodiments of the present invention provide a solution in which requests sent by users are checked against rate-limiting parameters, reducing the load on the server and the pressure on the database, lowering the probability of server and database failure, and giving the query service high availability and continuity. In addition, the request content is queried through multiple cache levels according to a preset multi-level cache query strategy: when one storage tier fails, the data can still be queried from another tier, so service continuity is unaffected and a highly available query service is provided.
Specifically, referring to FIG. 1, FIG. 1 is a schematic diagram of the functional modules of a terminal device to which the data query apparatus of the present invention belongs. The data query apparatus may be a device-independent apparatus capable of data processing and may be carried on the device in the form of hardware or software. The device may be an intelligent mobile terminal with data-processing capability, such as a mobile phone or tablet computer, or a fixed device or server with data-processing capability.
In this embodiment, the device to which the data query apparatus belongs comprises at least an output module 110, a processor 120, a memory 130 and a communication module 140.
The memory 130 stores an operating system and a data query program; the output module 110 may be a display screen or the like. The communication module 140 may include a WIFI module, a bluetooth module, and the like, and communicate with an external device or a server through the communication module 140.
Wherein the data query program in the memory 130, when executed by the processor, implements the following steps:
in response to a request sent by a user, sending a query to the server side for the preset rate-limiting parameters corresponding to the user;
receiving the rate-limiting parameters returned by the server side;
performing a rate-limit check on the request sent by the user based on the rate-limiting parameters;
and, if the traffic of the requests sent by the user has not reached the preset traffic threshold, performing a multi-level cache query on the request content based on a preset multi-level cache query strategy, and returning a query result.
Further, the data query program in the memory 130, when executed by the processor, also implements the following steps:
querying the client local cache for the request content;
if the client local cache query time does not exceed the preset client local-cache query-time threshold and the target data is found, returning the target data to the user;
if the client local cache query time exceeds that threshold, or the target data is not found, forwarding the user's request to the server side;
receiving the target data returned by the server side;
and updating the client local cache based on the target data, and returning the target data to the user.
Further, the data query program in the memory 130, when executed by the processor, also implements the following steps:
generating a preset number of tokens for the user;
in response to the request sent by the user, judging the admission condition of the request based on the tokens;
and, if the tokens satisfy the preset admission condition, sending the query for the preset rate-limiting parameters corresponding to the user to the server side.
Further, the data query program in the memory 130, when executed by the processor, also implements the following step:
in response to a registration instruction of the user, setting the rate-limiting parameters corresponding to the user and sending them to the server side.
Further, the data query program in the memory 130, when executed by the processor, also implements the following steps:
receiving a request sent by a client for querying the preset rate-limiting parameters corresponding to the user;
and obtaining the rate-limiting parameters based on that request and sending them to the client, so that the client performs a rate-limit check on the corresponding user's request based on the rate-limiting parameters and, if the traffic of that user's requests has not reached the preset traffic threshold, performs a multi-level cache query on the request content based on a preset multi-level cache query strategy and returns a query result.
Further, the data query program in the memory 130, when executed by the processor, also implements the following steps:
receiving the user's request forwarded by the client;
querying the server local cache for the request content;
if the server local cache query time does not exceed the preset server local-cache query-time threshold and the target data is found, returning the target data to the client;
if the server local cache query time exceeds that threshold, or the target data is not found, querying the middleware cache for the request content;
if the middleware cache query time does not exceed the preset middleware-cache query-time threshold and the target data is found, returning the target data to the client and updating the server local cache based on the target data;
if the middleware cache query time exceeds that threshold, or the target data is not found, querying the database for the request content and triggering an alarm function;
obtaining the target data queried from the database;
and returning the target data to the client, and updating the server local cache and the middleware cache based on the target data.
Further, the data query program in the memory 130, when executed by the processor, also implements the following step:
synchronously refreshing the middleware cache at a preset refresh frequency, taking the database as the reference copy.
According to the above scheme, in response to a request sent by a user, a query for the preset rate-limiting parameters corresponding to the user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and, if the traffic of the user's requests has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query strategy and the query result is returned. Because requests are checked against the rate-limiting parameters and only requests below the preset traffic threshold are allowed through, the load on the server and the pressure on the database are reduced, the probability of server and database failure decreases, and the query service offers high availability and continuity. In addition, the request content is queried through multiple cache levels according to a preset multi-level cache query strategy: when one storage tier fails, the data can still be queried from another tier, so service continuity is not affected and a highly available query service is provided.
Based on the above device architecture, but not limited to it, the method embodiments of the present invention are presented below.
The main execution body of the method of this embodiment may be a data query apparatus, which may be a device-independent apparatus capable of data processing and may be carried on the device in the form of hardware or software.
Referring to fig. 2, fig. 2 is a flowchart of an exemplary embodiment of a data query method according to the present invention. The method is applied to the client, and the data query method comprises the following steps:
step S50, responding to a request sent by a user, and sending a request for inquiring preset current limiting parameters corresponding to the user to a server side.
Referring to fig. 3, fig. 3 is a schematic diagram of the overall architecture of the data query method of the present invention.
The overall architecture adopts a client-server mode; the client can request services from the server side through an SDK, micro-service calls, or routing-gateway calls. The server side is deployed once, and service users with multiple access modes access it directly, so there is no need to deploy or modify anything separately for each user.
On the client side, the service user is an application of the peripheral service consumer.
On the server side, application A represents the application that provides the external query service; Redis is the middleware that caches hotspot data on the server side; application B performs data-update operations and synchronizes the Redis data; and the database stores the hotspot data.
Referring to fig. 4, fig. 4 is a schematic diagram of a functional module of the data query method of the present invention.
The client comprises an admission check module, a local cache module and a current limiting check module.
The admission check module generates a specified number of tokens for the user in a client token bucket when the client application starts, and then, in response to a request sent by the user, judges the admission condition of that request based on the tokens.
The local cache module is the client's local cache; performing data queries against it reduces frequent requests to the server side, improves data access speed, and relieves server-side pressure, which significantly improves the performance of the query service and the user experience.
The rate-limit check module performs a rate-limit check on a request sent by a user based on preset rate-limiting parameters; if the traffic of the request has not reached the preset traffic threshold, it performs a multi-level cache query on the request content based on a preset multi-level query policy.
The server side comprises a processing layer, a guarantee layer and a database. The processing layer comprises an access control module, a flow control module, a local cache module and a rediscache module.
The local cache module and the redis cache module provide a two-level caching scheme (server local cache plus redis cache); under normal conditions service is provided directly from cache, with an average response time of about 1 ms, giving excellent responsiveness.
Admission control module: performs identity verification, permission control, request filtering, attack defense and the like on requests sent by the client; its specific functions are defined according to actual requirements.
Flow control module: allocates a specified traffic quota to each user, and directly rejects abnormal traffic when it reaches the server, preventing an abnormal traffic surge from one user from affecting other users. For planned traffic surges such as flash sales, the traffic quota can be raised for the user in advance so that normal service scenarios are not affected.
In addition, the guarantee layer comprises a cache refresh module, a cache degradation module, a cache monitoring module and an abnormality alarm module.
Cache refresh module: actively refreshes cached data during database maintenance operations to ensure the timeliness of externally provided service data.
Cache degradation module: when one of the server local cache and the redis cache is abnormal, the other takes over the service; when both are abnormal, the database takes over, so that the continuity of the query service is unaffected and its high availability is guaranteed.
Cache monitoring module: monitors the state of the server local cache and the redis cache in real time; when target data cannot be obtained from either, it notifies the abnormality alarm module to raise an alarm.
Abnormality alarm module: when notified by the cache monitoring module that target data could not be obtained from the server local cache or the redis cache, it sends an abnormality alarm and calls for manual intervention so that the redis cache data can be restored as soon as possible.
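The cache-degradation behavior described above can be sketched as a simple fallback chain. The sketch below is illustrative only: the patent does not specify an implementation, and the interface of one callable per store (`local_cache`, `redis_cache`, `database` as hypothetical `key -> value` functions that raise on failure) is an assumption.

```python
def query_with_degradation(key, local_cache, redis_cache, database):
    """Cache-degradation sketch: if the server local cache or the redis
    cache raises or misses, the next store takes over, so one failing
    store does not interrupt the query service."""
    for store in (local_cache, redis_cache):
        try:
            value = store(key)
            if value is not None:
                return value
        except Exception:
            continue          # degrade to the next level
    return database(key)      # final authority: the database

# Simulate a redis outage: local cache misses, redis raises, the DB answers.
local = {}.get
def broken_redis(key):
    raise ConnectionError("redis unavailable")
db = {"k": 7}.__getitem__
assert query_with_degradation("k", local, broken_redis, db) == 7
```

In a production system each store would additionally be paired with the monitoring and alarm hooks described above; the sketch keeps only the fallback order.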
Step S50: in response to a request sent by a user, send a request to the server side to query the preset rate-limiting parameters corresponding to the user.
Referring to fig. 5, fig. 5 is a schematic diagram of a client data query flow in an embodiment of the present invention.
The rate-limiting parameters may include a limited request count, a limited concurrent-connection count, a limited request rate, a time-window limit, an API quota limit, a priority limit, and the like, where each parameter has a corresponding traffic threshold.
The limited request count is the number of requests each user may send over a period of time, typically expressed as a fixed count or rate, such as a maximum number of requests per second or per minute.
The limited concurrent-connection count limits the number of concurrent connections each user may establish simultaneously, which prevents excessive concurrent connections from occupying server resources and ensures the stability and reliability of the system.
The limited request rate is the rate at which each user may send requests, for example a maximum number of requests allowed per second.
The time-window limit caps the number of requests each user or client may send within a given time window, for example a maximum number of requests within one hour.
The API quota limit caps each user's number of calls or request rate against a particular API interface, protecting API resources from abuse or excessive access.
The priority limit assigns priorities to the requests of different users, ensuring that high-priority requests are responded to more quickly.
The specific content and values of the rate-limiting parameters can be set according to the concrete service requirements and system performance.
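As a minimal illustration of such a parameter set, the record below gathers the six limit types into one per-user structure. The field names and default values are assumptions made for illustration; the patent leaves the concrete schema to the service requirements and system performance.

```python
# A minimal sketch of a per-user rate-limit configuration record.
# All field names and defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RateLimitConfig:
    user_id: str
    max_requests_per_minute: int      # limited request count per window
    max_concurrent_connections: int   # limited concurrent-connection count
    max_requests_per_second: float    # limited request rate
    window_seconds: int = 60          # time-window length for counting requests
    api_quota: int = 10_000           # per-API call quota
    priority: int = 0                 # higher value = served first

# Example: the thresholds registered for one user.
cfg = RateLimitConfig("user-42", 600, 20, 10.0)
assert cfg.max_requests_per_second == 10.0
```

Such a record would be what the server side stores per user and returns in step S60.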
In one embodiment, before the request to query the preset rate-limiting parameters corresponding to the user is sent to the server in response to the user's request, the rate-limiting parameters corresponding to the user may be set in response to the user's registration instruction and sent to the server.
The rate-limiting parameters corresponding to the user are stored on the server side.
Step S60: receive the rate-limiting parameters returned by the server side.
After receiving the client's request to query the preset rate-limiting parameters corresponding to the user, the server side obtains the corresponding parameters and sends them to the client, and the client receives them.
Step S70: perform a rate-limit check on the request sent by the user based on the rate-limiting parameters.
Specifically, as one implementation, the traffic parameters of the request sent by the user may be obtained and compared with the rate-limiting parameters. If a traffic parameter of the request exceeds the corresponding traffic threshold, the user is told that the traffic has reached its upper limit and access is refused, which cuts off the source of abnormal traffic, prevents one user's abnormal traffic surge from affecting other users, and safeguards the query service.
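The check described above can be sketched as a sliding-window counter that rejects a request once the user's traffic in the current window reaches the preset threshold. This is one plausible realization; the patent does not prescribe a specific counting algorithm, so the class below is an illustrative assumption.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window request counter: rejects a request once the user's
    traffic reaches the preset threshold within the window."""
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self._events = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._events[user_id]
        while q and now - q[0] >= self.window:
            q.popleft()                    # drop events outside the window
        if len(q) >= self.max_requests:
            return False                   # traffic reached the upper limit
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
results = [limiter.allow("u1", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
# the fourth request in the same 1-second window is rejected
assert results == [True, True, True, False]
```

Because the counters are keyed by user, one user's surge exhausts only that user's quota, matching the isolation goal stated above.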
Step S80: if the traffic of the request sent by the user has not reached the preset traffic threshold, perform a multi-level cache query on the request content based on a preset multi-level cache query policy, and return the query result.
Each rate-limiting parameter has its own preset traffic threshold.
In one embodiment, the preset multi-level cache query policy may span both the client and the server.
A client local cache is set on the client, while a server local cache, a middleware cache and the database are set on the server side.
The preset multi-level cache query policy may be expressed as follows:
1. The client queries its local cache first; on a hit the data is returned directly and the query request ends;
2. If the client local cache data has expired, a request is sent to the server; the server queries its own local cache first, and on a hit returns the data directly while also updating the client local cache, ending the query request;
3. If the server local cache data has expired, the server then queries the middleware cache; on a hit the data is returned directly while the server local cache and the client local cache are updated, ending the query request;
4. If the middleware cache is abnormal, the server finally queries the database, updates the middleware cache, the server local cache and the client local cache, and returns the query result.
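The four-level lookup order above can be sketched as follows. Plain dictionaries stand in for the client local cache, server local cache, middleware (redis) cache and database; the back-fill of the higher levels on a hit mirrors steps 2-4. This is a didactic sketch, not the patented implementation.

```python
class MultiLevelCache:
    """Lookup order: client local -> server local -> middleware (redis)
    -> database, back-filling every higher level after a hit."""
    def __init__(self, database: dict):
        self.client_local = {}
        self.server_local = {}
        self.middleware = {}
        self.database = database            # authoritative store

    def get(self, key):
        for level_name, level in (("client", self.client_local),
                                  ("server", self.server_local),
                                  ("middleware", self.middleware)):
            if key in level:
                value = level[key]
                self._backfill(key, value, upto=level_name)
                return value
        value = self.database[key]          # final fallback: the database
        self._backfill(key, value, upto="database")
        return value

    def _backfill(self, key, value, upto):
        # update every cache level above the one that answered
        self.client_local[key] = value
        if upto in ("middleware", "database"):
            self.server_local[key] = value
        if upto == "database":
            self.middleware[key] = value

cache = MultiLevelCache(database={"k": "v"})
assert cache.get("k") == "v"    # served from the database, caches filled
assert "k" in cache.client_local and "k" in cache.middleware
```

A real deployment would split these levels across processes and pair each with the expiry thresholds described in steps S81-S83; the sketch keeps only the lookup-and-backfill order.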
According to this scheme, in response to a request sent by a user, a request to query the preset rate-limiting parameters corresponding to that user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and if the traffic of the request has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query policy, and the query result is returned. By performing the rate-limit check and letting through only requests that have not reached the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failure, and gives the query service high availability and continuity. Because the request content is queried under a preset multi-level cache query policy, when one data store fails the data can still be queried from the other stores, so service continuity is unaffected and a highly available query service is provided.
In addition, the embodiment performs the rate-limit check on the user's request based on the rate-limiting parameters; once a parameter exceeds its threshold the user is told that the traffic has reached its upper limit, which cuts off the source of abnormal traffic, prevents one user's traffic surge from affecting other users, and safeguards the query service.
Referring to fig. 6, fig. 6 is a flowchart illustrating another exemplary embodiment of a data query method according to the present invention.
Based on the embodiment shown in fig. 2, step S80, namely performing the multi-level cache query on the request content of the user's request based on the preset multi-level cache query policy and returning the query result when the traffic of the request has not reached the preset traffic threshold, comprises:
Step S81: query the request content of the user's request against the client local cache.
After data is requested from the server for the first time, it is loaded into the client local cache, and a query-duration threshold for the client local cache is set.
Step S82: if the query duration of the client local cache has not exceeded the preset client local-cache query-duration threshold and the target data is found, return the target data to the user.
For example, with a threshold of 10 minutes, if the query duration of the client local cache is below the threshold, the client's query service answers directly from the local cache without contacting the server.
Step S83: if the query duration of the client local cache exceeds the preset threshold, or the target data cannot be found, forward the user's request to the server.
That is, the request is forwarded to the server either when the query duration has exceeded the client local-cache threshold so that the target data cannot be obtained locally, or when the threshold has not been exceeded but the local query completes without finding the target data.
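Steps S81-S83 can be sketched as a client-side cache with a query-duration (TTL) threshold: within the threshold the client answers locally, and on expiry or a miss the request is forwarded to the server and the entry refreshed (as in step S85). The `fetch_from_server` callback below is a hypothetical stand-in for the forwarded request; the 10-minute default mirrors the example above.

```python
import time

class ClientLocalCache:
    """Client-side cache with a query-duration (TTL) threshold: within
    the TTL the client answers locally; after it, or on a miss, the
    request is forwarded to the server and the entry refreshed."""
    def __init__(self, fetch_from_server, ttl_seconds: float = 600.0):
        self._fetch = fetch_from_server   # callback that queries the server
        self._ttl = ttl_seconds
        self._store = {}                  # key -> (value, stored_at)

    def get(self, key, now: float = None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is not None and now - entry[1] < self._ttl:
            return entry[0]               # fresh hit: no server round-trip
        value = self._fetch(key)          # expired or missing: ask the server
        self._store[key] = (value, now)   # step S85: update the local cache
        return value

calls = []
def server_lookup(key):
    calls.append(key)
    return f"value-of-{key}"

cache = ClientLocalCache(server_lookup, ttl_seconds=600.0)
assert cache.get("a", now=0.0) == "value-of-a"    # miss -> one server call
assert cache.get("a", now=100.0) == "value-of-a"  # fresh -> served locally
cache.get("a", now=700.0)                          # TTL expired -> refetched
assert calls == ["a", "a"]
```

Passing `now` explicitly keeps the example deterministic; a production cache would rely on the real clock.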
When the client local cache fails, the query request is sent directly to the server, so service continuity is unaffected. If the client application fails as a whole, only that client's designated service is affected; the failure does not spread to other services.
The server side can serve different clients accessing the query service at the same time; each client holds its own share of cached data, so the clients neither interfere with nor affect one another, which greatly improves availability across different service parties.
The scheme can provide high-concurrency query service to multiple parties through multiple access modes: clients may request service from the server via SDK, micro-service calls, or routing-gateway calls, so a single deployment serves different service users directly, without adaptation work and without deploying extra resources to support each access mode.
Step S84: receive the target data returned by the server side.
After receiving the user's request forwarded by the client, the server performs a data query based on the preset multi-level cache query policy, obtains the target data and returns it to the client, and the client receives it.
Step S85: update the client local cache based on the target data, and return the target data to the user.
After the target data returned by the server is received, the client local cache is updated with it for the next data query. Updating the client local cache promptly reduces frequent requests to the server, improves data access speed and relieves server pressure, significantly improving the performance of the query service and the user experience. Moreover, because the client queries its local cache and updates it with the target data rather than depending on the server, data can still be queried normally through the client during a brief overall server failure, effectively reducing the losses caused by system faults.
According to this scheme, in response to a request sent by a user, a request to query the preset rate-limiting parameters corresponding to that user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and if the traffic of the request has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query policy, and the query result is returned. Specifically, the request content of the user's request is queried against the client local cache; if the query duration of the client local cache has not exceeded the preset threshold and the target data is found, the target data is returned to the user; if the query duration exceeds the preset threshold or the target data cannot be found, the user's request is forwarded to the server; the target data returned by the server side is received; and the client local cache is updated based on the target data, which is returned to the user.
By performing the rate-limit check and letting through only requests that have not reached the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failure, and gives the query service high availability and continuity. Because the request content is queried under a preset multi-level cache query policy, when one data store fails the data can still be queried from the other stores, so service continuity is unaffected and a highly available query service is provided. In addition, once a rate-limiting parameter exceeds its threshold the user is told that the traffic has reached its upper limit, which cuts off the source of abnormal traffic, prevents one user's surge from affecting other users, and safeguards the query service. After receiving the target data returned by the server, the embodiment updates the client local cache with it for the next query; prompt updating reduces frequent requests to the server, improves data access speed, relieves server pressure, and significantly improves the performance of the query service and the user experience.
Moreover, because the embodiment queries the client local cache and updates it with the target data rather than depending on the server, data can still be queried normally through the client during a brief overall server failure, effectively reducing the losses caused by system faults. When the client local cache fails, the query request is sent directly to the server, so service continuity is unaffected; if the client application fails as a whole, only that client's designated service is affected, and the failure does not spread to other services. Different clients can access the query service at the same time, each holding its own share of cached data without interfering with one another, which greatly improves availability across service parties. Finally, the embodiment can provide high-concurrency query service to multiple parties through multiple access modes (SDK, micro-service calls and routing-gateway calls), so a single deployment serves different service users directly, without adaptation work and without deploying extra resources to support each access mode.
Referring to fig. 7, fig. 7 is a flowchart of another exemplary embodiment of a data query method according to the present invention.
Based on the embodiment shown in fig. 2, before step S50, namely sending the request to query the preset rate-limiting parameters corresponding to the user to the server in response to the user's request, the method comprises:
Step S20: generate a preset number of tokens for the user.
When the client application starts, various parameters are initialized and a preset number of tokens are generated for the user in the client.
In one embodiment, a token-bucket algorithm, a leaky-bucket algorithm or the like may be used to generate the preset number of tokens for the user.
In this embodiment, the preset number of tokens is generated for the client through a token bucket. When the client application starts, various parameters are initialized and a specified number of tokens are generated in the token bucket, which can be used to limit the user's access rate to the client application. For example, a token bucket generating n tokens per second may be provided; a token must be retrieved from the bucket before a user initiates a request, and if the bucket is empty the user waits until enough tokens are available. This effectively controls the frequency of the user's access to the client application, avoids the pressure that instantaneous high concurrency puts on the query service, and provides a degree of flow control and protection.
Step S30: in response to the request sent by the user, judge the admission condition of the request based on the tokens.
In one embodiment, when a user sends a request, the server first checks whether enough tokens are available in the token bucket. If so, the request is allowed into the server side for processing and one token is consumed from the bucket. If not, the server side acts according to the configured restriction policy, which may include rejecting the request, delaying processing, or returning a specific error response; the user then waits until enough tokens have been regenerated in the bucket before initiating a new request. The token bucket is refreshed and replenished periodically, typically at a set rate or per time window, to restore its capacity and ensure that it keeps supplying tokens.
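The token-bucket admission check described above can be sketched as follows, with an explicit clock passed in for determinism. The refill rate and capacity are illustrative assumptions; the patent leaves both to configuration.

```python
class TokenBucket:
    """Token bucket generating `rate` tokens per second up to `capacity`;
    a request is admitted only if a token can be taken out."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = 0.0

    def try_acquire(self, now: float) -> bool:
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False          # bucket empty: caller waits or is rejected

bucket = TokenBucket(rate=2.0, capacity=2)
assert bucket.try_acquire(0.0)       # 2 tokens -> admit
assert bucket.try_acquire(0.0)       # 1 token  -> admit
assert not bucket.try_acquire(0.0)   # empty    -> reject
assert bucket.try_acquire(0.5)       # 0.5 s * 2/s = 1 token refilled
```

The continuous refill means callers need not run a separate replenishment task; each admission attempt settles the elapsed-time credit first.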
By judging admission based on the tokens, the processing speed and frequency of requests can be limited and access to server resources controlled.
Step S40: if the tokens meet the preset admission condition, send the request to query the preset rate-limiting parameters corresponding to the user to the server.
As one implementation, the preset admission condition may be that a token can be taken out of the token bucket, that is, the number of tokens is not zero.
According to this scheme, in response to a request sent by a user, a request to query the preset rate-limiting parameters corresponding to that user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and if the traffic of the request has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query policy, and the query result is returned. Beforehand, a preset number of tokens is generated for the user; in response to the user's request, the admission condition of the request is judged based on the tokens; and if the tokens meet the preset admission condition, the request to query the user's preset rate-limiting parameters is sent to the server.
By performing the rate-limit check and letting through only requests that have not reached the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failure, and gives the query service high availability and continuity. Because the request content is queried under a preset multi-level cache query policy, when one data store fails the data can still be queried from the other stores, so service continuity is unaffected and a highly available query service is provided. In addition, once a rate-limiting parameter exceeds its threshold the user is told that the traffic has reached its upper limit, which cuts off the source of abnormal traffic, prevents one user's surge from affecting other users, and safeguards the query service. The embodiment further judges the admission condition of the user's request based on tokens: for example, a token bucket generating n tokens per second may be provided, a token must be acquired from the bucket before the user initiates a request, and if the bucket is empty the user waits until enough tokens are available. This effectively controls the frequency of the user's access to the client application, avoids the pressure that instantaneous high concurrency puts on the query service, and provides a degree of flow control and protection.
Referring to fig. 8, fig. 8 is a flowchart illustrating another exemplary embodiment of a data query method according to the present invention.
Based on the embodiment shown in fig. 7, the step S20 includes, before generating a preset number of tokens for the user:
Step S10: in response to a registration instruction from a user, set the rate-limiting parameters corresponding to the user and send them to the server side.
The rate-limiting parameters corresponding to the user can be set according to the user's actual needs. Access admission and dynamic traffic allocation with concurrency control are applied to each user as required, enabling effective flow control and avoiding unexpected traffic impact.
To cope with planned traffic surges such as flash sales, the traffic threshold can be raised for the corresponding user in advance, avoiding impact on normal service scenarios.
According to this scheme, in response to a request sent by a user, a request to query the preset rate-limiting parameters corresponding to that user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and if the traffic of the request has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query policy, and the query result is returned. Beforehand, a preset number of tokens is generated for the user; the admission condition of the user's request is judged based on the tokens; and if the tokens meet the preset admission condition, the request to query the user's preset rate-limiting parameters is sent to the server. In addition, in response to the user's registration instruction, the rate-limiting parameters corresponding to the user are set and sent to the server side.
By performing the rate-limit check and letting through only requests that have not reached the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failure, and gives the query service high availability and continuity. Because the request content is queried under a preset multi-level cache query policy, when one data store fails the data can still be queried from the other stores, so service continuity is unaffected and a highly available query service is provided. Once a rate-limiting parameter exceeds its threshold the user is told that the traffic has reached its upper limit, which cuts off the source of abnormal traffic, prevents one user's surge from affecting other users, and safeguards the query service. The embodiment also judges admission based on tokens, for example through a token bucket generating n tokens per second from which a token must be acquired before a request is initiated, effectively controlling access frequency, avoiding the pressure of instantaneous high concurrency, and providing a degree of flow control and protection.
In addition, by setting the rate-limiting parameters corresponding to each user in advance and applying access admission and dynamic traffic allocation with concurrency control per user as required, the embodiment achieves effective flow control and avoids unexpected traffic impact.
Referring to fig. 9, fig. 9 is a flowchart of another exemplary embodiment of a data query method according to the present invention. The method is applied to a server side and comprises the following steps:
step S200, a request for inquiring the preset current limiting parameters corresponding to the user is received, wherein the request is sent by the client.
The server side stores the current limiting parameters corresponding to the users.
Step S300, obtaining the current limiting parameter based on the request for inquiring the preset current limiting parameter corresponding to the user, and sending the current limiting parameter to the client, so that the client performs current limiting inspection on the request corresponding to the user based on the current limiting parameter, if the flow of the request corresponding to the user does not reach the preset flow threshold, performing multi-level cache inquiry on the request content of the request corresponding to the user based on a preset multi-level inquiry strategy, and returning an inquiry result.
After the server side sends the current limiting parameters to the client side, the client side performs current limiting check on the request sent by the user based on the current limiting parameters, so that only the request which does not reach the preset flow threshold value can pass through, the load of the server and the pressure of the database are reduced, the probability of failure of the server and the database can be reduced, and the provided query service has high availability and continuity. And the request sent by the user is subjected to the current limiting inspection based on the current limiting parameters, and once the current limiting parameters exceed the threshold value, the flow is prompted to reach the upper limit, so that the source of abnormal flow can be avoided, the influence of abnormal sudden increase of the flow of a certain user on other users is avoided, and the safety of query service is ensured.
Then, if the traffic of the user's requests has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on the preset multi-level query policy, and the query result is returned.
Because a multi-level cache query policy is in place, when one storage tier fails, data can still be queried from the other tiers, so service continuity is unaffected and a highly available query service is provided.
In this solution, the server side receives, from the client, a request to query the preset rate-limiting parameters corresponding to the user; obtains the rate-limiting parameters based on that request and sends them to the client, so that the client performs a rate-limit check on the user's requests based on those parameters; and, if the traffic of the user's requests has not reached the preset traffic threshold, the client performs a multi-level cache query on the request content based on the preset multi-level query policy and returns the query result.
By checking users' requests against the rate-limiting parameters and allowing through only requests below the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failures, and gives the query service high availability and continuity. In addition, by performing multi-level cache queries on the request content under the preset multi-level cache query policy, data can still be queried from the remaining storage tiers when one tier fails, so service continuity is unaffected and a highly available query service is provided. Moreover, because traffic exceeding the rate-limiting threshold is rejected with an upper-limit prompt, abnormal traffic is cut off at its source, an abnormal surge from one user cannot affect other users, and the security of the query service is ensured.
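The per-user rate-limit check described above can be sketched as a token-bucket limiter that rejects traffic once it reaches the preset threshold. The class and parameter names below (`TokenBucket`, `capacity`, `refill_rate`) are illustrative assumptions for this sketch, not names taken from the patent:

```python
import time

class TokenBucket:
    """Per-user token bucket: a request passes only while tokens remain,
    so traffic above the preset threshold is rejected at the client."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # preset traffic threshold (burst size)
        self.refill_rate = refill_rate    # tokens replenished per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # traffic has reached the upper limit

# With no refill, at most `capacity` requests pass before the limit trips.
bucket = TokenBucket(capacity=3, refill_rate=0.0)
results = [bucket.allow() for _ in range(5)]
```

In a real deployment the `capacity` and `refill_rate` would be derived from the rate-limiting parameters the server returns for each user.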
Referring to fig. 10, fig. 10 is a flowchart illustrating another exemplary embodiment of a data query method according to the present invention.
Based on the embodiment shown in fig. 9, after step S300 (obtaining the rate-limiting parameters based on the query request and sending them to the client), the method further includes:
Step S400: receiving the request sent by the user and forwarded by the client.
Referring to fig. 11, fig. 11 is a schematic diagram of a server-side data query flow in an embodiment of the present invention.
If the query time of the client local cache exceeds the preset client local-cache query-time threshold, or the target data cannot be found there, the client forwards the user's request to the server side. The server side then receives the user's request forwarded by the client.
Step S500: querying the server local cache for the request content of the request sent by the user.
When the server side receives a user's request, it first attempts to fetch the data from the server local cache and return it to the client. With a server local cache in place, requests from different clients can be answered quickly.
Step S600: if the query time of the server local cache does not exceed the preset server local-cache query-time threshold and the target data is found, returning the target data to the client.
The preset server local-cache query-time threshold can be set according to the local cache capacity, the application scenario, or the service requirements.
Step S700: if the query time of the server local cache exceeds the preset server local-cache query-time threshold, or the target data cannot be found, querying the middleware cache for the request content.
Specifically, the middleware cache is queried either when the server local-cache query time exceeds the preset threshold without the target data having been found, or when the query time is within the threshold but the entire server local cache has been searched without finding the target data.
Step S800: if the query time of the middleware cache does not exceed the preset middleware-cache query-time threshold and the target data is found, returning the target data to the client and updating the server local cache based on the target data.
The middleware cache can be Redis, Apache Ignite, MongoDB, or the like; a suitable middleware cache can be chosen according to the specific requirements and the system architecture.
The preset middleware-cache query-time threshold can be set according to the memory capacity, the application scenario, or the service requirements.
Step S900: if the query time of the middleware cache exceeds the preset middleware-cache query-time threshold, or the target data cannot be found, querying the database for the request content and triggering an alarm.
Specifically, the database is queried either when the middleware-cache query time exceeds the preset threshold without the target data having been found, or when the query time is within the threshold but the entire middleware cache has been searched without finding the target data.
With the middleware cache in place, it stands in for the database during normal query service; its data is kept synchronized with the database in real time, so the middleware's high throughput provides high concurrency while data freshness is guaranteed. When the middleware-cache query time exceeds the preset threshold or the target data cannot be found, triggering the alarm notifies operators as soon as possible so they can intervene and restore the middleware cache's data.
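The timeout-or-miss condition of step S900 can be sketched as follows; the function and parameter names (`query_with_alarm`, `threshold`, `alert`) are illustrative assumptions, with a plain dict standing in for a real middleware client:

```python
import time

def query_with_alarm(key, middleware_get, database, threshold, clock, alert):
    """Query the middleware cache within its query-time threshold; on
    timeout or miss, fall back to the database and trigger the alarm so
    operators can restore the middleware cache."""
    start = clock()
    value = middleware_get(key)           # middleware lookup (None on miss)
    if clock() - start <= threshold and value is not None:
        return value
    alert("middleware cache degraded for key %r; serving from database" % (key,))
    return database[key]

alerts = []
# Empty middleware cache forces the fallback path and one alarm.
value = query_with_alarm("k", {}.get, {"k": 7}, threshold=0.05,
                         clock=time.monotonic, alert=alerts.append)
```

In practice `alert` would post to the monitoring and early-warning system rather than append to a list.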
In this embodiment, Redis is used as the middleware cache. The Redis cluster runs in a sharded, multi-active primary/standby mode, accompanied by a comprehensive monitoring and early-warning mechanism, which ensures the robustness of the service. When either the local cache or the Redis cache fails, the other serves requests; when both fail, the database takes over, so the continuity of the query service is unaffected and high availability is guaranteed.
Step S1000: obtaining the target data queried from the database.
Step S1100: returning the target data to the client, and updating the server local cache and the middleware cache based on the target data so that it is available for the next query.
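Steps S500 through S1100 amount to one tiered lookup: try each cache within its query-time budget, fall through on timeout or miss, and backfill the faster tiers on the way out. A minimal sketch, with plain dicts and hypothetical names (`tiered_query`, `budgets`) standing in for real cache clients:

```python
import time

def tiered_query(key, tiers, db, budgets, now):
    """tiers: ordered caches (server local cache, middleware cache);
    budgets: per-tier query-time thresholds in seconds;
    db: the authoritative store queried as the last resort."""
    missed = []
    for tier, budget in zip(tiers, budgets):
        start = now()
        value = tier.get(key)
        elapsed = now() - start
        if elapsed <= budget and value is not None:
            for m in missed:              # backfill the faster tiers that missed
                m[key] = value
            return value
        missed.append(tier)               # timeout or miss: fall through
    value = db[key]                       # database query (alarm would fire here)
    for m in missed:                      # update every cache tier for next time
        m[key] = value
    return value

server_cache = {}
middleware_cache = {"k1": "v1"}
database = {"k1": "v1", "k2": "v2"}
result = tiered_query("k2", [server_cache, middleware_cache], database,
                      budgets=[0.05, 0.05], now=time.monotonic)
```

After the call, both cache tiers hold the value fetched from the database, matching steps S800 and S1100.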
The embodiment of the invention thus sets up a three-level cache scheme spanning client and server: the first-level cache is the client local cache, the second-level cache is the server local cache, and the third-level cache is the middleware cache. The first- and second-level caches are data caches with query-time thresholds; the third-level cache is a replica of the database data and is kept consistent with it.
An exception-degradation mechanism links the three cache levels: when the first-level cache fails, service degrades to the second level; when the first and second levels fail, service degrades to the third level; and when all three levels fail, service degrades to the database. This improves overall fault tolerance and achieves highly available external service: even when some nodes fail, service continues uninterrupted, which improves system stability and guarantees service continuity.
In this solution, the server side receives, from the client, a request to query the preset rate-limiting parameters corresponding to the user; obtains the rate-limiting parameters based on that request and sends them to the client, so that the client performs a rate-limit check on the user's requests and, if the traffic has not reached the preset traffic threshold, performs a multi-level cache query on the request content based on the preset multi-level query policy and returns the query result. The server side then receives the user's request forwarded by the client; queries the server local cache for the request content; if the query time of the server local cache does not exceed the preset server local-cache query-time threshold and the target data is found, returns the target data to the client; if the query time exceeds that threshold or the target data is not found, queries the middleware cache; if the query time of the middleware cache does not exceed the preset middleware-cache query-time threshold and the target data is found, returns the target data to the client and updates the server local cache with it; if the query time exceeds that threshold or the target data is not found, queries the database and triggers the alarm; obtains the target data from the database; and returns it to the client while updating the server local cache and the middleware cache with it.
By checking users' requests against the rate-limiting parameters and allowing through only requests below the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failures, and gives the query service high availability and continuity. By performing multi-level cache queries under the preset multi-level cache query policy, data can still be queried from the remaining storage tiers when one tier fails, so service continuity is unaffected and a highly available query service is provided. Because traffic exceeding the rate-limiting threshold is rejected with an upper-limit prompt, abnormal traffic is cut off at its source, an abnormal surge from one user cannot affect other users, and the security of the query service is ensured. The server local cache allows requests from different clients to be answered quickly. The middleware cache stands in for the database during normal query service, with its data synchronized with the database in real time, so the middleware's high throughput provides high concurrency while data freshness is guaranteed.
When the middleware-cache query time exceeds the preset threshold or the target data cannot be found, triggering the alarm notifies operators as soon as possible so the middleware cache's data can be restored. Because the local cache, the middleware cache, and the database are all deployed on the server side and the caches serve requests directly under normal conditions, the query service responds quickly. When either the local cache or the Redis cache fails, the other serves requests; when both fail, the database takes over, so the continuity of the query service is unaffected and high availability is guaranteed. The Redis cluster runs in a sharded, multi-active primary/standby mode with a comprehensive monitoring and early-warning mechanism, ensuring service robustness. When the database fails, the data in the caches can still serve external requests, so the service does not depend excessively on the database and remains uninterrupted.
Referring to fig. 12, fig. 12 is a flowchart of another exemplary embodiment of a data query method according to the present invention.
Based on the embodiment shown in fig. 10, before step S200 (receiving, from the client, the request to query the preset rate-limiting parameters corresponding to the user), the method further includes:
Step S100: synchronously refreshing the middleware cache at a preset refresh frequency, with the database as the reference source.
The middleware cache holds its data in memory, while the database stores its data on disk, and memory access is faster than disk access. To improve response speed and reduce the database's burden, the middleware cache is therefore refreshed synchronously with the database as the reference source.
The preset refresh frequency can be determined by how often the data in the database changes: frequently changing data may require a shorter refresh interval to keep the middleware cache current. The refresh frequency is also constrained by the system's resource and performance requirements, since a shorter interval increases the load on the database, the network, and other resources; it should therefore be set with those constraints in mind.
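The synchronous refresh of step S100 can be sketched as a periodic copy from the database into the middleware cache. The function name `refresh_middleware_cache` and the `changed_keys` parameter are illustrative assumptions; a real deployment would typically use change-log subscription or the middleware's own replication rather than copying entries one by one:

```python
def refresh_middleware_cache(cache, database, changed_keys=None):
    """Refresh the middleware cache with the database as the reference
    source. If changed_keys is given, only those entries are copied (an
    incremental refresh); otherwise the whole database is mirrored."""
    keys = changed_keys if changed_keys is not None else list(database.keys())
    for key in keys:
        if key in database:
            cache[key] = database[key]    # overwrite stale cached value
        else:
            cache.pop(key, None)          # deleted in the database: evict

database = {"a": 1, "b": 2}
cache = {"a": 0, "stale": 9}              # out-of-date middleware contents
refresh_middleware_cache(cache, database, changed_keys=["a", "stale"])
incremental = dict(cache)                 # state after the incremental pass
refresh_middleware_cache(cache, database) # full refresh at the preset frequency
```

A scheduler would invoke the full refresh at the preset frequency discussed above.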
In this solution, the server side receives, from the client, a request to query the preset rate-limiting parameters corresponding to the user; obtains the rate-limiting parameters based on that request and sends them to the client, so that the client performs a rate-limit check on the user's requests and, if the traffic has not reached the preset traffic threshold, performs a multi-level cache query on the request content based on the preset multi-level query policy and returns the query result. The server side then receives the user's request forwarded by the client; queries the server local cache for the request content; if the query time of the server local cache does not exceed the preset server local-cache query-time threshold and the target data is found, returns the target data to the client; if the query time exceeds that threshold or the target data is not found, queries the middleware cache; if the query time of the middleware cache does not exceed the preset middleware-cache query-time threshold and the target data is found, returns the target data to the client and updates the server local cache with it; if the query time exceeds that threshold or the target data is not found, queries the database and triggers the alarm; obtains the target data from the database; and returns it to the client while updating the server local cache and the middleware cache with it. In addition, at the preset refresh frequency, the middleware cache is refreshed synchronously with the database as the reference source.
By checking users' requests against the rate-limiting parameters and allowing through only requests below the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failures, and gives the query service high availability and continuity. By performing multi-level cache queries under the preset multi-level cache query policy, data can still be queried from the remaining storage tiers when one tier fails, so service continuity is unaffected and a highly available query service is provided. Because traffic exceeding the rate-limiting threshold is rejected with an upper-limit prompt, abnormal traffic is cut off at its source, an abnormal surge from one user cannot affect other users, and the security of the query service is ensured. The server local cache allows requests from different clients to be answered quickly. The middleware cache stands in for the database during normal query service, with its data synchronized with the database in real time, so the middleware's high throughput provides high concurrency while data freshness is guaranteed.
When the middleware-cache query time exceeds the preset threshold or the target data cannot be found, triggering the alarm notifies operators as soon as possible so the middleware cache's data can be restored. Because the local cache, the middleware cache, and the database are all deployed on the server side and the caches serve requests directly under normal conditions, the query service responds quickly. When either the local cache or the Redis cache fails, the other serves requests; when both fail, the database takes over, so the continuity of the query service is unaffected and high availability is guaranteed. The Redis cluster runs in a sharded, multi-active primary/standby mode with a comprehensive monitoring and early-warning mechanism, ensuring service robustness. When the database fails, the data in the caches can still serve external requests, so the service does not depend excessively on the database and remains uninterrupted. In addition, by refreshing the middleware cache synchronously at the preset refresh frequency with the database as the reference source, the cached data in the middleware cache is kept consistent with the database at all times, which ensures the timeliness of externally served data, improves the cache hit rate, and reduces the database's burden.
Referring to fig. 13, fig. 13 is a schematic diagram of an interaction flow between a client and a server in the data query method of the present invention.
When the client uses the query service, it first checks the admission conditions so as to screen out unauthenticated request sources.
1. After passing the authentication check, the client first queries its local cache, i.e., the first-level cache; on a hit the data is returned directly and the query request ends.
2. If the first-level cache data is invalid, a request is sent to the server side. After receiving the request, the server first queries its local cache, i.e., the second-level cache; on a hit the data is returned directly, the first-level cache is updated, and the query request ends.
3. If the second-level cache data is invalid, the server queries Redis, i.e., the third-level cache; on a hit the data is returned directly, the second- and first-level caches are updated, and the query request ends.
4. If the third-level cache is abnormal, the server falls back to querying the database, updates the third-, second-, and first-level caches, and returns the query result.
5. The third-level cache is kept synchronized with the database through data replication, ensuring data freshness.
When a storage structure or node in this interaction flow fails, the exception-degradation flow is entered:
a. If the first-level cache fails, the second- and third-level caches are queried directly; service continuity is unaffected.
b. If the second-level cache fails, the first- and third-level caches serve requests; service continuity is unaffected.
c. If the third-level cache fails, the first- and second-level caches serve requests; service continuity is unaffected.
d. If the database fails, the first-, second-, and third-level caches serve requests; service continuity is unaffected.
e. If an entire client application fails, only that client's designated services are affected; the failure does not spread to other services.
f. If the entire server-side application fails, the first-level cache can sustain service for a short time within the validity window of its cached data, avoiding a total outage.
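Rules a through d reduce to one pattern: try each level in order and fall through on failure, so service continues as long as any level is healthy. A minimal sketch; the level names and the exception used to signal failure are illustrative:

```python
def query_with_degradation(key, levels):
    """levels: ordered (name, lookup) pairs for the first-, second-, and
    third-level caches and the database. A failed level raises; the next
    level then serves, so continuity holds while any level is healthy."""
    errors = []
    for name, lookup in levels:
        try:
            return lookup(key), name      # first healthy level answers
        except Exception as exc:
            errors.append((name, exc))    # degraded: fall through
    raise RuntimeError("all levels failed: %r" % (errors,))

def failing(_key):
    raise ConnectionError("level down")

third_level = {"k": "v"}
database = {"k": "v"}
levels = [("L1", failing),                       # first-level cache has failed
          ("L2", failing),                       # second-level cache has failed
          ("L3", third_level.__getitem__),       # third-level cache is healthy
          ("DB", database.__getitem__)]
value, served_by = query_with_degradation("k", levels)
```

With the first two levels down, the third-level cache answers, matching rules a and b above.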
In addition, an embodiment of the present application further provides a data query apparatus, comprising:
a sending module, which, in response to a request sent by a user, sends to the server side a request to query the preset rate-limiting parameters corresponding to the user;
a receiving module, which receives the rate-limiting parameters returned by the server side;
a checking module, which performs a rate-limit check on the request sent by the user based on the rate-limiting parameters;
and a query module, which, if the traffic of the request sent by the user has not reached the preset traffic threshold, performs a multi-level cache query on the request content based on a preset multi-level cache query policy and returns the query result.
For the principle and implementation of data query in this embodiment, please refer to the above embodiments; details are not repeated here.
In addition, an embodiment of the present application further provides a terminal device, comprising a memory, a processor, and a data query program stored in the memory and executable on the processor, where the data query program, when executed by the processor, implements the steps of the data query method described above.
Because the data query program, when executed by the processor, adopts all the technical solutions of the above embodiments, it has at least all the beneficial effects brought by those technical solutions; details are not repeated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a data query program, where the data query program, when executed by a processor, implements the steps of the data query method described above.
Because the data query program, when executed by the processor, adopts all the technical solutions of the above embodiments, it has at least all the beneficial effects brought by those technical solutions; details are not repeated here.
In this solution, in response to a request sent by a user, a request to query the preset rate-limiting parameters corresponding to the user is sent to the server side; the rate-limiting parameters returned by the server side are received; a rate-limit check is performed on the user's request based on those parameters; and, if the traffic of the user's request has not reached the preset traffic threshold, a multi-level cache query is performed on the request content based on a preset multi-level cache query policy and the query result is returned. By allowing through only requests below the preset traffic threshold, the embodiment of the invention reduces server load and database pressure, lowers the probability of server and database failures, and gives the query service high availability and continuity. In addition, by performing multi-level cache queries under the preset multi-level cache query policy, data can still be queried from the remaining storage tiers when one tier fails, so service continuity is unaffected and a highly available query service is provided.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the description of the embodiments above, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although the former is in many cases the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above, including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device, or the like) to perform the method of each embodiment of the present invention.
The foregoing description covers only preferred embodiments of the present invention and does not limit the scope of the patent. Any equivalent structural or process transformation made using the contents of the description of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A data query method, wherein the method is applied to a client and comprises the following steps:
in response to a request sent by a user, sending, to a server side, a request to query preset rate-limiting parameters corresponding to the user;
receiving the rate-limiting parameters returned by the server side;
performing a rate-limit check on the request sent by the user based on the rate-limiting parameters;
and if the traffic of the request sent by the user has not reached a preset traffic threshold, performing a multi-level cache query on the request content of the request based on a preset multi-level cache query policy, and returning a query result.
2. The method according to claim 1, wherein the step of performing a multi-level cache query on the request content of the request sent by the user based on a preset multi-level cache query policy and returning a query result comprises:
querying the client local cache for the request content of the request sent by the user;
if the query time of the client local cache does not exceed a preset client local-cache query-time threshold and the target data is found, returning the target data to the user;
if the query time of the client local cache exceeds the preset client local-cache query-time threshold or the target data is not found, forwarding the request sent by the user to the server side;
receiving the target data returned by the server side;
and updating the client local cache based on the target data, and returning the target data to the user.
3. The method according to claim 1, wherein the step of sending, to the server side, a request to query the preset rate-limiting parameters corresponding to the user in response to the request sent by the user comprises:
generating a preset number of tokens for the user;
in response to the request sent by the user, judging the admission condition of the request based on the tokens;
and if the tokens satisfy the preset admission condition, sending, to the server side, the request to query the preset rate-limiting parameters corresponding to the user.
4. The method according to claim 3, wherein before the step of generating a preset number of tokens for the user, the method comprises:
in response to a registration instruction of the user, setting the rate-limiting parameters corresponding to the user and sending them to the server side.
5. A data query method, applied to a server, the method comprising:
receiving a request sent by a client to query preset rate-limit parameters corresponding to a user;
and obtaining the rate-limit parameters based on the request to query the preset rate-limit parameters corresponding to the user, and sending the rate-limit parameters to the client, so that the client performs a rate-limit check on the request corresponding to the user based on the rate-limit parameters and, if the traffic of the request corresponding to the user does not reach a preset traffic threshold, performs a multi-level cache query on the content of the request corresponding to the user based on a preset multi-level cache query policy and returns a query result.
6. The method according to claim 5, wherein after the step of obtaining the rate-limit parameters based on the request to query the preset rate-limit parameters corresponding to the user and sending the rate-limit parameters to the client, the method further comprises:
receiving the request sent by the user and forwarded by the client;
querying the server local cache for the content of the request sent by the user;
if the query time of the server local cache does not exceed a preset server local cache query time threshold and the target data is found, returning the target data to the client;
if the query time of the server local cache exceeds the preset server local cache query time threshold or the target data is not found, querying the middleware cache for the request content;
if the query time of the middleware cache does not exceed a preset middleware cache query time threshold and the target data is found, returning the target data to the client and updating the server local cache based on the target data;
if the query time of the middleware cache exceeds the preset middleware cache query time threshold or the target data is not found, querying the database for the request content and triggering an alarm function;
obtaining the target data found in the database;
and returning the target data to the client, and updating the server local cache and the middleware cache based on the target data.
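As a hypothetical sketch (not part of the claims), the server-side lookup order of claim 6 is: server local cache, then middleware cache, then the database of record, with an alarm when both cache levels miss and back-filling of the caches on the way out. The timeout handling is collapsed to plain hit/miss checks, and every name below is an assumption:

```python
def server_query(key, local_cache, middleware_cache, database, alarm):
    """Claim-6 sketch: three-level server-side lookup with cache back-fill."""
    value = local_cache.get(key)             # level 1: server local cache
    if value is not None:
        return value
    value = middleware_cache.get(key)        # level 2: middleware cache (e.g. Redis)
    if value is not None:
        local_cache[key] = value             # back-fill the server local cache
        return value
    alarm(key)                               # both cache levels missed: trigger the alarm
    value = database[key]                    # level 3: query the database
    local_cache[key] = value                 # update both cache levels with the target data
    middleware_cache[key] = value
    return value
```

After one database hit, subsequent lookups for the same key are served from the local cache without raising further alarms.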
7. The method according to claim 6, wherein before the step of receiving the request sent by the client to query the preset rate-limit parameters corresponding to the user, the method further comprises:
synchronously refreshing the middleware cache based on a preset refresh frequency, with the database as the reference object.
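For illustration only (not part of the claims), the refresh step of claim 7 resynchronizes the middleware cache from the database of record; in practice it would run on a timer at the preset refresh frequency. The dict-backed stores and function name are assumptions:

```python
def refresh_middleware_cache(middleware_cache, database):
    """Claim-7 sketch: resync the middleware cache from the database.

    The database is the reference object: stale entries are dropped and
    current entries are copied in. A scheduler would invoke this at the
    preset refresh frequency.
    """
    middleware_cache.clear()
    middleware_cache.update(database)
```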
8. A data query apparatus, wherein the apparatus is disposed at a client and comprises:
a sending module, configured to send, to a server in response to a request sent by a user, a request to query preset rate-limit parameters corresponding to the user;
a receiving module, configured to receive the rate-limit parameters returned by the server;
a checking module, configured to perform a rate-limit check on the request sent by the user based on the rate-limit parameters;
and a query module, configured to perform, if the traffic of the request sent by the user does not reach a preset traffic threshold, a multi-level cache query on the content of the request sent by the user based on a preset multi-level cache query policy, and return a query result.
9. A data query device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the data query method according to any one of claims 1-4 or the data query method according to any one of claims 5-7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the data query method according to any one of claims 1-7.
CN202311288987.7A 2023-09-28 2023-09-28 Data query method, device, equipment and storage medium Pending CN117390068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311288987.7A CN117390068A (en) 2023-09-28 2023-09-28 Data query method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117390068A true CN117390068A (en) 2024-01-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination