CN115422237A - Data query method and device, computer equipment and storage medium - Google Patents

Data query method and device, computer equipment and storage medium

Info

Publication number
CN115422237A
CN115422237A (application CN202211030962.2A)
Authority
CN
China
Prior art keywords
cache
query
hit
data
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211030962.2A
Other languages
Chinese (zh)
Inventor
许成卿
刘鹏
马万铮
王志国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Coocaa Network Technology Co Ltd
Original Assignee
Shenzhen Coocaa Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Coocaa Network Technology Co Ltd filed Critical Shenzhen Coocaa Network Technology Co Ltd
Priority to CN202211030962.2A priority Critical patent/CN115422237A/en
Publication of CN115422237A publication Critical patent/CN115422237A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management


Abstract

The invention relates to the field of data query, and in particular to a data query method, a data query apparatus, computer equipment and a storage medium. Query request parameters are parsed to obtain a corresponding request policy, and a cache key is generated from the query request parameters according to a preset cache-key generation rule. The hit amount matching the cache key is then queried in a first cache database. If the query succeeds, the matching hit amount is output; if the query fails, the hit amount whose tag value satisfies the corresponding tag condition in the request policy is queried in a second cache database, and the hit data is mapped to the cache key and cached into the first cache database.

Description

Data query method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data query, and in particular, to a data query method, apparatus, computer device, and storage medium.
Background
A database is a repository that organizes, stores, and manages data according to a data structure; it receives query requests and returns the corresponding data. When query requests are too numerous or too frequent, the database's response time grows long and the database may even crash. To cope with large-scale data writes and reads, some data processing systems use a distributed remote cache to store the corresponding data and thereby reduce access pressure on the database: large amounts of database data are spread across different machines to handle large batches of access requests. However, if only a distributed remote cache is used to store data, the access capacity of a single computing node is limited; once access requests exceed that limit, the system again suffers long response times and may be pushed to the edge of a crash. How to reduce request response time and improve data query efficiency therefore remains an urgent problem.
Disclosure of Invention
Therefore, it is necessary to provide a data query method, apparatus, computer device and storage medium to solve the problem of low data query efficiency.
A first aspect of an embodiment of the present application provides a data query method, where the query method includes:
analyzing the query request parameters to obtain corresponding request strategies;
generating the query request parameter into a cache key according to a generation rule of a preset cache key;
according to the cache key, inquiring the hit amount matched with the cache key in a first cache database;
if the query is successful, outputting the hit amount matched with the cache key, wherein hit amounts from historical query-request-parameter queries are cached in the first cache database;
or, if the query fails, querying in a second cache database, according to the corresponding tag condition in the request policy, the hit amount whose tag value satisfies the tag condition, mapping the hit data to the cache key, and caching the mapped hit data and cache key into the first cache database, wherein tag values hit during historical query-request-parameter queries and their corresponding hit amounts are cached in the second cache database.
A second aspect of embodiments of the present application provides a data query apparatus, including:
the analysis module is used for analyzing the query request parameters to obtain a corresponding request strategy;
the generating module is used for generating the query request parameters into cache keys according to the generating rules of preset cache keys;
the query module is used for querying, in a first cache database and according to the cache key, the hit amount matched with the cache key, wherein hit amounts from historical query-request-parameter queries are cached in the first cache database;
the first cache database output module is used for outputting the hit amount matched with the cache key if the query is successful;
a second cache database query module, configured to, if the query fails, query, according to a tag condition corresponding to the request policy, a hit amount that a tag value satisfies the tag condition in a second cache database, map the hit data and the cache key, and cache the hit data and the cache key in the first cache database; and tag values and corresponding hit quantities which hit during historical query request parameter query are cached in the second cache database.
In a third aspect, an embodiment of the present invention provides a computer device, where the computer device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the data query method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the data query method according to the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
the query request parameters are parsed to obtain a corresponding request policy, and a cache key is generated from them according to a preset cache-key generation rule. The hit amount matching the cache key is queried in a first cache database, which caches hit amounts from historical query-request-parameter queries; if the query succeeds, that hit amount is output. If the query fails, the hit amount whose tag value satisfies the corresponding tag condition in the request policy is queried in a second cache database (which caches tag values hit during historical queries and their corresponding hit amounts), and the hit data is mapped to the cache key and cached into the first cache database. Subsequent identical queries can then be served directly from the first cache, reducing response time and improving query efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a data query method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a data query method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a data query method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a data query device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a data query device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a data query device according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a data query device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present invention and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
It should be understood that, the sequence numbers of the steps in the following embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by the function and the internal logic thereof, and should not limit the implementation process of the embodiments of the present invention in any way.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
An embodiment of the present invention provides a data query method, which can be applied in an application environment as shown in fig. 1, where a client communicates with a server. The client includes, but is not limited to, a palm top computer, a desktop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and other computer devices. The server can be implemented by an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 2, which is a flowchart illustrating a data query method according to an embodiment of the present invention, where the data query method may be applied to the server in fig. 1, and the server is connected to a corresponding client, as shown in fig. 2, the data query method may include the following steps.
S201: and analyzing the query request parameters to obtain a corresponding request strategy.
In step S201, the query request parameters may come from an HTTP request sent by the user to obtain or send data. For example, a chat interface in social software may send an HTTP request to fetch new messages when refreshed, and send an HTTP request to deliver a message to the recipient when the send button is pressed. The request policy is the request condition contained in the request parameters.
In this embodiment, the data request is received through the request interface and the request parameters are parsed to obtain the request information they carry, so that the corresponding operation can be performed and normal operation of the system is ensured. For example, when the request mode is a GET request, the interface name and the request policy are parsed from the URL of the request; when the request mode is a POST request, they are parsed from the request parameters in the request body.
Optionally, analyzing the query request parameter to obtain a corresponding request policy, including:
parsing the query request parameters to obtain the request policy identification number carried in them;
matching that identification number against a preset policy set to obtain the corresponding request policy.
In this embodiment, the received request parameters include the identification information of the request policy. After parsing, the identification number corresponding to the request policy is obtained and matched against a preset policy set, which contains the identification numbers of all request policies; when the match succeeds, the corresponding request policy is obtained. The result may be one request policy or several.
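The policy-matching step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; Python is used only as an example language, and the parameter name `policy_ids` and the dict shape of the preset policy set are assumptions.

```python
def match_request_policies(query_params, preset_policy_set):
    """Return every preset policy whose identification number appears in
    the parsed query parameters; a request may match several policies."""
    requested_ids = query_params.get("policy_ids", [])
    return [preset_policy_set[pid]
            for pid in requested_ids
            if pid in preset_policy_set]  # unknown IDs are simply skipped
```

A successful match returns the policy objects; an empty list signals that no preset policy carried the requested identification number.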
S202: and generating the query request parameters into cache keys according to the generation rule of the preset cache keys.
In step S202, a cache key is generated from the query request parameter, and the corresponding data is queried in the cache database according to the cache key.
In this embodiment, a cache key is generated from the query request parameters according to a preset key generation rule, and the corresponding data is then obtained from the cache database using that key. One or more cache keys may be generated, depending on the single-valued and/or multi-valued parameters present in the request parameters and their values.
It should be noted that a single-valued parameter contains exactly one element, whose value is the single-valued parameter value, whereas a multi-valued parameter is an array containing several elements. If the request parameters include a multi-valued parameter (alongside zero or more single-valued parameters), one cache key is generated per element of the multi-valued parameter. Further, if the request parameters include both single-valued and multi-valued parameters, each key is built from one element of the multi-valued parameter together with all the single-valued parameter values. What is finally generated is therefore a set of cache keys, one per element of the multi-valued parameter.
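The per-element key expansion described above can be sketched as follows. This is an illustrative Python sketch under the assumption that multi-valued parameters arrive as lists and that a `name=value` serialization is an acceptable key source; the patent does not prescribe a serialization format.

```python
def expand_to_cache_keys(params):
    """Produce one cache-key source string per element: single-valued
    parameters contribute their one value to every key, and each element
    of a multi-valued parameter (a list) yields its own key."""
    singles = {k: v for k, v in params.items() if not isinstance(v, list)}
    multis = {k: v for k, v in params.items() if isinstance(v, list)}
    base = "&".join(f"{k}={singles[k]}" for k in sorted(singles))
    if not multis:
        return [base] if base else []
    keys = []
    for name, values in sorted(multis.items()):
        for v in values:  # one key per element of the multi-valued parameter
            part = f"{name}={v}"
            keys.append(f"{base}&{part}" if base else part)
    return keys
```

Sorting the parameter names makes the serialization deterministic, so the same request always maps to the same cache keys.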
Optionally, generating the query request parameter into the cache key according to a preset cache key generation rule, including:
and encrypting the query request parameters to obtain encrypted data, and taking the encrypted data as a cache key.
In this embodiment, a symmetric encryption algorithm is used to encrypt the query request parameters. A symmetric encryption algorithm (also called a private-key algorithm) uses the same key for encryption and decryption: the encryption key can be derived from the decryption key and vice versa, and in most symmetric algorithms the two keys are identical, so the sender and receiver must agree on a key before communicating securely. The security of a symmetric algorithm rests on the key; anyone who obtains it can decrypt the messages sent or received, so keeping the key confidential is critical to the security of communications.
Encrypting the query request parameters yields a unique cache key; the cache key is mapped into the cache database, and the corresponding cached data is looked up by that key. It should be noted that when the query request parameters include a multi-valued parameter, each element is encrypted separately, producing a plurality of cache keys.
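As a sketch of deriving a fixed-length, deterministic cache key from a serialized parameter string: the patent specifies a symmetric cipher, but for a self-contained standard-library example a keyed HMAC-SHA256 digest is used here as a stand-in with the same relevant properties (deterministic, key-dependent, fixed length). The secret key is a placeholder.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-secret"  # hypothetical key; the patent uses a symmetric cipher instead

def derive_cache_key(param_string):
    """Derive a fixed-length cache key from a serialized query parameter.
    The same input always yields the same key, so repeated identical
    requests map to the same cache entry."""
    return hmac.new(SECRET_KEY, param_string.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

For a multi-valued parameter, this function would be called once per element, giving one cache key per element as described above.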
S203: and according to the cache key, inquiring the hit amount matched with the cache key in the first cache database.
In step S203, the first cache database is queried for the hit amount corresponding to the cache key according to the cache key, where the cache key is used for matching with the corresponding value.
In this embodiment, the first cache database is a conventional key-value (KV) non-relational database. In a KV database, if the key or the value contains multiple fields, querying only some of those fields is inconvenient; likewise, a KV store resolves a unique value from a key, which makes querying by value awkward. In the actual development process, the attributes of each field in the key and value must be known, such as the number of primary keys, the type of each primary key, how many fields the value corresponding to the primary key has, and the type of each field. Moreover, business queries often do not fit a unique key-to-value mapping: typically all data satisfying some condition must be pulled, and when a value contains many fields while development cares about only one or a few of them, partial-field lookup is frequently needed.
S204: and if the query is successful, outputting the hit quantity matched with the cache key.
In step S204, if the query is successful, a hit amount matching the cache key is output in the first cache database, and the hit amount may be one or multiple.
In this embodiment, querying by cache key is a key-value lookup. The first cache database is a key-value database: the data query request carries the query condition, data is stored as key-value pairs, and each key-value pair represents the correspondence between a query condition and its cached result. In response to the data query request, the hit amount matching the query condition is determined in the key-value database.
S205: if the query fails, according to the corresponding tag condition in the request strategy, querying the hit amount of which the tag value meets the tag condition in the second cache database, mapping the hit data and the cache key value, and caching the mapped hit data and the cache key value into the first cache database.
In step S205, tag values and corresponding hit amounts in the historical query request parameter query are cached in the second cache database, where the tag conditions are query ranges in the corresponding request policy, and the tag values are hit conditions of each query range.
In this embodiment, when the first cache database returns no matching hit amount, the tag values are used to query the corresponding hit amount in the second cache database according to the obtained request policy. For example, suppose the tag condition in one request policy is that the number of message-prompt clicks on a tag over the last 30 days is at least 1. The value of that click count is looked up in the second cache database and substituted into the tag condition; if the 30-day click count is, say, 2, the crowd condition holds and the hit amount for that crowd is obtained. As another example, if the tag condition in a request policy is that the data source is the first platform or the second platform, the data source of the video source is looked up in the second cache database; when it is the first or the second platform, the crowd condition holds and the hit amount for that crowd is obtained.
The hit crowd obtained from the second cache database is mapped to the corresponding cache key, and the mapped pair is cached into the first cache database, so that the next identical query can be answered from the first cache database directly.
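The two-level lookup of steps S203 to S205 can be sketched end to end. This is an illustrative Python sketch under assumed data shapes (the first cache as a key-value mapping, the second cache as per-entity tag values, tag conditions as predicates); the patent does not fix these representations.

```python
def count_satisfying(tag_conditions, l2_tags):
    """Count the entities (e.g. users) whose cached tag values satisfy
    every tag condition in the request policy."""
    return sum(1 for tag_values in l2_tags.values()
               if all(cond(tag_values) for cond in tag_conditions))

def query_hit_amount(cache_key, tag_conditions, l1_cache, l2_tags,
                     fallback=count_satisfying):
    """S203/S204: try the first (result) cache by cache key.
    S205: on a miss, evaluate the tag conditions against the tag values
    held in the second cache, then write the result back to the first
    cache mapped to the cache key."""
    hit = l1_cache.get(cache_key)
    if hit is not None:          # cache hit, including a cached 0
        return hit
    hit = fallback(tag_conditions, l2_tags)
    l1_cache[cache_key] = hit    # map hit data to cache key for next time
    return hit
```

The `hit is not None` check matters: a legitimately cached hit amount of 0 must still count as a first-cache hit, otherwise every empty result would fall through to the second cache.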
Referring to fig. 3, which is a schematic flowchart of a data query method according to an embodiment of the present invention, as shown in fig. 3, the data query method may include the following steps:
S301: constructing a first cache database based on a cluster architecture, wherein the first cache database is used for caching a hit result set;
S302: constructing a second cache database based on a master-slave replication architecture, wherein the second cache database is used for caching the tag set.
In this embodiment, the corresponding hit result set is cached by a Redis cluster. Redis service instances may be started on different physical machines, or on different ports of the same physical machine, and each instance serves as a storage node on a consistent hash ring; each storage node is sized according to the actual memory of its physical machine. Consistent hashing maps data and virtual nodes into one numerical space whose ends are joined to form a hash ring. Each virtual node is hashed to a number on the ring, and the stored correspondence between numbers and virtual nodes is the hash mapping table. An array is initialized, and a Master node is designated for the Redis cluster to manage the cluster metadata and maintain the consistent-hash mapping table; the other service instances started by the cluster act as Slave nodes for data storage, which manage and store data, maintain their instance state, and exchange data and state with the Master node.
In this embodiment, 10 Redis memory servers are deployed in total; each Redis application instance has 32 GB of memory, and each memory server hosts 3 Redis application instances, one master and two slaves. An essential property of this architecture is automatic failure recovery: the system must recover normal service automatically after a cache or interface-request exception, otherwise stale cache entries persist for a long time, seriously degrading interface performance and possibly bringing the interface down.
It should be noted that the longer the initialized array, the more uniformly data will later be distributed across the Redis cluster. However, if the array is too long while the storage nodes perform poorly, the data distribution becomes more uniform but write and read speeds slow down. The length of the initialized array must therefore be chosen by weighing the size of the Redis cluster against the performance of the storage nodes.
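The consistent-hashing placement described above can be sketched briefly. This is a minimal illustration of the general technique, not the patent's Master-maintained mapping table; the virtual-node count and the use of MD5 are arbitrary choices for the sketch.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Virtual nodes are hashed onto a ring; each key is stored on the
    first virtual node clockwise from the key's own hash."""

    def __init__(self, nodes, vnodes_per_node=16):
        self.ring = []  # sorted list of (hash, physical node)
        for node in nodes:
            for i in range(vnodes_per_node):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise (wrapping at the end) to the owning node."""
        idx = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

Because only the keys between a removed node and its ring predecessor move when membership changes, adding or losing one storage node remaps only a fraction of the cached hit results.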
The second cache database is built on a master-slave replication architecture. Because the tag values are cached in the second cache database and are queried frequently, the tag values in the Redis master node are replicated to the Redis slave nodes; a second cache built this way can sustain frequent tag-value queries and return results quickly.
S303: analyzing the query request parameters to obtain a corresponding request policy;
S304: generating a cache key from the query request parameters according to the preset cache-key generation rule;
S305: querying, in the first cache database and according to the cache key, the hit amount matching the cache key;
S306: if the query is successful, outputting the hit amount matched with the cache key;
S307: if the query fails, querying, in the second cache database and according to the corresponding tag condition in the request policy, the hit amount whose tag value satisfies the tag condition, mapping the hit data to the cache key, and caching them into the first cache database.
The contents of the steps S303 to S307 are the same as the contents of the steps S201 to S205, and the descriptions of the steps S201 to S205 may be referred to, which are not repeated herein.
Referring to fig. 4, which is a schematic flowchart of a data query method according to an embodiment of the present invention, as shown in fig. 4, the data query method may include the following steps:
S401: analyzing the query request parameters to obtain a corresponding request policy;
S402: generating a cache key from the query request parameters according to the preset cache-key generation rule;
S403: querying, in the first cache database and according to the cache key, the hit amount matching the cache key;
S404: if the query is successful, outputting the hit amount matched with the cache key;
the contents of the steps S401 to S404 are the same as the contents of the steps S201 to S204, and the descriptions of the steps S201 to S204 may be referred to, which are not repeated herein.
S405: dividing the tag conditions into dynamic and static tag conditions according to a preset threshold and the number of times the tag value corresponding to each condition changes, and acquiring the dynamic tag conditions;
S406: searching a preset database for the data corresponding to each dynamic tag condition, and caching the found data into the second cache database.
In this embodiment, a preset threshold splits the tag conditions in a request policy into dynamic and static tags: when the tag value changes more often than the threshold per unit time, the condition is considered a dynamic tag; when it changes less often, the condition is considered a static tag. For each dynamic tag, the hit result satisfying each tag value is computed in the preset database, and the tag value together with its query result is cached. With the results for many tag values cached, a data query can read the hit result for the relevant tag value directly from the cache database, avoiding a query through the database engine and improving query efficiency.
For example, click-count tag conditions must be queried for users at each click-rate level, such as users with a daily click count of 3000, 5000, or 7000, so the tag value changes constantly. If the number of changes exceeds the preset threshold, the daily-click-count condition is treated as a dynamic tag condition; the hit results for the different tag values are queried in the preset database and cached in the cache database. By contrast, when the video-source tag condition is queried, the queried video source generally comes from the first platform, so that condition is treated as a static tag.
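The dynamic/static split of S405 reduces to a threshold comparison over observed change counts. The sketch below is illustrative; how change counts per unit time are collected is not specified by the patent and is assumed to be available as a mapping.

```python
def classify_tag_conditions(change_counts, threshold):
    """Split tag conditions into dynamic and static: a condition whose tag
    value changed more than `threshold` times per unit time is dynamic,
    otherwise static (S405)."""
    dynamic = [tag for tag, n in change_counts.items() if n > threshold]
    static = [tag for tag, n in change_counts.items() if n <= threshold]
    return dynamic, static
```

Only the dynamic conditions then need their results precomputed and pushed into the second cache database (S406); static conditions change rarely enough that their cached results stay valid.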
S407: if the query fails, according to the corresponding tag condition in the request strategy, querying the hit amount of which the tag value meets the tag condition in the second cache database, mapping the hit data and the cache key, and caching the mapped hit data and the cache key into the first cache database.
The content of step S407 is the same as that of step S205; reference may be made to the description of step S205, and details are not repeated herein.
Referring to fig. 5, which is a schematic flowchart of a data query method according to an embodiment of the present invention, as shown in fig. 5, the data query method may include the following steps:
S501: analyzing the query request parameters to obtain a corresponding request strategy;
S502: generating a cache key from the query request parameters according to the generation rule of the preset cache key;
S503: querying, according to the cache key, the hit amount matching the cache key in the first cache database;
S504: if the query is successful, outputting the hit amount matching the cache key;
S505: if the query fails, querying, according to the corresponding tag condition in the request strategy, the hit amount whose tag value satisfies the tag condition in the second cache database, mapping the hit data and the cache key, and caching them in the first cache database.
The contents of steps S501 to S505 are the same as those of steps S201 to S205; reference may be made to the descriptions of steps S201 to S205, and details are not repeated herein.
S506: if the tag value corresponding to the tag condition changes, calling back the service interface corresponding to the tag condition, and updating the tag value in the second cache database.
In this embodiment, when a tag value in the tag condition is updated, the cached data obtained with the pre-update tag value needs to be deleted, and the cached data corresponding to that tag value is updated in the cache database. When the tag value is updated, the query corresponding to the pre-update tag value is re-executed by calling back the corresponding service interface, and the re-queried data is cached in the cache database.
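A rough sketch of the callback-based update in S506 follows. The class, the dictionary-backed cache, and the stand-in service function are all illustrative assumptions, not the patent's actual interfaces:

```python
# Sketch of S506: when a tag value changes, delete the stale cache entry
# and re-query through the service-interface callback.
class SecondCacheDB:
    def __init__(self, service_interface):
        self.store = {}                              # tag condition -> {tag value: hit amount}
        self.service_interface = service_interface   # callback into the query service

    def on_tag_value_changed(self, tag_condition, old_value, new_value):
        # Delete the cached data obtained with the pre-update tag value.
        self.store.setdefault(tag_condition, {}).pop(old_value, None)
        # Call back the service interface to re-run the query for the new
        # value, and cache the re-queried hit amount.
        hits = self.service_interface(tag_condition, new_value)
        self.store[tag_condition][new_value] = hits

def fake_service(tag_condition, tag_value):
    # Stand-in for the real re-query against the preset database.
    return f"hits-for-{tag_condition}:{tag_value}"

cache = SecondCacheDB(fake_service)
cache.store["daily_click_rate"] = {3000: "stale-hits"}
cache.on_tag_value_changed("daily_click_rate", 3000, 5000)
```

After the callback runs, the stale entry for the old tag value 3000 is gone and the re-queried result for 5000 is cached in its place.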
Referring to fig. 6, which is a schematic flowchart of a data query method according to an embodiment of the present invention, as shown in fig. 6, the data query method may include the following steps:
S601: analyzing the query request parameters to obtain a corresponding request strategy;
S602: generating a cache key from the query request parameters according to the generation rule of the preset cache key;
S603: querying, according to the cache key, the hit amount matching the cache key in the first cache database;
S604: if the query is successful, outputting the hit amount matching the cache key;
S605: if the query fails, querying, according to the corresponding tag condition in the request strategy, the hit amount whose tag value satisfies the tag condition in the second cache database, mapping the hit data and the cache key, and caching them in the first cache database.
The contents of steps S601 to S605 are the same as those of steps S201 to S205; reference may be made to the descriptions of steps S201 to S205, and details are not repeated herein.
S606: if the hit amount whose tag value satisfies the tag condition is null in the second cache database, querying the preset database according to the query request parameters.
In this embodiment, when the hit amount whose tag value satisfies the tag condition is null in the second cache database, that is, when the corresponding data is not cached in the second cache database, the preset database is queried according to the cache key corresponding to the query request parameters; after the corresponding data is found in the preset database, the queried data is cached in the first cache database to facilitate the next query.
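Putting steps S601 to S606 together, the two-level lookup with fallback to the preset database might look like the following sketch. The dictionary stand-ins for the two cache databases and the preset database, the parameter format, and the MD5-based key generation are assumptions for illustration only:

```python
# Two-level cache lookup (sketch of S601-S606):
#   1) look up the cache key in the first cache database;
#   2) on a miss, look up the tag value in the second cache database;
#   3) if that is also null, fall back to the preset database, and
#      cache the result in the first cache database for next time.
import hashlib

def make_cache_key(params: dict) -> str:
    # Encrypt (here: hash) the query request parameters into a cache key.
    serialized = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.md5(serialized.encode()).hexdigest()

def query_hits(params, first_cache, second_cache, preset_db):
    key = make_cache_key(params)
    if key in first_cache:                       # S603/S604: first-level hit
        return first_cache[key]
    tag_value = params["tag_value"]
    hits = second_cache.get(tag_value)           # S605: second-level lookup
    if hits is None:                             # S606: fall back to preset DB
        hits = preset_db[tag_value]
    first_cache[key] = hits                      # map hit data to the cache key
    return hits

first_cache, second_cache = {}, {"5000": 42}
preset_db = {"7000": 7}
a = query_hits({"tag": "daily_click_rate", "tag_value": "5000"},
               first_cache, second_cache, preset_db)
b = query_hits({"tag": "daily_click_rate", "tag_value": "7000"},
               first_cache, second_cache, preset_db)
```

The first call is served from the second cache database, the second falls through to the preset database, and both results are then available in the first cache database for subsequent identical requests.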
Referring to fig. 7, fig. 7 is a schematic structural diagram of a data query device according to an embodiment of the present invention. The units included in the device in this embodiment are configured to execute the steps in the embodiments corresponding to fig. 2 to 6; please refer to the related descriptions of those embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 7, the query device 70 includes: a parsing module 71, a generating module 72, a query module 73, a first cache database output module 74, and a second cache database query module 75.
The analysis module 71 is configured to analyze the query request parameter to obtain a corresponding request policy;
the generating module 72 is configured to generate the query request parameter into a cache key according to a generating rule of a preset cache key;
the query module 73 is configured to query, according to the cache key, a hit amount matching the cache key in the first cache database;
a first cache database output module 74, configured to output the hit amount matching the cache key if the query is successful; the first cache database caches the hit amounts obtained when historical query request parameters were queried;
a second cache database query module 75, configured to, if the query fails, query, according to the tag condition corresponding to the request policy, the hit amount whose tag value satisfies the tag condition in the second cache database, map the hit data and the cache key, and then cache them in the first cache database; the second cache database caches the tag values hit during historical queries of query request parameters and the corresponding hit amounts.
Optionally, the parsing module 71 includes:
the query parameter analyzing unit is used for analyzing the query request parameters to obtain request strategy identification numbers corresponding to the query request parameters;
and the matching unit is used for matching the identification number against a preset strategy set to obtain the corresponding request strategy.
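The parsing and matching units above can be sketched as follows. The JSON parameter format, the strategy identifiers, and the contents of the preset strategy set are hypothetical, chosen only to illustrate the two-step lookup:

```python
# Sketch of the parsing module: extract the request-strategy identification
# number from the query request parameters, then match it against a preset
# strategy set to obtain the corresponding request strategy.
import json

PRESET_STRATEGY_SET = {
    "policy-001": {"tag_condition": "daily_click_rate"},
    "policy-002": {"tag_condition": "video_source"},
}

def parse_request_strategy(raw_params: str) -> dict:
    params = json.loads(raw_params)        # analyze the query request parameters
    strategy_id = params["strategy_id"]    # request-strategy identification number
    return PRESET_STRATEGY_SET[strategy_id]  # match in the preset strategy set

strategy = parse_request_strategy('{"strategy_id": "policy-001", "tag_value": 5000}')
```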
Optionally, the generating module 72 includes:
and the encryption unit is used for encrypting the query request parameters to obtain encrypted data, and taking the encrypted data as a cache key.
Optionally, the querying device 70 further includes:
the first cache database construction module is used for constructing a first cache database based on a cluster architecture, and the first cache database is used for caching a hit result set;
and the second cache database construction module is used for constructing a second cache database based on the master-slave replication architecture, and the second cache database is used for caching the tag set.
Optionally, the querying device 70 further includes:
the dividing module is used for dividing the tag conditions into dynamic tag conditions and static tag conditions according to the preset threshold and the number of changes of the tag values corresponding to the tag conditions, and acquiring the dynamic tag conditions;
and the dynamic tag caching module is used for searching a preset database for the data corresponding to the dynamic tag conditions and caching the found data in the second cache database.
Optionally, the querying device 70 further includes:
and the updating module is used for calling back the service interface corresponding to the tag condition and updating the tag value in the second cache database if the tag value corresponding to the tag condition changes.
Optionally, the querying device 70 further includes:
and the preset database query module is used for querying the preset database according to the query request parameters if the hit amount whose tag value satisfies the tag condition in the second cache database is null.
It should be noted that, because the contents of information interaction, execution process, and the like between the above units are based on the same concept, specific functions and technical effects thereof according to the method embodiment of the present invention, reference may be made to the part of the method embodiment specifically, and details are not described herein again.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown in fig. 8, the computer device of this embodiment includes: at least one processor (only one shown in fig. 8), a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various data query method embodiments described above when executing the computer program.
The computer device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that fig. 8 is merely an example of a computer device and is not intended to be limiting, and that a computer device may include more or fewer components than those shown, or some components may be combined, or different components may be included, such as a network interface, a display screen, and input devices, etc.
The processor may be a CPU, another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory includes readable storage media, internal memory, etc., wherein the internal memory may be the internal memory of the computer device, and the internal memory provides an environment for the operating system and the execution of the computer-readable instructions in the readable storage media. The readable storage medium may be a hard disk of the computer device, and in other embodiments may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the computer device. Further, the memory may also include both internal and external storage units of the computer device. The memory is used for storing an operating system, application programs, a BootLoader (BootLoader), data, and other programs, such as program codes of a computer program, and the like. The memory may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method of the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by a processor to implement the steps of the above method embodiments. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. 
The computer readable medium may include at least: any entity or device capable of carrying computer program code, recording medium, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
The present invention can also be implemented by a computer program product, which when executed on a computer device causes the computer device to implement all or part of the processes in the method of the above embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the above-described apparatus/computer device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for querying data, the method comprising:
analyzing the query request parameters to obtain corresponding request strategies;
generating the query request parameter into a cache key according to a generation rule of a preset cache key;
querying, according to the cache key, the hit amount matching the cache key in a first cache database;
if the query is successful, outputting the hit amount matching the cache key, wherein the hit amounts obtained when historical query request parameters were queried are cached in the first cache database;
or, if the query fails, querying the hit amount whose tag value satisfies the tag condition in a second cache database according to the corresponding tag condition in the request policy, mapping the hit data and the cache key, and then caching them in the first cache database, wherein the tag values hit during historical queries of query request parameters and the corresponding hit amounts are cached in the second cache database.
2. The data query method of claim 1, wherein before parsing the query request parameters to obtain the corresponding request policy, the method further comprises:
constructing the first cache database based on a cluster architecture, wherein the first cache database is used for caching a hit result set;
and constructing the second cache database based on a master-slave replication architecture, wherein the second cache database is used for caching the tag set.
3. The data query method as claimed in claim 1, wherein said parsing the query request parameters to obtain the corresponding request policy comprises:
analyzing the query request parameters to obtain request strategy identification numbers corresponding to the query request parameters;
and matching the identification number in a preset strategy set through the identification number in the request strategy to obtain a corresponding request strategy.
4. The data query method according to claim 1, wherein the generating the query request parameter into the cache key according to a preset cache key generation rule includes:
and encrypting the query request parameters to obtain encrypted data, and taking the encrypted data as a cache key.
5. The data query method according to claim 1, wherein the step of, if the query fails, querying the hit amount whose tag value satisfies the tag condition in a second cache database according to the corresponding tag condition in the request policy, mapping the hit data and the cache key, and caching them in the first cache database further comprises:
dividing the tag condition into a dynamic tag condition and a static tag condition according to a preset threshold and the number of changes of the tag value corresponding to the tag condition, and acquiring the dynamic tag condition;
and searching a preset database for data corresponding to the dynamic tag condition, and caching the found data in the second cache database.
6. The data query method according to claim 1, wherein after, if the query fails, querying the hit amount whose tag value satisfies the tag condition in a second cache database according to the corresponding tag condition in the request policy, mapping the hit data and the cache key, and caching them in the first cache database, the method further comprises:
and if the tag value corresponding to the tag condition changes, calling back the service interface corresponding to the tag condition, and updating the tag value in the second cache database.
7. The data query method according to claim 1, wherein after, if the query fails, querying the hit amount whose tag value satisfies the tag condition in a second cache database according to the corresponding tag condition in the request policy, mapping the hit data and the cache key, and caching them in the first cache database, the method further comprises:
and if the hit amount whose tag value satisfies the tag condition in the second cache database is null, querying a preset database according to the query request parameters.
8. A data query apparatus, characterized in that the apparatus comprises:
the analysis module is used for analyzing the query request parameters to obtain a corresponding request strategy;
the generating module is used for generating the query request parameter into a cache key according to a generating rule of a preset cache key;
a query module, configured to query, according to the cache key, the hit amount matching the cache key in a first cache database;
the first cache database output module is used for outputting the hit amount matched with the cache key if the query is successful; the first cache database caches hit amount when historical query request parameters are queried;
a second cache database query module, configured to, if the query fails, query, according to the tag condition corresponding to the request policy, the hit amount whose tag value satisfies the tag condition in a second cache database, map the hit data and the cache key, and cache them in the first cache database; the second cache database caches the tag values hit during historical queries of query request parameters and the corresponding hit amounts.
9. A computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the data query method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements a data query method according to any one of claims 1 to 7.
CN202211030962.2A 2022-08-26 2022-08-26 Data query method and device, computer equipment and storage medium Pending CN115422237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211030962.2A CN115422237A (en) 2022-08-26 2022-08-26 Data query method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115422237A true CN115422237A (en) 2022-12-02

Family

ID=84199380



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination