CN112286971A - Cache data management method and device, server and computer storage medium


Info

Publication number
CN112286971A
CN112286971A (application CN202011215906.7A)
Authority
CN
China
Prior art keywords
data
cache
user
cache data
cached
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011215906.7A
Other languages
Chinese (zh)
Inventor
丁琪
辛绪武
侯培建
侯文捷
唐日清
王良浩
张益兵
邓洪桥
陈曦
欧辉
车甜甜
刘凯
曾菁
邓攀纪
仲卫南
苏振兴
董星辰
赵冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Huitong Jincai Beijing Information Technology Co ltd
China Power Finance Co ltd
Original Assignee
State Grid Huitong Jincai Beijing Information Technology Co ltd
China Power Finance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Huitong Jincai Beijing Information Technology Co ltd, China Power Finance Co ltd filed Critical State Grid Huitong Jincai Beijing Information Technology Co ltd
Priority to CN202011215906.7A priority Critical patent/CN112286971A/en
Publication of CN112286971A publication Critical patent/CN112286971A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1004Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's to protect a block of data words, e.g. CRC or checksum
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2255Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471Distributed queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Quality & Reliability (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a cache data management method and apparatus, a server, and a computer storage medium. The management method comprises: receiving a query request initiated by a user, wherein the query request specifies at least one item of cache data that the user needs to query; obtaining, from a distributed cache cluster, the cache data that each user needs to query, where the distributed cache cluster stores cache data according to service type; encapsulating the cache data that each user needs to query according to the service type to which it belongs, to obtain a cache data object; and finally displaying the cache data object to the user. In this way, the user can obtain the state information of the current cache data more intuitively.

Description

Cache data management method and device, server and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for managing cache data, a server, and a computer storage medium.
Background
With the rapid development of the information age, more and more cache data is generated every day.
As the amount of cache data grows, so does the difficulty of managing it. When the volume of cache data becomes huge and a problem occurs, the user's ability to analyze and diagnose the problem during subsequent troubleshooting is greatly limited, which increases the time and labor cost of troubleshooting.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, a server, and a computer storage medium for managing cached data, which enable a user to more intuitively obtain state information of current cached data.
A first aspect of the present application provides a method for managing cache data, including:
receiving a query request initiated by a user; wherein the query request specifies at least one item of cache data that the user needs to query;
obtaining, from the distributed cache cluster, the cache data that each user needs to query; the distributed cache cluster stores cache data according to service type;
encapsulating the cache data that each user needs to query according to the service type to which the cache data belongs, to obtain a cache data object;
and displaying the cache data object to the user.
Optionally, the query request includes an encrypted character string, and before the obtaining, from the distributed cache cluster, of the cache data that each user needs to query, the method further includes:
judging whether the encrypted character string is consistent with a pre-stored encrypted character string;
and if the encrypted character string is consistent with the pre-stored encrypted character string, executing the step of obtaining, from the distributed cache cluster, the cache data that each user needs to query.
Optionally, the manner in which the distributed cache cluster stores the cache data according to the service type includes:
receiving at least one piece of data to be cached uploaded by a user;
determining the service type of each piece of data to be cached;
performing a cyclic check code check on the name of the service type of each piece of data to be cached;
if the name of the service type of the data to be cached passes the cyclic check code check, performing a modulo operation over the total number of hash slots in the distributed cache cluster, and determining the hash slot of the data to be cached in the distributed cache;
and storing each piece of data to be cached according to its hash slot in the distributed cache.
Optionally, the obtaining cache data that each user needs to query from the distributed cache cluster includes:
and obtaining the cache data from the hash slot corresponding to the service type to which the cache data belongs.
A second aspect of the present application provides a management apparatus for caching data, including:
a first receiving unit, configured to receive a query request initiated by a user; wherein the query request specifies at least one item of cache data that the user needs to query;
an acquisition unit, configured to acquire, from the distributed cache cluster, the cache data that each user needs to query; the distributed cache cluster stores cache data according to service type;
an encapsulation unit, configured to encapsulate the cache data that each user needs to query according to the service type to which the cache data belongs, to obtain a cache data object;
and a display unit, configured to display the cache data object to the user.
Optionally, the query request includes an encrypted character string, and the management apparatus for caching data further includes:
the judging unit is used for judging whether the encrypted character string is consistent with a pre-stored encrypted character string;
and an execution unit, configured to, if the judging unit determines that the encrypted character string is consistent with the pre-stored encrypted character string, invoke the acquisition unit to acquire, from the distributed cache cluster, the cache data that each user needs to query.
Optionally, the storage unit, which stores the cache data to the distributed cache cluster according to the service type, includes:
the second receiving unit is used for receiving at least one piece of data to be cached uploaded by a user;
a first determining unit, configured to determine a service type of each piece of data to be cached;
the checking unit is used for carrying out cyclic check code checking on the name of the service type of each piece of data to be cached;
a second determining unit, configured to, if the name of the service type of the data to be cached passes the cyclic check code check, perform a modulo operation over the total number of hash slots in the distributed cache cluster and determine the hash slot of the data to be cached in the distributed cache;
and the storage subunit is configured to store each piece of data to be cached according to the hash slot of the data to be cached in the distributed cache.
Optionally, the obtaining unit includes:
and an obtaining subunit, configured to obtain the cache data from the hash slot corresponding to the service type to which the cache data belongs.
A third aspect of the present application provides a server comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the first aspects.
A fourth aspect of the present application provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of the first aspect.
According to the above scheme, the present application provides a method, an apparatus, a server and a computer storage medium for managing cache data, where the method for managing cache data includes: receiving a query request initiated by a user; wherein, the query request comprises at least one cache data which needs to be queried by the user; then, obtaining cache data which needs to be inquired by each user from the distributed cache cluster; the distributed cache cluster stores cache data according to service types; then according to the business type of the cache data, packaging the cache data which needs to be inquired by each user to obtain a cache data object; and finally, displaying the cache data object to a user. Therefore, the user can more intuitively acquire the state information of the current cache data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a detailed flowchart of a management method for cache data according to an embodiment of the present disclosure;
fig. 2 is a detailed flowchart of an implementation manner in which cache data is stored according to a service type in a distributed cache cluster according to another embodiment of the present application;
fig. 3 is a detailed flowchart of a method for managing cache data according to another embodiment of the present application;
fig. 4 is a schematic diagram of a management apparatus for caching data according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a memory cell according to another embodiment of the present application;
fig. 6 is a schematic diagram of a management apparatus for caching data according to another embodiment of the present application;
fig. 7 is a schematic diagram of a server implementing a cache data management method according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like used in this application are only intended to distinguish different devices, modules or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these devices, modules or units. In addition, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
An embodiment of the present application provides a method for managing cache data, which specifically includes, as shown in fig. 1, the following steps:
s101, receiving a query request initiated by a user.
The query request includes at least one item of cache data that the user needs to query.
The user may initiate the query request in a browser over HTTP. For example, the user enters at least one item of cache data to be queried on the query page and clicks the query button, which initiates the query request.
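By way of an illustrative sketch, the query request might be received over HTTP as follows; the Flask framework, the /cache/query route and the field names keys and token are assumptions for illustration only and are not limited herein.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/cache/query", methods=["POST"])
def query_cache():
    # The request body carries the cache data the user wants to query; the
    # field names "keys" and "token" are illustrative, not defined by the patent.
    body = request.get_json(force=True)
    keys = body.get("keys", [])    # at least one item of cache data to query
    token = body.get("token", "")  # encrypted character string (see steps S301/S302)
    if not keys:
        return jsonify({"error": "no cache data specified"}), 400
    # ... obtain the data from the distributed cache cluster (S102), encapsulate
    # it by service type (S103) and return it for display (S104) ...
    return jsonify({"requested": keys, "session": bool(token)})
```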
S102, obtaining cache data which needs to be inquired by each user from the distributed cache cluster.
The distributed cache cluster stores cache data according to service type. The service types may be distinguished by, but are not limited to, bank, user, asset, front-end processor, monitoring time, and the like.
It should be noted that the distributed cache cluster may adopt a Redis cache, a Memcached cache, or the like. Comparing the two in terms of performance, both perform well; however, because Redis uses only a single core while Memcached can use multiple cores, Redis achieves higher performance per core than Memcached when storing small data, whereas for data above 100k Memcached outperforms Redis. Comparing them in terms of memory space and data volume, Memcached allows the maximum memory to be modified and uses an LRU eviction algorithm, while Redis adds virtual memory (VM) features and thus breaks through the limit of physical memory. Comparing them in terms of ease of operation, Memcached has a single data structure and is used only for caching data, whereas Redis supports richer data types and can process data directly on the server side, which reduces the number of network IO round trips and the volume of transferred data. Comparing them in terms of reliability, Memcached does not support data persistence, so data is lost after a power failure or restart, although its stability is guaranteed; Redis supports data persistence and data recovery and tolerates single-point failure, at the cost of some performance. Comparing them in terms of application scenarios, Memcached reduces the database load in dynamic systems and thereby improves performance, and is suitable for read-heavy, write-light workloads with large data volumes (for example, massive queries of user information, friend information and article information in a social network such as Renren); Redis is suitable for systems with high read/write efficiency requirements, complex data processing services, and high security requirements (for example, Sina Weibo's system for counting and publishing microblogs, which has high requirements on data safety and read/write performance).
Therefore, in the actual application process, different cache clusters can be selected and adopted in combination with different application requirements.
Optionally, in another embodiment of the present application, an implementation manner of step S102 specifically includes:
and obtaining the cache data from the hash slot corresponding to the service type to which the cache data belongs.
It should be noted that, if the distributed cache cluster adopted in step S102 is a Redis cache cluster, the Redis cache cluster does not use consistent hashing but adopts the concept of hash slots. A Redis cache cluster generally has 16384 virtual hash slots, and each node of the cluster is responsible for a portion of the hash slots.
Specifically, according to the service type to which the cache data belongs, the hash slot in which the cache data resides is located on the node responsible for that service type, and the cache data is obtained from that hash slot.
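As an illustrative sketch of this fetch, the following assumes the redis-py RedisCluster client and an illustrative key layout. The hash-tag notation ({...}) is a standard Redis Cluster feature: only the braced part of the key is hashed, so every key of one service type maps to the same hash slot and therefore lives on the same node.

```python
from typing import Optional
from redis.cluster import RedisCluster  # redis-py >= 4.1

rc = RedisCluster(host="127.0.0.1", port=6379, decode_responses=True)

def fetch_cached_data(service_type: str, member: str) -> Optional[str]:
    # Hash tags: only the service-type name between the braces is hashed, so all
    # keys of one service type fall into the same hash slot.
    key = f"{{{service_type}}}:{member}"
    return rc.get(key)  # the cluster client routes the GET to the node owning that slot

# e.g. fetch an entry whose service type is "bank" (the key layout is illustrative)
balance = fetch_cached_data("bank", "icbc:balance")
```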
Optionally, in another embodiment of the present application, an implementation manner in which cache data is stored in a distributed cache cluster according to a service type is shown in fig. 2, and includes:
s201, receiving at least one data to be cached uploaded by a user.
The user can initiate an upload request in the browser in an HTTP manner. For example: inputting at least one data to be cached to be uploaded on the HTTP, and uploading the at least one data to be cached after clicking an upload button.
S202, determining the service type of each piece of data to be cached.
It should be noted that, when uploading data to be cached, the user may not know which service type the data belongs to in the distributed cache. In that case, keyword extraction, synonym replacement, word segmentation and the like are performed on the name of the data to be cached to determine its service type.
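As an illustrative sketch of this determination, the following uses simple keyword matching on the data name; the keyword table is an assumption made for illustration, and a real implementation would also apply synonym replacement and word segmentation.

```python
# Illustrative keyword table mapping name fragments to service types.
SERVICE_KEYWORDS = {
    "bank":            ["bank", "branch"],
    "user":            ["user", "customer"],
    "asset":           ["asset", "balance"],
    "front-end":       ["front-end processor", "fep"],
    "monitoring time": ["monitor", "monitoring"],
}

def classify_service_type(data_name: str) -> str:
    """Guess the service type of a piece of data to be cached from its name."""
    name = data_name.lower()
    for service_type, keywords in SERVICE_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return service_type
    return "unknown"  # no keyword matched; a real system might prompt the user

print(classify_service_type("ICBC bank branch info"))  # -> "bank"
```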
S203, performing a cyclic check code check on the name of the service type of each piece of data to be cached.
Specifically, a cyclic check code (cyclic redundancy check) is computed over the key represented by the name of the service type of each piece of data to be cached, which ensures reliability during data transmission.
S204, if the name of the service type of the data to be cached passes the cyclic check code check, performing a modulo operation over the total number of hash slots in the distributed cache cluster, and determining the hash slot of the data to be cached in the distributed cache.
Specifically, for the character string corresponding to the key represented by the name of the service type of each piece of data to be cached, the check result is taken modulo the total number of hash slots in the distributed cache cluster, and the hash slot of the data to be cached in the distributed cache is determined from the result.
S205, storing each piece of data to be cached according to its hash slot in the distributed cache.
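As an illustrative sketch of the slot computation, the following assumes that the cyclic check code is the CRC16 (XMODEM) checksum used by Redis Cluster and that the modulus is the 16384 hash slots mentioned above.

```python
HASH_SLOTS = 16384  # number of virtual hash slots in a Redis cache cluster

def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0x0000), the cyclic check
    code Redis Cluster applies to keys before taking the result modulo the
    number of hash slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def slot_for(service_type_name: str) -> int:
    # S203/S204: check the service-type name, then take the result modulo the
    # total number of hash slots to pick the slot the data will be stored in.
    return crc16_xmodem(service_type_name.encode("utf-8")) % HASH_SLOTS

print(slot_for("bank"))  # some slot in 0..16383
```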
S103, encapsulating the cache data that each user needs to query according to the service type to which the cache data belongs, to obtain a cache data object.
The cache data object may be data in JSON format or data in another format, which is not limited herein.
For example, a user requests to query the cache data of several banks and the cache data of several monitoring times; the cache data whose service type is bank is then encapsulated together to obtain the cache data object corresponding to the bank cache data, and the cache data whose service type is monitoring time is encapsulated together to obtain the cache data object corresponding to the monitoring time.
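As an illustrative sketch of step S103, the following groups fetched cache entries by service type and renders each group as one JSON cache data object; the record fields used here are assumptions for illustration.

```python
import json
from collections import defaultdict

def encapsulate_by_service_type(entries):
    """Group fetched cache entries by service type and render one JSON object
    per service type; the record fields are illustrative."""
    grouped = defaultdict(list)
    for entry in entries:
        grouped[entry["service_type"]].append(
            {"key": entry["key"], "value": entry["value"]})
    return {stype: json.dumps(items, ensure_ascii=False)
            for stype, items in grouped.items()}

cache_objects = encapsulate_by_service_type([
    {"service_type": "bank", "key": "icbc:balance", "value": "1024"},
    {"service_type": "bank", "key": "abc:balance", "value": "2048"},
    {"service_type": "monitoring time", "key": "fep-01", "value": "2020-11-04 10:00"},
])
# cache_objects["bank"] is one JSON string covering both bank entries.
```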
And S104, displaying the cached data object to a user.
According to the above scheme, the cache data management method provided by the application receives a query request initiated by a user, wherein the query request specifies at least one item of cache data that the user needs to query; the cache data that each user needs to query is then obtained from the distributed cache cluster, where the distributed cache cluster stores cache data according to service type; the cache data that each user needs to query is encapsulated according to the service type to which it belongs, to obtain a cache data object; and finally the cache data object is displayed to the user. In this way, the user can obtain the state information of the current cache data more intuitively.
Optionally, in another embodiment of the present application, an implementation manner of a method for managing cache data, as shown in fig. 3, includes:
s301, receiving a query request initiated by a user.
The query request includes at least one item of cache data that the user needs to query and an encrypted character string. The encrypted character string is generated by a server, and the server corresponding to the encrypted character string can be identified from the encrypted character string.
It should be noted that, because multiple servers are deployed to form the distributed cache cluster, data consistency must be guaranteed. A request initiated by a client may be distributed to any server in the distributed cache cluster, so the encrypted character string is used to determine whether the request belongs to the same session and, accordingly, from which server the cached data should be returned.
It should be further noted that the specific implementation process of step S301 is the same as the specific implementation process of step S101, and reference may be made to this.
S302, judging whether the encrypted character string is consistent with the pre-stored encrypted character string.
Specifically, it is judged whether the encrypted character string in the received query request is consistent with the encrypted character string pre-stored in the server. If they are consistent, step S303 is executed. If they are inconsistent, it is judged whether the encrypted character string in the received query request is consistent with the encrypted character string pre-stored in another server, until a server whose pre-stored encrypted character string is consistent with that in the received query request is found, and then step S303 is executed.
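As an illustrative sketch of step S302, the following assumes that each server keeps its own pre-stored encrypted character string; the server names and token values are placeholders, not values defined by the patent.

```python
import hmac

# Pre-stored encrypted character strings per server (placeholder values).
PRESTORED_TOKENS = {
    "cache-server-1": "9f2c6a1e",
    "cache-server-2": "4b7d0e3a",
}

def find_session_server(request_token: str):
    """Return the server whose pre-stored encrypted string matches the one in
    the query request (step S302), or None if no server recognizes the session."""
    for server, stored in PRESTORED_TOKENS.items():
        # constant-time comparison so the check does not leak token contents
        if hmac.compare_digest(request_token, stored):
            return server  # step S303 proceeds on this server
    return None

print(find_session_server("4b7d0e3a"))  # -> "cache-server-2"
```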
S303, obtaining the cache data which needs to be inquired by each user from the distributed cache cluster.
It should be noted that the specific implementation process of step S303 is the same as the specific implementation process of step S102, and reference may be made to this.
S304, according to the service type of the cache data, the cache data which needs to be inquired by each user is packaged to obtain a cache data object.
It should be noted that the specific implementation process of step S304 is the same as the specific implementation process of step S103, and reference may be made to this.
S305, displaying the cache data object to a user.
It should be noted that the specific implementation process of step S305 is the same as the specific implementation process of step S104, and reference may be made to this.
According to the above scheme, the cache data management method provided by the application receives a query request initiated by a user, wherein the query request specifies at least one item of cache data that the user needs to query; it is then judged whether the encrypted character string is consistent with the pre-stored encrypted character string, and if so, the cache data that each user needs to query is obtained from the distributed cache cluster, where the distributed cache cluster stores cache data according to service type; the cache data that each user needs to query is encapsulated according to the service type to which it belongs, to obtain a cache data object; and finally the cache data object is displayed to the user. In this way, the user can obtain the state information of the current cache data more intuitively.
Another embodiment of the present application provides a management apparatus for caching data, as shown in fig. 4, specifically including:
A first receiving unit 401, configured to receive a query request initiated by a user.
The query request includes at least one item of cache data that the user needs to query.
An obtaining unit 402, configured to obtain, from the distributed cache cluster, cache data that each user needs to query.
And the distributed cache cluster stores cache data according to the service type.
Optionally, in another embodiment of the present application, an implementation manner of the obtaining unit 402 includes:
And an obtaining subunit, configured to obtain the cache data from the hash slot corresponding to the service type to which the cache data belongs.
For specific working processes of the units disclosed in the above embodiments of the present application, reference may be made to the contents of the corresponding method embodiments, which are not described herein again.
Optionally, in another embodiment of the present application, an implementation of the storage unit that stores the cache data to the distributed cache cluster according to the service type is shown in fig. 5, and includes:
a second receiving unit 501, configured to receive at least one to-be-cached data uploaded by a user.
A first determining unit 502, configured to determine a service type of each data to be buffered.
The checking unit 503 is configured to perform cyclic check code checking on the name of the service type of each piece of data to be cached.
A second determining unit 504, configured to, if the name of the service type of the data to be cached passes the cyclic check code check, perform a modulo operation over the total number of hash slots in the distributed cache cluster and determine the hash slot of the data to be cached in the distributed cache.
And the storage subunit 505 is configured to store each piece of data to be cached according to a hash slot of the data to be cached in the distributed cache.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 2, which is not described herein again.
The encapsulating unit 403 is configured to encapsulate the cache data that needs to be queried by each user according to the service type to which the cache data belongs, so as to obtain a cache data object.
And a presentation unit 404, configured to present the cached data object to a user.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 1, which is not described herein again.
According to the above scheme, in the cache data management apparatus provided by the application, the first receiving unit 401 receives a query request initiated by a user, wherein the query request specifies at least one item of cache data that the user needs to query; the obtaining unit 402 then obtains the cache data that each user needs to query from the distributed cache cluster, where the distributed cache cluster stores cache data according to service type; the encapsulating unit 403 encapsulates the cache data that each user needs to query according to the service type to which it belongs, to obtain a cache data object; and finally the presentation unit 404 presents the cache data object to the user. In this way, the user can obtain the state information of the current cache data more intuitively.
Optionally, in another embodiment of the present application, an implementation manner of the management apparatus for caching data, as shown in fig. 6, includes:
a first receiving unit 601, configured to receive a query request initiated by a user and including an encrypted character string.
The query request includes at least one item of cache data that the user needs to query.
A judging unit 602, configured to judge whether the encrypted string is consistent with a pre-stored encrypted string.
An executing unit 603, configured to, if the judging unit 602 determines that the encrypted character string is consistent with the pre-stored encrypted character string, invoke the obtaining unit 604 to obtain, from the distributed cache cluster, the cache data that each user needs to query.
An obtaining unit 604, configured to obtain, from the distributed cache cluster, cache data that each user needs to query.
And the distributed cache cluster stores cache data according to the service type.
The encapsulating unit 605 is configured to encapsulate the cache data that each user needs to query according to the service type to which the cache data belongs, so as to obtain a cache data object.
And a presentation unit 606 for presenting the cached data object to a user.
For a specific working process of the unit disclosed in the above embodiment of the present application, reference may be made to the content of the corresponding method embodiment, as shown in fig. 3, which is not described herein again.
According to the above scheme, in the cache data management apparatus provided by the application, the first receiving unit 601 receives a query request initiated by a user, wherein the query request specifies at least one item of cache data that the user needs to query; the judging unit 602 then judges whether the encrypted character string is consistent with the pre-stored encrypted character string, and when the judging unit 602 determines that they are consistent, the executing unit 603 invokes the obtaining unit 604 to obtain the cache data that each user needs to query from the distributed cache cluster, where the distributed cache cluster stores cache data according to service type; the encapsulating unit 605 encapsulates the cache data that each user needs to query according to the service type to which it belongs, to obtain a cache data object; and finally the presentation unit 606 presents the cache data object to the user. In this way, the user can obtain the state information of the current cache data more intuitively.
Another embodiment of the present application provides a server, as shown in fig. 7, including:
one or more processors 701.
A storage 702 having one or more programs stored thereon.
The one or more programs, when executed by the one or more processors 701, cause the one or more processors 701 to implement a method as in any of the above embodiments.
Another embodiment of the present application provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method as described in any of the above embodiments.
In the above embodiments disclosed in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present disclosure may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a live broadcast device, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for managing cache data, comprising:
receiving a query request initiated by a user; wherein the query request specifies at least one item of cache data that the user needs to query;
obtaining, from the distributed cache cluster, the cache data that each user needs to query; the distributed cache cluster stores cache data according to service type;
encapsulating the cache data that each user needs to query according to the service type to which the cache data belongs, to obtain a cache data object;
and displaying the cache data object to a user.
2. The management method according to claim 1, wherein the query request includes an encrypted string, and before obtaining the cache data that each of the users needs to query from the distributed cache cluster, the method further includes:
judging whether the encrypted character string is consistent with a pre-stored encrypted character string or not;
and if the encrypted character string is judged to be consistent with the pre-stored encrypted character string, executing the step of obtaining, from the distributed cache cluster, the cache data that each user needs to query.
3. The management method according to claim 1, wherein the manner in which the cache data is stored in the distributed cache cluster according to the service type includes:
receiving at least one piece of data to be cached uploaded by a user;
determining the service type of each piece of data to be cached;
performing a cyclic check code check on the name of the service type of each piece of data to be cached;
if the name of the service type of the data to be cached passes the cyclic check code check, performing a modulo operation over the total number of hash slots in the distributed cache cluster, and determining the hash slot of the data to be cached in the distributed cache;
and storing each piece of data to be cached according to its hash slot in the distributed cache.
4. The management method according to claim 3, wherein the obtaining cache data that each of the users needs to query from the distributed cache cluster includes:
and obtaining the cache data from the hash slot corresponding to the service type to which the cache data belongs.
5. A management apparatus for caching data, comprising:
a first receiving unit, configured to receive a query request initiated by a user; wherein the query request specifies at least one item of cache data that the user needs to query;
an acquisition unit, configured to acquire, from the distributed cache cluster, the cache data that each user needs to query; the distributed cache cluster stores cache data according to service type;
an encapsulation unit, configured to encapsulate the cache data that each user needs to query according to the service type to which the cache data belongs, to obtain a cache data object;
and the display unit is used for displaying the cache data object to a user.
6. The management apparatus according to claim 5, wherein the query request includes an encryption string, and the apparatus for managing cache data further comprises:
the judging unit is used for judging whether the encrypted character string is consistent with a pre-stored encrypted character string;
and an execution unit, configured to, if the judging unit determines that the encrypted character string is consistent with the pre-stored encrypted character string, invoke the acquisition unit to acquire, from the distributed cache cluster, the cache data that each user needs to query.
7. The management apparatus according to claim 5, wherein the storage unit, which stores the cache data to the distributed cache cluster according to the service type, comprises:
the second receiving unit is used for receiving at least one piece of data to be cached uploaded by a user;
a first determining unit, configured to determine a service type of each piece of data to be cached;
the checking unit is used for carrying out cyclic check code checking on the name of the service type of each piece of data to be cached;
a second determining unit, configured to, if the name of the service type of the data to be cached passes the cyclic check code check, perform a modulo operation over the total number of hash slots in the distributed cache cluster and determine the hash slot of the data to be cached in the distributed cache;
and the storage subunit is configured to store each piece of data to be cached according to the hash slot of the data to be cached in the distributed cache.
8. The management apparatus according to claim 7, wherein the acquisition unit includes:
and an obtaining subunit, configured to obtain the cache data from the hash slot corresponding to the service type to which the cache data belongs.
9. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
10. A computer storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any of claims 1 to 4.
CN202011215906.7A 2020-11-04 2020-11-04 Cache data management method and device, server and computer storage medium Pending CN112286971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011215906.7A CN112286971A (en) 2020-11-04 2020-11-04 Cache data management method and device, server and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011215906.7A CN112286971A (en) 2020-11-04 2020-11-04 Cache data management method and device, server and computer storage medium

Publications (1)

Publication Number Publication Date
CN112286971A true CN112286971A (en) 2021-01-29

Family

ID=74351923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011215906.7A Pending CN112286971A (en) 2020-11-04 2020-11-04 Cache data management method and device, server and computer storage medium

Country Status (1)

Country Link
CN (1) CN112286971A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110137888A1 (en) * 2009-12-03 2011-06-09 Microsoft Corporation Intelligent caching for requests with query strings
US20140223100A1 (en) * 2013-02-07 2014-08-07 Alex J. Chen Range based collection cache
CN109388657A (en) * 2018-09-10 2019-02-26 平安科技(深圳)有限公司 Data processing method, device, computer equipment and storage medium
CN109769028A (en) * 2019-01-25 2019-05-17 深圳前海微众银行股份有限公司 Redis cluster management method, device, equipment and readable storage medium storing program for executing

Similar Documents

Publication Publication Date Title
WO2017028697A1 (en) Method and device for growing or shrinking computer cluster
US11232253B2 (en) Document capture using client-based delta encoding with server
CN106649670B (en) Data monitoring method and device based on stream computing
CN107229619B (en) Method and device for counting and displaying calling condition of internet service link
US11122128B2 (en) Method and device for customer resource acquisition, terminal device and storage medium
CN106202235B (en) Data processing method and device
CN106980699B (en) Data processing platform and system
CN111447102B (en) SDN network device access method and device, computer device and storage medium
CN110661829B (en) File downloading method and device, client and computer readable storage medium
CN113411404A (en) File downloading method, device, server and storage medium
US10903989B2 (en) Blockchain transaction processing method and apparatus
US20160269446A1 (en) Template representation of security resources
WO2015154682A1 (en) Network request processing method, network server, and network system
CN113010542B (en) Service data processing method, device, computer equipment and storage medium
CN113467855A (en) Webpage request processing method and device, electronic equipment and storage medium
CN117171108A (en) Virtual model mapping method and system
CN112286971A (en) Cache data management method and device, server and computer storage medium
CN114039801B (en) Short link generation method, short link analysis system, short link analysis equipment and storage medium
CN107294766B (en) Centralized control method and system
CN112528189B (en) Data-based component packaging method and device, computer equipment and storage medium
CN112416875B (en) Log management method, device, computer equipment and storage medium
CN112910988A (en) Resource acquisition method and resource scheduling device
US20160127496A1 (en) Method and system of content caching and transmission
CN113553518A (en) Resource identifier generation method, device, equipment and storage medium
CN112788077A (en) Data acquisition method and device, computer equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination