CN113760974A - Dynamic caching method, device and system - Google Patents

Dynamic caching method, device and system

Info

Publication number
CN113760974A
CN113760974A (application CN202010935445.4A)
Authority
CN
China
Prior art keywords
cache, indicating, judging, local, local cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010935445.4A
Other languages
Chinese (zh)
Inventor
何进萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Wodong Tianjun Information Technology Co Ltd
Priority application: CN202010935445.4A
Publication: CN113760974A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/24 — Querying
    • G06F 16/245 — Query processing
    • G06F 16/2455 — Query execution
    • G06F 16/24552 — Database cache management
    • G06F 16/25 — Integrating or interfacing systems involving database management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a dynamic caching method, apparatus, and system, relating to the field of computer technology. One embodiment of the method comprises: determining whether a traffic peak has occurred; if so, disabling the current cache policy and loading application data from the database into a local cache; and responding to users' data query requests using the local cache. By caching application data on the application server's own physical resources when a traffic peak occurs, this embodiment solves, at low cost, the technical problems of unstable service, degraded performance, and cache penetration caused by traffic peaks.

Description

Dynamic caching method, device and system
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, and a system for dynamic caching.
Background
In the prior art, caching is usually handled with Redis (a key-value storage system). However, when facing a traffic peak involving a small amount of data but a very large number of requests, especially when traffic instantly increases twenty-fold or more, a single Redis instance cannot bear the enormous network traffic, and phenomena such as cache penetration are likely to occur at the height of the surge. The database then comes under excessive pressure, service performance degrades, and front-end requests cannot be answered in time. Handling the traffic peak by adding Redis instances, on the other hand, is costly and runs counter to the principle of reducing cost while improving efficiency.
Disclosure of Invention
In view of this, embodiments of the present invention provide a dynamic caching method, apparatus, and system that cache application data on the application server's physical resources when a traffic peak occurs, solving at low cost the technical problems of unstable service, performance degradation, and cache penetration caused by traffic peaks.
To achieve the above object, according to an aspect of the embodiments of the present invention, there is provided a dynamic caching method, including:
determining whether a traffic peak occurs;
if so, disabling the current cache policy and loading application data from the database into a local cache;
and responding to users' data query requests using the local cache.
Optionally, the method of the embodiment of the present invention further includes: determining whether the traffic peak has ended; if so, clearing the local cache and responding to users' data query requests using the previous cache policy.
Optionally, the determining whether a traffic peak occurs includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be enabled; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred; or,
determining whether a preset cache instruction indicating that the local cache policy should be enabled has been triggered; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred.
Optionally, determining whether the traffic peak has ended includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be disabled; if so, determining that the traffic peak has ended; otherwise, determining that it has not; or,
determining whether a preset cache instruction indicating that the local cache policy should be disabled has been triggered; if so, determining that the traffic peak has ended; otherwise, determining that it has not.
Optionally, the cache instruction includes a cache policy field, whose value is: a first field value indicating that the local cache policy should be enabled, a second field value indicating that it should be updated, or a third field value indicating that it should be disabled.
Optionally, the cache instruction further includes: an execution time field indicating when the operation corresponding to the cache policy field is to be executed; and/or a cache invalidation field indicating the expiry time of the application data in the local cache.
According to a second aspect of the embodiments of the present invention, there is provided a dynamic caching apparatus, including:
a determining module for determining whether a traffic peak occurs;
a switching module for disabling the current cache policy and loading application data from the database into the local cache if a traffic peak occurs;
and a response module for responding to users' data query requests using the local cache.
Optionally, the determining module is further configured to determine whether the traffic peak has ended and, if so, clear the local cache; the response module is further configured to respond to users' data query requests using the previous cache policy.
Optionally, the determining module determining whether a traffic peak occurs includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be enabled; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred; or,
determining whether a preset cache instruction indicating that the local cache policy should be enabled has been triggered; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred.
Optionally, the determining module determining whether the traffic peak has ended includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be disabled; if so, determining that the traffic peak has ended; otherwise, determining that it has not; or,
determining whether a preset cache instruction indicating that the local cache policy should be disabled has been triggered; if so, determining that the traffic peak has ended; otherwise, determining that it has not.
Optionally, the cache instruction includes a cache policy field, whose value is: a first field value indicating that the local cache policy should be enabled, a second field value indicating that it should be updated, or a third field value indicating that it should be disabled.
Optionally, the cache instruction further includes: an execution time field indicating when the operation corresponding to the cache policy field is to be executed; and/or a cache invalidation field indicating the expiry time of the application data in the local cache.
According to a third aspect of the embodiments of the present invention, there is provided a dynamic caching system, including a console, message middleware, and an application server, wherein:
the console is used to generate a cache instruction indicating that the local cache policy should be enabled and publish it to the message middleware;
the application server subscribes to the message middleware; upon receiving the cache instruction indicating that the local cache policy should be enabled, it determines that a traffic peak has occurred, disables the current cache policy, loads application data from the database into the local cache, and responds to users' data query requests using the local cache.
Optionally, the console is further configured to generate a cache instruction indicating that the local cache policy should be disabled and publish it to the message middleware;
upon receiving that instruction, the application server determines that the traffic peak has ended, clears the local cache, and responds to users' data query requests using the previous cache policy.
Optionally, the cache instruction includes a cache policy field, whose value is: a first field value indicating that the local cache policy should be enabled, a second field value indicating that it should be updated, or a third field value indicating that it should be disabled.
Optionally, the cache instruction further includes: an execution time field indicating when the operation corresponding to the cache policy field is to be executed; and/or a cache invalidation field indicating the expiry time of the application data in the local cache.
According to a fourth aspect of the embodiments of the present invention, there is provided an electronic device with dynamic cache, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fifth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
One embodiment of the above invention has the following advantage or benefit: by caching application data on the application server's own physical resources (CPU and memory) when a traffic peak occurs, the technical problems of unstable service, performance degradation, and cache penetration caused by traffic peaks can be solved at low cost.
Further effects of the above non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting it. In the drawings:
FIG. 1 is a schematic diagram of a main flow of a dynamic caching method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the main modules of an apparatus for dynamic caching according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a dynamic cache system in an alternative embodiment of the invention;
FIG. 4 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 5 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
According to an aspect of the embodiments of the present invention, a method for dynamic caching is provided.
Fig. 1 is a schematic diagram of the main flow of a dynamic caching method according to an embodiment of the present invention. As shown in Fig. 1, the method includes steps S101, S102, and S103.
In step S101, it is determined whether a traffic peak has occurred; if so, the flow proceeds to step S102, otherwise this step is repeated. In step S102, the current cache policy is disabled and application data is loaded from the database into the local cache. In step S103, users' data query requests are served from the local cache.
Illustratively, the application server's current cache policy uses a Redis cache: when the server receives a user's query request, it first looks up the application data in Redis, and on a miss queries the database and caches the result in Redis. After the local cache policy is enabled, the server no longer uses Redis; instead it reads its own memory, and on a miss queries the database and stores the return value in the local cache according to the cache policy.
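The two read paths just described can be sketched as follows. This is a minimal, hypothetical model in which plain dicts stand in for Redis, the local cache, and the database; all names are illustrative and not taken from the patent:

```python
# Hypothetical sketch of the two read paths: a Redis-backed path used
# normally, and an in-process (local) path used during a traffic peak.
class CacheSwitcher:
    def __init__(self, db):
        self.db = db                  # source of truth (stands in for the database)
        self.redis = {}               # stands in for the shared Redis cache
        self.local = {}               # in-process cache on the application server
        self.local_policy_on = False  # toggled when a traffic peak is detected

    def query(self, key):
        cache = self.local if self.local_policy_on else self.redis
        if key in cache:              # cache hit
            return cache[key]
        value = self.db.get(key)      # miss: fall through to the database
        if value is not None:
            cache[key] = value        # store the return value per the cache policy
        return value

    def enable_local_policy(self):
        """Traffic peak detected: switch to the local cache policy."""
        self.local_policy_on = True
        self.local.update(self.db)    # load application data from the database

    def disable_local_policy(self):
        """Traffic peak over: clear the local cache and resume Redis."""
        self.local_policy_on = False
        self.local.clear()
```

The switch is a single flag flip plus a bulk load, which is why the patent can treat "enable the local cache policy" as one instruction.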
A traffic peak is a sudden increase in traffic, for example traffic suddenly reaching a set traffic threshold, or the traffic increment reaching a set increment threshold. In practice, the way of determining whether a traffic peak occurs can be chosen according to the actual situation. For example, the actual traffic of the application server is monitored in real time, and when it suddenly reaches a set traffic threshold, or the traffic increment reaches a set increment threshold, a traffic peak is judged to have occurred.
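That detection rule can be written as a one-line predicate. The threshold values below are made-up examples, not figures from the patent:

```python
def is_traffic_peak(current_qps, previous_qps,
                    qps_threshold=10_000, increment_threshold=5_000):
    """Return True when actual traffic reaches the set traffic threshold,
    or when the traffic increment reaches the increment threshold."""
    increment = current_qps - previous_qps
    return current_qps >= qps_threshold or increment >= increment_threshold
```

A monitoring loop would call this on each sampling interval and, on the first True, issue the instruction that enables the local cache policy.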
It should be noted that, under the local cache policy, either part or all of the application data may be loaded from the database. Suppose the goal of the local cache policy is to raise the local cache hit rate to 100% at midnight so that no data requests reach the database; then the full set of application data is loaded from the database into the application server's local cache. Illustratively, the console issues a full-cache local cache policy instruction, and according to the value of the cache policy field and the time indicated by the execution time field, the application server starts actively downloading data and loads all application data from the database into the local cache. For example, if active loading starts at 23:50 and the cached application data has a 15-minute lifetime, the local cache hit rate is close to 100% around midnight.
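The timing in the 23:50 example can be checked with a small helper (a sketch; the 10-minute lead time and 15-minute TTL are the figures from the example above, and the function name is illustrative):

```python
from datetime import datetime, timedelta

def preload_schedule(target, lead_minutes=10, ttl_minutes=15):
    """Active loading starts lead_minutes before target. Because the cache
    TTL exceeds the lead time, even entries loaded at the very start of the
    window are still valid at target, so the hit rate there is ~100%."""
    start = target - timedelta(minutes=lead_minutes)
    earliest_expiry = start + timedelta(minutes=ttl_minutes)
    return start, earliest_expiry

midnight = datetime(2021, 6, 18, 0, 0)
start, expiry = preload_schedule(midnight)
# loading starts at 23:50; the first-loaded entry lives until 00:05
```

The constraint to preserve is simply ttl_minutes >= lead_minutes; otherwise early-loaded entries would expire before the target moment.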
By caching application data on the application server's own physical resources (CPU and memory) when a traffic peak occurs, the technical problems of unstable service, performance degradation, and cache penetration caused by traffic peaks can be solved at low cost.
In some optional embodiments, determining whether a traffic peak occurs comprises: monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be enabled; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred. Illustratively, the console generates a cache instruction indicating that the local cache policy should be enabled and publishes it to the message middleware; the application server subscribes to the message middleware and, upon receiving that instruction, determines that a traffic peak has occurred. The publish-subscribe approach decouples the generation of cache instructions from their execution and is easy to extend.
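The console, message middleware, and application server roles can be sketched with an in-memory publish-subscribe stub. The message format and class names are illustrative assumptions, not the patent's wire format:

```python
class MessageMiddleware:
    """Toy stand-in for an MQ topic used in publish-subscribe mode."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        for cb in self._subscribers:
            cb(message)

class AppServer:
    def __init__(self, mq):
        self.local_policy_on = False
        mq.subscribe(self.on_cache_instruction)

    def on_cache_instruction(self, msg):
        # type 1 = enable the local cache policy (a traffic peak is judged
        # to have occurred); type 3 = disable it (the peak has ended)
        if msg.get("type") == 1:
            self.local_policy_on = True
        elif msg.get("type") == 3:
            self.local_policy_on = False

mq = MessageMiddleware()
server = AppServer(mq)
mq.publish({"type": 1})   # the console announces the peak
```

The decoupling benefit is visible here: the console never references the servers directly, so more application servers can subscribe without any change on the publishing side.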
In other alternative embodiments, determining whether a traffic peak occurs includes: determining whether a preset cache instruction indicating that the local cache policy should be enabled has been triggered; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred. Illustratively, such a cache instruction is configured in the application server in advance, for example to enable the local cache policy at 23:55 on 17 June 2021; when that time arrives, the application server switches to the local cache policy and responds to users' data query requests using the local cache. With a preset cache instruction, the local cache can be enabled when the traffic peak occurs without depending on a console, message middleware, and the like, keeping system complexity low.
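A preset, time-triggered instruction needs no middleware at all. A sketch, using the 23:55 time from the example above (class and method names are illustrative):

```python
from datetime import datetime

class PresetCacheInstruction:
    """Fires exactly once when the configured time is reached."""
    def __init__(self, fire_at, instruction_type):
        self.fire_at = fire_at
        self.instruction_type = instruction_type  # 1 = enable, 3 = disable
        self.fired = False

    def poll(self, now):
        """Called periodically; returns the instruction type once, at or
        after fire_at, and None on every other call."""
        if not self.fired and now >= self.fire_at:
            self.fired = True
            return self.instruction_type
        return None

preset = PresetCacheInstruction(datetime(2021, 6, 17, 23, 55), 1)
```

A periodic task on the application server would poll this and flip the cache policy when it fires, with no console or MQ in the loop.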
In practical applications, the method of the embodiment of the present invention may further include: determining whether the traffic peak has ended; if so, clearing the local cache and responding to users' data query requests using the previous cache policy. Illustratively, the application server's previous cache policy uses Redis: on a user query it first looks up the data in Redis, and on a miss queries the database and caches the result in Redis. When a traffic peak occurs, the application server enables the local cache policy: on a user query it reads its own memory, and on a miss queries the database and stores the return value in the local cache according to the cache policy. After the traffic peak ends, the application server clears the local cache and returns to the Redis cache, again looking up data in Redis first and, on a miss, querying the database and caching the result in Redis.
Switching back to the previous cache policy when the traffic peak is over has two benefits: it reduces the application server's local memory consumption, improving its service performance; and since the local cache is distributed across application servers, returning to the original centralized cache policy after the peak makes application data easier to update and manage.
In practice, the way of determining whether the traffic peak has ended can likewise be chosen according to the actual situation. For example, the actual traffic of the application server is monitored in real time, and when it falls below a set traffic threshold, or the traffic increment falls below a set increment threshold, the traffic peak is judged to have ended.
In some optional embodiments, determining whether the traffic peak has ended comprises: monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be disabled; if so, determining that the traffic peak has ended; otherwise, determining that it has not. Illustratively, the console generates a cache instruction indicating that the local cache policy should be disabled and publishes it to the message middleware; the application server subscribes to the message middleware and, upon receiving that instruction, determines that the traffic peak has ended. The publish-subscribe approach decouples the generation of cache instructions from their execution and is easy to extend.
In other alternative embodiments, determining whether the traffic peak has ended includes: determining whether a preset cache instruction indicating that the local cache policy should be disabled has been triggered; if so, determining that the traffic peak has ended; otherwise, determining that it has not. Illustratively, such a cache instruction is configured in the application server in advance, for example to disable the local cache policy at 01:00 on 18 June 2021; when that time arrives, the application server disables the local cache policy and switches back to the previous cache policy. With a preset cache instruction, the local cache can be disabled when the traffic peak ends without depending on a console, message middleware, and the like, keeping system complexity low.
The format of the cache instruction can be chosen according to actual conditions, as long as it can indicate whether a traffic peak has occurred or ended; one such format is described below. In an optional embodiment, the cache instruction includes a cache policy field whose value is a first field value indicating that the local cache policy should be enabled, or a third field value indicating that it should be disabled. This format is structurally simple, easy to parse, and small in memory. The field value may also be a second field value indicating that the local cache policy should be updated, so that the application data in the local cache can be refreshed promptly through a cache instruction.
The cache instruction may further include an execution time field indicating when the operation corresponding to the cache policy field is to be executed. For example, in { "type": 1, "time": 15, "unit": "minute" }, type may take the values 1, 2, or 3, where 1 indicates enabling the local cache policy, 2 indicates updating it, and 3 indicates disabling it; time is an integer giving how long from now the corresponding operation is executed; and unit is the time unit, which may be seconds, minutes, hours, days, and so on.
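A parser for the instruction format shown above might look like this. It is a sketch assuming only the type/time/unit fields from the example; real instructions may carry more fields:

```python
import json

# seconds per unit named in the example (second/minute/hour/day)
UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}
ENABLE, UPDATE, DISABLE = 1, 2, 3  # values of the cache policy field

def parse_cache_instruction(raw):
    """Return (operation type, delay in seconds before executing it)."""
    msg = json.loads(raw)
    if msg["type"] not in (ENABLE, UPDATE, DISABLE):
        raise ValueError(f"unknown cache policy type: {msg['type']}")
    delay_seconds = int(msg["time"]) * UNIT_SECONDS[msg["unit"]]
    return msg["type"], delay_seconds

op, delay = parse_cache_instruction('{"type": 1, "time": 15, "unit": "minute"}')
# op == 1 (enable the local cache policy), delay == 900 seconds
```

The receiver would schedule the corresponding operation delay_seconds in the future rather than executing it immediately.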
The cache instruction may further include a cache invalidation field indicating the expiry time of the application data in the local cache, for example by setting an absolute aging time, or a field value giving how long from now the data expires. Setting an expiry time improves the freshness of the application data in the local cache.
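A local cache honoring such an invalidation field can be sketched as a TTL map. Time is passed in explicitly so the expiry logic is easy to follow; the class name is illustrative:

```python
class LocalTTLCache:
    """In-process cache whose entries expire ttl_seconds after insertion."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}          # key -> (value, expiry timestamp)

    def put(self, key, value, now):
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:     # past the invalidation time: drop the entry
            del self._store[key]
            return None
        return value
```

An expired entry behaves exactly like a miss, so the read path falls through to the database and re-caches fresh data, which is how the invalidation field keeps local data timely.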
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for implementing the above method.
Fig. 2 is a schematic diagram of the main modules of a dynamic caching apparatus according to an embodiment of the present invention. As shown in Fig. 2, the dynamic caching apparatus 200 includes:
a determining module 201, which determines whether a traffic peak occurs;
a switching module 202, which disables the current cache policy and loads application data from the database into the local cache if a traffic peak occurs;
and a response module 203, which responds to users' data query requests using the local cache.
Optionally, the determining module is further configured to determine whether the traffic peak has ended and, if so, clear the local cache; the response module is further configured to respond to users' data query requests using the previous cache policy.
Optionally, the determining module determining whether a traffic peak occurs includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be enabled; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred; or,
determining whether a preset cache instruction indicating that the local cache policy should be enabled has been triggered; if so, determining that a traffic peak has occurred; otherwise, determining that no traffic peak has occurred.
Optionally, the determining module determining whether the traffic peak has ended includes:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache policy should be disabled; if so, determining that the traffic peak has ended; otherwise, determining that it has not; or,
determining whether a preset cache instruction indicating that the local cache policy should be disabled has been triggered; if so, determining that the traffic peak has ended; otherwise, determining that it has not.
Optionally, the cache instruction includes a cache policy field, whose value is: a first field value indicating that the local cache policy should be enabled, a second field value indicating that it should be updated, or a third field value indicating that it should be disabled.
Optionally, the cache instruction further includes: an execution time field indicating when the operation corresponding to the cache policy field is to be executed; and/or a cache invalidation field indicating the expiry time of the application data in the local cache.
According to a third aspect of the embodiments of the present invention, there is provided a dynamic caching system, including a console, message middleware, and an application server, wherein:
the console is configured to generate a cache instruction indicating that the local cache policy should be enabled and publish it to the message middleware (e.g., the MQ (message queue) in Fig. 3; the DB in Fig. 3 represents a database);
the application server subscribes to the message middleware; upon receiving the cache instruction indicating that the local cache policy should be enabled, it determines that a traffic peak has occurred, disables the current cache policy, loads application data from the database into the local cache, and responds to users' data query requests using the local cache.
Optionally, the console is further configured to generate a cache instruction indicating that the local cache policy should be disabled and publish it to the message middleware;
upon receiving that instruction, the application server determines that the traffic peak has ended, clears the local cache, and responds to users' data query requests using the previous cache policy.
Optionally, the cache instruction includes a cache policy field, whose value is: a first field value indicating that the local cache policy should be enabled, a second field value indicating that it should be updated, or a third field value indicating that it should be disabled.
Optionally, the cache instruction further includes: an execution time field indicating when the operation corresponding to the cache policy field is to be executed; and/or a cache invalidation field indicating the expiry time of the application data in the local cache.
According to a fourth aspect of the embodiments of the present invention, there is provided an electronic device with dynamic cache, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fifth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
Fig. 4 illustrates an exemplary system architecture 400 to which the method of dynamic caching or the apparatus of dynamic caching of embodiments of the present invention may be applied.
As shown in fig. 4, the system architecture 400 may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 serves as a medium for providing communication links between the terminal devices 401, 402, 403 and the server 405, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use terminal devices 401, 402, 403 to interact with a server 405 over a network 404 to receive or send messages or the like. The terminal devices 401, 402, 403 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 405 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 401, 402, 403. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the method for dynamic caching provided by the embodiment of the present invention is generally executed by the server 405, and accordingly, a device for dynamic caching is generally disposed in the server 405.
It should be understood that the number of terminal devices, networks, and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for implementing the terminal device of an embodiment of the present invention. The terminal device shown in fig. 5 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out therefrom is installed into the storage section 508 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 501.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising: a judging module for judging whether a traffic peak occurs; a switching module for closing the current cache strategy and loading application data from the database into the local cache if a traffic peak occurs; and a response module for responding to the data query request of the user by using the local cache. The names of these modules do not, in some cases, limit the modules themselves; for example, the response module may also be described as a "module responding to a data query request of a user by using the local cache".
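The three modules above can be sketched in code. The following is an illustrative Python outline under assumptions of my own (the patent does not prescribe a language, API, or threshold; `judge`, `switch`, `respond`, and the QPS threshold are hypothetical names and values):

```python
# Illustrative sketch of the judging, switching and response modules.
# All names and the QPS threshold are hypothetical; the patent does not
# fix a concrete implementation.

class DynamicCacheProcessor:
    def __init__(self, database, remote_cache):
        self.database = database          # source of application data
        self.remote_cache = remote_cache  # stands in for the "current cache strategy"
        self.local_cache = {}             # in-process cache (application server CPU/memory)
        self.local_cache_active = False

    def judge(self, traffic_qps, threshold=10000):
        """Judging module: decide whether a traffic peak is occurring."""
        return traffic_qps >= threshold

    def switch(self):
        """Switching module: close the current cache strategy and load
        application data from the database into the local cache."""
        self.local_cache = dict(self.database)  # bulk-load application data
        self.local_cache_active = True

    def respond(self, key):
        """Response module: answer a user's data query from the local cache
        during a peak, otherwise fall back to the normal cache strategy."""
        if self.local_cache_active:
            return self.local_cache.get(key)
        return self.remote_cache.get(key)
```

During a peak, every query is served from process memory with no network hop, which is what shields the backing store from cache penetration.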
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: judge whether a traffic peak occurs; if so, close the current cache strategy and load application data from the database into a local cache; and respond to the data query request of the user by using the local cache.
According to the technical scheme of the embodiments of the present invention, application data is cached using the physical resources (CPU and memory) of the application server when a traffic peak occurs, so that the technical problems of unstable service, degraded performance, and cache penetration caused by traffic peaks can be solved in a low-cost manner.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for dynamic caching, comprising:
judging whether a traffic peak occurs;
if so, closing the current cache strategy, and loading application data from the database into a local cache;
and responding to the data query request of the user by using the local cache.
2. The method of claim 1, further comprising: judging whether the traffic peak is finished or not; and if so, clearing the local cache, and responding to the data query request of the user by using the current cache strategy.
3. The method of claim 1, wherein determining whether a traffic peak occurs comprises:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache strategy should start; if so, judging that a traffic peak occurs; otherwise, judging that no traffic peak occurs; or,
judging whether a preset cache instruction for indicating to start a local cache strategy is triggered or not; if so, judging that a flow peak occurs; otherwise, judging that the traffic peak does not occur.
4. The method of claim 2, wherein determining whether the traffic peak is over comprises:
monitoring, in a publish-subscribe mode, whether there is a cache instruction indicating that the local cache strategy should be closed; if so, judging that the traffic peak has ended; otherwise, judging that the traffic peak has not ended; or,
judging whether a preset cache instruction for indicating closing of the local cache strategy is triggered; if so, judging that the traffic peak has ended; otherwise, judging that the traffic peak has not ended.
5. The method of claim 3 or 4, wherein the caching instruction comprises: a cache policy field; the field value of the cache policy field is: a first field value for indicating to turn on the local caching policy, or a second field value for indicating to update the local caching policy, or a third field value for indicating to turn off the local caching policy.
6. The method of claim 5, wherein the cache instruction further comprises: an execution time field for indicating the operation execution time corresponding to the cache policy field; and/or a cache invalidation field for indicating an invalidation time of the application data in the local cache.
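Claims 5 and 6 describe the layout of the cache instruction. A minimal sketch of such an instruction as a Python dataclass follows; the concrete encoding (integers 1/2/3 for on/update/off, Unix timestamps for the time fields) is my own assumption, since the claims only name first, second and third field values:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of the cache policy field; the claims name a
# first, second and third value but fix no concrete representation.
POLICY_ON = 1      # first value: turn on the local caching policy
POLICY_UPDATE = 2  # second value: update the local caching policy
POLICY_OFF = 3     # third value: turn off the local caching policy

@dataclass
class CacheInstruction:
    cache_policy: int                           # required: on / update / off
    execution_time: Optional[float] = None      # optional: when to apply the policy
    cache_invalidation: Optional[float] = None  # optional: expiry of local cache data
```

The two optional fields mirror claim 6: an execution time for the policy operation, and/or an invalidation time for the application data held in the local cache.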
7. An apparatus for dynamic caching, comprising:
the judging module is used for judging whether a traffic peak occurs;
the switching module is used for closing the current cache strategy and loading application data from the database into the local cache if a traffic peak occurs;
and the response module is used for responding to the data query request of the user by using the local cache.
8. A system for dynamic caching, comprising: a console, message middleware and an application server; wherein:
the console is used for generating a cache instruction for indicating to start a local cache strategy and issuing the cache instruction to the message middleware;
the application server monitors the message middleware in a subscription mode; upon detecting the cache instruction indicating that the local cache strategy should start, it judges that a traffic peak has occurred, closes the current cache strategy, loads application data from the database into the local cache, and responds to the data query request of the user by using the local cache.
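The console / message-middleware / application-server flow of claim 8 can be sketched with a toy in-memory publish-subscribe channel. This is only an illustration of the wiring: a real deployment would use actual message middleware, and every class and field name below is hypothetical:

```python
# Toy in-memory publish-subscribe channel standing in for real message
# middleware. The console publishes a cache instruction; the application
# server subscribes and switches to its local cache when it sees it.

class Middleware:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

class Console:
    def __init__(self, middleware):
        self.middleware = middleware

    def start_local_cache(self):
        # Issue the instruction indicating the local cache strategy should start.
        self.middleware.publish({"cache_policy": "on"})

class ApplicationServer:
    def __init__(self, middleware, database):
        self.database = database
        self.local_cache = {}
        self.peak = False
        middleware.subscribe(self.on_instruction)

    def on_instruction(self, message):
        if message.get("cache_policy") == "on":
            # Judged as a traffic peak: close the current cache strategy and
            # load application data from the database into the local cache.
            self.peak = True
            self.local_cache = dict(self.database)

    def query(self, key):
        if self.peak:
            return self.local_cache.get(key)
        return self.database.get(key)  # stands in for the normal cache strategy
```

Because the instruction travels through middleware, every subscribed application server instance switches strategy at roughly the same time, without the console addressing servers individually.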
9. A dynamic cache electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010935445.4A 2020-09-08 2020-09-08 Dynamic caching method, device and system Pending CN113760974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010935445.4A CN113760974A (en) 2020-09-08 2020-09-08 Dynamic caching method, device and system


Publications (1)

Publication Number Publication Date
CN113760974A true CN113760974A (en) 2021-12-07

Family

ID=78785715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010935445.4A Pending CN113760974A (en) 2020-09-08 2020-09-08 Dynamic caching method, device and system

Country Status (1)

Country Link
CN (1) CN113760974A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112566A (en) * 2023-01-05 2023-05-12 中国第一汽车股份有限公司 Method and device for processing vehicle flow data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071059A (en) * 2017-05-25 2017-08-18 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
CN108614847A (en) * 2016-12-30 2018-10-02 北京京东尚科信息技术有限公司 A kind of caching method and system of data
CN108667916A (en) * 2018-04-24 2018-10-16 百度在线网络技术(北京)有限公司 A kind of data access method and system of Web applications
CN109729108A (en) * 2017-10-27 2019-05-07 阿里巴巴集团控股有限公司 A kind of method, associated server and system for preventing caching from puncturing
CN109947668A (en) * 2017-12-21 2019-06-28 北京京东尚科信息技术有限公司 The method and apparatus of storing data
CN111078147A (en) * 2019-12-16 2020-04-28 南京领行科技股份有限公司 Processing method, device and equipment for cache data and storage medium
CN111125247A (en) * 2019-12-06 2020-05-08 北京浪潮数据技术有限公司 Method, device, equipment and storage medium for caching redis client
CN111464615A (en) * 2020-03-30 2020-07-28 北京达佳互联信息技术有限公司 Request processing method, device, server and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination