CN113760980A - Data caching method, data providing end and data using end - Google Patents

Data caching method, data providing end and data using end

Info

Publication number
CN113760980A
Authority
CN
China
Prior art keywords
data
identifier
local cache
request
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011384423.XA
Other languages
Chinese (zh)
Inventor
韩金魁
岳晓敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202011384423.XA
Publication of CN113760980A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/55Push-based network services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching

Abstract

The invention discloses a data caching method, a data providing end and a data using end, and relates to the field of computer technology. One embodiment of the method comprises: receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used; determining, according to the first identifier and the second identifier, a first total number of times that the data to be used is requested by the using end; and when the first total times meet a first condition, pushing the data to be used to the using end so that the using end stores the data to be used in a first local cache. This embodiment uses cache resources reasonably according to the actual running condition of the system and thereby improves system performance.

Description

Data caching method, data providing end and data using end
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data caching method, a data providing end, and a data using end.
Background
The reasonable utilization of cache resources is very important for improving system performance. However, cache resources are generally used according to static rules established at system design time: data that matches a static rule is stored in the cache, which may keep rarely used data in the cache for a long time, wasting cache resources and reducing system performance.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data caching method, a data providing end and a data using end, which can push data to a local cache of the using end according to the number of times that the using end has requested the data. The data stored in the local cache is therefore data used at high frequency, which helps reduce remote data access, so that cache resources are used reasonably according to the actual operating condition of the system and system performance is improved.
To achieve the above object, according to an aspect of an embodiment of the present invention, a data caching method is provided.
When applied to a data providing end, the data caching method of the embodiment of the invention comprises the following steps:
receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used;
determining a first total number of times that the data to be used is requested by the using end according to the first identifier and the second identifier;
and when the first total times meet a first condition, pushing the data to be used to the using end so that the using end stores the data to be used in a first local cache.
Optionally, the method further comprises:
determining a second total number of times that the data to be used is requested by a plurality of using terminals according to the first identification;
and when the second total times meet a preset second condition, storing the data to be used in a second local cache of the data providing end.
Optionally, pushing the data to be used to the using end, so that the using end stores the data to be used in a first local cache, includes:
and pushing the data to be used to the using end through a specified port, so that the using end stores the data flowing from the specified port into the first local cache by monitoring the specified port.
Optionally, the first condition comprises: the first total number is greater than a first threshold value, and/or the frequency corresponding to the first total number is greater than a second threshold value.
Optionally, the second condition comprises: the second total times is greater than a preset third threshold, and/or the number of the using ends corresponding to the second total times is greater than a fourth threshold.
To achieve the above object, according to another aspect of an embodiment of the present invention, a data caching method is provided.
When the data caching method of the embodiment of the invention is applied to a data using end, the method comprises the following steps:
acquiring a data use request, wherein the data use request indicates data to be used;
determining whether the data to be used is stored in a second local cache;
if yes, reading the data to be used from the second local cache;
if not, generating a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and sending the data acquisition request to a data providing end.
Optionally, determining, by using bytecode enhancement logic, whether the second local cache stores the data to be used.
Optionally, the method further comprises:
and determining the storage time length of the data in the second local cache, determining whether the storage time length is greater than a preset data expiration time length, and if so, deleting the data in the local cache.
To achieve the above object, according to still another aspect of an embodiment of the present invention, a data providing end is provided.
A data providing end of an embodiment of the present invention includes: a request receiving module, a data monitoring module and a data pushing module; wherein,
the request receiving module is used for receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used;
the data monitoring module is used for determining, according to the first identifier and the second identifier, a first total number of times that the data to be used is requested by the using end;
and the data pushing module is used for pushing the data to be used to the using end when the first total times meets a first condition, so that the using end stores the data to be used in a first local cache.
To achieve the above object, according to still another aspect of the embodiments of the present invention, a data using end is provided.
The data using end of the embodiment of the invention comprises: a request acquisition module, a processing module and a request sending module; wherein,
the request acquisition module is used for acquiring a data use request, and the data use request indicates data to be used;
the processing module is used for determining whether the data to be used is stored in a second local cache; if yes, reading the data to be used from the second local cache; if not, triggering the request sending module;
and the request sending module is used for generating a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and sending the data acquisition request to a data providing end.
To achieve the above object, according to another aspect of the embodiments of the present invention, an electronic device for data caching is provided.
An electronic device for data caching according to an embodiment of the present invention includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data caching method of the embodiment of the invention.
To achieve the above object, according to still another aspect of embodiments of the present invention, there is provided a computer-readable storage medium.
A computer-readable storage medium of an embodiment of the present invention stores thereon a computer program, which when executed by a processor implements a method of data caching of an embodiment of the present invention.
One embodiment of the above invention has the following advantages or benefits: a first total number of times that the data to be used has been requested by the using end is determined according to the first identifier of the data to be used indicated by the data acquisition request and the second identifier of the using end; and when the first total number of times meets a first condition, the data to be used is pushed to a first local cache of the using end. Therefore, the data stored in the local cache is data used at high frequency, remote data access is reduced, cache resources are used reasonably according to the actual operation condition of the system, and system performance is improved.
Further effects of the above optional implementations will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
Fig. 1 is a schematic diagram of main steps of a data caching method applied to a data providing end according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of main steps of a data caching method applied to a data using end according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the main modules of a data providing end according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the main modules of a data using end according to an embodiment of the present invention;
Fig. 5 is an exemplary system architecture diagram to which embodiments of the present invention may be applied;
Fig. 6 is a schematic block diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments of the present invention and the technical features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of main steps of a data caching method applied to a data providing end according to an embodiment of the present invention.
As shown in fig. 1, a data caching method according to an embodiment of the present invention mainly includes the following steps:
Step S101: receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used.
Step S102: determining, according to the first identifier and the second identifier, a first total number of times that the data to be used is requested by the using end.
Step S103: when the first total times meet a first condition, pushing the data to be used to the using end so that the using end stores the data to be used in a first local cache.
The data providing end can provide an access gate for data; that is, the data providing end mainly comprises the access gate and the data. All data access to the data providing end passes through the access gate, which can be a unified data acquisition channel with a uniform external service, such as an RPC, HTTP or socket service channel.
After the data providing end is configured and started, the access gate service may be created or published. The data providing end may then receive a data acquisition request from the data using end through the access gate, obtain the corresponding data to be used (for example, through an SQL or NoSQL query or a file download operation), and output the data to be used to the data using end through the access gate.
During this process, the data monitoring device mounted on the data providing end can collect the data flowing in and out of the access gate to determine the first total number of times that the data to be used has been requested by the data using end. The data monitoring device can be mounted on the data providing end as a plug-in and started along with the data providing end; for example, it can be implemented in Java as a java agent that performs the collection.
In an embodiment of the present invention, collection rules may be set in advance for the data monitoring device, such as the collection frequency of the data usage rate, a threshold on the total number of uses for storing data in the local cache of the data providing end, and a threshold on the total number of uses for pushing data to the data using end. The data monitoring device then listens to the data inlet and outlet of the access gate, and may employ dynamic proxy techniques to collect the data as it flows in and out.
The data monitoring device may monitor data in both directions, data flowing in and data flowing out. When a data acquisition request sent by a using end is monitored at the access gate, the flow direction of the data to be used (namely, from the data providing end to the using end with the second identifier) can be determined according to the first identifier of the data to be used and the second identifier of the using end indicated by the request. The data monitoring device can then increment the number of times the using end with the second identifier has requested the data to be used, thereby obtaining the first total number of times that the data to be used has been requested by that using end.
The first total number of times accumulates over multiple monitoring events; that is, the data monitoring device may store a total number of requests for each combination of a data identifier and a using-end identifier. When the data monitoring device detects that a piece of data to be used is requested by a using end, it looks up the corresponding first total number of times among the stored totals according to the first identifier of the data to be used and the second identifier of the using end, and increments it.
It can be understood that, after receiving the data acquisition request of the using end, the data providing end will provide the corresponding data to be used to that using end, and the data to be used will flow out through the access gate of the data providing end. Because the data monitoring device collects the data flowing in and out of the access gate, it can monitor the outgoing data to be used and determine the first identifier of the data to be used (such as a data ID or code), the second identifier of the using end (such as the using end's IP address), the time the data flowed out, the IP information of the publishing server, and the like. Therefore, the data monitoring device can also determine the first total number of times that the data to be used has been requested by the using end from the outgoing data.
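As an illustration of this counting step, the following minimal Java sketch keeps an in-memory total per (first identifier, second identifier) pair; the class and method names are not taken from the publication and are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical counter kept by the data monitoring device (illustrative only). */
public class RequestCounter {

    // key = firstId + "|" + secondId, value = first total number of requests
    private final Map<String, AtomicLong> totals = new ConcurrentHashMap<>();

    /** Called whenever a data acquisition request flows through the access gate. */
    public long record(String firstId, String secondId) {
        String key = firstId + "|" + secondId;
        return totals.computeIfAbsent(key, k -> new AtomicLong())
                     .incrementAndGet();
    }

    /** Current first total number for one (data, using end) pair. */
    public long firstTotal(String firstId, String secondId) {
        AtomicLong n = totals.get(firstId + "|" + secondId);
        return n == null ? 0 : n.get();
    }
}
```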
Then, the data monitoring device determines whether the first total number of times satisfies a first condition. In an embodiment of the present invention, the first condition may be that the first total number of times is greater than a first threshold, and/or that the frequency corresponding to the first total number of times is greater than a second threshold.
When the first condition is that the first total number of times is greater than the first threshold, the data to be used can be pushed to the first local cache of the using end once the number of times the data to be used identified by the first identifier has been requested by the using end with the second identifier exceeds the first threshold. When the first condition is that the frequency corresponding to the first total number of times is greater than the second threshold, the frequency can be calculated over a preset duration; this preset duration can be configured together with the data monitoring device, for example when the collection frequency of the data usage rate in its collection rules is configured. When the calculated frequency is greater than the second threshold, the data to be used is pushed to the first local cache of the using end. Of course, the first total number of times and its corresponding frequency can also be used in combination; that is, the data to be used is pushed to the first local cache of the using end when the first total number of times is greater than the first threshold and the frequency corresponding to the first total number of times is greater than the second threshold.
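A possible reading of this check is sketched below in Java. It implements the combined variant described at the end of the preceding paragraph (total above the first threshold and frequency over a preset window above the second threshold); the window handling, thresholds and all names are assumptions for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical evaluation of the "first condition" (illustrative sketch). */
public class FirstCondition {

    private final long firstThreshold;      // threshold on the first total number of times
    private final double secondThreshold;   // threshold on the request frequency (requests/second)
    private final long windowMillis;        // preset duration over which the frequency is computed
    private final Deque<Long> recentRequests = new ArrayDeque<>();

    public FirstCondition(long firstThreshold, double secondThreshold, long windowMillis) {
        this.firstThreshold = firstThreshold;
        this.secondThreshold = secondThreshold;
        this.windowMillis = windowMillis;
    }

    /** Record one request at the given timestamp (milliseconds). */
    public synchronized void onRequest(long nowMillis) {
        recentRequests.addLast(nowMillis);
    }

    /** True when the data should be pushed to the using end's first local cache. */
    public synchronized boolean satisfied(long firstTotal, long nowMillis) {
        // drop requests that fall outside the preset duration
        while (!recentRequests.isEmpty()
                && nowMillis - recentRequests.peekFirst() > windowMillis) {
            recentRequests.removeFirst();
        }
        double frequency = recentRequests.size() * 1000.0 / windowMillis;
        return firstTotal > firstThreshold && frequency > secondThreshold;
    }
}
```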
In addition, in an embodiment of the present invention, the data monitoring device may further determine, from the data flowing in and out of the access gate, a second total number of times that the data to be used has been requested by a plurality of using ends. Specifically, the second total number of times that the data to be used has been requested by a plurality of using ends can be determined according to the first identifier; and when the second total number of times meets a preset second condition, the data to be used is stored in a second local cache of the data providing end. The second condition may include: the second total number of times is greater than a preset third threshold, and/or the number of using ends corresponding to the second total number of times is greater than a fourth threshold.
In this embodiment, if the data to be used is used by a plurality of using ends and its usage rate satisfies the second condition, the data to be used may be stored in the second local cache of the data providing end. For example, when the number of using ends corresponding to the second total number of times is greater than the fourth threshold, the data to be used has been requested by multiple using ends; if the second total number of times is also greater than the preset third threshold, those using ends request the data frequently, and the data to be used may then be stored in the second local cache of the data providing end. The next time a data acquisition request is received, the data providing end can obtain the corresponding data to be used from its local cache, which avoids the loss of efficiency caused by pushing large amounts of data to many using ends.
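As one way to read the second condition, the Java sketch below tracks a per-data total together with the set of distinct using ends that requested it, and reports the data as cache-worthy on the providing end when both thresholds are exceeded; the in-memory representation and all names are illustrative assumptions.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical tracking of the "second condition" (illustrative sketch). */
public class SecondCondition {

    private final Map<String, Long> secondTotals = new ConcurrentHashMap<>();       // firstId -> total requests
    private final Map<String, Set<String>> requesters = new ConcurrentHashMap<>();  // firstId -> distinct using ends
    private final long thirdThreshold;   // threshold on the second total number of times
    private final int fourthThreshold;   // threshold on the number of using ends

    public SecondCondition(long thirdThreshold, int fourthThreshold) {
        this.thirdThreshold = thirdThreshold;
        this.fourthThreshold = fourthThreshold;
    }

    /** Called for every observed request, regardless of which using end sent it. */
    public void onRequest(String firstId, String secondId) {
        secondTotals.merge(firstId, 1L, Long::sum);
        requesters.computeIfAbsent(firstId, k -> ConcurrentHashMap.newKeySet()).add(secondId);
    }

    /** True when the data should be kept in the provider's own (second) local cache. */
    public boolean satisfied(String firstId) {
        long total = secondTotals.getOrDefault(firstId, 0L);
        int usingEnds = requesters.getOrDefault(firstId, Collections.emptySet()).size();
        return total > thirdThreshold && usingEnds > fourthThreshold;
    }
}
```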
When the data to be used needs to be pushed to the first local cache of the using end or to the second local cache of the data providing end, the data monitoring device can trigger the data pushing device to perform the push. Specifically, the data pushing device receives the data to be used that is to be pushed into a local cache, assembles the pushed data according to the first condition or second condition that the data satisfies, and pushes the assembled data to the data using end or the data providing end over UDP or TCP, so that the data using end stores the data to be used in the first local cache, or the data providing end stores it in the second local cache.
The data pushing device can push the data to be used to the corresponding local cache according to the configuration of the data providing end or the data using end.
When the data pushing device pushes the data that needs to be stored in a local cache to the using end or the providing end, it can push the data through a specified port. Taking pushing the data to the using end as an example, the data to be used can be pushed to the using end through the specified port, so that the using end, by listening on the specified port, stores the data arriving from that port into the first local cache.
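The publication only specifies that the push goes over UDP or TCP to a specified port. As a minimal illustration, the Java sketch below sends one assembled entry over UDP, assuming a plain-text "condition->identifier->payload" layout; the message layout, class and method names are not defined by the publication.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Hypothetical UDP push from the data pushing device to the specified port (illustrative). */
public class CachePusher {

    private final int specifiedPort;   // the port the receiving side listens on

    public CachePusher(int specifiedPort) {
        this.specifiedPort = specifiedPort;
    }

    /**
     * Assembles "condition->firstId->payload" and sends it over UDP to the target host
     * (a using end, or the providing end itself for its second local cache).
     */
    public void push(String targetHost, String condition, String firstId, String payload) throws Exception {
        String message = condition + "->" + firstId + "->" + payload;
        byte[] bytes = message.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    bytes, bytes.length, InetAddress.getByName(targetHost), specifiedPort);
            socket.send(packet);
        }
    }
}
```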
The data using end can mount a plug-in for managing local data; the plug-in starts along with the data using end and can be implemented at least in Java, for example as a java agent jar that manages the local repository.
The data pushing device pushes the data to be used through the specified port over UDP or TCP, so that the management plug-in mounted on the data using end can obtain the data to be used by listening on the specified port, parse the data according to the "condition -> data" format, and store the parsed data in the first local cache of the data using end. In this way, the data used most frequently by the using end is stored in the local cache of the using end, which reduces remote data access by the using end and reduces blind use of its cache resources.
The data parsing process can be realized at least by a program; the original publication reproduces the corresponding program listing only as images (Figure BDA0002809281310000091 and Figure BDA0002809281310000101).
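Since the original listing is available only as images, the following Java sketch shows what such a listener and parser on the using end could look like, assuming the same illustrative "condition->identifier->payload" text format as above and a simple in-memory map standing in for the first local cache.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical listener of the local-data management plug-in on the using end (illustrative). */
public class CacheListener {

    // first local cache: first identifier -> pushed payload
    private final Map<String, String> firstLocalCache = new ConcurrentHashMap<>();

    /** Listens on the specified port and stores every pushed entry into the first local cache. */
    public void listen(int specifiedPort) throws Exception {
        byte[] buffer = new byte[64 * 1024];
        try (DatagramSocket socket = new DatagramSocket(specifiedPort)) {
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);
                String message = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.UTF_8);
                // assumed format: condition->firstId->payload
                String[] parts = message.split("->", 3);
                if (parts.length == 3) {
                    firstLocalCache.put(parts[1], parts[2]);
                }
            }
        }
    }

    public String get(String firstId) {
        return firstLocalCache.get(firstId);
    }
}
```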
According to the data caching method, the first total number of times that the data to be used has been requested by the using end is determined according to the first identifier of the data to be used and the second identifier of the using end indicated by the data acquisition request; and when the first total number of times meets a first condition, the data to be used is pushed to a first local cache of the using end. Therefore, the data stored in the local cache is data used at high frequency, remote data access is reduced, cache resources are used reasonably according to the actual operation condition of the system, and system performance is improved.
Fig. 2 is a schematic diagram of main steps of a data caching method applied to a data using end according to an embodiment of the present invention.
As shown in fig. 2, a data caching method according to an embodiment of the present invention mainly includes the following steps:
Step S201: acquiring a data use request, wherein the data use request indicates data to be used.
Step S202: determining whether the data to be used is stored in a second local cache; if yes, go to step S203, otherwise go to step S204.
Step S203: reading the data to be used from the second local cache.
Step S204: generating a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and sending the data acquisition request to a data providing end.
The data using end can obtain data from the data providing end through an interface, and the data using end can be a C/S or B/S program. The data using end can apply bytecode enhancement logic to its original data acquisition logic. Specifically, instructions can be added through bytecode (for example, to an interface class or method), the class file of the class to be enhanced is found on the class path, the data acquisition method is then enhanced, and finally the enhanced class is reloaded through the class loader, so that logic for reading from the local cache is added to the data acquisition logic. If the data to be used is not found there, the request is sent to the data providing end. In this way, whether the second local cache stores the data to be used is determined through bytecode enhancement logic, and the data to be used is acquired from the remote data providing end only when it is not stored in the local cache, which reduces remote access for data acquisition by the using end.
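The logic that the bytecode enhancement effectively injects could be read as the cache-first accessor sketched below in Java; the enhancement mechanism itself (for example, a java agent rewriting the data acquisition method) is not shown, and the interfaces and names are illustrative assumptions rather than the publication's API.

```java
/** Hypothetical cache-first data access, i.e. the behavior the bytecode enhancement adds (illustrative). */
public class CachingDataAccessor {

    public interface LocalCache {            // stands in for the second local cache on the using end
        String get(String firstId);
    }

    public interface RemoteProvider {        // stands in for the remote call carrying both identifiers
        String fetch(String firstId, String secondId);
    }

    private final LocalCache localCache;
    private final RemoteProvider provider;
    private final String secondId;           // identifier of this using end

    public CachingDataAccessor(LocalCache localCache, RemoteProvider provider, String secondId) {
        this.localCache = localCache;
        this.provider = provider;
        this.secondId = secondId;
    }

    /** Reads from the local cache first; only on a miss sends a data acquisition request. */
    public String getData(String firstId) {
        String cached = localCache.get(firstId);
        if (cached != null) {
            return cached;
        }
        return provider.fetch(firstId, secondId);
    }
}
```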
In addition, in the embodiment of the present invention, the second local cache of the data using end may further be provided with a data validity period or a remote data change policy. The plug-in in the data using end for managing local data may determine the storage duration of data in the second local cache, determine whether the storage duration is greater than a preset data expiration duration (i.e., the data validity period), and if so, delete the data from the local cache. This further manages local resources reasonably, improves the utilization rate of local resources, and improves system performance. In addition, when the data in the local cache needs to be updated according to the remote data change policy, the data can be updated accordingly to improve the validity of the data in the local cache.
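A minimal Java sketch of such an expiration sweep is shown below, assuming an in-memory map with per-entry storage timestamps; the expiration duration and all names are illustrative, and the remote-data-change update path is not shown.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical expiration sweep run by the local-data management plug-in (illustrative). */
public class ExpiringLocalCache {

    private static final class Entry {
        final String value;
        final long storedAtMillis;
        Entry(String value, long storedAtMillis) {
            this.value = value;
            this.storedAtMillis = storedAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long expirationMillis;   // preset data expiration duration

    public ExpiringLocalCache(long expirationMillis) {
        this.expirationMillis = expirationMillis;
    }

    public void put(String firstId, String value) {
        cache.put(firstId, new Entry(value, System.currentTimeMillis()));
    }

    /** Deletes every entry whose storage duration exceeds the expiration duration. */
    public void evictExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Entry>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue().storedAtMillis > expirationMillis) {
                it.remove();
            }
        }
    }
}
```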
According to the data caching method provided by the embodiment of the invention, the data used most frequently by the using end is stored in the local cache of the using end, and data is obtained from the remote data providing end only when the data to be used is not present in the local cache, which reduces remote data access by the using end and reduces blind use of its cache resources.
Fig. 3 is a schematic diagram of the main modules of a data providing end according to an embodiment of the present invention.
As shown in fig. 3, a data providing end 300 according to an embodiment of the present invention includes: a request receiving module 301, a data monitoring module 302 and a data pushing module 303; wherein,
the request receiving module 301 is configured to receive a data acquisition request, where the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used;
the data monitoring module 302 is configured to determine, according to the first identifier and the second identifier, a first total number of times that the data to be used is requested by the using end;
the data pushing module 303 is configured to, when the first total number of times meets a first condition, push the data to be used to the using end, so that the using end stores the data to be used in a first local cache.
In an embodiment of the present invention, the data monitoring module 302 is further configured to determine, according to the first identifier, a second total number of times that the data to be used is requested by multiple using ends;
the data pushing module 303 is further configured to store the data to be used in a second local cache of the data providing end when the second total number of times meets a preset second condition.
In an embodiment of the present invention, the data pushing module 303 is configured to push the data to be used to the using end through a specified port, so that the using end stores the data arriving from the specified port into the first local cache by listening on the specified port.
In one embodiment of the invention, the first condition comprises: the first total number is greater than a first threshold value, and/or the frequency corresponding to the first total number is greater than a second threshold value.
In one embodiment of the invention, the second condition comprises: the second total times is greater than a preset third threshold, and/or the number of the using ends corresponding to the second total times is greater than a fourth threshold.
According to the data providing end of the embodiment of the invention, the first total number of times that the data to be used has been requested by the using end is determined according to the first identifier of the data to be used indicated by the data acquisition request and the second identifier of the using end; and when the first total number of times meets a first condition, the data to be used is pushed to a first local cache of the using end. Therefore, the data stored in the local cache is data used at high frequency, remote data access is reduced, cache resources are used reasonably according to the actual operation condition of the system, and system performance is improved.
Fig. 4 is a schematic diagram of main modules of a data using end according to an embodiment of the present invention.
As shown in fig. 4, a data using end 400 according to an embodiment of the present invention includes: a request acquisition module 401, a processing module 402 and a request sending module 403; wherein,
the request acquisition module 401 is configured to obtain a data use request, where the data use request indicates data to be used;
the processing module 402 is configured to determine whether the data to be used is stored in a second local cache; if yes, read the data to be used from the second local cache; if not, trigger the request sending module;
the request sending module 403 is configured to generate a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and send the data acquisition request to a data providing end.
In an embodiment of the present invention, the processing module 402 is configured to determine whether the second local cache stores the data to be used by using bytecode enhancement logic.
In an embodiment of the present invention, the processing module 402 is further configured to determine a storage duration of the data in the second local cache, determine whether the storage duration is greater than a preset data expiration duration, and if so, delete the data in the local cache.
According to the data using end of the embodiment of the invention, the data used most frequently by the using end is stored in the local cache of the using end, and data is obtained from the remote data providing end only when the data to be used is not present in the local cache, which reduces remote data access by the using end and reduces blind use of its cache resources.
Fig. 5 shows an exemplary system architecture 500 to which the data caching method, the data providing end or the data using end of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 501, 502, 503 to interact with a server 505 over a network 504 to receive or send messages or the like. The terminal devices 501, 502, 503 may have various communication client applications installed thereon, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 501, 502, 503 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server that provides various services, such as a background management server that supports shopping websites browsed by users using the terminal devices 501, 502, 503. The background management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (e.g., target push information and product information) to the terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for use with a terminal device implementing an embodiment of the invention is shown. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises a request receiving module, a data monitoring module and a data pushing module. The names of these modules do not limit the modules themselves in some cases; for example, the data pushing module may also be described as a module for pushing data to be used to a using end.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by a device, cause the device to: receive a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used; determine, according to the first identifier and the second identifier, a first total number of times that the data to be used is requested by the using end; and when the first total times meet a first condition, push the data to be used to the using end so that the using end stores the data to be used in a first local cache.
According to the technical scheme of the embodiment of the invention, the first total number of times that the data to be used has been requested by the using end is determined according to the first identifier of the data to be used and the second identifier of the using end indicated by the data acquisition request; and when the first total number of times meets a first condition, the data to be used is pushed to a first local cache of the using end. Therefore, the data stored in the local cache is data used at high frequency, remote data access is reduced, cache resources are used reasonably according to the actual operation condition of the system, and system performance is improved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A data caching method is characterized by being applied to a data providing end; the method comprises the following steps:
receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used;
determining a first total number of times that the data to be used is requested by the using end according to the first identifier and the second identifier;
and when the first total times meet a first condition, pushing the data to be used to the using end so that the using end stores the data to be used in a first local cache.
2. The method of claim 1, further comprising:
determining a second total number of times that the data to be used is requested by a plurality of using terminals according to the first identification;
and when the second total times meet a preset second condition, storing the data to be used in a second local cache of the data providing end.
3. The method of claim 1, wherein pushing the data to be used to the using end so that the using end stores the data to be used in a first local cache comprises:
and pushing the data to be used to the using end through a specified port, so that the using end stores the data flowing from the specified port into the first local cache by monitoring the specified port.
4. The method of claim 1,
the first condition includes: the first total number is greater than a first threshold value, and/or the frequency corresponding to the first total number is greater than a second threshold value.
5. The method of claim 2,
the second condition includes: the second total times is greater than a preset third threshold, and/or the number of the using ends corresponding to the second total times is greater than a fourth threshold.
6. A data caching method is characterized by being applied to a data using end; the method comprises the following steps:
acquiring a data use request, wherein the data use request indicates data to be used;
determining whether the data to be used is stored in a second local cache;
if yes, reading the data to be used from the second local cache;
if not, generating a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and sending the data acquisition request to a data providing end.
7. The method of claim 6,
determining whether the data to be used is stored in the second local cache by using bytecode enhancement logic.
8. The method of claim 6 or 7, further comprising:
and determining the storage time length of the data in the second local cache, determining whether the storage time length is greater than a preset data expiration time length, and if so, deleting the data in the local cache.
9. A data providing end, comprising: a request receiving module, a data monitoring module and a data pushing module; wherein,
the request receiving module is used for receiving a data acquisition request, wherein the data acquisition request indicates a first identifier of data to be used and a second identifier of a using end corresponding to the data to be used;
the data monitoring module is used for determining a first total number of times that the data to be used is requested by the using end according to the first identifier and the second identifier;
and the data pushing module is used for pushing the data to be used to the using end when the first total times meets a first condition so that the using end stores the data to be used in a first local cache.
10. A data using end, comprising: a request acquisition module, a processing module and a request sending module; wherein,
the request acquisition module is used for acquiring a data use request, and the data use request indicates data to be used;
the processing module is used for determining whether the data to be used is stored in a second local cache; if yes, reading the data to be used from the second local cache; if not, triggering the request sending module;
and the request sending module is used for generating a data acquisition request according to the first identifier of the data to be used and the second identifier of the data using end, and sending the data acquisition request to a data providing end.
11. An electronic device for data caching, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5 or 6-8.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-5 or 6-8.
CN202011384423.XA 2020-11-30 2020-11-30 Data caching method, data providing end and data using end Pending CN113760980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011384423.XA CN113760980A (en) 2020-11-30 2020-11-30 Data caching method, data providing end and data using end

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011384423.XA CN113760980A (en) 2020-11-30 2020-11-30 Data caching method, data providing end and data using end

Publications (1)

Publication Number Publication Date
CN113760980A true CN113760980A (en) 2021-12-07

Family

ID=78786122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011384423.XA Pending CN113760980A (en) 2020-11-30 2020-11-30 Data caching method, data providing end and data using end

Country Status (1)

Country Link
CN (1) CN113760980A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883187A (en) * 2012-09-17 2013-01-16 华为技术有限公司 Time-shift program service method, equipment and system
CN108153783A (en) * 2016-12-06 2018-06-12 腾讯科技(北京)有限公司 A kind of method and apparatus of data buffer storage
CN108696895A (en) * 2017-04-07 2018-10-23 华为技术有限公司 Resource acquiring method, apparatus and system
CN108920573A (en) * 2018-06-22 2018-11-30 北京奇艺世纪科技有限公司 A kind of data buffer storage processing method, device and terminal device
CN109634876A (en) * 2018-12-11 2019-04-16 广东省新代通信与网络创新研究院 File access method, device and computer readable storage medium
CN110535521A (en) * 2018-05-25 2019-12-03 北京邮电大学 The business transmitting method and device of Incorporate network

Similar Documents

Publication Publication Date Title
KR102294326B1 (en) Prefetching application data for periods of disconnectivity
US9516091B2 (en) Web page script management
US8935798B1 (en) Automatically enabling private browsing of a web page, and applications thereof
CN107547548B (en) Data processing method and system
CN109918191B (en) Method and device for preventing frequency of service request
CN109829121B (en) Method and device for reporting click behavior data
CN110909022A (en) Data query method and device
CN112948138A (en) Method and device for processing message
CN113452733A (en) File downloading method and device
CN112149392A (en) Rich text editing method and device
CN113722007B (en) Configuration method, device and system of VPN branch equipment
CN113760980A (en) Data caching method, data providing end and data using end
CN110851194A (en) Method and device for acquiring code for realizing new interface
CN113114611B (en) Blacklist management method and device
CN113761433A (en) Service processing method and device
CN110019671B (en) Method and system for processing real-time message
CN109087097B (en) Method and device for updating same identifier of chain code
CN113722193A (en) Method and device for detecting page abnormity
CN113132447A (en) Reverse proxy method and system
CN113742617A (en) Cache updating method and device
CN112685481A (en) Data processing method and device
CN110928850A (en) Traffic statistic method and device
CN112699116A (en) Data processing method and system
CN113778909B (en) Method and device for caching data
CN110888939A (en) Data management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination