CN110018969B - Data caching method, device, computer equipment and storage medium - Google Patents

Data caching method, device, computer equipment and storage medium

Info

Publication number
CN110018969B
CN110018969B (application CN201910175754.3A)
Authority
CN
China
Prior art keywords: time, request, cache, stored, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910175754.3A
Other languages
Chinese (zh)
Other versions
CN110018969A (en)
Inventor
李桃
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910175754.3A priority Critical patent/CN110018969B/en
Publication of CN110018969A publication Critical patent/CN110018969A/en
Priority to PCT/CN2019/118426 priority patent/WO2020181820A1/en
Application granted
Publication of CN110018969B publication Critical patent/CN110018969B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a data caching method, apparatus, computer device and storage medium, used for caching data whose request frequency is fixed. The method comprises the following steps: receiving a current request sent by a device; judging whether the request object of the current request is an object in a cache; if the request object is an object in the cache, calling the request object from the cache; otherwise, acquiring the request object from a preset database and judging whether the request object needs to be stored in the cache; and, if it is judged that the request object should be stored in the cache, deleting a target object from the cache and storing the request object in the cache, wherein the target object is the object whose next call time within a first preset time is the latest.

Description

Data caching method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data caching method, apparatus, computer device, and storage medium.
Background
A cache usually serves as backup memory for a device system. Because memory resources are limited, a fixed capacity is usually set, and once the cache exceeds that capacity it is cleaned according to an algorithm such as LRU (Least Recently Used), LFU (Least Frequently Used) or FIFO (First In First Out). For example, a cache may hold rule-engine instances, so that the data requested by a device can be processed without re-creating the object on every request; when the cache reaches the set capacity, rule-engine instances are cleaned out according to the cache algorithm.
However, in an internet-of-things scenario the frequency at which a device reports or requests data is often fixed. If the cache is cleaned according to LRU, LFU or FIFO in such a scenario, the cache hit rate is low and the cache resource is not fully utilized. In view of these shortcomings of the above caching algorithms in the internet-of-things scenario, a caching method suited to scenarios where the call frequency is substantially fixed is needed.
Disclosure of Invention
The invention mainly aims to provide a data caching method, a data caching device, computer equipment and a storage medium, and aims to solve the technical problem of insufficient cache resource utilization in a scene of basically fixed request frequency.
Based on the above object, the present invention provides a data caching method, for caching data with fixed request frequency, comprising:
receiving a current request sent by equipment;
judging whether the request object of the current request is an object in a cache or not;
if the request object is an object in a cache, calling the request object from the cache; otherwise, acquiring the request object from a preset database, and judging whether the request object needs to be stored in the cache;
if the request object is judged to be stored in the cache, acquiring the calling frequency and the calling time of each object in the cache, wherein the calling time is the time when the object was called for the last time based on the current time;
calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a designated time from the current time;
Comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as the target object;
deleting the target object and storing the request object into the cache.
Further, the step of calculating the time when each object is called each time in the first preset time according to the calling frequency and the calling time includes:
calculating the time when each object is called each time within the first preset time by using the following formula:
T_i(n) = t_i + n / f_i, n = 1, 2, 3, …

wherein i is any one of the objects in the cache, T_i(n) is the time of the nth call of object i within the first preset time, f_i is the call frequency of object i, and t_i is the time of the previous call of object i.
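As a small illustration of the formula (a hypothetical Python helper, not part of the patent), the predicted call times of one cached object inside the first preset time window can be enumerated as follows:

```python
def next_call_times(t_i, f_i, horizon_start, horizon_end):
    # All predicted call times T_i(n) = t_i + n / f_i of one cached
    # object that fall inside the first preset time window.
    # t_i: time of the previous call; f_i: fixed call frequency.
    times, n = [], 1
    while True:
        t = t_i + n / f_i
        if t > horizon_end:
            break
        if t >= horizon_start:
            times.append(t)
        n += 1
    return times
```

With a frequency of 0.05 calls/second and a last call at t = 15 s, this yields calls at 35 s, 55 s, 75 s, …, matching the worked example given later in the description.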
Further, the step of determining whether the request object needs to be stored in the cache includes:
calculating according to the preset request frequencies of all requests to obtain the arrangement sequence of the requests in the second preset time according to the request time sequence;
calculating the time required for traversing each request according to the arrangement sequence under the condition that the request object is not stored in the cache, and marking the time as first time consumption;
calculating the time required for traversing each request according to the sorting order under the condition that the request object is stored in the cache, and marking the time as second time consumption;
Comparing the first time consumption with the second time consumption;
if the first time consumption is longer than the second time consumption, judging that the request object needs to be stored in the cache;
and if the first time consumption is shorter than the second time consumption, judging that the request object does not need to be stored in the cache.
Further, the step of calculating the arrangement sequence of each request ordered in time sequence in the second preset time according to the preset request frequencies of all the requests includes:
calculating all request moments of all requests within the second preset time according to the preset request frequencies of all requests;
sequencing the requests corresponding to each request time according to the time sequence to obtain the sequence;
wherein, all the request moments of the request j are calculated by the following formula:
H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …

wherein j is any one of all the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial time within the second preset time.
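The arrangement sequence described by this claim can be sketched as follows (an illustrative helper under my own naming, assuming frequencies are given in requests per second):

```python
def request_schedule(freqs, t0, t_end):
    # Merge the per-request series H_j(n) = t0 + n / f_j into one
    # list ordered chronologically over the second preset time.
    # freqs maps request name -> fixed request frequency.
    events = []
    for j, f_j in freqs.items():
        n = 1
        while t0 + n / f_j <= t_end:
            events.append((t0 + n / f_j, j))
            n += 1
    events.sort()  # chronological arrangement sequence
    return events
```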
Further, the step of determining whether the request object needs to be stored in the cache includes:
Acquiring the calling frequency of each object in the cache;
calculating the object of the next request according to the calling frequency, and recording the object as the next object;
judging whether the next object is a request object of the current request or not;
if yes, determining that the request object needs to be stored in the cache, otherwise, determining that the request object does not need to be stored in the cache.
Further, the step of determining whether the currently requested request object is an object in a cache includes:
identifying the request object according to the current request;
comparing the request object with each object in a cache list, wherein the cache list is a list of all objects stored in the cache;
if the request object is consistent with the object in the cache list, judging that the request object is the object in the cache; otherwise, judging that the request object is not the object in the cache.
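The lookup in steps S2-S3 amounts to a membership check against the cache list, falling back to the preset database on a miss. A minimal sketch (hypothetical names, not the patent's implementation):

```python
def resolve(request_object, cache, database):
    # Steps S2/S3: serve from the cache when the request object is in
    # the cache list, otherwise fall back to the preset database.
    if request_object in cache:
        return cache[request_object], True   # cache hit
    return database[request_object], False   # miss: fetch from database
```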
The invention also provides a data caching device, which is used for caching the data with fixed request frequency, and comprises the following components:
a receiving request unit, configured to receive a current request sent by a device;
a judging object unit, configured to judge whether the currently requested request object is an object in a cache;
A calling object unit, configured to call the request object from the cache if the request object is an object in the cache; otherwise, acquiring the request object from a preset database, and judging whether the request object needs to be stored in the cache;
the acquisition time unit is used for acquiring the calling frequency and the calling time of each object in the cache if the request object is judged to be stored in the cache, wherein the calling time is the time when the object is called last time based on the current time;
the calculating time unit is used for calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a specified time from the current time;
the comparison time unit is used for comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as the target object;
and the deleting object unit is used for deleting the target object and storing the request object into the cache.
Further, the call object unit includes:
A calculation sequence subunit, configured to calculate, according to the request frequencies of all the preset requests, a ranking sequence of each request ordered according to time sequence in a second preset time;
a first time-consuming subunit, configured to calculate a time required for traversing each request according to the arrangement sequence when the request object is not stored in the cache, and record as a first time-consuming;
a second time-consuming subunit, configured to calculate a time required for traversing each request according to the ordering order when the request object is stored in the cache, and record as a second time-consuming;
a comparison time consuming subunit configured to compare the first time consuming with the second time consuming;
a first determining subunit, configured to determine that the request object needs to be stored in the cache when the first time consumption is longer than the second time consumption;
and the second judging subunit is used for judging that the request object does not need to be stored in the cache when the first time consumption is shorter than the second time consumption.
The invention also provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
The beneficial effects of the invention are as follows: it is first judged whether the request object of a request is an object in the cache; if it is not, it is further judged whether the request object needs to be stored in the cache; if it does, the object whose next call time within the first preset time is the latest is found by a preset rule, that object is deleted, and the request object is stored in the cache.
Drawings
FIG. 1 is a schematic diagram illustrating steps of a data buffering method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a data buffering device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the data caching method in this embodiment includes:
step S1: receiving a current request sent by equipment;
step S2: judging whether the request object of the current request is an object in a cache or not;
step S3: if the request object is an object in a cache, calling the request object from the cache, otherwise, acquiring the request object from a preset database, and judging whether the request object needs to be stored in the cache;
step S4: if the request object is judged to be stored in the cache, acquiring the calling frequency and the calling time of each object in the cache, wherein the calling time is the time when the object was called for the last time based on the current time;
step S5: calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a designated time from the current time;
step S6: comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as the target object;
Step S7: deleting the target object and storing the request object into the cache.
In this embodiment, the data caching method is used for caching data with a fixed request frequency and is mainly applied to scenarios where the request frequency is fixed, for example an internet-of-things scenario. The current request comprises at least a request header and a request body. The request header contains information such as the request method, the version and the protocol used; the request body contains information such as the parameters for the server side (for example a device ID) and the request content. The request object is obtained according to the request content and is the data, or the tool, used to process the request. For example, in a scenario where device status is processed, the device reports status data including temperature, humidity, battery status and the like; after the device makes a request, the server receives it and initializes a rule engine that analyzes and processes the data, and this rule engine may be an object in the cache. In this embodiment, for convenience of description, the request sent by the device and received by the system is denoted as the current request.
As described in step S2, after the current request is received it must be responded to. If the request object (the data used to respond) is stored in the cache, there is no need to search the preset database; the object is called directly from the cache, which improves efficiency. There are, however, cases where the request object is not in the cache: for example, the request object is a, but the cache holds b, c and d, so the cache cannot be called at this time. Therefore, after the request object is obtained, it is judged whether it is an object in the cache by searching the cache: if it is found, it is judged to be in the cache and is called directly; if it is not found, it is judged not to be in the cache and must be obtained from the system database, a preset database in which the object (response data) corresponding to each request is stored. The system can then further judge whether the request object needs to be stored in the cache, so that in a scenario where the request frequency is fixed the cache resource is used as fully as possible.
In this embodiment, when it is judged that the request object needs to be stored in the cache, this indicates that in subsequent system operation, when responding to requests, calling the request object from the cache is more efficient and costs less. Because the capacity of the cache is fixed, storing the request object requires deleting one of the objects already in the cache; the object deleted is the target object, i.e. the object whose next call time within the first preset time is the latest. After the target object is deleted, the request object is stored in the cache so that it can be called by the next request.
As described in steps S4-S7, the request frequency is the frequency with which the request object is called. In an internet-of-things scenario, the call frequency of each object in the cache and the time each object was last called can be obtained directly from the system. The times at which each object will be called within a first preset time are then calculated from its call frequency and last call time, the first preset time being a period within a designated time starting from the current time. Finally, the calculated times are compared to obtain the latest target time, and the object corresponding to the target time is the target object. This caching method may therefore be called a Latest Arrival (LA) caching algorithm; the LA algorithm is an algorithm defined by this application.
In this embodiment, the following formula is used to calculate the time when each object is called in the first preset time:
T_i(n) = t_i + n / f_i, n = 1, 2, 3, …

wherein i is any one of the objects in the cache, T_i(n) is the time of the nth call of object i within the first preset time, f_i is the call frequency of object i, and t_i is the time of the previous call of object i.
For example, if the call frequency of object A in the cache is 0.05 calls/second and the time of its last call is 12:10:15, then by the formula the time of its next call is 12:10:35, the call after that is at 12:10:55, and so on until all call times of object A within the first preset time are obtained. In the same way, all call times of every object in the cache within the first preset time are calculated and then compared to obtain the latest target time; the object corresponding to the target time is the object that should be deleted, and for convenience of description it is denoted the target object.
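The eviction choice just described reduces to picking the cached object whose next predicted call is furthest away. A minimal sketch of that selection (hypothetical names, my own framing of the LA rule):

```python
def pick_la_victim(cached):
    # LA eviction: delete the cached object whose next predicted call
    # time t_i + 1 / f_i is the latest.
    # cached maps object name -> (t_i last call time, f_i frequency).
    return max(cached, key=lambda i: cached[i][0] + 1.0 / cached[i][1])
```

With frequencies 1/5, 1/4 and 1/3 and all objects last called at t = 0, the next calls fall at t = 5, 4 and 3, so the slowest object is evicted.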
In the caching method provided by the invention, in a scenario where the call frequency is substantially fixed, whether the request object should be stored in the cache is further judged on the basis of which choice costs less, and when the request object does need to be stored, the target object replaced is the one whose next call time within the first preset time is the latest, so that the hit rate of the cache is higher, the cost less and the efficiency greater.
The following explains, in terms of cache hit rate and cost, why the target object is the one deleted from the cache. First, define the cost c as the extra time overhead of a cache miss (i.e. the difference between the time taken to respond to a request when the request object is in the cache and the time taken when it is not and must first be obtained from the database). The total cost S is the sum of all such overheads over a certain time.
Then in time t, the number of cache hits is h, and the total number of cache calls is m, so the hit rate can be expressed as:
r = h / m
the total cost can be expressed as: s= (m-h) c. Assume that the cached object has a frequency of 1,2,3, 4..k, and the corresponding frequencies of calls are f1, f2, f 3..fk, and the last time of call is respectively denoted as t1, t2, t3...tk. Then when the time is t, the time at which the i object is subsequently invoked is: />
Figure SMS_5
The latest instant in this period of time can therefore be expressed as: />
Figure SMS_6
Figure SMS_7
I-object corresponding to the last TLA is the object to be cleaned out of the cache.
The existing caching algorithms include LRU (Least Recently Used), LFU (Least Frequently Used) and FIFO (First In First Out); the object each cleans out of the cache can be expressed by a formula:

T_LRU = max{ t - t_i }, i ∈ (1, 2, 3, …, k)

T_LFU = min{ f_i }, i ∈ (1, 2, 3, …, k)

T_FIFO = max{ t - t_i' }, i ∈ (1, 2, 3, …, k), where t_i' is the time object i entered the cache.

Comparing the existing algorithms with the LA algorithm provided by this scheme:
For example, suppose the cached objects are a, b, c, d with call frequencies fa = 1/5, fb = 1/4, fc = 1/3, fd = 1/2, that only 3 objects can be stored in the cache, that the cost c is 1, and that U denotes the set of objects currently in the cache. If the cache initially holds a, b, c and object d is ready to be added at t = 2, the hit count h and total cost S at each time point within time t are as follows:

[Table lost in extraction: for each time point it listed the hit count h and total cost S under the LA, LRU, LFU and FIFO policies.]

As the table showed, compared with the existing LRU, LFU and FIFO algorithms, the LA algorithm provided by the invention has the highest cache hit count at every time point and the least total cost.
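The four eviction rules compared above can be placed side by side in a small sketch (my own naming; the FIFO entry time field is an assumption, since the original formula was garbled in extraction):

```python
def eviction_choices(now, objs):
    # Which object each policy would clean out, per the formulas above.
    # objs maps name -> {'t_last': last call time, 'f': call frequency,
    #                    't_in': time the object entered the cache}.
    return {
        'LRU':  max(objs, key=lambda i: now - objs[i]['t_last']),
        'LFU':  min(objs, key=lambda i: objs[i]['f']),
        'FIFO': min(objs, key=lambda i: objs[i]['t_in']),
        'LA':   max(objs, key=lambda i: objs[i]['t_last'] + 1 / objs[i]['f']),
    }
```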
In one embodiment, the step S3 includes:
step S31: calculating according to the preset request frequencies of all requests to obtain the arrangement sequence of the requests in the second preset time according to the request time sequence;
step S32: calculating the time required for traversing each request according to the arrangement sequence under the condition that the request object is not stored in the cache, and marking the time as first time consumption;
step S33: calculating the time required for traversing each request according to the sorting order under the condition that the request object is stored in the cache, and marking the time as second time consumption;
Step S34: comparing the first time consumption with the second time consumption;
step S35: if the first time consumption is longer than the second time consumption, judging that the request object needs to be stored in the cache;
step S36: and if the first time consumption is shorter than the second time consumption, judging that the request object does not need to be stored in the cache.
In this embodiment, whether the request object needs to be stored in the cache is judged through steps S31-S36. As described in step S31, since the preset request frequencies of all the requests are fixed, the request frequency of each request can be obtained, and the chronological arrangement sequence of the requests within the second preset time is then calculated from those frequencies.
In one embodiment, the step S31 includes:
step S310: calculating all request moments of all requests within the second preset time according to the preset request frequencies of all requests;
step S310: sequencing the requests corresponding to each request time according to the time sequence to obtain the sequence;
wherein, all the request moments of the request j are calculated by the following formula:
H_j(n) = t_0 + n / f_j, n = 1, 2, 3, …

wherein j is any one of all the preset requests, H_j(n) is the request time of the nth request of request j, f_j is the request frequency of request j, and t_0 is the initial time within the second preset time.
In this embodiment, since the request frequencies of all requests within the second preset time are known, the request times of all requests in that period can be calculated from the frequencies, and the requests corresponding to each request time are then ordered chronologically to obtain the arrangement sequence. For example, let the initial time be t_0, and let request a have frequency f_a, request b frequency f_b and request c frequency f_c. The time at which request a next arrives is:

H_a(1) = t_0 + 1 / f_a

The time at which request b next arrives is:

H_b(1) = t_0 + 1 / f_b

The time at which request c next arrives is:

H_c(1) = t_0 + 1 / f_c

By analogy, all arrival times of request a can be expressed as H_a(n) = t_0 + n / f_a, where n is a natural number; all arrival times of request b as H_b(n) = t_0 + n / f_b; and all arrival times of request c as H_c(n) = t_0 + n / f_c. The arrangement sequence of the requests is then obtained by sorting these times from small to large, i.e. ordering H_a, H_b, H_c from small to large and recording the requests arriving at each time instant, resulting in a sequence Q, e.g. Q = {(2, a), (3, b), (5, c), (6, ab), (8, a), (9, b)}, where (2, a) means that at time 2 the arriving request is a; (3, b) means that at time 3 the arriving request is b; and (6, ab) means that at time 6 the arriving requests are a and b.
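Building a sequence like Q, with simultaneous arrivals collapsed into one entry, can be sketched as follows (illustrative only; the pair format mirrors the example above):

```python
from itertools import groupby

def group_by_instant(events):
    # Collapse simultaneous arrivals into (time, request set) pairs,
    # producing entries like (6, {'a', 'b'}) as in the sequence Q.
    events = sorted(events)
    return [(t, {j for _, j in grp})
            for t, grp in groupby(events, key=lambda e: e[0])]
```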
As described in steps S32-S33, for the two conditions (the request object stored in the cache, and not stored in the cache), the requests are traversed in the above arrangement sequence and the corresponding total time consumed is obtained, denoted the first time consumption and the second time consumption respectively; whether the request object needs to be stored in the cache can then be judged from these two values. Continuing the example above, suppose that within the second preset time, say 9 minutes, the arrangement sequence of the requests is a-b-c-ab-a-b, that the current request object is a, and that only b and c can be placed in the cache. If object a is not stored in the cache, the sum of the time the system spends traversing each request in that order is the first time consumption. Note that obtaining an object from the database takes longer than obtaining it from the cache, and that the time to respond to each request, whether the object is fetched from the database or from the cache, can be obtained through code statistics, so the first time consumption can be obtained statistically. Similarly, when object a is stored in the cache, the object whose arrival is last in the sequence (here object b) is deleted from the cache, leaving a and c in the cache; the sequence of requests within the 9 minutes is still a-b-c-ab-a-b, and the sum of the time the system spends traversing each request in that order is the second time consumption.
As described in steps S34-S36, after the first and second time consumptions are obtained they are compared. When the first time consumption is longer than the second, the system spends more time, responds more slowly and incurs more cost when the request object is not stored in the cache, so it is judged that the request object needs to be stored in the cache. Conversely, when the first time consumption is shorter than the second, the system spends less time and incurs less cost when the request object is not stored in the cache, so it is judged that the request object does not need to be stored in the cache.
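The two traversals can be simulated in a few lines (a sketch under the patent's simplification that a hit costs 0 and a miss costs 1; function names are my own):

```python
def traversal_cost(order, cache, hit_cost=0, miss_cost=1):
    # Total time to serve the requests in `order` with fixed cache
    # contents: hits are cheap, misses pay the database round trip.
    return sum(hit_cost if r in cache else miss_cost for r in order)

def should_store(order, cache_without, cache_with):
    # Store the request object only when the traversal without it
    # (first time consumption) exceeds the one with it (second).
    return traversal_cost(order, cache_without) > traversal_cost(order, cache_with)
```

For the order a-b-c-a-b-a, caching a (and evicting b) turns three misses into two, so the object is stored.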
In another embodiment, the time consumption at each request time under the two conditions (the request object stored in the cache, and not stored) may first be calculated; following the principle of choosing the shorter time, a preferred policy on whether to store the request object in the cache at each time is obtained, and when requests are later responded to they are executed according to this preferred policy, thereby deciding whether the request object of each request needs to be stored in the cache.
The detailed process is as follows. The first step: the order of the requests is denoted Q, with the sequence Q = {(t1, u1), (t2, u2), (t3, u3), …, (tn, un)}, where t represents a time and u the set of requests that arrive at that time; as in the example above, when t1 = 2, u1 = {a}, and when t4 = 6, u4 = {ab}. A binary tree is used because each arriving request object divides into two cases, one stored in the cache and one not stored. Each node S(i) of the tree consists of five parts: a parent-node index, a left-child-node index, a right-child-node index, the total time c(i) consumed up to time t(i), and the current cache set m(i). The left child node Sl(i+1) of S(i) represents the case in which the arriving object set A, with A ∈ u(i) and A ∉ m(i), does not use the cache; the right child node Sr(i+1) represents the case in which the set A does use the cache. In general, since the time consumed on a hit and on a miss of one object differ greatly, the hit time is set to 0 and the miss time to 1.
And a second step of: let the root node of the binary tree be S(0), representing the initial state of the cache, where c(0) = 0, m(0) is the initial cache set, and the parent-node and child-node indexes are null. The elements of the sequence Q are taken out in turn. At time t1 it is first judged whether u1 ∈ m(0) holds. If it holds, then in node Sr(1) we have c(1) = c(0) and m(1) = m(0); the right-child index of S(0) points to Sr(1), and the parent index of Sr(1) points to S(0). If it does not hold, i.e. u1 ∉ m(0), two situations are distinguished: the set u1 uses the cache, or does not use it. When it uses the cache, the right-child index of S(0) points to Sr(1), the parent index of Sr(1) points to S(0), c(1) equals c(0) plus the number k of elements in the set u1 \ m(0), and m(1) = p ∪ u1, where the set p is the set of cached instances left after k objects are eliminated from m(0) using the LA algorithm described above. When the cache is not used, the left-child index of S(0) points to Sl(1), the parent index of Sl(1) points to S(0), c(1) equals c(0) plus the number k of elements in the set, and m(1) = m(0).
The third step: repeat the above steps until the elements in Q are completely taken out, finally obtaining a binary tree. Among all leaf nodes of the binary tree within the time period corresponding to the second preset time, find the node with the smallest c(n), then search upward along the parent node indexes until the root node S(0); this path is the shortest time-consuming path. According to the rule that a left child node does not use the cache and a right child node does, judge which requests' objects on the path need to be stored in the cache, thereby obtaining the preferred policy of whether to store the request object in the cache at each request time in the time period.
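As a rough illustration, the three steps above can be sketched as a breadth-first expansion of the decision tree. This is a minimal sketch under stated assumptions: hit cost 0, miss cost 1, and a simple alphabetical eviction standing in for the LA eviction; all function and variable names are illustrative, not from the patent.

```python
def best_policy(Q, capacity, init_cache=frozenset()):
    """Expand, per arrival time, the 'do not cache' (left) and 'cache' (right)
    branches of the binary tree; a hit costs 0 and a miss costs 1 per object.
    Eviction here drops alphabetically-last objects as a stand-in for LA."""
    states = [(0, frozenset(init_cache), [])]    # (c(i), m(i), decisions so far)
    for t, u in Q:
        nxt = []
        for cost, cache, path in states:
            missed = u - cache
            if not missed:                       # u ⊆ m(i): all hits, one child
                nxt.append((cost, cache, path + ["hit"]))
                continue
            k = len(missed)                      # k misses are paid either way
            nxt.append((cost + k, cache, path + ["skip"]))    # left: not cached
            keep = capacity - k                  # right: cache them, evicting
            kept = frozenset(sorted(cache)[:max(keep, 0)])
            nxt.append((cost + k, kept | missed, path + ["store"]))
        states = nxt
    return min((c, p) for c, m, p in states)     # leaf with the smallest c(n)

Q = [(2, {"a"}), (3, {"b"}), (5, {"c"}), (6, {"a", "b"})]
cost, decisions = best_policy(Q, capacity=2)
print(cost)   # 3: three unavoidable first-time misses, then a hit on {a, b}
```

For this Q the cheapest path stores a and b, skips c, and then serves {a, b} from the cache, mirroring the left/right-child reading of the shortest path.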
In another embodiment, the step S3 includes:
step S31': acquiring the calling frequency of each object in the cache;
step S32': calculating the object of the next request according to the calling frequency, and recording the object as the next object;
step S33': judging whether the next object is a request object of the current request or not;
step S34': if yes, determining that the request object needs to be stored in the cache, otherwise, determining that the request object does not need to be stored in the cache.
In addition to determining whether the request object needs to be stored in the cache through the above steps S31 to S36, the determination may be made according to whether the next object to be called is the current request object. As described in steps S31'-S34' above, the calling frequency of each object in the cache is obtained from the system, and the request object of the next request is calculated from the calling frequency and the last calling time of each object, that is, the object of the request that follows the current request; the calculation method is described in step S31 above and is not repeated here. After the next object is obtained, it is further judged whether the next object is the request object of the current request: if so, it is determined that the request object needs to be stored in the cache so that it can be called next time; if not, it is determined that the request object does not need to be stored in the cache.
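A minimal sketch of this judgment, assuming each object's last calling time and calling frequency are available as plain numbers; the names and values below are illustrative, not from the patent:

```python
def predict_next(objects):
    """objects: dict name -> (t_i, f_i), the last calling time and calling
    frequency.  The next requested object is taken to be the one whose next
    expected call T_i = t_i + 1/f_i is earliest."""
    return min(objects, key=lambda name: objects[name][0] + 1.0 / objects[name][1])

def needs_caching(request_obj, objects):
    # Store the current request object only if it is the one predicted next
    return predict_next(objects) == request_obj

stats = {"a": (0.0, 1 / 5), "b": (0.0, 1 / 2)}   # hypothetical t_i, f_i values
print(needs_caching("b", stats))  # True: b is expected again first, at t = 2
print(needs_caching("a", stats))  # False: a is not expected until t = 5
```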
In one embodiment, the step S2 includes:
step S21: identifying the request object according to the current request;
step S22: comparing the request object with each object in a cache list, wherein the cache list is a list of all objects stored in the cache;
step S23: if the request object is consistent with the object in the cache list, judging that the request object is the object in the cache; otherwise, judging that the request object is not the object in the cache.
In this embodiment, it is known that the request object is the data or tool used to process the request. Since each request includes a request header and a request body, and the request body includes information such as parameters of the server side (e.g. a device ID) and the request content, the information needed to process the request can be obtained from the request content, and the request object of the current request is identified accordingly. The request object is then compared with the objects in a cache list, which is a list of all objects stored in the cache. When an object consistent with the request object is found in the cache list, it can be determined that the request object is an object in the cache; when no consistent object is found, the request object is not an object in the cache.
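Steps S21-S23 amount to a membership test against the cache list; a minimal sketch with hypothetical cache contents:

```python
def is_cached(request_obj, cache_list):
    """Steps S21-S23: compare the identified request object with each object
    in the cache list (the list of all objects stored in the cache)."""
    return any(obj == request_obj for obj in cache_list)

cache_list = ["b", "c", "d"]                  # hypothetical cache contents
print(is_cached("b", cache_list))  # True  -> call object b from the cache
print(is_cached("a", cache_list))  # False -> fetch object a from the database
```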
Referring to fig. 2, the data caching apparatus in this embodiment includes:
a receiving request unit 100, configured to receive a current request sent by a device;
a judging object unit 200, configured to judge whether the currently requested request object is an object in a cache;
a calling object unit 300, configured to call the request object from the cache if the request object is an object in the cache, or acquire the request object from a preset database, and determine whether the request object needs to be stored in the cache;
an obtaining time unit 400, configured to obtain, if it is determined that the request object needs to be stored in the cache, a calling frequency and a calling time of each object in the cache, where the calling time is a time when the object was previously called based on a current time;
a calculating time unit 500, configured to calculate, according to the calling frequency and the calling time, a time when each object is called each time within a first preset time, where the first preset time is a time period within a specified time from a current time;
a comparison time unit 600, configured to compare the invoked time corresponding to each object to obtain a target time with the latest time, and record an object corresponding to the target time as the target object;
and a deleting object unit 700, configured to delete the target object and store the request object in the cache.
In this embodiment, the data caching device is used for caching data with fixed request frequency, and is mainly applied to a scene with fixed request frequency, for example, an internet of things scene. The current request at least comprises a request header and a request body, wherein the request header comprises information such as a request method, a version, a used protocol and the like, the request body comprises information such as parameters (such as equipment ID) of a server side, request content and the like, a request object is obtained according to the request content, and the request object is data or tools for processing the request. For example, in a scenario where the status of the device is processed, the device needs to report status data, including temperature, humidity, battery status, etc., and after the device makes a request, the server receives the request and initializes a rule engine, which is responsible for analyzing and processing the data, where the rule engine may be an object in a cache, in this embodiment, for convenience of description, the request sent by the device received by the current system is denoted as the current request.
As described in the determining object unit 200, after the current request is received, it needs to be responded to. If the request object (the data used for the response) is stored in the cache, there is no need to search a preset database; the request object is simply called directly from the cache, which improves efficiency. However, the request object may not be in the cache: for example, if the request object is a but the cache holds b, c and d, the cache cannot serve the call. Therefore, after the request is obtained, it is judged whether the corresponding request object is an object in the cache. If it is found in the cache, the request object is called directly; if not, the request object is acquired from the preset database, which stores the request object (response data) corresponding to each request. To improve the efficiency of subsequent system operation in this scenario of fixed request frequency, it is then further judged whether the request object needs to be stored in the cache.
In this embodiment, when it is determined that the request object needs to be stored in the cache, this indicates that in subsequent system operation, calling the request object from the cache when responding to requests is more efficient and less costly. Because the capacity of the cache is fixed, if the request object is to be stored, one of the objects originally in the cache must be deleted; this is done by deleting the target object, which is the object whose next calling time within the first preset time is the latest among all cached objects. After the target object is deleted, the request object is stored in the cache so that it can be called by the next request.
As described in the above-mentioned acquisition time unit 400, calculation time unit 500 and comparison time unit 600, the above request frequency is the frequency at which the request object is called, and the internet of things scenario is generally one where the calling frequency is substantially fixed. Therefore, the calling frequency of each object in the cache and the last calling time of each object can be obtained directly from the system, where the calling time is the time the object was last called, relative to the current time. The time of each call of each object within a first preset time is then calculated from its calling frequency and last calling time, the first preset time being a time period within a specified length from the current time. Finally, the calculated times are compared to obtain the latest target time; the object corresponding to the target time is the target object. This caching method may therefore be called a Latest Arrival (LA) caching algorithm; the LA algorithm is an algorithm defined in this application.
In this embodiment, the following formula is used to calculate the time when each object is called next time within the first preset time:
$$T_i = t_i + \frac{n}{f_i}, \quad n = 1, 2, 3, \ldots$$

wherein i is any object in the cache, $T_i$ is the time of the n-th call of object i after its last call (the next call corresponds to n = 1), $f_i$ is the calling frequency of object i, and $t_i$ is the time object i was last called.
For example, if the calling frequency of object A in the cache is 0.05 times/second and the time of its last call is 12:10:15, then by the above formula the time of its next call is 12:10:35, the call after that is at 12:10:55, and so on, giving all calling times of object A within the first preset time. Likewise, all calling times of every object in the cache within the first preset time are calculated; these times are then compared to obtain the latest target time. The object corresponding to the target time is the object that should be deleted and, for convenience of description, is denoted the target object.
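The example can be reproduced with a short sketch (the calendar date is arbitrary; only the times of day matter):

```python
from datetime import datetime, timedelta

def call_times(last_call, freq_per_sec, horizon_sec):
    """All expected calling times T_i = t_i + n/f_i of one object within the
    first preset time (here a horizon of `horizon_sec` seconds)."""
    period = timedelta(seconds=1.0 / freq_per_sec)
    end = last_call + timedelta(seconds=horizon_sec)
    times, t = [], last_call + period
    while t <= end:
        times.append(t)
        t += period
    return times

# Object A from the example: 0.05 calls/second, last called at 12:10:15
times = call_times(datetime(2019, 3, 8, 12, 10, 15), 0.05, 60)
print([t.strftime("%H:%M:%S") for t in times])  # ['12:10:35', '12:10:55', '12:11:15']
```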
In the caching method provided by this application, in a scenario where the calling frequency is substantially fixed, whether the request object should be stored in the cache is further judged with the lower cost as the criterion; when the request object needs to be stored, the replaced target object is the one whose next calling time within the first preset time is the latest among all cached objects, so that the cache hit rate is higher, the cost lower and the efficiency higher.
The following explains, in terms of cache hit rate and cost, why the target object is the one deleted from the cache. First, define the cost c as the additional time overhead of a cache miss (i.e., for one request, the difference between the time consumed when the request object is in the cache and the time consumed when it is not and must be obtained from the database before responding). The total cost S is the sum of all such overheads within a certain time.
Then, within time t, if the number of cache hits is h and the total number of cache calls is m, the hit rate can be expressed as:

$$r = \frac{h}{m}$$

and the total cost can be expressed as S = (m − h)c. Assume the cached objects are numbered 1, 2, 3, …, k, with corresponding calling frequencies f1, f2, f3, …, fk and last calling times t1, t2, t3, …, tk. Then at time t, the time at which object i is subsequently invoked is:

$$T_i = t_i + \frac{1}{f_i}$$

The latest such instant within this period can therefore be expressed as:

$$T_{LA} = \max\left\{ t_i + \frac{1}{f_i} \right\}, \quad i \in (1, 2, 3, \ldots, k)$$

That is, the object i corresponding to the latest $T_{LA}$ is the object that needs to be cleaned out of the cache.
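The LA selection above reduces to taking the maximum of $t_i + 1/f_i$; a minimal sketch using the calling frequencies of the worked example below, with an assumed common last-call time of 0:

```python
def la_target(objects):
    """objects: dict name -> (t_i, f_i).  The LA target object is the one
    whose next invocation T_i = t_i + 1/f_i is the latest (max over i)."""
    return max(objects, key=lambda name: objects[name][0] + 1.0 / objects[name][1])

# fa = 1/5, fb = 1/4, fc = 1/3, fd = 1/2, all last called at t = 0 (assumed)
cache = {"a": (0.0, 1 / 5), "b": (0.0, 1 / 4), "c": (0.0, 1 / 3), "d": (0.0, 1 / 2)}
print(la_target(cache))  # 'a': next expected at t = 5, later than 4, 3 and 2
```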
The existing caching algorithms include the LRU (least recently used), LFU (least frequently used) and FIFO algorithms, in which the object to be cleaned out of the cache can be expressed respectively by the formulas:

$$T_{LRU} = \max\{t - t_i\}, \quad i \in (1, 2, 3, \ldots, k)$$

$$T_{LFU} = \min\{f_i\}, \quad i \in (1, 2, 3, \ldots, k)$$

$$T_{FIFO} = \max\{t_i\}, \quad i \in (1, 2, 3, \ldots, k)$$

Comparing these existing algorithms with the LA algorithm provided by this scheme:
For example, suppose the cache works with objects a, b, c, d, whose corresponding calling frequencies are fa = 1/5, fb = 1/4, fc = 1/3, fd = 1/2, only 3 objects can be stored in the cache, the cost c is 1, and U denotes the set of objects currently in the cache. If the initial objects in the cache are a, b and c, and object d is ready to be added to the cache at t = 2, then the hit count h and the total cost S at each time point within time t are as follows:
[Table: hit count h and total cost S at each time point for the LA, LRU, LFU and FIFO algorithms; the table itself appears as an image in the original document.]
As can be seen from the table, compared with the existing LRU, LFU and FIFO algorithms, the LA algorithm provided by the invention has the highest cache hit count at each time point and the least total cost.
In one embodiment, the calling object unit 300 includes:
a calculation sequence subunit, configured to calculate, according to the request frequencies of all the preset requests, a ranking sequence of each request ordered according to the request time in a second preset time;
a first time-consuming subunit, configured to calculate a time required for traversing each request according to the arrangement sequence when the request object is not stored in the cache, and record as a first time-consuming;
a second time-consuming subunit, configured to calculate the time required for traversing each request according to the arrangement sequence when the request object is stored in the cache, recorded as a second time consumption;
a comparison time consuming subunit configured to compare the first time consuming with the second time consuming;
a first determining subunit, configured to determine that the request object needs to be stored in the cache when the first time consumption is longer than the second time consumption;
and the second judging subunit is used for judging that the request object does not need to be stored in the cache when the first time consumption is shorter than the second time consumption.
In this embodiment, since the preset request frequencies of all the requests are fixed, the request frequency of each request can be obtained, and then the arrangement sequence of the requests in the second preset time according to the sequence of the request time is calculated according to the request frequency.
In one embodiment, the computing sequence subunit includes:
the calculating time module is used for calculating all request time of each request in the second preset time according to the preset request frequency of all requests;
the ordering request module is used for ordering the requests corresponding to each request moment according to the time sequence to obtain the ordering sequence;
Wherein all request times of request j are calculated by the following formula:

$$H_j(n) = t_0 + \frac{n}{f_j}$$

wherein j is any one of all preset requests, $H_j(n)$ is the request time of the n-th request of request j, $f_j$ is the request frequency of request j, and $t_0$ is the initial time within the second preset time.
In this embodiment, since the request frequencies of all requests within the second preset time are known, the request times of all requests in the time period can be calculated from the request frequencies, and the requests corresponding to each request time are then ordered chronologically to obtain the above arrangement sequence. For example, let the initial time be $t_0$, the frequency of request a be $f_a$, the frequency of request b be $f_b$, and the frequency of request c be $f_c$. The time at which request a next arrives is $t_0 + \frac{1}{f_a}$, the time at which request b next arrives is $t_0 + \frac{1}{f_b}$, and the time at which request c next arrives is $t_0 + \frac{1}{f_c}$. By analogy, all arrival times of request a can be expressed as $H_a = t_0 + \frac{n}{f_a}$, where n is a natural integer; all arrival times of request b as $H_b = t_0 + \frac{n}{f_b}$; and all arrival times of request c as $H_c = t_0 + \frac{n}{f_c}$. The arrangement sequence of the corresponding requests is then obtained by ordering these times from small to large, i.e. ordering $H_a$, $H_b$, $H_c$ from small to large and recording the requests arriving at each time, resulting in a sequence Q, e.g. Q = {(2, a), (3, b), (5, c), (6, ab), (8, a), (9, b)}, where (2, a) indicates that request a arrives at time 2, (3, b) indicates that request b arrives at time 3, and (6, ab) indicates that requests a and b arrive at time 6.
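The construction of the sequence Q can be sketched as merging the per-request arrival times $H_j(n) = t_0 + n/f_j$; the frequencies below are illustrative, chosen so the arithmetic is exact:

```python
from collections import defaultdict

def arrival_sequence(freqs, t0, t_end):
    """Merge the per-request arrival times H_j(n) = t0 + n/f_j into the
    chronological sequence Q of (time, requests arriving at that time)."""
    arrivals = defaultdict(list)
    for name, f in freqs.items():
        n = 1
        while t0 + n / f <= t_end:
            arrivals[t0 + n / f].append(name)
            n += 1
    return sorted((t, sorted(names)) for t, names in arrivals.items())

# Hypothetical frequencies: request a every 2 s, request b every 4 s, t0 = 0
print(arrival_sequence({"a": 0.5, "b": 0.25}, t0=0, t_end=8))
# [(2.0, ['a']), (4.0, ['a', 'b']), (6.0, ['a']), (8.0, ['a', 'b'])]
```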
As described in the first time-consuming subunit and the second time-consuming subunit, under the condition that the request object is not stored in the cache and under the condition that it is stored, the corresponding requests are traversed according to the arrangement sequence to obtain the corresponding time consumption, recorded respectively as the first time consumption and the second time consumption, so that whether the request object needs to be stored in the cache can be determined from the two. In the above example, suppose that within the second preset time, e.g. 9 minutes, the arrangement order of the requests is a-b-c-ab-a-b, the current request object is a, and only b and c can be placed in the cache. If object a is not stored in the cache, the sum of the time consumed by the system in traversing each request in order is the first time consumption. Note that the time consumed acquiring an object from the database is longer than that consumed acquiring it from the cache, and the time consumed responding to each request, whether the object is acquired from the database or from the cache, can be obtained through code statistics, so the first time consumption can be obtained statistically. Similarly, when object a is stored in the cache, object b, the last-arranged object in the sequence, is deleted from the cache, leaving a and c in the cache; the order of the requests within the 9 minutes is still a-b-c-ab-a-b, and the sum of the time consumed by the system in traversing each request in this order is the second time consumption.
After the first time consumption and the second time consumption are obtained, they are compared. When the first time consumption is longer than the second, not storing the request object in the cache costs the system more time and makes it less efficient, so it is determined that the request object needs to be stored in the cache. Conversely, when the first time consumption is shorter than the second, the system consumes less time and incurs less cost when the request object is not stored in the cache, so it is determined that the request object does not need to be stored in the cache.
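A minimal sketch of the first/second time-consumption comparison, using a unit miss cost and holding the cache fixed during the traversal (an assumption; the embodiment measures real response times statistically):

```python
def traversal_cost(order, cache, hit_cost=0, miss_cost=1):
    """Total time consumed traversing the request order against a fixed cache
    (per-object hit/miss costs stand in for the measured response times)."""
    return sum(hit_cost if obj in cache else miss_cost
               for reqs in order for obj in reqs)

order = ["a", "b", "c", "ab", "a", "b"]     # arrangement within the window
first = traversal_cost(order, {"b", "c"})   # a not stored: cache holds b, c
second = traversal_cost(order, {"a", "c"})  # a stored, b evicted: a, c remain
print(first, second)  # 3 3 -> a tie here; a longer window would separate them
```

When the two traversal costs differ, the smaller one decides whether the request object is stored.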
In another embodiment, the time consumed by the request object at each request time may first be calculated for both cases (stored in the cache and not stored in the cache); following the principle of selecting the shorter time, a preferred policy of whether to store the request object in the cache at each time is obtained, and when the requests are responded to, they are executed according to this preferred policy, thereby determining whether the request object of each request needs to be stored in the cache. The detailed process is as follows. The first step: the order of the requests may be denoted as a sequence Q = {(t1, u1), (t2, u2), (t3, u3), …, (tn, un)}, where t represents a time and u represents the set of requests arriving at that time; in the example above, u1 = {a} when t1 = 2, and u4 = {ab} when t4 = 6. Each node S(i) of the binary tree (each arriving request object gives rise to two cases: one where it is stored in the cache, and one where it is not) consists of five parts: a parent node index, a left child node index, a right child node index, the total time consumed c(i) at time t(i), and the current cache set m(i). The left child node Sl(i+1) of S(i) represents the case where the object set A ⊆ u(i) with A ⊄ m(i) does not use the cache, and the right child node Sr(i+1) represents the case where the set A does use the cache. Since the time consumed by a hit and by a miss of one object differ greatly, the hit time is set to 0 and the miss time to 1.
The second step: let the root node of the binary tree be S(0), representing the initial state of the cache, where c(0) = 0, m(0) = ∅, and the parent node index and child node indexes are null. Take the elements out of the sequence Q in order. At time t1, first judge whether u1 ⊆ m(0) holds. If it holds, then in the node Sr(1), c(1) = 0 and m(1) = m(0); the right child node index of S(0) points to Sr(1), and the parent node index of Sr(1) points to S(0). If it does not hold, i.e. u1 ⊄ m(0), two situations are distinguished: the set u1 uses the cache, or it does not. When the cache is used, the S(0) right child node index points to Sr(1), the Sr(1) parent node index points to S(0), c(1) equals c(0) plus the number k of elements in the set u1 \ m(0), and m(1) = p ∪ u1, where the set p is the set of cached objects left after k objects are eliminated using the LA algorithm described above. When the cache is not used, the S(0) left child node index points to Sl(1), the Sl(1) parent node index points to S(0), c(1) equals c(0) plus the number k of elements in the set u1 \ m(0), and m(1) = m(0).
The third step: repeat the above steps until the elements in Q are completely taken out, finally obtaining a binary tree. Among all leaf nodes of the binary tree within the time period corresponding to the second preset time, find the node with the smallest c(n), then search upward along the parent node indexes until the root node S(0); this path is the shortest time-consuming path. According to the rule that a left child node does not use the cache and a right child node does, judge which requests' objects on the path need to be stored in the cache, thereby obtaining the preferred policy of whether to store the request object in the cache at each request time in the time period.
In another embodiment, the calling object unit 300 includes:
a calling frequency subunit, configured to obtain a calling frequency of each object in the cache;
a calculation object subunit, configured to calculate an object of the next request according to the calling frequency, and record the object as a next object;
a judging object subunit, configured to judge whether the next object is the request object of the current request;
and a judging and storing subunit, configured to determine, when the next object is the request object of the current request, that the request object needs to be stored in the cache, and otherwise that the request object does not need to be stored in the cache.
In this embodiment, the calling frequency of each object in the cache is obtained from the system, and the request object of the next request is calculated from the calling frequency and the last calling time of each object, that is, the object of the request that follows the current request; the calculation method refers to the above calculation time subunit and is not repeated here. After the next object is obtained, it is further judged whether the next object is the request object of the current request: if so, it is determined that the request object needs to be stored in the cache so that it can be called next time; if not, it is determined that the request object does not need to be stored in the cache.
In one embodiment, the determining object unit 200 includes:
an identification object subunit, configured to identify the request object according to the current request;
a comparison object subunit, configured to compare the request object with each object in a cache list, where the cache list is a list of all objects stored in the cache;
a judging cache subunit, configured to judge that the request object is an object in the cache if the request object is consistent with an object in the cache list; otherwise, judging that the request object is not the object in the cache.
In this embodiment, it is known that the request object is the data or tool used to process the request. Since each request includes a request header and a request body, and the request body includes information such as parameters of the server side (e.g. a device ID) and the request content, the information needed to process the request can be obtained from the request content, and the request object of the current request is identified accordingly. The request object is then compared with the objects in a cache list, which is a list of all objects stored in the cache. When an object consistent with the request object is found in the cache list, it can be determined that the request object is an object in the cache; when no consistent object is found, the request object is not an object in the cache.
Referring to fig. 3, in an embodiment of the present invention there is further provided a computer device, which may be a server, whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus, wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used for storing all data required for calling cached objects. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a data caching method.
The processor executes the steps of the data caching method: receiving a current request sent by equipment; judging whether the request object of the current request is an object in a cache or not; if the request object is an object in a cache, calling the request object from the cache, otherwise, acquiring the request object from a preset database, and judging whether the request object needs to be stored in the cache; if the request object is judged to be stored in the cache, acquiring the calling frequency and the calling time of each object in the cache, wherein the calling time is the time when the object was called for the last time based on the current time; calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a designated time from the current time; comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as the target object; deleting the target object and storing the request object into the cache.
The step of calculating, according to the calling frequency and the calling time, the time when each object is called each time within the first preset time includes: calculating that time by the following formula:

$$T_i = t_i + \frac{n}{f_i}, \quad n = 1, 2, 3, \ldots$$

wherein i is any object in the cache, $T_i$ is the time of the n-th call of object i after its last call (the next call corresponds to n = 1), $f_i$ is the calling frequency of object i, and $t_i$ is the time object i was last called.
In one embodiment, the step of determining whether the request object needs to be stored in the cache includes: calculating according to the preset request frequencies of all requests to obtain the arrangement sequence of the requests in the second preset time according to the request time sequence; calculating the time required for traversing each request according to the arrangement sequence under the condition that the request object is not stored in the cache, and marking the time as first time consumption; calculating the time required for traversing each request according to the sorting order under the condition that the request object is stored in the cache, and marking the time as second time consumption; comparing the first time consumption with the second time consumption; if the first time consumption is longer than the second time consumption, judging that the request object needs to be stored in the cache; and if the first time consumption is shorter than the second time consumption, judging that the request object does not need to be stored in the cache.
In one embodiment, the step of calculating, according to the preset request frequencies of all requests, the arrangement order of the requests ordered chronologically within the second preset time includes: calculating all request times of each request within the second preset time according to the preset request frequencies of all requests; and ordering the requests corresponding to each request time chronologically to obtain the sequence; wherein all request times of request j are calculated by the following formula:

$$H_j(n) = t_0 + \frac{n}{f_j}$$

wherein j is any one of all preset requests, $H_j(n)$ is the request time of the n-th request of request j, $f_j$ is the request frequency of request j, and $t_0$ is the initial time within the second preset time.
In one embodiment, the step of determining whether the request object needs to be stored in the cache includes: acquiring the calling frequency of each object in the cache; calculating the object of the next request according to the calling frequency, and recording the object as the next object; judging whether the next object is a request object of the current request or not; if yes, determining that the request object needs to be stored in the cache, otherwise, determining that the request object does not need to be stored in the cache.
In one embodiment, the step of determining whether the currently requested request object is an object in the cache includes: identifying the request object according to the current request; comparing the request object with each object in a cache list, wherein the cache list is a list of all objects stored in the cache; if the request object is consistent with the object in the cache list, judging that the request object is the object in the cache; otherwise, judging that the request object is not the object in the cache.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of a portion of the architecture in connection with the present application and is not intended to limit the computer device to which the present application is applied.
An embodiment of the present invention further provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements a data caching method, specifically: receiving a current request sent by a device; determining whether the request object of the current request is an object in a cache; if the request object is an object in the cache, calling the request object from the cache; otherwise, acquiring the request object from a preset database and determining whether the request object needs to be stored in the cache; if it is determined that the request object needs to be stored in the cache, acquiring the calling frequency and the calling time of each object in the cache, where the calling time is the time at which the object was last called before the current time; calculating the time at which each object will be called within a first preset time according to the calling frequency and the calling time, where the first preset time is a time period within a specified time from the current time; comparing the called times corresponding to the objects to obtain the latest target time, and recording the object corresponding to the target time as the target object; and deleting the target object and storing the request object in the cache.
The step of calculating the time when each object is called each time within the first preset time according to the calling frequency and the calling time includes: calculating the time when each object is called each time within the first preset time by using the following formula:
T_i = t_i + 1/f_i
wherein i is any object in the cache, T_i is the next calling time of object i, f_i is the calling frequency of object i, and t_i is the time at which object i was previously called.
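Assuming the next-call formula T_i = t_i + 1/f_i implied by the variable definitions above, the eviction step (choosing the target object whose next call lies furthest in the future) could be sketched as follows; the object names and metadata values are assumptions for illustration.

```python
# Eviction sketch: compute T_i = t_i + 1/f_i for every cached object and
# evict the object with the latest next-call time. Values are illustrative.

def choose_eviction_target(cache_meta):
    """cache_meta maps object -> (calling frequency f_i, last call time t_i).
    Returns the object whose next call time T_i = t_i + 1/f_i is latest."""
    return max(cache_meta, key=lambda obj: cache_meta[obj][1] + 1 / cache_meta[obj][0])

meta = {
    "obj_hot": (10.0, 99.9),   # T = 100.0: called again very soon
    "obj_cold": (0.1, 95.0),   # T = 105.0: not needed for a while -> evict
}
print(choose_eviction_target(meta))  # obj_cold
```

This mirrors Belady-style replacement: the object whose next predicted use is furthest away is the cheapest to evict.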
In one embodiment, the step of determining whether the request object needs to be stored in the cache includes: calculating, according to the preset request frequencies of all requests, the arrangement sequence of the requests within a second preset time, sorted by request time; calculating the time required to traverse each request in the arrangement sequence when the request object is not stored in the cache, and recording this as the first time consumption; calculating the time required to traverse each request in the arrangement sequence when the request object is stored in the cache, and recording this as the second time consumption; comparing the first time consumption with the second time consumption; if the first time consumption is longer than the second time consumption, determining that the request object needs to be stored in the cache; and if the first time consumption is shorter than the second time consumption, determining that the request object does not need to be stored in the cache.
In one embodiment, the step of calculating, according to the preset request frequencies of all requests, the arrangement sequence of the requests sorted by request time within the second preset time includes: calculating all request moments of all requests within the second preset time according to the preset request frequencies; sorting the requests corresponding to each request moment in chronological order to obtain the arrangement sequence; wherein all request moments of request j are calculated by the following formula:
H_j(n) = t_0 + n/f_j
n = 1, 2, 3, …, for each n such that H_j(n) falls within the second preset time
wherein j is any one of the preset requests, H_j(n) is the request moment of the n-th request of request j, f_j is the request frequency of request j, and t_0 is the initial time of the second preset time.
In one embodiment, the step of determining whether the request object needs to be stored in the cache includes: acquiring the calling frequency of each object in the cache; calculating the object expected to be requested next according to the calling frequencies, and recording it as the next object; determining whether the next object is the request object of the current request; if so, determining that the request object needs to be stored in the cache; otherwise, determining that the request object does not need to be stored in the cache.
In one embodiment, the step of determining whether the currently requested request object is an object in the cache includes: identifying the request object from the current request; comparing the request object with each object in a cache list, the cache list being a list of all objects stored in the cache; if the request object matches an object in the cache list, determining that the request object is an object in the cache; otherwise, determining that the request object is not an object in the cache.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous-link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes using the descriptions and drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the invention.

Claims (10)

1. A data caching method for caching data with a fixed request frequency, comprising:
receiving a current request sent by equipment;
judging whether the request object of the current request is an object in a cache or not;
If the request object is an object in a cache, calling the request object from the cache, otherwise, acquiring the request object from a preset database, and judging whether the request object needs to be stored in the cache;
if the request object is judged to be stored in the cache, acquiring the calling frequency and the calling time of each object in the cache, wherein the calling time is the time when the object was called for the last time based on the current time;
calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a designated time from the current time;
comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as a target object;
deleting the target object and storing the request object into the cache.
2. The method according to claim 1, wherein the step of calculating the time at which each of the objects is called each time within the first preset time according to the calling frequency and the calling time includes:
Calculating the time when each object is called each time within the first preset time by using the following formula:
T_i = t_i + 1/f_i
wherein i is any object in the cache, T_i is the next calling time of object i, f_i is the calling frequency of object i, and t_i is the time at which object i was previously called.
3. The data caching method according to claim 1, wherein the step of determining whether the request object needs to be stored in the cache includes:
calculating, according to the preset request frequencies of all requests, the arrangement sequence of the requests sorted by request time within a second preset time;
calculating the time required for traversing each request according to the arrangement sequence under the condition that the request object is not stored in the cache, and marking the time as first time consumption;
calculating the time required for traversing each request according to the arrangement sequence under the condition that the request object is stored in the cache, and marking the time as second time consumption;
comparing the first time consumption with the second time consumption;
if the first time consumption is longer than the second time consumption, judging that the request object needs to be stored in the cache;
and if the first time consumption is shorter than the second time consumption, judging that the request object does not need to be stored in the cache.
4. The data caching method according to claim 3, wherein the step of calculating, according to the preset request frequencies of all requests, the arrangement sequence of the requests sorted by request time within the second preset time comprises:
calculating all request moments of all requests within the second preset time according to the preset request frequencies of all requests;
sequencing the requests corresponding to each request time according to the time sequence to obtain the sequence;
wherein, all the request moments of the request j are calculated by the following formula:
H_j(n) = t_0 + n/f_j
n = 1, 2, 3, …, for each n such that H_j(n) falls within the second preset time
wherein j is any one of the preset requests, H_j(n) is the request moment of the n-th request of request j, f_j is the request frequency of request j, and t_0 is the initial time of the second preset time.
5. The data caching method according to claim 1, wherein the step of determining whether the request object needs to be stored in the cache includes:
acquiring the calling frequency of each object in the cache;
calculating the object of the next request according to the calling frequency, and recording the object as the next object;
judging whether the next object is a request object of the current request or not;
If yes, determining that the request object needs to be stored in the cache, otherwise, determining that the request object does not need to be stored in the cache.
6. The data caching method according to claim 1, wherein the step of determining whether the currently requested request object is an object in a cache comprises:
identifying the request object according to the current request;
comparing the request object with each object in a cache list, wherein the cache list is a list of all objects stored in the cache;
if the request object is consistent with the object in the cache list, judging that the request object is the object in the cache; otherwise, judging that the request object is not the object in the cache.
7. A data caching apparatus for caching data with a fixed request frequency, comprising:
a receiving request unit, configured to receive a current request sent by a device;
a judging object unit, configured to judge whether the currently requested request object is an object in a cache;
a calling object unit, configured to call the request object from the cache if the request object is an object in the cache, or acquire the request object from a preset database, and determine whether the request object needs to be stored in the cache;
The acquisition time unit is used for acquiring the calling frequency and the calling time of each object in the cache if the request object is judged to be stored in the cache, wherein the calling time is the time when the object is called last time based on the current time;
the calculating time unit is used for calculating the time when each object is called in a first preset time according to the calling frequency and the calling time, wherein the first preset time is a time period in a specified time from the current time;
the comparison time unit is used for comparing the called time corresponding to each object to obtain the target time with the latest time, and marking the object corresponding to the target time as a target object;
and the deleting object unit is used for deleting the target object and storing the request object into the cache.
8. The data caching apparatus of claim 7, wherein the call object unit comprises:
a calculation sequence subunit, configured to calculate, according to the request frequencies of all the preset requests, a ranking sequence of each request ordered according to time sequence in a second preset time;
a first time-consuming subunit, configured to calculate a time required for traversing each request according to the arrangement sequence when the request object is not stored in the cache, and record as a first time-consuming;
A second time-consuming subunit, configured to calculate a time required for traversing each request according to the arrangement sequence when the request object is stored in the cache, and record as a second time-consuming;
a comparison time consuming subunit configured to compare the first time consuming with the second time consuming;
a first determining subunit, configured to determine that the request object needs to be stored in the cache when the first time consumption is longer than the second time consumption;
and the second judging subunit is used for judging that the request object does not need to be stored in the cache when the first time consumption is shorter than the second time consumption.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN201910175754.3A 2019-03-08 2019-03-08 Data caching method, device, computer equipment and storage medium Active CN110018969B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910175754.3A CN110018969B (en) 2019-03-08 2019-03-08 Data caching method, device, computer equipment and storage medium
PCT/CN2019/118426 WO2020181820A1 (en) 2019-03-08 2019-11-14 Data cache method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910175754.3A CN110018969B (en) 2019-03-08 2019-03-08 Data caching method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110018969A CN110018969A (en) 2019-07-16
CN110018969B true CN110018969B (en) 2023-06-02

Family

ID=67189375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910175754.3A Active CN110018969B (en) 2019-03-08 2019-03-08 Data caching method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110018969B (en)
WO (1) WO2020181820A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018969B (en) * 2019-03-08 2023-06-02 平安科技(深圳)有限公司 Data caching method, device, computer equipment and storage medium
CN112364016B (en) * 2020-10-27 2021-08-31 中国地震局地质研究所 Construction method of time nested cache model of pilot frequency data object
CN113329051A (en) * 2021-04-20 2021-08-31 海南视联大健康智慧医疗科技有限公司 Data acquisition method and device and readable storage medium
CN113806249B (en) * 2021-09-13 2023-12-22 济南浪潮数据技术有限公司 Object storage sequence lifting method, device, terminal and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11112541A (en) * 1997-09-12 1999-04-23 Internatl Business Mach Corp <Ibm> Message repeating method, message processing method, router device, network system and storage medium storing program controlling router device
US5933849A (en) * 1997-04-10 1999-08-03 At&T Corp Scalable distributed caching system and method
JP2012043338A (en) * 2010-08-23 2012-03-01 Nippon Telegr & Teleph Corp <Ntt> Cache management apparatus, cache management program and recording medium
CN103544119A (en) * 2013-09-26 2014-01-29 广东电网公司电力科学研究院 Method and system for cache scheduling and medium thereof
CN106899558A (en) * 2015-12-21 2017-06-27 腾讯科技(深圳)有限公司 The treating method and apparatus of access request
CN108241583A (en) * 2017-11-17 2018-07-03 平安科技(深圳)有限公司 Data processing method, application server and the computer readable storage medium that wages calculate
CN109240613A (en) * 2018-08-29 2019-01-18 平安科技(深圳)有限公司 Data cache method, device, computer equipment and storage medium
CN109388550A (en) * 2018-11-08 2019-02-26 浪潮电子信息产业股份有限公司 A kind of cache hit rate determines method, apparatus, equipment and readable storage medium storing program for executing

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100476781B1 (en) * 2001-12-28 2005-03-16 삼성전자주식회사 Method for controlling a terminal of MPEG-4 system
CN102223681B (en) * 2010-04-19 2015-06-03 中兴通讯股份有限公司 IOT system and cache control method therein
US9141527B2 (en) * 2011-02-25 2015-09-22 Intelligent Intellectual Property Holdings 2 Llc Managing cache pools
US10242050B2 (en) * 2015-12-23 2019-03-26 Sybase, Inc. Database caching in a database system
CN106888262A (en) * 2017-02-28 2017-06-23 北京邮电大学 A kind of buffer replacing method and device
CN110018969B (en) * 2019-03-08 2023-06-02 平安科技(深圳)有限公司 Data caching method, device, computer equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5933849A (en) * 1997-04-10 1999-08-03 At&T Corp Scalable distributed caching system and method
JPH11112541A (en) * 1997-09-12 1999-04-23 Internatl Business Mach Corp <Ibm> Message repeating method, message processing method, router device, network system and storage medium storing program controlling router device
JP2012043338A (en) * 2010-08-23 2012-03-01 Nippon Telegr & Teleph Corp <Ntt> Cache management apparatus, cache management program and recording medium
CN103544119A (en) * 2013-09-26 2014-01-29 广东电网公司电力科学研究院 Method and system for cache scheduling and medium thereof
CN106899558A (en) * 2015-12-21 2017-06-27 腾讯科技(深圳)有限公司 The treating method and apparatus of access request
CN108241583A (en) * 2017-11-17 2018-07-03 平安科技(深圳)有限公司 Data processing method, application server and the computer readable storage medium that wages calculate
CN109240613A (en) * 2018-08-29 2019-01-18 平安科技(深圳)有限公司 Data cache method, device, computer equipment and storage medium
CN109388550A (en) * 2018-11-08 2019-02-26 浪潮电子信息产业股份有限公司 A kind of cache hit rate determines method, apparatus, equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
WO2020181820A1 (en) 2020-09-17
CN110018969A (en) 2019-07-16

Similar Documents

Publication Publication Date Title
CN110018969B (en) Data caching method, device, computer equipment and storage medium
US6263364B1 (en) Web crawler system using plurality of parallel priority level queues having distinct associated download priority levels for prioritizing document downloading and maintaining document freshness
US6351755B1 (en) System and method for associating an extensible set of data with documents downloaded by a web crawler
CN105956183B The multilevel optimization's storage method and system of mass small documents in a kind of distributed data base
US6754799B2 (en) System and method for indexing and retrieving cached objects
CN110753099B (en) Distributed cache system and cache data updating method
CN102761627A (en) Cloud website recommending method and system based on terminal access statistics as well as related equipment
US11250166B2 (en) Fingerprint-based configuration typing and classification
US7814165B2 (en) Message classification system and method
CN112015820A (en) Method, system, electronic device and storage medium for implementing distributed graph database
CN109766318B (en) File reading method and device
CN109033462B (en) Method and system for determining low frequency data items in a storage device for large data storage
CN103761279A (en) Method and system for scheduling network crawlers on basis of keyword search
CN108322495B (en) Method, device and system for processing resource access request
CN114911830B (en) Index caching method, device, equipment and storage medium based on time sequence database
CN111198856A (en) File management method and device, computer equipment and storage medium
CN112235396A (en) Content processing link adjustment method, content processing link adjustment device, computer equipment and storage medium
CN110647542A (en) Data acquisition method and device
CN109218131B (en) Network monitoring method and device, computer equipment and storage medium
CN110011838B (en) Real-time tracking method for PageRank value of dynamic network
CN108763458B (en) Content characteristic query method, device, computer equipment and storage medium
US20200293543A1 (en) Method and apparatus for transmitting data
CN112733060B (en) Cache replacement method and device based on session cluster prediction and computer equipment
CN107145502A (en) A kind of method of mass picture storage and search
CN111159131A (en) Performance optimization method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant