CN110674121A - Cache data cleaning method, device, equipment and computer readable storage medium - Google Patents
- Publication number
- CN110674121A (application CN201910780198.2A)
- Authority
- CN
- China
- Prior art keywords
- cache
- data
- node
- circular queue
- amount
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/215—Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G06F16/2462—Approximate or statistical queries
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a cache data cleaning method, apparatus, device, and computer-readable storage medium, and relates to the field of internet technology. The method predicts the expected cache growth of a circular queue, counts the current total amount of stored data, and, when the sum of the predicted cache growth and the data total reaches a cleaning threshold, clears the target cache node with the lowest importance coefficient. This ensures that the cleared cache data is the least important, improves the utilization value of the cache space, and increases user stickiness. The method comprises the following steps: predicting the expected cache growth of the circular queue from its historical cache growth over historical time periods; counting the total amount of cache data currently stored in the circular queue and calculating the sum of this total and the predicted growth; when the sum is greater than or equal to the cleaning threshold, calculating an importance coefficient for each cache node based on its node position and node access rate in the circular queue; and clearing the cache data of the target cache node in the circular queue.
Description
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for cleaning cache data.
Background
With the continuous development of internet technology, more and more intelligent terminals have entered people's daily work and life. Through an intelligent terminal, a user can exchange information with other users and retrieve text, video, or pictures from the network. When a user exchanges information with other users through the terminal, the generated interaction data must be stored locally on the terminal; data stored locally in this way is called cache data. Over time, the user performs more and more information interactions, so the volume of cache data on the terminal keeps growing, while the terminal's storage space is limited. The cache data stored on the intelligent terminal therefore needs to be cleaned regularly.
In the related art, an intelligent terminal generally cleans cache data on a first-in, first-out basis. Specifically, a fixed data size is set, and that amount of the earliest-cached data is cleared.
In the course of implementing the invention, the inventor found that the related art has at least the following problem:
some cache data is data the user uses frequently while operating the intelligent terminal. First-in, first-out cleaning is likely to delete such frequently used data by mistake while leaving data that is meaningless and rarely used in the cache space, so the utilization value of the cache space is low and user stickiness suffers.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device, and a computer-readable storage medium for cleaning cache data, with the main aim of solving the problems of low utilization value of the cache space and low user stickiness.
According to a first aspect of the present invention, there is provided a method for cleaning cache data, the method comprising:
predicting the predicted cache growth amount of a circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods, wherein the circular queue comprises a plurality of cache nodes for storing cache data;
counting the total data amount of the currently stored cache data of the circular queue, and calculating the sum of the total data amount and the predicted cache increase amount;
when the sum is greater than or equal to a cleaning threshold, calculating an importance coefficient for each cache node based on the node position and the node access rate of each cache node in the circular queue;
and clearing the cache data of a target cache node in the circular queue, wherein the importance coefficient of the target cache node is lower than the importance coefficients of the other cache nodes in the circular queue.
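The four claimed steps can be sketched as follows. This is a simplified illustration: the predictor and the importance function here are stand-ins (the exact weighted forms appear later in the description), and all names are the editor's own, not the patent's.

```python
def clean_if_needed(queue, history, cleaning_threshold):
    """Sketch of the claimed flow: predict growth, add the current total,
    and clear the least-important node when the threshold is reached.
    queue: list of dicts with 'size', 'position' and 'access_rate' keys."""
    predicted_growth = sum(history) / len(history)        # stand-in predictor
    total = sum(node["size"] for node in queue)
    if total + predicted_growth < cleaning_threshold:
        return None                                       # no cleaning needed
    # Stand-in importance: lower position + access rate = less important.
    target = min(queue, key=lambda n: n["position"] + n["access_rate"])
    queue.remove(target)                                  # clear its cache data
    return target
```

The node with the smallest importance score is removed, so a frequently accessed node near the head of the queue survives a cleaning pass.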
In another embodiment, the predicting the predicted cache growth amount of the circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods comprises:
respectively counting a plurality of historical cache growth amounts of the circular queue in the plurality of historical time periods, and calculating the average growth amount of the plurality of historical cache growth amounts;
obtaining at least one prediction coefficient and at least one corresponding coefficient weight, and calculating at least one unit growth amount based on the at least one prediction coefficient and the average growth amount;
respectively calculating the product of each unit growth amount and its corresponding coefficient weight to obtain at least one growth product;
and calculating the product sum of the at least one growth product, calculating the weight sum of the at least one coefficient weight, and taking a first ratio of the product sum to the weight sum as the predicted cache growth amount.
In another embodiment, the obtaining at least one prediction coefficient and at least one corresponding coefficient weight, and calculating at least one unit growth amount based on the at least one prediction coefficient and the average growth amount includes:
for each prediction coefficient of the at least one prediction coefficient, calculating a first product of the prediction coefficient and a first historical cache growth amount of the plurality of historical cache growth amounts, and calculating a second product of the prediction coefficient and the average growth amount;
calculating a first sum of the first product and the average growth amount, and taking the difference of the first sum and the second product as a first process value;
updating the first historical cache growth amount in the calculation to a second historical cache growth amount, namely the next of the plurality of historical cache growth amounts, replacing the average growth amount with the first process value, and repeating the calculation until the plurality of historical cache growth amounts are traversed, to obtain the unit growth amount of the prediction coefficient;
and repeating the process of generating a unit growth amount to obtain at least one unit growth amount for the at least one prediction coefficient.
In another embodiment, the calculating an importance coefficient for each cache node includes:
for each cache node in the circular queue, determining a node position of the cache node;
inquiring the data importance of the cache data stored in the cache node, and counting the node access rate of the cache node;
determining the cleaning number, calculating a second ratio of the node position to the cleaning number, and taking the sum of the second ratio, the data importance, and the node access rate as the importance coefficient of the cache node.
In another embodiment, the querying the data importance of the cache data stored by the cache node includes:
reading the data content of the cache data stored in the cache node, and determining the data type of the cache data stored in the cache node;
and inquiring the data importance corresponding to the data type as the data importance of the cache data stored by the cache node.
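A minimal sketch of this type-based lookup. The type table and its importance values are purely illustrative — the patent does not fix either the types or the numbers.

```python
# Hypothetical importance-by-type table; values are illustrative only.
DATA_TYPE_IMPORTANCE = {
    "chat_message": 0.9,
    "picture": 0.6,
    "video": 0.5,
    "temp_file": 0.1,
}

def data_importance(cache_entry):
    """Determine the entry's data type from its content, then look up the
    importance registered for that type (0.0 for unknown types)."""
    data_type = cache_entry.get("type", "unknown")
    return DATA_TYPE_IMPORTANCE.get(data_type, 0.0)
```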
In another embodiment, the method further comprises:
when data to be cached is received, storing the data to be cached in an idle cache node of the circular queue, wherein an idle cache node is one that currently stores no cache data;
and moving the idle cache node to the head of the circular queue.
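A minimal sketch of this storage step, assuming a fixed-capacity queue in which `None` marks an idle node; `collections.deque` stands in for the patent's circular queue, and the class name is the editor's.

```python
from collections import deque

class CircularCache:
    """Fixed-capacity queue of cache nodes; None marks an idle node."""

    def __init__(self, capacity):
        self.nodes = deque([None] * capacity)

    def store(self, data):
        """Store incoming data in an idle node and move it to the head."""
        for i, node in enumerate(self.nodes):
            if node is None:                 # found an idle cache node
                del self.nodes[i]            # detach it from its position
                self.nodes.appendleft(data)  # re-attach at the queue head
                return True
        return False                         # queue full: cleaning is needed
```

Moving the freshly filled node to the head means node position doubles as a recency signal, which the importance coefficient later exploits.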
In another embodiment, the method further comprises:
if the data to be cached carries an expiration time, stamping the data to be cached with the processing time, and recording the storage duration of the data to be cached in the circular queue;
and when the storage duration reaches the expiration time, cleaning the data to be cached.
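A sketch of the expiration handling, assuming the "processing time" stamp is simply the timestamp at which the data is stored; the class and field names are illustrative.

```python
import time

class CacheEntry:
    """Cache entry stamped with its storage time; expires_after is the
    carried expiration time in seconds (None means never expires)."""

    def __init__(self, data, expires_after=None, now=None):
        self.data = data
        self.expires_after = expires_after
        self.stored_at = time.monotonic() if now is None else now

    def expired(self, now=None):
        """True once the storage duration reaches the expiration time."""
        if self.expires_after is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.stored_at >= self.expires_after

def sweep_expired(entries, now=None):
    """Clean every entry whose storage duration reached its expiry."""
    return [e for e in entries if not e.expired(now)]
```

Using a monotonic clock avoids an entry expiring early or late when the wall clock is adjusted.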
According to a second aspect of the present invention, there is provided a cache data cleaning apparatus, including:
the prediction module is used for predicting the predicted cache growth amount of the circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods, and the circular queue comprises a plurality of cache nodes used for storing cache data;
the counting module is used for counting the total data amount of the cache data currently stored in the circular queue and calculating the sum of the total data amount and the predicted cache growth amount;
a calculating module, configured to calculate an importance coefficient for each cache node based on the node position and the node access rate of each cache node in the circular queue when the sum is greater than or equal to a cleaning threshold;
and a cleaning module, configured to clear the cache data of a target cache node in the circular queue, wherein the importance coefficient of the target cache node is lower than the importance coefficients of the other cache nodes in the circular queue.
In another embodiment, the prediction module comprises:
the statistical unit is used for respectively counting a plurality of historical cache growth amounts of the circular queue in a plurality of historical time periods and calculating the average growth amount of the historical cache growth amounts;
a first calculation unit, configured to obtain at least one prediction coefficient and at least one coefficient weight corresponding to the at least one prediction coefficient, and calculate at least one unit increase amount based on the at least one prediction coefficient and the average increase amount;
a second calculation unit, configured to respectively calculate the product of each unit growth amount and its corresponding coefficient weight to obtain at least one growth product;
and a third calculation unit, configured to calculate the product sum of the at least one growth product, calculate the weight sum of the at least one coefficient weight, and take a first ratio of the product sum to the weight sum as the predicted cache growth amount.
In another embodiment, the first calculation unit is configured to calculate, for each prediction coefficient of the at least one prediction coefficient, a first product of the prediction coefficient and a first historical cache growth amount of the plurality of historical cache growth amounts, and calculate a second product of the prediction coefficient and the average growth amount; calculating a first sum of the first product and the average increment, and taking a difference of the first sum and the second product as a first process value; updating the first historical cache growth amount in the calculation process to a second historical cache growth amount, replacing the average growth amount by using the first process value, and repeatedly executing the calculation process until the plurality of historical cache growth amounts are traversed to obtain a unit growth amount of the prediction coefficient, wherein the second historical cache growth amount is the next historical cache growth amount of the first historical cache growth amount in the plurality of historical cache growth amounts; and repeating the process of generating the unit increment to obtain at least one unit increment of the at least one prediction coefficient.
In another embodiment, the calculation module includes:
the determining unit is used for determining the node position of each cache node in the circular queue;
the statistical unit is used for inquiring the data importance of the cache data stored in the cache node and counting the node access rate of the cache node;
and a calculating unit, configured to determine the cleaning number, calculate a second ratio of the node position to the cleaning number, and take the sum of the second ratio, the data importance, and the node access rate as the importance coefficient of the cache node.
In another embodiment, the statistical unit is configured to read data content of the cache data stored in the cache node, and determine a data type of the cache data stored in the cache node; and inquiring the data importance corresponding to the data type as the data importance of the cache data stored by the cache node.
In another embodiment, the apparatus further comprises:
a storage module, configured to store received data to be cached in an idle cache node of the circular queue, wherein an idle cache node is one that currently stores no cache data;
and the moving module is used for moving the idle cache node to the head of the circular queue.
In another embodiment, the apparatus further comprises:
a marking module, configured to stamp the data to be cached with the processing time and record the storage duration of the data to be cached in the circular queue if the data to be cached carries an expiration time;
the cleaning module is further configured to clean the data to be cached when the storage duration reaches the expiration time.
According to a third aspect of the present invention, there is provided an apparatus comprising a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the processor executes the computer program.
According to a fourth aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
By means of the above technical solution, compared with the existing first-in, first-out cleaning approach, the cache data cleaning method, apparatus, device, and computer-readable storage medium provided herein can predict the expected cache growth of the circular queue, count the total amount of cache data currently stored in it, and, when the sum of the predicted growth and the data total exceeds the cleaning threshold, select the target cache node with the lowest importance coefficient in the circular queue for cleaning. This ensures that the cleared cache data is the least important to the user, improves the utilization value of the cache space, and increases user stickiness.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flowchart illustrating a method for clearing cache data according to an embodiment of the present invention;
fig. 2 is a schematic flowchart illustrating a method for clearing cache data according to an embodiment of the present invention;
fig. 3A is a schematic structural diagram illustrating a cache data cleaning apparatus according to an embodiment of the present invention;
fig. 3B is a schematic structural diagram illustrating a cache data cleaning apparatus according to an embodiment of the present invention;
fig. 3C is a schematic structural diagram illustrating a cache data cleaning apparatus according to an embodiment of the present invention;
fig. 3D is a schematic structural diagram illustrating a cache data cleaning apparatus according to an embodiment of the present invention;
fig. 3E is a schematic structural diagram illustrating a cache data cleaning apparatus according to an embodiment of the present invention;
fig. 4 shows a schematic device structure diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
An embodiment of the present invention provides a cache data cleaning method. The method predicts the expected cache growth of a circular queue and counts the total amount of cache data currently stored in it; when the sum of the predicted growth and the data total exceeds a cleaning threshold, a target cache node with a low importance coefficient is selected from the circular queue for cleaning. This ensures that the cleared cache data is the least important to the user, improves the utilization value of the cache space, and increases user stickiness. As shown in fig. 1, the method comprises the following steps:
101. Predict the predicted cache growth amount of the circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods, wherein the circular queue comprises a plurality of cache nodes for storing cache data.
102. Count the total amount of cache data currently stored in the circular queue, and calculate the sum of the total data amount and the predicted cache growth amount.
103. When the sum is greater than or equal to the cleaning threshold, calculate the importance coefficient of each cache node based on the node position and the node access rate of each cache node in the circular queue.
104. Clear the cache data of the target cache node in the circular queue, wherein the importance coefficient of the target cache node is lower than the importance coefficients of the other cache nodes in the circular queue.
The method provided by the embodiment of the invention can predict the expected cache growth of the circular queue and count the total amount of cache data currently stored in it; when the sum of the predicted growth and the data total exceeds the cleaning threshold, the target cache node with a low importance coefficient is selected from the circular queue for cleaning, so that the cleared cache data is the least important to the user, the utilization value of the cache space is improved, and user stickiness is increased.
An embodiment of the present invention provides a cache data cleaning method, which predicts the expected cache growth of a circular queue and counts the total amount of cache data currently stored in it; when the sum of the predicted growth and the data total exceeds a cleaning threshold, a target cache node with a low importance coefficient is selected from the circular queue for cleaning, so that the cleared cache data is the least important to the user, the utilization value of the cache space is improved, and user stickiness is increased. As shown in fig. 2, the method comprises the following steps:
201. Predict the predicted cache growth amount of the circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods.
In the embodiment of the invention, a circular queue for storing cache data is provided. The storage medium within the circular queue is the cache node: the circular queue is composed of a plurality of cache nodes, and each node stores one piece of cache data.
To determine when to clean the cache data, statistics on the circular queue are needed, and whether its cache data must be cleaned is then decided from the counted data volume. Because cache growth is bursty, a large amount of cache data may be stored into the circular queue within a certain time period; to prepare in advance and avoid paralyzing the whole circular queue under such an influx, the future growth of the circular queue must be predicted when taking these statistics, so that the statistics are comprehensive and complete.
Specifically, the predicted cache growth amount can be calculated by a prediction algorithm comprising the following steps one to three.
Step one, respectively counting a plurality of historical cache growth amounts of the circular queue in a plurality of historical time periods, and calculating the average growth amount of the plurality of historical cache growth amounts.
For example, dividing time into 30-minute periods, the cache growth amounts of the 20 historical periods before the current time are counted as r1, r2, r3, …, r20, and the average growth amount is calculated as S0 = (r1 + r2 + r3 + … + r20) / 20.
And step two, acquiring at least one prediction coefficient and at least one coefficient weight corresponding to the at least one prediction coefficient, and calculating at least one unit increment based on the at least one prediction coefficient and the average increment.
The at least one prediction coefficient and the corresponding at least one coefficient weight may be adjusted for different application scenarios; for example, they may be increased appropriately in peak time periods and decreased appropriately in off-peak periods. For example, the prediction coefficients may be a0 = 0.1, a1 = 0.3, a2 = 0.5, a3 = 0.7, and the coefficient weights may be w0 = w1 = w2 = w3 = 0.25. In this way, at least one unit growth amount may be calculated based on the at least one prediction coefficient and the average growth amount.
In addition, except for the first prediction, the coefficient weights need to be adjusted dynamically before each subsequent prediction. Taking the coefficients above as an example, the specific adjustment process is as follows: compare r20 with S21,0, S21,1, S21,2, and S21,3 respectively; multiply the coefficient weight whose unit growth amount differs least from r20 by 1.05, and multiply the coefficient weight whose unit growth amount differs most from r20 by 0.95, thereby adjusting the coefficient weights.
Specifically, when calculating the unit growth amounts, for each prediction coefficient of the at least one prediction coefficient: first, calculate a first product of the prediction coefficient and the first historical cache growth amount, and a second product of the prediction coefficient and the average growth amount; then calculate the first sum of the first product and the average growth amount, and take the difference of the first sum and the second product as the first process value. Next, update the first historical cache growth amount to the second historical cache growth amount, that is, the next of the plurality of historical cache growth amounts, replace the average growth amount with the first process value, and repeat the calculation until all the historical cache growth amounts have been traversed, giving the unit growth amount for that prediction coefficient.
Continuing the example of steps one and two, and writing Si,j for the i-th smoothed value under prediction coefficient aj, the above process can be expressed as: first, S1,0 = a0 × r1 + (1 − a0) × S0; then S2,0 = a0 × r2 + (1 − a0) × S1,0; then S3,0 = a0 × r3 + (1 − a0) × S2,0; and so on, until S21,0 = a0 × r20 + (1 − a0) × S20,0 is obtained. Repeating the above process of generating unit growth amounts for the other prediction coefficients gives their unit growth amounts, e.g., S21,1 = a1 × r20 + (1 − a1) × S20,1; S21,2 = a2 × r20 + (1 − a2) × S20,2; S21,3 = a3 × r20 + (1 − a3) × S20,3.
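Each chain above is simple exponential smoothing seeded with the average S0. The sketch below applies one smoothing update per observed growth amount, which is one straightforward reading of the traversal described in the text; the r values and the fourth coefficient are made up for illustration.

```python
def unit_growth(history, a):
    """One exponential-smoothing chain: S = a*r + (1-a)*S, seeded with
    the plain average S0, traversing every historical growth amount."""
    s = sum(history) / len(history)            # S0, the average growth amount
    for r in history:
        s = a * r + (1 - a) * s                # the "first process value" step
    return s                                   # unit growth amount for this a

coefficients = [0.1, 0.3, 0.5, 0.7]            # a0..a3 (illustrative values)
history = [4.0, 6.0, 5.0, 7.0, 3.0]            # made-up growth amounts
units = [unit_growth(history, a) for a in coefficients]
```

A small `a` barely moves the estimate away from the average, while `a` close to 1 tracks the most recent growth amount; this is why the patent keeps several chains and weights them.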
Step three, generating the predicted cache growth amount according to the at least one unit growth amount and the at least one coefficient weight.
After the at least one unit growth amount has been calculated, the predicted cache growth amount may be generated from the at least one unit growth amount and the at least one coefficient weight. First, respectively calculate the product of each unit growth amount and its corresponding coefficient weight to obtain at least one growth product; then calculate the product sum of the at least one growth product and the weight sum of the at least one coefficient weight, and take the first ratio of the product sum to the weight sum as the predicted cache growth amount. The calculation can be expressed by the following equation 1:
equation 1: kz is S210×w0+S211×w1+S212×w2+S213×w3/(w0+w1+w2+w3)
where Kz is the predicted cache growth amount.
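Equation 1 is a weighted mean of the unit growth amounts under their coefficient weights; a direct sketch:

```python
def predicted_growth(units, weights):
    """Equation 1: Kz = sum(S_j * w_j) / sum(w_j), the weighted mean of
    the unit growth amounts under their coefficient weights."""
    if len(units) != len(weights):
        raise ValueError("one weight per unit growth amount")
    return sum(u * w for u, w in zip(units, weights)) / sum(weights)
```

With equal weights w0 = w1 = w2 = w3, Kz reduces to the plain average of the four unit growth amounts; unequal weights let the dynamic adjustment above favor whichever smoothing coefficient has been predicting best.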
Through steps one to three, the predicted cache growth amount of the circular queue is obtained, and cleaning of the cache data is subsequently performed based on it.
202. Count the total amount of cache data currently stored in the circular queue, and calculate the sum of the total data amount and the predicted cache growth amount.
In the embodiment of the invention, after the predicted cache growth amount of the circular queue is obtained, the total amount of cache data currently stored in the circular queue can be counted and its sum with the predicted cache growth amount calculated. When counting the total amount of currently stored cache data, the sub-data amount of each cache node may be counted first and these sub-data amounts summed to obtain the total data amount.
203. When the sum is greater than or equal to the cleaning threshold, calculate the importance coefficient of each cache node based on the node position and the node access rate of each cache node in the circular queue.
In the embodiment of the present invention, to determine the timing of cache data cleaning, a cleaning threshold for evaluating the sum may be set; when the sum is greater than or equal to the cleaning threshold, the cleaning process is executed. The cleaning threshold may be a user-defined value and can be adjusted for different scenarios. In addition, since the prediction process of step 201 may be performed multiple times, the cleaning threshold needs to be adjusted at each prediction; that is, it is adjusted dynamically before every prediction after the first. Specifically, if the predicted cache growth amount is greater than the sum of the plurality of historical cache growth amounts, the cleaning threshold is adjusted to 0.99 times its original value; if the predicted cache growth amount is smaller than that sum, the cleaning threshold is adjusted to 1.01 times its original value. It should be noted that, however the cleaning threshold is adjusted, its value must remain smaller than the limit value.
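The dynamic threshold update just described can be sketched as follows. The limit value is whatever cap the deployment sets; clamping with `min` is an assumption here, since the text only requires the threshold to stay below the limit.

```python
def adjust_cleaning_threshold(threshold, predicted_growth, history, limit):
    """Shrink the threshold by 1% when predicted growth exceeds the sum
    of the historical growth amounts, grow it by 1% when it falls short,
    and keep the result no larger than the limit value."""
    total_history = sum(history)
    if predicted_growth > total_history:
        threshold *= 0.99      # growth expected: clean earlier
    elif predicted_growth < total_history:
        threshold *= 1.01      # growth slowing: tolerate more data
    return min(threshold, limit)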
When the sum is greater than or equal to the cleaning threshold, the cache data of the circular queue needs to be cleaned. Some cache data stored in the circular queue are important to the user and are accessed frequently, while other cache data are unimportant and may be accessed only once; therefore, an important coefficient needs to be calculated for the cache data stored in each node, and the cache data are cleaned according to the important coefficients. Specifically, for each cache node in the circular queue, the node position of the cache node is first determined. Subsequently, the data content of the cache data stored by the cache node is read, the data type of the cache data is determined, the data importance corresponding to that data type is queried as the data importance of the cache data stored by the cache node, and the node access rate of the cache node is counted. Finally, the cleaning number is determined, and a second ratio of the node position to the cleaning number is calculated; this second ratio expresses the weight of the node position in the overall cache cleaning process, so that the node position is also taken into account. The sum of the second ratio, the data importance, and the node access rate is taken as the important coefficient of the cache node. The above calculation process can be expressed by the following Equation 2:
equation 2: c ═ 0.4 XN/M +0.2K +0.4 XR
Where C is used to represent the important coefficient.
N is used to represent the node position. It should be noted that, in order to ensure that cache nodes near the front of the circular queue have higher important coefficients than those near the back, node positions are recorded in reverse: the cache node at the head of the circular queue has the largest node position value, and the cache node at the tail has the smallest. For example, if the current circular queue contains 10 cache nodes and node A is at the head of the queue, the node position of node A, that is, the value of N for node A, is 10; if node B is at the tail of the queue, the node position of node B, that is, the value of N for node B, is 1.
M is used to represent the cleaning number. When the cache data are cleaned, the cache data in a fixed number of nodes are cleaned each time, so M can be set to represent that number, and the cache data in M nodes are cleaned per round. M may be expressed as a percentage, for example 5%; M may be set manually, and M is generally larger than the remaining capacity of the circular queue.
K is used to represent the data importance. K is defined according to the data type handled by the data service: the data importance of ordinary data is 1, the data importance of core data is 3, and the data importance of financial transaction data is 5. The data importance values are set manually and can be adjusted for different scenarios.
R is used to represent the node access rate. The access rate indicates which data are accessed frequently, so that infrequently accessed data are cleaned preferentially. When calculating the node access rate, the first access count of the current node is counted first; then the total number of accesses across all nodes is calculated; finally, the ratio of the first access count to the total count is taken as the node access rate.
After the values of these unknowns are determined, the corresponding important coefficient can be calculated for each node, so as to determine, according to the important coefficients, which nodes should have their cache data cleaned.
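Equation 2 and the access-rate calculation can be sketched as follows; the function and parameter names are illustrative, while the reversed node position and the weights 0.4/0.2/0.4 are taken from the text.

```python
def important_coefficient(node_position, clean_count, data_importance,
                          node_accesses, total_accesses):
    """Equation 2: C = 0.4 * N/M + 0.2 * K + 0.4 * R.
    node_position (N) is recorded in reverse, so the head node has the
    largest value; clean_count is M; data_importance is K."""
    access_rate = node_accesses / total_accesses if total_accesses else 0.0  # R
    return (0.4 * (node_position / clean_count)
            + 0.2 * data_importance
            + 0.4 * access_rate)
```

For example, the head node of a 10-node queue (N = 10) with M = 2, core data (K = 3), and half of all accesses (R = 0.5) gets C = 0.4 × 5 + 0.2 × 3 + 0.4 × 0.5 = 2.8.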
204. Clean the cache data of the target cache nodes in the circular queue.
In the embodiment of the present invention, after the important coefficient of each node is generated, the cache nodes in the circular queue can be sorted in descending order of important coefficient to obtain a sorting result. The cache nodes whose important coefficients place them in the last positions of the sorting result, up to the cleaning number, are then taken as the target cache nodes, so that the important coefficients of the target cache nodes are lower than those of the other cache nodes in the circular queue, and the cache data in the target cache nodes are cleaned. For example, if the cleaning number is 2 and the sorting result is A, C, D, B, the cache data in D and B are cleaned.
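The selection of target cache nodes can be sketched as follows; representing the nodes and their important coefficients as a dictionary is an assumption made for the example.

```python
def choose_target_nodes(coefficients, clean_count):
    """Sort nodes in descending order of important coefficient and take the
    clean_count nodes at the tail of the ordering as the cleaning targets.
    coefficients maps node name -> important coefficient (illustrative)."""
    ordering = sorted(coefficients, key=coefficients.get, reverse=True)
    return ordering[-clean_count:]
```

With coefficients {'A': 2.8, 'C': 2.1, 'D': 1.3, 'B': 0.9} and a cleaning number of 2, the ordering is A, C, D, B and the targets are D and B, matching the example above.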
In practical applications, the node position is taken into account when calculating the important coefficients of the cache nodes. To avoid cleaning newly written cache data before it has been stored for any length of time, when data to be cached is received, it is stored in an idle cache node of the circular queue, that is, a cache node that does not yet store any cache data, and this cache node is moved to the head of the circular queue, ensuring that the important coefficient of each cache node reflects reality. In addition, some cache data are frequently accessed by users, so the data to be cached may already be stored in the circular queue, that is, the data to be cached hits existing cache data in the circular queue; in this case, the cache node where the data is located can be determined and moved directly to the head of the circular queue, without storing the data to be cached again.
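A minimal sketch of this write/hit behaviour, using Python's `OrderedDict` as a stand-in for the circular queue (head = first entry); this is an illustration, not the patent's actual data structure.

```python
from collections import OrderedDict

class MoveToHeadCache:
    """New data is stored in a free node moved to the head of the queue;
    a hit moves the node holding the data to the head without re-storing it."""
    def __init__(self):
        self.nodes = OrderedDict()

    def put(self, key, value=None):
        if key in self.nodes:                           # hit existing cache data
            self.nodes.move_to_end(key, last=False)     # just move to the head
        else:
            self.nodes[key] = value                     # store in a free node...
            self.nodes.move_to_end(key, last=False)     # ...and move it to the head

    def head(self):
        return next(iter(self.nodes))
```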
It should be noted that, if the data to be cached carries an expiration time, the data to be cached is marked with the process time, the storage duration of the data to be cached in the circular queue is recorded, and when the storage duration reaches the expiration time, the data to be cached is cleaned. Further, some data to be cached carry no expiration time; to avoid such cache data lingering uncleaned and occupying the cache space of the terminal for a long time, an expiration time can be defined for the data when it is written, the storage duration is recorded from the moment of writing, and when the storage duration is greater than or equal to the expiration time, the data to be cached is cleaned.
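The expiry marking and cleanup can be sketched as follows; the default expiration value is an assumption, since the text leaves its concrete value open.

```python
import time

DEFAULT_EXPIRY = 3600.0  # assumed default, in seconds; the text leaves it open

def write_with_mark(cache, key, value, expiry=None, now=None):
    """Mark newly written data with the process (write) time and an expiry;
    data that carries no expiry gets a defined default so it cannot occupy
    the cache indefinitely."""
    now = time.monotonic() if now is None else now
    cache[key] = (value, now, expiry if expiry is not None else DEFAULT_EXPIRY)

def sweep_expired(cache, now=None):
    """Clean every entry whose storage duration has reached its expiry."""
    now = time.monotonic() if now is None else now
    expired = [k for k, (_, written, expiry) in cache.items()
               if now - written >= expiry]
    for k in expired:
        del cache[k]
    return expired
```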
The method provided by the embodiment of the present invention can predict the predicted cache growth amount of the circular queue and count the total amount of cache data currently stored in the circular queue; when the sum of the predicted cache growth amount and the total data amount is greater than or equal to the cleaning threshold, target cache nodes with low important coefficients are selected from the circular queue and their cache data are cleaned, so that the cleaned cache data are the least important to the user, the utilization value of the cache space is improved, and user stickiness is increased.
Further, as a specific implementation of the method shown in fig. 1, an embodiment of the present invention provides a cache data cleaning apparatus, and as shown in fig. 3A, the apparatus includes: a prediction module 301, a statistics module 302, a calculation module 303 and a cleaning module 304.
The prediction module 301 is configured to predict a predicted cache increment of a circular queue according to historical cache increments of the circular queue in multiple historical time periods, where the circular queue includes multiple cache nodes for storing cache data;
the counting module 302 is configured to count a total amount of data of the currently stored cache data in the circular queue, and calculate a sum of the total amount of data and the predicted cache growth amount;
the calculating module 303 is configured to calculate an important coefficient of each cache node based on a node position and a node access rate of each cache node in the circular queue when the sum is greater than or equal to a cleaning threshold;
the cleaning module 304 is configured to clean cache data of a target cache node in the circular queue, where an important coefficient of the target cache node is lower than important coefficients of other cache nodes in the circular queue.
In a specific application scenario, as shown in fig. 3B, the prediction module 301 specifically includes: a statistical unit 3011, a first calculation unit 3012, a second calculation unit 3013, and a third calculation unit 3014.
The statistical unit 3011 is configured to separately count a plurality of historical cache growth amounts of the circular queue in a plurality of historical time periods, and calculate an average growth amount of the plurality of historical cache growth amounts;
the first calculating unit 3012 is configured to obtain at least one prediction coefficient and at least one coefficient weight corresponding to the at least one prediction coefficient, and calculate at least one unit increase amount based on the at least one prediction coefficient and the average increase amount;
the second calculating unit 3013 is configured to calculate a product of the at least one unit increment and the at least one coefficient weight, respectively, to obtain at least one increment product;
the third calculating unit 3014 is configured to calculate a product sum of the at least one increment product, calculate a weighted sum of the at least one coefficient weight, and take a first ratio of the product sum to the weighted sum as the predicted cache growth amount.
In a specific application scenario, the first calculating unit 3012 is configured to: for each prediction coefficient of the at least one prediction coefficient, calculate a first product of the prediction coefficient and a first historical cache growth amount of the historical cache growth amounts, and calculate a second product of the prediction coefficient and the average growth amount; calculate a first sum of the first product and the average growth amount, and take the difference between the first sum and the second product as a first process value; update the first historical cache growth amount in the calculation process to a second historical cache growth amount, replace the average growth amount with the first process value, and repeat the calculation process until the plurality of historical cache growth amounts have been traversed, so as to obtain a unit growth amount for the prediction coefficient, where the second historical cache growth amount is the historical cache growth amount immediately following the first historical cache growth amount in the plurality of historical cache growth amounts; and repeat the process of generating a unit growth amount to obtain at least one unit growth amount of the at least one prediction coefficient.
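Read literally, the recurrence above simplifies to s = a·x + (1 − a)·s, that is, exponential smoothing seeded with the average growth amount. A sketch under that reading (function and variable names are illustrative):

```python
def predict_cache_growth(history, coefficients, weights):
    """For each prediction coefficient a, fold the historical cache growth
    amounts through s = a*x + (1 - a)*s (the 'first sum minus second product'
    step), seeded with their average, to obtain one unit growth amount; then
    return the weighted average of the unit growth amounts."""
    average = sum(history) / len(history)
    unit_growths = []
    for a in coefficients:
        s = average
        for x in history:                 # traverse the historical growth amounts
            s = a * x + (1.0 - a) * s     # first process value, reused as s
        unit_growths.append(s)
    product_sum = sum(w * s for w, s in zip(weights, unit_growths))
    weight_sum = sum(weights)
    return product_sum / weight_sum       # the first ratio
```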
In a specific application scenario, as shown in fig. 3C, the calculating module 303 specifically includes: a determination unit 3031, a statistical unit 3032 and a calculation unit 3033.
The determining unit 3031 is configured to determine, for each cache node in the circular queue, a node position of the cache node;
the statistical unit 3032 is configured to query the data importance of the cache data stored in the cache node, and count the node access rate of the cache node;
the calculating unit 3033 is configured to determine a cleaning number, calculate a second ratio of the node position to the cleaning number, and use a sum of the second ratio, the data importance, and the node access rate as an importance coefficient of the cache node.
In a specific application scenario, the statistical unit 3032 is configured to read data content of the cache data stored by the cache node, and determine a data type of the cache data stored by the cache node; and inquiring the data importance corresponding to the data type as the data importance of the cache data stored by the cache node.
In a specific application scenario, as shown in fig. 3D, the apparatus further includes: a storage module 305 and a movement module 306.
The storage module 305 is configured to, when receiving data to be cached, store the data to be cached in an idle cache node of the circular queue, where the idle cache node does not store the cached data;
the moving module 306 is configured to move the idle cache node to the head of the circular queue.
In a specific application scenario, as shown in fig. 3E, the apparatus further includes: a marking module 307.
The marking module 307 is configured to mark the data to be cached by using the process time if the data to be cached carries an expiration time, and record a storage duration of the data to be cached in the circular queue;
the cleaning module 304 is further configured to clean the data to be cached when the storage duration reaches the expiration time.
The device provided by the embodiment of the present invention can predict the predicted cache growth amount of the circular queue and count the total amount of cache data currently stored in the circular queue; when the sum of the predicted cache growth amount and the total data amount is greater than or equal to the cleaning threshold, a target cache node with a low important coefficient is selected from the circular queue and its cache data is cleaned, so that the cleaned cache data is the least important to the user, the utilization value of the cache space is improved, and user stickiness is increased.
It should be noted that other corresponding descriptions of the functional units related to the cache data cleaning apparatus provided in the embodiment of the present invention may refer to the corresponding descriptions in fig. 1 and fig. 2, and are not described herein again.
In an exemplary embodiment, referring to fig. 4, a device is further provided. The device 400 includes a communication bus, a processor, a memory, and a communication interface, and may further include an input/output interface and a display device, where these functional units may communicate with one another through the bus. The memory stores a computer program, and the processor is configured to execute the program stored in the memory to perform the cache data cleaning method of the above embodiments.
A computer-readable storage medium is also provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the cache data cleaning method.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by hardware, or by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the implementation scenarios of the present application.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.
Claims (10)
1. A cache data cleaning method is characterized by comprising the following steps:
predicting the predicted cache growth amount of a circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods, wherein the circular queue comprises a plurality of cache nodes for storing cache data;
counting the total data amount of the currently stored cache data of the circular queue, and calculating the sum of the total data amount and the predicted cache increase amount;
when the sum is larger than or equal to a cleaning threshold value, calculating an important coefficient of each cache node based on the node position and the node access rate of each cache node in the circular queue;
and clearing the cache data of the target cache node in the circular queue, wherein the important coefficient of the target cache node is lower than the important coefficients of other cache nodes in the circular queue.
2. The method of claim 1, wherein predicting the predicted buffer growth amount of the circular queue based on historical buffer growth amounts of the circular queue over a plurality of historical time periods comprises:
respectively counting a plurality of historical cache growth amounts of the circular queue in a plurality of historical time periods, and calculating the average growth amount of the plurality of historical cache growth amounts;
obtaining at least one prediction coefficient and at least one coefficient weight corresponding to the at least one prediction coefficient, and calculating at least one unit increment based on the at least one prediction coefficient and the average increment;
respectively calculating the product of the at least one unit increment and the at least one coefficient weight to obtain at least one increment product;
calculating a product sum of the at least one increment product, calculating a weighted sum of the at least one coefficient weight, and taking a first ratio of the product sum to the weighted sum as the predicted cache growth amount.
3. The method of claim 2, wherein obtaining at least one prediction coefficient and at least one coefficient weight corresponding to the at least one prediction coefficient, and calculating at least one unit growth amount based on the at least one prediction coefficient and the average growth amount comprises:
for each prediction coefficient of the at least one prediction coefficient, calculating a first product of the prediction coefficient and a first historical cache growth amount of the plurality of historical cache growth amounts, calculating a second product of the prediction coefficient and the average growth amount;
calculating a first sum of the first product and the average increment, and taking a difference of the first sum and the second product as a first process value;
updating the first historical cache growth amount in the calculation process to a second historical cache growth amount, replacing the average growth amount by using the first process value, and repeatedly executing the calculation process until the plurality of historical cache growth amounts are traversed to obtain a unit growth amount of the prediction coefficient, wherein the second historical cache growth amount is the next historical cache growth amount of the first historical cache growth amount in the plurality of historical cache growth amounts;
and repeating the process of generating the unit increment to obtain at least one unit increment of the at least one prediction coefficient.
4. The method of claim 1, wherein the calculating the importance coefficient of each cache node comprises:
for each cache node in the circular queue, determining a node position of the cache node;
inquiring the data importance of the cache data stored in the cache node, and counting the node access rate of the cache node;
determining the cleaning number, calculating a second ratio of the node position to the cleaning number, and taking the sum of the second ratio, the data importance and the node access rate as an important coefficient of the cache node.
5. The method of claim 4, wherein the querying the data importance of the cached data stored by the cache node comprises:
reading the data content of the cache data stored in the cache node, and determining the data type of the cache data stored in the cache node;
and inquiring the data importance corresponding to the data type as the data importance of the cache data stored by the cache node.
6. The method of claim 1, further comprising:
when receiving data to be cached, storing the data to be cached in an idle cache node of the circular queue, wherein the idle cache node does not store the cached data;
and moving the idle cache node to the head of the circular queue.
7. The method of claim 6, further comprising:
if the data to be cached carries expiration time, marking the data to be cached by adopting the process time, and recording the storage duration of the data to be cached in the circular queue;
and when the storage duration reaches the expiration time, cleaning the data to be cached.
8. A cache data cleaning apparatus, comprising:
the prediction module is used for predicting the predicted cache growth amount of the circular queue according to the historical cache growth amounts of the circular queue in a plurality of historical time periods, and the circular queue comprises a plurality of cache nodes used for storing cache data;
the counting module is used for counting the total data amount of the cache data currently stored in the circular queue and calculating the sum of the total data amount and the predicted cache growth amount;
a calculating module, configured to calculate an important coefficient of each cache node based on a node position and a node access rate of each cache node in the circular queue when the sum is greater than or equal to a cleaning threshold;
and the clearing module is used for clearing the cache data of the target cache node in the circular queue, and the important coefficient of the target cache node is lower than the important coefficients of other cache nodes in the circular queue.
9. An apparatus comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910780198.2A CN110674121B (en) | 2019-08-22 | 2019-08-22 | Cache data cleaning method, device, equipment and computer readable storage medium |
PCT/CN2019/118233 WO2021031408A1 (en) | 2019-08-22 | 2019-11-13 | Cached data clearing method, device, equipment, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910780198.2A CN110674121B (en) | 2019-08-22 | 2019-08-22 | Cache data cleaning method, device, equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110674121A true CN110674121A (en) | 2020-01-10 |
CN110674121B CN110674121B (en) | 2023-08-22 |
Family
ID=69075526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910780198.2A Active CN110674121B (en) | 2019-08-22 | 2019-08-22 | Cache data cleaning method, device, equipment and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110674121B (en) |
WO (1) | WO2021031408A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111338581A (en) * | 2020-03-27 | 2020-06-26 | 尹兵 | Data storage method and device based on cloud computing, cloud server and system |
CN111858508A (en) * | 2020-06-17 | 2020-10-30 | 远光软件股份有限公司 | Regulation and control method and device of log system, storage medium and electronic equipment |
WO2021031408A1 (en) * | 2019-08-22 | 2021-02-25 | 平安科技(深圳)有限公司 | Cached data clearing method, device, equipment, and computer-readable storage medium |
CN112579652A (en) * | 2020-12-28 | 2021-03-30 | 咪咕文化科技有限公司 | Method and device for deleting cache data, electronic equipment and storage medium |
CN112632347A (en) * | 2021-01-14 | 2021-04-09 | 加和(北京)信息科技有限公司 | Data screening control method and device and nonvolatile storage medium |
CN112783886A (en) * | 2021-03-12 | 2021-05-11 | 中国平安财产保险股份有限公司 | Cache cleaning method and device, computer equipment and storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114625311B (en) * | 2021-08-05 | 2024-06-14 | 亚信科技(中国)有限公司 | Method and device for determining cache component |
US11922026B2 (en) | 2022-02-16 | 2024-03-05 | T-Mobile Usa, Inc. | Preventing data loss in a filesystem by creating duplicates of data in parallel, such as charging data in a wireless telecommunications network |
CN116977146B (en) * | 2023-08-25 | 2024-02-09 | 山东省环科院环境工程有限公司 | Instrument data management and control system for environmental protection monitoring based on Internet of things |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226881A (en) * | 2013-03-28 | 2013-07-31 | 马钢控制技术有限责任公司 | Solving method for insufficient storage capacity of blacklist of POS (Point-of-sale) machine |
US20140164392A1 (en) * | 2012-12-07 | 2014-06-12 | At&T Intellectual Property I, L.P. | Methods and apparatus to sample data connections |
CN105045723A (en) * | 2015-06-26 | 2015-11-11 | 深圳市腾讯计算机系统有限公司 | Processing method, apparatus and system for cached data |
CN105095107A (en) * | 2014-05-04 | 2015-11-25 | 腾讯科技(深圳)有限公司 | Buffer memory data cleaning method and apparatus |
US20190073307A1 (en) * | 2017-09-06 | 2019-03-07 | Western Digital Technologies, Inc. | Predicting future access requests by inverting historic access requests in an object storage system |
CN109542802A (en) * | 2018-11-26 | 2019-03-29 | 努比亚技术有限公司 | Data cached method for cleaning, device, mobile terminal and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060074990A1 (en) * | 2004-09-28 | 2006-04-06 | International Business Machines Corporation | Leaf avoidance during garbage collection in a Java Virtual Machine |
CN101630291B (en) * | 2009-08-03 | 2012-11-14 | 中国科学院计算技术研究所 | Virtual memory system and method thereof |
CN107346289A (en) * | 2016-05-05 | 2017-11-14 | 北京自动化控制设备研究所 | A kind of method with round-robin queue's buffered data |
CN106227598A (en) * | 2016-07-20 | 2016-12-14 | 浪潮电子信息产业股份有限公司 | Recovery method of cache resources |
CN109491619A (en) * | 2018-11-21 | 2019-03-19 | 浙江中智达科技有限公司 | Caching data processing method, device and system |
CN110674121B (en) * | 2019-08-22 | 2023-08-22 | 平安科技(深圳)有限公司 | Cache data cleaning method, device, equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110674121B (en) | 2023-08-22 |
WO2021031408A1 (en) | 2021-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110674121A (en) | Cache data cleaning method, device, equipment and computer readable storage medium | |
JP7183385B2 (en) | Node classification method, model training method, and its device, equipment and computer program | |
WO2021120789A1 (en) | Data writing method and apparatus, and storage server and computer-readable storage medium | |
US8214608B2 (en) | Behavioral monitoring of storage access patterns | |
JP5838229B2 (en) | Send product information based on determined preference values | |
US7899763B2 (en) | System, method and computer program product for evaluating a storage policy based on simulation | |
CN104583891B (en) | For the devices, systems and methods that the adaptable caching in non-volatile main memory system is changed | |
CN111445418A (en) | Image defogging method and device and computer equipment | |
US20100049935A1 (en) | Management of very large streaming data sets for efficient writes and reads to and from persistent storage | |
CN110287010A (en) | A kind of data cached forecasting method towards the analysis of Spark time window data | |
CN112887795B (en) | Video playing method, device, equipment and medium | |
WO2021088404A1 (en) | Data processing method, apparatus and device, and readable storage medium | |
CN103888377A (en) | Message cache method and device | |
CN112667528A (en) | Data prefetching method and related equipment | |
JP2003005920A (en) | Storage system and data rearranging method and data rearranging program | |
JP2023534696A (en) | Anomaly detection in network topology | |
CN111858403A (en) | Cache data heat management method and system based on probability to access frequency counting | |
CN109597800A (en) | A kind of log distribution method and device | |
CN109289196A (en) | Game achieves processing method and processing device | |
CN114866489A (en) | Congestion control method and device and training method and device of congestion control model | |
US20110107268A1 (en) | Managing large user selections in an application | |
CN117235371A (en) | Video recommendation method, model training method and device | |
CN109150819B (en) | A kind of attack recognition method and its identifying system | |
Löpker et al. | The idle period of the finite G/M/1 queue with an interpretation in risk theory | |
CN112446490A (en) | Network training data set caching method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40020235
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |