CN116578593A - Data caching method, system, device, computer equipment and storage medium

Info

Publication number
CN116578593A
Authority
CN
China
Prior art keywords
data
cache
dynamic
target
static
Prior art date
Legal status
Pending
Application number
CN202310450756.5A
Other languages
Chinese (zh)
Inventor
鲁重瑞
于汉江
Current Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date
Filing date
Publication date
Application filed by Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority to CN202310450756.5A
Publication of CN116578593A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/24 - Querying
    • G06F 16/245 - Query processing
    • G06F 16/2455 - Query execution
    • G06F 16/24552 - Database cache management
    • G06F 16/23 - Updating
    • G06F 16/2308 - Concurrency control
    • G06F 16/2315 - Optimistic concurrency control
    • G06F 16/2322 - Optimistic concurrency control using timestamps
    • G06F 16/2457 - Query processing with adaptation to user needs
    • G06F 16/24578 - Query processing with adaptation to user needs using ranking
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a data caching method, system, device, computer equipment and storage medium. The method comprises the following steps: determining a cache priority of target data in response to an access request for the target data; storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region; and, at a preset update time, updating original static cache data in a preset static cache region according to the target dynamic cache data to obtain updated static cache data, where the updated static cache data is used to provide repeated data access to the target data. The method can improve the hit rate of data access.

Description

Data caching method, system, device, computer equipment and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a data caching method, system, device, computer equipment, and storage medium.
Background
With the development of cloud computing technology, magnetic resonance simulation cloud platforms have emerged. The many services deployed on such a platform generally need database access, and a Unified Data Configuration Center (UDC) can provide them with a unified database access interface to synchronize database access. However, because a large number of services access the database directly through the UDC, database access efficiency easily degrades.
In the prior art, to improve database access efficiency, the data queried on the last database access is generally stored in a cache and read directly from the cache on subsequent accesses. However, cache space is limited while cloud computing generally involves massive amounts of data, so the cache hit rate during data access is easily low.
The existing cloud computing technology therefore suffers from a low cache hit rate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data caching method, system, apparatus, computer device, and computer readable storage medium that can improve the cache hit rate.
In a first aspect, the present application provides a data caching method. The method comprises the following steps:
determining a cache priority of target data in response to an access request for the target data;
storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region;
at a preset updating moment, according to the target dynamic cache data, updating original static cache data in a preset static cache region to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In one embodiment, the determining, in response to an access request for target data, a cache priority of the target data includes:
determining a first priority and a second priority of the target data in response to the access request; the first priority is associated with a machine learning process for the target data, and the second priority is associated with a dynamic caching process for the target data;
and performing weighted summation on the first priority and the second priority to obtain the cache priority of the target data.
In one embodiment, the determining the first priority and the second priority of the target data in response to the access request includes:
determining the data attribute of the target data according to the access request;
and inputting the data attribute into a pre-trained machine learning model to obtain the first priority, and inputting the data attribute into a pre-trained dynamic cache model to obtain the second priority.
In one embodiment, the storing the target data in a preset dynamic cache area according to the cache priority to obtain target dynamic cache data in the dynamic cache area includes:
ranking the target data among the original dynamic cache data of the dynamic cache region according to the cache priority to obtain sorted dynamic cache data;
if the number of the ordered dynamic cache data exceeds the cache space size of the dynamic cache region, truncating the ordered dynamic cache data to obtain truncated dynamic cache data;
and storing the truncated dynamic cache data serving as the target dynamic cache data in the dynamic cache region.
In one embodiment, at a preset update time, according to the target dynamic cache data, updating original static cache data in a preset static cache region to obtain updated static cache data, including:
determining repeated data between the original static cache data and the target dynamic cache data at the updating moment;
deleting the repeated data from the target dynamic cache data to obtain deleted dynamic cache data;
and if the total quantity of the deleted dynamic cache data and the original static cache data does not exceed the cache space size of the static cache region, merging the deleted dynamic cache data with the original static cache data to obtain the updated static cache data.
In one embodiment, after deleting the repeated data from the target dynamic cache data to obtain deleted dynamic cache data, the method further includes:
if the total quantity of the deleted dynamic cache data and the original static cache data exceeds the cache space size of the static cache region, truncating the original static cache data according to the cache space size of the static cache region to obtain truncated static cache data;
and merging the truncated static cache data with the deleted dynamic cache data to obtain the updated static cache data.
In a second aspect, the application further provides a data caching system. The system comprises a cache processor, a dynamic cache unit and a static cache unit; the dynamic cache unit is connected with the static cache unit, and the dynamic cache unit and the static cache unit are both connected with the cache processor;
the cache processor is used for responding to an access request for target data, determining the cache priority of the target data and sending the cache priority to the dynamic cache unit;
the dynamic cache unit is used for storing the target data according to the received cache priority to obtain target dynamic cache data;
the static cache unit is used for updating the original static cache data according to the target dynamic cache data acquired from the dynamic cache unit at a preset updating moment to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In a third aspect, the present application further provides a data caching device. The device comprises:
the data acquisition module is used for responding to an access request for target data and determining the cache priority of the target data;
the dynamic cache module is used for storing the target data in a preset dynamic cache area according to the cache priority to obtain target dynamic cache data of the dynamic cache area;
the static cache module is used for updating the original static cache data in the preset static cache region according to the target dynamic cache data at preset updating time to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In a fourth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
determining a cache priority of target data in response to an access request for the target data;
storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region;
at a preset updating moment, according to the target dynamic cache data, updating original static cache data in a preset static cache region to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In a fifth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
determining a cache priority of target data in response to an access request for the target data;
storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region;
at a preset updating moment, according to the target dynamic cache data, updating original static cache data in a preset static cache region to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
According to the data caching method, system, device, computer equipment and storage medium, the cache priority of target data is determined in response to an access request for the target data; the target data is stored in a preset dynamic cache region according to the cache priority to obtain the target dynamic cache data of the dynamic cache region; and, at a preset update time, the original static cache data in a preset static cache region is updated according to the target dynamic cache data to obtain updated static cache data. The cache is divided into a dynamic cache region storing dynamic cache data and a static cache region storing static cache data. The dynamic cache data is updated in real time in the dynamic cache region, which improves the cache hit rate of the dynamic cache region; the static cache region then updates its static cache data in batches from the dynamic cache region at the preset time, which improves the cache hit rate of the static cache region, so that data access to the static cache region enjoys a higher cache hit rate.
Drawings
FIG. 1 is a flow chart of a data caching method in one embodiment;
FIG. 2 is a flow diagram of a cache service component data processing process in one embodiment;
FIG. 3 is a flow diagram of a cache service component data configuration process in one embodiment;
FIG. 4 is a flow diagram of data processing in a policy section of a cache service component according to one embodiment;
FIG. 5 is a flow diagram of data processing in a data area of a cache service component according to one embodiment;
FIG. 6 is a block diagram of a data caching system in one embodiment;
FIG. 7 is a block diagram of a data caching apparatus in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in fig. 1, a data caching method is provided. The method is described here as applied to a server by way of illustration; it is understood that the method may also be applied to a terminal, or to a system including the terminal and the server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the steps of:
Step S110, in response to the access request for the target data, determining the buffer priority of the target data.
The target data may be data that needs to be accessed.
The cache priority may be the priority used to order the target data in the dynamic cache region.
In a specific implementation, the server may include a processor and a memory. The memory is provided with a cache, which includes a dynamic cache region and a static cache region in communication connection with each other; the data cached in the dynamic cache region is the dynamic cache data, and the data cached in the static cache region is the static cache data. The static cache region faces the user: it receives the user's data access requests and returns the accessed data to the user. When target data needs to be accessed, the user sends an access request to the server through a terminal. For the received access request, the static cache region looks the target data up in the database and returns what it finds to the terminal; the static cache region can also send the data attributes of the target data to the processor, which determines the cache priority of the target data.
In practical application, a Data Dynamic Cache Service (DDCS) may be set up in the unified data configuration center. The DDCS includes a policy area (the processor) and a data area (the cache); the data area includes the dynamic cache region and the static cache region, and the policy area is configured with a Machine Learning Algorithms (MLA) policy and a General Dynamic Buffer Algorithm (GDBA) policy associated with the cache priority of the target data in the dynamic cache region, and a General Static Buffer Algorithm (GSBA) policy associated with the cache priority in the static cache region.
After receiving a user's access request, the static cache region first searches itself for the target data; if the target data is found, it is returned directly to the terminal. If not, the target data is looked up in the database, the database returns what it finds to the static cache region, and the static cache region returns the target data to the terminal; the policy area can then determine the cache priority of the target data so that it is stored in the dynamic cache region according to that priority. Otherwise, if the target data cannot be found in the database either, the static cache region returns a null value to the terminal and stores the null value.
The data attributes of the target data include, but are not limited to, the type, number, format and length of the target data.
The MLA may be an RNN (Recurrent Neural Network), a random forest, or an LSTM (Long Short-Term Memory) network.
The GDBA may be FIFO (first-in, first-out), LFU (Least Frequently Used), LRU (Least Recently Used), or OPT (optimal page replacement), among others.
The GSBA may likewise be FIFO, LFU, LRU or OPT.
Step S120, storing the target data in a preset dynamic cache area according to the cache priority, and obtaining the target dynamic cache data of the dynamic cache area.
The target dynamic cache data may be dynamic cache data including target data.
In a specific implementation, the dynamic cache region can acquire the cache priority of the target data, rank the target data among its original dynamic cache data according to the cache priority, and store the resulting sorted dynamic cache data to obtain the target dynamic cache data of the dynamic cache region.
For example, before the user makes an access request, the dynamic cache region holds 4 items of original dynamic cache data d1, d2, d3 and d4, with cache priorities 0.9, 0.8, 0.6 and 0.5 respectively. The user sends an access request for target data d0; if d0 cannot be found in the static cache region, d0 can be looked up in the database, and the d0 found in the database is returned to the terminal. The DDCS policy area can also determine that the cache priority of d0 is 0.7 and, ordering by cache priority from high to low, place d0 between d2 and d3, obtaining the target dynamic cache data: d1, d2, d0, d3, d4.
Step S130, at a preset updating moment, updating original static cache data in a preset static cache region according to target dynamic cache data to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
The update time may be a time when the static cache data of the static cache area is updated according to the dynamic cache data.
The original static cache data may be static cache data stored in the static cache region before the update time.
In a specific implementation, when the update time is reached, the static cache region can acquire the target dynamic cache data from the dynamic cache region and merge it with the original static cache data to obtain the updated static cache data. Because the updated static cache data contains the target data and the static cache region faces the user, when the user sends an access request for the target data again, the target data can be obtained directly from the static cache region.
For example, the static cache data of the static cache region is updated every 30 minutes using the dynamic cache data of the dynamic cache region, with specific update times of 9:00, 9:30, and so on. At 9:10, a user sends an access request for target data 1, and the dynamic cache region stores the target data 1 found in the database to obtain target dynamic cache data 1; at 9:15, a user sends an access request for target data 2, and the dynamic cache region stores the target data 2 found in the database to obtain target dynamic cache data 2; at 9:20, a user sends an access request for target data 3, and the dynamic cache region stores the target data 3 found in the database to obtain target dynamic cache data 3. At 9:30, the static cache region performs a unified update of the original static cache data, merging it with the target dynamic cache data 3 to obtain the updated static cache data, which contains target data 1, target data 2 and target data 3.
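To make the fixed-time update concrete, the following Python sketch shows one way such a timed batch update could be driven. It is illustrative only; merge_static_from_dynamic() and snapshot() are hypothetical helpers standing in for the update of step S130, and nothing here is prescribed by the application.

```python
import threading

FT_SECONDS = 30 * 60  # fixed update interval FT, e.g. 30 minutes as in the example above

def start_fixed_time_updates(dynamic_cache, static_cache):
    """Drive the batch update: every FT, merge the dynamic cache into the static cache."""
    def tick():
        # snapshot() and merge_static_from_dynamic() are assumed helpers,
        # standing in for the merge performed at the update time.
        static_cache.merge_static_from_dynamic(dynamic_cache.snapshot())
        threading.Timer(FT_SECONDS, tick).start()  # schedule the next update
    threading.Timer(FT_SECONDS, tick).start()
```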
According to the data caching method, the cache priority of the target data is determined in response to the access request for the target data; the target data is stored in the preset dynamic cache region according to the cache priority to obtain the target dynamic cache data of the dynamic cache region; and, at the preset update time, the original static cache data in the preset static cache region is updated according to the target dynamic cache data to obtain the updated static cache data. The cache is divided into a dynamic cache region storing dynamic cache data and a static cache region storing static cache data. The dynamic cache data is updated in real time in the dynamic cache region, which improves the cache hit rate of the dynamic cache region; the static cache region then updates its static cache data in batches from the dynamic cache region at the preset time, which improves the cache hit rate of the static cache region, so that data access to the static cache region enjoys a higher cache hit rate.
In one embodiment, the step S110 may specifically include: determining a first priority and a second priority of the target data in response to the access request, where the first priority is associated with machine learning processing for the target data and the second priority is associated with dynamic caching processing for the target data; and performing weighted summation on the first priority and the second priority to obtain the cache priority of the target data.
The first priority may be a buffer priority obtained by the MLA. The second priority may be a cache priority obtained by the GDBA.
In a specific implementation, after receiving an access request for target data, the static cache region can acquire the data attributes of the target data and send them to the processor. The processor inputs the data attributes into a preset machine learning model, which determines the first priority of the target data, and into a preset dynamic cache model, which determines the second priority; the first and second priorities are then weighted and summed according to preset weights to obtain the cache priority of the target data in the dynamic cache region.
In practical application, the weight of the MLA policy model can be set to W_m and the weight of the GDBA policy model to 1 - W_m. The type (or number) of the target data is input into the MLA policy model, which outputs the first priority p_m; the type (or number) of the target data is input into the GDBA policy model, which outputs the second priority p_g. The cache priority of the target data is then p_o = p_m * W_m + p_g * (1 - W_m). Further, MLA can be made the primary policy and GDBA the auxiliary one by setting W_m > 1 - W_m.
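As a minimal sketch of this weighting, the Python function below combines the two priorities; the default weight W_m = 0.7 is an illustrative choice satisfying W_m > 1 - W_m, not a value given by the application.

```python
def cache_priority(p_m: float, p_g: float, w_m: float = 0.7) -> float:
    """Weighted sum of the MLA priority p_m and the GDBA priority p_g.

    w_m > 0.5 makes MLA the primary signal and GDBA the auxiliary one,
    matching the W_m > 1 - W_m setting described above.
    """
    return p_m * w_m + p_g * (1.0 - w_m)

# e.g. an MLA output of 0.8 and a GDBA output of 0.5 give
p_o = cache_priority(0.8, 0.5)  # 0.8*0.7 + 0.5*0.3 = 0.71
```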
In this embodiment, the first priority and the second priority of the target data are determined in response to the access request, where the first priority is associated with machine learning processing for the target data and the second priority with dynamic caching processing for the target data, and the two are weighted and summed to obtain the cache priority of the target data. Because the MLA can predict the access state of the current target data from the access state of historical data, it avoids overfitting to a small sample range; the GDBA constrains the caching of the target data with a conventional caching policy, and giving the conventional policy weight in the cache guards against MLA misjudgments. The combination of MLA and GDBA therefore determines the cache priority of the target data, and hence the position of the target data in the dynamic cache region, reasonably.
In one embodiment, the step of determining the first priority and the second priority of the target data in response to the access request may specifically include: determining the data attribute of the target data according to the access request; the data attributes are input to a pre-trained machine learning model to obtain a first priority, and the data attributes are input to a pre-trained dynamic cache model to obtain a second priority.
In a specific implementation, the static buffer area may determine a data attribute of the target data, send the data attribute to the processor, the processor inputs the data attribute to a preset machine learning model, determines a first priority of the target data through the machine learning model, inputs the data attribute to a preset dynamic buffer model, and determines a second priority of the target data through the dynamic buffer model.
In practical application, the static cache region can determine the type (or number) of the target data and send it to the policy area; the policy area inputs the type (or number) of the target data into the MLA policy model and the GDBA policy model, obtaining the first priority p_m and the second priority p_g of the target data respectively.
In this embodiment, the data attributes of the target data are determined according to the access request, then input into a pre-trained machine learning model to obtain the first priority and into a pre-trained dynamic cache model to obtain the second priority. The priority of the target data can thus be obtained from the machine learning policy and the dynamic caching policy respectively, which makes the cache priority of the target data more reasonable.
In one embodiment, the step S120 may specifically include: ranking the target data among the original dynamic cache data of the dynamic cache region according to the cache priority to obtain sorted dynamic cache data; if the number of sorted dynamic cache data items exceeds the cache space size of the dynamic cache region, truncating the sorted dynamic cache data to obtain truncated dynamic cache data; and storing the truncated dynamic cache data in the dynamic cache region as the target dynamic cache data.
The original dynamic cache data may be the dynamic cache data before the target data is added.
The truncating process may be a process of deleting a number of data at the end.
In a specific implementation, the dynamic cache region can rank the target data among the original dynamic cache data according to the cache priority to obtain the sorted dynamic cache data, then count the number of sorted dynamic cache data items and compare it with the cache space size of the dynamic cache region. If the number of sorted dynamic cache data items does not exceed the cache space size of the dynamic cache region, the sorted dynamic cache data is stored in the dynamic cache region directly as the target dynamic cache data; otherwise, if the number exceeds the cache space size of the dynamic cache region, the trailing items are deleted from the sorted dynamic cache data to obtain the truncated dynamic cache data, whose number does not exceed the cache space size of the dynamic cache region, and the truncated dynamic cache data is stored in the dynamic cache region as the target dynamic cache data.
In practical application, let the sorted dynamic cache data be n1, n2, ..., nN, with total number N, and let the cache space size of the dynamic cache region be DSIZE. If N <= DSIZE, n1, n2, ..., nN are stored directly in the dynamic cache region; otherwise, if N > DSIZE, the last (N - DSIZE) items of the sorted dynamic cache data are deleted and n1, n2, ..., nDSIZE are stored in the dynamic cache region.
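A minimal sketch of this sort-and-truncate step, assuming cache entries are represented as (key, priority) pairs (an illustrative layout, not the application's data format):

```python
def update_dynamic_cache(entries, dsize):
    """Rank entries by cache priority (descending) and truncate to DSIZE items."""
    ranked = sorted(entries, key=lambda kv: kv[1], reverse=True)
    return ranked[:dsize]  # drops the last (N - DSIZE) lowest-priority items

# Continuing the earlier d0..d4 example with DSIZE = 4: d4 is dropped.
cache = update_dynamic_cache(
    [("d1", 0.9), ("d2", 0.8), ("d0", 0.7), ("d3", 0.6), ("d4", 0.5)], 4)
# cache == [("d1", 0.9), ("d2", 0.8), ("d0", 0.7), ("d3", 0.6)]
```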
In this embodiment, sorting processing is performed on the target data in the original dynamic cache data in the dynamic cache region according to the cache priority, so as to obtain sorted dynamic cache data; if the number of the ordered dynamic cache data exceeds the size of the cache space of the dynamic cache region, truncating the ordered dynamic cache data to obtain truncated dynamic cache data; taking the truncated dynamic cache data as target dynamic cache data and storing the target dynamic cache data in a dynamic cache region; the data overflow of the dynamic cache area can be avoided, and the effectiveness of the dynamic cache data is ensured.
In one embodiment, the step S130 may specifically include: determining repeated data between original static cache data and target dynamic cache data at the updating moment; deleting the repeated data from the target dynamic cache data to obtain deleted dynamic cache data; if the total quantity of the deleted dynamic cache data and the original static cache data does not exceed the cache space size of the static cache region, merging the deleted dynamic cache data with the original static cache data to obtain updated static cache data.
The repeated data may be data contained in both the original static cache data and the target dynamic cache data.
In a specific implementation, at least one update time may be preset. At each update time, the repeated data between the original static cache data and the target dynamic cache data is first found and deleted from the target dynamic cache data to obtain the deleted dynamic cache data; the total number of deleted dynamic cache data items and original static cache data items is then counted and compared with the cache space size of the static cache region. If the total number does not exceed the cache space size of the static cache region, the deleted dynamic cache data and the original static cache data are merged directly to obtain the updated static cache data.
In practical application, let the target dynamic cache data of the dynamic cache region be set A and the original static cache data of the static cache region be set B. At the preset update time, the GSBA operation is performed on set B, the data repeated between sets A and B is deleted from set A, and the data quantity ASIZE of the deduplicated set A, the data quantity BSIZE of set B and the space size SSIZE of the static cache region are counted. If (SSIZE - BSIZE) >= ASIZE, the deduplicated set A is merged directly into set B to obtain a new set B; the data in the new set B is the updated static cache data.
In this embodiment, the repeated data between the original static cache data and the target dynamic cache data is determined at the update time and deleted from the target dynamic cache data to obtain the deleted dynamic cache data; if the total number of deleted dynamic cache data items and original static cache data items does not exceed the cache space size of the static cache region, the two are merged to obtain the updated static cache data. The repeated data between the original static cache data and the target dynamic cache data can thus be removed before the merge, so that data access performed on the updated static cache data is more efficient.
In one embodiment, after the deleting the repeated data from the target dynamic cache data to obtain the deleted dynamic cache data, the method specifically further includes: if the total number of the deleted dynamic cache data and the original static cache data exceeds the cache space size of the static cache region, truncating the original static cache data according to the cache space size of the static cache region to obtain truncated static cache data; and merging the truncated static cache data with the deleted dynamic cache data to obtain updated static cache data.
In a specific implementation, at each update time, if the total number of deleted dynamic cache data items and original static cache data items exceeds the cache space size of the static cache region, the trailing items can be deleted from the original static cache data to obtain the truncated static cache data, such that the sum of the truncated static cache data and the deleted dynamic cache data does not exceed the cache space size of the static cache region; the deleted dynamic cache data and the truncated static cache data can then be merged to obtain the updated static cache data.
In practical application, let the target dynamic cache data of the dynamic cache region again be set A and the original static cache data of the static cache region be set B. At the preset update time, the GSBA operation is performed on set B, the data repeated between sets A and B is deleted from set A, and ASIZE, BSIZE and SSIZE are counted as before. If (SSIZE - BSIZE) < ASIZE, the last ((ASIZE + BSIZE) - SSIZE) items of set B are deleted to obtain the truncated set B, and the deduplicated set A is then merged into the truncated set B to obtain the new set B; the data in the new set B is the updated static cache data.
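Both branches of the update can be sketched together as below. The dict-based representation, with insertion order standing in for priority order and trailing entries lowest, is an assumption made for illustration only:

```python
def update_static_cache(a_set: dict, b_set: dict, ssize: int) -> dict:
    """Merge dynamic cache data (set A) into static cache data (set B).

    Assumes b_set is insertion-ordered so that its trailing entries are
    its lowest-priority ones (an illustrative convention, not the patent's).
    """
    # Delete from A the repeated data already present in B.
    deduped_a = {k: v for k, v in a_set.items() if k not in b_set}
    overflow = len(deduped_a) + len(b_set) - ssize  # (ASIZE + BSIZE) - SSIZE
    if overflow > 0:
        # Truncate B: drop its last `overflow` (lowest-priority) entries.
        b_set = dict(list(b_set.items())[:max(len(b_set) - overflow, 0)])
    merged = dict(b_set)
    merged.update(deduped_a)  # the new set B is the updated static cache data
    return merged
```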
In this embodiment, if the total number of the deleted dynamic cache data items and the original static cache data items exceeds the cache space size of the static cache region, the original static cache data is truncated according to the cache space size of the static cache region to obtain the truncated static cache data, which is merged with the deleted dynamic cache data to obtain the updated static cache data; this avoids data overflow in the static cache region and ensures the validity of the static cache data.
In order to facilitate a thorough understanding of embodiments of the present application by those skilled in the art, the following description will be provided in connection with a specific example.
To improve the caching efficiency and the cache hit rate of data, the application adds a high-hit-rate Data Dynamic Caching Service (DDCS) component to the unified data configuration center (UDC) of a magnetic resonance simulation cloud platform. The cache service component divides the cache into a dynamic cache region and a static cache region; the dynamic cache is updated in real time, while the static cache is updated interactively with the dynamic cache at a Fixed Time (FT).
In particular, components other than the DDCS may be referred to as the external area, and the DDCS may be divided into a policy area and a data area. The policy area is the algorithm part and comprises the MLA, GDBA and GSBA; the MLA can adopt an RNN, a random forest or an LSTM, and the GDBA and GSBA can each adopt FIFO, LFU, LRU or OPT. The specific form and parameters of each algorithm can be set through manual configuration. The data area comprises the dynamic cache region, the static cache region, the database and the related operations.
FIG. 2 provides a flow diagram of the cache service component data processing process. According to fig. 2, the DDCS component data processing procedure may specifically include:
step S210, after the simulation cloud platform is started, initializing the cache service component, configuring the specific algorithms and algorithm parameters of the MLA, GDBA and GSBA, and configuring the FT;
step S220, the cache service component runs; when the external area accesses data, the static cache region (set B) of the data area is accessed first, and meanwhile the attributes of the target data to be accessed are input into the policy area as its analysis data;
step S230, the policy area, taking MLA as the primary policy and GDBA as the auxiliary one, outputs their cache priorities to the data area according to the attributes of the input target data;
step S240, merging the target data into the dynamic cache region (set A) according to the cache priority output by the policy area, and merging set A into set B of the static cache region after the FT time;
step S250, first judging whether the target data to be accessed exists in the static cache region, and feeding the "exists" or "does not exist" query result back to the policy area; if it exists, the target data is returned to the external area, and if not, it is further searched for in the database; if the target data is found in the database, it is returned to the external area, and if not, a null value is returned; finally, the queried target data or null value is stored in the static cache region (a minimal sketch of this lookup flow is given below).
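The sketch below illustrates the lookup flow of step S250, using plain dicts for the static cache and database and a hypothetical report() callback for the hit/miss feedback to the policy area; all names are illustrative assumptions, not the application's interfaces.

```python
def access(key, static_cache: dict, database: dict, report):
    """Static cache first, then the database; misses are cached too (even as None)."""
    hit = key in static_cache
    report(key, hit)  # feed the "exists"/"does not exist" result back to the policy area
    if hit:
        return static_cache[key]
    value = database.get(key)   # None plays the role of the returned null value
    static_cache[key] = value   # store the queried target data or the null value
    return value
```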
In one embodiment, a method of data processing for a DDCS component is provided, as particularly shown in fig. 3-5.
FIG. 3 provides a flow diagram of the cache service component data configuration process. According to fig. 3, the data configuration procedure of the DDCS may specifically include:
step S310, initializing the cache service component after the magnetic resonance simulation cloud platform is started, which includes configuring the MLA, GDBA, GSBA and FT, for example configuring the MLA as an RNN, the GDBA and GSBA as LFU, and the FT as 1 h;
step S320, if the configuration is successful, the cache service component starts to operate;
step S330, if the configuration is unsuccessful, a cache service component initialization failure message is returned, and the system switches to using the database directly.
FIG. 4 provides a flow diagram of data processing in the policy area of the cache service component. According to fig. 4, the policy area data processing procedure of the DDCS may specifically include:
the policy zone is divided into MLA and GDBA, the MLA is configured as RNN, and the GDBA is configured as LFU.
The LFU algorithm counts the reference times of the input data, sorts all the data by reference count, stores the data in a queue, and outputs a weight queue of size N (a minimal sketch is given below).
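A minimal LFU sketch along these lines follows; the normalisation to (0, 1] is an illustrative assumption added so the counts can serve directly as GDBA priorities, and the class name is hypothetical.

```python
from collections import Counter

class LFUQueue:
    """Count references per key and emit the top-N keys as a weight queue."""

    def __init__(self, n: int):
        self.n = n
        self.counts = Counter()

    def touch(self, key) -> None:
        self.counts[key] += 1  # one more reference to this data item

    def weight_queue(self):
        top = self.counts.most_common(self.n)   # sorted by reference count
        peak = top[0][1] if top else 1
        return [(k, c / peak) for k, c in top]  # normalised weights in (0, 1]
```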
The RNN algorithm is characterized by its ability to analyze and predict time-series data. In the network configuration, the state at the previous moment is used as an input and analyzed together with the input data at the current moment to produce the prediction result. It therefore has an advantage in cache analysis over consecutive time points.
The RNN algorithm comprises an input layer, a hidden layer and an output layer, and is specifically as follows:
input layer: inputting N data, wherein X is E Rm, d, m represents the minimum batch sample number, and d represents the characteristic dimension; the feature dimension contains attribute features of the access data at the current moment, such as the current time, the reference times, the interval time and the like.
Hidden layer: S_t = f(W·S_(t-1) + U·X_t), where S_t denotes the state of the sample at time t, W the weight of the recurrent input, U the weight of the input data at the current moment, and f the activation function, for which a tanh function can specifically be adopted;
output layer: ot=g (VSt), where g is also an activation function, and in particular, a softmax function may be employed, and V represents the weight of the output data.
The neural network is trained on time-series data; it analyzes the data at the current moment and gives the probability corresponding to the data, whose value represents the probability that the data is placed into the cache.
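The layer equations above can be written out as a single recurrent step in NumPy; the shapes and the softmax output below are illustrative assumptions consistent with the configuration described, not the application's exact network.

```python
import numpy as np

def rnn_step(x_t, s_prev, W, U, V):
    """One recurrent step: S_t = tanh(W*S_{t-1} + U*X_t), O_t = softmax(V*S_t).

    x_t: (d,) feature vector of the accessed data at time t
    s_prev: (h,) hidden state from the previous time step
    W: (h, h), U: (h, d), V: (k, h) weight matrices (assumed shapes)
    """
    s_t = np.tanh(W @ s_prev + U @ x_t)        # hidden layer
    logits = V @ s_t                           # output layer, pre-activation
    shifted = logits - logits.max()            # numerically stable softmax
    o_t = np.exp(shifted) / np.exp(shifted).sum()  # probability of caching the data
    return s_t, o_t
```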
FIG. 5 provides a flow chart of data processing in the data area of the cache service component. According to fig. 5, the data processing procedure of the DDCS data area may specifically include:
step S510, for the target data and the original dynamic cache data, the MLA of the policy area outputs a priority vector N_m of size N×1, and the GDBA outputs a priority vector N_g of size N×1; based on the computed (N_m * W_m + N_g * (1 - W_m)), the target data and the original dynamic cache data are re-ordered to obtain the sorted dynamic cache data N_o, where W_m denotes the weight ratio of the MLA output priority vector N_m and 1 - W_m the weight ratio of the GDBA output priority vector N_g;
step S520, calculating the dynamic cache space DSIZE; if N <= DSIZE, N_o is saved directly to the dynamic cache region; if N > DSIZE, the last (N - DSIZE) elements in the ordering of N_o are deleted to obtain the truncated dynamic cache data, which is stored in the dynamic cache region;
step S530, after the FT time, the GSBA operation is performed on the static cache set B, and then the values repeated between sets A and B are deleted; the space size ASIZE of the deduplicated set A, the space size BSIZE of set B and the static cache space size SSIZE are calculated; if (SSIZE - BSIZE) >= ASIZE, the deduplicated set A is merged directly into set B to form the new set B; if (SSIZE - BSIZE) < ASIZE, the last ((ASIZE + BSIZE) - SSIZE) elements in the ordering of set B are deleted, and set A is then merged into the static cache region to form the new set B;
step S540, if the static cache region (set B) of the data area holds the target data to be accessed by the external area, the target data is returned and the access ends; if not, the database of the data area is accessed and the target data in the database is stored in the static cache region; in either case, whether the static cache region holds the target data is fed back to the policy area.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a data caching system and a device for realizing the above related data caching method. The implementation of the solution provided by the system and the device is similar to that described in the above method, so the specific limitation in the embodiments of the data caching system and the device provided below may refer to the limitation of the data caching method hereinabove, and will not be repeated herein.
In one embodiment, as shown in FIG. 6, there is provided a data caching system comprising: a cache processor 610, a dynamic cache unit 620, and a static cache unit 630; the dynamic cache unit 620 is connected to the static cache unit 630, and both the dynamic cache unit 620 and the static cache unit 630 are connected to the cache processor 610; wherein:
the cache processor 610 is configured to determine a cache priority of target data in response to an access request for the target data, and send the cache priority to the dynamic cache unit 620;
the dynamic cache unit 620 is configured to store the target data according to the received cache priority, so as to obtain the target dynamic cache data;
the static cache unit 630 is configured to update, at a preset update time, the original static cache data according to the target dynamic cache data obtained from the dynamic cache unit 620, so as to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In a specific implementation, the static cache unit receives an access request for target data. If the target data is not in the static cache unit, it can be looked up in the database, and the target data found in the database is returned to the terminal; the static cache unit can also send the data attributes of the target data to the cache processor, which determines the cache priority of the target data from the data attributes and sends it to the dynamic cache unit. The dynamic cache unit stores the target data according to the received cache priority to obtain the target dynamic cache data. When the update time is reached, the static cache unit acquires the target dynamic cache data from the dynamic cache unit and merges it with its original static cache data to obtain the updated static cache data. Because the updated static cache data contains the target data and the static cache unit faces the user, when the user sends an access request for the target data again, the target data can be obtained directly from the static cache unit.
According to the data caching system, the cache processor determines the cache priority of target data in response to an access request for the target data and sends the cache priority to the dynamic cache unit; the dynamic cache unit stores the target data according to the received cache priority to obtain target dynamic cache data; and the static cache unit updates the original static cache data according to the target dynamic cache data obtained from the dynamic cache unit at the preset update time to obtain updated static cache data. The cache is divided into a dynamic cache region storing dynamic cache data and a static cache region storing static cache data. The dynamic cache data is updated in real time in the dynamic cache region, which improves the cache hit rate of the dynamic cache region; the static cache region then updates its static cache data in batches from the dynamic cache region at the preset time, which improves the cache hit rate of the static cache region, so that data access to the static cache region enjoys a higher cache hit rate.
In one embodiment, the cache processor 610 is further configured to determine, in response to the access request, a first priority and a second priority of the target data; the first priority is associated with a machine learning process for the target data, and the second priority is associated with a dynamic caching process for the target data; and carrying out weighted summation on the first priority and the second priority to obtain the buffer memory priority of the target data.
In one embodiment, the cache processor 610 is further configured to determine a data attribute of the target data according to the access request; and inputting the data attribute into a pre-trained machine learning model to obtain the first priority, and inputting the data attribute into a pre-trained dynamic cache model to obtain the second priority.
In one embodiment, the dynamic cache unit 620 is further configured to rank the target data among the original dynamic cache data of the dynamic cache region according to the cache priority to obtain sorted dynamic cache data; if the number of sorted dynamic cache data items exceeds the cache space size of the dynamic cache region, truncate the sorted dynamic cache data to obtain truncated dynamic cache data; and store the truncated dynamic cache data in the dynamic cache region as the target dynamic cache data.
In one embodiment, the static cache unit 630 is further configured to determine, at the update time, the repeated data between the original static cache data and the target dynamic cache data; delete the repeated data from the target dynamic cache data to obtain deleted dynamic cache data; and, if the total number of the deleted dynamic cache data items and the original static cache data items does not exceed the cache space size of the static cache region, merge the deleted dynamic cache data with the original static cache data to obtain the updated static cache data.
In one embodiment, the static cache unit 630 is further configured to truncate the original static cache data according to the cache space size of the static cache region if the total number of the deleted dynamic cache data items and the original static cache data items exceeds that cache space size, obtaining truncated static cache data, and to merge the truncated static cache data with the deleted dynamic cache data to obtain the updated static cache data.
In one embodiment, as shown in fig. 7, there is provided a data caching apparatus, including: a data acquisition module 710, a dynamic cache module 720 and a static cache module 730, wherein:
a data acquisition module 710, configured to determine a cache priority of target data in response to an access request for the target data;
the dynamic cache module 720 is configured to store the target data in a preset dynamic cache region according to the cache priority, so as to obtain the target dynamic cache data of the dynamic cache region;
the static cache module 730 is configured to update, at a preset update time, the original static cache data in a preset static cache region according to the target dynamic cache data, so as to obtain updated static cache data; the updated static cache data is used to provide repeated data access to the target data.
In one embodiment, the data obtaining module 710 is further configured to determine, in response to the access request, a first priority and a second priority of the target data; the first priority is associated with a machine learning process for the target data, and the second priority is associated with a dynamic caching process for the target data; and carrying out weighted summation on the first priority and the second priority to obtain the buffer memory priority of the target data.
In one embodiment, the data obtaining module 710 is further configured to determine a data attribute of the target data according to the access request; and inputting the data attribute into a pre-trained machine learning model to obtain the first priority, and inputting the data attribute into a pre-trained dynamic cache model to obtain the second priority.
In one embodiment, the dynamic cache module 720 is further configured to rank the target data among the original dynamic cache data in the dynamic cache region according to the cache priority to obtain sorted dynamic cache data; if the number of sorted dynamic cache data items exceeds the cache space size of the dynamic cache region, truncate the sorted dynamic cache data to obtain truncated dynamic cache data; and store the truncated dynamic cache data in the dynamic cache region as the target dynamic cache data.
In one embodiment, the static cache module 730 is further configured to determine, at the update time, the repeated data between the original static cache data and the target dynamic cache data; delete the repeated data from the target dynamic cache data to obtain deleted dynamic cache data; and, if the total number of the deleted dynamic cache data items and the original static cache data items does not exceed the cache space size of the static cache region, merge the deleted dynamic cache data with the original static cache data to obtain the updated static cache data.
In one embodiment, the static cache module 730 is further configured to truncate the original static cache data according to the cache space size of the static cache region if the total number of the deleted dynamic cache data items and the original static cache data items exceeds that cache space size, obtaining truncated static cache data, and to merge the truncated static cache data with the deleted dynamic cache data to obtain the updated static cache data.
The various modules in the data caching system and apparatus described above may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded, in hardware form, in or independent of the processor of the computer device, or stored, in software form, in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the cached data. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a data caching method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the processes in the above method embodiments may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The foregoing examples represent only a few embodiments of the application, and their description, while specific and detailed, should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A method of caching data, the method comprising:
determining a cache priority of target data in response to an access request for the target data;
storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region;
at a preset update time, updating original static cache data in a preset static cache region according to the target dynamic cache data to obtain updated static cache data; the updated static cache data is used to serve repeated access requests for the target data.
2. The method of claim 1, wherein the determining the cache priority of the target data in response to the access request for the target data comprises:
determining a first priority and a second priority of the target data in response to the access request; the first priority is associated with a machine learning process for the target data, and the second priority is associated with a dynamic caching process for the target data;
and performing weighted summation on the first priority and the second priority to obtain the cache priority of the target data.
3. The method of claim 2, wherein the determining the first priority and the second priority of the target data in response to the access request comprises:
determining the data attribute of the target data according to the access request;
and inputting the data attribute into a pre-trained machine learning model to obtain the first priority, and inputting the data attribute into a pre-trained dynamic cache model to obtain the second priority.
4. The method according to claim 1, wherein storing the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region comprises:
according to the cache priority, sorting the target data in the original dynamic cache data of the dynamic cache region to obtain sorted dynamic cache data;
if the amount of the sorted dynamic cache data exceeds the cache space size of the dynamic cache region, truncating the sorted dynamic cache data to obtain truncated dynamic cache data;
and storing the truncated dynamic cache data in the dynamic cache region as the target dynamic cache data.
5. The method according to claim 4, wherein updating, at the preset update time, the original static cache data in the preset static cache region according to the target dynamic cache data to obtain updated static cache data comprises:
determining duplicate data between the original static cache data and the target dynamic cache data at the update time;
deleting the duplicate data from the target dynamic cache data to obtain deleted dynamic cache data;
and if the total amount of the deleted dynamic cache data and the original static cache data does not exceed the cache space size of the static cache region, merging the deleted dynamic cache data with the original static cache data to obtain the updated static cache data.
6. The method according to claim 5, wherein, after deleting the duplicate data from the target dynamic cache data to obtain deleted dynamic cache data, the method further comprises:
if the total amount of the deleted dynamic cache data and the original static cache data exceeds the cache space size of the static cache region, truncating the original static cache data according to the cache space size of the static cache region to obtain truncated static cache data;
and merging the truncated static cache data with the deleted dynamic cache data to obtain the updated static cache data.
7. A data caching system, characterized by comprising a cache processor, a dynamic cache unit, and a static cache unit; the dynamic cache unit is connected to the static cache unit, and both the dynamic cache unit and the static cache unit are connected to the cache processor;
the cache processor is configured to determine, in response to an access request for target data, a cache priority of the target data, and to send the cache priority to the dynamic cache unit;
the dynamic cache unit is configured to store the target data according to the received cache priority to obtain target dynamic cache data;
and the static cache unit is configured to update, at a preset update time, original static cache data according to the target dynamic cache data acquired from the dynamic cache unit to obtain updated static cache data; the updated static cache data is used to serve repeated access requests for the target data.
8. A data caching apparatus, the apparatus comprising:
a data acquisition module configured to determine a cache priority of target data in response to an access request for the target data;
a dynamic cache module configured to store the target data in a preset dynamic cache region according to the cache priority to obtain target dynamic cache data of the dynamic cache region;
and a static cache module configured to update, at a preset update time, original static cache data in a preset static cache region according to the target dynamic cache data to obtain updated static cache data; the updated static cache data is used to serve repeated access requests for the target data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
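As a non-authoritative illustration of the method of claims 2 to 4 above, the following Python sketch combines a model-derived first priority and a dynamic-cache-derived second priority by weighted summation, then sorts and truncates the dynamic cache region; the function names, the tuple-based cache representation, and the weight values are all assumptions made for this sketch, not details fixed by the claims.

```python
from typing import Callable, Dict, List, Tuple

def cache_priority(attrs: Dict[str, float],
                   ml_model: Callable[[Dict[str, float]], float],
                   dyn_model: Callable[[Dict[str, float]], float],
                   w1: float = 0.6, w2: float = 0.4) -> float:
    # Claim 3: the first priority comes from a pre-trained machine learning
    # model and the second from a pre-trained dynamic cache model, both fed
    # the data attributes determined from the access request.
    # Claim 2: the two priorities are combined by weighted summation.
    return w1 * ml_model(attrs) + w2 * dyn_model(attrs)

def store_in_dynamic_cache(original: List[Tuple[float, str]],
                           item: str, priority: float,
                           capacity: int) -> List[Tuple[float, str]]:
    # Claim 4: sort the target data together with the original dynamic cache
    # data by cache priority, then truncate the sorted data if it exceeds
    # the cache space size of the dynamic cache region.
    merged = original + [(priority, item)]
    merged.sort(key=lambda entry: entry[0], reverse=True)
    return merged[:capacity]
```

In such a sketch, a request handler would call cache_priority once per access request and pass the result to store_in_dynamic_cache; the weights w1 and w2 are illustrative placeholders that would in practice be tuned.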
CN202310450756.5A 2023-04-20 2023-04-20 Data caching method, system, device, computer equipment and storage medium Pending CN116578593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310450756.5A CN116578593A (en) 2023-04-20 2023-04-20 Data caching method, system, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310450756.5A CN116578593A (en) 2023-04-20 2023-04-20 Data caching method, system, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116578593A true CN116578593A (en) 2023-08-11

Family

ID=87538779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310450756.5A Pending CN116578593A (en) 2023-04-20 2023-04-20 Data caching method, system, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116578593A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117493400A (en) * 2024-01-02 2024-02-02 中移(苏州)软件技术有限公司 Data processing method and device and electronic equipment
CN117493400B (en) * 2024-01-02 2024-04-09 中移(苏州)软件技术有限公司 Data processing method and device and electronic equipment
CN117666969A (en) * 2024-01-30 2024-03-08 中电数据产业有限公司 Distributed caching method and system based on Web security
CN117666969B (en) * 2024-01-30 2024-05-14 中电数据产业有限公司 Distributed caching method and system based on Web security

Similar Documents

Publication Publication Date Title
WO2022062184A1 (en) High-concurrency query method, intelligent terminal and storage medium
CN109766318B (en) File reading method and device
US9135630B2 (en) Systems and methods for large-scale link analysis
US9026523B2 (en) Efficient selection of queries matching a record using a cache
CN116578593A (en) Data caching method, system, device, computer equipment and storage medium
CN109101580A (en) A kind of hot spot data caching method and device based on Redis
CN114415965A (en) Data migration method, device, equipment and storage medium
CN108881254A (en) Intruding detection system neural network based
CN111198961B (en) Commodity searching method, commodity searching device and commodity searching server
CN112416368B (en) Cache deployment and task scheduling method, terminal and computer readable storage medium
CN110245129A (en) Distributed global data deduplication method and device
CN114205424B (en) Bill file decompression method, device, computer equipment and storage medium
CN114821248B (en) Point cloud understanding-oriented data active screening and labeling method and device
CN115878625A (en) Data processing method and device and electronic equipment
CN116737607B (en) Sample data caching method, system, computer device and storage medium
CN114253938A (en) Data management method, data management device, and storage medium
US20240005146A1 (en) Extraction of high-value sequential patterns using reinforcement learning techniques
US11966393B2 (en) Adaptive data prefetch
CN116975095A (en) Data storage method, data reading method, data processing system and storage medium
CN117909076A (en) Resource management method, device, computer equipment and storage medium
CN117172896A (en) Prediction method, prediction apparatus, computer device, storage medium, and program product
CN117216103A (en) Method, device, computer equipment and storage medium for determining cache failure time
CN117216009A (en) File processing method, apparatus, device, storage medium and computer program product
CN117648465A (en) Data processing method, device and equipment for Internet of things equipment
CN113988282A (en) Programmable access engine architecture for graph neural networks and graph applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination