CN106021445A - Cached data loading method and apparatus - Google Patents


Publication number
CN106021445A
CN106021445A (application CN201610324104.7A)
Authority
CN
China
Prior art keywords
data, time, cached, loading, cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610324104.7A
Other languages
Chinese (zh)
Other versions
CN106021445B (en)
Inventor
王福财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201610324104.7A priority Critical patent/CN106021445B/en
Publication of CN106021445A publication Critical patent/CN106021445A/en
Application granted granted Critical
Publication of CN106021445B publication Critical patent/CN106021445B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cached data loading method and apparatus. The method comprises: scanning a service access layer and identifying interfaces that carry preset automatic loading information; loading the data corresponding to each identified interface from a database according to preset cache parameter information, and caching it; and, when the data of an accessed service corresponds to an interface carrying the automatic loading information, reading the cached data to complete the service access processing. The cache parameter information includes preset data discrimination identifier information, caching condition information, and/or an expiration time. By caching data according to the automatic loading information and the cache parameter information, embodiments of the method and apparatus avoid the system avalanche that large numbers of concurrent operations can cause when the cache fails; further, by sorting the cached data and performing cache judgment processing, they improve both the utilization of data in the cache and the efficiency of reading the cached data.

Description

Method and device for loading cache data
Technical Field
The present disclosure relates to, but is not limited to, data processing technologies, and more particularly to a method and an apparatus for loading cache data.
Background
With the continuous expansion of the internet and the growth of its user base, new demands are placed on e-commerce websites. When a website is visited by millions of users, the response speed of the system directly affects the users' experience of the site.
One key technique for improving website access speed is caching. Many large websites make extensive use of memory caching technologies such as Redis (an open-source database written in American National Standards Institute (ANSI) C that supports networking, can run purely in memory or as a persistent log-structured Key-Value store, and provides Application Programming Interfaces (APIs) for multiple languages) and Memcached (a high-performance distributed memory object caching system used by dynamic Web applications to reduce database load). However, memory caching is limited by the physical memory of the system. If the virtual memory function is enabled and memory runs out, the caching system itself decides which infrequently used data to push to disk; in effect the business layer hands its cache exchange policy over to the caching technology, losing some flexibility in data caching. If the virtual memory function is disabled, the caching system falls back on the operating system's virtual memory, and the performance of any website service that depends on the cached data degrades drastically. A memory caching system can also cap its physical memory through configuration options; once the upper memory usage threshold is reached, it returns errors for write commands (while continuing to accept read commands), and under a large number of concurrent operations, requests then bypass the cache and hit the data layer directly, causing a system avalanche. In addition, in the related art the cache hit rate of cached data is not high.
In summary, related memory caching technologies suffer from system avalanches caused by large numbers of concurrent operations when the cache fails.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the invention provide a method and an apparatus for loading cache data that can avoid the system avalanche caused by a large number of concurrent operations when the cache fails.
An embodiment of the invention provides an apparatus for loading cache data, comprising an identification unit, a cache unit and a reading unit, wherein:
the identification unit is configured to scan the service access layer and identify interfaces containing preset automatic loading information;
the cache unit is configured to load, from the database according to preset cache parameter information, the data corresponding to each identified interface containing the automatic loading information, and to cache it;
the reading unit is configured to read the cached data when the accessed service data corresponds to an interface containing the automatic loading information, so as to complete the service access processing;
the cache parameter information comprises: preset data discrimination identifier information, and/or caching condition information, and/or an expiration time;
the data discrimination identifier information comprises: information generated by combining interface parameters and/or by converting the interface parameters with a preset method function.
Optionally, the cache unit is further configured to: when the accessed service data corresponds to an interface containing automatic loading information but is not read when the cached data is read, load the data of that interface from the database and cache it.
Optionally, the apparatus further comprises an updating unit,
configured to: when data is updated, if the updated data belongs to an interface containing the automatic loading information, update the data corresponding to the updated service held in the cache.
Optionally, the apparatus further includes a cache processing unit, configured to establish a message queue containing data state information for all cached data;
and to read the data state information of one or more entries from the established message queue according to a preset fetch strategy, and perform cache judgment processing on the data according to the state information read;
the data state information is collected in advance and comprises at least one of the following:
the time of the previous request for the cached data, and/or
the number of times the cached data has been loaded and the duration of each load, and/or
the number of times the cached data has been accessed within a preset time length.
Optionally, the cache processing unit is specifically configured to:
establish a message queue containing data state information for all cached data;
read the data state information of one or more entries from the established message queue according to a preset fetch strategy;
when the state information read includes the time of the previous request for the cached data, and the difference between the current system time and that previous request time is greater than a preset request-interval threshold, determine the cached data to be non-hotspot cached data and delete it from the cache; and/or
when the state information read includes the number of times the cached data has been loaded and the duration of each load, and the load count is greater than a preset load-count threshold and/or the load duration is greater than a preset load-duration threshold, determine the cached data to be hotspot but time-controllable data and delete it from the cache; and/or
when the state information read includes the number of times the cached data has been accessed within a preset time length, and that access count is less than a preset access-count threshold, determine the cached data to be non-hotspot cached data and delete it from the cache.
Optionally, the cache unit is further configured to:
when the cache parameter information includes an expiration time, obtain the expiration time of the cached data and the processing duration needed to load the data from the database and cache it, and subtract that duration from the expiration time to obtain the advance processing time;
and, if the service data has not been reloaded by the advance processing time, load the updated data corresponding to the expiring cached data from the database and cache it, so that it is in place when the expiration time arrives.
Optionally, the apparatus further comprises a sorting unit,
configured to sort the cached data by expiration time, load duration, and/or data request frequency.
In another aspect, an embodiment of the invention further provides a method for loading cache data, comprising:
scanning the service access layer and identifying interfaces containing preset automatic loading information;
loading, from the database according to preset cache parameter information, the data corresponding to each identified interface containing the automatic loading information, and caching it;
when the accessed service data corresponds to an interface containing the automatic loading information, reading the cached data to complete the service access processing;
wherein the cache parameter information comprises: preset data discrimination identifier information, and/or caching condition information, and/or an expiration time;
and the data discrimination identifier information comprises: information generated by combining interface parameters and/or by converting the interface parameters with a preset method function.
Optionally, when the accessed service data corresponds to an interface containing automatic loading information but is not read when the cached data is read, the method further includes: loading the data of that interface from the database and caching it.
Optionally, the method further includes:
and when the data is updated, if the updated data is the data on the interface containing the automatic loading information, updating the data corresponding to the updated service contained in the cache.
Optionally, the method further includes:
establishing a message queue containing data state information for all cached data;
reading the data state information of one or more entries from the established message queue according to a preset fetch strategy, and performing cache judgment processing on the data according to the state information read;
wherein the data state information is collected in advance and comprises at least one of the following:
the time of the previous request for the cached data, and/or
the number of times the cached data has been loaded and the duration of each load, and/or
the number of times the cached data has been accessed within a preset time length.
Optionally, the cache judgment processing on the data includes:
when the data state information includes the time of the previous request for the cached data, and the difference between the current system time and that previous request time is greater than a preset request-interval threshold, determining the cached data to be non-hotspot cached data and deleting it from the cache; and/or
when the data state information includes the number of times the cached data has been loaded and the duration of each load, and the load count is greater than a preset load-count threshold and/or the load duration is greater than a preset load-duration threshold, determining the cached data to be hotspot but time-controllable data and deleting it from the cache; and/or
when the data state information includes the number of times the cached data has been accessed within a preset time length, and that access count is less than a preset access-count threshold, determining the cached data to be non-hotspot cached data and deleting it from the cache.
Optionally, when the cache parameter information includes an expiration time, the method further includes:
obtaining the expiration time of the cached data and the processing duration needed to load the data from the database and cache it, and subtracting that duration from the expiration time to obtain the advance processing time;
and, if the service data has not been reloaded by the advance processing time, loading the updated data corresponding to the expiring cached data from the database and caching it, so that it is in place when the expiration time arrives.
Optionally, the method further includes:
and sequencing the cached data according to the expiration time, the loading time duration and/or the data request frequency.
Compared with the related art, the technical solution of the application comprises: scanning the service access layer and identifying interfaces containing preset automatic loading information; loading, from the database according to preset cache parameter information, the data corresponding to each identified interface containing the automatic loading information, and caching it; and, when the accessed service data corresponds to an interface containing the automatic loading information, reading the cached data to complete the service access processing; wherein the cache parameter information comprises preset data discrimination identifier information, and/or caching condition information, and/or an expiration time. In embodiments of the invention, data is cached according to the automatic loading information and the cache parameter information, which avoids the system avalanche caused by a large number of concurrent operations when the cache fails; further, sorting the cached data and performing cache judgment processing improve both the utilization of data in the cache and the efficiency of reading the cached data.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
FIG. 1 is a block diagram of the main electrical structure of a server of an embodiment of the present invention;
FIG. 2 is a flowchart of a method for loading cache data according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for loading cache data according to another embodiment of the present invention;
FIG. 4 is a block diagram of an apparatus for loading cache data according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method of an application example one of the present invention;
fig. 6 is a flowchart of a method of application example two of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, embodiments of the invention are described in detail below with reference to the accompanying drawings. It should be noted that, absent any conflict, the embodiments and the features of the embodiments in this application may be combined with one another arbitrarily.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the description and carry no specific meaning of their own; "module" and "component" may therefore be used interchangeably.
As shown in fig. 1, the main electrical structure of a server according to an embodiment of the present invention includes: an input/output (IO) bus, a processor 40, a memory 41, a storage 42, and a communication device 43. Wherein,
the input/output (IO) bus connects the other components of the server (the processor 40, the memory 41, the storage 42, and the communication device 43) and provides a transmission path between them.
The processor 40 generally controls the overall operation of the server; for example, it performs computation and verification. The processor 40 may be a Central Processing Unit (CPU).
The communication device 43, which typically includes one or more components, enables radio communication between the server and a wireless communication system or network.
The memory 41 stores software code that is readable and executable by the processor 40 and contains instructions for controlling the processor 40 to perform the functions described herein.
An embodiment of the method of the invention is proposed based on the electrical structure of the server described above.
Fig. 2 is a flowchart of a method for loading cache data according to an embodiment of the present invention, as shown in fig. 2, including:
step 200, scanning a service access layer, and identifying an interface containing preset automatic loading information;
it should be noted that the auto-loading information may be an auto-loading flag, identifier, or other similar information that can distinguish this type of interface from other interfaces. The automatic loading information can be set according to the analysis and judgment of the skilled person on whether the data needs to be automatically loaded. In addition, the operation of scanning the service layer is a conventional operation performed on the service access layer.
Step 201, loading, from the database according to preset cache parameter information, the data corresponding to the identified interface containing the automatic loading information, and caching it;
here, the cache parameter information comprises: preset data discrimination identifier information, and/or caching condition information, and/or an expiration time; the data discrimination identifier information comprises: information generated by combining interface parameters and/or by converting the interface parameters with a preset method function.
Step 202, when the accessed service data is data corresponding to an interface containing automatic loading information, reading the cached data to realize service access processing;
it should be noted that the preset method function may include hash, fifth version of message digest algorithm (MD5), etc. which may generate information of unique identifier, code or name; when the data distinguishing identification information is combined by adopting the interface parameters, the method may include: generating data distinguishing identification information corresponding to different cached data according to the parameters of the interface according to a set classification combination rule, wherein the combined generated information can be similar to the serial number information of library collection books; the generated data distinguishing identification information has a certain naming rule by adopting the interface parameters for combined generation and adopting a preset method function for budget generation, so that the user personnel can conveniently identify the cached data in the process of analyzing and processing; the embodiment of the present invention may also generate the data distinguishing identification information in other manners, as long as each generated data distinguishing identification information has uniqueness.
Optionally, when the accessed service data corresponds to an interface containing automatic loading information but is not read when the cached data is read, the method of the embodiment further includes: loading the data of that interface from the database and caching it.
Here, failing to read the accessed service's data from the cached data includes errors or failures occurring during the read; whether a read has failed can be determined with related-art methods for judging whether a data read succeeded.
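This fallback path can be sketched as a read-through cache (the key and data are invented for illustration):

```python
cache = {}
database = {"hot_orders:cn": ["order-1", "order-2"]}  # stands in for the real database

def read_with_fallback(key):
    """Read cached data; if the accessed service's data is not read
    (a miss or a failed read), load it from the database and cache it."""
    value = cache.get(key)
    if value is None:
        value = database[key]   # load from the database
        cache[key] = value      # cache for subsequent accesses
    return value
```

After the first fallback, later accesses to the same key are served from the cache without touching the database.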
Optionally, the method in the embodiment of the present invention further includes:
and when the data is updated, if the updated data is the data on the interface containing the automatic loading information, updating the data corresponding to the updated service contained in the cache.
Here, the update of the data includes the update of the data of the service, and the update of the data of the service can be determined by system parameters related to the system when the system performs the update, for example, the configuration log, the system log, the operation log, and the file content of other recorded data changes are judged. In addition, the data updating method may include determining a storage path of the original data corresponding to the updated data in the related art, and after deleting data on the determined storage path of the original data, performing operations of loading and writing the updated service data into the cache.
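The delete-then-rewrite sequence above can be sketched as follows, assuming (purely for illustration) that the keys of auto-load interfaces are tracked in a set:

```python
def apply_update(key, new_value, cache, database, auto_load_keys):
    """Write updated service data and, if it belongs to an interface
    containing automatic loading information, refresh the cache entry."""
    database[key] = new_value
    if key in auto_load_keys:
        cache.pop(key, None)     # delete the data at the old storage path
        cache[key] = new_value   # load and write the updated data into the cache

cache = {"hot_orders:cn": ["order-1"]}
database = {"hot_orders:cn": ["order-1"]}
apply_update("hot_orders:cn", ["order-1", "order-2"],
             cache, database, {"hot_orders:cn"})
```

Keys outside `auto_load_keys` only update the database, leaving the cache untouched.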
Optionally, the method in the embodiment of the present invention further includes:
establishing a message queue containing data state information for all cached data;
reading the data state information of one or more entries from the established message queue according to a preset fetch strategy, and performing cache judgment processing on the data according to the state information read; here, the preset fetch strategy includes reading the data state information entry by entry in the order of the message queue.
The data state information is collected in advance and comprises at least one of the following:
the time of the previous request for the cached data, and/or
the number of times the cached data has been loaded and the duration of each load, and/or
the number of times the cached data has been accessed within a preset time length.
It should be noted that the contents of the data state information may be adjusted according to system cache traffic, service access speed, and the like; those skilled in the art may add or remove items according to the usage scenario of the embodiment.
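The message queue of data state information can be sketched with a deque, the fetch strategy here being the one named above: read entries one by one in queue order (all field names are illustrative):

```python
from collections import deque

# One entry of data state information per item of cached data.
state_queue = deque([
    {"key": "k1", "last_request": 1000.0, "load_count": 3, "access_count": 80},
    {"key": "k2", "last_request": 200.0,  "load_count": 1, "access_count": 5},
])

def fetch_next():
    """Preset fetch strategy: read state entries one by one in queue order."""
    return state_queue.popleft() if state_queue else None
```

Each fetched entry would then be handed to the cache judgment processing described below.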
Optionally, the cache judgment processing on the data includes:
when the data state information includes the time of the previous request for the cached data, and the difference between the current system time and that previous request time is greater than a preset request-interval threshold, determining the cached data to be non-hotspot cached data and deleting it from the cache; and/or
when the data state information includes the number of times the cached data has been loaded and the duration of each load, and the load count is greater than a preset load-count threshold and/or the load duration is greater than a preset load-duration threshold, determining the cached data to be hotspot but time-controllable data and deleting it from the cache; and/or
when the data state information includes the number of times the cached data has been accessed within a preset time length, and that access count is less than a preset access-count threshold, determining the cached data to be non-hotspot cached data and deleting it from the cache.
It should be noted that the duration of each load of the cached data may be computed from the total load time and the number of loads. Judging whether the number of accesses within the preset time length is below the access-count threshold may include: recording the time of the first access to the cached data, subtracting it from the current system time to obtain the elapsed access duration, and, once that duration reaches the preset time length, counting the number of accesses (for example, the accesses within one hour of the first access); if the count is below the access-count threshold (for example, 60), the cached data is determined to be non-hotspot cached data and deleted from the cache. If counting is not started from the time of the first access, the access count is tallied over a preset time length.
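The three and/or rules can be collected into one judgment function. All thresholds and field names below are invented for the sketch; the patent leaves the concrete values to the implementer:

```python
def judge_cache_entry(state, now,
                      request_gap_max=7200.0,  # preset request-interval threshold (s)
                      load_count_max=10,       # preset load-count threshold
                      load_time_max=2.0,       # preset load-duration threshold (s)
                      access_count_min=60,     # preset access-count threshold
                      window=3600.0):          # preset time length (s)
    """Return 'delete' or 'keep' for one entry of data state information."""
    # Rule 1: previous request too long ago -> non-hotspot cached data.
    if "last_request" in state and now - state["last_request"] > request_gap_max:
        return "delete"
    # Rule 2: loaded too often and/or too slowly -> hotspot but time-controllable.
    if (state.get("load_count", 0) > load_count_max
            or state.get("load_time", 0.0) > load_time_max):
        return "delete"
    # Rule 3: too few accesses within the preset time length -> non-hotspot.
    if ("first_access" in state and now - state["first_access"] >= window
            and state.get("access_count", 0) < access_count_min):
        return "delete"
    return "keep"
```

An entry surviving all three rules stays cached; the message-queue reader would call this for each fetched state entry.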
In addition, in embodiments of the invention, the cache data information may be read from the message queue by multiple processes in parallel, which speeds up the cache judgment processing.
Optionally, when the cache parameter information includes an expiration time, the method according to the embodiment of the present invention further includes:
obtaining the expiration time of the cached data and the processing duration needed to load the data from the database and cache it, and subtracting that duration from the expiration time to obtain the advance processing time;
and, if the service data has not been reloaded by the advance processing time, loading the updated data corresponding to the expiring cached data from the database and caching it, so that it is in place when the expiration time arrives.
Optionally, the method in the embodiment of the present invention further includes:
and sorting the cached data according to the expiration time, the loading time duration and/or the request frequency of the data.
It should be noted that the sorting method can be obtained by analysis and judgment of a person skilled in the art according to the system requirements, and if the system requires a high request response efficiency, data with a high request frequency can be cached in a position sorted in front, so that efficient reading is facilitated; if the data of other services can be read due to the influence of long loading time, the data with long loading time can be cached at the position behind the sequence, and the data with the influence of other caches can be prevented from being read.
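One possible ordering matching that note (field names invented): highest request frequency first, longest load duration last, with earlier expiry breaking remaining ties:

```python
def sort_cached_entries(entries):
    """Order cache entries for efficient reading: frequently requested data
    first, slow-loading data pushed back, earlier expiry breaking ties."""
    return sorted(entries, key=lambda e: (-e["request_frequency"],
                                          e["load_duration"],
                                          e["expire_at"]))

entries = [
    {"key": "a", "request_frequency": 5,  "load_duration": 0.3, "expire_at": 100},
    {"key": "b", "request_frequency": 50, "load_duration": 1.2, "expire_at": 300},
    {"key": "c", "request_frequency": 5,  "load_duration": 0.1, "expire_at": 200},
]
```

A system with different priorities would simply reorder or drop components of the sort key.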
In the above method, data is cached according to the automatic loading information and the cache parameter information, which avoids the system avalanche caused by a large number of concurrent operations when the cache fails; further, sorting the cached data and performing cache judgment processing improve both the utilization of data in the cache and the efficiency of reading the cached data.
Fig. 3 is a flowchart of a method for loading cache data according to another embodiment of the present invention, as shown in fig. 3, including:
step 300, scanning a service access layer, and identifying an interface containing preset automatic loading information;
it should be noted that the auto-loading information may be an auto-loading flag, identifier, or other similar information that can distinguish this type of interface from other interfaces. The automatic loading information can be set according to the analysis and judgment of the skilled person on whether the data needs to be automatically loaded. In addition, the operation of scanning the service layer is a conventional operation performed on the service access layer.
301, loading and caching data corresponding to the identified interface containing the automatic loading information in a database according to preset caching parameter information;
here, the cache parameter information includes: presetting preset data distinguishing identification information, and/or cache condition information, and/or expiration time; wherein the data distinguishing identification information includes: and combining according to the interface parameters and/or converting and generating the interface parameters by adopting a preset method function.
Step 302, establishing a message queue containing data state information of all cached data, reading data state information of one or more than one data from the established message queue according to a preset extraction strategy, and performing caching judgment processing on the data according to the read data state information;
here, the data state information is pre-collected and includes information of at least one of:
a previous request time for the cached data, and/or,
the number of times the cached data is loaded and the time each time the cached data is loaded, and/or,
the number of times the cached data is accessed within a preset time length.
It should be noted that which contents the data state information includes may be set with reference to the system cache traffic, the service access speed, and the like; those skilled in the art may add or remove contents according to the usage scenario of the embodiment of the present invention.
Optionally, the processing for determining the cache of the data includes:
the data state information includes a previous request time for the buffered data,
when the difference between the current system time and the previous request time of the cached data is greater than a preset request interval threshold, determining the cached data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache; and/or,
the data state information includes the number of times the cached data is loaded and the time each time the cached data is loaded,
if the number of times the cached data has been loaded is greater than a preset loading-count threshold and/or the time taken to load the cached data is greater than a preset loading-time threshold, determining the cached data to be hotspot but time-controllable data, and deleting the data determined to be hotspot but time-controllable from the cache; and/or,
when the data state information includes the number of times the cached data is accessed within a preset time period,
and the number of accesses of the cached data within the preset time length is less than a preset access-count threshold, determining the data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache.
It should be noted that the time of each loading of the cached data may be calculated as the total loading time divided by the number of loads. In addition, judging that the number of accesses within the preset time length is less than the preset access-count threshold may include: recording the time at which the cached data is first accessed, and subtracting that time from the current system time to obtain an access duration; when the access duration reaches the preset time length, counting the number of times the cached data has been accessed, for example, the number of accesses within one hour after the first access; and if that count is less than the access-count threshold, for example 60, determining the cached data to be non-hotspot cached data and deleting it from the cache. If counting is not started from the time of the first access, the number of accesses may instead be counted over successive windows of the preset time length.
In addition, in the embodiment of the present invention, the reading of the cache data information from the message queue may be performed in a parallel manner by using multiple processes, so that the speed of the cache determination processing may be increased.
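The three cache-judgment rules above can be sketched as follows; the threshold values and the field names of the state record are illustrative assumptions, not values fixed by the embodiment:

```python
def judge_cached_item(state, now,
                      request_gap_threshold=300.0,   # seconds, illustrative
                      load_count_threshold=100,
                      load_time_threshold=0.1,       # seconds per load
                      access_count_threshold=60):
    """Apply the three judgment rules to one item of data state information.
    Returns 'evict' or 'keep'. All thresholds are assumptions."""
    # Rule 1: stale previous request time -> non-hotspot data
    if now - state["last_request"] > request_gap_threshold:
        return "evict"
    # Rule 2: loaded too often and/or too slowly -> hotspot but time-controllable
    per_load_time = state["total_load_time"] / max(state["load_count"], 1)
    if state["load_count"] > load_count_threshold or per_load_time > load_time_threshold:
        return "evict"
    # Rule 3: too few accesses within the preset time length -> non-hotspot
    if state["access_count"] < access_count_threshold:
        return "evict"
    return "keep"

hot = {"last_request": 990.0, "load_count": 10,
       "total_load_time": 0.5, "access_count": 80}
```

Note that the per-load time is derived from the total loading time and the load count, matching the note above.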
And step 303, sorting the cached data according to the expiration time, the loading time duration and/or the data request frequency.
It should be noted that the sorting method can be chosen by a person skilled in the art according to the system requirements. If the system requires high request-response efficiency, data with a high request frequency can be cached at positions near the front of the order, which facilitates efficient reading; if slow-loading data would delay the reading of other services' data, data with a long loading time can be cached at positions near the back of the order, preventing it from affecting the reading of other cached data.
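A sketch of one such ordering, assuming each cache entry records its request frequency and loading time (field names are assumptions): frequently requested data sorts to the front, slow-loading data to the back.

```python
def sort_cached_entries(entries):
    """Order cache entries so that frequently requested data sits at the
    front (fast reads) and slow-loading data falls to the back."""
    return sorted(entries, key=lambda e: (-e["request_freq"], e["load_time"]))

entries = [
    {"name": "report",  "request_freq": 2, "load_time": 8.0},
    {"name": "product", "request_freq": 9, "load_time": 0.2},
    {"name": "banner",  "request_freq": 9, "load_time": 0.1},
]
ordered = [e["name"] for e in sort_cached_entries(entries)]
```

The tuple key implements the two criteria lexicographically: request frequency decides first, and loading time breaks ties.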
Step 304, when the accessed service data is data corresponding to the interface containing the automatic loading information, reading the cached data to realize service access processing;
it should be noted that the preset method function may include hash functions, the MD5 message-digest algorithm, and other functions that can generate a unique identifier, code, or name. When the data distinguishing identification information is generated by combining the interface parameters, the method may include: generating, according to a set classification and combination rule, data distinguishing identification information corresponding to different cached data from the parameters of the interface; the combined information can be similar to the call numbers of books in a library. Because the information generated by combining the interface parameters or by computation with a preset method function follows a definite naming rule, users can conveniently identify the cached data during analysis and processing. The embodiment of the present invention may also generate the data distinguishing identification information in other manners, as long as each piece of generated data distinguishing identification information is unique.
Step 305, if the accessed service's data corresponds to an interface containing the automatic loading information but the data is not read when the cached data is read, loading the data of the interface containing the automatic loading information corresponding to the accessed service's data from the database and caching it.
Here, failing to read the accessed service's data when reading the cached data includes: an error or failure occurring when the data of the accessed service is read from the cached data; such read errors or failures can be detected using related-art methods for judging whether a data read succeeded.
Optionally, when the cache parameter information includes an expiration time, the method according to the embodiment of the present invention further includes:
acquiring the expiration time of the cached data and the processing duration of loading the data from the database and caching it, and subtracting the processing duration from the expiration time to obtain an advance processing time;
and if the service data has not been loaded when the acquired advance processing time arrives, loading from the database the updated data corresponding to the cache data whose expiration time is about to be reached, and caching it.
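The advance processing time computation can be expressed as a small helper; the function and field names are illustrative:

```python
def advance_processing_time(expire_at, load_duration):
    """Expiration time minus the measured load-and-cache duration: starting
    a refresh at this moment lets the new data finish caching just as the
    old entry expires."""
    return expire_at - load_duration

def should_start_refresh(now, expire_at, load_duration):
    """True once the advance processing time has arrived."""
    return now >= advance_processing_time(expire_at, load_duration)
```

Refreshing this far ahead of expiry is what prevents a gap during which the entry is missing and concurrent requests would all fall through to the database.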
Optionally, the method in the embodiment of the present invention further includes:
and when the data is updated, if the updated data is data on an interface containing the automatic loading information, updating the data in the cache that corresponds to the updated service.
Here, the update of the data includes an update of the service's data, which can be determined, when the system performs an update, from system parameters such as the configuration log, the system log, the operation log, or other file contents that record data changes. In addition, the data update may include: determining, by related-art means, the storage path of the original data corresponding to the updated data; deleting the data at the determined storage path; and then loading and writing the updated service data into the cache.
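A minimal sketch of the delete-then-write update described above, modeling the cache as a dictionary keyed by storage path (an assumption made for illustration):

```python
def update_cached_data(cache, path, new_value):
    """Delete-then-write update: remove the data at the original storage
    path, then load and write the updated service data into the cache."""
    cache.pop(path, None)      # delete data at the determined storage path
    cache[path] = new_value    # write the updated service data
    return cache[path]

cache = {"/product/42": {"model": "A1"}}
updated = update_cached_data(cache, "/product/42", {"model": "A2"})
```

In a real store the delete and write would typically need to be atomic (or ordered so readers never see the gap), a concern this sketch ignores.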
Fig. 4 is a block diagram of a device for loading cache data according to an embodiment of the present invention, as shown in fig. 4, including: the device comprises an identification unit, a cache unit and a reading unit; wherein,
the identification unit is used for scanning the service access layer and identifying an interface containing preset automatic loading information;
the cache unit is used for loading and caching data corresponding to the identified interface containing the automatic loading information in the database according to preset cache parameter information;
optionally, the cache unit is further configured to: when the accessed service's data corresponds to an interface containing the automatic loading information but the data is not read when the cached data is read, load the data of that interface from the database and cache it.
Here, failing to read the accessed service's data when reading the cached data includes: an error or failure occurring when the data of the accessed service is read from the cached data; such read errors or failures can be detected using related-art methods for judging whether a data read succeeded.
Optionally, the buffer unit is further configured to,
when the cache parameter information comprises an expiration time, acquire the expiration time of the cached data and the processing duration of loading the data from the database and caching it, and subtract the processing duration from the expiration time to obtain an advance processing time;
and if the service data has not been loaded when the acquired advance processing time arrives, load from the database the updated data corresponding to the cache data whose expiration time is about to be reached, and cache it.
The reading unit is used for reading the cached data when the accessed service data is the data corresponding to the interface containing the automatic loading information, so as to realize service access processing;
the caching parameter information comprises: preset data distinguishing identification information, and/or cache condition information, and/or expiration time;
the data distinguishing identification information includes: information generated by combining the interface parameters and/or by converting the interface parameters with a preset method function.
It should be noted that the automatic loading information may be an auto-loading flag, an identifier, or other similar information that distinguishes this type of interface from other interfaces, and can be set according to a skilled person's judgment of whether the data needs to be automatically loaded. In addition, the preset method function may include hash functions, the MD5 message-digest algorithm, and other functions that can generate a unique identifier, code, or name. When the data distinguishing identification information is generated by combining the interface parameters, the method may include: generating, according to a set classification and combination rule, data distinguishing identification information corresponding to different cached data from the parameters of the interface; the combined information can be similar to the call numbers of books in a library. Because the information generated by combining the interface parameters or by computation with a preset method function follows a definite naming rule, users can conveniently identify the cached data during analysis and processing. The embodiment of the present invention may also generate the data distinguishing identification information in other manners, as long as each piece of generated data distinguishing identification information is unique.
Optionally, the apparatus of the embodiment of the present invention further includes an updating unit,
and the updating unit is used for, when the data is updated, updating the data in the cache that corresponds to the updated service if the updated data is data on an interface containing the automatic loading information.
Here, the update of the data includes an update of the service's data, which can be determined, when the system performs an update, from system parameters such as the configuration log, the system log, the operation log, or other file contents that record data changes. In addition, the data update may include: determining, by related-art means, the storage path of the original data corresponding to the updated data; deleting the data at the determined storage path; and then loading and writing the updated service data into the cache.
Optionally, the apparatus in this embodiment of the present invention further includes a buffer processing unit, configured to establish a message queue including data state information of all buffered data;
reading data state information of one or more data items from the established message queue according to a preset extraction strategy, and performing cache judgment processing on the data according to the read data state information;
the data state information is pre-collected and comprises at least one of the following information:
a previous request time for the cached data, and/or,
the number of times the cached data is loaded and the time each time the cached data is loaded, and/or,
the number of times the cached data is accessed within a preset time length.
It should be noted that which contents the data state information includes may be set with reference to the system cache traffic, the service access speed, and the like; those skilled in the art may add or remove contents according to the usage scenario of the embodiment of the present invention.
Optionally, the cache processing unit is specifically configured to,
establishing a message queue containing data state information of all cached data;
reading data state information of one or more data from the established message queue according to a preset extraction strategy;
when the read data state information includes a previous request time for the buffered data,
when the difference between the current system time and the previous request time of the cached data is greater than a preset request interval threshold, determining the cached data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache; and/or,
the read data state information includes the number of times of loading the cached data and the time of each time of loading the cached data,
if the number of times the cached data has been loaded is greater than a preset loading-count threshold and/or the time taken to load the cached data is greater than a preset loading-time threshold, determining the cached data to be hotspot but time-controllable data, and deleting the data determined to be hotspot but time-controllable from the cache; and/or,
when the read data state information includes the number of times the cached data is accessed within a preset time period,
and the number of accesses of the cached data within the preset time length is less than a preset access-count threshold, determining the data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache.
It should be noted that the time of each loading of the cached data may be calculated as the total loading time divided by the number of loads. In addition, judging that the number of accesses within the preset time length is less than the preset access-count threshold may include: recording the time at which the cached data is first accessed, and subtracting that time from the current system time to obtain an access duration; when the access duration reaches the preset time length, counting the number of times the cached data has been accessed, for example, the number of accesses within one hour after the first access; and if that count is less than the access-count threshold, for example 60, determining the cached data to be non-hotspot cached data and deleting it from the cache. If counting is not started from the time of the first access, the number of accesses may instead be counted over successive windows of the preset time length.
In addition, in the embodiment of the present invention, the reading of the cache data information from the message queue may be performed in a parallel manner by using multiple processes, so that the speed of the cache determination processing may be increased.
The apparatus of an embodiment of the present invention further comprises a sorting unit,
the sorting unit is used for sorting the cached data according to the expiration time, the loading time duration and/or the data request frequency.
It should be noted that the sorting method can be chosen by a person skilled in the art according to the system requirements. If the system requires high request-response efficiency, data with a high request frequency can be cached at positions near the front of the order, which facilitates efficient reading; if slow-loading data would delay the reading of other services' data, data with a long loading time can be cached at positions near the back of the order, preventing it from affecting the reading of other cached data.
The device of the embodiment of the invention can be deployed on a server, or can operate after establishing a communication connection with a server.
The apparatus for loading cache data according to another embodiment of the present invention includes: the device comprises an identification unit, a cache unit, a reading unit, an updating unit, a cache processing unit and a sorting unit; wherein,
the identification unit is used for scanning the service access layer and identifying an interface containing preset automatic loading information;
the cache unit is used for loading and caching data corresponding to the identified interface containing the automatic loading information in the database according to preset cache parameter information;
the reading unit is used for reading the cached data when the accessed service data is the data corresponding to the interface containing the automatic loading information, so as to realize service access processing;
the caching parameter information comprises: preset data distinguishing identification information, and/or cache condition information, and/or expiration time;
the data distinguishing identification information includes: information generated by combining the interface parameters and/or by converting the interface parameters with a preset method function.
It should be noted that the automatic loading information may be an auto-loading flag, an identifier, or other similar information that distinguishes this type of interface from other interfaces, and can be set according to a skilled person's judgment of whether the data needs to be automatically loaded. In addition, the preset method function may include hash functions, the MD5 message-digest algorithm, and other functions that can generate a unique identifier, code, or name. When the data distinguishing identification information is generated by combining the interface parameters, the method may include: generating, according to a set classification and combination rule, data distinguishing identification information corresponding to different cached data from the parameters of the interface; the combined information can be similar to the call numbers of books in a library. Because the information generated by combining the interface parameters or by computation with a preset method function follows a definite naming rule, users can conveniently identify the cached data during analysis and processing. The embodiment of the present invention may also generate the data distinguishing identification information in other manners, as long as each piece of generated data distinguishing identification information is unique.
And the updating unit is used for, when the data is updated, updating the data in the cache that corresponds to the updated service if the updated data is data on an interface containing the automatic loading information.
Here, the update of the data includes an update of the service's data, which can be determined, when the system performs an update, from system parameters such as the configuration log, the system log, the operation log, or other file contents that record data changes. In addition, the data update may include: determining, by related-art means, the storage path of the original data corresponding to the updated data; deleting the data at the determined storage path; and then loading and writing the updated service data into the cache.
The buffer processing unit is used for establishing a message queue containing data state information of all buffered data;
reading data state information of one or more data items from the established message queue according to a preset extraction strategy, and performing cache judgment processing on the data according to the read data state information;
the data state information is pre-collected and comprises at least one of the following information:
a previous request time for the cached data, and/or,
the number of times the cached data is loaded and the time each time the cached data is loaded, and/or,
the number of times the cached data is accessed within a preset time length.
It should be noted that which contents the data state information includes may be set with reference to the system cache traffic, the service access speed, and the like; those skilled in the art may add or remove contents according to the usage scenario of the embodiment of the present invention.
Optionally, the cache processing unit is specifically configured to,
establishing a message queue containing data state information of all cached data;
reading data state information of one or more data from the established message queue according to a preset extraction strategy;
when the read data state information includes a previous request time for the buffered data,
when the difference between the current system time and the previous request time of the cached data is greater than a preset request interval threshold, determining the cached data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache; and/or,
the read data state information includes the number of times of loading the cached data and the time of each time of loading the cached data,
if the number of times the cached data has been loaded is greater than a preset loading-count threshold and/or the time taken to load the cached data is greater than a preset loading-time threshold, determining the cached data to be hotspot but time-controllable data, and deleting the data determined to be hotspot but time-controllable from the cache; and/or,
when the read data state information includes the number of times the cached data is accessed within a preset time period,
and the number of accesses of the cached data within the preset time length is less than a preset access-count threshold, determining the data to be non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache.
It should be noted that the time of each loading of the cached data may be calculated as the total loading time divided by the number of loads. In addition, judging that the number of accesses within the preset time length is less than the preset access-count threshold may include: recording the time at which the cached data is first accessed, and subtracting that time from the current system time to obtain an access duration; when the access duration reaches the preset time length, counting the number of times the cached data has been accessed, for example, the number of accesses within one hour after the first access; and if that count is less than the access-count threshold, for example 60, determining the cached data to be non-hotspot cached data and deleting it from the cache. If counting is not started from the time of the first access, the number of accesses may instead be counted over successive windows of the preset time length.
In addition, in the embodiment of the present invention, the reading of the cache data information from the message queue may be performed in a parallel manner by using multiple processes, so that the speed of the cache determination processing may be increased.
The sorting unit is used for sorting the cached data according to the expiration time, the loading time duration and/or the data request frequency.
It should be noted that the sorting method can be chosen by a person skilled in the art according to the system requirements. If the system requires high request-response efficiency, data with a high request frequency can be cached at positions near the front of the order, which facilitates efficient reading; if slow-loading data would delay the reading of other services' data, data with a long loading time can be cached at positions near the back of the order, preventing it from affecting the reading of other cached data.
Optionally, the cache unit is further configured to: when the accessed service's data corresponds to an interface containing the automatic loading information but the data is not read when the cached data is read, load the data of that interface from the database and cache it.
Here, failing to read the accessed service's data when reading the cached data includes: an error or failure occurring when the data of the accessed service is read from the cached data; such read errors or failures can be detected using related-art methods for judging whether a data read succeeded.
Optionally, the buffer unit is further configured to,
when the cache parameter information comprises an expiration time, acquire the expiration time of the cached data and the processing duration of loading the data from the database and caching it, and subtract the processing duration from the expiration time to obtain an advance processing time;
and if the service data has not been loaded when the acquired advance processing time arrives, load from the database the updated data corresponding to the cache data whose expiration time is about to be reached, and cache it.
The device of the embodiment of the invention can be deployed on a server, or can operate after establishing a communication connection with a server.
The method of the present invention is described in detail below by way of application examples, which are only used to illustrate embodiments of the present invention and are not intended to limit the scope of the present invention.
Application example 1
Fig. 5 is a flowchart of a method according to an application example one of the present invention, as shown in fig. 5, including:
step 500, scanning a service access layer, reading interfaces on which automatic loading information has been set, and determining, according to the automatic loading information, whether to load the data automatically; in this application example, the automatic loading information may include an auto-loading flag;
step 501, loading and caching data corresponding to the identified interface containing the automatic loading information in a database according to preset caching parameter information; here, the cached data may be written to the cache center;
in this application example, the caching parameter information includes: preset data distinguishing identification information, and/or cache condition information, and/or expiration time; wherein the data distinguishing identification information includes: information generated by combining the interface parameters and/or by converting the interface parameters with a preset method function.
Step 502, when the accessed service data is data corresponding to an interface containing automatic loading information, judging whether to read cached data; if the cached data is read, go to step 5030; if the cached data is not read, go to step 5040;
step 5030, reading the cached data to implement service access processing;
step 5040, loading and caching data of an interface containing automatic loading information corresponding to the accessed service data from the database; after performing step 5040, the application instance may implement service access processing according to the cached data, i.e., may perform step 5030.
In this application example, after the caching of the data is completed, the application example may return a result set of the data.
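Steps 502, 5030, and 5040 together form a read-through pattern that can be sketched as follows; all names are hypothetical:

```python
def read_with_fallback(key, cache, load_from_db):
    """Step 502: check the cache; step 5030: serve the cached data;
    step 5040: on a miss, load from the database, write back, then serve."""
    try:
        return cache[key]            # step 5030: cache hit
    except KeyError:
        value = load_from_db(key)    # step 5040: load from the database
        cache[key] = value           # ... and cache it
        return value

cache = {}
db_calls = []

def load_from_db(key):
    """Stand-in for the database access layer."""
    db_calls.append(key)
    return key.upper()

first = read_with_fallback("sku-1", cache, load_from_db)
second = read_with_fallback("sku-1", cache, load_from_db)
```

The second call is served from the cache, so the database is hit only once; under heavy concurrency a real implementation would additionally guard the miss path so only one request performs the load.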
Step 5031, when the service updates the data, updating the data corresponding to the updated service contained in the cache.
Updating the service data may include, for example, the background management system modifying product details, such as configuration parameters or the product model;
it should be noted that, when the cache is updated, the updated data is synchronously written into the database for corresponding updating according to the processing method of the related art.
Application example 2
This application example first needs to establish a message queue containing data state information of all cached data; the data state information is pre-collected and comprises: the previous request time of the cached data, the number of times the cached data is loaded and the time of each loading, and/or the number of times the cached data is accessed within a preset time length. This application example is described taking the case where one process reads the first item of data state information in the message queue each time; fig. 6 is a flowchart of a method of a second application example of the present invention, as shown in fig. 6, including:
step 600, reading first data state information of a current queue from a message queue;
step 601, obtaining the previous request time of the cached data from the data state information, and reading the current system time;
it should be noted that the information such as the request time and the current system time may be implemented by using an acquisition method in the related art.
Step 602, when the difference between the current time of the system and the previous request time of the cached data is greater than a preset request interval threshold, determining the cached data as non-hotspot cached data;
the request interval threshold can be analyzed and set according to parameters such as service access aging requirements and system performance.
Step 603, deleting the cache data determined to be non-hot-spot from the cache;
step 604, obtaining the accessed times of the cached data within a preset time length from the data state information;
step 605, when the number of accesses of the cached data within the preset time length is less than the preset access-count threshold, determining the data to be non-hotspot cached data; the non-hotspot cached data is then processed as in step 603.
The preset time of the application example can be 1 hour, and the access times are 60 times;
step 606, obtaining the loading times of the cached data and the time for loading the cached data each time from the data state information;
step 607, if the loading times of the cached data are greater than the preset loading time threshold and/or the loading time of the cached data are greater than the preset loading time threshold, determining that the cached data are hot but time-controllable data;
the threshold value of the loading times of the application example can be 100, and the threshold value of the loading time can comprise 10-100 milliseconds; the actual value can be determined according to the number of requests;
step 608, deleting the data which is determined to be the hot spot and controllable in time from the cache;
when the above steps are executed, the data state information is updated according to the content of the executed steps.
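Steps 600 to 608 can be sketched as a single classification routine over one queue entry. This is an illustrative sketch, not the patented implementation: the field names (`last_request_time`, `access_count`, `load_count`, `load_durations`) are assumed, and the thresholds simply reuse the example values suggested above (1 hour, 60 accesses, 100 loads, a load-duration cap at the top of the 10-100 ms range):

```python
import time

# Hypothetical thresholds; the patent leaves concrete values to the implementer.
REQUEST_INTERVAL_THRESHOLD = 3600   # seconds since the previous request (1 hour)
ACCESS_COUNT_THRESHOLD = 60         # accesses within the preset time period
LOAD_COUNT_THRESHOLD = 100          # total loads from the database
LOAD_DURATION_THRESHOLD = 0.1       # seconds per load (100 ms)

def classify(state):
    """Classify one data-state record read from the message queue.

    Returns 'non-hotspot' or 'hot-but-time-controllable' (both deleted from
    the cache per steps 603/608), or 'keep'.
    """
    # Steps 601-603: stale if the previous request is too long ago.
    if time.time() - state["last_request_time"] > REQUEST_INTERVAL_THRESHOLD:
        return "non-hotspot"
    # Steps 604-605: cold if accessed too rarely within the time period.
    if state["access_count"] < ACCESS_COUNT_THRESHOLD:
        return "non-hotspot"
    # Steps 606-608: hotspot but time-controllable if loaded too often
    # or too slowly.
    if (state["load_count"] > LOAD_COUNT_THRESHOLD
            or max(state["load_durations"], default=0) > LOAD_DURATION_THRESHOLD):
        return "hot-but-time-controllable"
    return "keep"
```

In a full implementation the caller would also delete the classified entries from the cache and update the data state information, as the steps above require.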
This application example further includes: sorting the cached data according to the expiration time, the load duration, and/or the request frequency of the data.
It should be noted that the sorting may be performed by starting a separate process that sorts the message queue according to a set policy. The sorting algorithm may include: the closer the data is to its expiration time, and/or the more time-consuming the data is to cache, the earlier it is sorted. A reverse ordering is performed according to the request count: the more requests, the higher the usage frequency and the greater the possibility of concurrent access.
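The sorting policy above can be sketched with a composite sort key; the entry field names (`expires_at`, `load_duration`, `request_count`) are assumed for illustration and do not appear in the patent:

```python
def sort_queue(entries, now):
    """Sort message-queue entries per the policy described above.

    Ascending on time-to-expiry (soonest first), descending on load
    duration (most expensive first), and descending on request count
    (hottest first, since high-frequency data risks concurrent misses).
    """
    return sorted(
        entries,
        key=lambda e: (e["expires_at"] - now,   # soonest expiry first
                       -e["load_duration"],     # slowest loads first
                       -e["request_count"]),    # most requested first
    )
```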
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor) to perform the steps, and the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in hardware, for example by an integrated circuit realizing its corresponding function, or in software, for example by a processor executing a program/instruction stored in a memory to realize its corresponding function. The present invention is not limited to any specific form of combination of hardware and software.
Although the embodiments of the present invention have been described above, the description is intended only to facilitate understanding of the present invention and not to limit it. Those skilled in the art may make various changes in form and detail without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An apparatus for loading cached data, comprising: an identification unit, a cache unit and a reading unit; wherein,
the identification unit is configured to scan a service access layer and identify an interface containing preset auto-loading information;
the cache unit is configured to load and cache, according to preset cache parameter information, data in a database corresponding to the identified interface containing the auto-loading information;
the reading unit is configured to read the cached data when the accessed service data is data corresponding to the interface containing the auto-loading information, thereby realizing service access processing;
the cache parameter information comprises: preset data-distinguishing identification information, and/or cache condition information, and/or an expiration time;
the data-distinguishing identification information includes: an identifier generated by combining interface parameters and/or by converting interface parameters using a preset method function.
2. The apparatus according to claim 1, wherein the cache unit is further configured to, when the accessed service data is data corresponding to an interface containing auto-loading information and the data is not found when the cached data is read, load from the database and cache the data of the interface containing the auto-loading information corresponding to the accessed service data.
3. The apparatus of claim 1, further comprising an update unit,
wherein the update unit is configured to, when data is updated and the updated data is data on an interface containing auto-loading information, update the corresponding data contained in the cache.
4. The apparatus according to any one of claims 1 to 3, further comprising a cache processing unit configured to establish a message queue containing data state information of all cached data; and
read data state information of one or more items of data from the established message queue according to a preset retrieval strategy, and perform cache determination processing on the data according to the read data state information;
wherein the data state information is collected in advance and comprises at least one of the following:
the previous request time of the cached data, and/or,
the number of times the cached data has been loaded and the duration of each load, and/or,
the number of times the cached data has been accessed within a preset time period.
5. The apparatus of claim 4, wherein the cache processing unit is specifically configured to:
establish a message queue containing data state information of all cached data;
read data state information of one or more items of data from the established message queue according to a preset retrieval strategy;
when the read data state information includes the previous request time of the cached data,
and the difference between the current system time and the previous request time of the cached data is greater than a preset request interval threshold, determine that the cached data is non-hotspot cached data, and delete the determined non-hotspot cached data from the cache; and/or,
when the read data state information includes the number of loads of the cached data and the duration of each load,
and the number of loads of the cached data is greater than a preset load count threshold and/or the load duration of the cached data is greater than a preset load duration threshold, determine that the cached data is hotspot but time-controllable data, and delete the data determined to be hotspot but time-controllable from the cache; and/or,
when the read data state information includes the number of times the cached data has been accessed within a preset time period,
and the number of accesses of the cached data within the preset time period is less than a preset access count threshold, determine that the data is non-hotspot cached data, and delete the determined non-hotspot cached data from the cache.
6. The apparatus according to any one of claims 1 to 3, wherein the cache unit is further configured to:
when the cache parameter information includes an expiration time, obtain the expiration time of the cached data and the processing time taken to load and cache the data from the database, and subtract the processing time from the expiration time to obtain an advance processing time; and
if the service data has not been loaded by the obtained advance processing time, load from the database and cache, when the expiration time arrives, the updated data corresponding to the cached data whose expiration time is reached.
7. The apparatus according to any one of claims 1 to 3, further comprising a sorting unit,
wherein the sorting unit is configured to sort the cached data according to the expiration time, the load duration, and/or the request frequency of the data.
8. A method for loading cached data, comprising:
scanning a service access layer and identifying an interface containing preset auto-loading information;
loading and caching, according to preset cache parameter information, data in a database corresponding to the identified interface containing the auto-loading information; and
when the accessed service data is data corresponding to the interface containing the auto-loading information, reading the cached data to realize service access processing;
wherein the cache parameter information comprises: preset data-distinguishing identification information, and/or cache condition information, and/or an expiration time;
and the data-distinguishing identification information includes: an identifier generated by combining interface parameters and/or by converting interface parameters using a preset method function.
9. The method of claim 8, further comprising:
establishing a message queue containing data state information of all cached data; and
reading data state information of one or more items of data from the established message queue according to a preset retrieval strategy, and performing cache determination processing on the data according to the read data state information;
wherein the data state information is collected in advance and comprises at least one of the following:
the previous request time of the cached data, and/or,
the number of times the cached data has been loaded and the duration of each load, and/or,
the number of times the cached data has been accessed within a preset time period.
10. The method according to claim 9, wherein the performing of the cache determination processing on the data comprises:
when the data state information includes the previous request time of the cached data,
and the difference between the current system time and the previous request time of the cached data is greater than a preset request interval threshold, determining that the cached data is non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache; and/or,
when the data state information includes the number of loads of the cached data and the duration of each load,
and the number of loads of the cached data is greater than a preset load count threshold and/or the load duration of the cached data is greater than a preset load duration threshold, determining that the cached data is hotspot but time-controllable data, and deleting the data determined to be hotspot but time-controllable from the cache; and/or,
when the data state information includes the number of times the cached data has been accessed within a preset time period,
and the number of accesses of the cached data within the preset time period is less than a preset access count threshold, determining that the data is non-hotspot cached data, and deleting the determined non-hotspot cached data from the cache.
CN201610324104.7A 2016-05-16 2016-05-16 Cached data loading method and apparatus Active CN106021445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610324104.7A CN106021445B (en) Cached data loading method and apparatus


Publications (2)

Publication Number Publication Date
CN106021445A true CN106021445A (en) 2016-10-12
CN106021445B CN106021445B (en) 2019-10-15

Family

ID=57097977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610324104.7A Active CN106021445B (en) 2016-05-16 2016-05-16 It is a kind of to load data cached method and device

Country Status (1)

Country Link
CN (1) CN106021445B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device
CN106843769A (en) * 2017-01-23 2017-06-13 北京齐尔布莱特科技有限公司 A kind of interface data caching method, device and computing device
CN106874124A (en) * 2017-03-30 2017-06-20 光科技股份有限公司 A kind of object-oriented power information acquisition terminal based on the quick loading techniques of SQLite
CN107463598A (en) * 2017-06-09 2017-12-12 中国邮政储蓄银行股份有限公司 Distributed cache system
CN108829743A (en) * 2018-05-24 2018-11-16 平安科技(深圳)有限公司 Data cached update method, device, computer equipment and storage medium
WO2019019382A1 (en) * 2017-07-27 2019-01-31 上海壹账通金融科技有限公司 Cache handling method and device, computer device and storage medium
CN109471875A (en) * 2018-09-25 2019-03-15 网宿科技股份有限公司 Based on data cached temperature management method, server and storage medium
CN109597915A (en) * 2018-09-18 2019-04-09 北京微播视界科技有限公司 Access request treating method and apparatus
CN110109956A (en) * 2019-03-21 2019-08-09 福建天泉教育科技有限公司 A kind of method and terminal for preventing caching from penetrating
CN110555744A (en) * 2018-05-31 2019-12-10 阿里巴巴集团控股有限公司 Service data processing method and system
CN110895474A (en) * 2018-08-24 2020-03-20 深圳市鸿合创新信息技术有限责任公司 Terminal micro-service device and method and electronic equipment
CN111984889A (en) * 2020-02-21 2020-11-24 广东三维家信息科技有限公司 Caching method and system
CN112115074A (en) * 2020-09-02 2020-12-22 紫光云(南京)数字技术有限公司 Method for realizing data resident memory by using automatic loading mechanism
CN112559572A (en) * 2020-12-22 2021-03-26 上海悦易网络信息技术有限公司 Method and equipment for preheating data cache of Key-Value cache system
WO2021244067A1 (en) * 2020-06-05 2021-12-09 苏州浪潮智能科技有限公司 Method for diluting cache space, and device and medium
CN113806649A (en) * 2021-02-04 2021-12-17 北京沃东天骏信息技术有限公司 Data caching method and device for online application, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170479A (en) * 2011-05-21 2011-08-31 成都市华为赛门铁克科技有限公司 Updating method of Web buffer and updating device of Web buffer
CN103488581A (en) * 2013-09-04 2014-01-01 用友软件股份有限公司 Data caching system and data caching method
WO2014123127A1 (en) * 2013-02-06 2014-08-14 Square Enix Holdings Co., Ltd. Image processing apparatus, method of controlling the same, program and storage medium
CN105302493A (en) * 2015-11-19 2016-02-03 浪潮(北京)电子信息产业有限公司 Swap-in and swap-out control method and system for SSD cache in mixed storage array




Similar Documents

Publication Publication Date Title
CN106021445B (en) Cached data loading method and apparatus
US9767140B2 (en) Deduplicating storage with enhanced frequent-block detection
US8521986B2 (en) Allocating storage memory based on future file size or use estimates
US9817879B2 (en) Asynchronous data replication using an external buffer table
CN111881096B (en) File reading method, device, equipment and storage medium
CN111966938B (en) Configuration method and system for realizing loading speed improvement of front-end page of cloud platform
CN115114232A (en) Method, device and medium for enumerating historical version objects
CN115509440A (en) Storage system and data processing method
CN110222046B (en) List data processing method, device, server and storage medium
CN111913913B (en) Access request processing method and device
US11520818B2 (en) Method, apparatus and computer program product for managing metadata of storage object
US11394748B2 (en) Authentication method for anonymous account and server
CN110554914B (en) Resource lock management method, device, server and storage medium
US20220092049A1 (en) Workload-driven database reorganization
US20220382473A1 (en) Managing deduplication operations based on a likelihood of duplicability
CN113297003B (en) Method, electronic device and computer program product for managing backup data
CN111061744B (en) Graph data updating method and device, computer equipment and storage medium
CN112783804A (en) Data access method, device and storage medium
CN113806249B (en) Object storage sequence lifting method, device, terminal and storage medium
CN113190332B (en) Method, apparatus and computer program product for processing metadata
CN117667964B (en) Data processing method, device, equipment, database and computer program product
US11379147B2 (en) Method, device, and computer program product for managing storage system
CN114048223A (en) Data reading and writing method and device, electronic equipment and system
CN107844258A (en) Data processing method, client, node server and distributed file system
CN114328521A (en) Index library updating method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant