CN115510107A - Data preheating caching method, device, equipment and storage medium - Google Patents
Data preheating caching method, device, equipment and storage medium
- Publication number
- CN115510107A (application CN202211206972.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- hot spot
- spot data
- cache
- caching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application provides a data preheating caching method, apparatus, device, and storage medium, relating to the technical field of big data processing. The method comprises the following steps: before target information is published, determining first hotspot data to be accessed that is related to the target information; acquiring the first hotspot data from a database and writing it into a plurality of cache nodes; determining a target node among the plurality of cache nodes according to a preset rule and writing second hotspot data cached by the target node into local storage; and, in response to receiving a second hotspot data access request, reading the second hotspot data from local storage and returning it. Pre-warming the cache across the plurality of cache nodes in this way increases the speed at which users obtain data and improves the user experience.
Description
Technical Field
The present application relates to the field of big data processing technologies, and in particular, to a data pre-heating caching method, apparatus, device, and storage medium.
Background
With the development of the internet, platforms often run various social fission activities before each launch to attract traffic from more channels, and the ranking list (leaderboard) system has become an important component of these activities. For the same ranking list data, instantaneous concurrency is high at the initial stage of an activity: massive numbers of users participate at the same time and access the ranking list system to check their online ranking. Because such data exists as a single copy, there is the problem that concurrent requests all fall back to the database at the same time.
Disclosure of Invention
The application provides a data preheating caching method, device, equipment, and storage medium, aiming to solve, at least to some extent, one of the technical problems in the related art.
In a first aspect, the present application provides a data preheating caching method, which includes: before target information is published, determining first hotspot data to be accessed that is related to the target information; acquiring the first hotspot data from a database and writing it into a plurality of cache nodes; determining a target node among the plurality of cache nodes according to a preset rule and writing second hotspot data cached by the target node into local storage; and, in response to receiving a second hotspot data access request, reading the second hotspot data from the local storage and returning it.
In a second aspect, the present application provides a data preheating cache apparatus, including: a first determining module, used to determine, before target information is published, first hotspot data to be accessed that is related to the target information; a first writing module, used to acquire the first hotspot data from a database and write it into a plurality of cache nodes; a second writing module, used to determine a target node among the plurality of cache nodes according to a preset rule and write second hotspot data cached by the target node into local storage; and a reading module, used to read the second hotspot data from the local storage and return it in response to receiving a second hotspot data access request.
In a third aspect, the present application provides an electronic device, comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the data pre-warm caching method.
In a fourth aspect, the present application provides a computer-readable storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform a data pre-warm caching method.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the data pre-warm caching method.
According to the data preheating caching method, apparatus, device, and storage medium of the application, before target information is published, first hotspot data to be accessed that is related to the target information is determined; the first hotspot data is obtained from a database and written into a plurality of cache nodes; a target node among the cache nodes is determined according to a preset rule and the second hotspot data cached by the target node is written into local storage; and, upon receiving a second hotspot data access request, the second hotspot data is read from local storage and returned. Pre-warming the hotspot data across the plurality of cache nodes helps ensure high availability of the system and improves throughput; in addition, because the pre-warmed cache data is stored in and read from local storage, data-reading efficiency is improved. Therefore, the speed at which users obtain data can be increased and the user experience improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart illustrating a data pre-heating caching method according to a first embodiment of the present application;
FIG. 2 is a flowchart illustrating a data pre-warming caching method according to a second embodiment of the present application;
FIG. 3 is a flowchart illustrating a data pre-warming caching method according to a third embodiment of the present application;
FIG. 4 is a block diagram of a data pre-warm buffer apparatus according to the present application;
FIG. 5 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application. On the contrary, the embodiments of the application include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
It should be noted that the execution subject of the data preheating caching method of this embodiment may be a data preheating caching apparatus; the apparatus may be implemented by software and/or hardware and may be configured in an electronic device, and the electronic device may include, but is not limited to, a terminal, a server, and the like.
In order to implement the data preheating caching method provided in this embodiment, this embodiment further provides a data processing architecture, including: a front-end cluster, a back-end application service cluster, a pre-warming service cluster, a cache cluster (comprising a plurality of cache nodes), and a database. The pre-warming service cluster comprises a timed batch-task triggering module and a read/write cache module, and the data preheating caching method of this embodiment may be executed by the pre-warming service cluster.
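The following minimal sketch, written as Python dataclasses, is one illustrative way to picture this architecture; the class and field names are assumptions for illustration and do not appear in the application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CacheNode:
    host: str
    port: int

@dataclass
class WarmupServiceCluster:
    """Runs the timed batch-task triggering module and the read/write cache module."""
    cache_nodes: List[CacheNode]            # cache cluster written to during pre-warming
    database_dsn: str                       # database holding the source data
    local_storage_path: str                 # local storage for the target node's hotspot data

@dataclass
class DataProcessingArchitecture:
    frontend_cluster: List[str]             # front-end hosts
    backend_app_cluster: List[str]          # back-end application service hosts
    warmup_cluster: WarmupServiceCluster    # executes the data pre-warm caching method
```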
Fig. 1 is a schematic flow chart illustrating a data pre-warming caching method according to a first embodiment of the present application, where as shown in fig. 1, the method includes:
S101: before target information is released, first hot spot data to be accessed, which are related to the target information, are determined.
The target information may be information that a large amount of access exists on the internet, and the target information is, for example, ranking list information of users of e-commerce activities, or any other possible information, which is not limited in this respect.
The hotspot data that is related to the target information and has a large access volume may be referred to as first hotspot data. For example, if many users access the top ten of the leaderboard, the user data of the top ten of the leaderboard may be referred to as the first hotspot data; or, if many users access the top three of the leaderboard, the user data of the top three of the leaderboard may also be referred to as the first hotspot data.
In practical applications, massive numbers of users participate simultaneously in the initial stage of target information release, so instantaneous concurrency is high. In view of this, the pre-warming service cluster of the embodiment of the present disclosure may pre-warm the cache with the first hotspot data in timed batches before the target information (e.g., the user leaderboard) is published.
For example, if the target information publishing time is P, the database information reading time is s, and the cache reading time is t, this embodiment may determine the first hotspot data s + t time units before the time point P, that is, execute the timed batch pre-warm caching task at that moment.
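A hedged sketch of that timing rule follows: given the publishing time P, the database read time s, and the cache time t, the timed batch task fires s + t time units before P. The function and parameter names are assumptions for illustration.

```python
from datetime import datetime, timedelta

def warmup_start_time(publish_time: datetime,
                      db_read_seconds: float,
                      cache_seconds: float) -> datetime:
    """Return the moment the timed batch pre-warm caching task should run: P - (s + t)."""
    return publish_time - timedelta(seconds=db_read_seconds + cache_seconds)

# Example: publish at 20:00, s = 30 s, t = 5 s -> the warm-up task fires at 19:59:25.
start = warmup_start_time(datetime(2022, 9, 30, 20, 0, 0), 30, 5)
```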
In some embodiments, for example, the user leaderboard information of historical activities may be analyzed to determine the first hotspot data of the current user leaderboard, or the first hotspot data may be determined in any other possible manner, which is not limited herein.
In other embodiments, the preheat service cluster may also periodically batch determine cold data to be accessed in relation to the target information.
In this case, the ranking data of the user may be referred to as cold data, or the cold data may also be any other data with a small access amount, which is not limited herein.
S102: and acquiring the first hot spot data from the database and writing the first hot spot data into a plurality of cache nodes.
After the first hot spot data is determined, the preheating service cluster of this embodiment may acquire the first hot spot data from the database, and write the first hot spot data into a plurality of cache nodes of the cache cluster.
In some embodiments, when there are a plurality of pieces of first hotspot data, the first hotspot data may be stored in a plurality of cache nodes, respectively, and each cache node caches one piece of hotspot data.
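The following Python sketch illustrates one possible way to perform this per-node write; the `CacheClient` interface and the shape of the hotspot items are assumptions, not an API defined by the application.

```python
from typing import Any, Dict, List, Optional, Protocol

class CacheClient(Protocol):
    """Assumed minimal interface of a single cache node."""
    def put(self, key: str, value: Any, ttl_seconds: Optional[float] = None) -> None: ...

def write_hotspot_to_nodes(hotspot_items: List[Dict[str, Any]],
                           cache_nodes: List[CacheClient],
                           ttl_seconds: Optional[float] = None) -> None:
    """Write each piece of first hotspot data to its own cache node (one piece per node)."""
    if len(hotspot_items) > len(cache_nodes):
        raise ValueError("not enough cache nodes for the hotspot entries")
    for item, node in zip(hotspot_items, cache_nodes):
        node.put(item["key"], item["value"], ttl_seconds)
```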
Some embodiments may also cache the cold data, that is, the present embodiment also supports a batch caching operation on the cold data.
S103: and determining a target node in the plurality of cache nodes according to a preset rule, and writing the second hotspot data cached by the target node into a local memory.
That is to say, according to the embodiment of the present disclosure, one or more nodes may be selected from the plurality of cache nodes as a target node according to a preset rule, and hotspot data (i.e., any piece of first hotspot data) cached in the target node may be referred to as second hotspot data, that is, the second hotspot data belongs to the first hotspot data.
In some embodiments, the preset rule may be, for example, a data-volume rule. Specifically, in this embodiment, the target node may be determined according to the amount of data cached by the multiple cache nodes; for example, the cache nodes may be sorted by the amount of first hotspot data stored on each node, and the single node with the largest cached data amount (the big key) is taken as the target node.
In other embodiments, the preset rule may also be a data-heat rule, that is, based on the heat of each piece of first hotspot data. Specifically, this embodiment may determine the target node according to the data heat of the data cached by the multiple cache nodes; for example, the cache nodes may be sorted by the data heat of the first hotspot data stored on each node, and the single node with the highest data heat (the hot key) is taken as the target node. It is understood that the data heat may be determined when the first hotspot data is determined, and is not limited to the above.
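The two preset rules above can be sketched as follows; the node-statistics interface and field names are assumptions for illustration.

```python
from typing import List, Protocol

class NodeStats(Protocol):
    """Assumed per-node statistics available to the pre-warming service."""
    name: str
    cached_bytes: int      # volume of first hotspot data cached on the node
    access_heat: float     # heat score determined when the first hotspot data was selected

def target_by_data_volume(nodes: List[NodeStats]) -> NodeStats:
    """Data-volume rule: pick the single node caching the most data (the big key)."""
    return max(nodes, key=lambda n: n.cached_bytes)

def target_by_data_heat(nodes: List[NodeStats]) -> NodeStats:
    """Data-heat rule: pick the single node caching the hottest data (the hot key)."""
    return max(nodes, key=lambda n: n.access_heat)
```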
After the target node is determined, the second hotspot data cached by the target node may be written into a local storage, for example, a local read-write disk.
S104: and in response to receiving the second hotspot data access request, reading the second hotspot data from the local storage and returning.
Further, during target information publishing, this embodiment may monitor whether there is an access request for the second hotspot data, that is, whether a user is accessing the second hotspot data. When a user accesses the second hotspot data, this embodiment may receive the second hotspot data access request, read the second hotspot data from the local storage, and return it to the user. In this process the hotspot data is read from the pre-warmed local storage and does not need to be queried in the database, so data-reading efficiency can be improved.
In some embodiments, the cached first hotspot data (including the cached second hotspot data) may have a caching deadline r. Considering the possibility of network jitter and contention for database resources, r is set to be greater than the sum of s and t. In the operation of reading the second hotspot data, it is first determined whether the current caching time of the second hotspot data exceeds the caching deadline r, that is, whether the second hotspot data has exceeded the cache period. If the caching time of the second hotspot data has not exceeded the caching deadline, the second hotspot data is read from the local storage and returned; if the caching deadline r has been exceeded, the second hotspot data has already been cleared from the cache, and in this case the second hotspot data is read from the database.
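One possible shape of this read path, assuming the caching timestamp and the two read callbacks are available, is sketched below; all names are illustrative assumptions.

```python
import time
from typing import Any, Callable, Optional

def read_second_hotspot(cached_at: float,
                        cache_deadline_r: float,
                        read_local: Callable[[], Optional[Any]],
                        read_database: Callable[[], Any]) -> Any:
    """Return the second hotspot data, preferring the pre-warmed local copy."""
    if time.time() - cached_at <= cache_deadline_r:
        value = read_local()          # caching time still within the deadline r
        if value is not None:
            return value
    # Deadline exceeded (or local miss): the cache entry has been cleared,
    # so fall back to reading the data from the database.
    return read_database()
```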
It should be noted that, in the technical solution of the present application, the acquisition, storage, and use of the data involved all comply with the relevant laws and regulations and do not violate public order and good customs.
According to this embodiment of the disclosure, before target information is published, first hotspot data to be accessed that is related to the target information is determined; the first hotspot data is obtained from a database and written into a plurality of cache nodes; a target node among the cache nodes is determined according to a preset rule and the second hotspot data cached by the target node is written into local storage; and, in response to receiving a second hotspot data access request, the second hotspot data is read from local storage and returned. Therefore, the speed at which users obtain data can be increased and the user experience improved.
Fig. 2 is a schematic flowchart illustrating a data pre-warming caching method according to a second embodiment of the present application, where as shown in fig. 2, the method includes:
S201: before target information is published, first hotspot data related to the target information and to be accessed are determined.
S202: and acquiring first hot spot data from the database and writing the first hot spot data into a plurality of cache nodes.
S203: and determining a target node in the plurality of cache nodes according to a preset rule, and writing the second hot spot data cached by the target node into a local storage.
For specific descriptions of S201-S203, reference may be made to the above embodiments, which are not described herein again.
S204: the second hot spot data is mapped to the memory using the page cache.
After the second hotspot data cached by the target node is written into the local storage, the embodiment of the present disclosure may further map the locally stored second hotspot data into memory by using a page cache. A validity period u may be set for the second hotspot data in memory.
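Assuming the local storage is an ordinary file, one way to obtain such a page-cache-backed mapping is Python's mmap module, as sketched below; the file path shown is a hypothetical example.

```python
import mmap

def map_hotspot_file(path: str) -> mmap.mmap:
    """Map the locally stored second hotspot data into memory (read-only)."""
    with open(path, "rb") as f:
        # Length 0 maps the whole file; reads are then served through the OS page cache.
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Hypothetical usage (the path below is an illustrative assumption):
# hot = map_hotspot_file("/data/warm/second_hotspot.bin")
# payload = hot[:]    # bytes of the second hotspot data, read via the page cache
```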
S205: and responding to the received second hot spot data access request, reading the second hot spot data from the memory and returning.
Further, in the operation of reading the second hot spot data, the second hot spot data may be directly read from the memory and returned.
In some embodiments, the data processing architecture of this embodiment may adopt, for example, socket network communication, and may call the sendfile function to perform data read/write operations. When socket network communication uses sendfile, data copying between user space and the operating system kernel can be avoided, and data is transmitted directly between two file descriptors. If the time for reading the second hotspot data from memory is v, data accuracy can be ensured as long as u + v is less than r + t. Therefore, by mapping the locally stored hotspot data into memory, this embodiment can greatly improve the throughput and read efficiency of the cached data.
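A minimal sketch of the zero-copy transfer described above, using os.sendfile on POSIX systems, is shown below; the socket setup and file path are assumptions for illustration.

```python
import os
import socket

def send_hotspot(conn: socket.socket, path: str) -> int:
    """Send the locally stored hotspot file to a connected client socket."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # os.sendfile copies file data to the socket inside the kernel,
            # avoiding a copy through user space (available on POSIX systems).
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    return sent
```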
According to this embodiment of the disclosure, before the target information is published, first hotspot data to be accessed that is related to the target information is determined; the first hotspot data is obtained from the database and written into the plurality of cache nodes; the target node among the cache nodes is determined according to the preset rule and the second hotspot data cached by the target node is written into local storage; and, in response to a received second hotspot data access request, the second hotspot data is read from local storage and returned. Pre-warming the cache across the plurality of cache nodes helps ensure high availability of the system and improves throughput, so the speed at which users obtain data can be increased and the user experience improved. In addition, mapping the locally stored hotspot data into memory greatly improves the throughput and read efficiency of the cached data.
Fig. 3 is a schematic flowchart illustrating a data pre-warming caching method according to a third embodiment of the present application, where as shown in fig. 3, the method includes:
S301: before target information is released, first hot spot data to be accessed, which are related to the target information, are determined.
S302: and acquiring the first hot spot data from the database and writing the first hot spot data into a plurality of cache nodes.
For specific descriptions of S301-S302, reference may be made to the above embodiments, which are not described herein again.
S303: and responding to the database updating operation, and updating the first hotspot data cached by the cache nodes.
According to the embodiment of the disclosure, the first hot spot data of the preheating cache can be updated.
In some embodiments, the update of the database may trigger a first hotspot data update operation of the plurality of cache nodes.
For example, when the user ranking list data changes, the data in the database is updated correspondingly, and the first hotspot data also changes accordingly. In this case, the embodiment of the present disclosure may perform an update operation on the first hotspot data cached by the plurality of cache nodes according to the update result of the database.
S304: and updating the first hot spot data cached by the plurality of cache nodes according to a preset updating interval.
In some embodiments, the update interval M may be set according to actual service requirements: if the user ranking list changes quickly, a smaller update interval M may be set; if it changes slowly, the update interval M may be extended. For example, M may be set to be smaller than r and larger than u; the smaller M is, the better the real-time accuracy of the data. In this embodiment, the first hotspot data cached by the plurality of cache nodes may be updated according to the preset update interval, that is, once every interval M. By updating the first hotspot data cached by the plurality of cache nodes in this way, the freshness of the cached hotspot data can be ensured and dirty reads from the pre-warmed cache avoided.
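The sketch below illustrates both update triggers under stated assumptions: a periodic refresh every M seconds (S304) and a hook invoked when the database changes (S303). The callback names are illustrative.

```python
import threading
from typing import Callable

def start_periodic_refresh(refresh_nodes: Callable[[], None],
                           update_interval_m: float) -> threading.Event:
    """Refresh the first hotspot data on all cache nodes once every M seconds."""
    stop = threading.Event()

    def loop() -> None:
        # Re-read the hotspot data from the database and rewrite the cache
        # nodes once per interval M, until the stop event is set.
        while not stop.wait(update_interval_m):
            refresh_nodes()

    threading.Thread(target=loop, daemon=True).start()
    return stop

def on_database_update(refresh_nodes: Callable[[], None]) -> None:
    """Hook for S303: refresh the cached hotspot data when the database is updated."""
    refresh_nodes()
```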
S305: and determining a target node in the plurality of cache nodes according to a preset rule, and writing the second hotspot data cached by the target node into a local memory.
S306: and in response to receiving the second hotspot data access request, reading the second hotspot data from the local storage and returning.
For specific descriptions of S305-S306, reference may be made to the above embodiments, which are not described herein again.
According to this embodiment of the disclosure, before target information is published, first hotspot data to be accessed that is related to the target information is determined; the first hotspot data is obtained from a database and written into a plurality of cache nodes; a target node among the cache nodes is determined according to a preset rule and the second hotspot data cached by the target node is written into local storage; and, in response to receiving a second hotspot data access request, the second hotspot data is read from local storage and returned. Therefore, the speed at which users obtain data can be increased and the user experience improved. In addition, by updating the first hotspot data cached by the plurality of cache nodes, this embodiment ensures the freshness of the cached hotspot data and avoids reading dirty data from the pre-warmed cache.
Fig. 4 is a block diagram of a data warm-up buffer apparatus according to the present application, and as shown in fig. 4, the data warm-up buffer apparatus 40 includes:
a first determining module 401, configured to determine, before target information is published, first hotspot data to be accessed, which is related to the target information;
a first writing module 402, configured to obtain first hotspot data from a database and write the first hotspot data into a plurality of cache nodes;
a second writing module 403, configured to determine a target node in the multiple cache nodes according to a preset rule, and write second hotspot data cached by the target node into a local storage; and
the reading module 404 is configured to, in response to receiving the second hotspot data access request, read the second hotspot data from the local storage and return the second hotspot data.
In some embodiments, the apparatus 40 further comprises: a mapping module for mapping the second hot spot data to the memory using the page cache;
the reading module 404 is specifically configured to: and responding to the access request, reading the second hot spot data from the memory and returning.
In some embodiments, the second writing module 403 is specifically configured to:
determining a target node according to the data volume of the cache data of the plurality of cache nodes; or
And determining a target node according to the data heat of the cache data of the plurality of cache nodes.
In some embodiments, the apparatus 40 further comprises an update module configured to:
responding to the database updating operation, and updating the first hotspot data cached by the plurality of cache nodes; or updating the first hot spot data cached by the plurality of cache nodes according to a preset updating interval.
In some embodiments, the reading module 404 is specifically configured to:
judging whether the caching time of the second hot spot data exceeds the caching deadline; and
and reading the second hot spot data from the local storage and returning the second hot spot data when the caching time of the second hot spot data does not exceed the caching deadline.
In some embodiments, the apparatus further comprises: and the second determining module is used for determining cold data to be accessed related to the target information and caching the cold data.
In this embodiment, before the target information is published, first hotspot data to be accessed that is related to the target information is determined; the first hotspot data is acquired from the database and written into the plurality of cache nodes; a target node among the cache nodes is determined according to a preset rule and the second hotspot data cached by the target node is written into local storage; and, in response to a received second hotspot data access request, the second hotspot data is read from local storage and returned. Therefore, the speed at which users obtain data can be increased and the user experience improved.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
Fig. 5 is a block diagram of an electronic device shown in accordance with the present application. For example, the electronic device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, electronic device 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 516.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the electronic device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a touch sensitive display screen that provides an output interface between the electronic device 500 and a user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 500 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516.
In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the electronic device 500. For example, the sensor component 514 may detect an open/closed state of the electronic device 500, a relative positioning of components, such as a display and keypad of the electronic device 500, a change in position of the electronic device 500 or a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, an orientation or acceleration/deceleration of the electronic device 500, and a change in temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described data pre-warm buffering method.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the electronic device 500 to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (15)
1. A data pre-heating cache method is characterized by comprising the following steps:
before target information is published, determining first hotspot data to be accessed, which are related to the target information;
acquiring the first hot spot data from a database and writing the first hot spot data into a plurality of cache nodes;
determining a target node in the plurality of cache nodes according to a preset rule, and writing second hot spot data cached by the target node into a local storage, wherein the second hot spot data belongs to the first hot spot data; and
and responding to a received second hotspot data access request, reading the second hotspot data from the local storage and returning.
2. The method of claim 1, wherein after writing the second hotspot data cached by the target node to a local storage, further comprising:
mapping the second hot spot data to memory using a page cache;
and, in response to receiving a second hotspot data access request, reading and returning the second hotspot data from the local storage, including:
and responding to the access request, reading the second hot spot data from the memory and returning.
3. The method of claim 1, wherein determining the target node of the plurality of cache nodes according to a preset rule comprises:
determining the target node according to the data volume of the cache data of the plurality of cache nodes; or alternatively
And determining the target node according to the data heat of the cache data of the plurality of cache nodes.
4. The method of claim 1, wherein after obtaining the first hotspot data from the database and writing the first hotspot data to a plurality of cache nodes, the method further comprises:
updating the first hotspot data cached by the plurality of cache nodes in response to the database update operation; or
And updating the first hot spot data cached by the plurality of cache nodes according to a preset updating interval.
5. The method of claim 1, wherein reading and returning the second hotspot data from the local storage comprises:
judging whether the caching time of the second hot spot data exceeds the caching deadline; and
and reading the second hot spot data from the local storage and returning the second hot spot data when the caching time of the second hot spot data does not exceed the caching deadline.
6. The method of claim 1, further comprising:
and determining cold data to be accessed related to the target information, and caching the cold data.
7. A data pre-heating buffer device is characterized by comprising:
the first determining module is used for determining first hotspot data to be accessed related to target information before the target information is released;
the first writing module is used for acquiring the first hotspot data from a database and writing the first hotspot data into a plurality of cache nodes;
a second writing module, configured to determine a target node in the multiple cache nodes according to a preset rule, and write second hot spot data cached by the target node into a local storage, where the second hot spot data belongs to the first hot spot data; and
and the reading module is used for responding to the received second hot spot data access request, reading the second hot spot data from the local storage and returning the second hot spot data.
8. The apparatus of claim 7, further comprising:
a mapping module for mapping the second hot spot data to a memory using a page cache;
the reading module is specifically configured to:
and responding to the access request, reading the second hot spot data from the memory and returning.
9. The apparatus of claim 7, wherein the second write module is specifically configured to:
determining the target node according to the data volume of the cache data of the plurality of cache nodes; or
And determining the target node according to the data heat of the cache data of the plurality of cache nodes.
10. The apparatus of claim 7, further comprising an update module configured to:
updating the first hotspot data cached by the plurality of cache nodes in response to the database update operation; or
And updating the first hot spot data cached by the plurality of cache nodes according to a preset updating interval.
11. The apparatus according to claim 7, wherein the reading module is specifically configured to:
judging whether the caching time of the second hot spot data exceeds the caching period or not; and
and reading the second hot spot data from the local storage and returning the second hot spot data when the caching time of the second hot spot data does not exceed the caching deadline.
12. The apparatus of claim 7, further comprising:
and the second determining module is used for determining cold data to be accessed related to the target information and caching the cold data.
13. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-6.
14. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1-6.
15. A computer program product, characterized in that it comprises a computer program which, when being executed by a processor, carries out the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211206972.7A CN115510107A (en) | 2022-09-30 | 2022-09-30 | Data preheating caching method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211206972.7A CN115510107A (en) | 2022-09-30 | 2022-09-30 | Data preheating caching method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115510107A true CN115510107A (en) | 2022-12-23 |
Family
ID=84509050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211206972.7A Pending CN115510107A (en) | 2022-09-30 | 2022-09-30 | Data preheating caching method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115510107A (en) |
- 2022-09-30: application CN202211206972.7A filed in China; published as CN115510107A (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108804244B (en) | Data transmission method, device and storage medium | |
CN107315791A (en) | Static resource caching method, device and computer-readable recording medium | |
CN113190777B (en) | Data updating method, device, electronic equipment, storage medium and product | |
CN106598488A (en) | Distributed data reading method and device | |
CN114428589B (en) | Data processing method and device, electronic equipment and storage medium | |
CN111246278B (en) | Video playing method and device, electronic equipment and storage medium | |
CN111221862B (en) | Request processing method and device | |
CN110795314B (en) | Method and device for detecting slow node and computer readable storage medium | |
CN110908814A (en) | Message processing method and device, electronic equipment and storage medium | |
CN107704489B (en) | Processing method and device for read-write timeout and computer readable storage medium | |
CN115510107A (en) | Data preheating caching method, device, equipment and storage medium | |
CN114281859A (en) | Data processing method, device and storage medium | |
CN112667852B (en) | Video-based searching method and device, electronic equipment and storage medium | |
US9774730B2 (en) | Method, device, and system for telephone interaction | |
CN112102009A (en) | Advertisement display method, device, equipment and storage medium | |
CN109582851B (en) | Search result processing method and device | |
CN111625536B (en) | Data access method and device | |
CN112182027B (en) | Information query method, device, electronic equipment and storage medium | |
CN114238728B (en) | Vehicle data processing method, device and equipment | |
CN112102081B (en) | Method, device, readable storage medium and blockchain network for generating blockchain | |
CN110716985B (en) | Node information processing method, device and medium | |
CN112462996B (en) | Service information optimizing method, service information optimizing device and storage medium | |
CN110119471B (en) | Method and device for checking consistency of search results | |
CN110019358B (en) | Data processing method, device and equipment and storage medium | |
CN111723320B (en) | Data chart loading method, device and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |