CN115599821A - Cache control method, device, equipment and medium


Info

Publication number
CN115599821A
Authority
CN
China
Prior art keywords
target
storage address
cached data
data
target index
Prior art date
Legal status
Pending
Application number
CN202211309955.6A
Other languages
Chinese (zh)
Inventor
赵鹏
Current Assignee
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date
Filing date
Publication date
Application filed by Agricultural Bank of China
Priority to CN202211309955.6A
Publication of CN115599821A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2455: Query execution
    • G06F16/24552: Database cache management
    • G06F16/23: Updating

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a cache control method, apparatus, device and medium. The method comprises: acquiring the target access heat of cached data; controlling, according to the target access heat, whether the storage address of the cached data is added to a target array; and updating the cached data in the cache region according to the storage addresses in the target array. By introducing the target access heat to update the storage addresses in the target array, and then updating the cached data in the cache region accordingly in stages, the scheme avoids the impact of a cache-region avalanche on the database and reduces the access pressure on the database. At the same time, the scheme reduces unnecessary occupation of cache resources, improves the utilization of cache resources, and is simple to operate with low computational cost.

Description

Cache control method, device, equipment and medium
Technical Field
The embodiment of the invention relates to the technical field of data storage, in particular to a cache control method, a cache control device, cache control equipment and a cache control medium.
Background
With the popularization of microservice technology, caching is widely adopted when accessing a database, in order to improve throughput and reduce the access pressure on the database.
In the caching technology of the prior art, once the cached data in the cache region reaches its expiration time, a cache avalanche can occur: a large number of simultaneous requests access the database directly, which may crash the database and disrupt its normal use.
Disclosure of Invention
The invention provides a cache control method, apparatus, device and medium, which are used to avoid cache avalanche.
According to an aspect of the present invention, there is provided a cache control method, including:
acquiring the target access heat of cached data;
controlling to add the storage address of the cached data to a target array according to the target access heat;
and updating the cached data in the cache region according to the storage address in the target array.
According to another aspect of the present invention, there is provided a cache control apparatus, including:
the target access heat acquisition module is used for acquiring the target access heat of the cached data;
the storage address adding module is used for controlling the storage address of the cached data to be added to a target array according to the target access heat;
and the cached data updating module is used for updating the cached data in the cache region according to the storage address in the target array.
According to another aspect of the present invention, there is provided an electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform any of the cache control methods provided by the embodiments of the present invention.
According to another aspect of the present invention, a computer-readable storage medium is provided, storing computer instructions that, when executed, cause a processor to implement any of the cache control methods provided by the embodiments of the present invention.
According to the cache control scheme provided by the embodiment of the invention, the target access heat of cached data is acquired; the storage address of the cached data is added to the target array under the control of the target access heat; and the cached data in the cache region is updated according to the storage addresses in the target array. By introducing the target access heat, updating the storage addresses in the target array, and then updating the cached data in the cache region in stages, the scheme avoids the impact of a cache-region avalanche on the database and reduces the access pressure on the database. At the same time, the scheme reduces unnecessary occupation of cache resources, improves the utilization of cache resources, and is simple to operate with low computational cost.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a cache control method according to an embodiment of the present invention;
fig. 2A is a flowchart of a cache control method according to a second embodiment of the present invention;
fig. 2B is a schematic diagram illustrating that a doubly linked list is used to store each storage address in a target index according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a cache control apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device implementing a cache control method according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a cache control method according to an embodiment of the present invention. This embodiment is applicable to controlling the addition and deletion of cached data stored in a cache region. The method may be executed by a cache control apparatus, which may be implemented in software and/or hardware and configured in an electronic device that carries the cache control function.
Referring to fig. 1, the cache control method includes:
and S110, acquiring the target access heat of the cached data.
Cached data refers to data already stored in the cache region. For example, after an access request is received, it is handled as follows: if the cached data stored in the cache region contains the access result corresponding to the request, that result is fed back to the request initiator; if it does not, the access result is obtained from the database, fed back to the request initiator, and also sent to the cache region for storage, where it becomes cached data. Note that the cache region holds at least one item of cached data, and cached data is added to it in real time.
The target access heat of cached data characterizes how frequently the data is accessed, and can be quantified by the data's hit rate within a preset time slice. Specifically, the target access heat may be determined from the number of accesses within the time slice: for any item of cached data, the more often it is accessed within the slice, the higher its hit rate and thus its target access heat; the fewer the accesses, the lower the hit rate and thus the target access heat. The length of the preset time slice is not limited and can be set empirically by a technician.
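As a minimal Python sketch of the idea above (the class name, the 60-second slice default and the timestamp-pruning strategy are illustrative assumptions, not taken from the patent), the target access heat of a key can be computed as its hit count within the most recent preset time slice:

```python
import time
from collections import defaultdict

class AccessHeatTracker:
    """Counts accesses to each item of cached data within the most recent
    preset time slice; the count serves as the target access heat."""

    def __init__(self, slice_seconds=60):
        self.slice_seconds = slice_seconds  # preset time slice length (assumed)
        self.hits = defaultdict(list)       # key -> access timestamps

    def record_access(self, key, now=None):
        """Record one access (a cache hit) for the given cached-data key."""
        self.hits[key].append(time.monotonic() if now is None else now)

    def target_access_heat(self, key, now=None):
        """Number of accesses to `key` within the preset time slice."""
        now = time.monotonic() if now is None else now
        window_start = now - self.slice_seconds
        # Drop accesses that fall outside the current time slice.
        self.hits[key] = [t for t in self.hits[key] if t >= window_start]
        return len(self.hits[key])
```

A caller would record an access each time a key is served from the cache and periodically compare the resulting heat against the preset heat threshold.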
And S120, controlling to add the storage address of the cached data to the target array according to the target access heat.
The storage address uniquely identifies the cached data. It can serve as a pointer to the cached data: from any storage address in the target array, the cached data that should remain stored in the cache region can be determined.
The target array is used to store the storage addresses of cached data.
Specifically, for any item of cached data: if its target access heat is high, its storage address is added to the target array; if its target access heat is low, the item is treated as cached data to be deleted, and its storage address is either not added to the target array or is deleted from it.
And S130, updating the cached data in the cache region according to the storage address in the target array.
The implementation of the cache region is not limited in the embodiments of the present invention and may be chosen by a technician based on experience or need. For example, the cache region may be implemented by a cache database, which stores the cached data.
Specifically, cached data whose storage address is present in the target array remains stored in the cache region; cached data whose storage address is absent from the target array is deleted from the cache region, thereby updating the cached data in the cache region. It follows that the cache region stores at least as many items of cached data as there are storage addresses in the target array.
Note that, for any item of cached data to be deleted: if its storage address is in the target array, the address is located in the target array and deleted there while the cached data is deleted from the cache region; if its storage address is not in the target array, the cached data is deleted directly.
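The cache-region update of step S130 can be sketched as follows in Python (function and variable names are assumptions for illustration; the target array is modeled as a list of per-index buckets of addresses):

```python
def update_cache_region(cache, target_array):
    """Keep only cached data whose storage address still appears in some
    target index of the target array; delete the rest (a sketch of S130)."""
    retained = set()
    for bucket in target_array:          # each bucket is one target index
        retained.update(bucket)
    for address in list(cache):          # copy keys: we mutate while iterating
        if address not in retained:
            del cache[address]           # address dropped -> evict the data
    return cache
```

Because only addresses missing from the target array are evicted, hot entries expire at different times rather than all at once, which is what prevents the avalanche.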
According to the cache control scheme provided by the embodiment of the invention, the target access heat of cached data is acquired; the storage address of the cached data is added to the target array under the control of the target access heat; and the cached data in the cache region is updated according to the storage addresses in the target array. By introducing the target access heat, updating the storage addresses in the target array, and then updating the cached data in the cache region in stages, the scheme avoids the impact of a cache-region avalanche on the database and reduces the access pressure on the database. At the same time, the scheme reduces unnecessary occupation of cache resources, improves the utilization of cache resources, and is simple to operate with low computational cost.
The cache control scheme provided by the embodiment of the invention can be applied to scenarios with a large data access volume or high concurrency.
Example two
Fig. 2A is a flowchart of a cache control method according to a second embodiment of the present invention. Building on the foregoing embodiments, this embodiment refines the operation "controlling to add the storage address of cached data to a target array according to the target access heat" into: "if the target access heat is greater than a preset heat threshold, allocating a target index to the storage address of the cached data and adding the storage address to that target index; if the target access heat is not greater than the preset heat threshold, allocating a target index to the storage address of the cached data and deleting the storage address from that target index", thereby completing the mechanism for adding storage addresses. For details not described in this embodiment, refer to the other embodiments.
Referring to fig. 2A, a cache control method includes:
s210, obtaining the target access heat of the cached data.
S220, if the target access heat is larger than a preset heat threshold, allocating a target index to the storage address of the cached data in the target array, and adding the storage address of the cached data to the target index.
The size of the preset heat threshold is not limited; it can be set empirically by a technician or determined through repeated testing.
The target index is an index into the target array, used to locate where the storage address of the cached data is stored within the target array.
Specifically, since the target array is a data structure in which storage addresses are arranged linearly, a target index may be allocated to each storage address so that its position in the target array can be determined quickly. The target index may be determined as the remainder of the storage address divided by the array length of the target array. For example, if the array length is 20 and the storage address leaves a remainder of 0, the target index of that storage address is 0, i.e., the address is stored at target index 0. Note that the target array contains at least one target index, and the array length of the target array may be set by a technician as needed or based on experience.
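The remainder computation above is a one-liner; this sketch (the default array length of 20 is just the value from the example) treats the storage address as an integer:

```python
def target_index(storage_address, array_length=20):
    """Target index = remainder of the storage address divided by the
    array length of the target array (the example's array length is 20)."""
    return storage_address % array_length
```

So a storage address of 40 with an array length of 20 maps to target index 0, matching the example in the text.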
Specifically, for any cached data, within a preset time slice, if the target access heat of the cached data is greater than a preset heat threshold, a target index is allocated to the storage address of the cached data in a target array, and the storage address of the cached data is added to the target index.
In a specific implementation manner, adding the storage address of the cached data to the target index may be: and if the target index does not store the data, directly adding the storage address of the cached data to the target index.
For example, for any storage address of cached data, if the target index of the storage address is determined to be the target index a according to the remainder of the array length of the storage address, and if no data is stored in the target index a, the storage address may be directly added to the target index a.
In the embodiment of the invention, the amount of data stored in a target index is limited, so data cannot accumulate without bound. To increase the number of storage addresses the target array can hold, a preset number of storage addresses may be stored in a single target index.
In another specific implementation, adding the storage address of the cached data to the target index may be done as follows: if the target index already stores data, determine the stored data amount, and update the stored data in the target index according to that amount and the storage address of the cached data. The stored data amount is the number of storage addresses already added to the target index; the stored data are the storage addresses stored in it.
Continuing with the previous example, if the target index a has data stored therein, the amount of the stored data in the target index a is determined, and the stored data in the target index a is updated according to the amount of the stored data and the storage address of the cached data.
In the embodiment of the present invention, updating the stored data in the target index according to the stored data amount and the storage address of the cached data may proceed by comparing the stored data amount with a preset number threshold, to decide whether the storage address of the cached data can be stored in the target index directly.
Specifically, if the stored data amount equals the preset number threshold, at least one stored datum is deleted from the target index and the storage address of the cached data is then added; if the stored data amount is below the threshold, the storage address is added directly. The size of the preset number threshold is not limited; it can be set empirically by a technician or determined through repeated testing.
Continuing the previous example with a preset number threshold of 5: when the stored data amount in target index A equals 5, at least one stored datum is deleted from target index A and the storage address of the cached data is added; when the stored data amount is below the threshold, for example 4, the storage address of the cached data is added to target index A directly.
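The threshold check described above can be sketched as follows (the function name, the oldest-first eviction choice and the threshold value of 5 from the example are assumptions; the patent only requires that "at least one stored datum" be deleted):

```python
PRESET_NUMBER_THRESHOLD = 5  # example value from the text; set by the technician

def add_to_bucket(bucket, storage_address, limit=PRESET_NUMBER_THRESHOLD):
    """Add a storage address to the list behind one target index.

    If the stored data amount already equals the preset number threshold,
    delete at least one stored datum (oldest-first here) before adding;
    otherwise add the address directly."""
    if len(bucket) == limit:
        bucket.pop(0)                  # delete one stored datum
    bucket.append(storage_address)     # add the new storage address
    return bucket
```

With a full bucket of five addresses, one insertion evicts the oldest entry and appends the new one, keeping the bucket at the threshold.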
Note that every target index in the target array has a corresponding preset number threshold. The thresholds of different target indexes may be the same or different; this is not limited by the embodiments of the present invention. For ease of management, the thresholds of different target indexes are usually the same.
It can be understood that introducing the preset number threshold to decide whether a storage address can be added directly avoids invalid additions and ensures that the storage address of the cached data is actually added to the target index.
It can be understood that introducing the stored data amount, and deciding from it whether a storage address can be added directly, prevents the target index from storing addresses without bound and reduces its storage pressure.
And S230, if the target access heat is not greater than the preset heat threshold, allocating a target index to the storage address of the cached data in the target array, and deleting the storage address of the cached data from the target index.
Specifically, for any cached data whose target access heat within the preset time slice is less than or equal to the preset heat threshold, its storage address is deleted from the target index allocated to it. Correspondingly, when the cache region is subsequently updated, the cached data corresponding to the deleted storage address is also deleted from the cache region, reducing the storage pressure of the cache region and avoiding cache avalanche.
S240, updating the cached data in the cache region according to the storage address in the target array.
The embodiment of the invention provides a cache control scheme in which, if the target access heat is greater than a preset heat threshold, a target index is allocated to the storage address of the cached data and the storage address is added to that target index; if the target access heat is not greater than the preset heat threshold, a target index is allocated to the storage address and the storage address is deleted from that target index, completing the mechanism for adding storage addresses. By introducing the preset heat threshold to control the addition and deletion of storage addresses in the target index, the scheme reduces unnecessary resource occupation, avoids the performance impact of storing a large number of storage addresses in the target index, and reduces the pressure on the target index. At the same time, introducing the target index locates the storage address of the cached data quickly and accurately, improving lookup efficiency.
The embodiment of the invention does not limit how a target index stores its storage addresses. In an alternative embodiment, each storage address in a target index may be stored in a doubly linked list.
It can be understood that storing the storage addresses in a doubly linked list makes adding and deleting them more convenient, while also allowing each storage address to be traversed.
Optionally, when the target index stores its storage addresses in a doubly linked list and the list has not yet reached a preset length threshold, the storage address of the cached data is added by inserting it at the tail of the doubly linked list corresponding to the target index. The size of the preset length threshold is not limited; it can be set empirically by a technician or determined through repeated testing.
It can be understood that inserting the storage address at the tail of the doubly linked list avoids confusion during insertion and improves the accuracy of inserting the storage address of the cached data.
Or optionally, when the doubly linked list corresponding to the target index has reached the preset length threshold, at least one stored datum must be deleted from the list before the storage address of the cached data can be added.
Specifically, stored data may be deleted from the head of the doubly linked list corresponding to the target index; or, the reference access heat of the cached data corresponding to each stored datum in the target index may be determined, and the storage address of cached data with lower reference access heat deleted from the list, preferably that with the lowest reference access heat. The reference access heat of an existing storage address in the doubly linked list is the number of accesses, within the preset time slice, to the cached data that the address points to.
For example, referring to fig. 2B, suppose target index A corresponds to doubly linked list A, which contains four nodes H, G, F and D connected in sequence by forward and backward links, with node H as the head and node D as the tail. When list A reaches the preset length threshold (for example, 4) and a new storage address must be added, scanning can start from the tail (node D) to find the head (node H); the stored data in node H is deleted and the new storage address is inserted at the tail of list A. Alternatively, the reference access heat of the cached data corresponding to each stored datum in target index A can be determined, and the storage address with the lowest reference access heat deleted from list A: if the stored data in node G has the lowest reference access heat, node G's stored data is deleted and the new storage address is inserted at the tail of list A.
Of course, when comparing reference access heats, the access heat of the cached data to be added can be included in the comparison. If the data to be added has the lowest access heat, adding its storage address to the corresponding target index may simply be prohibited.
Note that the doubly linked list of every target index has a corresponding preset length threshold. The thresholds of different target indexes' lists may be the same or different; this is not limited by the embodiments of the present invention.
It can be understood that being able to delete stored data from the doubly linked list in either of these two ways improves the diversity and flexibility of deletion and avoids the unreasonable deletions a single strategy might cause; at the same time, it guarantees that no node's stored data is retained permanently, avoiding persistent dirty reads.
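Both eviction strategies can be sketched with Python's `OrderedDict`, which maintains a doubly linked order internally (the class name, the threshold of 4 from the example, and the strategy flag are assumptions for illustration):

```python
from collections import OrderedDict

class IndexBucket:
    """Doubly linked storage for one target index. New addresses are
    inserted at the tail; once the preset length threshold is reached,
    either the head node or the entry whose cached data has the lowest
    reference access heat is evicted."""

    def __init__(self, max_len=4, evict_lowest_heat=True):
        self.max_len = max_len                 # preset length threshold
        self.evict_lowest_heat = evict_lowest_heat
        self.addresses = OrderedDict()         # address -> reference access heat

    def insert(self, address, heat=0):
        if len(self.addresses) >= self.max_len:
            if self.evict_lowest_heat:
                # Strategy 2: delete the lowest reference access heat.
                victim = min(self.addresses, key=self.addresses.get)
            else:
                # Strategy 1: delete from the head of the list.
                victim = next(iter(self.addresses))
            del self.addresses[victim]
        self.addresses[address] = heat         # insert at the tail
```

Replaying the fig. 2B example (nodes H, G, F, D with node G coldest), inserting a fifth address under strategy 2 evicts G; under strategy 1 it evicts the head H.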
EXAMPLE III
Fig. 3 is a schematic structural diagram of a cache control apparatus according to a third embodiment of the present invention. This embodiment is applicable to controlling the addition and deletion of cached data stored in a cache region. The method may be executed by the cache control apparatus, which may be implemented in software and/or hardware and configured in an electronic device that carries the cache control function.
As shown in fig. 3, the apparatus includes: a target access heat acquisition module 310, a storage address adding module 320 and a cached data updating module 330.
a target access heat obtaining module 310, configured to obtain a target access heat of the cached data;
the storage address adding module 320 is used for controlling the storage address of the cached data to be added into the target array according to the target access heat;
and a cached data updating module 330, configured to update the cached data in the cache area according to the storage address in the target array.
According to the cache control scheme provided by the embodiment of the invention, the target access heat of cached data is acquired by the target access heat acquisition module; the storage address of the cached data is added to the target array by the storage address adding module under the control of the target access heat; and the cached data in the cache region is updated according to the storage addresses in the target array by the cached data updating module. By introducing the target access heat, updating the storage addresses in the target array, and then updating the cached data in the cache region in stages, the scheme avoids the impact of a cache-region avalanche on the database and reduces the access pressure on the database. At the same time, the scheme reduces unnecessary occupation of cache resources, improves the utilization of cache resources, and is simple to operate with low computational cost.
Optionally, the storage address adding module 320 includes:
the adding unit is used for allocating a target index to the storage address of the cached data in the target array and adding the storage address of the cached data to the target index if the target access heat is greater than a preset heat threshold;
and the deleting unit is used for allocating a target index to the storage address of the cached data in the target array and deleting the storage address of the cached data from the target index if the target access heat is not greater than the preset heat threshold.
Optionally, the adding unit includes:
the adding subunit is used for directly adding the storage address of the cached data to the target index if the target index does not store the data;
and the updating subunit is used for determining the stored data amount if the target index stores the data, and updating the stored data in the target index according to the stored data amount and the storage address of the cached data.
Optionally, the update subunit includes:
the first adding subunit is used for deleting at least one piece of stored data from the target index and adding the storage address of the cached data to the target index if the amount of stored data is equal to a preset number threshold;
and the second adding subunit is used for directly adding the storage address of the cached data to the target index if the amount of stored data is smaller than the preset number threshold.
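A target index bounded in this way can be sketched as follows, with the index held as a simple list of addresses; the capacity of 3 is an assumed value for the "preset number threshold":

```python
MAX_PER_INDEX = 3  # assumed "preset number threshold"

def add_to_index(bucket, address):
    """Add a storage address to a target index (a list of addresses).

    If the index is below capacity, the address is added directly;
    if the index is full, at least one stored entry (here the
    oldest) is deleted before the new address is added.
    """
    if len(bucket) == MAX_PER_INDEX:
        bucket.pop(0)           # delete one stored datum, oldest first
    bucket.append(address)      # add the new storage address
    return bucket
```

Evicting before inserting keeps the index at a fixed size, so cache resources occupied by one heat tier are bounded.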
Optionally, the target index stores each storage address by using a doubly linked list.
Optionally, the update subunit is specifically configured to:
and inserting the storage address of the cached data at the tail of the doubly linked list corresponding to the target index.
Optionally, the update subunit is specifically configured to:
deleting the stored data from the head of the doubly linked list corresponding to the target index; or,
determining the reference access heat of the cached data corresponding to each piece of stored data in the target index, and deleting the storage address corresponding to the cached data with the lowest reference access heat from the doubly linked list corresponding to the target index.
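Using Python's `collections.deque` as a stand-in for the doubly linked list, the tail insertion and the two deletion strategies described above can be sketched as follows; the capacity and the `heats` mapping are assumptions for the sketch:

```python
from collections import deque

def insert_address(dll, address, heats=None, capacity=3):
    """Insert a storage address at the tail of a target index's
    doubly linked list. When the list is full, first delete either
    the head entry (the oldest) or, if reference access heats are
    known, the entry with the lowest reference access heat."""
    if len(dll) == capacity:
        if heats:
            coldest = min(dll, key=lambda a: heats.get(a, 0))
            dll.remove(coldest)     # drop the lowest-heat storage address
        else:
            dll.popleft()           # drop the head (oldest) entry
    dll.append(address)             # insert at the tail
    return dll
```

With head deletion the index behaves like a FIFO queue; with heat-based deletion it behaves closer to an LFU-style eviction, keeping the hotter addresses resident.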
The cache control device provided by the embodiment of the invention can execute the cache control method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing each cache control method.
In the technical scheme of the invention, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the cached data, the target access heat, the storage addresses and the like involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
Example four
Fig. 4 is a schematic structural diagram of an electronic device implementing a cache control method according to a fourth embodiment of the present invention. The electronic device 410 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 4, the electronic device 410 includes at least one processor 411 and a memory communicatively connected to the at least one processor 411, such as a read-only memory (ROM) 412 and a random access memory (RAM) 413, wherein the memory stores a computer program executable by the at least one processor. The processor 411 may perform various appropriate actions and processes according to the computer program stored in the ROM 412 or loaded from the storage unit 418 into the RAM 413. The RAM 413 may also store various programs and data required for the operation of the electronic device 410. The processor 411, the ROM 412, and the RAM 413 are connected to each other through a bus 414. An input/output (I/O) interface 415 is also connected to the bus 414.
A number of components in the electronic device 410 are connected to the I/O interface 415, including: an input unit 416 such as a keyboard, a mouse, or the like; an output unit 417 such as various types of displays, speakers, and the like; a storage unit 418, such as a magnetic disk, optical disk, or the like; and a communication unit 419 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 419 allows the electronic device 410 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Processor 411 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 411 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 411 performs various methods and processes described above, such as a cache control method.
In some embodiments, the cache control method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 418. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 410 via the ROM 412 and/or the communication unit 419. When the computer program is loaded into the RAM 413 and executed by the processor 411, one or more steps of the cache control method described above may be performed. Alternatively, in other embodiments, the processor 411 may be configured to perform the cache control method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A cache control method, comprising:
acquiring the target access heat of cached data;
controlling to add the storage address of the cached data to a target array according to the target access heat;
and updating the cached data in the cache region according to the storage address in the target array.
2. The method of claim 1, wherein the controlling the storage address of the cached data to be added to the target array according to the target access heat comprises:
if the target access heat is greater than a preset heat threshold, allocating a target index to the storage address of the cached data in the target array, and adding the storage address of the cached data to the target index;
if the target access heat is not greater than the preset heat threshold, allocating a target index to the storage address of the cached data in the target array, and deleting the storage address of the cached data from the target index.
3. The method of claim 2, wherein adding the storage address of the cached data to the target index comprises:
if the target index does not store data, directly adding a storage address of the cached data to the target index;
and if the target index has stored data, determining the stored data amount, and updating the stored data in the target index according to the stored data amount and the storage address of the cached data.
4. The method of claim 3, wherein updating the stored data in the target index according to the amount of the stored data and the storage address of the cached data comprises:
if the amount of the stored data is equal to a preset number threshold, deleting at least one piece of stored data from the target index, and adding a storage address of the cached data to the target index;
and if the amount of the stored data is smaller than the preset number threshold, directly adding the storage address of the cached data to the target index.
5. The method of claim 4, wherein the target index stores each storage address using a doubly linked list.
6. The method of claim 5, wherein adding the storage address of the cached data to the target index comprises:
and inserting the storage address of the cached data at the tail of the doubly linked list corresponding to the target index.
7. The method of claim 5, wherein deleting at least one stored datum from the target index comprises:
deleting the stored data from the head of the doubly linked list corresponding to the target index; or,
determining the reference access heat of the cached data corresponding to each piece of stored data in the target index, and deleting the storage address corresponding to the cached data with the lowest reference access heat from the doubly linked list corresponding to the target index.
8. A cache control apparatus, comprising:
the target access heat acquisition module is used for acquiring the target access heat of the cached data;
the storage address adding module is used for controlling the storage address of the cached data to be added to a target array according to the target access heat;
and the cached data updating module is used for updating the cached data in the cache region according to the storage address in the target array.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache control method as recited in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a cache control method according to any one of claims 1 to 7.
CN202211309955.6A 2022-10-25 2022-10-25 Cache control method, device, equipment and medium Pending CN115599821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211309955.6A CN115599821A (en) 2022-10-25 2022-10-25 Cache control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211309955.6A CN115599821A (en) 2022-10-25 2022-10-25 Cache control method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115599821A true CN115599821A (en) 2023-01-13

Family

ID=84848679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211309955.6A Pending CN115599821A (en) 2022-10-25 2022-10-25 Cache control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115599821A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116560585A (en) * 2023-07-05 2023-08-08 支付宝(杭州)信息技术有限公司 Data hierarchical storage method and system
CN116560585B (en) * 2023-07-05 2024-04-09 支付宝(杭州)信息技术有限公司 Data hierarchical storage method and system
CN116561825A (en) * 2023-07-12 2023-08-08 北京亿赛通科技发展有限责任公司 Data security control method and device and computer equipment
CN116561825B (en) * 2023-07-12 2023-09-26 北京亿赛通科技发展有限责任公司 Data security control method and device and computer equipment
CN117235247A (en) * 2023-11-13 2023-12-15 深圳市微克科技有限公司 Novel reading method, system and medium based on intelligent wearable equipment

Similar Documents

Publication Publication Date Title
CN115599821A (en) Cache control method, device, equipment and medium
CN113961510A (en) File processing method, device, equipment and storage medium
US10747773B2 (en) Database management system, computer, and database management method
CN109597724B (en) Service stability measuring method, device, computer equipment and storage medium
CN113010535B (en) Cache data updating method, device, equipment and storage medium
CN113094392A (en) Data caching method and device
CN113742131B (en) Method, electronic device and computer program product for storage management
CN107748649B (en) Method and device for caching data
CN115061947B (en) Resource management method, device, equipment and storage medium
CN116578502A (en) Access request processing device, processing method, equipment and storage medium
CN116226150A (en) Data processing method, device, equipment and medium based on distributed database
CN113191136B (en) Data processing method and device
CN115878035A (en) Data reading method and device, electronic equipment and storage medium
CN112306413B (en) Method, device, equipment and storage medium for accessing memory
CN114564149A (en) Data storage method, device, equipment and storage medium
CN113076067A (en) Method and device for eliminating cache data
CN112631517A (en) Data storage method and device, electronic equipment and storage medium
CN117632795A (en) Statistical method, system, device, equipment and medium for cache data access frequency
CN115964391A (en) Cache management method, device, equipment and storage medium
CN114826935B (en) Model generation method, system, server and storage medium
CN113849255B (en) Data processing method, device and storage medium
CN112989250B (en) Web service response method and device and electronic equipment
CN114116613B (en) Metadata query method, device and storage medium based on distributed file system
US20230048813A1 (en) Method of storing data and method of reading data
CN112559574B (en) Data processing method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination