CN114253455A - Cache hit rate adjusting method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114253455A
CN114253455A
Authority
CN
China
Prior art keywords
cache
hit rate
data
cache device
unit
Prior art date
Legal status
Pending
Application number
CN202010995183.0A
Other languages
Chinese (zh)
Inventor
徐佳宏
朱吕亮
刘瑞顺
Current Assignee
Shenzhen Ipanel TV Inc
Original Assignee
Shenzhen Ipanel TV Inc
Priority date
Filing date
Publication date
Application filed by Shenzhen Ipanel TV Inc filed Critical Shenzhen Ipanel TV Inc
Priority to CN202010995183.0A priority Critical patent/CN114253455A/en
Publication of CN114253455A publication Critical patent/CN114253455A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device
    • G06F 3/0676 Magnetic disk device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the invention provides a method, an apparatus, a device, and a storage medium for adjusting a cache hit rate. The method obtains the read duration spent reading data from a cache device in response to a data read request; determines a load parameter of the cache device from the read duration; determines, according to the load parameter, whether the cache hit rate of the cache device needs to be adjusted, and if so, determines a hit discount rate matching the load parameter; and responds to at least one subsequent data read request according to the hit discount rate, so that the cache hit rate of the at least one subsequent data read request is reduced. In this way, when the cache device is heavily loaded, the probability that a data read request hits the cache device is lowered, that is, the cache hit rate of the cache device is reduced, ensuring that the cache device maintains good data transmission performance.

Description

Cache hit rate adjusting method, device, equipment and storage medium
Technical Field
The present invention relates to the field of data storage technology, and in particular to a method, an apparatus, a device, and a storage medium for adjusting a cache hit rate.
Background
In an existing cache system, each disk is served externally through a fixed IP address and port and has a dedicated thread, so each disk can be regarded as an independent service instance. When a client requests data from the cache system, as long as the requested data is hot-spot data already cached in the system, the cache system returns that data to the client. As a result, when a large amount of hot-spot data requested by clients is concentrated on a single disk, the limited input/output performance of that disk and the limited network bandwidth of its network card restrict the output of hot-spot data, which seriously degrades the disk's data transmission performance and in turn reduces the cache system's ability to provide hot-spot data quickly.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for adjusting a cache hit rate, so as to lower the probability that a data read request hits the cache device when the cache device is heavily loaded, that is, to reduce the cache hit rate of the cache device, thereby ensuring that the cache device maintains good data transmission performance. The specific technical scheme is as follows:
In a first aspect, a method for adjusting a cache hit rate includes:
obtaining the read duration spent reading data from the cache device in response to a data read request;
determining a load parameter of the cache device according to the read duration;
determining, according to the load parameter of the cache device, whether the cache hit rate of the cache device needs to be adjusted, and if so, determining a hit discount rate matching the load parameter;
and responding to at least one subsequent data read request according to the hit discount rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
With reference to the first aspect, in some optional implementations, responding to the at least one subsequent data read request according to the hit discount rate so that its cache hit rate is reduced includes:
determining a target cache hit rate of the cache device according to the hit discount rate and the current cache hit rate of the cache device;
and responding to the at least one subsequent data read request according to the target cache hit rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
With reference to the previous embodiment, in some optional embodiments, responding to the at least one subsequent data read request according to the target cache hit rate so that its cache hit rate is reduced includes:
obtaining a first data read request;
when the data requested by the first data read request is stored in the cache device, generating a random number within a preset numerical range;
determining whether the random number is within a first numerical range; if so, returning a first storage address to the sender of the first data read request, the first storage address being the address in the cache device at which the data requested by the first data read request is stored; otherwise, returning to the sender of the first data read request an indication that the requested data is not stored in the cache device;
wherein the ratio of the first numerical range to the preset numerical range matches the target cache hit rate.
Optionally, in some optional embodiments, the mathematical relationship between the first numerical range, the preset numerical range, the target cache hit rate, and the current cache hit rate is as follows:
first numerical range = (target cache hit rate ÷ current cache hit rate) × preset numerical range.
Optionally, in some optional embodiments, the ratio of the first numerical range to the preset numerical range is equal to the hit discount rate.
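The formula above can be sketched numerically; the function and parameter names below are illustrative assumptions, not taken from the patent:

```python
def first_value_range(target_hit_rate, current_hit_rate, preset_upper=1.0):
    """Scale the preset numerical range [0, preset_upper) by
    target_hit_rate / current_hit_rate, giving the first numerical
    range [0, upper).  A random draw below `upper` is answered as a
    cache hit.  All names here are illustrative."""
    ratio = target_hit_rate / current_hit_rate
    return 0.0, ratio * preset_upper
```

For example, with a current cache hit rate of 0.8 and a target of 0.56, the first numerical range becomes [0, 0.7) within a preset range of [0, 1).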
In a second aspect, an apparatus for adjusting a cache hit rate includes: a duration obtaining unit, a load parameter determining unit, an adjustment determining unit, a discount rate determining unit, and a hit rate reducing unit;
the duration obtaining unit is configured to obtain the read duration spent reading data from the cache device in response to a data read request;
the load parameter determining unit is configured to determine the load parameter of the cache device according to the read duration;
the adjustment determining unit is configured to determine, according to the load parameter of the cache device, whether the cache hit rate of the cache device needs to be adjusted, and if so, to trigger the discount rate determining unit;
the discount rate determining unit is configured to determine a hit discount rate matching the load parameter;
the hit rate reducing unit is configured to respond to at least one subsequent data read request according to the hit discount rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
With reference to the second aspect, in some optional embodiments, the hit rate reducing unit includes: a target hit rate determining unit and a first hit rate reducing unit;
the target hit rate determining unit is configured to determine a target cache hit rate of the cache device according to the hit discount rate and the current cache hit rate of the cache device;
the first hit rate reducing unit is configured to respond to the at least one subsequent data read request according to the target cache hit rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
In combination with the previous embodiment, in some optional embodiments, the first hit rate reducing unit includes: a read request obtaining unit, a random number generating unit, a range determining unit, an address returning unit, and an indication returning unit;
the read request obtaining unit is configured to obtain a first data read request;
the random number generating unit is configured to generate a random number within a preset numerical range when the data requested by the first data read request is stored in the cache device;
the range determining unit is configured to determine whether the random number is within a first numerical range, and if so, to trigger the address returning unit; otherwise, to trigger the indication returning unit;
the address returning unit is configured to return a first storage address to the sender of the first data read request, the first storage address being the address in the cache device at which the data requested by the first data read request is stored;
the indication returning unit is configured to return, to the sender of the first data read request, an indication that the requested data is not stored in the cache device;
wherein the ratio of the first numerical range to the preset numerical range matches the target cache hit rate.
In a third aspect, a storage medium stores a program which, when executed by a processor, implements the above method for adjusting a cache hit rate.
In a fourth aspect, a device includes at least one processor and at least one memory connected to the processor by a bus; the processor and the memory communicate with each other through the bus; and the processor is configured to call a program in the memory, the program being used at least to implement any of the above methods for adjusting a cache hit rate.
According to the method, the apparatus, the device, and the storage medium for adjusting a cache hit rate provided by the embodiments of the present invention, the read duration spent reading data from the cache device in response to a data read request is obtained; a load parameter of the cache device is determined from the read duration; whether the cache hit rate of the cache device needs to be adjusted is determined according to the load parameter, and if so, a hit discount rate matching the load parameter is determined; and at least one subsequent data read request is responded to according to the hit discount rate, so that its cache hit rate is reduced. The invention can therefore lower the probability that a data read request hits the cache device when the cache device is heavily loaded, that is, reduce the cache hit rate of the cache device, thereby ensuring that the cache device maintains good data transmission performance. Of course, it is not necessary for any product or method practicing the invention to achieve all of the above advantages at the same time.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a client requesting hot data from a cache device according to the present invention;
fig. 2 is another flowchart of a client requesting hot data from a cache device according to the present invention;
FIG. 3 is a flow chart of a method for adjusting cache hit rate according to the present invention;
FIG. 4 is a flow chart of another method for adjusting cache hit rate according to the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for adjusting a cache hit rate according to the present invention;
fig. 6 is a schematic structural diagram of an apparatus provided in the present invention.
Detailed Description
In a practical system, some data is accessed frequently; such data may be referred to as hot-spot data. Hot-spot data is usually stored in a cache device within a cache system; because the cache device has a high data read/write speed, it can quickly provide data to other devices or systems and thereby speed up their data access.
Specifically, a storage system with a cache system may include: a client device, a data server, a cache server, a cache device, an index server, and a non-cache device. The cache device may be memory, a solid-state disk, or another storage device with a high data read/write speed. The non-cache device may be a device other than a cache device, such as a mechanical disk; its data read/write speed is generally lower than that of the cache device, for example, a mechanical disk generally reads and writes more slowly than memory or a solid-state disk. The cache system may include the cache server and the cache device.
The client device can be various electronic devices such as a computer, a mobile phone, a tablet computer, a wearable device and the like.
The data server may communicate with the client device via an SDK (Software Development Kit).
The cache server may store index information for the data stored in the cache device. When data is stored in units of data blocks, the index information may take the form (file_id, block_id) → (disk_id, block_id), where file_id is the file ID, block_id on the left is the data block ID, disk_id is the disk ID of the cache device, and block_id on the right is the disk block ID.
The storage space of one disk block equals the size of one complete data block.
From this index information it can be determined whether each data block of each file is stored in the cache device and, if so, on which disk and in which disk block.
Similarly, the index server may store index information for the data stored in the non-cache device. When data is stored in units of data blocks, this index information may likewise take the form (file_id, block_id) → (disk_id, block_id), where file_id is the file ID, block_id on the left is the data block ID, disk_id is the disk ID of the non-cache device, and block_id on the right is the disk block ID.
From this index information it can be determined whether each data block of each file is stored in the non-cache device and, if so, on which disk and in which disk block.
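The (file_id, block_id) → (disk_id, block_id) lookup described above can be sketched with a plain mapping; the concrete file and disk identifiers below are made-up illustrations:

```python
# Index maps (file_id, block_id) of a data block to the
# (disk_id, disk_block_id) where it is stored.  The same shape works
# for the cache server's index and the index server's index.
cache_index = {
    ("movie.ts", 0): ("cache-disk-1", 17),
    ("movie.ts", 1): ("cache-disk-2", 3),
}

def locate_block(index, file_id, block_id):
    """Return (disk_id, disk_block_id) for a block, or None when the
    block is not recorded in this index (i.e., not on this tier)."""
    return index.get((file_id, block_id))
```

A `None` result corresponds to the "not stored in the cache device" notification in the flow below.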
For ease of understanding, a data reading process of a storage system with a cache system is disclosed below:
When the data requested by the client device is stored in the cache device, the data reading process, shown in Fig. 1, is as follows:
S1. The client device sends a data access request to the data server;
S2. The data server forwards the data access request to the cache server;
S3. The cache server determines whether the requested data is stored in the cache device; if so, S4 is executed;
S4. The cache server sends the storage address of the requested data in the cache device to the data server;
S5. After receiving the storage address returned by the cache server, the data server reads the requested data from that address on the cache device;
S6. The data server returns the requested data to the client device.
When the data requested by the client device is not stored in the cache device, the data reading process, shown in Fig. 2, is as follows:
S1. The client device sends a data access request to the data server;
S2. The data server forwards the data access request to the cache server;
S3. The cache server determines whether the requested data is stored in the cache device; if not, S7 is executed;
S7. The cache server notifies the data server that the requested data is not stored in the cache device;
S8. After receiving this notification, the data server sends the data access request to the index server;
S9. The index server sends the storage address of the requested data in the non-cache device to the data server;
S10. After receiving the storage address returned by the index server, the data server reads the requested data from that address on the non-cache device;
S11. The data server returns the requested data to the client device.
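The two flows above can be sketched together in one function; all container names and addresses below are illustrative assumptions:

```python
def read_block(key, cache_index, noncache_index, cache_disks, noncache_disks):
    """Sketch of the Fig. 1 / Fig. 2 flows.  The data server first
    consults the cache server; on a hit it reads from the cache
    device; on a miss it consults the index server and reads from the
    non-cache device.  All names are illustrative."""
    addr = cache_index.get(key)                  # S3: cache-server lookup
    if addr is not None:                         # S4: hit, address returned
        disk_id, disk_block = addr
        return cache_disks[disk_id][disk_block]  # S5-S6: read from cache device
    disk_id, disk_block = noncache_index[key]        # S7-S9: miss, ask index server
    return noncache_disks[disk_id][disk_block]       # S10-S11: read non-cache device
```

A quick usage example: with one block on each tier, the same call serves both the hit path and the miss path.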
The inventor has found through research that, under this processing flow, when a client requests data from the cache system, the cache system returns the requested data to the client as long as it is hot-spot data already cached in the system. When a large amount of hot-spot data continuously requested by clients is concentrated on one disk, the limited input/output performance of the disk and the limited network bandwidth of the network card restrict the output of hot-spot data, seriously degrading the disk's data transmission performance and reducing the cache system's ability to provide hot-spot data quickly. To solve these problems, the present invention provides the following scheme.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
As shown in Fig. 3, the present invention provides a method for adjusting a cache hit rate, including:
S100. Obtain the read duration spent reading data from the cache device in response to a data read request;
It should be understood that the execution subject of the present invention may be the cache server. When data requested by a client is cached in the cache device, the data read request may be considered to hit hot-spot data. After such a hit, the cache server may obtain the read duration from the data server: the cache server may request it from the data server, or the data server may actively report it to the cache server, for example through ZooKeeper; the embodiments of the present invention do not limit this. ZooKeeper is a distributed, open-source coordination service for distributed applications, an open-source implementation of Google's Chubby, and an important component of Hadoop and HBase. It provides consistent services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
As described in S5 above, after receiving the storage address returned by the cache server, the data server reads the requested data from that address on the cache device. The time this read takes can be used as the read duration described above.
The data server may report this duration (or an average duration over a period of time) to the cache server, and the cache server may then convert it, for example into a disk score for the cache device; alternatively, the data server may convert the duration into a score itself and report the score to the cache server. The embodiments of the present invention do not limit this.
Since the read duration taken to serve a data read request may depend on the cache device's current input/output performance and bandwidth limits, the load condition of the cache device can be inferred from it. Here, the load of the cache device is the amount of work it performs in response to data read requests, which may include reading cached data, sending cached data, compressing data on the cache device, and similar tasks.
When the cache device must process many data read requests, serving them consumes its input/output capacity and network bandwidth; because both are limited, the read duration for each request can grow long.
The read duration therefore reflects, to some extent, the current input/output performance of the cache device: the longer the read duration, the worse that performance may be. It also reflects, to some extent, the load on the cache device: the longer the read duration, the heavier the load, meaning the cache device cannot process data read requests in time, its input/output performance drops, and the cache system's ability to provide hot-spot data quickly is reduced.
The read duration likewise reflects, to some extent, how fully the cache device's network bandwidth is used: the longer the read duration, the closer bandwidth usage is to saturation, leaving no spare bandwidth with which to improve data transmission.
It should be understood that the cache device referred to here may be a disk device, such as a solid-state disk. It may be a single specific solid-state disk, or a collective name for a solid-state disk server and the multiple solid-state disks communicatively connected to it; the present invention does not limit this.
S200. Determine a load parameter of the cache device according to the read duration;
It should be understood that the read duration reflects the load on the cache device to some extent, that is, how heavy a burden the device is bearing; this burden can be characterized by a load parameter, with a heavier burden giving a larger load parameter.
Optionally, the load parameter may be a load count, that is, the number of data read requests currently being processed.
Alternatively, the load parameter may be a value in a range matched to the read duration. For example, the load parameter may take values from 0 to 100, with a larger value indicating a heavier load, i.e., poorer input/output capability. Of course, the present invention places no restriction on the value range of the load parameter; any feasible choice falls within its protection scope.
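One plausible conversion from read duration to such a 0-to-100 load parameter is a clamped linear map; the thresholds and the linear shape below are invented for illustration and are not prescribed by the patent:

```python
def load_score(read_duration_ms, healthy_ms=5.0, saturated_ms=50.0):
    """Map an observed read duration (ms) onto a 0-100 load parameter:
    0 at or below healthy_ms, 100 at or above saturated_ms, linear in
    between.  Thresholds are illustrative assumptions."""
    if read_duration_ms <= healthy_ms:
        return 0.0
    if read_duration_ms >= saturated_ms:
        return 100.0
    return 100.0 * (read_duration_ms - healthy_ms) / (saturated_ms - healthy_ms)
```

Under this sketch, a read at the midpoint of the two thresholds scores 50, matching the intent that a larger value indicates a heavier load.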
S300. Determine, according to the load parameter of the cache device, whether the cache hit rate of the cache device needs to be adjusted; if so, execute S400;
It should be understood that because a cache device's capacity to read data is limited, the load it can bear is also limited. When the load parameter exceeds a certain range, the cache device can be considered to be running overloaded, that is, its performance has already degraded because the load is too high. In this case the cache hit rate of the cache device may be adjusted, that is, reduced, in order to lighten its load.
For example, an upper load threshold may be set for the cache device; when the load parameter exceeds this threshold, it may be determined that the cache hit rate should be adjusted. The present invention does not limit this.
S400. Determine a hit discount rate matching the load parameter;
It should be understood that after step S300 determines that the cache hit rate needs to be adjusted, a specific hit discount rate may be determined. The hit discount rate represents the degree to which the cache hit rate of the cache device is reduced: the larger the hit discount rate, the greater the reduction.
S500. Respond to at least one subsequent data read request according to the hit discount rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
It should be understood that there may be several ways to respond to the subsequent at least one data read request according to the hit discount rate. In the first way, if the hit discount rate is 30%, two value ranges can be defined, a first range and a second range, for example 0 to 0.7 and 0.7 to 1 respectively. For a given data read request, after it is determined that the request hits the cache device, a random number in the range 0 to 1 is obtained. If the random number falls within 0 to 0.7, the hit result is returned to the client. If it falls within 0.7 to 1, then although the request actually hit the cache device, a miss result is returned to the client instead. In this way the cache hit rate of the cache device is reduced.
In the second way, if the hit discount rate is 30%, two value ranges can again be defined, a first range and a second range, for example 0 to 70 and 70 to 100 respectively. For a given data read request, before determining whether it hits the cache device, a random number in the range 0 to 100 is obtained. If the random number falls within 0 to 70, the actual request result is returned to the client: if the request hit the cache device, a hit result is returned. If the random number falls within 70 to 100, a miss result is returned to the client regardless of whether the request actually hit the cache device. In this way, too, the cache hit rate of the cache device is reduced.
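The first way above can be sketched as follows; the function name, the string results, and the injectable random source are illustrative assumptions, not taken from the patent:

```python
import random

def respond_first_way(hit_in_cache, hit_discount_rate, rng=random.random):
    """After a confirmed cache hit, report a fraction of hits equal to
    the hit discount rate back to the client as misses.  Real misses
    are reported unchanged, with no random draw needed."""
    if not hit_in_cache:
        return "miss"
    # With a 30% discount rate the first range is [0, 0.7): a draw
    # inside it keeps the hit; a draw in [0.7, 1) converts it to a miss.
    return "hit" if rng() < 1.0 - hit_discount_rate else "miss"
```

The second way differs only in that the random draw happens before the hit lookup, so a draw in the second range forces a miss result regardless of the lookup.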
Optionally, in some optional embodiments, responding to the at least one subsequent data read request according to the hit discount rate so that its cache hit rate is reduced includes:
step one, determining a target cache hit rate of the cache device according to the hit discount rate and the current cache hit rate of the cache device;
and step two, responding to the at least one subsequent data read request according to the target cache hit rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
It should be appreciated that in practice, after determining the discount rate of hits, the target hit rate may also be determined based on the discount rate of hits and the current cache hit rate of the cache device. And then adjusting the cache hit rate of the subsequent data reading request hitting the cache device to be the target hit rate or close to the target hit rate.
It should be understood that the present invention reduces the cache hit rate by means of the above-mentioned random number, so that the cache hit rate observed for data read requests on the cache device can, to a certain extent, approach the target hit rate. However, for any single data read request, the probability of that request hitting the cache device does not necessarily approach the target hit rate, which is not limited by the present invention.
As shown in fig. 4, in combination with the previous embodiment, in some alternative embodiments, the second step includes:
s510, obtaining a first data reading request;
it should be understood that the first data read request is only taken as an example to illustrate how the cache device responds to the subsequent at least one data read request according to the hit discount rate, so that the cache hit rate of the subsequent at least one data read request is reduced. For any data read request, the same scheme as the present example can be adopted, and the present invention is not limited to this.
S520, when the data requested by the first data reading request is stored in the cache device, generating a random number, wherein the random number is within a preset numerical range;
It should be understood that hot spot data may be cached in the cache device in advance: when a piece of data is determined to be hot spot data, the data may be cached in the cache device and information about the data recorded there, for example by establishing index information for the data, so that whether the requested data is located in the cache device can be quickly determined from the data requested by the first data read request. When the index information of the requested data exists in the cache device, the cache device can be considered to store the requested data.
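The index lookup described above can be sketched as a simple in-memory mapping. The class and method names (`HotDataCache`, `contains`, `address_of`) are hypothetical, introduced only to illustrate the membership check.

```python
class HotDataCache:
    """Caches hot data and keeps an index so membership checks are fast."""

    def __init__(self):
        self._index = {}                 # key -> storage address in the cache

    def put(self, key, address):
        self._index[key] = address       # record the data's cache address

    def contains(self, key):
        # The requested data is considered cached iff its index entry exists.
        return key in self._index

    def address_of(self, key):
        return self._index.get(key)      # None when the data is not cached
```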
It should be understood that generating the random number only when the data requested by the first data read request is stored in the cache device can, to a certain extent, reduce the operation burden of the cache device. If the data requested by the first data read request does not exist in the cache device, the first data read request necessarily cannot hit the cache device, and the result that the first data read request misses the cache device is directly returned to the client, so there is no need to generate the random number or to perform the subsequent steps of reducing the cache hit rate according to the random number.
It should be understood that the present invention may generate the random number after determining that the data requested by the first data read request is stored in the cache device. Alternatively, the present invention may generate the random number as soon as the cache device obtains the first data read request, regardless of whether the cache device stores the requested data.
S530, determining whether the random number is in a first numerical range, if so, executing S540, otherwise, executing S550, wherein the ratio of the first numerical range to the numerical range of the preset numerical range is matched with the target cache hit rate;
It should be understood that the preset value range of the random number may be divided into two sub-ranges: when the random number falls in one of them (for example, the first numerical range), step S540 may be performed, and when it falls in the other, step S550 may be performed, which is not limited by the present invention.
S540, returning a first storage address to a sender of the first data read request, where the first storage address is a storage address of the data requested to be read by the first data read request in the cache device;
It should be understood that, if the random number is generated after it is determined that the data requested by the first data read request is stored in the cache device, the first storage address may be returned to the sender of the first data read request when the random number is in the first value range. After receiving the first storage address, the sender of the first data read request can obtain the requested data from the cache device.
Optionally, if the random number is generated immediately after the cache device obtains the first data read request, then when the random number is within the first value range, a response may be sent to the sender of the first data read request according to whether the first data read request hits the cache device: if the first data read request hits the cache device, the first storage address is sent to the sender; if it misses, a result that the first data read request misses the cache device is sent to the sender.
Optionally, the first storage address may be a storage address of the data requested to be read by the first data read request in the cache device, or may also be a storage address of the data requested to be read by the first data read request in another storage device, which is not limited in the present invention.
It should be understood that after obtaining the first storage address, the sender of the first data read request may obtain the data stored at the first storage address from the storage device, that is, obtain the data requested to be read by the first data read request.
And S550, returning an indication that the requested data is not stored in the cache device to a sender of the first data reading request.
It should be appreciated that, if the random number is not within the first range of values, an indication that the data requested by the first data read request is not stored in the cache device may be sent to the sender of the first data read request. The indication may take various forms; for example, the binary value "0" may be defined as such an indication, and other forms may also be adopted, which is not limited by the present invention.
It should be appreciated that, as long as the random number is not within the first range of values, the indication that the data requested by the first data read request is not stored in the cache device may be sent directly to the sender of the first data read request, without regard to whether the requested data is actually absent from the cache device. Even if the requested data is actually stored in the cache device, the indication that it is not stored is sent to the sender, so that the cache hit rate of the cache device is reduced.
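Putting steps S510 to S550 together gives the following sketch. The function name, the `index` dict, the `first_range` parameter, and the preset range of 100 are assumptions made for illustration, not limitations of the embodiment.

```python
import random

PRESET_RANGE = 100          # preset numerical range for the random number

def handle_read(index, key, first_range):
    """S510-S550: respond to one read request under a reduced hit rate.

    `index` maps keys to storage addresses in the cache device;
    `first_range` is the upper bound of the first numerical range.
    Returns a storage address on a reported hit, None on a reported miss.
    """
    # S520 precondition: only roll the random number if the data is actually
    # cached, sparing the cache device needless work on genuine misses.
    if key not in index:
        return None                           # S550 without a random number
    r = random.uniform(0, PRESET_RANGE)       # S520: generate the random number
    if r < first_range:                       # S530: in the first range?
        return index[key]                     # S540: return the storage address
    return None                               # S550: report a miss anyway
```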
It should be understood that steps S100, S200, S300, and S400 in fig. 4 have been described in the previous embodiment, and the description of this embodiment is omitted.
In some optional embodiments, in combination with the embodiment shown in fig. 4, a mathematical relationship among the first range of values, the preset range of values, the target cache hit rate, and the current cache hit rate is as follows:
the first numerical range is (the target cache hit rate ÷ the current cache hit rate) × the preset numerical range.
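A worked instance of this relationship (the concrete numbers are illustrative): with a current cache hit rate of 80%, a target cache hit rate of 40%, and a preset range of 0 to 100, the first numerical range is (0.4 ÷ 0.8) × 100 = 0 to 50, so only half of the genuine hits are reported as hits.

```python
def first_range_upper(target_hit_rate, current_hit_rate, preset_range):
    """Upper bound of the first numerical range per the relationship above."""
    return (target_hit_rate / current_hit_rate) * preset_range
```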
It should be understood that the above mathematical relationship is only one of the alternative embodiments of the present solution, and any suitable changes or modifications based on the above mathematical relationship are within the protection scope of the present invention.
Optionally, any manner of determining the result returned for the first data read request according to the random number, so as to reduce the cache hit rate of the cache device, falls within the protection scope of the present invention. The extent to which the cache hit rate is reduced may be set using the above mathematical relationship or in other manners according to the actual situation; any manner whose final effect is to reduce the cache hit rate of the cache device belongs to the protection scope of the present invention.
In some alternative embodiments, in combination with the embodiment shown in fig. 4, the ratio of the first range of values to the predetermined range of values is equal to the discount rate of hits.
It should be understood that, for a data read request, the random number may be obtained after it is determined that the data read request hits the cache device, and if the obtained random number is within the first value range, a result that the data read request hits the cache device is returned to the client. In this way, the ratio of the first value range to the predetermined value range may be equal to the hit discount rate, which is not limited by the present invention.
As shown in fig. 5, the present invention provides a device for adjusting a cache hit rate, including: a duration obtaining unit 100, a load parameter determining unit 200, an adjustment determining unit 300, a discount rate determining unit 400, and a hit rate reducing unit 500;
the duration obtaining unit 100 is configured to perform obtaining a read duration taken for reading data from the cache device according to the data read request;
the load parameter determining unit 200 is configured to determine the load parameter of the cache device according to the read duration;
the adjustment determining unit 300 is configured to determine, according to the load parameter of the cache device, whether the cache hit rate of the cache device needs to be adjusted, and if so, to trigger the discount rate determining unit 400;
the discount rate determining unit 400 is configured to determine a hit discount rate matched with the load parameter;
the hit rate reduction unit 500 is configured to perform responding to the at least one subsequent data read request according to the hit discount rate, so as to reduce a cache hit rate of the at least one subsequent data read request.
In some optional embodiments, in combination with the embodiment shown in fig. 5, the hit rate reduction unit 500 includes: a target hit rate determining unit and a first hit rate reducing unit;
the target hit rate determining unit is configured to determine a target cache hit rate of the cache device according to the hit discount rate and a current cache hit rate of the cache device;
the first hit rate reduction unit is configured to respond to the at least one subsequent data read request according to a target cache hit rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
In combination with the previous embodiment, in some optional embodiments, the first hit rate reduction unit includes: a read request obtaining unit, a random number generating unit, a range determining unit, an address returning unit and an indication returning unit;
the reading request obtaining unit is configured to obtain a first data reading request;
the random number generation unit is configured to generate a random number when the data requested by the first data reading request is stored in the cache device, wherein the random number is within a preset numerical range;
the range determining unit is configured to determine whether the random number is within the first numerical range, and if so, to trigger the address returning unit; otherwise, to trigger the indication returning unit;
the address returning unit is configured to return a first storage address to a sender of the first data read request, where the first storage address is a storage address, in the cache device, of the data requested to be read by the first data read request;
the indication returning unit is configured to return, to the sender of the first data read request, an indication that the requested data is not stored in the cache device;
wherein the ratio of the first numerical range to the preset numerical range matches the target cache hit rate.
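The units above can be sketched together as a single class. This is a hedged illustration only: the class and method names are invented, and the load-parameter and discount-rate policies (treating the read duration itself as the load parameter, with a linear discount) are stand-ins for whatever policies an actual embodiment uses.

```python
import random

class CacheHitRateAdjuster:
    """Mirrors the units of fig. 5: obtain the read duration, derive a load
    parameter, decide whether to adjust, and throttle reported hits."""

    def __init__(self, threshold_ms=50.0):
        self.threshold_ms = threshold_ms   # load level that triggers adjustment
        self.discount = 0.0                # 0 means no throttling

    def on_read(self, duration_ms):
        # Duration-obtaining + load-parameter units: here the load parameter
        # is simply the read duration itself (an assumption).
        if duration_ms > self.threshold_ms:            # adjustment unit
            # Discount-rate unit: a stand-in linear policy, capped at 90%.
            self.discount = min(0.9, duration_ms / self.threshold_ms - 1)
        else:
            self.discount = 0.0

    def report_hit(self, really_hit):
        # Hit-rate-reduction unit: downgrade a fraction of genuine hits.
        if not really_hit:
            return False
        return random.random() >= self.discount
```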
The present invention provides a storage medium for storing a program which, when executed by a processor, implements any one of the above cache hit rate adjustment methods.
The cache hit rate adjusting device comprises a processor and a memory, wherein the duration obtaining unit 100, the load parameter determining unit 200, the adjustment determining unit 300, the discount rate determining unit 400, the hit rate reducing unit 500 and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided. When the load of the cache device is high, the probability that a data read request hits the cache device is reduced by adjusting kernel parameters, that is, the cache hit rate of the cache device is reduced, so that the cache device maintains good data transmission performance.
The embodiment of the invention provides a processor, which is used for running a program, wherein the cache hit rate adjusting method is executed when the program runs.
As shown in fig. 6, an embodiment of the present invention provides an apparatus 70, where the apparatus 70 includes at least one processor 701, and at least one memory 702 and a bus 703 connected to the processor 701; the processor 701 and the memory 702 complete mutual communication through a bus 703; the processor 701 is configured to call the program instructions in the memory 702 to perform the above-mentioned cache hit rate adjustment method. The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application also provides a computer program product which, when executed on a data processing device, is adapted to carry out the steps of the above cache hit rate adjustment method.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal or a carrier wave.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for adjusting cache hit rate, comprising:
obtaining a reading duration spent on reading data from the cache device according to the data reading request;
determining a load parameter of the cache device according to the reading duration;
determining, according to the load parameter of the cache device, whether the cache hit rate of the cache device needs to be adjusted, and if so, determining a hit discount rate matched with the load parameter;
and responding to the at least one subsequent data reading request according to the hit discount rate so as to reduce the cache hit rate of the at least one subsequent data reading request.
2. The method of claim 1, wherein responding to the at least one subsequent data read request according to the discount-on-hit ratio such that the cache hit ratio of the at least one subsequent data read request is reduced comprises:
determining a target cache hit rate of the cache device according to the hit discount rate and the current cache hit rate of the cache device;
and responding to the at least one subsequent data reading request according to the target cache hit rate so as to reduce the cache hit rate of the at least one subsequent data reading request.
3. The method of claim 2, wherein responding to the at least one subsequent data read request according to the target cache hit rate such that the cache hit rate of the at least one subsequent data read request is reduced comprises:
obtaining a first data reading request;
when the data requested by the first data reading request is stored in the cache device, generating a random number, wherein the random number is within a preset numerical range;
determining whether the random number is within a first numerical range, if so, returning a first storage address to a sender of the first data reading request, wherein the first storage address is a storage address of data requested to be read by the first data reading request in the cache device; otherwise, returning an indication that the requested data is not stored in the cache device to a sender of the first data reading request;
wherein a ratio of the first numerical range to the preset numerical range matches the target cache hit rate.
4. The method of claim 3, wherein a mathematical relationship between the first range of values, the preset range of values, the target cache hit rate, and the current cache hit rate is as follows:
the first numerical range is (the target cache hit rate ÷ the current cache hit rate) × the preset numerical range.
5. The method of claim 3, wherein a ratio of the first range of values to the predetermined range of values is equal to the discount hit rate.
6. An apparatus for adjusting cache hit rate, comprising: the device comprises a duration obtaining unit, a load parameter determining unit, an adjustment determining unit, a discount rate determining unit and a hit rate reducing unit;
the time length obtaining unit is configured to execute obtaining of a reading time length taken for reading data from the cache device according to the data reading request;
the load parameter determining unit is configured to determine the load parameter of the cache device according to the reading duration;
the adjustment determining unit is configured to determine whether the cache hit rate of the cache device needs to be adjusted according to the load parameter of the cache device, and if so, trigger the discount rate determining unit
The discount rate determination unit is configured to determine a hit discount rate matching the load parameter;
the hit rate reduction unit is configured to respond to the at least one subsequent data read request according to the hit discount rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
7. The apparatus of claim 6, wherein the hit rate reduction unit comprises: a target hit rate determining unit and a first hit rate reducing unit;
the target hit rate determining unit is configured to determine a target cache hit rate of the cache device according to the hit discount rate and a current cache hit rate of the cache device;
the first hit rate reduction unit is configured to respond to the at least one subsequent data read request according to a target cache hit rate, so that the cache hit rate of the at least one subsequent data read request is reduced.
8. The apparatus of claim 7, wherein the first hit rate reduction unit comprises: a read request obtaining unit, a random number generating unit, a range determining unit, an address returning unit and an indication returning unit;
the reading request obtaining unit is configured to obtain a first data reading request;
the random number generation unit is configured to generate a random number when the data requested by the first data reading request is stored in the cache device, wherein the random number is within a preset numerical range;
the range determining unit is configured to determine whether the random number is within a first numerical range, if so, trigger the address returning unit, otherwise, trigger the indication returning unit
The address returning unit is configured to return a first saving address to a sender of the first data reading request, where the first saving address is a saving address of data requested to be read by the first data reading request in the cache device;
the indication returning unit is configured to return, to the sender of the first data read request, an indication that the requested data is not stored in the cache device;
wherein a ratio of the first numerical range to the preset numerical range matches the target cache hit rate.
9. A storage medium for storing a program, wherein the program when executed by a processor implements the method for adjusting a cache hit rate according to any one of claims 1 to 5.
10. An apparatus comprising at least one processor, and at least one memory, bus connected to the processor; the processor and the memory complete mutual communication through the bus; the processor is configured to call a program in the memory, the program at least being configured to implement the cache hit rate adjustment method according to any one of claims 1 to 5.
CN202010995183.0A 2020-09-21 2020-09-21 Cache hit rate adjusting method, device, equipment and storage medium Pending CN114253455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010995183.0A CN114253455A (en) 2020-09-21 2020-09-21 Cache hit rate adjusting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114253455A true CN114253455A (en) 2022-03-29

Family

ID=80788264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010995183.0A Pending CN114253455A (en) 2020-09-21 2020-09-21 Cache hit rate adjusting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114253455A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115829825A (en) * 2023-01-10 2023-03-21 南京砺算科技有限公司 Method for controlling loading of primitive data, graphics processor, device, and storage medium
CN115829825B (en) * 2023-01-10 2023-05-05 南京砺算科技有限公司 Loading control method of primitive data, graphic processor, equipment and storage medium

Similar Documents

Publication Publication Date Title
US9984013B2 (en) Method, controller, and system for service flow control in object-based storage system
CN101510219B (en) File data accessing method, apparatus and system
CN113010818B (en) Access current limiting method, device, electronic equipment and storage medium
CN106713028B (en) Service degradation method and device and distributed task scheduling system
CN111614736A (en) Network content resource scheduling method, domain name scheduling server and electronic equipment
US10263876B2 (en) Adaptive service timeouts
CN112422610B (en) Intelligent gateway method and system based on distributed object storage
CN109151512A (en) The method and device of content is obtained in CDN network
CN111782692B (en) Frequency control method and device
US20170153909A1 (en) Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine
US11431669B2 (en) Server configuration method and apparatus
US20200374376A1 (en) Distributing Requests for Data Among Servers Based On Indicators of Intent to Access the Data
CN109951543A (en) A kind of data search method of CDN node, device and the network equipment
CN114253455A (en) Cache hit rate adjusting method, device, equipment and storage medium
CN104202349B (en) The method of scheduling distributed buffer resources, Apparatus and system
CN106612263B (en) Method and equipment for processing application access request
CN114253456A (en) Cache load balancing method and device
CN110781500A (en) Data wind control system and method
CN116055401A (en) Message processing method, device, equipment and storage medium
CN105763508B (en) Data access method and application server
CN110868333A (en) Data caching method and system for gateway
CN114500484A (en) Page rendering method and device, electronic equipment and readable medium
CN114500663B (en) Scheduling method, device, equipment and storage medium of content distribution network equipment
CN113989034B (en) Bank attribute data management method and device, electronic equipment and storage medium
CN109302484B (en) User request processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination