CN117149836B - Cache processing method and device

Info

Publication number: CN117149836B
Application number: CN202311405661.8A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: queue, query index, weight, resource element, current query
Other versions: CN117149836A (application publication)
Other languages: Chinese (zh)
Inventors: 吴璟, 韩勇, 韩丰景, 王永强, 沈梦伶
Applicant and assignee (original and current): China Unicom Online Information Technology Co Ltd

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • G06F16/24578Query processing with adaptation to user needs using ranking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue


Abstract

The invention relates to the technical field of computer caching, and provides a cache processing method and apparatus. The method comprises: when it is determined that a current query index of content to be cached already exists, judging whether the current query index is in a first queue or a second queue; when it is judged to be in the first queue, placing the content to be cached at the tail of the first queue and increasing the hit count of the current query index by one for subsequent caching; when it is judged to be in the second queue, determining whether available space exists for caching, deciding accordingly whether to perform temporary elimination, temporarily storing the temporarily eliminated resource elements in a temporary queue, selecting the resource elements that satisfy the movement condition, and moving them to the tail of the first queue for subsequent caching. The invention effectively improves the cache hit rate while keeping the weight calculation of the current resource element efficient.

Description

Cache processing method and device
Technical Field
The present invention relates to the field of computer caching technologies, and in particular, to a cache processing method and apparatus.
Background
With the development of society, most regions now have internet access, but users in different regions experience different response speeds when visiting the same website. To increase the speed at which websites respond to user access, content delivery networks (Content Delivery Network, CDN) have emerged. A CDN is an intelligent virtual network built on top of the existing network; by means of cache servers deployed in various places, a user can obtain the required information as quickly as possible from the location closest to the user, which greatly reduces network congestion and improves the response speed and hit rate of user access. An important part of a CDN is the cache server, which pulls resources from the CDN origin and caches them. When a user accesses, the resource can be read from the cache server, processed, and returned. At present, cache elimination algorithms perform elimination purely according to the frequency of data access, which, for a CDN service, cannot satisfy high service availability and the required quality of customer service. In addition, there is still much room for improvement in raising the effective hit rate while taking both the hit count and the size of cache elements into account, and in scanning quickly to reduce lookup cost.
Therefore, there is a need to provide a cache processing method to solve the above-mentioned problems.
Disclosure of Invention
The object of the present invention is to provide a cache processing method and apparatus that address the technical problems of the prior art: that high service availability and the required quality of customer service cannot be satisfied, that the effective hit rate should be improved while taking both the hit count and the size of cache elements into account, and that lookup cost should be reduced through fast scanning.
The first aspect of the present invention proposes a cache processing method, comprising: when content to be cached is received, judging whether a current query index of the content to be cached already exists; if the current query index already exists, judging whether it is in a first queue or a second queue, wherein the first queue is used to store first-level resource elements together with their first-class storage indexes, and the second queue is used to store only the second-class storage indexes of second-level resource elements; when the current query index is in the first queue, placing the content of the resource element corresponding to the current query index at the tail of the first queue and increasing the hit count of the current query index by one for subsequent cache processing; when the current query index is in the second queue, determining whether available space exists for caching, deciding accordingly whether to perform temporary elimination on the resource elements in the first queue, temporarily storing the eliminated resource elements in a temporary queue, selecting, from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, the resource elements that satisfy the movement condition, and moving them to the tail of the first queue for subsequent cache processing.
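As a minimal sketch of this lookup flow, the following Python rendering may help (the patent does not prescribe a language; the class name, the dict-based elements, and the use of OrderedDict to keep LRU order are assumptions for illustration):

```python
from collections import OrderedDict

class ThreeQueueCache:
    """Hypothetical sketch of the first/second queue dispatch described above."""

    def __init__(self, total_size: int):
        self.total_size = total_size       # total cache space capacity
        self.first_queue = OrderedDict()   # key -> element dict (stores content)
        self.second_queue = OrderedDict()  # key -> hit count (index only)

    def access(self, key: str, content: bytes) -> None:
        if key in self.first_queue:
            # In the first queue: move the element to the tail, hit count + 1.
            element = self.first_queue.pop(key)
            element["hit"] += 1
            self.first_queue[key] = element
        elif key in self.second_queue:
            # Index known but content not cached: eviction/weight path.
            self.second_queue[key] += 1
            self._promote(key, content)
        else:
            # Unknown index: store the index only, without content.
            self.second_queue[key] = 1

    def _promote(self, key: str, content: bytes) -> None:
        # Placeholder for the temporary elimination and weight comparison
        # steps sketched in the detailed description below.
        pass
```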
According to an alternative embodiment, when the current query index is determined to be in the second queue, determining whether there is available space for caching comprises: calculating the sum of the content size of the resource element corresponding to the current query index and the content sizes of all elements currently in the first queue to obtain a current space capacity value; and judging whether this value is larger than the total cache space capacity in order to determine whether available space exists for caching.
According to an optional embodiment, when the current space capacity value is greater than or equal to the total cache space capacity, it is determined that no space is available to cache the resource element corresponding to the current query index, and temporary elimination is performed on the resource elements in the first queue; when the current space capacity value is smaller than the total cache space capacity, it is determined that space is available, and the resource element corresponding to the current query index is cached directly into the first queue.
According to an alternative embodiment, the method further comprises: by comparing the size of the available space with the size of the resource element corresponding to the current query index, repeatedly determining whether enough space is available to cache that resource element, and repeatedly deciding to temporarily eliminate one or more resource elements in the first queue, until it is determined that the currently available space can hold the resource element corresponding to the current query index.
According to an alternative embodiment, when the temporary elimination of resource elements in the first queue is determined to be complete, the current element hit-rate weight of the resource element corresponding to the current query index is calculated with the following expression:

Weight(key) = phit * ptype * pdomain * ptime (1)

where Weight(key) is the weight of the resource element corresponding to the current query index; phit is the hit rate per byte of that resource element, expressed as the ratio of the hit count to the size of the resource element, i.e. phit = hit / value_size; ptype is the category allocation weight of the resource type corresponding to the current query index; pdomain is the user weight of the resource element corresponding to the current query index, adjusted according to the corresponding user billing and service bandwidth; ptime is the weight reflecting whether the resource element corresponding to the current query index has expired.
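As a sketch, expression (1) can be computed as follows (Python for illustration; the parameter names mirror the symbols defined above):

```python
def weight(hit: int, value_size: int, ptype: float,
           pdomain: float, ptime: float) -> float:
    """Expression (1): Weight(key) = phit * ptype * pdomain * ptime."""
    phit = hit / value_size  # per-byte hit rate: phit = hit / value_size
    return phit * ptype * pdomain * ptime
```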
According to an alternative embodiment, the resource element weights of the resource elements in the temporary queue are calculated in turn with the following expression:

Weight(key)_s' = phit_s' * ptype_s' * pdomain_s' * ptime_s' (6)

where Weight(key)_s' is the resource element weight of the s-th resource element in the temporary queue, s being a positive integer 1, 2, 3, ..., n; phit_s' is the hit rate per byte of the s-th resource element in the temporary queue, expressed as the ratio of its hit count to its size, i.e. phit_s' = hit_s' / value_size_s'; ptype_s' is the category allocation weight of the resource type of the s-th resource element in the temporary queue; pdomain_s' is the user weight of the s-th resource element in the temporary queue, adjusted according to the corresponding user billing and service bandwidth; ptime_s' is the weight reflecting whether the s-th resource element in the temporary queue has expired;

the calculated current element hit-rate weight Weight(key) of the resource element corresponding to the current query index is then compared in turn with the calculated resource element weights Weight(key)_s' of the resource elements in the temporary queue;

when the calculated Weight(key) is smaller than the calculated Weight(key)_s', the current query index of the content to be cached is placed in the second queue, and the resource element corresponding to Weight(key)_s' is put back at the tail of the first queue;

when the calculated Weight(key) is greater than or equal to the calculated Weight(key)_s', the content to be cached is cached at the tail of the first queue, and whether one or more resource elements in the temporary queue are put back into the first queue is decided according to the currently available space of the first queue.
According to an alternative embodiment, when the sum of the size of the currently cached content and the size of the resource element corresponding to the current query index is greater than or equal to the total cache space capacity, one or more resource elements are taken out of the first queue and their weights compared, re-checking whether this sum still exceeds the total cache space capacity, until the resource element corresponding to the current query index satisfies the movement condition and is cached into the first queue.
According to an alternative embodiment, the category allocation weight of the resource type corresponding to the current query index is calculated with the following expression:

ptype = (type_size - cur_type_size + total_size)/total_size (4)

where ptype is the category allocation weight of the resource type corresponding to the current query index; type_size is the space allocated to each category; cur_type_size is the space already occupied in the first queue by the resource category of the resource element corresponding to the current query index; total_size is the total space available to the resource categories of all resource elements in the first queue.
According to an alternative embodiment, the user weight is calculated with the following expression:

pdomain = billing * bandwidth (5)

where pdomain is the user weight of the resource element corresponding to the current query index, adjusted according to the corresponding user billing and service bandwidth; billing is the weight determined by the billing set for the user according to unit price; bandwidth is the weight determined by the user's current service bandwidth.
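Expressions (4) and (5) admit an equally direct sketch (all sizes assumed to be in the same unit, e.g. MB):

```python
def ptype(type_size: float, cur_type_size: float, total_size: float) -> float:
    """Expression (4): category allocation weight of the resource type."""
    return (type_size - cur_type_size + total_size) / total_size

def pdomain(billing: float, bandwidth: float) -> float:
    """Expression (5): user weight from the billing and bandwidth weights."""
    return billing * bandwidth
```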
A second aspect of the present invention provides a cache processing apparatus that adopts the cache processing method described in the first aspect, comprising: a reception processing module, configured to judge, when content to be cached is received, whether a current query index of the content to be cached already exists; a determination processing module, configured to judge, if the current query index already exists, whether it is in a first queue or a second queue, wherein the first queue is used to store first-level resource elements together with their first-class storage indexes, and the second queue is used to store only the second-class storage indexes of second-level resource elements; a first cache processing module, configured to place, when the current query index is in the first queue, the content of the resource element corresponding to the current query index at the tail of the first queue and increase the hit count of the current query index by one for subsequent cache processing; and a second cache processing module, configured to determine, when the current query index is in the second queue, whether available space exists for caching, decide accordingly whether to perform temporary elimination on the resource elements in the first queue, temporarily store the eliminated resource elements in a temporary queue, select, from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, the resource elements that satisfy the movement condition, and move them to the tail of the first queue for subsequent cache processing.
A third aspect of the present invention provides an electronic apparatus, comprising: one or more processors; a storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect of the present invention.
A fourth aspect of the invention provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method according to the first aspect of the invention.
The embodiment of the invention has the following advantages:
Compared with the prior art, by judging whether the current query index of the content to be cached already exists and then judging whether it is in the first queue or the second queue, placing the content of the corresponding resource element at the tail of the first queue and increasing the hit count by one when it is in the first queue, the multi-level existence checks on the lookup index effectively reduce lookup cost and thus achieve more efficient cache processing. When the current query index is in the second queue, whether available space exists for caching is determined in order to decide whether to perform temporary elimination on the resource elements in the first queue; the eliminated resource elements are temporarily stored in the temporary queue, the resource elements satisfying the movement condition are selected from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, and the selected elements are moved to the tail of the first queue; again the multi-level existence checks on the lookup index effectively reduce lookup cost and achieve more efficient cache processing. The storage position of the content to be cached is determined through three queues and through resource element weight calculation and comparison, so that the effective hit rate can be improved while both the hit count and the size of cache elements are taken into account.

In addition, by comparing the size of the available space with the size of the resource element corresponding to the current query index, repeatedly determining whether enough space is available to cache that resource element, and repeatedly deciding to temporarily eliminate one or more resource elements in the first queue until the currently available space can hold the resource element corresponding to the current query index, the effective cache hit rate can be further improved.
Drawings
FIG. 1 is a flow chart of steps of an example of a cache processing method of the present invention;
FIG. 2 is a flow chart of an embodiment of a cache processing method according to the present invention;
FIG. 3 is a block diagram of an example of a cache processing apparatus of the present invention;
FIG. 4 is a schematic diagram of the architecture of a computer device according to one embodiment of the invention;
FIG. 5 is a schematic diagram of a computer program product of one embodiment of the invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In view of the above, the present invention proposes a cache processing method in which the redesigned cache algorithm takes factors such as resource type, size and hit count into account and comprises three weighted LRU linked lists: a first queue (also called the cache queue), a second queue (also called the ready queue) and a temporary queue (also called the temp queue). The storage position of the content to be cached is determined through these three queues and through resource element weight calculation and comparison, so that the effective hit rate can be improved while both the hit count and the size of cache elements are taken into account. Multi-level existence checks on the lookup index effectively reduce lookup cost.
Example 1
FIG. 1 is a flow chart of steps of an example of a cache processing method of the present invention.
The following describes the present invention in detail with reference to fig. 1 and 2.
As shown in fig. 1, in step S101, when content to be cached is received, it is determined whether a current query index of the content to be cached already exists.
In a specific embodiment, as shown in fig. 2, when the content to be cached is received, it is first judged whether the current query index key of the content to be cached already exists; if it does, it is further judged whether the current query index key is in the first queue or the second queue.
When it is determined that the current query index key does not exist, a miss is recorded: the current query index key is placed into the second queue, only the query index is stored, and the cache processing flow ends. The next content to be cached is then received, and whether its current query index already exists is judged in the same way.
For example, when a new piece of content needs to be cached, it is first determined whether its current query index key already exists in the cache. When it does not exist, the element first enters the second queue: the current query index key is stored, the cached content is not stored, and the hit count is set to its initial value.
When the query index key already exists, the hit count is increased by one, and it is then judged whether the current query index key is in the first queue or the second queue.
It should be noted that the foregoing is merely illustrative of the present invention and is not to be construed as limiting thereof.
Next, in step S102, if it is determined that the current query index of the content to be cached already exists, it is judged whether the current query index is in the first queue or the second queue, wherein the first queue is used to store first-level resource elements together with their first-class storage indexes, and the second queue is used to store only the second-class storage indexes of second-level resource elements.
That is, on the condition that the current query index of the content to be cached already exists, it is judged whether that index is in the first queue or the second queue.
Specifically, the first queue (also called the cache queue) stores the first-level resource elements and the first-class storage indexes. The second queue (also called the ready queue) is used to store the second-class storage indexes of the second-level resource elements; that is, it stores only indexes belonging to the second class and does not store the content of any resource element.
It should be noted that, in the present invention, a resource element of the first queue is (key, value, hit, type, domain, time), abbreviated below as (key, value, hit), where key is the lookup index, value is the cached content, hit is the hit count, type is the resource type, domain is the resource domain name, and time is the resource expiration time. The sum of the sizes of all element values in the first queue satisfies: value1.size + value2.size + ... + valuen.size <= total_size, where total_size is the currently allocated total cache capacity; the first queue holds first-level resource elements such as hot-spot resource content. A resource element of the second queue is (key, hit, type, domain, time), abbreviated below as (key, hit); it contains no cached content and stores only the storage index corresponding to a second-level resource element. In some embodiments, the second queue also holds access records and the like. The foregoing is illustrative only and is not to be construed as limiting the invention.
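One possible rendering of these element tuples (an illustrative assumption; the patent names the fields but not a concrete representation):

```python
from dataclasses import dataclass

@dataclass
class FirstQueueElement:
    """First-queue element (key, value, hit, type, domain, time)."""
    key: str      # lookup index
    value: bytes  # cached content
    hit: int      # hit count
    type: str     # resource type
    domain: str   # resource domain name
    time: float   # resource expiration time

@dataclass
class SecondQueueElement:
    """Second-queue element (key, hit, type, domain, time): no content field."""
    key: str
    hit: int
    type: str
    domain: str
    time: float
```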
Next, in step S103, when it is determined that the current query index is in the first queue, the content of the current resource element corresponding to the current query index is placed at the tail position of the first queue, and the hit number of the current query index is increased by one for the subsequent cache processing.
In a specific embodiment, when the current query index is in the first queue, the resource element corresponding to the current query index is taken out of the first queue and placed at its tail, and the hit count of the current query index is increased by one. For example, if the hit count of the current query index was h1, it becomes h_current = h1 + 1.
It should be noted that the foregoing is merely illustrative of the present invention and is not to be construed as limiting thereof.
Next, in step S104, when it is judged that the current query index is in the second queue, it is determined whether available space exists for caching, in order to decide whether to perform temporary elimination on the resource elements in the first queue; the temporarily eliminated resource elements are stored in the temporary queue, the resource elements satisfying the movement condition are selected from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, and the selected elements are moved to the tail of the first queue for subsequent cache processing.
When the current query index is judged to be in the second queue, it is determined whether available space exists for caching.
Specifically, the sum of the content size of the resource element corresponding to the current query index and the content sizes of all elements currently in the first queue is calculated to obtain the current space capacity value. It is then judged whether this value is larger than the total cache space capacity, in order to determine whether available space exists for caching.
Optionally, when the current space capacity value is greater than or equal to the total cache space capacity, it is determined that no space is available to cache the resource element corresponding to the current query index, and temporary elimination is performed on the resource elements in the first queue: one or more resource elements are taken out of the first queue and placed in the temporary queue, so as to free enough cache capacity for the content to be cached.
In a specific embodiment, when the sum of the size of the currently cached content and the size of the resource element corresponding to the current query index is greater than or equal to the total cache space capacity, one or more resource elements are taken out of the first queue and their weights compared, until the resource element corresponding to the current query index satisfies the movement condition and is cached into the first queue.
The movement condition is that, while the total cache space capacity of the first queue is respected and the available space can hold the content to be cached (i.e. the resource element to be cached, such as the resource element corresponding to the current query index), the weight of the content to be cached is larger than the weight of each resource element in the temporary queue.
For the temporary elimination processing, the cache processing method also uses a temporary queue, which temporarily stores the resource elements while temporary elimination is carried out.
In a specific embodiment, one resource element is taken out of the first queue (its content size denoted value'.size), and cache.size + value.size - value'.size <= total_size holds; that is, taking out this single element already frees enough capacity to store the content to be cached in the first queue.
In another embodiment, one resource element is taken out of the first queue (its content size again denoted value'.size), and cache.size + value.size - value'.size > total_size. In this case, the capacity freed by taking out one resource element still cannot satisfy the requirement of storing the content to be cached in the first queue; therefore, one or more further resource elements must be taken out (with value'.size then denoting the total content size of all elements taken out) until the following condition is met, at which point fetching resource elements from the first queue stops: cache.size + value.size - value'.size <= total_size.
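A sketch of this repeated fetch, assuming the first queue is an OrderedDict whose head is the least recently used element and whose element dicts carry a value_size field (both assumptions for illustration):

```python
from collections import OrderedDict

def evict_until_fits(first_queue: OrderedDict, temp_queue: OrderedDict,
                     cache_size: int, value_size: int, total_size: int) -> int:
    """Pop head elements into the temporary queue until
    cache.size + value.size - value'.size <= total_size holds."""
    evicted = 0  # value'.size: total size of the elements taken out so far
    while cache_size + value_size - evicted > total_size and first_queue:
        key, element = first_queue.popitem(last=False)  # head of the queue
        temp_queue[key] = element                       # temporary elimination
        evicted += element["value_size"]
    return evicted
```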
Optionally, the one or more resource elements are taken out starting from the head of the first queue, so as to free enough cache capacity for the content to be cached.
By comparing the size of the available space with the size of the resource element corresponding to the current query index, repeatedly determining whether enough space is available to cache that resource element, and repeatedly deciding to temporarily eliminate one or more resource elements in the first queue until the currently available space can hold the resource element corresponding to the current query index, the cache processing efficiency can be further improved.
In yet another embodiment, one or more resource elements are first taken out of the first queue and their weights compared, while judging whether the sum of the size of the currently cached content and the size of the content to be cached exceeds the total cache space capacity, so as to free enough cache capacity for the content to be cached.
The current element hit-rate weight of the resource element corresponding to the current query index is calculated with the following expression:

Weight(key) = phit * ptype * pdomain * ptime (1)

where Weight(key) is the weight of the resource element corresponding to the current query index; phit is the hit rate per byte of that resource element, expressed as the ratio of the hit count to the size of the resource element, i.e. phit = hit / value_size; ptype is the category allocation weight of the resource type corresponding to the current query index; pdomain is the user weight of the resource element corresponding to the current query index, adjusted according to the corresponding user billing and service bandwidth; ptime is the weight reflecting whether the resource element corresponding to the current query index has expired.
When a single resource element is taken out of the first queue, that element is located at the head of the first queue, and its resource element weight is calculated with the following expression:

Weight(key)_c' = phit_c' * ptype_c' * pdomain_c' * ptime_c' (2)

where Weight(key)_c' is the resource element weight of the resource element at the head of the first queue, with c equal to 1; phit_c' is the hit rate per byte of that resource element, expressed as the ratio of its hit count to its size, i.e. phit_c' = hit_c' / value_size_c'; ptype_c' is the category allocation weight of the resource type of the resource element at the head of the first queue; pdomain_c' is the user weight of that resource element, adjusted according to the corresponding user billing and service bandwidth; ptime_c' is the weight reflecting whether the resource element at the head of the first queue has expired.

When the calculated Weight(key) is smaller than the calculated Weight(key)_c', the current query index of the content to be cached is placed in the second queue, and the resource element corresponding to Weight(key)_c' remains stored in the first queue.

When Weight(key) is greater than or equal to Weight(key)_c', the resource element corresponding to Weight(key)_c' is placed in the temporary queue, and it is judged whether the sum of the size of the currently cached content and the size of the resource element corresponding to the current query index is larger than the total cache space capacity.
When a plurality of resource elements (e.g., a specified number of resource elements) are taken out of the first queue, the resource element weights of the elements in the first queue are calculated in turn:

Weight(key)_c'' = phit_c'' * ptype_c'' * pdomain_c'' * ptime_c'' (3)

where Weight(key)_c'' is the resource element weight of the c-th resource element in the first queue, c being a positive integer 1, 2, 3, ..., n; phit_c'' is the hit rate per byte of the c-th resource element in the first queue, expressed as the ratio of its hit count to its size, i.e. phit_c'' = hit_c'' / value_size_c''; ptype_c'' is the category allocation weight of the resource type of the c-th resource element in the first queue; pdomain_c'' is the user weight of the c-th resource element in the first queue, adjusted according to the corresponding user billing and service bandwidth; ptime_c'' is the weight reflecting whether the c-th resource element in the first queue has expired.

The Weight(key)_c'' of each fetched resource element is compared in turn with the resource element weight of the content to be cached, and the following judgment is made:

when the calculated Weight(key) is smaller than the calculated Weight(key)_c'', the current query index of the content to be cached is placed in the second queue, and the resource element corresponding to Weight(key)_c'' remains stored in the first queue;

when Weight(key) is greater than or equal to Weight(key)_c'', the resource element corresponding to Weight(key)_c'' is placed in the temporary queue, and it is judged whether the sum of the size of the currently cached content and the size of the resource element corresponding to the current query index is larger than the total cache space capacity.
For example, consider the elimination of a certain resource element with total cache space capacity total_size = 1000 MB. The content to be cached has resource size value_size = 10 MB and a hit count of 5, so the per-byte hit rate is phit = 0.5. The resource type of the content to be cached is a video-class resource with type_percentage = 3/5, i.e. type_size = 600 MB; the class currently occupies 400 MB (410 MB counting the 10 MB element under consideration), so the type weight is ptype = (600 - 410 + 1000)/1000 = 1.19. Using the resource domain name www.a.com for the parameter pdomain, with the corresponding user billing = 3.2 and service bandwidth = 2, the parameter pdomain = 6.4. The parameter ptime is in seconds; for example, with 10 seconds left until expiry, ptime = 10. Then, according to expression (1), the weight of the resource element of the content to be cached is Weight(key) = 0.5 * 1.19 * 6.4 * 10 = 38.08.
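The arithmetic of this example can be checked directly (a plain recomputation of the numbers quoted above):

```python
phit = 5 / 10                          # 5 hits over 10 MB -> 0.5
ptype = (600 - 410 + 1000) / 1000      # expression (4) -> 1.19
pdomain = 3.2 * 2                      # billing * bandwidth -> 6.4
ptime = 10                             # 10 seconds until expiry

print(phit * ptype * pdomain * ptime)  # -> 38.08 (up to float rounding)
```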
In an alternative embodiment, the hit-rate weight of each resource element in the first queue is calculated, and the resource element with the smallest hit-rate weight is taken out of the first queue. For example, for each resource element in the first queue, the product of its per-byte hit rate, the category allocation weight of its resource type, its user weight, and its expiry weight is used as its hit-rate weight, so as to determine the resource element with the smallest such weight.
Then, when the temporary elimination of resource elements in the first queue is determined to be complete, the resource element movement processing is performed.
Before the movement processing, it is judged whether the temporary queue contains any resource element; if it does not, the movement processing is stopped.
When the temporary queue contains resource elements, since their weights are smaller than the weight of the resource element corresponding to the current query index, whether one or more of them are put back into the first queue is decided according to the currently available space of the first queue (i.e. the total cache capacity minus the sizes of all resource elements in the first queue). In this way, the time spent on cache processing can be effectively reduced and the cache processing efficiency further improved.
In another embodiment, the above expression (1) is adopted to calculate the current element hit rate weight value of the resource element corresponding to the current query index.
The category allocation weight of the resource type corresponding to the current query index is calculated with the following expression:

ptype = (type_size - cur_type_size + total_size)/total_size (4)

where ptype is the category allocation weight of the resource type corresponding to the current query index; type_size is the space allocated to each category; cur_type_size is the space already occupied in the first queue by the resource category of the resource element corresponding to the current query index; total_size is the total space available to the resource categories of all resource elements in the first queue.

Further, the user weight is calculated with the following expression:

pdomain = billing * bandwidth (5)

where pdomain is the user weight of the resource element corresponding to the current query index, adjusted according to the corresponding user billing and service bandwidth; billing is the weight determined by the billing set for the user according to unit price; bandwidth is the weight determined by the user's current service bandwidth.
Specifically, for the determination of the parameter ptime in expression (1): let the expiration time of the resource be date and the current time be current; the parameter ptime is calculated with the following expression:

ptime = date - current (7)

When date > current, the resource element corresponding to the current query index has not expired. When date <= current, the resource element corresponding to the current query index has expired, i.e. it no longer needs to be cached, and ptime = 0.
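A sketch of expression (7) with the expiry rule applied (timestamps assumed to be in seconds):

```python
def ptime(date: float, current: float) -> float:
    """Expression (7): remaining lifetime in seconds; 0 once expired."""
    return date - current if date > current else 0.0
```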
The resource element weights of the resource elements in the temporary queue are calculated in turn with the following expression:

Weight(key)_s' = phit_s' * ptype_s' * pdomain_s' * ptime_s' (6)

where Weight(key)_s' is the resource element weight of the s-th resource element in the temporary queue, s being a positive integer 1, 2, 3, ..., n; phit_s' is the hit rate per byte of the s-th resource element in the temporary queue, expressed as the ratio of its hit count to its size, i.e. phit_s' = hit_s' / value_size_s'; ptype_s' is the category allocation weight of the resource type of the s-th resource element in the temporary queue; pdomain_s' is the user weight of the s-th resource element in the temporary queue, adjusted according to the corresponding user billing and service bandwidth; ptime_s' is the weight reflecting whether the s-th resource element in the temporary queue has expired.
It should be noted that, in this embodiment, calculating the resource element weights of the elements in the temporary queue "in turn" means calculating the weights of a specified number of resource elements in the temporary queue, rather than of all of them. In this way, the time spent on cache processing can be effectively reduced and the cache processing efficiency improved.
The calculated current element hit-rate weight Weight(key) of the resource element corresponding to the current query index is then compared in turn with the calculated resource element weights Weight(key)_s' of the resource elements in the temporary queue.
When the calculated Weight(key) is smaller than the calculated Weight(key)_s', the current query index of the content to be cached is placed in the second queue, and the resource element corresponding to Weight(key)_s' is put back at the tail of the first queue.
When the calculated Weight(key) is greater than or equal to the calculated Weight(key)_s', the content to be cached is cached at the tail of the first queue, and whether one or more resource elements in the temporary queue are put back into the first queue is decided according to the currently available space of the first queue (i.e. the total cache capacity minus the sizes of all resource elements in the first queue).
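A sketch of this final comparison and put-back step, reusing the dict-based queues of the earlier sketches (weight_of, free_space, and the value_size field are illustrative assumptions):

```python
from collections import OrderedDict
from typing import Callable

def resolve_movement(key: str, element: dict, cand_weight: float,
                     first_queue: OrderedDict, second_queue: OrderedDict,
                     temp_queue: OrderedDict,
                     weight_of: Callable[[dict], float],
                     free_space: Callable[[], int]) -> None:
    for temp_key in list(temp_queue):
        if cand_weight < weight_of(temp_queue[temp_key]):
            # The candidate loses: keep only its index in the second queue
            # and return the heavier element to the tail of the first queue.
            second_queue[key] = element["hit"]
            first_queue[temp_key] = temp_queue.pop(temp_key)
            return
    # The candidate outweighs every temporarily eliminated element:
    # cache it at the tail of the first queue.
    first_queue[key] = element
    # Put temporary-queue elements back only while space remains.
    for temp_key in list(temp_queue):
        if temp_queue[temp_key]["value_size"] <= free_space():
            first_queue[temp_key] = temp_queue.pop(temp_key)
```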
In another embodiment, when the calculated current space capacity value is smaller than the total cache space capacity, it is determined that space is available to cache the resource element corresponding to the current query index, and that element is cached directly into the first queue.
It should be noted that the foregoing is merely illustrative of the present invention and is not to be construed as limiting thereof. Furthermore, the drawings are only schematic illustrations of processes involved in a method according to an exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily understood that the processes shown in the figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Compared with the prior art, by judging whether the current query index of the content to be cached already exists and then judging whether it is in the first queue or the second queue, placing the content of the corresponding resource element at the tail of the first queue and increasing the hit count by one when it is in the first queue, the multi-level existence checks on the lookup index effectively reduce lookup cost and thus achieve more efficient cache processing. When the current query index is in the second queue, whether available space exists for caching is determined in order to decide whether to perform temporary elimination on the resource elements in the first queue; the eliminated resource elements are temporarily stored in the temporary queue, the resource elements satisfying the movement condition are selected from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, and the selected elements are moved to the tail of the first queue; again the multi-level existence checks on the lookup index effectively reduce lookup cost and achieve more efficient cache processing. The storage position of the content to be cached is determined through three queues and through resource element weight calculation and comparison, so that the effective hit rate can be improved while both the hit count and the size of cache elements are taken into account.
In addition, by comparing the size of the available space with the size of the resource element corresponding to the current query index, repeatedly determining whether enough space is available to cache that resource element, and repeatedly deciding to temporarily eliminate one or more resource elements in the first queue until the currently available space can hold the resource element corresponding to the current query index, the effective cache hit rate can be further improved.
The following are examples of the apparatus of the present invention that may be used to perform the method embodiments of the present invention. For details not disclosed in the embodiments of the apparatus of the present invention, please refer to the embodiments of the method of the present invention.
Fig. 3 is a schematic diagram of an example of a cache processing apparatus according to the present invention.
Referring to fig. 3, the second aspect of the present invention provides a cache processing apparatus that adopts the cache processing method described in the first aspect. The cache processing apparatus 300 includes a reception processing module 310, a determination processing module 320, a first cache processing module 330, and a second cache processing module 340.
In a specific embodiment, when content to be cached is received, the reception processing module 310 judges whether a current query index of the content to be cached already exists. If it does, the determination processing module 320 judges whether the current query index is in a first queue or a second queue, wherein the first queue is used to store first-level resource elements together with their first-class storage indexes, and the second queue is used to store only the second-class storage indexes of second-level resource elements. When the current query index is in the first queue, the first cache processing module 330 places the content of the resource element corresponding to the current query index at the tail of the first queue and increases the hit count of the current query index by one for subsequent cache processing. When the current query index is in the second queue, the second cache processing module 340 determines whether available space exists for caching, decides accordingly whether to perform temporary elimination on the resource elements in the first queue, temporarily stores the eliminated resource elements in a temporary queue, selects, from the resource element corresponding to the content to be cached and the resource elements in the temporary queue, the resource elements that satisfy the movement condition, and moves them to the tail of the first queue for subsequent cache processing.
In an alternative embodiment, when the current query index is determined to be in the second queue, determining whether there is available space for caching comprises: calculating the sum of the content size of the resource element corresponding to the current query index and the content sizes of all elements currently in the first queue to obtain a current space capacity value; and judging whether this value is larger than the total cache space capacity in order to determine whether available space exists for caching.
In an optional embodiment, when the current space capacity value is greater than or equal to the total cache space capacity, it is determined that no space is available to cache the resource element corresponding to the current query index, and temporary elimination is performed on the resource elements in the first queue; when the current space capacity value is smaller than the total cache space capacity, it is determined that space is available, and the resource element corresponding to the current query index is cached directly into the first queue.
The apparatus further: by comparing the size of the available space with the size of the resource element corresponding to the current query index, repeatedly determines whether enough space is available to cache that resource element, and repeatedly decides to temporarily eliminate one or more resource elements in the first queue, until it is determined that the currently available space can hold the resource element corresponding to the current query index.
When the temporary elimination of resource elements in the first queue is determined to be complete, the current element hit-rate weight of the resource element corresponding to the current query index is calculated with the following expression:

Weight(key) = phit * ptype * pdomain * ptime (1)

where Weight(key) is the weight of the resource element corresponding to the current query index; phit is the hit rate per byte of that resource element, expressed as the ratio of the hit count to the size of the resource element, i.e. phit = hit / value_size; ptype is the category allocation weight of the resource type corresponding to the current query index; pdomain is the user weight of the resource element corresponding to the current query index, adjusted according to the corresponding user billing and service bandwidth; ptime is the weight reflecting whether the resource element corresponding to the current query index has expired.
The resource element weights of the resource elements in the temporary queue are calculated in turn with the following expression:

Weight(key)_s' = phit_s' * ptype_s' * pdomain_s' * ptime_s' (6)

where Weight(key)_s' is the resource element weight of the s-th resource element in the temporary queue, s being a positive integer 1, 2, 3, ..., n; phit_s' is the hit rate per byte of the s-th resource element in the temporary queue, expressed as the ratio of its hit count to its size, i.e. phit_s' = hit_s' / value_size_s'; ptype_s' is the category allocation weight of the resource type of the s-th resource element in the temporary queue; pdomain_s' is the user weight of the s-th resource element in the temporary queue, adjusted according to the corresponding user billing and service bandwidth; ptime_s' is the weight reflecting whether the s-th resource element in the temporary queue has expired.

The calculated current element hit-rate weight Weight(key) of the resource element corresponding to the current query index is then compared in turn with the calculated resource element weights Weight(key)_s' of the resource elements in the temporary queue.

When the calculated Weight(key) is smaller than the calculated Weight(key)_s', the current query index of the content to be cached is placed in the second queue, and the resource element corresponding to Weight(key)_s' is put back at the tail of the first queue.

When the calculated Weight(key) is greater than or equal to the calculated Weight(key)_s', the content to be cached is cached at the tail of the first queue, and whether one or more resource elements in the temporary queue are put back into the first queue is decided according to the currently available space of the first queue.
When the sum of the size of the currently cached content and the size of the resource element corresponding to the current query index is greater than or equal to the total cache space capacity, one or more resource elements are taken out of the first queue and their weights compared, re-checking whether this sum still exceeds the total cache space capacity, until the resource element corresponding to the current query index satisfies the movement condition and is cached into the first queue.
Further, the following expression is adopted to calculate a category allocation weight value of the resource type corresponding to the current query index:
ptype = (type_size - cur_type_size + total_size)/total_size (4)
wherein ptype represents the category allocation weight value of the resource type corresponding to the current query index; type_size represents the space allocated to each category; cur_type_size represents the allocated space already occupied in the first queue by the resource category of the resource element corresponding to the current query index; total_size represents the total allocatable space available to the resource categories of all resource elements in the first queue.
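As a worked example of expression (4), with illustrative numbers not taken from the embodiment: a category allocated 100 units of space, of which 80 are already occupied in the first queue, in a cache totalling 1000 units, yields ptype = (100 - 80 + 1000)/1000 = 1.02, while a category that has exhausted its allocation yields exactly 1.0, so less-occupied categories are favoured.

def ptype(type_size: float, cur_type_size: float, total_size: float) -> float:
    # Expression (4): the more of its allocation a category still has free,
    # the higher the weight given to elements of that category.
    return (type_size - cur_type_size + total_size) / total_size

print(ptype(100, 80, 1000))  # 1.02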
The weight corresponding to the user is calculated by adopting the following expression:
pdomain = billing * bandwidth (5)
wherein pdomain represents the user weight of the resource element corresponding to the current query index, a weight value determined and adjusted according to the corresponding user's charging and service bandwidth; billing represents the weight determined by the charging unit price set for the user; bandwidth represents the weight determined by the user's current service bandwidth.
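Expression (5) is a plain product of the two factors; the sketch below assumes both have already been normalised to dimensionless weights, since the embodiment does not fix their scale.

def pdomain(billing: float, bandwidth: float) -> float:
    # Expression (5): billing is the weight set from the user's charging unit
    # price, bandwidth the weight from the user's current service bandwidth;
    # higher-paying, higher-bandwidth users therefore rank higher.
    return billing * bandwidth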
Since the cache processing method executed by the cache processing apparatus is substantially the same as the cache processing method of Fig. 1, duplicate description is omitted.
Those skilled in the art will appreciate that the modules may be distributed among several devices as described in the embodiments, or, with corresponding variations, may be implemented in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module or further split into a plurality of sub-modules.
The following describes embodiments of a computer device of the present invention, which may be regarded as a specific physical implementation of the method and apparatus embodiments described above. Details described in the embodiments of the computer device of the present invention should be considered supplementary to the method or apparatus embodiments above; for details not disclosed in the embodiments of the computer device, reference may be made to those method or apparatus embodiments.
Fig. 4 is a schematic structural diagram of a computer device of an embodiment of the present invention. The computer device includes a processor and a memory for storing a computer executable program; when the computer executable program is executed by the processor, the processor performs the method of Fig. 1.
As shown in Fig. 4, the computer device is in the form of a general purpose computing device. The processor may be one processor or a plurality of processors working cooperatively. The invention does not exclude distributed processing, i.e., the processors may be distributed among different physical devices. Likewise, the computer device of the present invention is not limited to a single entity and may be the sum of a plurality of entity devices.
The memory stores a computer executable program, typically machine readable code. The computer readable program may be executable by the processor to cause a computer device to perform the method of the present invention, or at least some of the steps of the method.
The memory includes volatile memory, such as Random Access Memory (RAM) and/or cache memory, and may be non-volatile memory, such as Read Only Memory (ROM).
Optionally, in this embodiment, the computer device further includes an I/O interface, which is used for exchanging data between the computer device and an external device. The I/O interface may be a bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
It should be understood that the computer device shown in Fig. 4 is only one example of the present invention; a computer device of the present invention may further include elements or components not shown in the above example. For example, some computer devices further include a display unit such as a display screen, and some further include human-computer interaction elements such as buttons and a keyboard. Any device capable of executing a computer readable program in its memory to implement the method, or at least part of the steps of the method, may be regarded as a computer device covered by the present invention.
FIG. 5 is a schematic diagram of a computer program product of one embodiment of the invention. As shown in fig. 5, a computer program product has stored therein a computer executable program which, when executed, implements the above-described method of the present invention. The computer readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable storage medium may also be any readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
From the above description of embodiments, those skilled in the art will readily appreciate that the present invention may be implemented by hardware capable of executing a specific computer program, such as the system of the present invention, as well as electronic processing units, servers, clients, handsets, control units, processors, etc. included in the system. The invention may also be implemented by computer software executing the method of the invention, e.g. by control software executed by a microprocessor, an electronic control unit, a client, a server, etc. It should be noted, however, that the computer software for performing the method of the present invention is not limited to being executed by one or a specific hardware entity, but may also be implemented in a distributed manner by unspecified specific hardware. For computer software, the software product may be stored on a computer readable storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), or may be stored distributed over a network, as long as it enables a computer device to perform the method according to the invention.
The foregoing description of the specific embodiments provides further details of the objects, aspects and advantages of the present invention, and it should be understood that the present invention is not inherently related to any particular computer, virtual device or computer apparatus, and various general purpose devices may also implement the present invention. The foregoing description of the embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. A cache processing method, characterized by comprising the following steps:
when receiving content to be cached, judging whether a current query index of the content to be cached exists or not;
judging, in the case where it is determined that the current query index of the content to be cached exists, whether the current query index of the content to be cached is in a first queue or a second queue, wherein the first queue is used for storing first-class resource elements and first-class storage indexes; the second queue is used for storing second-class storage indexes of second-class resource elements;
when judging that the current query index is in a first queue, placing the content of a current resource element corresponding to the current query index at the tail position of the first queue, and increasing the hit number of the current query index by one for subsequent cache processing;
When judging that the current query index is in a second queue, determining whether available accommodating space exists to perform caching processing so as to judge whether to perform temporary elimination processing on the resource elements in the first queue, temporarily storing the temporarily eliminated resource elements into the temporary queue, selecting resource elements meeting movement conditions from the current resource elements corresponding to the content to be cached and the resource elements in the temporary queue, and moving the resource elements meeting the movement conditions to the tail position of the first queue for subsequent caching processing;
under the condition that the temporary elimination processing of the resource elements in the first queue is determined to be completed, calculating the current element hit rate weight value of the resource elements corresponding to the current query index by adopting the following expression:
Weight(key) = phit*ptype*pdomain*ptime (1)
wherein Weight(key) represents the weight value of the resource element corresponding to the current query index; phit represents the hit rate per byte of the resource element corresponding to the current query index, expressed as the ratio of the hit count to the size of that resource element, i.e., phit = hit/value_size; ptype represents the category allocation weight of the resource type corresponding to the current query index; pdomain represents the user weight of the resource element corresponding to the current query index, a weight value determined and adjusted according to the corresponding user's charging and service bandwidth; ptime represents a weight indicating whether the resource element corresponding to the current query index has expired; the weight values of the resource elements in the temporary queue are then calculated in turn using the following expression:
Weight(key)_s’ = phit_s’*ptype_s’*pdomain_s’*ptime_s’ (6)
wherein Weight(key)_s’ represents the weight value of the s-th resource element in the temporary queue, s being a positive integer (1, 2, 3, ..., n); phit_s’ represents the hit rate per byte of the s-th resource element in the temporary queue, expressed as the ratio of its hit count to its size, i.e., phit_s’ = hit_s’/value_size_s’; ptype_s’ represents the category allocation weight of the resource type corresponding to the s-th resource element in the temporary queue; pdomain_s’ represents the user weight assigned to the s-th resource element in the temporary queue, a weight value determined and adjusted according to the corresponding user's charging and service bandwidth; ptime_s’ represents a weight indicating whether the s-th resource element in the temporary queue has expired;
comparing, in turn, the calculated weight value Weight(key) of the resource element corresponding to the current query index with the calculated weight values Weight(key)_s’ of the resource elements in the temporary queue;
when the calculated Weight(key) is smaller than the calculated Weight(key)_s’, placing the current query index corresponding to the content to be cached in the second queue, and putting the resource element corresponding to Weight(key)_s’ back at the tail of the first queue;
when the calculated Weight(key) is greater than or equal to the calculated Weight(key)_s’, caching the content to be cached at the tail of the first queue, and deciding, according to the currently available space of the first queue, whether to put one or more resource elements in the temporary queue back into the first queue.
2. The cache processing method according to claim 1, wherein when determining that the current query index is in the second queue, determining whether there is an available accommodation space for cache processing comprises:
calculating the sum of the content size of the resource element corresponding to the current query index and the content size of all elements in the current first queue to obtain a current space capacity calculation value;
and judging whether the current space capacity calculated value is larger than the total cache space capacity or not so as to determine whether available accommodation space exists for cache processing or not.
3. The cache processing method of claim 2, wherein,
When the current space capacity calculated value is greater than or equal to the total cache space capacity, determining that an available accommodating space does not exist to cache the resource element corresponding to the current query index, and determining to perform temporary elimination processing on the resource element in the first queue;
when the calculated value of the current space capacity is smaller than the total cache space capacity, determining that an available containing space exists to cache the resource element corresponding to the current query index, and directly caching the resource element corresponding to the current query index into a first queue.
4. The cache processing method according to claim 2, further comprising:
and repeatedly determining, by comparing the size of the available accommodation space with the size of the resource element corresponding to the current query index, whether there is available accommodation space to cache the resource element corresponding to the current query index, and repeatedly performing temporary elimination of one or more resource elements in the first queue, until it is determined that the currently available accommodation space can cache the resource element corresponding to the current query index.
5. The cache processing method of claim 1, wherein,
And under the condition that the sum of the currently occupied cache space and the size of the resource element corresponding to the current query index is greater than or equal to the total cache space capacity, taking one or more resource elements out of the first queue and carrying out weight comparison, and judging whether the sum of the currently occupied cache space and the size of the resource element corresponding to the current query index is greater than the total cache space capacity, until the resource element corresponding to the current query index satisfies the movement condition, whereupon the resource element corresponding to the current query index is cached into the first queue.
6. The cache processing method of claim 1, wherein,
calculating a category allocation weight value of the resource type corresponding to the current query index by adopting the following expression:
ptype = (type_size - cur_type_size + total_size)/total_size (4)
wherein ptype represents the category allocation weight value of the resource type corresponding to the current query index; type_size represents the space allocated to each category; cur_type_size represents the allocated space already occupied in the first queue by the resource category of the resource element corresponding to the current query index; total_size represents the total allocatable space available to the resource categories of all resource elements in the first queue.
7. The cache processing method of claim 1, wherein,
The weight corresponding to the user is calculated by adopting the following expression:
pdomain = billing * bandwidth (5)
wherein pdomain represents the user weight of the resource element corresponding to the current query index, the weight value being determined and adjusted according to the corresponding user's charging and service bandwidth; billing represents the weight determined by the charging unit price set for the user; bandwidth represents the weight determined by the user's current service bandwidth.
8. A cache processing apparatus employing the cache processing method according to any one of claims 1 to 7, comprising:
the receiving processing module is used for judging whether the current query index of the content to be cached exists or not when the content to be cached is received;
the determining processing module is used for judging, in the case where it is determined that the current query index of the content to be cached exists, whether the current query index of the content to be cached is in a first queue or a second queue, wherein the first queue is used for storing first-class resource elements and first-class storage indexes; the second queue is used for storing second-class storage indexes of second-class resource elements;
the first cache processing module is used for placing the content of the current resource element corresponding to the current query index at the tail position of the first queue when judging that the current query index is in the first queue, and increasing the hit number of the current query index by one for subsequent cache processing;
The second cache processing module is used for determining whether available accommodating space exists to perform cache processing when judging that the current query index is in a second queue, judging whether to perform temporary elimination processing on the resource elements in the first queue, temporarily storing the temporarily eliminated resource elements into the temporary queue, selecting resource elements meeting movement conditions from the current resource elements corresponding to the contents to be cached and the resource elements in the temporary queue, and moving the resource elements meeting the movement conditions to the tail position of the first queue for subsequent cache processing;
under the condition that the temporary elimination processing of the resource elements in the first queue is determined to be completed, calculating the current element hit rate weight value of the resource elements corresponding to the current query index by adopting the following expression:
Weight(key) = phit*ptype*pdomain*ptime (1)
wherein Weight(key) represents the weight value of the resource element corresponding to the current query index; phit represents the hit rate per byte of the resource element corresponding to the current query index, expressed as the ratio of the hit count to the size of that resource element, i.e., phit = hit/value_size; ptype represents the category allocation weight of the resource type corresponding to the current query index; pdomain represents the user weight of the resource element corresponding to the current query index, a weight value determined and adjusted according to the corresponding user's charging and service bandwidth; ptime represents a weight indicating whether the resource element corresponding to the current query index has expired; the weight values of the resource elements in the temporary queue are then calculated in turn using the following expression:
Weight(key)_s’ = phit_s’*ptype_s’*pdomain_s’*ptime_s’ (6)
wherein Weight(key)_s’ represents the weight value of the s-th resource element in the temporary queue, s being a positive integer (1, 2, 3, ..., n); phit_s’ represents the hit rate per byte of the s-th resource element in the temporary queue, expressed as the ratio of its hit count to its size, i.e., phit_s’ = hit_s’/value_size_s’; ptype_s’ represents the category allocation weight of the resource type corresponding to the s-th resource element in the temporary queue; pdomain_s’ represents the user weight assigned to the s-th resource element in the temporary queue, a weight value determined and adjusted according to the corresponding user's charging and service bandwidth; ptime_s’ represents a weight indicating whether the s-th resource element in the temporary queue has expired;
comparing, in turn, the calculated weight value Weight(key) of the resource element corresponding to the current query index with the calculated weight values Weight(key)_s’ of the resource elements in the temporary queue;
when the calculated Weight(key) is smaller than the calculated Weight(key)_s’, placing the current query index corresponding to the content to be cached in the second queue, and putting the resource element corresponding to Weight(key)_s’ back at the tail of the first queue;
when the calculated Weight(key) is greater than or equal to the calculated Weight(key)_s’, caching the content to be cached at the tail of the first queue, and deciding, according to the currently available space of the first queue, whether to put one or more resource elements in the temporary queue back into the first queue.
CN202311405661.8A 2023-10-27 2023-10-27 Cache processing method and device Active CN117149836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311405661.8A CN117149836B (en) 2023-10-27 2023-10-27 Cache processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311405661.8A CN117149836B (en) 2023-10-27 2023-10-27 Cache processing method and device

Publications (2)

Publication Number Publication Date
CN117149836A CN117149836A (en) 2023-12-01
CN117149836B true CN117149836B (en) 2024-02-27

Family

ID=88902916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311405661.8A Active CN117149836B (en) 2023-10-27 2023-10-27 Cache processing method and device

Country Status (1)

Country Link
CN (1) CN117149836B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017091984A1 (en) * 2015-12-01 2017-06-08 华为技术有限公司 Data caching method, storage control apparatus and storage device
CN108763110A (en) * 2018-03-22 2018-11-06 新华三技术有限公司 A kind of data cache method and device
CN108763103A (en) * 2018-05-24 2018-11-06 郑州云海信息技术有限公司 A kind of EMS memory management process, device, system and computer readable storage medium
CN111240593A (en) * 2020-01-06 2020-06-05 苏州浪潮智能科技有限公司 Data migration method, device, equipment and medium with dynamic self-adaptive scheduling
CN112054923A (en) * 2020-08-24 2020-12-08 腾讯科技(深圳)有限公司 Service request detection method, device and medium
CN113268440A (en) * 2021-05-26 2021-08-17 上海哔哩哔哩科技有限公司 Cache elimination method and system
CN116028389A (en) * 2023-01-18 2023-04-28 深圳前海环融联易信息科技服务有限公司 Hot spot data caching method, device, equipment and medium

Also Published As

Publication number Publication date
CN117149836A (en) 2023-12-01

Similar Documents

Publication Publication Date Title
US10182127B2 (en) Application-driven CDN pre-caching
JP5487306B2 (en) Cache prefill in thread transport
US8806142B2 (en) Anticipatory response pre-caching
US6901484B2 (en) Storage-assisted quality of service (QoS)
US7251649B2 (en) Method for prioritizing content
US8539160B2 (en) Asynchronous cache refresh for systems with a heavy load
US9569742B2 (en) Reducing costs related to use of networks based on pricing heterogeneity
US9774665B2 (en) Load balancing of distributed services
US20140310474A1 (en) Methods and systems for implementing transcendent page caching
CN110545246A (en) Token bucket-based current limiting method and device
CA2874633C (en) Incremental preparation of videos for delivery
CN108769253B (en) Adaptive pre-fetching control method for optimizing access performance of distributed system
CN109639813B (en) Video file transmission processing method and device, electronic equipment and storage medium
US20170324677A1 (en) Optimized stream management
CN113094392A (en) Data caching method and device
CN116996578B (en) Resource processing method and device based on content distribution network
CN112631504A (en) Method and device for realizing local cache by using off-heap memory
CN117149836B (en) Cache processing method and device
US9811467B2 (en) Method and an apparatus for pre-fetching and processing work for procesor cores in a network processor
Meint et al. From FIFO to predictive cache replacement
US20130268634A1 (en) Fast http seeking
Lam et al. Temporal pre-fetching of dynamic web pages
US11755534B2 (en) Data caching method and node based on hyper-converged infrastructure
CN107094179A (en) A kind of website visiting request processing method
JP2017068328A (en) Cache control device, cache control method, and cache control program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant