CN114168494A - Cache processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114168494A
Authority
CN
China
Prior art keywords: cache, target, time, parameter, time scale
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202111437516.9A
Other languages
Chinese (zh)
Inventor
王振旺
王磊
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111437516.9A priority Critical patent/CN114168494A/en
Publication of CN114168494A publication Critical patent/CN114168494A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0846Cache with multiple tag or data arrays being simultaneously accessible

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present disclosure relates to a cache processing method, apparatus, electronic device, and storage medium applied to a distributed cache cluster, where the cache cluster includes a plurality of cache nodes and the cache nodes store cache resources. In a distributed scenario, the scheme improves the efficiency of local data caching across the nodes of the cluster and improves the user experience. The scheme comprises the following steps: setting a cache refresh cycle for a cache node, where the cache refresh cycle is the same as the running cycle of a time wheel and the running cycle comprises a plurality of time scales; establishing a correspondence between associated target cache resources in at least one cache node and a target time scale on the time wheel; and, when the time wheel runs to the target time scale, refreshing the associated target cache resources cached in the at least one cache node corresponding to that scale.

Description

Cache processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a cache processing method and apparatus, an electronic device, and a storage medium.
Background
Local caching refers to reserving part of the client's local physical memory as a buffer for data that the client writes back to the server. Data written back by the client is not written to the server's hard disk immediately; it is first written to the local write-back cache space, and only when the cache space reaches a certain threshold is the data written back to the server. This greatly reduces the read/write pressure on the server and the network load. In current network environments (e.g., e-commerce shopping guides, online transactions), local caching is usually used to cache persistent information in the server's local memory in order to cope with the operating pressure of highly concurrent systems.
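The write-back buffering described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and parameter names (`WriteBackCache`, `threshold`, `flush_to_server`) are assumptions introduced for the example.

```python
class WriteBackCache:
    """Sketch of a local write-back cache: client writes land in a local
    buffer first and are flushed to the server only once the buffer
    reaches a configured threshold."""

    def __init__(self, threshold, flush_to_server):
        self.threshold = threshold            # flush when buffer reaches this size
        self.flush_to_server = flush_to_server  # callback standing in for the server write
        self.buffer = []

    def write(self, record):
        # Written-back data goes to the local cache space, not the server.
        self.buffer.append(record)
        if len(self.buffer) >= self.threshold:
            self.flush_to_server(list(self.buffer))
            self.buffer.clear()


flushes = []
cache = WriteBackCache(threshold=2, flush_to_server=flushes.append)
cache.write("a")
cache.write("b")   # threshold reached: ["a", "b"] flushed to the server
cache.write("c")   # stays buffered locally
```

Batching writes this way is what reduces the server's read/write pressure: the server sees one flush per `threshold` client writes instead of one write each.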
However, in a distributed scenario a cluster contains multiple service instances (multiple service devices), and the local caching technique described above cannot guarantee that the contents stored in the local cache spaces of those instances are updated or invalidated at the same time. The cached data in the service instances can therefore become inconsistent, which makes user operations appear to jitter. Thus, in a distributed scenario, the local cache data of the multiple service devices in the cluster may be inconsistent, which degrades the user experience.
Disclosure of Invention
The present disclosure provides a cache processing method, apparatus, electronic device, and storage medium that, in a distributed scenario, improve the efficiency of local data caching across the nodes of a cluster and improve the user experience. The technical scheme of the disclosure is as follows:
according to a first aspect of the present disclosure, a cache processing method is provided, applied to a distributed cache cluster, where the cache cluster includes a plurality of cache nodes and the cache nodes store cache resources. The method includes: setting a cache refresh cycle for a cache node, where the cache refresh cycle is the same as the running cycle of a time wheel and the running cycle comprises a plurality of time scales; establishing a correspondence between associated target cache resources in at least one cache node and a target time scale on the time wheel; and, when the time wheel runs to the target time scale, refreshing the associated target cache resources cached in the at least one cache node corresponding to that scale.
As can be seen from the above, when a cache node in the distributed cache cluster caches a resource, the electronic device may preset the node's cache refresh cycle and set the running cycle of the time wheel equal to it. When a node stores a target cache resource, a correspondence is established between the associated target cache resources in at least one cache node and a target time scale on the time wheel, so that when the wheel runs to that scale, those resources can be refreshed. By combining the local cache with the time wheel in this way, the associated cache resources are guaranteed to be refreshed synchronously when the wheel reaches the target time scale, avoiding inconsistency among the data cached in the cluster's cache nodes. This improves the efficiency of local data caching across the cluster's nodes and improves the user experience.
Optionally, establishing the correspondence between the associated target cache resources in the at least one cache node and the target time scale on the time wheel specifically includes: obtaining the target cache parameters corresponding to the associated target cache resources in the at least one cache node, where the target cache parameters corresponding to the associated target cache resources are the same; and establishing a correspondence between the target cache parameter and the target time scale on the time wheel, where the target time scale corresponds to at least one cache parameter and the at least one cache parameter includes the target cache parameter.
As can be seen from the above, the correspondence between the associated target cache resources in the at least one cache node and the target time scale on the time wheel is determined by obtaining the target cache parameter (identical for all the associated target cache resources) and establishing a correspondence between that parameter and the target time scale. This provides a concrete procedure by which the electronic device can quickly and accurately bind a target cache resource to its target time scale.
Optionally, establishing the correspondence between the target cache parameter and the target time scale on the time wheel specifically includes: hashing the target cache parameter to obtain a target hash value corresponding to the target cache parameter; and establishing a correspondence between the target hash value and the target time scale on the time wheel, thereby determining the correspondence between the target cache parameter and the target time scale.
As can be seen from the above, to establish the correspondence between the target cache parameter and the target time scale, the target cache parameter is hashed to obtain a target hash value, and a correspondence is then established between that hash value and the target time scale on the time wheel. Because every cache parameter is hashed in the same way, each cache resource in each node of the cluster maps to exactly one time scale, and the correspondences between cache resources and time scales are distributed uniformly over the wheel, so the electronic device can quickly and accurately bind a target cache parameter to its target time scale.
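The hash-to-scale mapping described above can be sketched as follows. The use of SHA-256 via Python's `hashlib`, and the names `scale_for_parameter` and `num_scales`, are illustrative assumptions; the patent does not specify a hash function.

```python
import hashlib


def scale_for_parameter(cache_parameter: str, num_scales: int) -> int:
    """Map a cache parameter to a target time scale on the wheel.

    A stable hash (rather than Python's per-process salted hash()) keeps
    the mapping consistent across cache nodes and restarts, so resources
    on different nodes that share the same cache parameter always land
    on the same time scale and are refreshed together.
    """
    digest = hashlib.sha256(cache_parameter.encode("utf-8")).digest()
    target_hash = int.from_bytes(digest[:8], "big")  # target hash value
    return target_hash % num_scales                  # reduce to a scale index


# Identical parameters always map to the same target time scale,
# regardless of which cache node computes the mapping.
slot_a = scale_for_parameter("user_profile_v2", 60)
slot_b = scale_for_parameter("user_profile_v2", 60)
```

Taking the hash modulo the number of scales is what spreads distinct parameters roughly uniformly over the wheel, as the paragraph above notes.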
Optionally, refreshing the associated target cache resources cached in the at least one cache node corresponding to the target time scale when the time wheel runs to the target time scale specifically includes: obtaining the full cache parameter list when the time wheel runs to the target time scale; determining from the full list the at least one cache parameter corresponding to the target time scale, the at least one cache parameter including the target cache parameter; and synchronously refreshing all cache resources corresponding to each of the at least one cache parameter, where the cache resources corresponding to the target cache parameter include the target cache resource.
As can be seen from the above, when the time wheel runs to the target time scale, the electronic device first obtains the full cache parameter list, determines from it (using the correspondence between cache parameters and time scales) the at least one cache parameter bound to the target time scale, and then synchronously refreshes all cache resources corresponding to each of those parameters. This gives a concrete way to refresh the associated target cache resources in the at least one cache node corresponding to the target time scale, improving the efficiency of refreshing the local cache data of the cluster's nodes.
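The refresh step above — take the full parameter list, keep the parameters bound to the current scale, refresh every resource under each — can be sketched as follows. The function and variable names are assumptions, and the toy `len(param) % 4` mapping stands in for the real hash-to-scale function purely so the demo is deterministic.

```python
def refresh_scale(target_scale, scale_for, parameter_list, resources_by_parameter):
    """When the wheel reaches target_scale: filter the full cache
    parameter list down to the parameters mapped to that scale, then
    refresh every cache resource registered under each of them."""
    refreshed = []
    for param in parameter_list:
        if scale_for(param) == target_scale:
            # Stand-in for the real synchronous refresh of each resource.
            refreshed.extend(resources_by_parameter.get(param, []))
    return refreshed


# Toy deterministic mapping for the demo (len(param) % 4 replaces the hash).
toy_scale = lambda p: len(p) % 4
params = ["ab", "abcd", "abcdef", "xy"]
resources = {"ab": ["r1"], "abcd": ["r4"], "abcdef": ["r5"], "xy": ["r2", "r3"]}

# Scale 2 collects "ab", "abcdef", and "xy"; their resources refresh together.
due = refresh_scale(2, toy_scale, params, resources)
```

Because all parameters sharing a scale are handled in one pass, the resources associated across different cache nodes refresh in the same tick rather than drifting apart.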
Optionally, the cache refresh cycle corresponding to the time wheel is one of the following durations: one minute, one hour, or one day. The unit duration corresponding to the cache refresh cycle on the time wheel is one of: one second, one minute, or one hour. Each pair of adjacent time scales on the time wheel is separated by one unit duration.
Therefore, the cache refresh cycle corresponding to the time wheel can be any preset duration, and the unit duration can be any preset unit duration. This provides multiple ways of configuring the time wheel and increases its flexibility.
Optionally, the plurality of time scales comprised by the time wheel are represented in any one of the following ways: a circular list or an array.
From the above, the plurality of time scales of the time wheel can be realized as a circular list or as an array, providing multiple representations of the time wheel and increasing its flexibility.
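The array representation mentioned above can be sketched as a plain list with one bucket per time scale, where advancing past the last scale wraps to scale 0, closing one full running cycle. The class and method names are illustrative assumptions.

```python
class TimeWheel:
    """Minimal time wheel backed by a plain array (one bucket per scale).

    One full pass over the buckets corresponds to one running cycle of
    the wheel, i.e., one cache refresh cycle."""

    def __init__(self, num_scales: int):
        self.buckets = [[] for _ in range(num_scales)]  # array representation
        self.current = 0

    def register(self, scale: int, entry) -> None:
        """Bind an entry (e.g., a cache parameter) to a time scale."""
        self.buckets[scale].append(entry)

    def tick(self):
        """Advance one unit duration and return the entries now due.
        The modulo makes the array behave as a circular list."""
        self.current = (self.current + 1) % len(self.buckets)
        return list(self.buckets[self.current])


wheel = TimeWheel(3)
wheel.register(1, "param-a")
wheel.register(0, "param-b")
```

A circular linked list gives the same wrap-around behavior; the array form simply indexes buckets directly instead of following next-pointers.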
According to a second aspect of the present disclosure, there is provided a cache processing apparatus applied to a distributed cache cluster, where the cache cluster includes a plurality of cache nodes and the cache nodes store cache resources. The apparatus includes a setting unit, a first establishing unit, and a first refreshing unit. The setting unit is configured to set a cache refresh cycle for the cache node, the cache refresh cycle being the same as the running cycle of the time wheel, where the running cycle comprises a plurality of time scales. The first establishing unit is configured to establish a correspondence between associated target cache resources in at least one cache node and a target time scale on the time wheel. The first refreshing unit is configured to refresh the associated target cache resources cached in the at least one cache node corresponding to the target time scale when the time wheel runs to that scale.
Optionally, the apparatus further includes a first obtaining unit configured to obtain the target cache parameters corresponding to the associated target cache resources in the at least one cache node, where the target cache parameters corresponding to the associated target cache resources are the same; and a second establishing unit configured to establish a correspondence between the target cache parameter and the target time scale on the time wheel, where the target time scale corresponds to at least one cache parameter and the at least one cache parameter includes the target cache parameter.
Optionally, the apparatus further includes a first processing unit configured to hash the target cache parameter to obtain a target hash value corresponding to the target cache parameter; and a third establishing unit configured to establish a correspondence between the target hash value and the target time scale on the time wheel, thereby determining the correspondence between the target cache parameter and the target time scale.
Optionally, the apparatus further includes a second obtaining unit configured to obtain the full cache parameter list when the time wheel runs to the target time scale; a second processing unit configured to determine, from the full list, the at least one cache parameter corresponding to the target time scale, the at least one cache parameter including the target cache parameter; and a second refreshing unit configured to synchronously refresh all cache resources corresponding to each of the at least one cache parameter, where the cache resources corresponding to the target cache parameter include the target cache resource.
Optionally, the cache refresh cycle corresponding to the time wheel is one of the following durations: one minute, one hour, or one day. The unit duration corresponding to the cache refresh cycle on the time wheel is one of: one second, one minute, or one hour. Each pair of adjacent time scales on the time wheel is separated by one unit duration.
Optionally, the plurality of time scales comprised by the time wheel are represented in any one of the following ways: a circular list or an array.
According to a third aspect of the present disclosure, there is provided an electronic apparatus comprising:
a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement any one of the optional cache processing methods of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform any one of the optional cache processing methods of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of the optional cache processing methods of the first aspect.
According to a sixth aspect of the present disclosure, there is provided a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a computer program or instructions to implement the cache processing method as described in the first aspect and any one of the possible implementations of the first aspect.
The technical scheme provided by the disclosure at least brings the following beneficial effects:
based on any one of the above aspects, in the present disclosure, when a cache node in the distributed cache cluster caches a resource, the electronic device may preset the node's cache refresh cycle and set the running cycle of the time wheel equal to it. When a node stores a target cache resource, a correspondence is established between the associated target cache resources in at least one cache node and a target time scale on the time wheel, so that when the wheel runs to that scale, those resources can be refreshed. By combining the local cache with the time wheel in this way, the associated cache resources are guaranteed to be refreshed synchronously when the wheel reaches the target time scale, avoiding inconsistency among the data cached in the cluster's cache nodes. This improves the efficiency of local data caching across the cluster's nodes and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating a cache processing system according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating a cache processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart diagram illustrating another cache processing method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a time scale representation of a time wheel according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart illustrating another cache processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart illustrating another cache processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a cache processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another cache processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another cache processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of another cache processing apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of another cache processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
First, an application scenario of the embodiment of the present disclosure is described.
The cache processing method of the embodiments of the present disclosure applies to scenarios in which a distributed cache cluster performs local caching. In the related art, to cope with the system cache pressure caused by high concurrency, local caching is generally used to keep persistent information in local memory. In a distributed cache cluster, however, there are multiple cache nodes (i.e., service instances), and a conventional local cache scheme cannot guarantee that the resources cached locally by each node in the cluster are invalidated (updated, deleted, etc.) at the same time. The data cached by the nodes therefore becomes inconsistent, and user operations may exhibit jitter (the data obtained by the user is inconsistent).
To solve the above problem, an embodiment of the present disclosure provides a cache processing method. When a cache node in a distributed cache cluster caches a resource, the electronic device may preset the node's cache refresh cycle and set the running cycle of a time wheel equal to it, so that when the node stores a target cache resource, a correspondence can be established between the associated target cache resources in at least one cache node and a target time scale on the time wheel. When the wheel runs to that scale, the associated target cache resources cached in the at least one corresponding cache node can be refreshed. Combining the local cache with the time wheel in this way guarantees that associated cache resources are refreshed synchronously when the wheel reaches the target time scale, avoiding inconsistency among the data cached in the cluster's cache nodes. This improves the efficiency of local data caching across the cluster's nodes and improves the user experience.
The following provides an exemplary description of a cache processing method according to an embodiment of the present disclosure with reference to the accompanying drawings:
fig. 1 is a schematic diagram of a cache processing system according to an embodiment of the present disclosure, and as shown in fig. 1, the cache processing system may include a cluster 11 and clients 12 (only one client 12 is shown in fig. 1 by way of example, and there may be more clients in a specific implementation). The cluster 11 includes a plurality of nodes, and the cluster 11 may establish a communication connection with the client 12. The cluster 11 and the client 12 may be connected in a wired manner or in a wireless manner, which is not limited in this disclosure.
And the cluster 11 is used for receiving and storing the real-time data information sent by the client 12. For example, the nodes included in the cluster 11 receive the cache resources sent by the client 12, store the cache resources in the nodes, and update the cache resources stored in the nodes when receiving the update resources corresponding to the cache resources.
And the client 12 is used for sending the real-time cache resources to the cluster 11. For example, the client 12 generates data resources in real time according to an operation of a user, and transmits the data resources to nodes included in the cluster 11 to cache the data resources in the nodes.
In an implementation manner, the cluster 11 may be one server, a server cluster composed of a plurality of servers, or a cloud computing service center. The cluster 11 may include a processor, memory, and network interfaces, among others. The plurality of cache nodes included in the cache cluster in the present disclosure may be one server in the cluster 11, or one storage device or one storage module on one server.
In one implementation, the client 12 is used to provide voice and/or data connectivity services to users. The client 12 may go by various names, for example user equipment (UE), terminal unit, terminal station, mobile station, remote terminal, mobile device, wireless communication device, vehicular user equipment, terminal agent, or terminal device.
Alternatively, the client 12 may be a handheld device, an in-vehicle device, a wearable device, or a computer with various communication functions, which is not limited in this disclosure. For example, the handheld device may be a smartphone, the in-vehicle device an in-vehicle navigation system, and the wearable device a smart bracelet. The computer may be a personal digital assistant (PDA), a tablet computer, or a laptop computer.
The cache processing method provided by the embodiments of the present disclosure may be applied to the cluster 11 and the client 12 in the cache processing system shown in fig. 1; the electronic device referred to in this disclosure may be the cluster 11 or the client 12. The method is described in detail below using the example of applying it to the cluster while a distributed cache cluster performs local caching.
After introducing the application scenario and the cache processing system of the embodiment of the present disclosure, the cache processing method provided by the embodiment of the present disclosure is described in detail below with reference to the cache processing system shown in fig. 1.
As shown in fig. 2, a flowchart of a cache processing method is shown according to an exemplary embodiment of the present disclosure. The cache processing method is applied to a distributed cache cluster, the cache cluster comprises a plurality of cache nodes, and cache resources are stored in the cache nodes. The cache processing method may include S201 to S203.
S201, setting a cache refreshing period of a cache node.
Specifically, the cache refresh period is the same as the running period of the time wheel, and the running period of the time wheel includes a plurality of time scales.
Optionally, in a distributed scenario, when the cache cluster caches resources, a cache refresh cycle is preset for the cache nodes so that the cache resources in a node are refreshed each time a cache refresh cycle elapses.
Optionally, a cache refresh cycle may have a duration of 1 minute, 1 hour, 1 day, or the like, and the unit duration corresponding to different cache refresh cycles may be any one of: seconds, minutes, or hours.
For example, when the cache refresh cycle is 1 minute, the corresponding unit duration may be one second; adjacent time scales on the wheel are then one second apart, and the wheel contains 60 time scales. When the cache refresh cycle is 1 hour, the unit duration may be one second or one minute; adjacent time scales are then one second or one minute apart, and the wheel contains 3600 or 60 time scales respectively. When the cache refresh cycle is 1 day, the unit duration may be one minute or one hour; adjacent time scales are then one minute or one hour apart, and the wheel contains 1440 or 24 time scales respectively.
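The scale counts in the example follow from dividing the refresh cycle by the unit duration; note in particular that a one-day cycle at a one-hour unit yields 24 scales. A small sketch (function name is an assumption):

```python
def num_time_scales(cycle_seconds: int, unit_seconds: int) -> int:
    """Number of time scales on the wheel = refresh cycle / unit duration,
    since adjacent scales are exactly one unit duration apart."""
    assert cycle_seconds % unit_seconds == 0, "cycle must be a whole number of units"
    return cycle_seconds // unit_seconds


MINUTE, HOUR, DAY = 60, 3600, 86400

# 1-minute cycle at 1-second units -> 60 scales
# 1-hour cycle at 1-second units   -> 3600 scales
# 1-day cycle at 1-hour units      -> 24 scales
```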
S202, establishing a corresponding relation between the associated target cache resources in at least one cache node and the target time scales on the time wheel.
Optionally, when the target cache resource cached in at least one cache node included in the cache cluster has an association relationship, a correspondence relationship may be established between the associated target cache resource in the at least one cache node and the same time scale (i.e., the target time scale) on the time wheel.
It should be noted that the associated target cache resources in the at least one cache node may be understood as follows: the cached resources in the at least one cache node are the same, or the cached resources have a corresponding relationship with one another.
Optionally, the time wheel is predetermined: the set cache refresh cycle of the cache node corresponds to the time wheel, that is, the cache refresh cycle is taken as one full revolution of the time wheel, and the time scales on the time wheel are determined according to the unit duration corresponding to the cache refresh cycle.
S203, under the condition that the time wheel runs to the target time scale, refreshing the associated target cache resources cached in at least one cache node corresponding to the target time scale.
Optionally, after the target cache resource is associated with the target time scale on the time wheel, the time wheel runs continuously with the lapse of time, and when the time wheel runs to the target time scale, the cache resource in the cache node corresponding to the target time scale may be refreshed.
Optionally, when the time wheel runs to the target time scale, it is detected whether the associated target cache resource cached in the at least one cache node corresponding to the target time scale needs to be updated, and when the target cache resource needs to be updated, the target cache resource cached in the at least one cache node is refreshed.
The technical scheme provided by this embodiment has at least the following beneficial effects. When a cache node included in the distributed cache cluster caches a resource, the electronic device may preset the cache refresh cycle of the cache node and set the running cycle of the time wheel to be the same as that cache refresh cycle. When the cache node stores the target cache resource, a correspondence may then be established between the associated target cache resource in the at least one cache node and the target time scale on the time wheel, so that when the time wheel runs to the target time scale, the associated target cache resource cached in the at least one cache node corresponding to that scale can be refreshed. By combining the local cache with the time wheel in this way, once the correspondence is established, the associated cache resources are refreshed synchronously whenever the time wheel runs to the target time scale, which avoids inconsistency among the data resources cached in the plurality of cache nodes of the distributed cache cluster. This improves the efficiency of local data caching across the nodes in the cluster and improves the user experience.
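Steps S201-S203 can be sketched as follows, under the assumption of a 1-minute refresh cycle at 1-second units; the names `register` and `on_tick` are illustrative and not from the disclosure — every node derives the same wheel, binds associated cache keys to the same scale, and refreshes them when the wheel reaches that scale:

```python
# S201: the wheel's running cycle equals the cache refresh cycle.
NUM_SCALES = 60  # 1-minute cycle, 1-second unit duration

# scale index -> set of cache keys bound to that scale
wheel = {scale: set() for scale in range(NUM_SCALES)}

def register(key: str, scale: int) -> None:
    """S202: bind an associated cache key to its target time scale."""
    wheel[scale].add(key)

def on_tick(scale: int, refresh) -> None:
    """S203: when the wheel runs to `scale`, refresh every bound key."""
    for key in wheel[scale]:
        refresh(key)

register("user:42", 17)     # hypothetical key and scale
refreshed = []
on_tick(17, refreshed.append)
print(refreshed)  # ['user:42']
```

Because each node registers the same key on the same scale, a tick at that scale triggers the refresh on every node at once, which is the synchronization property claimed above.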
In one implementation, as shown in fig. 3 in conjunction with fig. 2, S202 may specifically include S301-S302.
S301, obtaining a target cache parameter corresponding to the associated target cache resource in at least one cache node.
Specifically, the target cache parameters corresponding to the associated target cache resources are the same.
Optionally, when a cache resource is cached to a cache node, the correspondence between the cache resource and its cache parameter may be determined, so that the target cache parameter corresponding to the target cache resource can later be obtained.
It should be noted that, in the embodiment of the present disclosure, the cache parameter may be a cache key, and the cache key may be a specific key value, which is not specifically limited in the present disclosure.
For example, it may be understood that a plurality of cache locations are included in a cache node, each cache location corresponds to one identifier (i.e., a cache key), and when a cache resource is stored in a corresponding cache location in the cache node, a corresponding relationship between the cache resource and the cache key may be determined.
S302, establishing a corresponding relation between the target cache parameters and the target time scales on the time wheel.
Specifically, the target time scale corresponds to at least one cache parameter, and the at least one cache parameter includes a target cache parameter.
Optionally, after the target cache parameter corresponding to the target cache resource is obtained, the target cache parameter is associated with the target time scale on the time wheel, so that when the time wheel runs to the target time scale, the cache resource corresponding to the target cache parameter can be refreshed.
For example, fig. 4 shows a time-scale representation of a time wheel according to an exemplary embodiment of the present disclosure. The time wheel may be represented by a circular (clock-like) list or by an array. Taking a cache refresh cycle of 1 day with a unit duration of 1 hour as an example, the time wheel includes 12 time scales representing the 12 hours of a day. Time scale 1 corresponds to the first cache key (key1) of the first cache resource; time scale 4 corresponds to the second cache key (key2) of the second cache resource; and time scale 10 corresponds to both the third cache key (key3) of the third cache resource and the fourth cache key (key4) of the fourth cache resource. When the time wheel runs to time scale 1, the first cache resource corresponding to key1 may be refreshed; when it runs to time scale 4, the second cache resource corresponding to key2 may be refreshed; and when it runs to time scale 10, the third and fourth cache resources corresponding to key3 and key4 may be refreshed.
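The mapping of fig. 4 can be illustrated with an array-backed wheel; the key names and scale assignments come from the example above, while the code itself is only a sketch of one possible representation:

```python
# Array-backed time wheel mirroring fig. 4: 12 scales for a 1-day cycle,
# scale 1 -> key1, scale 4 -> key2, scale 10 -> key3 and key4.
NUM_SCALES = 12
buckets = [[] for _ in range(NUM_SCALES)]
buckets[1] = ["key1"]
buckets[4] = ["key2"]
buckets[10] = ["key3", "key4"]

current = 0

def tick():
    """Advance the wheel one scale and return the keys due for refresh."""
    global current
    current = (current + 1) % NUM_SCALES
    return buckets[current]

due = [tick() for _ in range(10)]
print(due[0])  # ['key1']            (wheel at scale 1)
print(due[3])  # ['key2']            (wheel at scale 4)
print(due[9])  # ['key3', 'key4']    (wheel at scale 10)
```

The disclosure equally allows a circular-list representation; an array is shown here only because it makes the index-per-scale layout explicit.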
The technical scheme provided by this embodiment has at least the following beneficial effects. The correspondence between the associated target cache resource in the at least one cache node and the target time scale on the time wheel is determined by obtaining the same target cache parameter for the associated target cache resource in each node, and then establishing a correspondence between that target cache parameter and the target time scale. This provides a more concrete procedure for binding the target cache resource to the target time scale, so that the electronic device can establish the correspondence quickly and accurately.
In one implementation, as shown in fig. 5 in conjunction with fig. 3, S302 may specifically include S401-S402.
S401, carrying out Hash processing on the target cache parameters to obtain target hash values corresponding to the target cache parameters.
Optionally, after determining the target cache parameter corresponding to the target cache resource, the target cache parameter may be subjected to hash processing to obtain a target hash value corresponding to the target cache parameter.
S402, establishing a corresponding relation between the target hash value and the target time scale on the time wheel, and determining the corresponding relation between the target cache parameter and the target time scale.
Optionally, after the target hash value corresponding to the target cache parameter is obtained, establishing a correspondence between the target hash value and the target time scale on the time wheel also determines the correspondence between the target cache parameter and the target time scale.
The technical scheme provided by this embodiment has at least the following beneficial effects. In establishing the correspondence between the target cache parameter and the target time scale on the time wheel, the target cache parameter is hashed to obtain a target hash value, and a correspondence is established between that hash value and the target time scale, thereby determining the correspondence between the target cache parameter and the target time scale. Because every cache node hashes its cache parameters in the same way, the cache resource in each node of the cache cluster is mapped to the same time scale of the time wheel, the correspondences between cache resources and time scales are established uniformly, and the electronic device can establish the correspondence between the target cache parameter and the target time scale quickly and accurately.
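The hash step can be sketched as below. The disclosure does not mandate a particular hash; the function name `scale_for` and the choice of SHA-256 are illustrative assumptions. What matters is that the hash is stable across processes (unlike Python's randomized built-in `hash()`), so every node computes the same scale for the same cache key:

```python
import hashlib

NUM_SCALES = 60  # assumed wheel size for illustration

def scale_for(key: str) -> int:
    """Deterministically map a cache key to a time-wheel scale.

    A stable cryptographic hash is used so that every node in the
    cluster derives the same scale for the same key.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SCALES

# The same key lands on the same scale on every node:
assert scale_for("feed:hot") == scale_for("feed:hot")
print(0 <= scale_for("feed:hot") < NUM_SCALES)  # True
```

Taking the hash modulo the number of scales also spreads unrelated keys roughly uniformly over the wheel, which matches the "uniformly established" property described above.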
In one implementation, as shown in fig. 6 in conjunction with fig. 2, S203 may specifically include S501-S503.
S501, under the condition that the time wheel runs to the target time scale, all cache parameter lists are obtained.
Optionally, after determining the correspondence between the target cache resource and the target time scale, the correspondence between each time scale on the time wheel and the cache parameters may be saved, to obtain all cache parameter lists; the all cache parameter lists record the correspondence between each time scale and its cache parameters.
Optionally, the time wheel continuously runs along with continuous refreshing of time, and when the time wheel runs to the target time scale, all the cache parameter lists may be obtained, so as to determine at least one cache parameter corresponding to the target time scale from the all cache parameter lists.
S502, determining at least one cache parameter corresponding to the target time scale from all cache parameter lists.
Specifically, the at least one caching parameter includes a target caching parameter.
Optionally, at least one cache parameter corresponding to the target time scale may be quickly determined according to a corresponding relationship between each time scale and a cache parameter included in all cache parameter lists.
S503, synchronously refreshing all the cache resources corresponding to each cache parameter in the at least one cache parameter.
Specifically, all the cache resources corresponding to the target cache parameter include the target cache resource.
Optionally, after the at least one cache parameter corresponding to the target time scale is determined, all cache resources corresponding to each cache parameter in the at least one cache parameter are synchronously refreshed, so as to ensure consistency of cache contents in cache nodes included in the cache cluster.
The technical scheme provided by this embodiment has at least the following beneficial effects. When the time wheel runs to the target time scale, the electronic device first obtains all cache parameter lists and then, according to the correspondence between cache parameters and time scales recorded in those lists, determines the at least one cache parameter corresponding to the target time scale, so that all cache resources corresponding to each of the at least one cache parameter can be refreshed synchronously. This gives a specific implementation of refreshing the associated target cache resources cached in the at least one cache node corresponding to the target time scale, and improves the efficiency of refreshing the local cache data of the plurality of nodes in the cluster.
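Steps S501-S503 might be sketched as follows, with `all_params` standing in for the all cache parameter lists and `reload_resource` for the node's refresh callback; both names, and the sample keys, are illustrative rather than from the disclosure:

```python
# S501: scale -> cache parameters mapping, saved when correspondences
# were established (sample contents mirror the fig. 4 example).
all_params = {1: ["key1"], 4: ["key2"], 10: ["key3", "key4"]}

def refresh_scale(target_scale, all_params, reload_resource):
    """Refresh every cache resource bound to `target_scale`."""
    keys = all_params.get(target_scale, [])  # S502: pick the scale's parameters
    for key in keys:                         # S503: refresh each resource
        reload_resource(key)
    return keys

reloaded = []
print(refresh_scale(10, all_params, reloaded.append))  # ['key3', 'key4']
```

A scale with no registered parameters simply yields an empty list, so ticks on unused scales are cheap.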
In one implementation, the cache refresh cycle corresponding to the time wheel is at least one of the following durations: a unit minute, a unit hour, or a unit day; the unit duration corresponding to the cache refresh cycle on the time wheel is any one of the following unit durations: a second, a minute, or an hour; and every two adjacent time scales on the time wheel are one unit duration apart.
It should be noted that the cache refresh cycle corresponding to the time wheel may also be a preset duration, that is, a duration is preset as the cache refresh cycle corresponding to the time wheel, and the preset duration need not be a unit duration (i.e., a unit minute, unit hour, or unit day); for example, the preset duration may be 1 day 5 hours, 2 days 1 hour 30 minutes, and so on.
The technical scheme provided by the embodiment at least has the following beneficial effects: the cache refreshing cycle corresponding to the time wheel can be any preset time length, and the unit time length corresponding to the time wheel can also be any preset unit time length, so that various setting modes of the time wheel are provided, and the diversity of the time wheel is improved.
In one implementation, the plurality of time scales included in the time wheel are represented in any one of the following ways: a circular list or an array.
The technical scheme provided by the embodiment at least has the following beneficial effects: the time scales included by the time wheel can be realized in a circular list or array form, various expression modes of the time wheel are provided, and the diversity of the time wheel is improved.
As can be seen from the above examples, in the existing scheme, each of the plurality of cache nodes in a distributed cache cluster refreshes its local cache contents independently. When the cache resources cached by the plurality of cache nodes are associated, some nodes may have updated a resource while others have not, so the cache resources across the nodes of the cluster become inconsistent. In the present method, the distributed cache cluster is combined with a time wheel: the cache key (cache parameter) corresponding to each cache resource is mapped to a time scale on the time wheel by a hash algorithm, so that the cache keys corresponding to the same cache resource in all cache nodes map to the same time scale. When the time wheel runs to that time scale, the cache resources are refreshed synchronously, which solves the problem of inconsistent cache resources among the cache nodes of the cluster. This improves the efficiency of local data caching across the nodes in the cluster and improves the user experience.
It is understood that the above method may be implemented by a cache processing apparatus. In order to implement the above functions, the cache processing device includes a hardware structure and/or a software module that performs each function. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments.
The cache processing device and the like in the embodiments of the present disclosure may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present disclosure is illustrative and is only one division of logical functions; other divisions are possible in actual implementations.
Fig. 7 is a schematic structural diagram illustrating a cache processing apparatus according to an exemplary embodiment. Referring to fig. 7, the cache processing apparatus 70 may include: a setting unit 701, a first establishing unit 702 and a first refreshing unit 703.
A setting unit 701 configured to perform setting of a cache refresh cycle of a cache node, where the cache refresh cycle is the same as an operation cycle of a time wheel, and the operation cycle of the time wheel includes a plurality of time scales; for example, the setting unit 701 may be used to perform step 201 in fig. 2.
A first establishing unit 702 configured to perform establishing a corresponding relationship between a target cache resource associated in at least one cache node and a target time scale on a time wheel; for example, the first establishing unit 702 may be used to perform step 202 in fig. 2.
A first refreshing unit 703 configured to perform, when the time wheel runs to the target time scale, refreshing the associated target cache resource cached in the at least one cache node corresponding to the target time scale; for example, the first refresh unit 703 may be used to perform step 203 in fig. 2.
Optionally, with reference to fig. 7, as shown in fig. 8, the cache processing apparatus 70 may further include: a first acquisition unit 704. A first obtaining unit 704, configured to perform obtaining target cache parameters corresponding to associated target cache resources in at least one cache node, where the target cache parameters corresponding to the associated target cache resources are the same; for example, the first obtaining unit 704 may be configured to perform step 301 in fig. 3.
A second establishing unit 705 configured to perform establishing a corresponding relationship between a target cache parameter and a target time scale on the time wheel, where the target time scale corresponds to at least one cache parameter, and the at least one cache parameter includes the target cache parameter; for example, the second establishing unit 705 may be used to perform step 302 in fig. 3.
Optionally, with reference to fig. 8, as shown in fig. 9, the cache processing apparatus 70 may further include: a first processing unit 706. The first processing unit 706 is configured to perform hash processing on the target cache parameter to obtain a target hash value corresponding to the target cache parameter; for example, the first processing unit 706 may be configured to perform step 401 in fig. 4.
A third establishing unit 707 configured to perform establishing a correspondence between the target hash value and the target time scale on the time wheel, and determine a correspondence between the target cache parameter and the target time scale; for example, the third establishing unit 707 may be used to perform step 402 in fig. 4.
Optionally, in conjunction with fig. 9, as shown in fig. 10, the cache processing apparatus 70 may further include: a second obtaining unit 708, configured to obtain all cache parameter lists when the time wheel runs to the target time scale; for example, the second obtaining unit 708 may be configured to perform step 501 in fig. 5.
The second processing unit 709 is configured to determine at least one cache parameter corresponding to the target time scale from all cache parameter lists, where the at least one cache parameter includes the target cache parameter; for example, the second processing unit 709 may be used to execute step 502 in fig. 5.
A second refreshing unit 710 configured to perform synchronous refreshing on all cache resources corresponding to each of the at least one cache parameter, where all cache resources corresponding to the target cache parameter include the target cache resource; for example, the second refresh unit 710 may be used to perform step 503 in FIG. 5.
Optionally, the cache refresh cycle corresponding to the time wheel is at least one of the following durations: unit minutes, unit hours, and unit days; the unit duration corresponding to the cache refreshing period on the time wheel is any one of the following unit durations: second, minute, hour units; and unit time length corresponds to every two adjacent time scales on the time wheel.
Optionally, the time wheel comprises a plurality of time scales represented by any one of the following: circular lists, arrays.
As above, the embodiment of the present disclosure may perform division of functional modules on an electronic device according to the above method example. The integrated module can be realized in a hardware form, and can also be realized in a software functional module form. In addition, it should be further noted that the division of the modules in the embodiments of the present disclosure is schematic, and is only a logic function division, and there may be another division manner in actual implementation. For example, the functional blocks may be divided for the respective functions, or two or more functions may be integrated into one processing block.
With regard to the cache processing apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment of the method, and will not be elaborated here.
Fig. 11 is a schematic structural diagram of a cache processing apparatus 60 according to the present disclosure. As shown in fig. 11, the cache processing apparatus 60 may include at least one processor 601 and a memory 603 for storing instructions executable by the processor 601. Wherein the processor 601 is configured to execute the instructions in the memory 603 to implement the cache processing method in the above embodiments.
In addition, the cache processing device 60 may also include a communication bus 602 and at least one communication interface 604.
The processor 601 may be a GPU, a micro-processing unit, an ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the disclosed aspects.
The communication bus 602 may include a path that conveys information between the aforementioned components.
The communication interface 604 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The memory 603 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and connected to the processing unit by a bus. The memory may also be integrated with the processing unit as a volatile storage medium in the GPU.
The memory 603 is used for storing instructions for executing the disclosed solution, and is controlled by the processor 601. The processor 601 is configured to execute instructions stored in the memory 603 to implement the functions of the disclosed method.
In particular implementations, processor 601 may include one or more GPUs, such as GPU0 and GPU1 in fig. 11, as one embodiment.
In one embodiment, the cache processing apparatus 60 may include a plurality of processors, such as the processor 601 and the processor 607 in fig. 11. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, the cache processing apparatus 60 may further include an output device 605 and an input device 606, as an embodiment. Output device 605 is in communication with processor 601 and may display information in a variety of ways. For example, the output device 605 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 606 is in communication with the processor 601 and may accept user input in a variety of ways. For example, the input device 606 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of the cache processing apparatus 60, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
The present disclosure also provides a computer-readable storage medium having instructions stored thereon, where the instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the cache processing method provided by the embodiments of the present disclosure.
The embodiment of the present disclosure further provides a computer program product containing instructions, which when run on an electronic device, causes the electronic device to execute the cache processing method provided by the embodiment of the present disclosure.
The embodiment of the present disclosure further provides a communication system, as shown in fig. 1, the system includes a cluster 11 and a client 12. The cluster 11 and the client 12 are respectively configured to execute corresponding steps in the foregoing embodiments of the present disclosure, so that the communication system solves the technical problem solved by the embodiments of the present disclosure and achieves the technical effect achieved by the embodiments of the present disclosure, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A cache processing method is applied to a distributed cache cluster, the cache cluster comprises a plurality of cache nodes, and cache resources are stored in the cache nodes, and the method comprises the following steps:
setting a cache refreshing period of the cache node, wherein the cache refreshing period is the same as the running period of a time wheel, and the running period of the time wheel comprises a plurality of time scales;
establishing a corresponding relation between a target cache resource associated in at least one cache node and a target time scale on the time wheel;
and under the condition that the time wheel runs to the target time scale, refreshing the associated target cache resources cached in the at least one cache node corresponding to the target time scale.
2. The method of claim 1, wherein establishing a correspondence between the associated target cache resource in the at least one cache node and the target time scale on the time wheel comprises:
acquiring target cache parameters corresponding to associated target cache resources in the at least one cache node, wherein the target cache parameters corresponding to the associated target cache resources are the same;
and establishing a corresponding relation between the target cache parameters and the target time scales on the time wheel, wherein the target time scales correspond to at least one cache parameter, and the at least one cache parameter comprises the target cache parameters.
3. The method of claim 2, wherein establishing the correspondence between the target cache parameter and the target time scale on the time wheel comprises:
performing hash processing on the target cache parameter to obtain a target hash value corresponding to the target cache parameter;
and establishing a corresponding relation between the target hash value and the target time scale on the time wheel, and determining the corresponding relation between the target cache parameter and the target time scale.
4. The method according to any of claims 1 to 3, wherein the refreshing the associated target cache resource cached in the at least one cache node corresponding to the target time scale in the case that the time wheel runs to the target time scale comprises:
under the condition that the time wheel runs to the target time scale, acquiring all cache parameter lists;
determining at least one cache parameter corresponding to the target time scale from the all cache parameter list, wherein the at least one cache parameter comprises the target cache parameter;
and synchronously refreshing all cache resources corresponding to each cache parameter in the at least one cache parameter, wherein all cache resources corresponding to the target cache parameter comprise the target cache resource.
5. The method of claim 1, wherein the cache refresh period for the time wheel is at least one of: unit minutes, unit hours, and unit days; the unit duration corresponding to the cache refresh cycle on the time wheel is any one of the following unit durations: second, minute, hour units; and unit time length corresponds to every two adjacent time scales on the time wheel.
6. The method of claim 1, wherein the plurality of time scales included in the time wheel are represented by any one of: circular lists, arrays.
7. A cache processing apparatus applied to a distributed cache cluster, the cache cluster comprising a plurality of cache nodes, the cache nodes storing cache resources, the apparatus comprising:
a setting unit configured to set a cache refresh period for the cache nodes, wherein the cache refresh period is the same as an operating cycle of a time wheel, and the operating cycle of the time wheel comprises a plurality of time scales;
a first establishing unit configured to establish a correspondence between an associated target cache resource in at least one of the cache nodes and a target time scale on the time wheel; and
a first refreshing unit configured to refresh, when the time wheel runs to the target time scale, the associated target cache resource cached in the at least one cache node corresponding to the target time scale.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the cache processing method of any of claims 1-6.
9. A computer-readable storage medium having instructions stored thereon, wherein the instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the cache processing method of any one of claims 1-6.
10. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the cache processing method of any of claims 1-6.
CN202111437516.9A 2021-11-29 2021-11-29 Cache processing method and device, electronic equipment and storage medium Pending CN114168494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111437516.9A CN114168494A (en) 2021-11-29 2021-11-29 Cache processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114168494A true CN114168494A (en) 2022-03-11

Family

ID=80481565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111437516.9A Pending CN114168494A (en) 2021-11-29 2021-11-29 Cache processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114168494A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116719621A (en) * 2023-06-01 2023-09-08 上海聚水潭网络科技有限公司 Data write-back method, device, equipment and medium for mass tasks
CN116719621B (en) * 2023-06-01 2024-05-03 上海聚水潭网络科技有限公司 Data write-back method, device, equipment and medium for mass tasks

Similar Documents

Publication Publication Date Title
US11146502B2 (en) Method and apparatus for allocating resource
EP3595355B1 (en) Resource obtaining method, apparatus and system
WO2016026384A1 (en) Client page display method, device and system
US20140279851A1 (en) Fingerprint-Based, Intelligent, Content Pre-Fetching
CN109388626B (en) Method and apparatus for assigning numbers to services
CN105630819A (en) Cached data refreshing method and apparatus
CN102882974A (en) Method for saving website access resource by website identification version number
CN110909521A (en) Synchronous processing method and device for online document information and electronic equipment
CN113094430B (en) Data processing method, device, equipment and storage medium
CN111581239A (en) Cache refreshing method and electronic equipment
JP7366664B2 (en) Offline briefcase sync
CN114168494A (en) Cache processing method and device, electronic equipment and storage medium
CN114153986A (en) Knowledge graph construction method and device, electronic equipment and storage medium
CN111163336A (en) Video resource pushing method and device, electronic equipment and computer readable medium
CN111262907B (en) Service instance access method and device and electronic equipment
CN111245940B (en) Method and device for processing mobile communication signal data in communication module of Internet of things
CN110740418A (en) Method and device for generating user visit information
CN115801813A (en) Memory cache based activity query method and device
CN113760929A (en) Data synchronization method and device, electronic equipment and computer readable medium
CN110661857B (en) Data synchronization method and device
CN113760928A (en) Cache data updating system and method
CN112685075A (en) Gray scale distribution method and device, electronic equipment and computer readable medium
CN113283596B (en) Model parameter training method, server, system and storage medium
CN115119058B (en) Method, device, equipment and storage medium for notifying multimedia resource task
CN117632885B (en) Resource synchronization method, device, equipment and medium in backtracking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination