CN114500529A - Cloud edge cooperative caching method and system based on perceptible redundancy - Google Patents

Cloud edge cooperative caching method and system based on perceptible redundancy Download PDF

Info

Publication number
CN114500529A
Authority
CN
China
Prior art keywords
data
edge
edge server
caching
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111631200.3A
Other languages
Chinese (zh)
Inventor
王艳广
李一泠
王冲
李龙鸣
符传杰
施展
王夏菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Science And Technology Network Information Development Co ltd
Original Assignee
Aerospace Science And Technology Network Information Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Science And Technology Network Information Development Co ltd filed Critical Aerospace Science And Technology Network Information Development Co ltd
Priority to CN202111631200.3A
Publication of CN114500529A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cloud-edge cooperative caching method and system based on perceptible redundancy, relates to the technical field of cloud-edge cooperative caching, and aims to solve the problems of low hit rate and long access delay in existing cloud-edge cooperative caching methods. The cloud-edge cooperative caching method performs the following operations in each cycle: based on the data access information of the previous cycle, data in the data set are selectively cached in a plurality of edge servers using a cooperative edge data caching strategy based on perceptible redundancy; after a data access request is received, the cache space of each edge server is managed using a replacement strategy based on perceptible redundancy. By designing these two strategies, the extra overhead of obtaining missing data from an edge server is balanced against the overhead of obtaining it from the cloud server, which improves the space utilization of the edge servers, raises the system hit rate, and reduces data access delay.

Description

Cloud edge cooperative caching method and system based on perceptible redundancy
Technical Field
The invention relates to the technical field of cloud-edge cooperative caching, in particular to a cloud-edge cooperative caching method and system based on perceptible redundancy.
Background
Under a traditional centralized cloud architecture, when many users access the same content, the cloud data center must send identical data to each user, so massive duplicate traffic exists in the network and causes congestion. A natural idea is to enable caching on the edge side, close to the user, so that access requests for popular data are handled at the edge as far as possible, relieving the bandwidth pressure on the cloud server. To make up for this deficiency of the cloud architecture, the concept of edge computing arose: computing and storage capabilities are moved down to edge servers near the users, so that the network edge also has data processing and data caching capabilities. Edge caching technology focuses on using edge servers to cache the hot content of the cloud data center, thereby relieving the bandwidth pressure on the cloud server and reducing data access delay.
The management of the edge cache needs to consider both the cache placement policy and the cache replacement policy. The primary goal of the cache placement policy is to balance accesses within the edge cooperation domain against accesses to the cloud server, since access to the cloud server incurs greater data transfer overhead than data transfer between edge servers. To reduce the load on the cloud server as far as possible, data deduplication is generally performed across the edge servers in the cooperative cache domain. However, while data deduplication minimizes accesses to the cloud server, it also leaves a large number of data access requests that cannot be served by the local edge server. Given that the bandwidth resources of edge servers are limited, the data transmission overhead between edge servers is not negligible, and frequent transmission of popular data between edge servers generates large cooperation overhead that seriously affects overall performance. Extreme deduplication thus harms access performance, and edge storage systems require proper redundancy.
Cache replacement strategies have also been studied extensively: by designing a suitable replacement strategy around different influencing factors, the cache content update operation can be completed. However, existing cache replacement algorithms are only suitable for managing the cache space of a single server and do not fit the cache space management of multiple edge servers in a cooperative scenario. The main reason is that when edge servers make cache replacement decisions independently, each edge server tends to retain the most popular content, so popular data ends up cached in almost all edge servers. In this case, edge space utilization drops greatly, the system hit rate falls, and data access latency increases.
Therefore, a method and system that effectively reduce execution time and increase the cache hit rate are needed.
Disclosure of Invention
The invention aims to provide a cloud-edge cooperative caching method and system based on perceptible redundancy. By designing a cooperative edge data caching strategy based on perceptible redundancy and a replacement strategy based on perceptible redundancy, the extra overhead of obtaining missing data from an edge server is balanced against that of obtaining it from the cloud server, improving the space utilization of the edge servers, raising the system hit rate, and reducing data access delay.
In order to achieve the above purpose, the invention provides the following technical scheme:
a cloud edge cooperative caching method based on perceptible redundancy comprises the following steps:
in each cycle, the following operations are performed:
based on the data access information of the previous cycle, selectively caching data in the data set in a plurality of edge servers by using a cooperative edge data caching strategy based on perceptible redundancy, where all the edge servers form a cooperative cache domain;
after receiving a data access request, managing the cache space of each edge server by using a replacement strategy based on perceptible redundancy.
Compared with the prior art, the cloud-edge cooperative caching method based on perceptible redundancy provided by the invention performs, in each cycle, the operations above: selectively caching data from the data set in a plurality of edge servers (which together form a cooperative cache domain) based on the data access information of the previous cycle, and, after receiving a data access request, managing the cache space of each edge server with the replacement strategy based on perceptible redundancy. By designing these two strategies, the extra overhead of obtaining missing data from an edge server is balanced against that of obtaining it from the cloud server, improving the space utilization of the edge servers, raising the system hit rate, and reducing data access delay.
A cloud edge collaborative caching system based on perceptible redundancy comprises a cloud data center, an edge server cluster and a controller; the cloud data center and the edge server cluster are both in communication connection with the controller; the edge server cluster is a cooperative cache domain consisting of a plurality of edge servers; the cache space of each edge server is divided into a redundant area and an exclusive area;
the controller is used for executing the cloud edge collaborative caching method.
Compared with the prior art, the beneficial effects of the cloud-edge cooperative caching system provided by the invention are the same as those of the cloud-edge cooperative caching method in the above technical scheme, and are not repeated here.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic structural diagram of a cloud-edge collaborative caching system according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram illustrating a cache space division of an edge server according to embodiment 1 of the present invention;
fig. 3 is a schematic flowchart of the replacement strategy based on perceptible redundancy according to embodiment 1 of the present invention;
FIG. 4 is a popularity plot of data used in the experiments provided in example 1 of the present invention;
fig. 5 is a schematic performance diagram of three cache placement strategies under two loads according to embodiment 1 of the present invention;
fig. 6 is a schematic performance diagram of three cache replacement strategies under two loads according to embodiment 1 of the present invention;
fig. 7 is a flowchart illustrating a cloud edge collaborative caching method according to embodiment 2 of the present invention.
Detailed Description
For the convenience of clearly describing technical solutions of the embodiments of the present invention, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
Embodiment 1:
for infrastructure providers, the edge cache technology can improve the utilization rate of edge storage resources, and bring higher economic benefit; for an application provider, the edge caching technology can assist the edge provider in placing application data at an edge end, and storage cost is minimized on the premise of meeting established service quality requirements; for a user, the edge cache technology can optimize the data access experience of the user in the context of exponential increase of the data amount. In view of the above, edge caching is becoming a focus of research. The edge caching technology is generally used for Content caching in a Content Delivery Network (CDN), and compared with a CDN server, the edge server is closer to the user equipment and the deployment density far exceeds that of the CDN server, so that low-latency data access requirements of mass devices can be met by enabling caching at an edge end. However, the storage resources of the edge server are far less than those of the CDN server, and fine-grained cache space management needs to be performed to meet the low-latency data access requirements of a large number of users.
The management of the edge cache needs to consider both cache placement policy and cache replacement policy.
The more nodes participate in data deduplication, the higher the utilization of edge storage space; but cooperation between distant edge servers generates larger network overhead, which in turn increases data access delay. Some researchers propose an edge data deduplication strategy (EF-dedup) that divides the edge servers into disjoint rings, performs distributed deduplication within each ring, and sets the optimal number of nodes per ring to balance storage overhead against network overhead. Other researchers, noting the huge popularity gap between cold and hot data, combine popularity-based hot data caching with deduplication that maximizes space utilization, reducing accesses to the cloud server as far as possible. While data deduplication minimizes accesses to the cloud server, it also leaves a large number of data access requests that cannot be served by the local edge server. Given that the bandwidth resources of edge servers are limited, the data transmission overhead between edge servers is not negligible, and frequent transmission of popular data between edge servers generates large cooperation overhead that seriously affects overall performance.
Classic cache replacement strategies such as Least Recently Used (LRU) and Least Frequently Used (LFU) update cache contents by considering the data access time interval and the data access frequency, respectively. ARC considers both factors by introducing a ghost cache; GDS (Greedy Dual Size) introduces a data acquisition cost factor into cache updates; GDS-LF extends the GDS priority function to achieve multi-cost awareness; Qaca accounts for the IO request arrival rate and guarantees fairness for multiple users sharing a cache; MPC sorts data by popularity and evicts the least popular data when new data arrives. Observing that new content arrives at a high rate while most content receives only a few requests, so that such temporary content pollutes the cache space, researchers have also proposed a replacement strategy that predicts short-term popularity from data access intervals to reduce the cache update frequency. However, existing cache replacement algorithms are only suitable for managing the cache space of a single server and do not fit the cache space management of multiple edge servers in a cooperative scenario: when edge servers make replacement decisions independently, each tends to retain the most popular content, so popular data is cached on almost all edge servers, greatly reducing edge space utilization, lowering the system hit rate, and increasing data access latency.
To address these defects of the prior art, this embodiment provides a cloud-edge cooperative caching system based on perceptible redundancy, which guarantees space utilization by limiting the maximum available cache space for redundant data. As shown in fig. 1, the cloud-edge cooperative caching system includes a cloud data center, an edge server cluster, and a controller, where the cloud data center and the edge server cluster are both communicatively connected to the controller. In the edge cache system, a terminal device first requests data from its nearest local edge server; each edge server provides cache service for the terminal devices within its coverage, and the cache system consists of multiple mutually cooperating edge servers and a cloud data center. Each edge server can cache popular data, the edge servers together form a cooperative cache domain, and cached content can be shared between them. If the requested data is not cached in any edge server, the cloud data center must be accessed to add the missing data to an edge server.
The cloud data center is a new type of data center based on a cloud computing architecture; it physically aggregates a large number of servers and serves as the logic center of the network and the source of services. The edge server cluster is a cooperative cache domain composed of multiple edge servers, and the cache space of each edge server is divided into a redundant area and an exclusive area. The redundant area caches the data with the highest popularity so that access requests for popular data can, as far as possible, be answered directly by the local edge server, reducing the data transmission frequency between edge servers. The exclusive area caches data content not cached by the other edge servers, guaranteeing the utilization efficiency of the overall space; so that the data access performance of the edge servers stays close to one another, data is cached in the exclusive area of each edge server according to popularity rank.
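To make the partitioning concrete, the following is a minimal Python sketch of the cache space division described above; all names (CachedObject, EdgeServerCache, the capacity fields) are illustrative assumptions, not the patented implementation. The exclusive area corresponds to the C1 region and the redundant area to the C2 region introduced later.

```python
from dataclasses import dataclass, field

@dataclass
class CachedObject:
    data_id: str
    size: int              # data size in bytes
    is_exclusive: bool     # True if fetched from the cloud; False if from a neighbor
    priority: float = 0.0  # cache priority, updated on access

@dataclass
class EdgeServerCache:
    """Cache space split into an exclusive area (C1) and a redundant area (C2)."""
    c1_capacity: int                        # exclusive area capacity, bytes
    c2_capacity: int                        # redundant area capacity, bytes
    c1: dict = field(default_factory=dict)  # data_id -> CachedObject (exclusive only)
    c2: dict = field(default_factory=dict)  # data_id -> CachedObject (mixed)

    def c1_free(self) -> int:
        return self.c1_capacity - sum(o.size for o in self.c1.values())

    def c2_free(self) -> int:
        return self.c2_capacity - sum(o.size for o in self.c2.values())
```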
The controller is configured to execute the cloud edge collaborative caching method described in embodiment 2. The controller is used as a hub of cooperative caching of the cloud data center and the edge server and comprises an access information acquisition module, a cache profit prediction module, an edge data caching module, a request processing module and a cloud service access module.
The access information acquisition module is used for counting relevant information during the processing of data access requests, such as the access count of each data object in each edge server, the hit status of data access requests, and data access delay, to obtain the data access information. The delay generated in the data access process (i.e., data access delay) mainly consists of three parts: the data download delay L_down generated when a user accesses the local edge server, the data transmission delay L_edge between edge servers, and the data transmission delay L_cloud generated by accessing the cloud server, where L_down < L_edge < L_cloud.
The cache profit prediction module is used for calculating the global popularity, the local access preference, the access frequency, the cache profit and the cache profit threshold value based on the historical data access information of the previous period and the data access information counted in real time so as to be used for cache strategy adjustment.
The specific calculation method is as follows:
suppose that the data sets are arranged in descending order according to the access frequency as
Figure BDA0003441033110000051
Suppose that the jth data djThe frequency of occurrence of (d) corresponds to the Zipf distribution, then the global data popularity P of the jth datajThe calculation formula of (a) is as follows:
Figure BDA0003441033110000052
in the formula (1), α is a Zipf distribution index.
The local access preference p_i^j of the j-th data for the i-th edge server is calculated as:

p_i^j = \frac{r_i^j}{\sum_{k=1}^{N_d} r_i^k} \quad (2)

where r_i^j is the cumulative request amount of data d_j within the service range of the i-th edge server n_i in the previous cycle. The access frequency f_i^j of data d_j in edge server n_i is calculated as:

f_i^j = P_j + p_i^j \quad (3)
Using the access probability per unit byte to represent the cache benefit v_i^j of caching data d_j in edge server n_i, the cache benefit is calculated as:

v_i^j = \frac{f_i^j}{s_j} \quad (4)

where s_j is the data size of data d_j.
The cache benefit threshold pt_i of edge server n_i is calculated by formula (5) from the cache benefit v_k^{min} of the lowest-benefit cached data d_min in each edge server n_k and the data access delays, where N_n is the total number of edge servers. [The image of formula (5) is not legible in the source.]
If data d_j has already been cached on M nodes in the cooperative cache domain, the critical condition under which caching d_j in n_i yields a gain exceeding its cost is given by formula (6). [The images of the accompanying condition and of formula (6) are not legible in the source.]
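As a concrete illustration of formulas (1) through (4), here is a small Python sketch; the function names are illustrative assumptions, and formulas (5) and (6), whose images are not legible, are not implemented:

```python
def zipf_popularity(rank: int, alpha: float, n_data: int) -> float:
    """Global data popularity P_j, formula (1): Zipf-distributed by rank."""
    norm = sum(k ** -alpha for k in range(1, n_data + 1))
    return (rank ** -alpha) / norm

def local_preference(requests_i: dict, data_id: str) -> float:
    """Local access preference p_i^j, formula (2): the share of server n_i's
    cumulative requests in the previous cycle that targeted data d_j."""
    total = sum(requests_i.values())
    return requests_i.get(data_id, 0) / total if total else 0.0

def access_frequency(popularity: float, preference: float) -> float:
    """Access frequency f_i^j, formula (3): global popularity plus local preference."""
    return popularity + preference

def cache_benefit(frequency: float, size_bytes: int) -> float:
    """Cache benefit v_i^j, formula (4): access probability per unit byte."""
    return frequency / size_bytes

# Example: popularity of the 3rd-ranked object among 1000 under alpha = 0.74.
p3 = zipf_popularity(3, 0.74, 1000)
```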
the edge data caching module is used for performing caching and placing operation by utilizing a cooperative edge data caching strategy based on the perceptible redundancy, specifically, a caching and placing scheme is formulated based on information such as caching capacity, network state and data caching income of the edge server, and popular data are actively cached to the edge server so as to optimize data access experience of a user side. The Cooperative Edge Data Caching strategy based on the perceptible Redundancy is also called as RCEDC (Redundancy-aware Cooperative Edge Data Caching), and caches popular Data in an Edge server based on a greedy strategy to avoid calculation overhead generated by directly solving a hybrid shaping nonlinear programming problem, and selectively caches partial Data in a plurality of Edge servers to balance overhead generated by accessing the Edge server to obtain missing Data and overhead generated by accessing a cloud server to obtain the missing Data.
The RCEDC algorithm includes the following steps:
(1) Simplify the edge data caching problem into a linear programming problem, solve for an initial caching scheme, determine the proportion of redundant data, and determine the popularity rank of the data object with the lowest cache benefit in each edge server.
Let N = {n_1, n_2, ..., n_{N_n}} denote the edge servers and C = {c_1, c_2, ..., c_{N_n}} the cache capacity of each edge server, where N_n is the total number of edge servers. Let D = {d_1, d_2, ..., d_{N_d}} denote the data objects involved in the data access process and S = {s_1, s_2, ..., s_{N_d}} their sizes.
To solve for a proper redundant data ratio, RCEDC reduces the edge data caching problem to a linear programming problem:

min L(N_r)

subject to a cache capacity constraint on N_r and N_s, where N_r is the cacheable data volume of the redundant area, N_s is the cacheable data volume of the exclusive area, and L(N_r) is the resulting expected access delay. [The images of the constraint and of L(N_r) are not legible in the source; a hedged reconstruction of L(N_r) is sketched after step (4) below.]
(2) Using a greedy algorithm, arrange the data in descending order of cache benefit and add uncached data to the edge servers in turn, caching data with high cache benefit first.
(3) Since data d_j has different access heat at different edge servers n_i, i.e., the local access preference p_i^j differs, the edge servers are arranged in descending order of their local access preference for d_j, yielding an ordered edge server set N^j. Data is preferentially added to the edge server with the highest local access heat, i.e., the first server n_1^j in N^j: if the remaining cache space of n_1^j is larger than the data size s_j of the data object, d_j is cached in n_1^j. If the data is cached successfully, go to step (4); otherwise select the edge server with the next-best cache benefit for caching.
(4) Traverse the remaining edge servers n_m^j in N^j: if the cache benefit v_m^j of the data is greater than the cache benefit threshold pt_m and the remaining cache capacity of n_m^j is larger than the data object size s_j, cache the data there, then select the next edge server and attempt data caching.
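For reference, the objective L(N_r) of step (1), whose image is not legible, can plausibly be reconstructed from the three delay components defined earlier. The LaTeX sketch below assumes the N_r most popular objects are replicated on every server, the next N_n * N_s objects are held exclusively somewhere in the domain, and the rest reside only in the cloud; this exact form is an assumption, not confirmed by the source:

```latex
% Hedged reconstruction of the expected per-request delay as a function of N_r:
% local hits cost L_down, intra-domain hits cost L_edge, misses cost L_cloud.
L(N_r) = \sum_{j=1}^{N_r} P_j L_{down}
       + \sum_{j=N_r+1}^{N_r + N_n N_s} P_j L_{edge}
       + \sum_{j=N_r + N_n N_s + 1}^{N_d} P_j L_{cloud}
```

The greedy placement of steps (2) through (4) can then be sketched in Python; the callable parameters are illustrative stand-ins for the formulas named in the docstring, not the patented implementation:

```python
def rcedc_place(data_items, servers, benefit, threshold, preference):
    """Greedy sketch of RCEDC steps (2)-(4).

    data_items: list of (data_id, size), pre-sorted by descending cache benefit.
    servers:    dict server_id -> remaining cache capacity in bytes (mutated).
    benefit(s, d), threshold(s), preference(s, d): stand-ins for formulas
    (4), (5), and (2) respectively.
    """
    placement = {s: [] for s in servers}
    for data_id, size in data_items:
        # Step (3): try servers in descending order of local access preference.
        ordered = sorted(servers, key=lambda s: preference(s, data_id), reverse=True)
        first = None
        for s in ordered:
            if servers[s] >= size:
                placement[s].append(data_id)
                servers[s] -= size
                first = s
                break
        if first is None:
            continue  # no server has room for this object
        # Step (4): redundant copies where benefit exceeds the server's threshold.
        for s in ordered:
            if s != first and benefit(s, data_id) > threshold(s) and servers[s] >= size:
                placement[s].append(data_id)
                servers[s] -= size
    return placement
```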
The request processing module performs the cache replacement operation using the replacement strategy based on perceptible redundancy. Specifically, when a data access request arrives, the local edge server, the adjacent edge servers in the cooperative cache domain, and the cloud data center are searched in turn to obtain the data. If the data access request misses, the missing data needs to be added to the local edge server; if the remaining cache space is insufficient, the cached data with the lowest cache benefit is evicted.
The request processing module adopts a partition-oriented replacement strategy based on perceptible redundancy, called RPCRS (Redundancy-aware Partitioned Cache Replacement Strategy). To perceive the redundancy state of data, the strategy divides the cached data in an edge server into two types according to the acquisition source: data obtained from the cloud data center (exclusive data), which is cached in only a single edge server, and data obtained from an adjacent edge server (redundant data), which has already been cached elsewhere. As shown in fig. 2, the RPCRS strategy divides the cache space of each edge server into two regions, C1 and C2. The C1 region caches data content not cached by adjacent nodes and can hold only exclusive data; the C2 region caches popular data according to the local access heat of the edge server and can hold either exclusive or redundant data. According to the data access load at each node, the ratio of redundant to exclusive data can be adjusted adaptively to optimize data access performance.
The data addition process of the RPCRS strategy is as follows: when exclusive data enters the cache for the first time, it is added to the C1 region; when the cache space of the C1 region is insufficient, the data with the lowest priority is demoted to the C2 region and competes with redundant data for cache space. When exclusive data located in the C2 region is hit again, it is promoted back to the C1 region. Redundant data is added to the C2 region the first time it enters the cache and stays in the C2 region whether hit or not. The maximum available cache space for redundant data is bounded by the cache capacity of the C2 region; since the C2 region also holds exclusive data demoted from the C1 region, the two types compete for C2 space. The cache space actually used by redundant data therefore takes the C2 capacity as an upper bound, and the space ratio adapts to the actual access situation.
The RPCRS strategy calculates the priority of data in the C1 and C2 regions of an edge server as follows. The real-time access frequency rf_i^j of data d_j is calculated as:

rf_i^j = \frac{c_i^j(t_0, t_{cur})}{\sum_k c_i^k(t_0, t_{cur})} \quad (8)

where c_i^j(t_0, t_{cur}) denotes the cumulative access count of data d_j in edge server n_i from the start time t_0 of the current cycle to the current time t_{cur}, and the denominator is the sum of the cumulative access counts of all data cached in edge server n_i over the same time frame.
An exponentially weighted moving average f_i^j is used to represent the data access frequency:

f_i^j = \alpha \cdot rf_i^j + (1 - \alpha) \cdot \bar{f}_i^j \quad (9)

where \alpha is a weighting factor taking a decimal value between 0 and 1 and \bar{f}_i^j is the value of f_i^j before the update.
The cache priority of data d_j in the C1 region of edge server n_i is given by formula (10), and the priority of data d_j in the C2 region by formula (11); both apply a weight Weight, and since evicting exclusive data incurs more overhead, the Weight of exclusive data is set higher than that of redundant data in formula (12). [The images of formulas (10) through (12) are not legible in the source.]
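Formulas (8) and (9) can be sketched in Python as follows; since the images of the priority formulas (10) through (12) are not legible, only the frequency tracking is shown, and the EWMA form (weighting the newest sample by alpha) is an assumption consistent with the description:

```python
def real_time_frequency(accesses_to_dj: int, accesses_to_all_cached: int) -> float:
    """Real-time access frequency rf_i^j, formula (8): accesses to d_j on this
    server since the start of the current cycle, over accesses to all cached data."""
    return accesses_to_dj / accesses_to_all_cached if accesses_to_all_cached else 0.0

def ewma_frequency(previous_f: float, rf: float, alpha: float = 0.5) -> float:
    """Exponentially weighted moving average f_i^j, formula (9);
    alpha is a weighting factor between 0 and 1 (direction assumed)."""
    return alpha * rf + (1 - alpha) * previous_f
```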
Fig. 3 shows the flow of cache content updating under RPCRS. When a data access request is processed, the C1 region and C2 region of the local edge server and the cache spaces of the adjacent edge servers are checked in turn for the requested data. According to the hit situation of the data access request, the following cases are distinguished:
(1) The requested data hits in the C1 region of the local edge server. No new data is added; only the cache priority of the accessed data is updated.
(2) The requested data hits in the C2 region of the local edge server. Processing depends on the data type: if the hit data is exclusive data, it is not cached in any adjacent edge server and has been hit again within a short time, so it is promoted to the C1 region; if the remaining cache space in the C1 region is insufficient, the lowest-priority data in the C1 region is evicted to free enough space. If the hit data is redundant data, only the priority of the target data is updated; this prevents redundant data from preempting C1 cache space and thus enforces the limit on the maximum available cache space for redundant data.
(3) The requested data hits on an adjacent edge server. The adjacent edge server is accessed to add the missing data to the local server, its data type is changed to redundant data, and it is added to the C2 region. If the remaining cache space in the C2 region is insufficient, the lowest-priority data in the C2 region is evicted before the data is cached, and the data popularity of the new data is initialized.
(4) The requested data misses within the cooperative cache domain. The cloud server is accessed to obtain the missing data, its data type is set to exclusive data, and it is added to the C1 region. If the remaining C1 cache space cannot hold the new data, lower-priority data is demoted to the C2 region to free C1 space. Exclusive data updates its cache priority when demoted and then competes with redundant data for C2 cache space. Although exclusive data carries a higher priority weight than redundant data and dominates the competition for cache space, exclusive data with low access frequency that is not accessed for a long time is evicted in due course as the cache content is updated.
Under this replacement strategy, the cache space actually occupied by redundant data is bounded above by the C2 capacity, and the actual space ratio is adjusted dynamically according to the data access load. Each edge server can therefore adaptively adjust the space ratio of the two data types according to its actual access load, optimizing data access performance under different loads.
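The four cases above can be sketched as one request-handling routine. This reuses the illustrative CachedObject/EdgeServerCache sketch from earlier, reduces the space checks to a single eviction per miss for brevity (the method loops until enough space is freed), and uses a simple access counter as a stand-in for the priority formulas:

```python
def evict_lowest(area: dict) -> CachedObject:
    """Remove and return the lowest-priority object in a cache area."""
    victim_id = min(area, key=lambda k: area[k].priority)
    return area.pop(victim_id)

def handle_request(local: EdgeServerCache, neighbors: list, fetch_cloud, data_id: str):
    if data_id in local.c1:                          # case (1): hit in C1
        local.c1[data_id].priority += 1              # stand-in priority update
        return local.c1[data_id]
    if data_id in local.c2:                          # case (2): hit in C2
        obj = local.c2[data_id]
        obj.priority += 1
        if obj.is_exclusive:                         # re-hit exclusive data: promote
            del local.c2[data_id]
            if local.c1_free() < obj.size:
                demoted = evict_lowest(local.c1)     # demote lowest-priority C1 data
                local.c2[demoted.data_id] = demoted
            local.c1[data_id] = obj
        return obj                                   # redundant data stays in C2
    for n in neighbors:                              # case (3): hit on a neighbor
        hit = n.c1.get(data_id) or n.c2.get(data_id)
        if hit is not None:
            obj = CachedObject(data_id, hit.size, is_exclusive=False)
            if local.c2_free() < obj.size:
                evict_lowest(local.c2)
            local.c2[data_id] = obj
            return obj
    obj = fetch_cloud(data_id)                       # case (4): miss in the domain
    obj.is_exclusive = True
    if local.c1_free() < obj.size:
        demoted = evict_lowest(local.c1)             # free C1 space by demotion
        local.c2[demoted.data_id] = demoted
    local.c1[data_id] = obj
    return obj
```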
When the requested data is not cached in the cooperative cache domain, the cloud service access module is called to access the cloud data center, obtain the missing data, and add it to the local edge server.
Compared with the prior art, this embodiment divides data into redundant data and exclusive data according to whether it is cached in multiple edge servers, and limits the maximum available cache space of redundant data, thereby improving space utilization. In general, to address the edge storage problems in the current distributed storage field, this embodiment establishes a cloud-edge cooperative cache system centered on the cooperative edge data caching policy RCEDC based on perceptible redundancy and the partitioned cache replacement policy RPCRS based on perceptible redundancy.
Existing schemes have, on the one hand, focused on cache hit rate; increasing redundant data as much as possible leads to low space utilization and a serious shortage of edge cache server storage resources. On the other hand, schemes that pursue edge cooperation and set aside no redundant cache cause frequent accesses to the cloud and greatly increase access delay. Experiments show that the cloud-edge cooperative cache system provided by this embodiment adapts to the current network environment, effectively reduces execution time, and improves the cache hit rate. The effectiveness of this embodiment was demonstrated by the following experiments.
the experiment used an edge computation simulator named SimEdgeIntel. In the embodiment, a test experiment is performed in a Linux environment, a server is configured with 2 Intel Xeon E5-2620 CPUs and a 128GB memory; the software used and its version numbers are shown in table 1.
TABLE 1 software version
[Table 1 image not reproduced in the source]
The experimental load selected was one million anonymous data access requests to the ten top-ranked sites in the United States in 2016 (hereinafter cdn-request-18), which has been used in several research studies. The data object popularity distribution under this load is shown in fig. 4. The load contains about 450,000 accessed data objects; data access requests concentrate on a few popular data items, and the content popularity of data ranked beyond 1000 is close to 0, i.e., most data is cold data with only a few accesses, consistent with the Zipf popularity characteristic. Fitting the data popularity distribution curve with a Zipf distribution yields a Zipf index of 0.74. Based on the fitting result, two million data access requests were generated as a synthetic load (hereinafter zipf-0.74). The experimental loads used are shown in Table 2:
TABLE 2 Experimental loads
[Table 2 image not reproduced in the source]
As shown in Table 3, the data transmission delay between edge servers is set to 20 ms and the cloud server access delay to 200 ms. To study the influence of cache capacity on system performance, the cache capacity is set to 1%-10% of the total data volume; to study the influence of the number of edge servers, the number of edge servers is set to 2-6; to study the influence of the edge network state, the data transmission delay between edge servers is varied over 20-80 ms.
TABLE 3 Experimental parameters Table
[Table 3 image not reproduced in the source]
To evaluate the performance of the cache policies, the performance evaluation indexes are as follows:
1. Cache hit rate: the ratio of the number of hit user requests to the total number of requests; a request hits when the data access request can be processed by the local edge server or an adjacent edge server;
2. Average delay: the ratio of the total data access delay to the total number of requests; average delay is an important index that users perceive directly and is the primary optimization target;
3. Offloaded traffic: the total data traffic processed at the edge. An important role of edge caching is to handle user requests for popular data at the edge to overcome the bandwidth bottleneck of the cloud architecture, so offloaded traffic is widely used as an evaluation index for edge caching strategies in existing research;
4. Execution time: since the RCEDC policy proposed in this embodiment aims to obtain a cache placement scheme within limited time, the execution times of the algorithms are compared in the cache placement experiments.
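The first three indexes can be computed from a request trace as in this small Python sketch; the trace record layout is an illustrative assumption:

```python
def evaluate(trace):
    """trace: list of (hit_at_edge: bool, delay_ms: float, size_bytes: int).

    Returns cache hit rate, average delay (ms), and offloaded traffic (bytes);
    execution time is measured separately around the placement algorithm itself.
    """
    total = len(trace)
    hits = sum(1 for hit, _, _ in trace if hit)
    hit_rate = hits / total                                   # index 1
    avg_delay = sum(delay for _, delay, _ in trace) / total   # index 2
    offloaded = sum(size for hit, _, size in trace if hit)    # index 3
    return hit_rate, avg_delay, offloaded
```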
A comparison experiment is performed using the cache placement strategy Greedy built into SimEdgeIntel, a distributed cooperative cache strategy (hereinafter Contrast), and the proposed cooperative edge data caching strategy RCEDC based on perceptible redundancy. Greedy aims to maximize the total cache benefit of the cooperative cache domain and places the data with the highest cache benefit at each edge server, maximizing the total benefit of the edge-side cached data. Four indexes are compared under the three cache placement strategies: hit rate, average delay, offloaded traffic, and execution time. The cache capacity of each edge server is set to C = 0.75 GB, the number of edge servers to N_n = 3, and the inter-edge-server data transmission delay to L_edge = 20 ms. The experiments were carried out under the cdn-request-18 and zipf-0.74 loads, respectively, and the results are shown in fig. 5.
As shown in figs. 5(a) and 5(b) (hit-rate comparisons under the cdn-request-18 and zipf-0.74 loads, respectively), under both loads the hit rates of Contrast and RCEDC are similar and much higher than Greedy's, and the difference between the average hit rates of Contrast and RCEDC is at most 0.3%.
As shown in figs. 5(c) and 5(d) (offloaded-traffic comparisons under the cdn-request-18 and zipf-0.74 loads, respectively), under both loads the offloaded traffic of Contrast and RCEDC is similar and much higher than Greedy's; under the zipf-0.74 load, the offloaded traffic increases further because the number of requests doubles.
As shown in figs. 5(e) and 5(f) (average-delay comparisons under the two loads), under the cdn-request-18 load RCEDC reduces the average delay by 5.64% and 16.00% compared with Contrast and Greedy, respectively; under the zipf-0.74 load, RCEDC reduces the average delay by 5.63% and 14.43%, respectively.
As shown in figs. 5(g) and 5(h) (execution-time comparisons under the two loads), under both loads the execution time of RCEDC is similar to Greedy's and lower than Contrast's; under the cdn-request-18 load, the execution time of RCEDC is 81.46% lower than Contrast's.
The above experimental results show that, compared with Contrast, RCEDC markedly reduces average delay and greatly shortens algorithm execution time; compared with Greedy, RCEDC clearly improves hit rate, offloaded traffic, and average delay. That is, RCEDC obtains a better cache placement scheme within limited time.
Next, comparison experiments were performed using LRU, GDS-LF, and RPCRS, where LRU updates cache contents based on the data access time interval. The hit rate, average delay, and offloaded traffic under the three cache replacement strategies are compared using the same configuration and loads as above. The results are shown in fig. 6.
As shown in figs. 6(a) and 6(b) (hit-rate comparisons under the cdn-request-18 and zipf-0.74 loads, respectively), RPCRS outperforms LRU and GDS-LF on hit rate: under the cdn-request-18 load, the hit rate of RPCRS is 18.85% and 25.20% higher than that of GDS-LF and LRU, respectively, and all three cache strategies achieve higher hit rates under the zipf-0.74 load.
As shown in figs. 6(c) and 6(d) (offloaded-traffic comparisons under the two loads), RPCRS outperforms LRU and GDS-LF on offloaded traffic: under the cdn-request-18 load, RPCRS improves offloaded traffic by 66.11% and 88.11% over GDS-LF and LRU, respectively, and the offloaded traffic of all three strategies further increases under the zipf-0.74 load.
As shown in figs. 6(e) and 6(f) (average-delay comparisons under the two loads), RPCRS outperforms LRU and GDS-LF on average delay: under the cdn-request-18 load, RPCRS reduces the average delay by 12.70% and 15.97% compared with GDS-LF and LRU, respectively, and the average delay of all three strategies is further reduced under the zipf-0.74 load.
Analyzing the hit rate and offloaded traffic of the three replacement strategies: the RPCRS strategy partitions cached data according to whether it is cached in multiple edge servers, and it markedly reduces data redundancy within the cooperative cache domain by limiting the maximum available cache space of redundant data, so RPCRS significantly improves the cache hit rate compared with the other two strategies. Accordingly, the offloaded traffic under RPCRS is also much higher than under LRU and GDS-LF, because more data access requests can be handled within the cooperative cache domain. Analyzing the average delay: the data access process contains a large number of large objects that are accessed only once; caching them evicts much popular data and, when cache space is small, severely affects the total data access delay. The RPCRS and GDS-LF strategies take the influence of data size on system performance into account, so unpopular large objects are quickly removed from the cache and their impact is limited; LRU, however, considers neither data size differences nor access frequency differences and cannot prevent unpopular large objects from polluting the cache space.
Compared with GDS-LF, RPCRS improves edge space utilization by limiting the amount of redundant data, reducing accesses to the cloud server; at the same time, popular data is cached in the redundant area of each edge server, so most data access requests can be handled by the local edge server, reducing the data transmission frequency between edge servers. RPCRS can therefore further reduce the average delay compared with GDS-LF.
The above compares the performance of RCEDC, RPCRS, and the latest research work on hit rate, average delay, offloaded traffic, and execution time. The experimental results indicate that RCEDC reduces the average delay by up to 25.40% and shortens the execution time by up to 81.46%, while RPCRS improves the cache hit rate by up to 29.15% and reduces the average delay by up to 16.80%.
Embodiment 2:
the embodiment is configured to provide a cloud-edge collaborative caching method based on perceptual redundancy, which works based on the cloud-edge system caching system described in embodiment 1, and as shown in fig. 7, the cloud-edge collaborative caching method includes:
in each cycle, the following operations are performed:
S1: based on the data access information of the previous cycle, selectively caching data in the data set in a plurality of edge servers using the cooperative edge data caching strategy based on perceptible redundancy, where all the edge servers form a cooperative cache domain.
S1 may include:
(1) Calculate the caching conditions based on the data access information of the previous cycle. The caching conditions comprise the local access preference of each piece of data in the data set for each edge server, the cache benefit of caching each piece of data in each edge server, and the cache benefit threshold of each edge server; the data access information comprises the access frequency of each piece of data, the cumulative request amount of each piece of data within the service range of each edge server, and the data access delay.
Specifically, the data access information is acquired by the access information acquisition module, and the caching conditions are calculated from the data access information by the cache benefit prediction module.
More specifically, calculating the caching conditions based on the data access information of the previous cycle includes:
1) For each piece of data, the local access preference of the data for each edge server is calculated with formula (2) of embodiment 1 from the cumulative request amount of the data within the service range of each edge server, yielding the local access preference of each piece of data in the data set for each edge server.
2) The global data popularity of the data is calculated with formula (1) of embodiment 1 based on the access frequency of the data; the cache benefit of caching the data in each edge server is then calculated from the global data popularity and the local access preference of the data for each edge server, yielding the cache benefit of each piece of data in each edge server.
Specifically, for each edge server, the global data popularity and the local access preference of the data for that edge server are summed with formula (3) of embodiment 1 to obtain the access frequency of the data in the edge server; the cache benefit of caching the data in the edge server is then calculated with formula (4) of embodiment 1 from this access frequency and the data size.
3) The minimum cache benefit for each edge server is determined from the cache benefits of the data cached in it, and the cache benefit threshold of each edge server is calculated with formula (5) of embodiment 1 from all the minimum values and the data access delay.
(2) Arrange the data in the data set in descending order of cache benefit and selectively cache them in the plurality of edge servers in turn based on the caching conditions.
Specifically, selectively caching the data in the plurality of edge servers in sequence based on the caching condition may include:
1) selecting first data as data to be cached;
2) according to local access preference of the data to be cached on each edge server, performing descending order arrangement on all edge servers to obtain an ordered edge server set; selecting a first edge server in the edge server set as an edge server to be stored;
3) judging whether the residual cache space of the edge server to be stored is larger than the data size of the data to be cached or not to obtain a first judgment result;
4) if the first judgment result is negative, selecting the next edge server in the edge server set as the edge server to be stored, and returning to the step of judging whether the residual cache space of the edge server to be stored is larger than the data size of the data to be cached;
5) if the first judgment result is yes, caching the data to be cached in the edge server to be stored, and caching the data to be cached in an edge server which is except the edge server to be stored and meets the preset condition; the preset conditions are that the cache income of the data to be cached in the edge server is greater than the cache income threshold value of the edge server, and the residual cache space of the edge server is greater than the data size of the data to be cached;
6) judging whether the data in the data set are cached or not;
7) if not, selecting the next data behind the data to be cached as the data to be cached, and returning to the step of performing descending order on all edge servers according to the local access preference of the data to be cached on each edge server.
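As a usage illustration, the placement steps above can be driven with the rcedc_place sketch from embodiment 1; all values below are made up, and the lambdas are stand-ins for the formulas they name:

```python
servers = {"n1": 1_000_000, "n2": 1_000_000, "n3": 1_000_000}  # free bytes per server
data = [("d1", 200_000), ("d2", 150_000), ("d3", 400_000)]     # sorted by cache benefit

placement = rcedc_place(
    data, servers,
    benefit=lambda s, d: 0.8,                                # stand-in for formula (4)
    threshold=lambda s: 0.5,                                 # stand-in for formula (5)
    preference=lambda s, d: {"n1": 3, "n2": 2, "n3": 1}[s],  # stand-in for formula (2)
)
print(placement)  # each object lands on n1 first, then redundantly where space allows
```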
S2: and after receiving the data access request, managing the cache space of each edge server by using a replacement strategy based on perceptible redundancy.
S2 may include:
(1) after receiving the data access request, judging whether target data requested to be acquired by the data access request is located in an exclusive area of the local edge server or not, and obtaining a second judgment result; the local edge server is an edge server covering a terminal sending a data access request; the cache space of each edge server is divided into an exclusive area and a redundant area; the exclusive area is used for caching the exclusive data which is not cached by the rest edge servers; the redundant area is used for caching the exclusive data and the redundant data cached by other edge servers;
(2) if the second judgment result is yes, updating the priority of the target data;
(3) if the second judgment result is negative, judging whether the target data is positioned in a redundant area of the local edge server or not to obtain a third judgment result;
(4) if the third judgment result is yes, judging whether the target data is exclusive data; if yes, upgrading and caching the target data to an exclusive area; if not, updating the priority of the target data;
the upgrading and caching the target data to the exclusive area may include: judging whether the residual cache space of the exclusive area is larger than the data size of the target data or not; if yes, upgrading and caching the target data to an exclusive area; if not, removing the exclusive data with the lowest priority in the exclusive area, and returning to the step of judging whether the residual cache space of the exclusive area is larger than the data size of the target data.
(5) If the third judgment result is negative, judging whether the target data is positioned at the adjacent edge server or not to obtain a fourth judgment result; the adjacent edge servers are the rest edge servers except the local edge server;
(6) if the fourth judgment result is yes, accessing the adjacent edge server to add the target data serving as redundant data to a redundant area of the local edge server;
adding the target data as redundant data to the redundant area of the local edge server may include: judging whether the residual cache space of the redundant area is larger than the data size of the target data or not; if yes, caching the target data into a redundant area; if not, removing the data with the lowest priority in the redundant area, and returning to the step of judging whether the residual cache space of the redundant area is larger than the data size of the target data.
(7) If the fourth judgment result is negative, the cloud data center is accessed, and the target data is used as exclusive data to be added to the exclusive area of the local edge server.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
While the invention has been described in conjunction with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made therein without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include such modifications and variations.

Claims (10)

1. A cloud edge cooperative caching method based on perceptual redundancy is characterized by comprising the following steps:
in each cycle, the following operations are performed:
selectively caching data in the data set in a plurality of edge servers by utilizing a cooperative edge data caching strategy based on the perceptible redundancy based on the data access information of the previous period; all the edge servers form a cooperative cache domain;
and after receiving the data access request, managing the cache space of each edge server by using a replacement strategy based on perceptible redundancy.
2. The cloud-edge cooperative caching method according to claim 1, wherein the selectively caching data in the data set in a plurality of edge servers by using a cooperative edge data caching policy based on perceptual redundancy based on the data access information of the previous cycle specifically comprises:
calculating a cache condition based on the data access information of the previous period; the caching condition comprises local access preference of each data in the data set to each edge server, caching income of each data cached in each edge server and caching income threshold of each edge server; the data access information comprises the access frequency of each data, the accumulated request quantity of each data in the service range of each edge server and the data access delay;
and performing descending order arrangement on the data in the data set according to cache benefits, and selectively caching the data in a plurality of edge servers in sequence based on the cache conditions.
3. The cloud-edge cooperative caching method according to claim 2, wherein the calculating a caching condition based on the data access information of the previous period specifically comprises:
for each piece of data, calculating the local access preference of the data for each edge server according to the accumulated request amount for the data within the service range of that edge server, thereby obtaining the local access preference of every piece of data in the data set for every edge server;
calculating the global data popularity of the data based on the access frequency of the data; and calculating the caching benefit of caching the data in each edge server according to the global data popularity and the local access preference of the data for that edge server, thereby obtaining the caching benefit of caching every piece of data in every edge server;
and determining, for each edge server, the minimum value among the caching benefits of the data cached in that edge server, and calculating the caching benefit threshold of each edge server according to all of the minimum values and the data access delay.
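Claim 3 fixes which quantities are computed but not their closed forms. The following sketch therefore rests on labelled assumptions: local access preference taken as a data item's share of one server's accumulated requests, global popularity as its share of all accesses, and a threshold that blends each server's minimum caching benefit with the domain-wide minimum using an edge-to-cloud delay ratio; none of these exact formulas is disclosed.

def local_preference(requests, data_id, server_id):
    """Assumed definition: data_id's share of server_id's accumulated
    request amount in the previous period."""
    total = sum(requests[server_id].values()) or 1
    return requests[server_id].get(data_id, 0) / total

def global_popularity(freq, data_id):
    """Assumed definition: data_id's share of all accesses."""
    total = sum(freq.values()) or 1
    return freq.get(data_id, 0) / total

def benefit_thresholds(min_benefits, d_edge, d_cloud):
    """Placeholder combining all per-server minima with the access
    delays, as claim 3 requires; the exact blend is an assumption."""
    floor = min(min_benefits.values())   # domain-wide minimum benefit
    weight = d_edge / d_cloud            # in (0, 1]; neighbor hits are cheap
    return {s: floor + weight * (m - floor)
            for s, m in min_benefits.items()}

# Example: two servers, two data items.
requests = {"s1": {"a": 30, "b": 10}, "s2": {"a": 5, "b": 45}}
print(local_preference(requests, "a", "s1"))             # 0.75
print(benefit_thresholds({"s1": 0.2, "s2": 0.6}, 5, 50)) # s1: 0.2, s2: ~0.24
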
4. The cloud-edge cooperative caching method according to claim 3, wherein the calculating the caching benefit of caching the data in each edge server according to the global data popularity and the local access preference of the data for each edge server specifically comprises:
for each edge server, summing the global data popularity and the local access preference of the data for that edge server, to obtain the access frequency of the data at that edge server;
and calculating the caching benefit of caching the data in that edge server according to the access frequency of the data at the edge server and the data size of the data.
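Read as code, claim 4 is a two-step computation. The division by data size below is an assumption (benefit per unit of cache space, a common choice in knapsack-style placement); the claim itself only says the benefit is calculated according to the access frequency and the data size.

def caching_benefit(popularity, preference, size):
    frequency = popularity + preference   # claim 4: sum the two terms
    return frequency / size               # assumed: benefit per unit size

# A small, locally hot item can out-rank a large, globally popular one.
print(caching_benefit(0.30, 0.10, 4_000_000))   # 1e-07
print(caching_benefit(0.05, 0.20, 50_000))      # 5e-06
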
5. The cloud-edge cooperative caching method according to claim 2, wherein the sequentially and selectively caching the data in the plurality of edge servers based on the caching condition specifically comprises:
selecting the first piece of data as the data to be cached;
sorting all of the edge servers in descending order of the local access preference of the data to be cached for each edge server, to obtain an ordered edge server set; and selecting the first edge server in the edge server set as the candidate edge server;
judging whether the remaining cache space of the candidate edge server is larger than the data size of the data to be cached, to obtain a first judgment result;
if the first judgment result is negative, selecting the next edge server in the edge server set as the candidate edge server, and returning to the step of judging whether the remaining cache space of the candidate edge server is larger than the data size of the data to be cached;
if the first judgment result is affirmative, caching the data to be cached in the candidate edge server, and additionally caching the data to be cached in each edge server, other than the candidate edge server, that satisfies a preset condition; the preset condition being that the caching benefit of the data to be cached in that edge server is greater than the caching benefit threshold of that edge server, and that the remaining cache space of that edge server is larger than the data size of the data to be cached;
judging whether all of the data in the data set have been processed for caching;
and if not, selecting the next piece of data after the current data to be cached as the new data to be cached, and returning to the step of sorting all of the edge servers in descending order of local access preference.
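The whole of claim 5 reduces to a double loop: data items in descending benefit order, servers in descending preference order. Below is a runnable sketch under the assumption that preferences, benefits, and thresholds arrive as plain dictionaries; all identifiers are illustrative.

def place(data_items, servers, preference, benefit, threshold):
    """data_items: (data_id, size) pairs, pre-sorted by caching benefit
    in descending order (claim 2). servers: server_id -> free space.
    preference/benefit: keyed by (data_id, server_id) pairs."""
    placement = {s: [] for s in servers}
    for data_id, size in data_items:
        ranked = sorted(servers,
                        key=lambda s: preference[(data_id, s)],
                        reverse=True)
        # Primary copy: most-preferring server with enough free space.
        primary = next((s for s in ranked if servers[s] >= size), None)
        if primary is None:
            continue                     # no server can hold this item
        servers[primary] -= size
        placement[primary].append(data_id)
        # Redundant copies: benefit must clear the server's threshold.
        for s in ranked:
            if (s != primary and servers[s] >= size
                    and benefit[(data_id, s)] > threshold[s]):
                servers[s] -= size
                placement[s].append(data_id)
    return placement

servers = {"s1": 100, "s2": 100}
pref = {("a", "s1"): 0.7, ("a", "s2"): 0.3}
thr = {"s1": 0.5, "s2": 0.2}
print(place([("a", 60)], servers, pref, dict(pref), thr))
# {'s1': ['a'], 's2': ['a']}: primary copy on s1, redundant copy on s2

Note the two placement tiers mirror the claim: a guaranteed primary copy at the most-preferring server with room, and optional redundant copies gated by each target server's caching benefit threshold.
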
6. The cloud-edge cooperative caching method according to claim 1, wherein the managing the cache space of each edge server by using the replacement policy based on perceptible redundancy after receiving a data access request specifically comprises:
after receiving a data access request, judging whether the target data requested by the data access request is located in the exclusive area of the local edge server, to obtain a second judgment result; the local edge server being the edge server covering the terminal that sent the data access request; the cache space of each edge server being divided into an exclusive area and a redundant area; the exclusive area being used for caching exclusive data, i.e., data not cached by any other edge server; the redundant area being used for caching redundant data, i.e., copies of exclusive data cached by other edge servers;
if the second judgment result is affirmative, updating the priority of the target data;
if the second judgment result is negative, judging whether the target data is located in the redundant area of the local edge server, to obtain a third judgment result;
if the third judgment result is affirmative, judging whether the target data is exclusive data; if so, upgrading the target data and caching it in the exclusive area; if not, updating the priority of the target data;
if the third judgment result is negative, judging whether the target data is located in an adjacent edge server, to obtain a fourth judgment result; the adjacent edge servers being the edge servers other than the local edge server;
if the fourth judgment result is affirmative, accessing the adjacent edge server and adding the target data, as redundant data, to the redundant area of the local edge server;
and if the fourth judgment result is negative, accessing the cloud data center and adding the target data, as exclusive data, to the exclusive area of the local edge server.
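The four judgments of claim 6 form a lookup cascade: local exclusive area, local redundant area, neighboring servers, then the cloud. A sketch with a hypothetical EdgeServer class; priority bookkeeping is reduced to a stub, and the fetched bytes are taken from a cloud dictionary even on the neighbor path purely for brevity.

class EdgeServer:
    def __init__(self):
        self.exclusive, self.redundant = {}, {}
    def bump_priority(self, d):
        pass                             # e.g. LRU/LFU bookkeeping
    def promote(self, d):
        self.exclusive[d] = self.redundant.pop(d)
    def add_redundant(self, d, v):
        self.redundant[d] = v
    def add_exclusive(self, d, v):
        self.exclusive[d] = v

def handle_request(local, neighbors, cloud, data_id):
    if data_id in local.exclusive:                     # second judgment
        local.bump_priority(data_id)
        return "exclusive hit"
    if data_id in local.redundant:                     # third judgment
        if not any(data_id in n.exclusive or data_id in n.redundant
                   for n in neighbors):                # now exclusive data
            local.promote(data_id)                     # claim 7 upgrade
            return "promoted to exclusive area"
        local.bump_priority(data_id)
        return "redundant hit"
    if any(data_id in n.exclusive or data_id in n.redundant
           for n in neighbors):                        # fourth judgment
        local.add_redundant(data_id, cloud[data_id])
        return "neighbor fetch into redundant area"
    local.add_exclusive(data_id, cloud[data_id])       # global miss
    return "cloud fetch into exclusive area"
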
7. The cloud-edge cooperative caching method according to claim 6, wherein the upgrading the target data and caching it in the exclusive area specifically comprises:
judging whether the remaining cache space of the exclusive area is larger than the data size of the target data;
if so, upgrading the target data and caching it in the exclusive area;
and if not, evicting the exclusive data with the lowest priority from the exclusive area, and returning to the step of judging whether the remaining cache space of the exclusive area is larger than the data size of the target data.
8. The cloud-edge cooperative caching method according to claim 6, wherein the adding the target data, as redundant data, to the redundant area of the local edge server specifically comprises:
judging whether the remaining cache space of the redundant area is larger than the data size of the target data;
if so, caching the target data in the redundant area;
and if not, evicting the data with the lowest priority from the redundant area, and returning to the step of judging whether the remaining cache space of the redundant area is larger than the data size of the target data.
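Claims 7 and 8 share the same judge-evict-rejudge loop, differing only in which area is trimmed and in whether the surviving item is an upgraded copy. A sketch of that shared loop, with area as a hypothetical dict mapping a data id to a (priority, size) pair:

def make_room(area, capacity, incoming_size):
    """Evict lowest-priority entries until incoming_size fits; mirrors
    the judge-evict-rejudge loops of claims 7 and 8."""
    used = sum(size for _, size in area.values())
    evicted = []
    while capacity - used < incoming_size and area:
        victim = min(area, key=lambda d: area[d][0])   # lowest priority
        used -= area[victim][1]
        evicted.append(victim)
        del area[victim]
    return evicted

# Free space in a 100-unit area for a 30-unit item.
area = {"a": (3, 50), "b": (1, 40), "c": (2, 10)}
print(make_room(area, capacity=100, incoming_size=30))  # ['b']
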
9. A cloud-edge cooperative caching system based on perceptible redundancy, characterized by comprising a cloud data center, an edge server cluster, and a controller; the cloud data center and the edge server cluster both being communicatively connected to the controller; the edge server cluster being a cooperative cache domain composed of a plurality of edge servers; and the cache space of each edge server being divided into a redundant area and an exclusive area;
the controller being configured to execute the cloud-edge cooperative caching method according to any one of claims 1 to 8.
10. The cloud-edge cooperative caching system according to claim 9, wherein the controller comprises an access information acquisition module, a caching benefit prediction module, an edge data caching module, a request processing module, and a cloud service access module;
the access information acquisition module being configured to collect statistics on relevant information while data access requests are processed, to obtain the data access information;
the caching benefit prediction module being configured to calculate the local access preference, the caching benefit, and the caching benefit threshold based on the data access information;
the edge data caching module being configured to perform the cache placement operation by using the cooperative edge data caching policy based on perceptible redundancy;
and the request processing module being configured to perform the cache replacement operation by using the replacement policy based on perceptible redundancy.
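Structurally, the claim-10 controller is five collaborating components. Below is a skeleton under that reading; every class and method name is invented here, and the cloud service access module is given the fetch-through role implied by claim 6.

class AccessInfoCollector:      # statistics while requests are served
    def record(self, request): ...

class BenefitPredictor:         # preference, benefit, threshold (claim 3)
    def compute(self, access_info): ...

class EdgeDataCacher:           # periodic cooperative placement (claim 5)
    def place(self, caching_condition): ...

class RequestProcessor:         # perceptible-redundancy replacement (claim 6)
    def handle(self, request): ...

class CloudServiceAccessor:     # fetch-through to the cloud data center
    def fetch(self, data_id): ...

class Controller:
    def __init__(self):
        self.collector = AccessInfoCollector()
        self.predictor = BenefitPredictor()
        self.cacher = EdgeDataCacher()
        self.processor = RequestProcessor()
        self.cloud = CloudServiceAccessor()
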
CN202111631200.3A 2021-12-28 2021-12-28 Cloud edge cooperative caching method and system based on perceptible redundancy Pending CN114500529A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111631200.3A CN114500529A (en) 2021-12-28 2021-12-28 Cloud edge cooperative caching method and system based on perceptible redundancy

Publications (1)

Publication Number Publication Date
CN114500529A (en) 2022-05-13

Family

ID=81496702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111631200.3A Pending CN114500529A (en) 2021-12-28 2021-12-28 Cloud edge cooperative caching method and system based on perceptible redundancy

Country Status (1)

Country Link
CN (1) CN114500529A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923486A (en) * 2010-07-23 2010-12-22 华中科技大学 Method for avoiding data migration in hardware affair memory system
CN105049326A (en) * 2015-06-19 2015-11-11 清华大学深圳研究生院 Social content caching method in edge network area
WO2019095402A1 (en) * 2017-11-15 2019-05-23 东南大学 Content popularity prediction-based edge cache system and method therefor
US20190260845A1 (en) * 2017-12-22 2019-08-22 Soochow University Caching method, system, device and readable storage media for edge computing
CN111782612A (en) * 2020-05-14 2020-10-16 北京航空航天大学 File data edge caching method in cross-domain virtual data space
CN112887992A (en) * 2021-01-12 2021-06-01 滨州学院 Dense wireless network edge caching method based on access balance core and replacement rate
CN113115362A (en) * 2021-04-16 2021-07-13 三峡大学 Cooperative edge caching method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Biyao: "Research on Computation Offloading and Edge Caching Methods in Edge Networks", China Master's Theses Full-text Database, Information Science and Technology, no. 5, 15 May 2021 (2021-05-15), pages 1-64 *
WANG Junling: "Research on Cooperation-Based Edge Caching Strategies", China Master's Theses Full-text Database, Information Science and Technology, no. 2, 15 February 2021 (2021-02-15), pages 1-70 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114980212A (en) * 2022-04-29 2022-08-30 中移互联网有限公司 Edge caching method and device, electronic equipment and readable storage medium
CN114980212B (en) * 2022-04-29 2023-11-21 中移互联网有限公司 Edge caching method and device, electronic equipment and readable storage medium
WO2024188037A1 (en) * 2023-03-10 2024-09-19 华为云计算技术有限公司 Function caching method and system
CN116320004A (en) * 2023-05-22 2023-06-23 北京金楼世纪科技有限公司 Content caching method and caching service system
CN116320004B (en) * 2023-05-22 2023-08-01 北京金楼世纪科技有限公司 Content caching method and caching service system
CN117714475A (en) * 2023-12-08 2024-03-15 江苏云工场信息技术有限公司 Intelligent management method and system for edge cloud storage
CN117714475B (en) * 2023-12-08 2024-05-14 江苏云工场信息技术有限公司 Intelligent management method and system for edge cloud storage

Similar Documents

Publication Publication Date Title
CN114500529A (en) Cloud edge cooperative caching method and system based on perceptible redundancy
Zhong et al. A deep reinforcement learning-based framework for content caching
WO2019119897A1 (en) Edge computing service caching method, system and device, and readable storage medium
CN110213627A Streaming media cache allocation device based on multi-cell user mobility and working method thereof
US6901484B2 (en) Storage-assisted quality of service (QoS)
EP3089039B1 (en) Cache management method and device
US20110107030A1 (en) Self-organizing methodology for cache cooperation in video distribution networks
CN112218337A Cache policy decision method in mobile edge computing
CN104166630B Prediction-based optimized cache placement method in content-centric networking
CN109982104A Mobility-aware video prefetching and cache replacement decision method in mobile edge computing
CN112702443B (en) Multi-satellite multi-level cache allocation method and device for satellite-ground cooperative communication system
CN115884094B Multi-scenario cooperative cache optimization method based on edge computing
CN113282786B (en) Panoramic video edge collaborative cache replacement method based on deep reinforcement learning
CN113094392A (en) Data caching method and device
WO2024207834A1 (en) Multi-level cache adaptive system and strategy based on machine learning
CN115361710A (en) Content placement method in edge cache
CN110913430A (en) Active cooperative caching method and cache management device for files in wireless network
CN112631789B (en) Distributed memory system for short video data and video data management method
CN111447506B (en) Streaming media content placement method based on delay and cost balance in cloud edge environment
Wang et al. Agile Cache Replacement in Edge Computing via Offline-Online Deep Reinforcement Learning
CN115484314B Recommendation-enabled edge cache optimization method in mobile edge computing networks
Leconte et al. Adaptive replication in distributed content delivery networks
Kabir et al. Reconciling cost and performance objectives for elastic web caches
CN112954026B Multi-constraint content cooperative cache optimization method based on edge computing
CN113297152B (en) Method and device for updating cache of edge server of power internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination