CN110933692B - Optimized cache system based on edge computing framework and application thereof - Google Patents

Optimized cache system based on edge computing framework and application thereof

Info

Publication number
CN110933692B
CN110933692B
Authority
CN
China
Prior art keywords
cache
edge node
edge
local
resource
Prior art date
Legal status
Active
Application number
CN201911212414.XA
Other languages
Chinese (zh)
Other versions
CN110933692A (en)
Inventor
张海霞 (Zhang Haixia)
顿凯 (Dun Kai)
袁东风 (Yuan Dongfeng)
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201911212414.XA
Publication of CN110933692A
Application granted
Publication of CN110933692B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 Routing a service request depending on the request content or context

Abstract

The invention relates to an optimized cache system based on an edge computing framework, and its application. The optimized cache system comprises a plurality of local area networks, all connected to a cloud center through the Internet. Each local area network is an independent edge node cluster comprising, connected in sequence from top to bottom, a router, a plurality of switches, a plurality of regional edge servers, and a plurality of local edge nodes (hosts or other storage devices). The edge node cluster is connected to the Internet through its router. The regional edge server controls the local edge nodes and stores the hot content of the whole local area network; the local edge nodes store locally hot content. The system builds on the traditional three-layer framework of cloud, edge node and user terminal, and optimizes the distribution structure of the edge nodes.

Description

Optimized cache system based on edge computing framework and application thereof
Technical Field
The invention relates to an optimized cache system based on an edge computing framework and application thereof, belonging to the technical field of mobile communication.
Background
By 2021, global mobile data traffic is expected to reach 587 EB, roughly 122 times the level of 2011. With mobile network traffic growing this rapidly, the pressure on mobile backhaul links is enormous and bandwidth resources are extremely tight.
To cope with the explosive growth of mobile network traffic, academia and industry have made many efforts, among which mobile edge computing and mobile edge caching are two of the most important directions. Many terminal devices at the edge of a mobile network have a certain amount of storage and computing capability. Once content is cached at the edge of the mobile network, users can obtain it nearby, avoiding repeated transmission of the same content and relieving pressure on the backhaul network; at the same time, edge caching reduces the latency of user requests, further improving the user's network experience. Mobile Edge Computing (MEC) is considered the best solution for meeting the delay requirements of delay-sensitive services in the 5G Radio Access Network (RAN). Its main idea is to deploy computing and storage capacity at the RAN edge so that content and processing power can be provided quickly according to user requirements. Efficient content caching and delivery is a key issue for the success of this technology.
Chinese patent document CN108551472A discloses a content cache optimization method based on edge computing, which uses the regularity of user movement together with optimization theory to improve the quality of service of user services under 5G communication. It first subdivides a given area according to the regularity of user movement trajectories, then optimizes file caching using optimization theory, and finally determines the user caching mode by comparing file transmission times with user requirements. However, this patent has the following drawbacks: it considers only the placement of cache contents, not the entire life cycle of the cache. Moreover, planning placement solely according to individual routes may leave the servers on main traffic roads, or in areas where people move in concentration, frequently overloaded and in need of expansion and upgrading, while a large amount of storage elsewhere sits idle and resources are wasted.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an optimized cache system based on an edge computing framework;
the invention also provides the application of the optimized cache system based on the edge computing framework;
Interpretation of terms:
An edge server is a front-end server that is in direct contact with users.
the technical scheme of the invention is as follows:
An optimized cache system based on an edge computing framework comprises a plurality of local area networks, all connected to a cloud center through the Internet.
Each local area network is an independent edge node cluster; each cluster comprises, connected in sequence from top to bottom, a router, a primary switch, a secondary switch and a regional edge server, and the secondary switch connects downward to local edge nodes and user terminal devices.
The router connects upward to the Internet, and the primary switch connects downward to the secondary switch and the regional edge server.
The router connects the local area network to the Internet; the primary and secondary switches handle networking within the local area network; the regional edge server stores the hot content of the whole local area network and manages the data tags of the whole local area network; the local edge nodes store locally hot content; and the user terminal devices send http requests for the corresponding Internet resources.
The optimized cache system based on the edge computing framework builds on the traditional three-layer framework of cloud, edge node and user terminal, and optimizes the distribution structure of the edge nodes. Horizontally, edge nodes are grouped based on the spatial differences in their distribution: regions such as schools, residential areas and business areas are naturally concentrated and clearly separated in space. Because of personnel mobility, different areas also show clear temporal differences in their requests for network resources. Based on these clear spatio-temporal differences, all nodes of the same functional area (a school, a residential area, etc.) are grouped into one cluster, which facilitates cache processing and the later realization of cooperative caching. Vertically, based on the structure of the wired network, the traditional flat distribution is replaced by a vertical organization of cache servers into regional edge servers and local edge nodes: the local edge nodes are closer to users and store locally hot content, while the regional edge server acts both as a control node and as the store for hot content across the whole local area network.
The horizontal optimization gives the work peak periods of different clusters clear time differences, and the relative concentration of industries (residential areas and office areas are each largely gathered in their own zones) gives different clusters clear spatial differences; this pronounced spatio-temporal difference is the precondition for realizing cooperative caching. The vertical optimization gives different edge nodes a clear division of labor; moreover, with limited node storage, the vertical structure effectively re-classifies cache content by popularity: content popular across the whole area network is stored on the regional edge server, which reduces duplicate caching within a cluster and improves the utilization of the cluster's storage space.
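The two-tier cluster structure described above can be sketched as plain data structures. The following minimal Python model is illustrative only; every class and attribute name is an assumption, not taken from the patent:

```python
# Hypothetical model of one edge node cluster (one LAN) in the two-tier design.

class LocalEdgeNode:
    """Stores locally hot content for the users behind one secondary switch."""
    def __init__(self, node_id, capacity_mb):
        self.node_id = node_id
        self.capacity_mb = capacity_mb
        self.cache = {}          # URI -> cached content

class RegionalEdgeServer:
    """Control node: stores LAN-wide hot content and manages all data tags."""
    def __init__(self):
        self.cache = {}          # URI -> LAN-wide hot content
        self.data_tags = {}      # URI -> metadata for every cached resource

class EdgeNodeCluster:
    """One LAN: router -> primary switch -> secondary switches -> nodes."""
    def __init__(self, regional_server, local_nodes):
        self.regional_server = regional_server
        self.local_nodes = local_nodes

cluster = EdgeNodeCluster(RegionalEdgeServer(),
                          [LocalEdgeNode("node-1", 512),
                           LocalEdgeNode("node-2", 512)])
print(len(cluster.local_nodes))  # 2
```

In this sketch the regional edge server holds the tag database for the whole cluster, while each local edge node keeps only its own cache, mirroring the division of labor described above.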
According to an optimization of the invention, the regional edge server installs Redis as a database storing the data tags of cached resources and the reachability tables of adjacent edge node clusters.
The data tag records information about a cached resource: the URI of the resource, its file size, its access counts, its storage path, whether it is a temporary cache, and the generation time of the tag. The access counts comprise the total access count across the whole local area network and the access count at each local edge node. The LAN-wide total access-count threshold p0 is 125% of the number of users in the local area network, and the access-count threshold pi of the i-th local edge node is the number of users in that node's network; whether a resource meets the caching standard is judged against p0 and pi.
The reachability table is keyed by the target edge node cluster's IP and records the hop count from the current edge node cluster to the target cluster and the target cluster's usage condition, which comprises the CPU, memory and disk usage rates and the current number of user connections.
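As a rough illustration, the two Redis records and the threshold check might look like the following. Every field name, sample value, and the helper `meets_cache_standard` are assumptions for illustration, not the patent's actual schema:

```python
# Hypothetical shape of a data tag, keyed by resource URI.
data_tag = {
    "uri": "http://example.com/video.mp4",      # URI of the cached resource
    "size_bytes": 10_485_760,                   # file size
    "access_counts": {"total": 130, "node-1": 90, "node-2": 40},
    "storage_path": ["regional-server"],        # where copies currently live
    "temporary": 0,                             # 1 = temporary (collaborative) cache
    "created_at": "2019-12-01T08:00:00",        # tag generation time
}

# Hypothetical reachability table: keyed by target cluster IP, as in the text.
reachability = {
    "10.1.0.1": {"hops": 3, "cpu": 0.35, "mem": 0.50, "disk": 0.40,
                 "connections": 120},
}

def meets_cache_standard(tag, p0, p_i):
    """A resource qualifies when its LAN-wide total reaches p0,
    or a single node's count reaches that node's threshold p_i."""
    counts = tag["access_counts"]
    lan_hot = counts["total"] >= p0
    local_hot = {n: c >= p_i[n] for n, c in counts.items() if n != "total"}
    return lan_hot, local_hot

lan_hot, local_hot = meets_cache_standard(
    data_tag, p0=125, p_i={"node-1": 80, "node-2": 60})
print(lan_hot, local_hot)  # True {'node-1': True, 'node-2': False}
```

Here p0 = 125 would correspond to a LAN of 100 users (125% of 100), and each p_i to the user count behind that node.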
The cache placement and use method of the optimized cache system based on the edge computing framework distinguishes two periods, the working peak period and the off-peak period, and comprises the following steps:
(1) judge whether the current edge node cluster is in its working peak period or off-peak period; during the peak period the system provides service using the cache, so go to step (4); otherwise the system places caches during the off-peak period, so go to step (2);
(2) the optimized cache system automatically polls the database and selects the data tags whose access counts have reached a threshold — either the LAN-wide total reaching p0, or some local edge node's count reaching pi — to form the set A of resources to be cached, then goes to step (3);
(3) screen the data tags in set A: if the LAN-wide total access count reaches threshold p0, cache the resource on the regional edge server and, after updating the storage-path index of the resource, remove the tag from set A; if the access count of some local edge node reaches threshold pi, cache the resource on that local edge node and remove the tag from set A after updating its storage path; repeat until set A is empty;
(4) the regional edge server, acting as gateway and dispatch server, intercepts all access requests of the local area network it is responsible for, obtains the URI of the requested resource from the http request, and queries the database by that URI for a data tag of the resource; if none exists, go to step (5), otherwise go to step (6);
(5) forward the http request to the corresponding web server on the Internet to complete the service; meanwhile, update the database by creating a data tag for the requested resource, recording its related information such as the resource URI and access count;
(6) check the queried data tag to judge whether the resource is actually cached: if the record shows no cached copy (the storage path is empty), the resource is not cached, go to step (7); otherwise it is cached, go to step (8);
(7) forward the http request to the corresponding web server on the Internet and update the resource's data tag, incrementing its access count by 1;
(8) extract the storage path from the corresponding data tag and select the local edge node closest to the requesting IP to respond to the request.
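The request path of steps (4) to (8) can be sketched as below. The function and parameter names are hypothetical, and origin fetching and cache serving are stubbed out as callables:

```python
# Sketch of the regional edge server's dispatch logic: look up the tag
# database, then either forward to the origin site or serve from a cached copy.

def handle_request(uri, tags, fetch_from_origin, serve_from):
    tag = tags.get(uri)
    if tag is None:                         # step (5): first sighting of this URI
        tags[uri] = {"uri": uri, "count": 1, "paths": []}
        return fetch_from_origin(uri)
    if not tag["paths"]:                    # step (7): known but not yet cached
        tag["count"] += 1
        return fetch_from_origin(uri)
    nearest = tag["paths"][0]               # step (8): pick the nearest copy
    return serve_from(nearest, uri)

tags = {"/a": {"uri": "/a", "count": 5, "paths": ["node-1"]}}
result = handle_request("/a", tags,
                        lambda u: "origin:" + u,
                        lambda n, u: f"cache:{n}:{u}")
print(result)  # cache:node-1:/a
```

In a real deployment the "nearest copy" choice in step (8) would compare the storage-path entries against the requester's IP; the sketch simply takes the first entry.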
This cache placement and use method first uses the cache to offload user requests, reducing the pressure on backhaul links in the network and greatly improving users' quality of service; second, hierarchical cache placement effectively classifies and stores resources by popularity, reducing user latency and saving a large amount of storage space.
By caching resources on fog nodes, the invention makes full use of the idle storage and computing capacity of network edge devices, shortens the transmission distance of packets in http requests, mitigates website access delay, and improves user experience. For example, in tests where multiple users requested a 10 MB resource over the wide area network without the method described here, the average delay was about 9.4 seconds; requesting the same resource with the method of the invention, the average delay was about 1.8 seconds, a delay reduction of roughly 80%.
Within a fog node cluster, the invention uses a multi-copy cache mechanism with a weighted load-balancing algorithm over CPU usage, memory usage, IO usage and bandwidth usage to select the lowest-load node to store the copy that provides service. This balances load across the fog node cluster, avoids the slow cache reads caused by concentrating many read requests on a single node, and also reduces latency.
The cache cleaning and replacement method of the optimized cache system based on the edge computing framework comprises the following steps:
A. calculate the load parameter Wi(t) of each local edge node in real time; if Wi(t) > 0.6, trigger the cleaning mechanism and go to step B; otherwise do not trigger it;
B. clean the cache as follows:
first, query the storage paths in the screened data tags and select the cache resources whose storage path contains more than three addresses, including the address of the edge node to be cleaned, to form the set H of caches to be cleaned;
check the temporary-cache flag bit of each data tag in set H; when the flag of a resource's data tag is set to 1, directly delete the cached resource and its data tag;
check the storage path of each data tag in set H: if the path contains the address of the regional edge server, delete the remaining path entries and their corresponding cached copies; if the path does not contain the regional edge server and there are more than 3 storage addresses, back the cache up on the regional edge server, delete the copy on the corresponding local edge node, and update the data tag on the regional edge server;
return to step A until the load parameter Wi(t) of every local edge node is no greater than 0.6.
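Step B can be sketched as below. The tag layout and all names are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the cleaning pass for one overloaded node: select over-replicated
# resources, then drop temporary copies, prune redundant paths, or back up
# to the regional edge server.

REGIONAL = "regional-server"

def clean_node(node, tags):
    # Resources with more than three copies that include the overloaded node.
    candidates = [t for t in tags.values()
                  if node in t["paths"] and len(t["paths"]) > 3]
    for tag in candidates:
        if tag["temporary"] == 1:
            # Temporary (collaborative) copies are deleted outright.
            del tags[tag["uri"]]
        elif REGIONAL in tag["paths"]:
            # A regional copy exists: keep only it, drop the redundant copies.
            tag["paths"] = [REGIONAL]
        else:
            # No regional copy: back up there, drop this node's copy.
            tag["paths"].remove(node)
            tag["paths"].append(REGIONAL)

tags = {"/a": {"uri": "/a", "temporary": 0,
               "paths": ["node-1", "node-2", "node-3", "node-4"]}}
clean_node("node-1", tags)
print(tags["/a"]["paths"])  # ['node-2', 'node-3', 'node-4', 'regional-server']
```

A full implementation would repeat this pass (returning to step A) until every node's load parameter drops to 0.6 or below.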
This cache cleaning and replacement method effectively frees storage space: caches with low popularity or redundant copies are removed, the precious and limited storage space is used more efficiently, and user experience improves.
A cache generally serves responses during the peak period of user requests and performs updates and placement of resources during idle periods. A large number of repeated requests may therefore occur during peak periods, and repeatedly fetching the same resources from the corresponding websites on the public network would greatly degrade user experience; hence a collaborative caching mechanism is needed.
Because the edge node clusters of the invention are divided horizontally by service area, they exhibit good spatio-temporal heterogeneity: residential-quarter service peaks concentrate in the evening, while business-quarter service peaks concentrate in the daytime. The two kinds of region thus have clearly staggered peaks, which makes the region-coordinated caching optimization of the invention's caching system possible.
Preferably, according to the invention, the load parameter Wi(t) is calculated according to formula (I):
Wi(t) = (0.2*R_CPUi + 0.25*R_MEMi + 0.4*R_DISKi + 0.15*R_IOi) * (1 + R_user/K)    (I)
in formula (I), Wi(t) is the load parameter at a time t, R_CPUi is the CPU usage rate of the i-th local edge node, R_MEMi is its memory usage rate, R_DISKi is its disk usage, R_IOi is its disk IO usage, R_user is its current number of user connections, and K is a user correction coefficient.
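Formula (I) translates directly into code. The sketch below also shows the least-loaded-node selection that the multi-copy mechanism calls for; the function and variable names, and the sample inputs, are assumptions:

```python
# Load parameter Wi(t) from formula (I); the weights are those given in the text.
def load_parameter(r_cpu, r_mem, r_disk, r_io, r_user, k):
    """Weighted resource usage, scaled by the user-connection term (1 + R_user/K)."""
    return (0.2 * r_cpu + 0.25 * r_mem + 0.4 * r_disk + 0.15 * r_io) \
        * (1 + r_user / k)

# Selecting the least-loaded replica node, as the multi-copy mechanism describes.
nodes = {
    "node-1": load_parameter(0.5, 0.4, 0.6, 0.3, 50, 1000),
    "node-2": load_parameter(0.2, 0.3, 0.2, 0.1, 10, 1000),
}
least_loaded = min(nodes, key=nodes.get)
print(least_loaded)  # node-2
```

With all usage rates at 1 and no user connections the weighted sum is exactly 1.0, since the four weights 0.2 + 0.25 + 0.4 + 0.15 sum to 1.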
The regional collaborative caching method of the optimized cache system based on the edge computing framework comprises the following steps:
a. set the corresponding working time of each edge node cluster; if, during a peak period, the number of requests for some cache resource S reaches the threshold V, where V is 2 times the number of user terminals in the local area network, go to step b;
b. the reachability table records the hop count and idle state needed for the current edge node cluster to communicate with nearby clusters. Take as the target cluster the one with the fewest hops among all clusters in the reachability table whose load parameter is below 0.4, and have the regional edge server of the current cluster send a request for resource S to the target cluster;
c. after receiving the request, the target cluster searches for a data tag whose temporary-cache flag bit is 1; if the corresponding requested resource exists, go to step d, otherwise go to step e;
d. return the corresponding cached resource in response to the request;
e. create a new data tag, set its temporary-cache flag bit to 1, select the local edge node with the lowest load parameter for caching, update the data tag's storage path, return the file name and cache address of the resource to the requesting edge node cluster, and update the database on the regional edge server of that cluster.
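Steps a to e can be sketched as follows, under assumed data shapes for the reachability table and data tags (all names are illustrative):

```python
# Sketch of regional collaborative caching: when a burst of requests hits an
# uncached resource during a peak period, borrow a nearby idle cluster as a
# temporary cache.

def pick_target_cluster(reachability, load_limit=0.4):
    """Step b: among clusters whose load is under the limit, take the fewest hops."""
    idle = {ip: info for ip, info in reachability.items()
            if info["load"] < load_limit}
    return min(idle, key=lambda ip: idle[ip]["hops"]) if idle else None

def request_from_target(target_tags, uri, low_load_node):
    """Steps c-e: return an existing temporary copy, or create one."""
    tag = target_tags.get(uri)
    if tag is not None and tag["temporary"] == 1:        # step d
        return tag["paths"][0]
    target_tags[uri] = {"uri": uri, "temporary": 1,      # step e
                        "paths": [low_load_node]}
    return low_load_node

reach = {"10.1.0.1": {"hops": 3, "load": 0.2},
         "10.2.0.1": {"hops": 5, "load": 0.1}}
target = pick_target_cluster(reach)
print(target)  # 10.1.0.1
```

Choosing the fewest-hops cluster among sufficiently idle ones keeps the borrowed copy close to the overloaded cluster while avoiding targets that are themselves near their peak.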
The regional collaborative cache is designed to remedy a shortcoming of the cache system. The cache system aims to cache hot content during off-peak periods so that, during peak periods, the cache offloads traffic and greatly reduces network pressure. But it cannot handle a sudden burst of accesses to the same uncached resource during working peak hours; such a burst would cause network congestion and greatly degrade user experience. With regional collaborative caching, similar idle edge clusters can effectively absorb the sudden surge of traffic, greatly improving user experience.
The invention has the beneficial effects that:
1. The cache system effectively utilizes the existing network framework and idle storage resources. First, it greatly shortens the data backhaul link, improves the network's quality of service, and shortens the time users wait for a network response: for example, in a simulation where multiple users repeatedly request a 13 MB resource from the Internet, the average waiting delay is 12 seconds, while with the method of the invention the average delay is 2 seconds — an obvious reduction in latency.
2. The cache cleaning and replacement strategy comprehensively considers usage conditions such as disk occupancy, CPU usage and memory usage, the nodes' running state, and the number of user connections, so that invalid and low-popularity caches are effectively cleaned and the limited storage space is effectively and fully utilized.
3. Through collaborative caching, the invention can effectively serve users during the system's working peak periods: sudden massive accesses to a resource are offloaded, and user latency improves noticeably as the number of requests grows. For example, in a simulation where multiple users request a never-cached 5 MB picture resource during a peak period, the delay of user requests falls from about 4.6 seconds to about 1.7 seconds as the number of requests increases — a 63% reduction.
4. The cache optimization method and system based on the edge computing framework are a full-stack process that considers the cache comprehensively, from placement through use to cleaning. Compared with common base-station caching, the two-layer cache system can be built with existing storage devices such as host servers, without purchasing upgraded base-station equipment, so the cost is lower and resource utilization is higher.
5. Compared with a Content Delivery Network (CDN), the cache of the invention is no longer placed by the operator and then pushed to users; instead, it caches the content users care about from the user's perspective, thereby achieving the offloading effect.
6. Compared with other existing caching systems, the invention introduces cache coordination among different edge clusters, effectively reducing the pressure that edge cluster service puts on the network during peak periods.
Drawings
FIG. 1 is a block diagram of an optimized cache system based on an edge computing framework according to the present invention;
FIG. 2 is a schematic flow chart of a cache placement and usage method of the optimized cache system based on the edge computing framework according to the present invention;
FIG. 3 is a schematic flowchart of a cache cleaning replacement method for an optimized cache system based on an edge computing framework according to the present invention;
FIG. 4 is a schematic flow chart of a region collaborative caching method of an optimized caching system based on an edge computing framework according to the present invention;
fig. 5 is a schematic time delay diagram of an edge node cluster 2 in an optimized cache system based on an edge computing framework according to embodiment 6 of the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples of the specification, but is not limited to them.
Example 1
An optimized cache system based on an edge computing framework comprises a plurality of local area networks, all connected to a cloud center through the Internet.
Each local area network is an independent edge node cluster; each cluster comprises, connected in sequence from top to bottom, a router, a primary switch, a secondary switch and a regional edge server, and the secondary switch connects downward to local edge nodes and user terminal devices.
The router connects upward to the Internet, and the primary switch connects downward to the secondary switch and the regional edge server.
The router connects the local area network to the Internet; the primary and secondary switches handle networking within the local area network; the regional edge server stores the hot content of the whole local area network and manages the data tags of the whole local area network; the local edge nodes store locally hot content; and the user terminal devices send http requests for the corresponding Internet resources.
The optimized cache system based on the edge computing framework builds on the traditional three-layer framework of cloud, edge node and user terminal, and optimizes the distribution structure of the edge nodes. Horizontally, edge nodes are grouped based on the spatial differences in their distribution: regions such as schools, residential areas and business areas are naturally concentrated and clearly separated in space. Because of personnel mobility, different areas also show clear temporal differences in their requests for network resources. Based on these clear spatio-temporal differences, all nodes of the same functional area (a school, a residential area, etc.) are grouped into one cluster, which facilitates cache processing and the later realization of cooperative caching. Vertically, based on the structure of the wired network, the traditional flat distribution is replaced by a vertical organization of cache servers into regional edge servers and local edge nodes: the local edge nodes are closer to users and store locally hot content, while the regional edge server acts both as a control node and as the store for hot content across the whole local area network.
The horizontal optimization gives the work peak periods of different clusters clear time differences, and the relative concentration of industries (residential areas and office areas are each largely gathered in their own zones) gives different clusters clear spatial differences; this pronounced spatio-temporal difference is the precondition for realizing cooperative caching. The vertical optimization gives different edge nodes a clear division of labor; moreover, with limited node storage, the vertical structure effectively re-classifies cache content by popularity: content popular across the whole area network is stored on the regional edge server, which reduces duplicate caching within a cluster and improves the utilization of the cluster's storage space.
Example 2
The optimized cache system based on the edge computing framework of embodiment 1 is characterized in that:
the regional edge server installs Redis as a database storing the data tags of cached resources and the reachability tables of adjacent edge node clusters;
the data tag records information about a cached resource: the URI of the resource, its file size, its access counts, its storage path, whether it is a temporary cache, and the generation time of the tag. The access counts comprise the total access count across the whole local area network and the access count at each local edge node. The LAN-wide total access-count threshold p0 is 125% of the number of users in the local area network, and the access-count threshold pi of the i-th local edge node is the number of users in that node's network; whether a resource meets the caching standard is judged against p0 and pi.
The reachability table is keyed by the target edge node cluster's IP and records the hop count from the current edge node cluster to the target cluster and the target cluster's usage condition, which comprises the CPU, memory and disk usage rates and the current number of user connections.
Example 3
In embodiment 2, as shown in fig. 2, the cache placement and use method of the optimized cache system based on the edge computing framework divides operation into two periods, a working peak period and an off-peak period, and includes the following steps:
(1) judging whether the current edge node cluster is in its working peak period; if so, the system serves requests from the cache during the peak period and proceeds to step (4); otherwise the system places caches during the off-peak period and proceeds to step (2);
(2) the optimized cache system automatically polls the database and selects the data labels whose access counts reach a threshold, namely labels whose total access count in the whole local area network reaches p0 or whose access count at some local edge node reaches pi, forming the set A of resources to be cached; proceed to step (3);
(3) screening the data labels in set A: if the total access count in the whole local area network reaches the threshold p0, the cache resource is cached on the regional edge server, the storage-path index of the resource is updated, and the label is removed from set A; if the access count at a local edge node reaches the threshold pi, the resource is cached on that local edge node and the label is removed from set A after its storage path is updated; this repeats until set A is empty;
(4) the regional edge server acts as gateway and scheduling server: it intercepts all access requests of the local area network it is responsible for, obtains the URI of the requested resource from the http request, and queries the database by that URI for a data label of the resource; if none exists, step (5) is executed, otherwise step (6);
(5) the http request is forwarded to the corresponding website server in the internet to complete the service; meanwhile, the database is updated by creating a data label for the requested cache resource, recording its relevant information such as the resource URI and access count;
(6) the storage path of the queried data label is checked to judge whether the resource is cached: if the path is empty, the resource is not yet cached and step (7) follows; otherwise the resource is already cached and step (8) follows;
(7) the http request is forwarded to the corresponding website server in the internet and the data label of the cache resource is updated, incrementing its access count by 1;
(8) the storage path information is extracted from the corresponding data label and the local edge node closest to the IP of the http request is selected to respond to the request.
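The two phases above can be sketched as follows. This is a minimal illustration under assumed names: the functions, the dict-based data labels, and the "regional" marker for the regional edge server are not from the patent, and step (8)'s "closest node" selection is simplified.

```python
def place_caches(candidates, p0, p_i):
    """Off-peak placement (steps (2)-(3)). candidates is the set A of
    data-label dicts; p0 is the LAN-wide threshold, p_i maps a local edge
    node id to its per-node threshold pi."""
    for tag in list(candidates):
        if tag["total_hits"] >= p0:
            tag["paths"].append("regional")   # LAN-wide hot: regional edge server
            candidates.remove(tag)
            continue
        for node, hits in tag["node_hits"].items():
            if hits >= p_i.get(node, float("inf")):
                tag["paths"].append(node)     # locally hot: that local edge node
                candidates.remove(tag)
                break

def handle_request(uri, db, client_node, fetch_from_origin):
    """Peak-period request flow (steps (4)-(8)). db maps URI -> data label."""
    tag = db.get(uri)
    if tag is None:
        # step (5): first sighting -- create a label, serve from the origin site
        db[uri] = {"uri": uri, "total_hits": 1, "node_hits": {}, "paths": []}
        return fetch_from_origin(uri)
    if not tag["paths"]:
        # step (7): known but not cached yet -- count the hit, serve from origin
        tag["total_hits"] += 1
        return fetch_from_origin(uri)
    # step (8): cached -- "closest" is simplified to the client's own node
    # when it holds a copy, else the first recorded copy
    target = client_node if client_node in tag["paths"] else tag["paths"][0]
    return ("cache", target)
```

A resource is thus served from the origin until its access count crosses a threshold during an off-peak placement pass, after which requests are answered from the edge.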
With this cache placement and use method, first, the cache offloads user requests, relieving pressure on the backhaul link and markedly improving the user's quality of service; second, hierarchical cache placement classifies and stores resources by popularity, reducing user latency and saving a large amount of storage space.
By caching resources on fog nodes, the invention fully exploits the idle storage and computing capacity of network edge devices, shortens the transmission distance of data packets in http requests, mitigates the delay of website access, and improves the user experience. For example, in tests where multiple users requested a 10M resource over the wide area network without the method described here, the average delay was about 9.4 seconds; requesting the same resource with the method of the invention, the average delay was about 1.8 seconds, a delay reduction of about 80%.
The invention uses a multi-copy cache mechanism within the fog node cluster and a weighted load-balancing algorithm over CPU usage, memory usage, IO usage and bandwidth usage to select the node with the lowest load to store a copy and provide service. This balances load across the fog node cluster, avoids the slow cache reads caused by concentrating a large number of read requests on one node, and further reduces latency.
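A sketch of that replica selection follows. The patent names the four load factors but not their weights, so equal weights are assumed here purely for illustration; the function name and dict layout are likewise assumptions.

```python
def pick_replica_node(nodes, weights=None):
    """Return the id of the least-loaded node holding a replica.
    nodes: node id -> usage rates in [0, 1] for 'cpu', 'mem', 'io', 'bw'.
    weights: the patent does not specify them, so equal weights are assumed."""
    if weights is None:
        weights = {"cpu": 0.25, "mem": 0.25, "io": 0.25, "bw": 0.25}
    def load(stats):
        # weighted sum of the four usage rates named in the description
        return sum(w * stats[k] for k, w in weights.items())
    return min(nodes, key=lambda n: load(nodes[n]))
```

Directing each read to the currently least-loaded copy is what spreads a burst of requests for one hot resource across the cluster instead of concentrating it on a single node.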
Example 4
The cache cleaning and replacement method of the optimized cache system based on the edge computing framework in embodiment 2, shown in fig. 3, includes the following steps:
A. calculating the load parameter Wi(t) of each local edge node in real time; if Wi(t) is greater than 0.6, the cleaning mechanism is triggered and step B follows, otherwise the cleaning mechanism is not triggered; the load parameter Wi(t) is calculated by formula (Ⅰ):
Wi(t) = (0.2×R_CPUi + 0.25×R_MEMi + 0.4×R_DISKi + 0.15×R_IOi) × (1 + R_user/K)  (Ⅰ)
in formula (Ⅰ), Wi(t) is the load parameter at a certain time t, R_CPUi is the CPU usage rate of the ith local edge node, R_MEMi is its memory usage rate, R_DISKi is its disk usage rate, R_IOi is its disk IO usage, R_user is the current number of user connections, and K is a user correction coefficient.
B. The method for cleaning the cache comprises the following steps:
Firstly, the storage paths in the screened data labels are queried, and the cache resources whose storage paths contain more than three addresses, including the address of the edge node to be cleaned, are selected to form the cache set H to be cleaned;
Secondly, the temporary-cache flag bit of each data label in set H is checked; when the flag bit of a resource's data label is set to 1, the cache resource and its corresponding data label are cleared directly;
Thirdly, the storage path of each remaining data label in set H is checked: if the path contains the address of the regional edge server, the other path entries and their corresponding caches are cleared; if the path does not contain the regional edge server and holds more than 3 storage addresses, the cache is backed up on the regional edge server, the copy on the corresponding local edge node is cleared, and the data label is updated;
Finally, the method returns to step A until the load parameter Wi(t) of every local edge node is not greater than 0.6.
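Formula (Ⅰ) and one pass of step B can be sketched as follows. The dict-based labels, the "regional" marker and both function names are illustrative assumptions, not the patent's implementation.

```python
def load_parameter(r_cpu, r_mem, r_disk, r_io, r_user, k):
    """Formula (I): weighted resource usage scaled by the relative user count;
    k is the user correction coefficient from the description."""
    return (0.2 * r_cpu + 0.25 * r_mem + 0.4 * r_disk + 0.15 * r_io) * (1 + r_user / k)

def clean_node(node, tags, regional="regional"):
    """One cleaning pass (step B) for an overloaded local edge node.
    tags: data-label dicts with 'paths' (addresses holding a copy) and a
    'temporary' flag bit; `regional` marks the regional edge server."""
    for tag in tags:
        paths = tag["paths"]
        # set H: resources with copies on more than three addresses,
        # one of which is the node being cleaned
        if node not in paths or len(paths) <= 3:
            continue
        if tag["temporary"] == 1:
            tag["paths"] = []          # temporary caches are cleared outright
        elif regional in paths:
            tag["paths"] = [regional]  # keep only the regional server's copy
        else:
            paths.remove(node)         # drop the overloaded node's copy...
            paths.append(regional)     # ...after backing it up regionally
```

For example, a node at 50% CPU, 40% memory, 70% disk and 30% IO usage serving 10 of K=50 users has Wi(t) = 0.525 × 1.2 = 0.63, which exceeds 0.6 and triggers a cleaning pass.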
The cache cleaning and replacement method effectively frees storage space: low-popularity and redundant caches are removed, the scarce and limited storage space is used more efficiently, and the user experience improves.
The cache generally serves requests during user peak periods and performs resource placement and updates during idle periods. A large number of repeated requests may therefore occur during peak periods, and repeatedly fetching the same resources from their origin websites on the public network would greatly degrade the user experience, so a cooperative caching mechanism needs to be designed.
Because the edge node clusters of the invention are divided horizontally by service area, they exhibit good spatio-temporal heterogeneity: for example, residential-area service peaks concentrate in the evening, while business-district peaks concentrate in the daytime. The pronounced stagger between the peak periods of the two regions makes the regional cooperative caching of the present cache system possible.
Example 5
The regional cooperative caching method of the optimized cache system based on the edge computing framework in embodiment 2, shown in fig. 4, includes the following steps:
a. setting the corresponding working period of each edge node cluster; during a peak period, if the number of requests for some cache resource S reaches the threshold V, where V is 2 times the number of user terminals in the local area network, proceed to step b;
b. the reachability table records the hop count and idle state for communication between the current edge node cluster and nearby clusters; the edge node cluster with the fewest hops among all clusters in the reachability table whose load parameter is below 0.4 is taken as the target cluster, and the regional edge server of the current cluster sends a request for resource S to the target cluster;
c. after receiving the request, the target cluster searches for a data label whose temporary-cache flag bit is 1; if the requested resource exists, step d is executed, otherwise step e;
d. the corresponding cache resource is returned in response to the request;
e. a new data label is created with its temporary-cache flag bit set to 1, the local edge node with the lowest load parameter is selected for caching, the storage path of the data label is updated, the file name and cache address of the cache resource are returned to the requesting edge node cluster, and the database in that cluster's regional edge server is updated.
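Step b's target-cluster selection can be sketched as follows; the field names are illustrative, with each reachability-table row carrying the hop count and a load figure derived from the recorded usage state.

```python
def pick_target_cluster(reach_table, load_limit=0.4):
    """Among clusters whose load parameter is below load_limit, pick the one
    reachable in the fewest hops; return its IP, or None if none qualifies."""
    candidates = [row for row in reach_table if row["load"] < load_limit]
    if not candidates:
        return None
    return min(candidates, key=lambda row: row["hops"])["ip"]
```

Filtering on load before minimizing hops is what exploits the peak stagger: a nearby cluster that is itself at peak is skipped in favour of a slightly farther idle one.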
Example 6
The optimized cache system based on the edge computing framework in embodiment 2 was built as follows:
As shown in fig. 1, the following clusters are built in two different local area networks: a host with a 100G hard disk and 8G of memory serves as the regional edge server; two hosts with 4G of memory and 50G of cache space serve as local edge nodes; and hosts with 4G of memory serve as terminals sending http requests, where node 1 corresponds to hosts 1 and 2, and node 2 corresponds to hosts 3 and 4.
The working peak period of edge node cluster 1 is set to 8:00-10:00 every day, and that of edge node cluster 2 to 14:00-16:00 every day.
According to the network setup, the cache thresholds are set as follows: the cache threshold of each node is 2, the threshold of the regional edge server is 5, and the threshold of cooperative caching is 8.
A Redis database is installed on the regional edge server as the tag storage database, and a shell script for querying load parameters is installed on each machine.
Machine 1 under edge node cluster 1 made 3 requests for resource A of size 13M, with an average delay of 12 seconds; requesting the same resource twice more from machine 1 in the next peak period, the average delay dropped to 2 seconds.
As shown in fig. 5, machine 1 under edge node cluster 2 made 15 repeated requests for resource B of size 5M; the average delay was 4.6 seconds over the first eight requests and dropped markedly to about 1.7 seconds from the ninth, a delay reduction of 63%.
With the cache cleaning and replacement function disabled for edge node cluster 2, all caches and data labels in edge node clusters 1 and 2 were cleared, and hosts 1, 2, 3 and 4 under the two clusters each requested 5 resources of size 100M three times during the working peak period. After the off-peak period, the hard-disk occupancy was queried: in edge node cluster 1 the caches were all stored on the server, the two nodes were empty, and the total occupied storage was 500M; in edge node cluster 2 the edge server was essentially unoccupied, both nodes held caches, and the total occupied space was 1000M. Using the cache cleaning and replacement function thus saves 50% of the storage space.

Claims (4)

1. A cache placement and use method of an optimized cache system based on an edge computing framework is characterized in that,
the optimized cache system based on the edge computing framework comprises a plurality of local area networks, wherein the local area networks are all connected with a cloud center through the Internet;
each local area network is an independent edge node cluster, each edge node cluster comprises a router, a primary switch, a secondary switch and an area edge server which are sequentially connected from top to bottom, and the secondary switch is downwards connected with a local edge node and user terminal equipment;
the router is upwards accessed to the Internet, and the primary switch is downwards connected with the secondary switch and the regional edge server;
the router is used for connecting a local area network and the Internet; the primary switch and the secondary switch are used for connection networking in a local area network; the regional edge server is used for storing the hot content of the whole local area network and is responsible for managing the data tags of the whole local area network; the local edge node is used for storing local hot content; the user terminal equipment is used for sending an http request and requesting corresponding internet resources;
the regional edge server installs Redis as a database for storing data labels of cache resources and reachability tables of clusters adjacent to other edge nodes;
the data label is used for recording relevant information of the cache resource, including the URI of the cache resource, its file size, access counts, storage path, a temporary-cache flag bit and the generation time of the data label; the access counts comprise the total access count in the whole local area network and the access count at each local edge node; the threshold p0 for the total access count in the whole local area network is 125% of the number of users in the local area network, and the access-count threshold pi of each local edge node is the number of users in the network where the ith local edge node is located;
the reachability table takes a target edge node cluster IP as a key value, and comprises the hop count of the current edge node cluster reaching the target edge node cluster and the use condition of the target edge node cluster, wherein the use condition of the target edge node cluster comprises the use rates of a CPU (Central processing Unit), a memory and a disk and the current user connection number;
the optimized cache system based on the edge computing framework is divided into two periods of working peak period and non-working peak period, and comprises the following steps:
(1) judging whether the current edge node cluster is in its working peak period; if so, the system serves requests from the cache during the peak period and proceeds to step (4); otherwise the system places caches during the off-peak period and proceeds to step (2);
(2) the optimized cache system automatically polls the database and selects the data labels whose access counts reach a threshold, namely labels whose total access count in the whole local area network reaches p0 or whose access count at some local edge node reaches pi, forming the set A of resources to be cached, and proceeding to step (3);
(3) screening the data labels in set A: if the total access count in the whole local area network reaches the threshold p0, caching the cache resource on the regional edge server, updating the storage-path index of the resource, and removing the label from set A; if the access count at a local edge node reaches the threshold pi, caching the resource on that local edge node and removing the label from set A after its storage path is updated, until set A is empty;
(4) the regional edge server intercepting all access requests of the local area network it is responsible for, obtaining the URI of the requested resource from the http request, and querying the database by that URI for a data label of the resource; if none exists, executing step (5), otherwise executing step (6);
(5) the http request is sent to a corresponding website server in the internet to complete the service; meanwhile, updating the database, and creating a data label of the cache resource of the http request;
(6) checking the storage path of the queried data label to judge whether the resource is cached: if the storage path is empty, the resource is not yet cached and step (7) is executed; otherwise the resource is already cached and step (8) is executed;
(7) sending the http request to the corresponding website server in the internet and updating the data label of the cache resource, incrementing its access count by 1;
(8) extracting the storage path information from the corresponding data label and selecting the local edge node closest to the IP of the http request to respond to the request.
2. The cache placement and use method of the optimized cache system based on the edge computing framework according to claim 1, characterized by comprising the following steps:
A. calculating the load parameter Wi(t) of each local edge node in real time; if Wi(t) is greater than 0.6, triggering the cleaning mechanism and proceeding to step B, otherwise not triggering the cleaning mechanism;
B. the method for cleaning the cache comprises the following steps:
firstly, querying the storage paths in the screened data labels and selecting the cache resources whose storage paths contain more than three addresses, including the address of the edge node to be cleaned, to form the cache set H to be cleaned;
secondly, checking the temporary-cache flag bit of each data label in set H; when the flag bit of a resource's data label is set to 1, clearing the cache resource and its corresponding data label directly;
thirdly, checking the storage path of each remaining data label in set H: if the path contains the address of the regional edge server, clearing the other path entries and their corresponding caches; if the path does not contain the regional edge server and holds more than 3 storage addresses, backing the cache up on the regional edge server, clearing the copy on the corresponding local edge node, and updating the data label;
and returning to step A until the load parameter Wi(t) of each local edge node is not greater than 0.6.
3. The cache placement and use method of the optimized cache system based on the edge computing framework as claimed in claim 2, wherein the load parameter wi (t) is calculated according to the following formula (i):
Wi(t) = (0.2×R_CPUi + 0.25×R_MEMi + 0.4×R_DISKi + 0.15×R_IOi) × (1 + R_user/K)  (Ⅰ)
in formula (Ⅰ), Wi(t) is the load parameter at a certain time t, R_CPUi is the CPU usage rate of the ith local edge node, R_MEMi is its memory usage rate, R_DISKi is its disk usage rate, R_IOi is its disk IO usage, R_user is the current number of user connections, and K is a user correction coefficient.
4. The cache placement and use method of the optimized cache system based on the edge computing framework according to claim 1, characterized by comprising the following steps:
a. setting corresponding working time of each edge node cluster, and entering a step b if the number of requests for a certain cache resource S reaches a threshold value V in a peak period, wherein V is 2 times of the number of user terminals in the local area network;
b. taking the edge node cluster with the fewest hops among all edge node clusters in the reachability table whose load parameter is below 0.4 as the target cluster, and sending a request for resource S from the regional edge server of the current cluster to the target cluster;
c. after receiving the request, the target cluster searches the data label with the temporary cache zone bit of 1, if corresponding request resources exist, the step d is executed, otherwise, the step e is executed;
d. returning corresponding cache resources and responding to the request;
e. creating a new data label, setting its temporary-cache flag bit to 1, selecting the local edge node with the lowest load parameter for caching, updating the storage path of the data label, returning the file name and cache address of the cache resource to the requesting edge node cluster, and updating the database in the regional edge server of that cluster.
CN201911212414.XA 2019-12-02 2019-12-02 Optimized cache system based on edge computing framework and application thereof Active CN110933692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212414.XA CN110933692B (en) 2019-12-02 2019-12-02 Optimized cache system based on edge computing framework and application thereof


Publications (2)

Publication Number Publication Date
CN110933692A CN110933692A (en) 2020-03-27
CN110933692B true CN110933692B (en) 2021-06-01

Family

ID=69848073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212414.XA Active CN110933692B (en) 2019-12-02 2019-12-02 Optimized cache system based on edge computing framework and application thereof

Country Status (1)

Country Link
CN (1) CN110933692B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565437B (en) * 2020-12-07 2021-11-19 浙江大学 Service caching method for cross-border service network
CN112698950B (en) * 2020-12-31 2024-04-05 杭州电子科技大学 Memory optimization method for industrial Internet of things edge equipment
CN113315806B (en) * 2021-04-14 2022-09-27 深圳大学 Multi-access edge computing architecture for cloud network fusion
CN113037872B (en) * 2021-05-20 2021-08-10 杭州雅观科技有限公司 Caching and prefetching method based on Internet of things multi-level edge nodes
CN113630383B (en) * 2021-07-08 2023-03-28 杨妍茜 Edge cloud cooperation method and device
CN113660162B (en) * 2021-08-09 2024-04-09 陕西悟空云信息技术有限公司 Semi-centralized routing method and system for proximity cache perception

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137138A (en) * 2010-09-28 2011-07-27 华为技术有限公司 Method, device and system for cache collaboration
US9667739B2 (en) * 2011-02-07 2017-05-30 Microsoft Technology Licensing, Llc Proxy-based cache content distribution and affinity
CN108121512A (en) * 2017-12-22 2018-06-05 苏州大学 A kind of edge calculations services cache method, system, device and readable storage medium storing program for executing
CN108156267A (en) * 2018-03-22 2018-06-12 山东大学 Improve the method and system of website visiting time delay in a kind of mist computing architecture using caching
CN108551472A (en) * 2018-03-20 2018-09-18 南京邮电大学 A kind of content caching optimization method based on edge calculations
CN109788319A (en) * 2017-11-14 2019-05-21 中国科学院声学研究所 A kind of data cache method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076142A (en) * 2017-11-28 2018-05-25 郑州云海信息技术有限公司 A kind of method and system for accelerating user's request based on CDN technologies


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Network Information Monitoring and Cache Service Optimization Based on Mobile Edge Computing; 罗明 (Luo Ming); China Master's Theses Full-text Database, Information Science and Technology (monthly); 2019-09-15 (No. 09); chapter 4 *


Similar Documents

Publication Publication Date Title
CN110933692B (en) Optimized cache system based on edge computing framework and application thereof
EP2704402B1 (en) Method and node for distributing electronic content in a content distribution network
CN110213627B (en) Streaming media cache allocation method based on multi-cell user mobility
CA2303001C (en) Scheme for information delivery to mobile computers using cache servers
Yan et al. PECS: Towards personalized edge caching for future service-centric networks
WO2018120802A1 (en) Collaborative content cache control system and method
CN111988796B (en) Dual-mode communication-based system and method for optimizing platform information acquisition service bandwidth
CN110784779B (en) Data acquisition method of electricity consumption information acquisition system
CN104022911A (en) Content route managing method of fusion type content distribution network
CN103458466A (en) Flow control device, flow control method, network flow management system, and network flow management method
CN105072151A (en) Content collaborative scheduling method and system for CDN
CN107949007A (en) A kind of resource allocation algorithm based on Game Theory in wireless caching system
Lungaro et al. Predictive and context-aware multimedia content delivery for future cellular networks
Jazaeri et al. Toward caching techniques in edge computing over SDN-IoT architecture: A review of challenges, solutions, and open issues
Li et al. Adaptive per-user per-object cache consistency management for mobile data access in wireless mesh networks
CN110913430B (en) Active cooperative caching method and cache management device for files in wireless network
CN108174395B (en) Base station cache management method and system based on transfer action evaluation learning framework
CN103686944A (en) Gateway selection method for interconnection of cellular network and multi-hop wireless sensing network
CN108540959B (en) Internet of vehicles AP cooperative scheduling optimization method for accessing scheduling system
Santos et al. Multimedia microservice placement in hierarchical multi-tier cloud-to-fog networks
CN104506432A (en) Content request rate aggregation method and cache placement method
CN103260270B (en) A kind of base station
Li et al. A novel cooperative cache policy for wireless networks
CN112910779A (en) Ad Hoc network-based cross-layer routing optimization protocol
Luo et al. Software defined network‐based multipath state‐aware routing with traffic prediction in satellite network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant