CN112020103B - Content cache deployment method in mobile edge cloud - Google Patents

Content cache deployment method in mobile edge cloud

Info

Publication number
CN112020103B
CN112020103B CN202010781085.7A
Authority
CN
China
Prior art keywords
file
multicast
content
delay
base station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010781085.7A
Other languages
Chinese (zh)
Other versions
CN112020103A (en)
Inventor
周继鹏
纪杨阳
张效铨
庄娘涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University filed Critical Jinan University
Priority to CN202010781085.7A priority Critical patent/CN112020103B/en
Publication of CN112020103A publication Critical patent/CN112020103A/en
Application granted granted Critical
Publication of CN112020103B publication Critical patent/CN112020103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/10Flow control between communication endpoints
    • H04W28/14Flow control between communication endpoints using intermediate storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a content caching deployment method in a mobile edge cloud, which combines mobile edge computing and multicast routing to construct a two-layer macro-cell/micro-cell heterogeneous network architecture with a controller and solves the problem of collaborative caching across multiple cells. Based on this architecture, the content cache deployment problem in the mobile edge cloud is converted into a 0-1 integer programming problem, and a cache deployment method for solving it is provided. The method takes multicast delivery as its entry point and uses the popularity and delivery delay of files as the deployment basis; it optimizes the delivery delay by combining a multicast- and popularity-aware collaborative caching algorithm with an improved ant colony optimization multicast algorithm, finds a content cache deployment that reduces the total system delay, and reasonably pre-caches hot content on edge servers. The method reduces delivery delay and thereby improves the user's quality of service, while efficiently using limited network resources to meet the 5G requirements of low delay, high bandwidth and massive connectivity.

Description

Content cache deployment method in mobile edge cloud
Technical Field
The invention relates to the technical field of mobile edge cloud infrastructure and cache deployment and content delivery thereof, in particular to a content cache deployment method in a mobile edge cloud.
Background
Since 1978, mobile communication has evolved from analog to digital communication and on to the current 4G era: LTE, driven by 3GPP, has become a globally unified technical standard, network bandwidth has improved greatly, and data services have become ubiquitous. The continuous improvement of mobile communication networks and the successive introduction of new mobile devices such as smartphones and tablet computers have spawned all kinds of mobile-connected applications, with the consequent explosive growth of network traffic. Intelligent mobile terminals now play an indispensable role in entertainment, daily life, study, office work, multimedia and more. With the arrival of 5G, applications in vertical industries such as 4K/8K high-definition video, virtual reality, smart cities, autonomous driving and smart grids continue to develop and place higher demands on the communication system. The fifth-generation mobile communication technology, with its high speed, massive connectivity and ultra-low latency, is therefore a development trend that faces serious challenges. According to the Cisco VNI (Visual Networking Index), by 2022 smartphones would on average generate 11 GB of data traffic per month, of which mobile video would account for 79% of the total. Faced with massive data requests and computation-intensive tasks such as video transcoding, the capabilities of mobile terminals can no longer meet user demands.
Given the limited storage and computing resources of user terminals, Cloud Computing (CC) provides the public with the required resources in a pay-per-use manner. The cloud, as an effective solution to the problem of limited terminal resources, can provide shared software and hardware resources and information to all kinds of terminal devices on demand, without the user needing to know the details of the underlying infrastructure, which brings great convenience. However, while cloud-provided services solve the resource problem, they bring new challenges: the centralized processing model not only increases the transmission bandwidth load from the edge to the remote core network, but the distance between the terminal device and the remote cloud also causes a certain service delay, making it unsuitable for delay-sensitive applications.
To solve the above problems, Mobile Edge Computing (MEC) was developed. The basic idea of MEC is to migrate the cloud computing platform from inside the mobile core network to the edge of the mobile access network, deploying edge nodes with computing, storage and communication capabilities so that the traditional Radio Access Network (RAN) can host localized services. This allows the high-timeliness service requirements of end users to be handled effectively, greatly shortens end-to-end delay, and relieves problems such as data traffic bottlenecks in the core network. MEC is currently one of the 5G delay-reduction technologies: for low-delay services, a service server deployed at the mobile network edge offloads streaming data locally, reducing the bandwidth demands on the transmission and core networks, and it has become an important technical pillar of 5G.
The key technologies of mobile edge computing include caching, offloading and server deployment; caching is the research focus of the invention. An MEC server is deployed on the base station side close to the user in the radio access network, and the relevant hot data is cached on the MEC server in advance, so that when an end user requests that hot content, the MEC server can deliver it directly. Compared with the traditional edge network, the user no longer has to wait for the long process of fetching the content from the remote core network and returning it hop by hop to the local base station, which effectively saves backhaul link bandwidth of the core network; the local delivery brought by pre-caching hot content also noticeably shortens the service response delay and improves the user's service experience.
However, the hot content pre-caching technique in mobile edge computing also raises the following questions: (1) what content to cache; (2) where to cache it; (3) how to perform content delivery so as to improve the user's Quality of Service (QoS). Formulating an efficient, intelligent edge cloud cache deployment method therefore becomes a problem to be solved.
Disclosure of Invention
The invention aims to solve the problems of the existing pre-caching techniques and to provide a content caching deployment method in a mobile edge cloud that can compute the optimal cache deployment of hot content in the MEC while content popularity changes in real time, meeting users' high demands on quality of service while minimizing the total delay with which the system delivers content files.
Starting from the combination of caching and multicast, the invention proposes a novel dynamic deployment method for the collaborative cache deployment strategy of multiple cells in the edge cloud. An Autoregressive Integrated Moving Average (ARIMA) model is adopted to accurately predict the demand set of users in each cell and to select candidate base stations as multicast source nodes, laying a data foundation for the subsequent hot file cache deployment; a heuristic Improved Ant Colony Optimization Multicast (IACOM) algorithm is adopted to construct a minimum-delay multicast tree for content delivery from each candidate node; and a Cooperative Multicast & Popularity-Aware Caching (CMPAC) algorithm is adopted to greedily reduce the total system delay while checking for base station capacity violations. Through these optimizations, the delay is reduced and the user's QoS improved, while the bandwidth occupation caused by 5G massive connectivity is relieved to a certain extent.
The aim of the invention can be achieved by adopting the following technical scheme:
a content cache deployment method in a mobile edge cloud comprises the following steps:
s1, constructing a system model of a mobile edge cloud, wherein the system model is a two-layer heterogeneous network architecture with a controller, and the two-layer heterogeneous network architecture consists of a Macro Base Station (MBS) and a micro base station (SBS); each cell is provided with an MBS, communication is realized among the MBSs through optical fiber connection, and each MBS can be connected with a remote core cloud through a backhaul link; a plurality of low-power SBSs are deployed in the coverage area of the MBS, and users can be directly provided with services by the SBSs accessed by the users and also can be provided with services by the MBS; the MBS is connected to the controller through a control link, and the controller uniformly collects and maintains the request conditions sent by the users in each cell;
s2, each macro base station is provided with an MEC server, hot content in the mobile edge cloud is pre-cached on the MEC server of the macro base station, and content delivery service is sunk to the network edge, so that the pressure of a return link and a remote cloud is relieved. Each content file can only be deployed on a unique MEC server under the system model, namely, each file is cached without a copy, if users in other cells request the content file, a macro base station for caching the file is used as a multicast source, MBS of the cell in which other requesting users are located forms a target node set, a multicast group is integrated in a certain time window, parallel delivery is realized by a macro base station multicast stream, and each target base station receives the file and delivers the file to a local user;
s3, dividing the operation time into a plurality of time slots by the controller, predicting a demand set of users in each cell for each content file by each MBS through an ARIMA prediction algorithm at the beginning of each time slot, and acquiring demand information of the users by the controller through a prediction result of each MBS;
s4, the controller selects candidate base stations for caching each content file according to the prediction result of the ARIMA prediction algorithm, calculates the optimal cache deployment mode of the time slot according to the collaborative caching algorithm based on multicast and popularity perception, and deploys the content files on the MEC server which enables the system time delay to be the lowest in a greedy way all the time from the beginning of null allocation;
s5, constructing a corresponding network topological graph as a network model according to the system model, deploying MEC servers with calculation, storage and data processing functions at each MBS node side in the network model, setting corresponding data transmission rate for each link, and calculating the transmission delay of each content file according to the network model;
s6, a multicast group consisting of a file caching node and a destination node set for requesting the file is given, an improved ant colony optimization multicast algorithm is utilized to construct a minimum delay multicast tree, so that the minimum delay of content delivery of the file cached under a corresponding macro base station is determined, and the final caching position of the file is selected through comparison with the minimum delay of content delivery calculated by other caching candidate nodes of the file;
s7, after the multicast tree is built for the given multicast group, leaf nodes of non-destination nodes possibly exist in the tree built by each data packet, and the leaf nodes are checked and pruning operation is carried out on the leaf nodes;
and S8, after the system model of the mobile edge cloud finishes the deployment of content files one by one in a single time slot, the content delivery based on multicast can be carried out in the time slot, namely, a base station set which sends out requests for the same content is aggregated in a multicast continuous window, and unified and parallel service is carried out by a multicast stream.
Further, the content cache deployment method focuses on the content caching of multiple cells; the invention takes the macro base station layer in the two-layer heterogeneous network as its research object, and an added controller grasps the global request information. Both the popularity of a file at each cell's macro base station and the multicast-based file transmission delay are considered, where the transmission delay only covers delivering the file from the base station of the caching cell to the base station of the cell where the user is located. Although the method focuses on the collaborative caching of multiple cells, it can extend to the micro base station layer within each cell and synchronously realize cache deployment among the micro base stations in a cell.
Further, in step S3, the ARIMA prediction algorithm is executed at the beginning of each time slot to predict the popularity of each file in the next time slot. The ratio of the number of times a file is requested in a cell in a single time slot to the total number of times the file is requested is used as the popularity measure. The request count of a content file in each historical time slot under a cell base station can be regarded as a time series, reflecting how the user demand for the file changes over time. After the series is imported, the data is stationarized and the differencing parameter d is determined; the model order is then determined from the autocorrelation and partial autocorrelation plots, suitable p and q values are screened automatically, and the ARIMA(p, d, q) model is fitted. The request count of the file under a designated macro base station in the next time slot can then be predicted, and the popularity of the file in the next time slot is calculated according to the definition of popularity.
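For illustration, a minimal Python sketch of this per-file, per-station prediction step follows; the AIC-based grid search over (p, d, q) is an assumption standing in for the differencing test and ACF/PACF order determination described above, and all data shapes are illustrative.

```python
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_next_requests(history, max_p=3, max_d=2, max_q=3):
    """Fit ARIMA(p, d, q) to one file's per-slot request counts under one
    macro base station and forecast the request count of the next slot."""
    best_aic, best_fit = np.inf, None
    for p, d, q in itertools.product(range(max_p + 1),
                                     range(max_d + 1),
                                     range(max_q + 1)):
        try:
            fit = ARIMA(history, order=(p, d, q)).fit()
        except Exception:
            continue  # some orders fail to converge; skip them
        if fit.aic < best_aic:
            best_aic, best_fit = fit.aic, fit
    if best_fit is None:
        return float(history[-1])  # fallback: repeat the last observation
    return max(0.0, float(best_fit.forecast(steps=1)[0]))

def popularity(pred, n, i):
    """Popularity of file i at station n: its predicted requests there
    divided by the file's predicted total requests over all stations."""
    total = sum(pred[m][i] for m in pred)
    return pred[n][i] / total if total > 0 else 0.0
```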
Further, in step S4, when the content cache deployment operation is executed in a single time slot, the content files are first sorted in descending order of the predicted total request count of each file in the time slot, and are then deployed one by one, starting from an empty allocation, in that descending order. For the deployment of each file, the K macro base stations with the highest popularity are first selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i in the candidate set is selected in turn as the caching node of file f_i, and the corresponding file transmission delay D_i^n is calculated as

D_i^n = D(T_in) = max_{s_m ∈ (M_i \ s_n)} D(path_nm)

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree T_in that transmits file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node in the destination node set other than the source node, and D(path_nm) is the transmission delay from the source node to destination node s_m. Since file f_i is delivered to all destination base stations in parallel through the multicast tree rooted at source node s_n, the file transmission delay D_i^n equals the tree delay D(T_in), i.e. the maximum of the delays from the source node to the individual destination nodes. While deploying the content files one by one, each file is cached on the node with the smallest D_i^n; if, as deployment proceeds, that node no longer has enough storage space, the caching node is chosen by walking the K obtained delays in ascending order.
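As a hedged illustration of this greedy rule, the following Python sketch walks the files in descending demand order and falls back along the K candidate delays when capacity runs out. The function min_multicast_delay stands in for the IACOM tree construction of step S6, and all names and data shapes are illustrative assumptions rather than the invention's literal implementation.

```python
def cmpac_deploy(files, stations, demand, dest_sets, sizes, capacity,
                 min_multicast_delay, K=5):
    """Greedy CMPAC sketch. demand[s][f] is the predicted request count of
    file f at station s (ranking by it equals ranking by popularity, since
    a file's total request count is the same across stations)."""
    placement = {}
    free = dict(capacity)                       # remaining space per station
    # Deploy files one by one, most requested first.
    for f in sorted(files, key=lambda f: -sum(demand[s][f] for s in stations)):
        # The K most popular stations for this file form the candidate set W_i.
        candidates = sorted(stations, key=lambda s: -demand[s][f])[:K]
        # Delay D_i^n of caching f at each candidate, in ascending order.
        ranked = sorted((min_multicast_delay(s, dest_sets[f], f), s)
                        for s in candidates)
        for delay, s in ranked:                 # fall back along the K delays
            if free[s] >= sizes[f]:             # capacity-violation check
                placement[f] = s
                free[s] -= sizes[f]
                break
    return placement
```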
Further, in step S5, the system model of the mobile edge cloud is simulated by a weighted network topology graph G = (V, E), where V denotes the set of communication nodes, i.e. the base station set S, and E denotes the set of communication links between base stations. Each edge e ∈ E carries a weight d(e) representing the transmission delay of that link; for the link e_pq ∈ E between two adjacent base stations s_p, s_q (p, q ∈ {1, 2, ..., N} and p ≠ q), the transmission delay is the ratio of the file size to the link transmission rate. The delay of delivering cached file f_i from the source node to a destination node s_m ∈ (M_i \ s_n) is D(path_nm), i.e. the sum of the edge delays d(e) along the path path_nm between the two nodes, and the total delay of delivering the file in parallel through the multicast tree is the maximum of the delays from the source node to the individual destination nodes.
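The following is a minimal sketch of this network model using the networkx library; the topology tuples, link rates and file size are illustrative assumptions, and tree_delay realizes the parallel-delivery maximum described above.

```python
import networkx as nx

def build_topology(links, file_size_bits):
    """links: iterable of (s_p, s_q, rate_bps). The weight d(e) of each edge
    is the transmission delay of this file over that link (size / rate)."""
    g = nx.Graph()
    for p, q, rate in links:
        g.add_edge(p, q, delay=file_size_bits / rate)
    return g

def tree_delay(tree, source, destinations):
    """D(T_in): delivery is parallel over the multicast tree, so the total
    delay is the largest root-to-destination path delay inside the tree."""
    dist = nx.single_source_dijkstra_path_length(tree, source, weight="delay")
    return max(dist[m] for m in destinations)
```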
Further, in step S6, given a cached file f_i, a minimum-delay multicast tree is constructed with the improved ant colony optimization multicast algorithm. The maximum number of iterations IterNum is set; in each iteration, c data packets are sent out from the source node and each constructs a multicast tree, and the current minimum-delay multicast tree is updated continuously across iterations. At each step, the neighbors of all nodes already on the growing tree serve as the next candidate nodes; the next node to visit is selected according to the state transition rule of the ant colony algorithm, and the corresponding link is added to extend the tree outwards. Each time a link is added to the multicast tree, its pheromone concentration is adjusted according to the pheromone update rule of the ant colony algorithm; once all destination nodes have been added to the multicast tree, one iteration is finished, the pheromone on the current minimum-delay multicast tree is updated globally, and the next iteration is further optimized on this basis. When the number of iterations reaches the set maximum, the algorithm converges on a minimum-delay multicast tree, yielding the minimum delay corresponding to the given multicast group.
Further, in the step S7, each iteration, leaf nodes of non-destination nodes may exist in the tree constructed after the data packet is transmitted to all destination nodes. Therefore, after each data packet is delivered once, pruning operation is needed to be carried out on the constructed tree, and edges connected with leaf nodes of non-destination nodes are removed, so that a multicast tree covering the source node and all destination nodes is obtained.
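For illustration, the following Python sketch outlines the IACOM construction of step S6 together with the pruning of step S7, building on the graph model sketched above. The constants alpha, beta, rho and Q, the roulette-wheel state transition and the helper names are standard ant colony assumptions rather than values fixed by the invention; only the defaults IterNum = 11 and c = 12 follow the text.

```python
import random
import networkx as nx

def iacom(g, source, dests, iter_num=11, n_packets=12,
          alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    """Grow n_packets trees per iteration from `source` until each covers the
    destination set `dests`; keep the minimum-delay tree found so far."""
    tau = {tuple(sorted(e)): 1.0 for e in g.edges}       # pheromone per link
    best_tree, best_delay = None, float("inf")
    for _ in range(iter_num):
        for _ in range(n_packets):                       # c data packets
            edges, covered = set(), {source}
            while not dests <= covered:
                # Frontier: links from any on-tree node to an off-tree neighbor.
                frontier = [(u, v) for u in covered for v in g[u]
                            if v not in covered]
                weights = [tau[tuple(sorted((u, v)))] ** alpha
                           * (1.0 / g[u][v]["delay"]) ** beta
                           for u, v in frontier]
                u, v = random.choices(frontier, weights)[0]  # state transition
                edges.add((u, v))
                covered.add(v)
                e = tuple(sorted((u, v)))
                tau[e] = (1 - rho) * tau[e] + rho        # local pheromone update
            edges = prune(edges, source, dests)          # step S7
            d = delay_of(g, edges, source, dests)
            if d < best_delay:
                best_tree, best_delay = edges, d
        for u, v in best_tree:                           # global update on best tree
            e = tuple(sorted((u, v)))
            tau[e] = (1 - rho) * tau[e] + Q / best_delay
    return best_tree, best_delay

def prune(edges, source, dests):
    """Repeatedly drop edges ending in a leaf that is neither the source nor
    a destination, leaving a tree spanning source and all destinations."""
    edges = set(edges)
    keep = set(dests) | {source}
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        dead = {(u, v) for u, v in edges
                if (deg[u] == 1 and u not in keep)
                or (deg[v] == 1 and v not in keep)}
        if not dead:
            return edges
        edges -= dead

def delay_of(g, edges, source, dests):
    """Maximum root-to-destination delay of the tree given by `edges`."""
    t = nx.Graph()
    t.add_edges_from((u, v, {"delay": g[u][v]["delay"]}) for u, v in edges)
    dist = nx.single_source_dijkstra_path_length(t, source, weight="delay")
    return max(dist[m] for m in dests)
```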
Further, the number N of macro base stations in the system model of the mobile edge cloud is 12.
Further, the number of the cache files I is 50.
Further, the length T of the time slot is 5 minutes.
Further, the number K of candidate base stations for selecting the cache file in the CMPAC cache algorithm is set to 5.
Further, the maximum iteration number IterNum is set to 11 times, and the number c of data packets sent out in each iteration is 12.
Compared with the prior art, the invention has the following advantages and effects:
(1) Unlike the instantaneous popularity statistics of previous caching research, the method adopts a popularity prediction algorithm to sense in real time the dynamically changing demand for files in the system. The request counts of a file under a macro base station in different time slots are regarded as a time series, and an ARIMA model is adopted to predict the request count of the next time slot. The file popularity obtained by the prediction algorithm provides a more accurate reference for the subsequent cache deployment.
(2) The method takes the content delivery method as its entry point, considering that users may send a large number of requests for content related to a hot event within a short time; while pre-caching hot content can relieve the pressure on the backhaul link, the content delivery method also affects the service delay. The traditional cache based on unicast delivery is abandoned, and an edge cloud content cache deployment method based on multicast is proposed so as to minimize the total delay of content delivery in the system. Candidate nodes for caching a file are selected through the first-stage popularity prediction and used in turn as multicast source nodes to construct multicast paths, so that the base station with the minimum delay that still has cache space is selected for pre-caching the file.
(3) The invention adopts an improved ant colony optimization multicast algorithm for constructing a multicast tree for a multicast group formed by a certain candidate node and a target node set thereof. And searching a path on the constructed network topology by utilizing the characteristics of the ant colony algorithm, and quickly converging through multiple iterations to finally find the optimal solution of the problem.
(4) Although the invention is aimed at the macro base station level in the two-layer heterogeneous network architecture, with the continuous progress of 5G technology, the interconnection between micro base stations is possible. If the micro base stations in the cells can communicate with each other, the multi-cell collaborative caching deployment provided by the invention can be applied to the caching deployment in the cells, and the collaborative caching based on multicast is synchronously realized at the micro base station level.
Drawings
Fig. 1 is a macro base station-micro base station two-layer heterogeneous network architecture diagram with a controller (micro base stations in cells are not drawn in the diagram because the invention is expanded around the cache of the macro base station);
FIG. 2 is a flow chart of a method for deploying content caches in an edge cloud of the present disclosure;
fig. 3 is a schematic diagram of the multicast tree construction of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples
This embodiment describes an application of the content cache deployment method in a mobile edge cloud. The scheme of the invention consists of two parts, user demand prediction and content cache deployment, and is described in detail below in combination with the flow chart of content cache deployment in the mobile edge cloud shown in figure 2; the implementation of the scheme comprises the following steps:
firstly, a system model of a mobile edge cloud is built, wherein the system model is a two-layer heterogeneous network architecture with a controller, and the two-layer heterogeneous network architecture is composed of macro base stations and micro base stations, as shown in figure 1, wherein the macro base stations are called MBS for short, and the micro base stations are called SBS for short. Each cell is provided with an MBS, communication is realized between the MBSs through optical fiber connection, and each MBS can be connected with a remote core cloud through a backhaul link. A plurality of low-power SBS are deployed in the coverage area of MBS, and users can be served directly by their accessed SBS or by MBS. The MBS is connected to the controller through the control link, the controller collects and maintains the request condition sent by the users in each cell uniformly, and grasps the global information.
Each macro base station is equipped with a mobile edge computing server, i.e. an MEC server; hot content in the mobile edge cloud is pre-cached on the MEC servers of the macro base stations, sinking the content delivery service to the network edge and relieving the pressure on the backhaul links and the remote cloud. Under the system model of the invention, each content file can be deployed on only one MEC server under the architecture, i.e. no file is cached with a copy. If users in other cells request a content file, the macro base station caching the file serves as the multicast source, the MBSs of the cells where the other requesting users are located form the destination node set, the requests are aggregated into a multicast group within a certain time window, parallel delivery is realized by a single macro base station multicast stream, and each destination base station delivers the file to the users in its cell after receiving it. The content cache deployment method focuses on the content caching of multiple cells, taking the macro base station layer of the two-layer heterogeneous network as the research object. Both the popularity of a file at each cell's macro base station and the multicast-based file transmission delay are considered, where the transmission delay only covers delivering the file from the base station of the caching cell to the base station of the cell where the user is located. Although the method focuses on the collaborative caching of multiple cells, it can extend to the micro base station layer within each cell and synchronously realize cache deployment among the micro base stations in the cells.
The controller divides the running time into a number of time slots. At the beginning of each time slot, each MBS predicts the demand set of the users in its cell for each content file through the ARIMA prediction algorithm, and the controller obtains the users' demand information from the prediction results of the MBSs. The ARIMA prediction algorithm is executed at the beginning of each time slot: the request count of a content file in each historical time slot under a cell's macro base station is regarded as a time series reflecting the changing trend of user demand; after the series is imported, the data is stationarized and the parameter d determined, suitable parameters p and q are screened automatically from the autocorrelation and partial autocorrelation plots to set the model order, the ARIMA(p, d, q) model is fitted, and the request count of the file under a designated macro base station in the next time slot is predicted; the ratio of this result to the total request count of the file is taken as the predicted popularity of the file in the next time slot.
When the content cache deployment operation is executed in a single time slot, the controller sorts the content files in descending order of total request count according to the prediction results of the ARIMA prediction algorithm, deploys the content files one by one according to the Cooperative Multicast & Popularity-Aware Caching (CMPAC) algorithm, and, starting from an empty allocation, always greedily deploys each content file on the MEC server that minimizes the system delay. For the deployment of each file, the K macro base stations with the highest popularity are first selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i in the candidate set is selected in turn as the caching node of file f_i, and the corresponding file transmission delay D_i^n is calculated as

D_i^n = D(T_in) = max_{s_m ∈ (M_i \ s_n)} D(path_nm)

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree T_in that transmits file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node in the destination node set other than the source node, and D(path_nm) is the transmission delay from the source node to destination node s_m. Since file f_i is delivered to all destination base stations in parallel through the multicast tree rooted at source node s_n, the file transmission delay D_i^n equals the tree delay D(T_in), i.e. the maximum of the delays from the source node to the individual destination nodes. While deploying the content files one by one, each file is cached on the node with the smallest D_i^n; if, as deployment proceeds, that node no longer has enough storage space, the caching node is chosen by walking the K obtained delays in ascending order.
A corresponding network topology graph is constructed from the system model to serve as the network model; MEC servers with computing, storage and data processing functions are deployed at each MBS node side in the network model, a corresponding data transmission rate is set for each link, and the transmission delay of each content file is calculated from the network model. The system model of the mobile edge cloud is simulated by a weighted network topology graph G = (V, E), where V denotes the set of communication nodes, i.e. the base station set S, and E denotes the set of communication links between base stations. Each edge e ∈ E carries a weight d(e) representing the transmission delay of that link; for the link e_pq ∈ E between two adjacent base stations s_p, s_q (p, q ∈ {1, 2, ..., N} and p ≠ q), the transmission delay is the ratio of the file size to the link transmission rate. The delay of delivering cached file f_i from the source node to a destination node s_m ∈ (M_i \ s_n) is D(path_nm), i.e. the sum of the edge delays d(e) along the path path_nm between the two nodes, and the total delay of delivering the file in parallel through the multicast tree is the maximum of the delays from the source node to the individual destination nodes.
Given a multicast group consisting of a file caching node and the set of destination nodes requesting the file, a minimum-delay multicast tree is constructed with the Improved Ant Colony Optimization Multicast (IACOM) algorithm, thereby determining the content delivery delay when the file is cached under the corresponding macro base station; this delay is the deployment basis for optimizing the total system delay, and the construction principle of the multicast tree is shown in figure 3. The maximum number of iterations IterNum is set; in each iteration, c data packets are sent out from the source node and each constructs a multicast tree, and the current minimum-delay multicast tree is updated continuously across iterations. At each step, the neighbors of all nodes already on the growing tree serve as the next candidate nodes; the next node to visit is selected according to the state transition rule of the ant colony algorithm, and the corresponding link is added to extend the tree outwards. Each time a link is added to the multicast tree, its pheromone concentration is adjusted; once all destination nodes have been added to the multicast tree, one iteration is finished, the pheromone on the current minimum-delay multicast tree is updated globally, and the next iteration is further optimized on this basis. When the number of iterations reaches the set maximum, the iterations end and the algorithm converges on the minimum-delay multicast tree of the given multicast group. In the process of constructing the multicast tree, the tree built after a data packet has reached all destination nodes may contain leaf nodes that are not destination nodes; therefore, after each data packet completes one delivery, the constructed tree is pruned and the edges connected to non-destination leaf nodes are removed, yielding a multicast tree covering the source node and all destination nodes. For the K candidate caching nodes of a file, K minimum-delay multicast trees are constructed in this way, and the node with the minimum delay that still has cache space is selected to cache the file.
After the system model of the mobile edge cloud completes the file-by-file deployment of content in a single time slot, multicast-based content delivery can be carried out within that time slot: within a multicast aggregation window, the set of base stations requesting the same content is aggregated and served uniformly by one multicast stream, realizing efficient parallel delivery of the files.
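A minimal sketch of this multicast-window aggregation follows; the window length and the request-record format are illustrative assumptions.

```python
from collections import defaultdict

def aggregate_requests(requests, window_s=10.0):
    """requests: iterable of (timestamp_s, station, file_id). Requests for
    the same file inside one window are merged into a single multicast group,
    keyed by (window index, file), whose value is the destination set."""
    groups = defaultdict(set)
    for t, station, file_id in requests:
        groups[(int(t // window_s), file_id)].add(station)
    return groups
```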
The number N of macro base stations in the system model of the mobile edge cloud is 12 and the number I of cached files is 50; the remaining parameters of the system are configured with the values in Table 1:
Table 1. NS2 network simulation parameter configuration
Parameter | Description                             | Value
N         | Number of macro base stations           | 12
I         | Number of cached files                  | 50
cap_S     | Macro base station cache capacity       | 2 GB
SIZE_i    | Content file size                       | 100 MB - 400 MB
z         | Zipf distribution parameter             | 1.2
K         | Number of cache candidate base stations | 5
T         | Time slot length                        | 5 min
-         | Duration of the experiment              | 60 min
(I) The ARIMA prediction algorithm predicts user demand
The feasibility of combining the prediction algorithm with content cache deployment in the edge cloud is verified in two respects: the degree to which the prediction algorithm fits the real demand values, and a comparison between deployment using the prediction results and deployment using other approaches.
(1) Fitting degree
The data set used in this embodiment comprises the user requests recorded by 12 cell base station servers over 24 consecutive hours. The data is first preprocessed: the three fields used in the prediction experiment, namely request time, accessed base station ID and requested file ID, are extracted, and the data records of the 50 hottest files with the largest demand are screened out as the data set of the final experiment; the data of the first 14 hours is used as training samples for modeling, and the data of the last 10 hours as test samples for evaluating the prediction results.
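A sketch of this preprocessing follows, assuming the raw log has been loaded into a pandas DataFrame; the column names and the hour-based split are illustrative assumptions.

```python
import pandas as pd

def prepare(df, n_hot=50, train_hours=14):
    """Keep the three fields used by the experiment, restrict to the n_hot
    most requested files, and split into training / test samples by hour."""
    df = df[["request_time", "base_station_id", "file_id"]].copy()
    hot = df["file_id"].value_counts().head(n_hot).index   # 50 hottest files
    df = df[df["file_id"].isin(hot)]
    start = df["request_time"].min()
    hours = (df["request_time"] - start).dt.total_seconds() / 3600
    return df[hours < train_hours], df[hours >= train_hours]  # train, test
```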
(2) Impact on content cache deployment
Two comparison groups were used. One group uses the real demand data as the demand set and computes the system delay for the deployment decided by the scheme; the delay in this case is the total system delay closest to the real situation and serves as the baseline for the experimental results. The other group uses a moving-average method to compute the demand set of time slot T and runs the deployment decision on that result to compute the system delay of transmitting the files; the calculation rule is that the average request value of the several slots preceding slot T is taken as the request value of slot T. The experiment was run for a total of 12 time slots.
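The moving-average baseline reduces to the following sketch, where the window length w is an assumption not fixed by the text.

```python
def moving_average_demand(history, w=3):
    """Request value of slot T = mean of the w preceding slots' requests."""
    recent = history[-w:]
    return sum(recent) / len(recent)
```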
(II) content cache deployment algorithm
Taking the system delay as the evaluation index, the performance of the scheme is evaluated by comparison with other cache deployment methods, comparison of different content delivery methods, and comparison under architectures with different numbers of base stations. The other cache deployment algorithms are Random Cache (RC) and Popularity-Aware Cache (PAC); the content delivery methods compared are unicast delivery and the multicast delivery adopted by the invention; and the base station counts compared are 12 and 6.
(1) Cache deployment method
With a time slot length of 5 minutes, the mobile edge cloud system model was run for 12 time slots. The system delay produced by deploying the cache with the CMPAC algorithm is compared with that of the RC and PAC algorithms to verify the superiority of the cache deployment method.
(2) Content delivery method
Under the 12-macro-base-station architecture, the system delays of file transmission after cache deployment by CMPAC_multicast, CMPAC_unicast, PAC_multicast and PAC_unicast are compared over the corresponding 12 time slots. If the delay of multicast delivery is always smaller than that of unicast delivery regardless of the cache deployment method adopted, this indicates that multicast delivery is an important entry point for reducing delay in edge cloud cache deployment.
(3) Number of base stations
The CMPAC algorithm proposed in this embodiment is run in the following four scenarios: 12 base stations & multicast, 12 base stations & unicast, 6 base stations & multicast, and 6 base stations & unicast, comparing the system delay of file transmission after cache deployment over the same 12 time slots. If increasing the number of base stations reduces the delay of multicast delivery more than that of unicast, then the more base stations there are, the greater the benefit brought by multicast transmission and the more remarkable its superiority over unicast.
(III) multicast tree construction algorithm
The IACOM algorithm proposed in this embodiment is compared with a modified ant colony optimization (MACO) algorithm, comparing the time the two algorithms need to construct the multicast tree and the content delivery delay of the trees they construct. If, under the same number of iterations, the IACOM algorithm converges faster to the multicast tree with the minimum file transmission delay, the algorithm proposed by the invention has the advantage in time efficiency.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to them; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention should be regarded as an equivalent replacement and is included in the protection scope of the present invention.

Claims (5)

1. A content cache deployment method in a mobile edge cloud, characterized by comprising the following steps:
s1, constructing a system model of a mobile edge cloud, wherein the system model is a two-layer heterogeneous network architecture with a controller, and the two-layer heterogeneous network architecture consists of macro base stations and micro base stations, wherein the macro base stations are called MBS (methyl methacrylate-butadiene-styrene) for short, and the micro base stations are called SBS for short; each cell is provided with an MBS, communication is realized among the MBSs through optical fiber connection, and each MBS is connected with a remote core cloud through a return link; a plurality of low-power SBSs are deployed in the coverage area of the MBS, and users can be directly provided with services by the SBSs accessed by the users and also can be provided with services by the MBS; the MBS is connected to the controller through a control link, and the controller uniformly collects and maintains the request conditions sent by the users in each cell;
s2, each macro base station is provided with a mobile edge computing server, called MEC server for short, hot content in the mobile edge cloud is pre-cached on the MEC server of the macro base station, each content file can only be deployed on one unique MEC server under the system model, namely, each file cache has no copy, if users in other cells request the content file, the macro base station which caches the file is used as a multicast source, MBS of the cell where other requesting users are located forms a destination node set, a multicast group is integrated in a certain time window, parallel delivery is realized by a macro base station multicast stream, and each destination base station receives the file and delivers the file to a local user;
s3, dividing the operation time into a plurality of time slots by the controller, predicting a demand set of users in each cell for each content file by each MBS through an ARIMA prediction algorithm at the beginning of each time slot, and acquiring demand information of the users by the controller through a prediction result of each MBS; the ARIMA prediction algorithm is executed at the beginning of each time slot, the request number of each historical time slot of a certain content file under a macro base station of a certain cell is regarded as a time sequence, the data is stabilized after the sequence is imported, the ARIMA model is ranked according to the autocorrelation and partial autocorrelation diagrams, the request number of the file under a designated macro base station of the next time slot can be predicted after the ARIMA model is fitted, and the ratio of the result to the total request number of the file is regarded as the popularity of the file of the predicted next time slot;
s4, the controller selects candidate base stations for caching each content file according to the prediction result of the ARIMA prediction algorithm, calculates the optimal cache deployment mode of the time slot according to the collaborative caching algorithm based on multicast and popularity perception, and deploys the content files on the MEC server which enables the system time delay to be the lowest all the time from the beginning of null allocation; when content cache deployment operation is executed in a single time slot, firstly, content files are ordered in descending order according to the predicted total request times of all files in the time slot, and then content files are deployed one by one from the empty allocation according to the ordering in descending order;
for the deployment of each file, the K macro base stations with the highest popularity are first selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i in the candidate set is selected in turn as the caching node of file f_i, and the corresponding file transmission delay D_i^n is calculated as

D_i^n = D(T_in) = max_{s_m ∈ (M_i \ s_n)} D(path_nm)

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree T_in that transmits file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node in the destination node set other than the source node, and D(path_nm) is the transmission delay from the source node to destination node s_m; since file f_i is delivered to all destination base stations in parallel through the multicast tree rooted at source node s_n, the file transmission delay D_i^n equals the tree delay D(T_in), i.e. the maximum of the delays from the source node to the individual destination nodes; while deploying the content files one by one, each file is cached on the node with the smallest D_i^n, and if, as deployment proceeds, that node no longer has enough storage space, the caching node is chosen by walking the K obtained delays in ascending order;
s5, constructing a corresponding network topological graph according to a system model to serve as a network model, deploying MEC servers with calculation, storage and data processing functions at each MBS node side in the network model, setting corresponding data transmission rate for each link, and enabling the transmission delay of each content file to be according to the network modelCalculating the model; wherein, the system model of the mobile edge cloud is simulated by a weighted network topology graph G= (V, E), wherein V represents a communication node set, namely a base station set S, E represents a communication link set among base stations, each edge E E has a weight d (E) to represent the transmission delay of the link, and two adjacent base stations S p ,s q Where p, q e {1,2,..N, N } and p+.q, inter-base station link e pq E, the transmission delay on E is the ratio of the file size to the link transmission rate, and the file f is cached i From a source node to a destination node s other than the source node m ∈(M i \s n ) Is of (2)I.e. path between two nodes nm The total time delay of the parallel delivery file through the multicast tree is the maximum value of the time delay from the source node to each destination node;
s6, a multicast group consisting of a file caching node and a target node set for requesting the file is given, and an improved ant colony optimization multicast algorithm is utilized to construct a minimum delay multicast tree, so that the minimum delay of content delivery of the file caching under the corresponding macro base station is determined; wherein, given the cache file f i Constructing a delay minimum multicast tree by utilizing an improved ant colony optimization multicast algorithm, setting the maximum iteration times IterNum, sending c data packets from a source node each time of iteration, constructing multicast trees respectively, continuously updating the current minimum delay multicast tree each time of iteration, taking adjacent nodes of all nodes on a generated tree as next optional nodes each time, selecting next access nodes according to the state transition rule of the ant colony algorithm, adding corresponding links to extend outwards, adjusting the pheromone concentration of the links each time of adding one link on the multicast tree until all destination nodes are added into the multicast tree, finishing one iteration, globally updating the pheromone on the current delay minimum multicast tree, further optimizing the next iteration on the basis, and finishing the iteration when the iteration times reach the set maximum iteration times to obtain the corresponding delay of a given multicast groupA small multicast tree;
s7, pruning leaf nodes of non-destination nodes in the constructed multicast tree after the multicast tree is constructed for the given multicast group;
and S8, after the system model of the mobile edge cloud finishes the deployment of content files one by one in a single time slot, carrying out multicast-based content delivery in the single time slot, namely, aggregating base station sets which send out requests for the same content in a multicast continuous window, and uniformly and parallelly serving by a multicast stream.
2. The method for deploying content caches in a mobile edge cloud according to claim 1, wherein in step S7, each data packet performs pruning operation on the constructed tree after completing one delivery, and removes edges connected to leaf nodes of non-destination nodes to obtain a multicast tree covering the source node and all destination nodes.
3. The content cache deployment method in the mobile edge cloud according to claim 1, wherein the number N of macro base stations in the system model of the mobile edge cloud is 12, the number I of cache files is 50, and the length T of the time slot is 5 minutes.
4. The content cache deployment method in the mobile edge cloud according to claim 1, wherein the number K of candidate base stations for selecting the cache file in the collaborative caching algorithm based on multicast and popularity perception is set to be 5.
5. The content cache deployment method in a mobile edge cloud according to claim 1, wherein the maximum iteration number IterNum is set to 11, and the number c of data packets sent out per iteration is 12.
CN202010781085.7A 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud Active CN112020103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010781085.7A CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010781085.7A CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Publications (2)

Publication Number Publication Date
CN112020103A (en) 2020-12-01
CN112020103B (en) 2023-08-08

Family

ID=73499315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010781085.7A Active CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Country Status (1)

Country Link
CN (1) CN112020103B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112671880B (en) * 2020-12-18 2022-08-16 中国科学院上海高等研究院 Distributed content caching and addressing method, system, medium, macro base station and micro base station
CN112422352B (en) * 2021-01-25 2021-04-20 华东交通大学 Edge computing node deployment method based on user data hotspot distribution
CN112822727B (en) * 2021-01-29 2022-07-01 重庆邮电大学 Self-adaptive edge content caching method based on mobility and popularity perception
CN114979156A (en) * 2021-02-26 2022-08-30 中国电信股份有限公司 Method, system and terminal for realizing edge cloud service
CN113709853B (en) * 2021-07-23 2022-11-15 北京工业大学 Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN113727358B (en) * 2021-08-31 2023-09-15 河北工程大学 Edge server deployment and content caching method based on KM and greedy algorithm
CN113766540B (en) * 2021-09-02 2024-04-16 北京工业大学 Low-delay network content transmission method, device, electronic equipment and medium
CN114070859B (en) * 2021-11-29 2023-09-01 重庆邮电大学 Edge cloud cache cooperation method, device and system based on boundary cost benefit model
CN114513514B (en) * 2022-01-24 2023-07-21 重庆邮电大学 Edge network content caching and pre-caching method for vehicle users
CN114826900B (en) * 2022-04-22 2024-03-29 阿里巴巴(中国)有限公司 Service deployment processing method and device for distributed cloud architecture

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110247793A * 2019-05-29 2019-09-17 Jinan University Application deployment method in a mobile edge cloud
CN110418367A * 2019-06-14 2019-11-05 University of Electronic Science and Technology of China Low-delay edge caching method for 5G fronthaul hybrid networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110247793A * 2019-05-29 2019-09-17 Jinan University Application deployment method in a mobile edge cloud
CN110418367A * 2019-06-14 2019-11-05 University of Electronic Science and Technology of China Low-delay edge caching method for 5G fronthaul hybrid networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mobile edge collaborative computing based on ant colony optimization algorithm; 花德培; 孙彦赞; 吴雅婷; 王涛; Electronic Measurement Technology (No. 20); full text *

Also Published As

Publication number Publication date
CN112020103A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN112020103B (en) Content cache deployment method in mobile edge cloud
Jiang et al. Multi-agent reinforcement learning based cooperative content caching for mobile edge networks
CN109818865B (en) SDN enhanced path boxing device and method
Hu et al. Twin-timescale artificial intelligence aided mobility-aware edge caching and computing in vehicular networks
CN110247793B (en) Application program deployment method in mobile edge cloud
CN111445111B (en) Electric power Internet of things task allocation method based on edge cooperation
Chamola et al. An optimal delay aware task assignment scheme for wireless SDN networked edge cloudlets
CN112995950B (en) Resource joint allocation method based on deep reinforcement learning in Internet of vehicles
CN108900355B (en) Satellite-ground multistage edge network resource allocation method
WO2018120802A1 (en) Collaborative content cache control system and method
CN108156596B (en) Method for supporting D2D-cellular heterogeneous network combined user association and content caching
WO2023024219A1 (en) Joint optimization method and system for delay and spectrum occupancy in cloud-edge collaborative network
CN114595632A (en) Mobile edge cache optimization method based on federal learning
CN106792995B (en) User access method for guaranteeing low-delay content transmission in 5G network
Sun et al. A DQN-based cache strategy for mobile edge networks
Xu et al. MECC: a mobile edge collaborative caching framework empowered by deep reinforcement learning
CN110913239B (en) Video cache updating method for refined mobile edge calculation
CN112887943B (en) Cache resource allocation method and system based on centrality
Aloqaily et al. Trustworthy cooperative UAV-based data management in densely crowded environments
CN113993168A (en) Multi-agent reinforcement learning-based cooperative caching method in fog wireless access network
US20230284130A1 (en) Network slice assignment control systems and methods
Santos et al. Multimedia microservice placement in hierarchical multi-tier cloud-to-fog networks
CN110753365A (en) Heterogeneous cellular network interference coordination method
CN115278779A (en) Rendering perception-based dynamic placement method for VR service module in MEC network
CN114245422A (en) Edge active caching method based on intelligent sharing in cluster

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant