CN112020103A - Content cache deployment method in mobile edge cloud - Google Patents

Info

Publication number
CN112020103A
CN112020103A
Authority
CN
China
Prior art keywords
file
multicast
content
cache
delay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010781085.7A
Other languages
Chinese (zh)
Other versions
CN112020103B (en)
Inventor
周继鹏
纪杨阳
张效铨
庄娘涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University filed Critical Jinan University
Priority to CN202010781085.7A priority Critical patent/CN112020103B/en
Publication of CN112020103A publication Critical patent/CN112020103A/en
Application granted granted Critical
Publication of CN112020103B publication Critical patent/CN112020103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a content cache deployment method in a mobile edge cloud, which combines mobile edge computing with multicast routing to construct a macro-cell/micro-cell two-layer heterogeneous network architecture with a controller and to solve the problem of multi-cell cooperative caching. On the basis of this architecture, the content cache deployment problem in the mobile edge cloud is formulated as a 0-1 integer programming problem, and a cache deployment method for solving it is provided. Taking multicast delivery as the entry point and the popularity and delivery delay of a file as the deployment basis, the method combines a cooperative caching algorithm based on multicast and popularity awareness with an improved ant colony optimization multicast algorithm to optimize delivery delay, finds a content cache deployment that reduces the total delay of the system, and reasonably pre-caches hot content on edge servers. The method reduces delivery delay and thereby improves the user's quality of service, while efficiently using limited network resources to meet the 5G requirements of low delay, high bandwidth, and massive connectivity.

Description

Content cache deployment method in mobile edge cloud
Technical Field
The invention relates to the technical field of mobile edge cloud infrastructure and its cache deployment and content delivery, and in particular to a content cache deployment method in a mobile edge cloud.
Background
Since 1978, mobile communication has developed from analog to digital technology. Now, in the 4G era, the 3GPP-led LTE technology has become a globally unified standard, network bandwidth has improved greatly, and data services have become widespread. The continuous improvement of mobile communication networks and the gradual introduction of new mobile devices such as smartphones and tablet computers have given rise to all kinds of applications for mobile connectivity, followed by explosive, exponential growth of network traffic. The intelligent mobile terminal now plays an indispensable role in entertainment, daily life, learning, office work, multimedia, and more. With the birth of 5G, applications in vertical industries such as 4K/8K high-definition video, virtual reality, smart cities, autonomous driving, and smart grids continue to develop and place higher requirements on the communication system. The fifth-generation mobile communication technology, whose development trends are high speed, massive connectivity, and ultra-low delay, therefore faces serious challenges. As predicted by the Cisco VNI (Visual Networking Index), by 2022 smartphones will generate an average of 11GB of data traffic per month, of which mobile video traffic will account for 79% of the total. Facing the pressure of large data requests and computation-intensive tasks such as video transcoding, the capabilities of mobile terminals can no longer meet user requirements.
Given the limited storage and computing resources of user terminals, Cloud Computing (CC) provides the public with the required resources in a pay-as-you-go manner. The cloud, as an effective solution to the problem of limited terminal resources, can provide shared software and hardware resources and information to all kinds of terminal devices on demand, and the user does not need to know details such as the infrastructure inside the cloud, which brings great convenience. However, while cloud services solve the resource problem, they bring new challenges: the centralized processing mode not only increases the transmission bandwidth load from the edge to the remote core network, but also introduces a certain service delay because of the distance between the terminal device and the remote cloud, making it unsuitable for delay-sensitive applications.
To solve the above problems, Mobile Edge Computing (MEC) has emerged. The basic idea of MEC is to migrate the cloud computing platform from inside the mobile core network to the edge of the mobile access network. By deploying edge nodes with computing, storage, and communication functions, the traditional Radio Access Network (RAN) gains the conditions for service localization, so the highly time-sensitive service requirements of end users can be handled effectively, the end-to-end delay is greatly shortened, and related problems such as the data flow bottleneck of the core network are resolved. MEC is currently one of the 5G delay-reduction technologies: for low-delay services, a service server deployed at the edge of the mobile network offloads traffic data locally, reducing the bandwidth requirements on the transport and core networks, and it has become an important technical support for 5G.
The key technologies of mobile edge computing include caching, offloading, and server deployment, among which caching is the research focus of the invention. Caching means that an MEC server is deployed at the base station side close to the user in the radio access network and relevant hot data is cached in the MEC server in advance, so that when an end user requests hot content, the MEC server can deliver it directly. Compared with the traditional edge network, the user no longer needs to wait through the long process of fetching the content from the remote core network and transmitting it back layer by layer to the local base station. This effectively saves backhaul link bandwidth of the core network, and the local delivery brought by pre-caching hot content also noticeably shortens the service response delay and improves the user's service experience.
However, the hot-content pre-caching technique in mobile edge computing still faces the following questions: (1) what content to cache; (2) where to cache it; (3) how to deliver the content to improve the user's Quality of Service (QoS). Designing an efficient and intelligent edge cloud cache deployment approach is therefore the problem to be solved.
Disclosure of Invention
The invention aims to solve the problems of existing pre-caching techniques and to provide a content cache deployment method in a mobile edge cloud. The solution can compute the optimal cache deployment of hot content in the MEC while content popularity changes in real time, so as to meet the user's high quality-of-service requirements and minimize the total delay of the system for delivering content files.
Combining caching with multicast, the invention provides a new dynamic deployment method for the cooperative cache deployment strategy of multiple cells in the edge cloud, comprising accurate prediction of users' demand for hot content, multicast routing for content delivery, and reasonable deployment of content caches. An Autoregressive Integrated Moving Average (ARIMA) model is adopted to accurately predict the demand set of the users of each cell and to select candidate base stations as multicast source nodes, laying the data foundation for the subsequent cache deployment of hot files; a heuristic Improved Ant Colony Optimization Multicast (IACOM) algorithm is adopted to construct a minimum-delay multicast tree for the content delivery of the corresponding candidate nodes; and a Cooperative Multicast and Popularity-Aware Caching (CMPAC) algorithm is adopted to greedily reduce the total delay of the system, subject to a check against base station capacity violations. Through these optimizations and improvements, the delay is reduced and the user's QoS is improved, while the bandwidth occupation caused by 5G massive connectivity is relieved to a certain extent.
The purpose of the invention can be achieved by adopting the following technical scheme:
a content cache deployment method in a mobile edge cloud comprises the following steps:
s1, constructing a system model of the mobile edge cloud, wherein the system model is a two-layer heterogeneous network architecture with a controller, composed of macro base stations (MBS) and micro base stations (SBS); each cell is provided with an MBS, the MBSs communicate through optical fiber connections, and each MBS can be connected to the remote core cloud through a backhaul link; low-power SBSs are deployed within the coverage of the MBS, and a user can be served directly by the SBS it accesses, or by the MBS; the MBSs are connected to the controller through control links, and the controller collects and maintains the requests sent by the users of each cell;
s2, each macro base station is equipped with an MEC server, and hot content in the mobile edge cloud is pre-cached on the MEC servers of the macro base stations, sinking the content delivery service to the network edge and thereby relieving the pressure on the backhaul links and the remote cloud. Under this system model, each content file can be deployed on only one MEC server, i.e. no file cache has a copy; if users of other cells request the content file, the macro base station caching the file acts as the multicast source, the MBSs of the cells where the other requesting users are located form the destination node set, requests within a certain time window are aggregated into a multicast group, one macro base station multicast stream achieves parallel delivery, and each destination base station delivers the file to its local users after receiving it;
s3, the controller divides the operation time into a plurality of time slots, at the beginning of each time slot, each MBS predicts the demand set of each content file of each user in each cell by ARIMA prediction algorithm, and the controller obtains the demand information of each user by the prediction result of each MBS;
s4, selecting, by the controller, candidate base stations for caching each content file according to the prediction result of the ARIMA prediction algorithm, calculating the optimal cache deployment for the time slot according to the cooperative caching algorithm based on multicast and popularity awareness, and, starting from an empty allocation, always greedily deploying each content file on the MEC server that minimizes the system delay;
s5, constructing a corresponding network topological graph as a network model according to the system model, deploying an MEC server with computing, storing and data processing functions at each MBS node side in the network model, setting corresponding data transmission rate for each link, and calculating the transmission delay of each content file according to the network model;
s6, a multicast group composed of a file cache node and a destination node set requesting the file is given, a multicast tree with minimum time delay is constructed by utilizing an improved ant colony optimization multicast algorithm, so that the minimum time delay of content delivery of the file cache under a corresponding macro base station is determined, and the final cache position of the file is selected by comparing with the minimum time delay of content delivery calculated by other cache candidate nodes of the file;
s7, after the multicast tree is constructed for the given multicast group, the tree constructed by each data packet may have leaf nodes of non-destination nodes, and the leaf nodes need to be checked and pruned;
s8, after the system model of the mobile edge cloud finishes one-by-one deployment of the content files in a single time slot, the multicast-based content delivery can be carried out in the time slot, namely, in a multicast continuous window, a base station set which sends a request to the same content is aggregated, and a multicast stream is used for unified parallel service.
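Read together, steps S3-S8 form a per-slot control loop, summarized in the following sketch under stated assumptions: every name in it (predict_demand, top_k_popular_nodes, min_multicast_delay) is a hypothetical placeholder rather than an interface defined by the invention, and the capacity check of step S4 is omitted for brevity.

    # Illustrative per-slot control loop for steps S3-S8 (all names are
    # hypothetical placeholders; the patent does not prescribe an API).
    def run_time_slot(controller, files, base_stations, K=5):
        # S3: each MBS predicts per-file demand for the coming slot (ARIMA).
        demand = {bs: controller.predict_demand(bs) for bs in base_stations}
        # S4: sort files by predicted total requests, most popular first.
        ordered = sorted(files,
                         key=lambda f: sum(d[f] for d in demand.values()),
                         reverse=True)
        placement = {}
        for f in ordered:
            # S4/S6: among the K most popular candidate nodes, cache at the
            # one whose minimum-delay multicast tree (IACOM, with the pruning
            # of step S7) gives the smallest delivery delay.
            candidates = controller.top_k_popular_nodes(f, demand, K)
            placement[f] = min(
                candidates,
                key=lambda n: controller.min_multicast_delay(f, n, demand))
        # S8: within the slot, requests for the same file are aggregated into
        # one multicast group and served in parallel from its cache node.
        return placement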
Further, the content cache deployment method focuses on the content caching of multiple cells; the invention takes the macro base station layer of the two-layer heterogeneous network as its research object, with the global request information held by the additionally deployed controller. The popularity of a file at the macro base stations of all cells and its multicast-based transmission delay are considered, where the transmission delay only covers delivering the file from the base station of the caching cell to the base station of the cell where the user is located. Although the method focuses on the cooperative caching of multiple cells, it can be extended to the micro base station layer within each cell, synchronously realizing cache deployment among the micro base stations of a cell.
Further, in step S3, the ARIMA prediction algorithm is executed at the beginning of each time slot to predict the popularity of each file in the next time slot. The ratio between the number of times a file is requested in a cell in a single time slot and the total number of requests for that file is used as the popularity indicator. The number of requests for a content file at a cell base station in each historical time slot can be regarded as a time series reflecting how the user demand for the file changes over time. After the series is imported, the data is made stationary and the differencing parameter d is determined; the model order is determined from the autocorrelation and partial autocorrelation plots, and suitable p and q values are screened automatically. After an ARIMA(p, d, q) model is fitted, the number of requests for the file at the designated macro base station in the next time slot can be predicted, and the popularity of the file in the next time slot is calculated according to the definition of popularity.
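As a concrete illustration of this prediction step, the sketch below fits an ARIMA(p, d, q) model to the per-slot request counts of one file at one macro base station using the statsmodels library. The request history, the fixed d = 1, and the small AIC grid search standing in for ACF/PACF order selection are assumptions of this example, not values given by the invention.

    # Sketch: predict the next-slot request count of one file at one MBS.
    # Requires statsmodels; the request history is synthetic example data.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    requests = np.array([32, 41, 38, 52, 47, 61, 58, 72, 69, 80, 77, 91],
                        dtype=float)  # requests per historical time slot

    d = 1  # differencing order chosen so the differenced series is stationary
    best_aic, best_order = np.inf, (0, d, 0)
    for p in range(3):          # small AIC grid stands in for ACF/PACF study
        for q in range(3):
            try:
                res = ARIMA(requests, order=(p, d, q)).fit()
            except Exception:
                continue
            if res.aic < best_aic:
                best_aic, best_order = res.aic, (p, d, q)

    fitted = ARIMA(requests, order=best_order).fit()
    next_slot = float(fitted.forecast(steps=1)[0])

    total_requests = 1200.0  # predicted total requests for this file, all cells
    popularity = next_slot / total_requests  # popularity as defined above
    print(best_order, round(next_slot, 1), round(popularity, 4))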
Further, in step S4, when the content cache deployment operation is executed within a time slot, the content files are first sorted in descending order of the predicted total number of requests in that time slot, and are then deployed one by one in that order, starting from an empty allocation. For the deployment of each file, the first K macro base stations with the highest popularity are selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i of the candidate set is taken in turn as the source node caching f_i, and the corresponding file transmission delay is computed as

    D(T_in) = max_{s_m ∈ (M_i \ s_n)} d_nm^i

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree delivering file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node of the destination node set other than the source node, and d_nm^i is the transmission delay from the source node to destination node s_m. Since file f_i is delivered in parallel to all destination base stations through the multicast tree whose source node is s_n, the delivery delay of f_i equals the delay D(T_in) of the multicast tree, i.e. the file transmission delay is the maximum of the delays from the source node to the individual destination nodes. While deploying the content files one by one, each file is cached at the node where D(T_in) is minimal; if, as deployment proceeds, a node no longer has enough storage space, the cache node is selected from the K obtained delays in ascending order.
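The greedy selection just described can be sketched as follows, under stated assumptions: min_multicast_delay(f, n) stands in for the minimum multicast-tree delay D(T_in) that step S6 computes with IACOM, popularity[f] maps base stations to the file's predicted popularity there, and capacity/size are cache space and file size in bytes; none of these names come from the patent.

    # Sketch of the greedy CMPAC placement loop with ascending-delay fallback.
    def deploy(files, popularity, capacity, size, min_multicast_delay, K=5):
        """files: ids sorted by predicted total requests (descending)."""
        placement = {}
        for f in files:
            # Top-K most popular base stations form the candidate set W_i.
            candidates = sorted(popularity[f], key=popularity[f].get,
                                reverse=True)[:K]
            # Delay of the min-delay multicast tree rooted at each candidate.
            delays = sorted((min_multicast_delay(f, n), n) for n in candidates)
            for delay, n in delays:          # fall back to the next-smallest
                if capacity[n] >= size[f]:   # delay when a node lacks space
                    placement[f] = n
                    capacity[n] -= size[f]
                    break
        return placement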
Further, in step S5, the system model of the mobile edge cloud is simulated by a weighted network topology graph G = (V, E), where V is the set of communication nodes, i.e. the base station set S, and E is the set of communication links between base stations. Each edge e ∈ E carries a weight d(e) representing the transmission delay of that link; for the link e_pq ∈ E between two adjacent base stations s_p and s_q (p, q ∈ {1, 2, ..., N} and p ≠ q), the transmission delay is the ratio of the file size to the link transmission rate. The delay d_nm^i of delivering cached file f_i from the source node to a destination node s_m ∈ (M_i \ s_n) other than the source is the sum of the weights of all edges on the path path_nm between the two nodes, and the total delay of delivering the file in parallel over the multicast tree is the maximum of the delays from the source node to all destination nodes.
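A minimal sketch of this network model follows, assuming the networkx library; the topology, link rates, and file size are illustrative, and shortest paths stand in here for the routes chosen by the multicast tree.

    # Sketch: weighted topology G=(V,E) whose edge weight is the per-file link
    # delay size/rate; multicast-tree delay = worst path delay to any target.
    import networkx as nx

    SIZE = 200 * 8e6                    # 200 MB file, in bits (illustrative)
    rates = {(1, 2): 1e9, (2, 3): 1e9, (1, 4): 5e8, (4, 5): 1e9}  # bit/s

    G = nx.Graph()
    for (p, q), rate in rates.items():
        G.add_edge(p, q, weight=SIZE / rate)  # link delay d(e) in seconds

    def path_delay(src, dst):
        # d_nm^i: sum of edge weights along the path from src to dst.
        return nx.shortest_path_length(G, src, dst, weight="weight")

    def multicast_delay(src, destinations):
        # Parallel delivery: tree delay is the maximum path delay to a target.
        return max(path_delay(src, m) for m in destinations if m != src)

    print(multicast_delay(1, {3, 5}))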
Further, in step S6, given the node caching file f_i and the set of destination nodes requesting the file, the minimum-delay multicast tree is constructed with the improved ant colony optimization multicast algorithm. A maximum iteration count IterNum is set. In each iteration, c data packets are sent out from the source node, each constructing a multicast tree, and the current minimum-delay multicast tree is continuously updated within each iteration. At every step, the neighbors of all nodes already on the generated tree form the set of selectable next nodes; the next node to visit is selected from this set according to the state transition rule of the ant colony algorithm, and the corresponding link is added to extend the tree outward. Each time a link is added to the multicast tree, its pheromone concentration is adjusted according to the pheromone update rule of the ant colony algorithm, until all destination nodes have joined the multicast tree and one iteration ends. The pheromone on the current minimum-delay multicast tree is then updated globally, so the next iteration is further optimized on this basis. When the iteration count reaches the set maximum, the iterations end and the algorithm converges to the minimum-delay multicast tree, giving the minimum delay for the given multicast group.
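The following compact sketch illustrates the flavor of this procedure rather than the patent's exact algorithm: the state transition rule is rendered as roulette selection proportional to pheromone^alpha * (1/link delay)^beta, the constants alpha, beta, and rho are arbitrary, and the prune helper it calls is sketched after the next paragraph. Here graph maps each node to a dict of neighbor -> link delay.

    # Sketch: ants grow a tree outward from the source until all destinations
    # are covered; link choice follows an ant-colony state transition rule.
    import random

    def iacom(graph, src, dests, iters=11, ants=12,
              alpha=1.0, beta=2.0, rho=0.1):
        """graph: {node: {neighbor: delay}}; dests: set of destination nodes."""
        tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone
        best_delay, best_edges = float("inf"), None
        for _ in range(iters):
            for _ in range(ants):            # c data packets per iteration
                tree_nodes, edges = {src}, []
                dist = {src: 0.0}            # delay from source along the tree
                while not dests <= tree_nodes:
                    # Frontier links: tree node -> node not yet on the tree.
                    frontier = [(u, v) for u in tree_nodes
                                for v in graph[u] if v not in tree_nodes]
                    weights = [tau[(u, v)] ** alpha *
                               (1.0 / graph[u][v]) ** beta
                               for u, v in frontier]
                    u, v = random.choices(frontier, weights)[0]  # transition
                    edges.append((u, v))
                    dist[v] = dist[u] + graph[u][v]
                    tree_nodes.add(v)
                    # Local pheromone update on the newly added link.
                    tau[(u, v)] = (1 - rho) * tau[(u, v)] + rho
                edges = prune(edges, src, dests)  # step S7, sketched below
                delay = max(dist[m] for m in dests)  # parallel delivery
                if delay < best_delay:
                    best_delay, best_edges = delay, edges
            for e in best_edges:  # global update on the current best tree
                tau[e] = (1 - rho) * tau[e] + rho / best_delay
        return best_delay, best_edges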
Further, in step S7, in each iteration, after a data packet has been delivered to all destination nodes, the constructed tree may still contain leaf nodes that are not destination nodes. Therefore, after each data packet completes one delivery, the constructed tree must be pruned: the edges attached to non-destination leaf nodes are removed, yielding a multicast tree covering the source node and all destination nodes.
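A minimal sketch of this pruning step follows (the prune helper used in the previous sketch); the edge-list tree representation is an assumption of these examples, not a structure mandated by the patent.

    # Sketch: repeatedly strip leaf nodes that are neither the source nor a
    # destination, leaving a tree covering the source and all destinations.
    def prune(edges, src, dests):
        edges = list(edges)
        while True:
            degree = {}
            for u, v in edges:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            removable = {n for n, deg in degree.items()
                         if deg == 1 and n != src and n not in dests}
            if not removable:
                return edges
            edges = [(u, v) for u, v in edges
                     if u not in removable and v not in removable]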
Further, the number N of macro base stations in the mobile edge cloud system model is 12.
Further, the number I of the cache files is 50.
Further, the length T of the timeslot is 5 minutes.
Further, the number K of candidate base stations for selecting the cache file in the CMPAC caching algorithm is set to be 5.
Further, the maximum iteration number IterNum is set to 11, and the number c of data packets sent out in each iteration is 12.
Compared with the prior art, the invention has the following advantages and effects:
(1) Different from the traditional instantaneous statistics of popularity in caching research, the method adopts a popularity prediction algorithm to sense in real time the dynamically changing demand for files in the system. The number of requests for a file at a macro base station in different time slots is taken as a time series, and an ARIMA model is adopted to predict the number of requests in the next time slot. The file popularity obtained by the prediction algorithm provides a relatively accurate reference for the subsequent cache deployment.
(2) The method takes the content delivery mode as its entry point, considering that users may send a large number of requests for content related to a hot event within a short time; although pre-caching the hot content can relieve the pressure on the backhaul link, the content delivery mode also affects the service delay. The traditional cache based on unicast delivery is abandoned, and an edge cloud content cache deployment method based on multicast is provided, minimizing the total delay of the system for content delivery. The candidate nodes of a cached file are selected through the first-stage popularity prediction and used in turn as multicast source nodes to construct multicast paths, so that the base station with the minimum delay that still has cache space is selected for pre-caching the file.
(3) The invention adopts an improved ant colony optimization multicast algorithm for the multicast group formed by a candidate node and its destination node set. Exploiting the exploration and exploitation characteristics of the ant colony algorithm, paths are searched on the constructed network topology, and through repeated iterations the algorithm converges quickly and finally finds the optimal solution of the problem.
(4) Although the invention targets the macro base station level of a two-layer heterogeneous network architecture, with the continuous progress of 5G technology the interconnection of micro base stations becomes possible. If the micro base stations within a cell can communicate with each other, the multi-cell cooperative cache deployment provided by the invention can be applied to cache deployment within a cell, synchronously realizing multicast-based cooperative caching at the micro base station level.
Drawings
Fig. 1 is the architecture diagram of the macro base station/micro base station two-layer heterogeneous network with a controller disclosed in the present invention (the micro base stations within a cell are not shown, because the invention is developed around the caching of macro base stations);
FIG. 2 is a flowchart of a method for deploying content caches in edge clouds according to the disclosure;
fig. 3 is a schematic diagram of multicast tree construction disclosed in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
This embodiment illustrates the use of the content cache deployment method in a mobile edge cloud. The scheme of the invention consists of two parts, user demand prediction and content cache deployment, and is described below with reference to the flowchart of content cache deployment in a mobile edge cloud disclosed in fig. 2. The implementation comprises the following steps:
First, a system model of the mobile edge cloud is constructed. The system model is a two-layer heterogeneous network architecture with a controller, composed of macro base stations and micro base stations, as shown in fig. 1, where macro base station is abbreviated MBS and micro base station is abbreviated SBS. Each cell is provided with an MBS, the MBSs communicate through optical fiber connections, and each MBS can be connected to the remote core cloud through a backhaul link. Several low-power SBSs are deployed within the coverage of an MBS, and users can be served directly by the SBS they access, or by the MBS. The MBSs are connected to the controller through control links; the controller collects and maintains the requests sent by the users of each cell and holds the global information.
Each macro base station is provided with a mobile edge computing server, MEC server for short. Hot content in the mobile edge cloud is pre-cached on the MEC servers of the macro base stations, sinking the content delivery service to the network edge and relieving the pressure on the backhaul links and the remote cloud. In the system model of the invention, each content file is deployed on only one MEC server under the architecture, i.e. no file cache has a copy. If users of other cells request the content file, the macro base station caching the file acts as the multicast source, the MBSs of the cells where the other requesting users are located form the destination node set, requests within a certain time window are aggregated into a multicast group, one macro base station multicast stream achieves parallel delivery, and each destination base station delivers the file to the users of its own cell after receiving it. The content cache deployment method focuses on the content caching of multiple cells, and the invention takes the macro base station layer of the two-layer heterogeneous network as its research object. The popularity of a file at the macro base stations of all cells and its multicast-based transmission delay are considered, where the transmission delay only covers delivering the file from the base station of the caching cell to the base station of the cell where the user is located. Although the method focuses on multi-cell cooperative caching, it can be extended to the micro base station layer within each cell, synchronously realizing cache deployment among the micro base stations of a cell.
The controller divides the running time into time slots. At the beginning of each time slot, each MBS predicts the demand set for each content file in its cell with the ARIMA prediction algorithm, and the controller obtains the demand information of all users from the prediction results of the MBSs. The ARIMA prediction algorithm is executed at the beginning of each time slot: the number of requests for a content file at the macro base station of a cell in each historical time slot is regarded as a time series reflecting the changing trend of user demand; after the series is imported, the data is made stationary and the parameter d is determined, the model order is determined from the autocorrelation and partial autocorrelation plots, suitable parameters p and q are screened automatically, and an ARIMA(p, d, q) model is fitted to predict the number of requests for the file at the designated macro base station in the next time slot; the ratio between this result and the total number of requests for the file is taken as the file's popularity in the next time slot.
When the content cache deployment operation is executed in a single time slot, the controller sorts the content files in descending order of total predicted requests according to the result of the ARIMA prediction algorithm, deploys them one by one according to the Cooperative Multicast and Popularity-Aware Caching (CMPAC) algorithm, and, starting from an empty allocation, always greedily places each content file on the MEC server that minimizes the system delay. For the deployment of each file, the first K macro base stations with the highest popularity are selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i of the candidate set is taken in turn as the source node caching f_i, and the corresponding file transmission delay is computed as

    D(T_in) = max_{s_m ∈ (M_i \ s_n)} d_nm^i

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree delivering file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node of the destination node set other than the source node, and d_nm^i is the transmission delay from the source node to destination node s_m. Since file f_i is delivered in parallel to all destination base stations through the multicast tree whose source node is s_n, the delivery delay of f_i equals the delay D(T_in) of the multicast tree, i.e. the file transmission delay is the maximum of the delays from the source node to the individual destination nodes. While deploying the content files one by one, each file is cached at the node where D(T_in) is minimal; if, as deployment proceeds, a node no longer has enough storage space, the cache node is selected from the K obtained delays in ascending order.
A corresponding network topology graph is constructed from the system model as the network model. At each MBS node side of the network model an MEC server with computing, storage, and data processing functions is deployed, each link is assigned a corresponding data transmission rate, and the transmission delay of a content file can be calculated from the network model. The system model of the mobile edge cloud is simulated by a weighted network topology graph G = (V, E), where V is the set of communication nodes, i.e. the base station set S, and E is the set of communication links between base stations. Each edge e ∈ E carries a weight d(e) representing the transmission delay of that link; for the link e_pq ∈ E between two adjacent base stations s_p and s_q (p, q ∈ {1, 2, ..., N} and p ≠ q), the transmission delay is the ratio of the file size to the link transmission rate. The delay d_nm^i of delivering file f_i from the source node to a destination node s_m ∈ (M_i \ s_n) other than the source is the sum of the weights of all edges on the path path_nm between the two nodes, and the total delay of delivering the file in parallel over the multicast tree is the maximum of the delays from the source node to all destination nodes.
Given the multicast group composed of a node caching a file and the set of destination nodes requesting that file, the Improved Ant Colony Optimization Multicast (IACOM) algorithm is used to construct the minimum-delay multicast tree, thereby determining the content delivery delay of caching the file at the corresponding macro base station, which serves as the deployment basis for optimizing the total delay of the system; the construction principle of the multicast tree is shown in fig. 3. A maximum iteration count IterNum is set. In each iteration, c data packets are sent out from the source node, each constructing a multicast tree, and the current minimum-delay multicast tree is continuously updated within the iteration. At every step, the neighbors of all nodes already on the generated tree form the set of selectable next nodes; the next node to visit is selected from this set according to the state transition rule of the ant colony algorithm, and the corresponding link is added to extend the tree outward. Each time a link is added to the multicast tree, its pheromone concentration is adjusted, until all destination nodes have joined the multicast tree and one iteration ends. The pheromone on the current minimum-delay multicast tree is then updated globally, and the next iteration is further optimized on this basis; when the iteration count reaches the set maximum, the iterations end and the algorithm has converged to the minimum-delay multicast tree of the given multicast group. During tree construction, after a data packet has reached all destination nodes in an iteration, the constructed tree may still contain leaf nodes that are not destination nodes; therefore, after each data packet completes one delivery, the constructed tree is pruned and the edges attached to non-destination leaf nodes are removed, yielding a multicast tree covering the source node and all destination nodes. For the K candidate cache nodes of a file, K minimum-delay multicast trees are constructed by this method, and the node with the minimum delay that still has cache space is selected to cache the file.
After the system model of the mobile edge cloud completes the one-by-one deployment of content files in a single time slot, multicast-based content delivery can proceed within that time slot: within a multicast aggregation window, the set of base stations requesting the same content is aggregated and served uniformly by one multicast stream, achieving efficient parallel delivery of files.
The number N of macro base stations in the system model of the mobile edge cloud is 12, the number I of cached files is 50, and the remaining parameters of the system are configured with the values in Table 1:
TABLE 1 NS2 network simulation parameter configuration table

Parameter   Parameter description                      Parameter value
N           Number of macro base stations              12
I           Number of cached files                     50
capS        Macro base station cache capacity          2GB
SIZE_i      Content file size                          100MB-400MB
z           Parameter of the Zipf distribution         1.2
K           Number of cache candidate base stations    5
T           Time slot length                           5min
\           Duration of the experiment                 60min
(I) ARIMA prediction algorithm for predicting user demand
The feasibility of combining the prediction algorithm with content cache deployment in the edge cloud is verified through the goodness of fit between the predictions and the real demand values, and by comparing deployment based on the prediction results with deployment in other ways.
(1) Degree of fitting
The data set used in this embodiment contains the user requests recorded by the base station servers of 12 cells over 24 consecutive hours. The data is first preprocessed: the three fields used in the prediction experiment, namely request time, ID of the accessed base station, and ID of the requested file, are extracted, and the records of the 50 most requested hot files are kept as the final experimental data set. The first 14 hours of data are used as training samples for modeling, and the last 10 hours as test samples for evaluating the prediction results.
(2) Impact on content caching deployment
Two comparative test groups are used. One group uses the real demand data as the demand set and computes the system delay under the deployment decided from that set; the delay in this case is the total system delay closest to the real situation and serves as the reference for the experimental results. The other group computes the demand set of time slot T by a moving-average method and runs the deployment decision on that result to compute the system delay of transmitting files; the calculation rule is that the average number of requests over several time slots preceding slot T is taken as the request value for slot T. The experiment runs for 12 time slots in total.
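The moving-average baseline can be stated in a few lines; the window length and history values below are illustrative, not taken from the experiment.

    # Sketch of the moving-average baseline: the demand for slot T is the
    # mean of the previous `window` slots.
    def moving_average_demand(history, window=3):
        return sum(history[-window:]) / min(window, len(history))

    history = [32, 41, 38, 52, 47]         # requests in the previous slots
    print(moving_average_demand(history))  # predicted requests for slot T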
(II) Content cache deployment algorithm
Taking the system delay as the evaluation index, the performance of the scheme is evaluated by comparison with other cache deployment methods, different content delivery modes, and architectures with different numbers of base stations. The other cache deployment algorithms are Random Caching (RC) and Popularity-Aware Caching (PAC); the content delivery modes are unicast delivery and the multicast delivery adopted by the invention; and the base station counts compared are 12 and 6.
(1) Cache deployment method
Each time slot lasts 5 minutes, and the mobile edge cloud system model runs for 12 time slots. The system delay produced by cache deployment with the CMPAC algorithm proposed by the invention is compared with the system delay of the RC and PAC algorithms, to verify the superiority of the proposed cache deployment method.
(2) Content delivery mode
Under the 12-macro-base-station architecture, the system delays of file transmission after cache deployment by CMPAC_multicast, CMPAC_unicast, PAC_multicast, and PAC_unicast are compared over the corresponding 12 time slots. If the delay of multicast delivery is always smaller than that of unicast delivery regardless of the cache deployment method adopted, this shows that multicast delivery is an important entry point for reducing delay in edge cloud cache deployment.
(3) Number of base stations
The CMPAC algorithm proposed in this embodiment is run in the following four scenarios: 12 base stations & multicast, 12 base stations & unicast; 6 base stations & multicast, 6 base stations & unicast, comparing the system delay of file transmission after cache deployment over the same 12 time slots. If increasing the number of base stations reduces the delay of multicast delivery by more than that of unicast delivery, it means that the benefit of multicast transmission grows with the number of base stations, and its advantage over unicast becomes more significant.
(III) multicast tree construction algorithm
The IACOM algorithm provided in this embodiment is compared with a modified ant colony optimization algorithm (MACO), comparing the time the two algorithms need to construct the multicast tree and the content delivery delay of the constructed trees. If, for the same number of iterations, the IACOM algorithm converges more quickly to the minimum-delay multicast tree for transmitting the file, this demonstrates the time-efficiency advantage of the proposed algorithm.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A content cache deployment method in a mobile edge cloud is characterized by comprising the following steps:
s1, constructing a system model of the mobile edge cloud, wherein the system model is a two-layer heterogeneous network architecture with a controller, and the system model is composed of macro base stations and micro base stations, wherein the macro base stations are abbreviated as MBS, and the micro base stations are abbreviated as SBS; each cell is provided with an MBS, the MBS realizes communication through optical fiber connection, and each MBS is connected with a far-end core cloud through a return link; the SBS with low power is deployed in the coverage area of the MBS, and the user can be directly served by the SBS accessed by the user or the MBS; MBS is connected to the controller through the control link, the request condition sent by users in each cell is collected and maintained by the controller;
s2, each macro base station is provided with a mobile edge computing server, referred to as an MEC server for short, hot content in a mobile edge cloud is pre-cached on the MEC server of the macro base station, each content file can only be deployed on one unique MEC server under the system model, namely each file cache has no copy, if users of other cells request the content file, the macro base station caching the file is used as a multicast source, MBS of the cell where other requesting users are located forms a destination node set, a multicast group is formed in a certain time window in a cohesive mode, parallel delivery is realized through a macro base station multicast stream, and each destination base station receives the file and then delivers the file to a local user;
s3, the controller divides the operation time into a plurality of time slots, at the beginning of each time slot, each MBS predicts the demand set of each content file of each user in each cell by ARIMA prediction algorithm, and the controller obtains the demand information of each user by the prediction result of each MBS;
s4, selecting, by the controller, candidate base stations for caching each content file according to the prediction result of the ARIMA prediction algorithm, calculating the optimal cache deployment for the time slot according to the cooperative caching algorithm based on multicast and popularity perception, and, starting from an empty allocation, always greedily deploying each content file on the MEC server that minimizes the system time delay;
s5, constructing a corresponding network topological graph as a network model according to the system model, deploying an MEC server with computing, storing and data processing functions at each MBS node side in the network model, setting corresponding data transmission rate for each link, and calculating the transmission delay of each content file according to the network model;
s6, a multicast group consisting of a file cache node and a destination node set requesting the file is given, and a multicast tree with minimum time delay is constructed by utilizing an improved ant colony optimization multicast algorithm, so that the minimum time delay of content delivery of the file cache under a corresponding macro base station is determined;
s7, after the multicast tree is constructed for the given multicast group, pruning the leaf nodes of the non-destination nodes in the constructed multicast tree;
s8, after the system model of the mobile edge cloud finishes one-by-one deployment of the content files in a single time slot, the multicast-based content delivery is carried out in the single time slot, namely, in a multicast continuous window, a base station set which sends a request to the same content is aggregated, and a multicast stream is used for unified parallel service.
2. The method as claimed in claim 1, wherein in step S3, an ARIMA prediction algorithm is executed at the beginning of each time slot; the number of requests for a content file at a macro base station of a cell in each historical time slot is regarded as a time series; after the series is imported, the data is made stationary, the order of the ARIMA model is determined from the autocorrelation and partial autocorrelation graphs, and after the ARIMA model is fitted, the number of requests for the file at the designated macro base station in the next time slot can be predicted; the ratio between this result and the total number of requests for the file is taken as the predicted popularity of the file in the next time slot.
3. The method according to claim 1, wherein in step S4, when performing the content cache deployment operation in a single time slot, the content files are sorted in descending order according to the predicted total request times of the files in the time slot, and then the content files are deployed one by one from the empty allocation according to the descending order.
4. The method according to claim 3, wherein in step S4, for the deployment of each file, the first K macro base stations with the highest popularity are selected to form the candidate base station set W_i for caching file f_i; each node s_n ∈ W_i of the candidate set is taken in turn as the source node caching f_i, and the corresponding file transmission delay is computed as

    D(T_in) = max_{s_m ∈ (M_i \ s_n)} d_nm^i

where S = {s_n | n = 1, 2, ..., N} denotes the set of macro base stations in the multi-cell architecture, F = {f_i | i = 1, 2, ..., I} denotes the set of files to be cached under this architecture, N is the number of macro base stations in the system model of the mobile edge cloud, I is the number of cached files, M_i denotes the set of destination base stations requesting file f_i, D(T_in) is the delay of the multicast tree delivering file f_i cached at macro base station s_n, s_m ∈ (M_i \ s_n) denotes a destination node of the destination node set other than the source node, and d_nm^i is the transmission delay from the source node to destination node s_m; since file f_i is delivered in parallel to all destination base stations through the multicast tree whose source node is s_n, the delivery delay of f_i equals the delay D(T_in) of the multicast tree, i.e. the file transmission delay is the maximum of the delays from the source node to the individual destination nodes; while deploying the content files one by one, each file is cached at the node where D(T_in) is minimal, and if, as deployment proceeds, a node does not have enough storage space, the cache node is selected from the K obtained delays in ascending order.
5. The method according to claim 1, wherein in step S5, the system model of the mobile edge cloud is simulated by a weighted network topology graph G = (V, E), where V denotes the set of communication nodes, i.e. the base station set S, and E denotes the set of communication links between base stations; each edge e ∈ E carries a weight d(e) representing the transmission delay of that link; for the link e_pq ∈ E between two adjacent base stations s_p and s_q, where p, q ∈ {1, 2, ..., N} and p ≠ q, the transmission delay is the ratio of the file size to the link transmission rate; the delay d_nm^i of delivering cached file f_i from the source node to a destination node s_m ∈ (M_i \ s_n) other than the source is the sum of the weights of all edges on the path path_nm between the two nodes, and the total delay of the parallel delivery of the file over the multicast tree is the maximum of the delays from the source node to all destination nodes.
6. The method for deploying content cache in mobile edge cloud as claimed in claim 1, wherein in step S6, given the node caching file f_i and the set of destination nodes requesting the file, the minimum-delay multicast tree is constructed with the improved ant colony optimization multicast algorithm; a maximum iteration count IterNum is set; in each iteration, c data packets are sent from the source node, each constructing a multicast tree, and the current minimum-delay multicast tree is continuously updated within each iteration; at each step, the neighbors of all nodes already on the generated tree form the set of selectable next nodes, the next node to visit is selected from this set according to the state transition rule of the ant colony algorithm, and the corresponding link is added to extend the tree outward; each time a link is added to the multicast tree, its pheromone concentration is adjusted, until all destination nodes have joined the multicast tree and one iteration ends; the pheromone on the current minimum-delay multicast tree is then updated globally, and the next iteration is further optimized on this basis; when the iteration count reaches the set maximum, the iterations end, yielding the minimum-delay multicast tree corresponding to the given multicast group.
7. The method according to claim 1, wherein in step S7, after completing one delivery, each packet performs pruning on the constructed tree, and removes edges connected to leaf nodes of non-destination nodes, so as to obtain a multicast tree covering a source node and all destination nodes.
8. The method according to claim 1, wherein the number N of macro base stations in the system model of the mobile edge cloud is 12, the number I of the cache files is 50, and the length T of the time slot is 5 minutes.
9. The content cache deployment method in the mobile edge cloud according to claim 4, wherein the number K of candidate base stations for selecting the cache file in the cooperative cache algorithm based on multicast and popularity perception is set to be 5.
10. The method of claim 6, wherein the maximum number of iterations IterNum is set to 11, and the number c of data packets sent out in each iteration is 12.
CN202010781085.7A 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud Active CN112020103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010781085.7A CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010781085.7A CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Publications (2)

Publication Number Publication Date
CN112020103A true CN112020103A (en) 2020-12-01
CN112020103B CN112020103B (en) 2023-08-08

Family

ID=73499315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010781085.7A Active CN112020103B (en) 2020-08-06 2020-08-06 Content cache deployment method in mobile edge cloud

Country Status (1)

Country Link
CN (1) CN112020103B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110247793A * 2019-05-29 2019-09-17 Jinan University Application deployment method in a mobile edge cloud
CN110418367A * 2019-06-14 2019-11-05 University of Electronic Science and Technology of China Low-latency edge caching method for 5G fronthaul hybrid networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUA Depei; SUN Yanzan; WU Yating; WANG Tao: "Mobile edge collaborative computing based on ant colony optimization algorithm", Electronic Measurement Technology, No. 20 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112671880B (en) * 2020-12-18 2022-08-16 中国科学院上海高等研究院 Distributed content caching and addressing method, system, medium, macro base station and micro base station
CN112671880A (en) * 2020-12-18 2021-04-16 中国科学院上海高等研究院 Distributed content caching and addressing method, system, medium, macro base station and micro base station
CN112422352A (en) * 2021-01-25 2021-02-26 华东交通大学 Edge computing node deployment method based on user data hotspot distribution
CN112822727A (en) * 2021-01-29 2021-05-18 重庆邮电大学 Self-adaptive edge content caching method based on mobility and popularity perception
CN112822727B (en) * 2021-01-29 2022-07-01 重庆邮电大学 Self-adaptive edge content caching method based on mobility and popularity perception
CN114979156A (en) * 2021-02-26 2022-08-30 中国电信股份有限公司 Method, system and terminal for realizing edge cloud service
CN113709853A (en) * 2021-07-23 2021-11-26 北京工业大学 Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN113709853B (en) * 2021-07-23 2022-11-15 北京工业大学 Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN113727358A (en) * 2021-08-31 2021-11-30 河北工程大学 KM and greedy algorithm-based edge server deployment and content caching method
CN113727358B (en) * 2021-08-31 2023-09-15 河北工程大学 Edge server deployment and content caching method based on KM and greedy algorithm
CN113766540A (en) * 2021-09-02 2021-12-07 北京工业大学 Low-delay network content transmission method and device, electronic equipment and medium
CN113766540B (en) * 2021-09-02 2024-04-16 北京工业大学 Low-delay network content transmission method, device, electronic equipment and medium
CN114070859B (en) * 2021-11-29 2023-09-01 重庆邮电大学 Edge cloud cache cooperation method, device and system based on boundary cost benefit model
CN114070859A (en) * 2021-11-29 2022-02-18 重庆邮电大学 Edge cloud cache cooperation method, device and system based on boundary cost benefit model
CN114500560A (en) * 2022-01-06 2022-05-13 浙江鼎峰科技股份有限公司 Edge node service deployment and load balancing method for minimizing network delay
CN114500560B (en) * 2022-01-06 2024-04-26 浙江鼎峰科技股份有限公司 Edge node service deployment and load balancing method for minimizing network delay
CN114513514A (en) * 2022-01-24 2022-05-17 重庆邮电大学 Edge network content caching and pre-caching method for vehicle users
CN114513514B (en) * 2022-01-24 2023-07-21 重庆邮电大学 Edge network content caching and pre-caching method for vehicle users
CN114826900B (en) * 2022-04-22 2024-03-29 阿里巴巴(中国)有限公司 Service deployment processing method and device for distributed cloud architecture
CN115766882A (en) * 2022-10-19 2023-03-07 北京奇艺世纪科技有限公司 Distributed source data returning method and device, storage medium and electronic equipment
CN117955979A (en) * 2024-03-27 2024-04-30 中国电子科技集团公司第五十四研究所 Cloud network fusion edge information service method based on mobile communication node

Also Published As

Publication number Publication date
CN112020103B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN112020103B (en) Content cache deployment method in mobile edge cloud
CN110247793B (en) Application program deployment method in mobile edge cloud
Huang et al. A services routing based caching scheme for cloud assisted CRNs
CN108900355B (en) Satellite-ground multistage edge network resource allocation method
Chamola et al. An optimal delay aware task assignment scheme for wireless SDN networked edge cloudlets
CN111988796B (en) Dual-mode communication-based system and method for optimizing platform information acquisition service bandwidth
CN108834080B (en) Distributed cache and user association method based on multicast technology in heterogeneous network
WO2023024219A1 (en) Joint optimization method and system for delay and spectrum occupancy in cloud-edge collaborative network
CN109699033B (en) LoRa power Internet of things base station deployment method and device for cost and load balancing
CN110913405B (en) Intelligent communication system testing method and system based on scene grading and evaluation feedback
Li et al. Distributed task offloading strategy to low load base stations in mobile edge computing environment
Li et al. A delay-aware caching algorithm for wireless D2D caching networks
Zhang et al. DMRA: A decentralized resource allocation scheme for multi-SP mobile edge computing
CN106792995B (en) User access method for guaranteeing low-delay content transmission in 5G network
CN106060876B (en) Load balancing method for heterogeneous wireless networks
Di Maio et al. A centralized approach for setting floating content parameters in VANETs
Jia et al. A BUS‐aided RSU access scheme based on SDN and evolutionary game in the Internet of Vehicle
Wang et al. Task allocation mechanism of power internet of things based on cooperative edge computing
CN114785692B (en) Communication network flow balancing method and device for aggregation regulation of virtual power plants
Sun et al. A DQN-based cache strategy for mobile edge networks
Luo et al. Joint game theory and greedy optimization scheme of computation offloading for UAV-aided network
CN112887943B (en) Cache resource allocation method and system based on centrality
Hamzaoui et al. Enhancing OLSR routing protocol using k-medoids clustering method in manets
CN116567667A (en) Heterogeneous network resource energy efficiency optimization method based on deep reinforcement learning
CN116471632A (en) Task migration method based on multi-point cooperation in mobile edge calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant