CN113709853A - Network content transmission method and device oriented to cloud edge collaboration and storage medium - Google Patents

Network content transmission method and device oriented to cloud edge collaboration and storage medium

Info

Publication number
CN113709853A
Authority
CN
China
Prior art keywords
energy consumption
network
cloud
mbs
sbs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110836601.6A
Other languages
Chinese (zh)
Other versions
CN113709853B (en
Inventor
方超
黄潇洁
徐航
王朱伟
陈华敏
孙阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110836601.6A priority Critical patent/CN113709853B/en
Publication of CN113709853A publication Critical patent/CN113709853A/en
Application granted granted Critical
Publication of CN113709853B publication Critical patent/CN113709853B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/02Power saving arrangements
    • H04W52/0203Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/06Testing, supervising or monitoring using simulated traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/04TPC
    • H04W52/18TPC being performed according to specific parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/04TPC
    • H04W52/18TPC being performed according to specific parameters
    • H04W52/24TPC being performed according to specific parameters using SIR [Signal to Interference Ratio] or other wireless path parameters
    • H04W52/241TPC being performed according to specific parameters using SIR [Signal to Interference Ratio] or other wireless path parameters taking into account channel quality metrics, e.g. SIR, SNR, CIR, Eb/lo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W52/00Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W52/04TPC
    • H04W52/18TPC being performed according to specific parameters
    • H04W52/26TPC being performed according to specific parameters using transmission rate or quality of service QoS [Quality of Service]
    • H04W52/265TPC being performed according to specific parameters using transmission rate or quality of service QoS [Quality of Service] taking into account the quality of service QoS
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention relates to a network content transmission method, device and storage medium for cloud-edge collaboration. The method comprises the following steps: analyzing and constructing a network model according to the SBS and MBS connection conditions in the heterogeneous network and the manner in which users access base stations; analyzing and constructing energy consumption models for each part of the system based on the network model, and constructing a system energy consumption model based on these; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on the solution to reduce energy consumption. By reasonably deploying in-network caches under SBS and MBS cooperation, and adopting request aggregation and the cooperative allocation of computation, cache and communication resources, the method, device and storage medium reduce redundant transmission of network content and improve the content transmission efficiency, resource utilization and network energy efficiency of the heterogeneous network.

Description

Network content transmission method and device oriented to cloud edge collaboration and storage medium
Technical Field
The invention relates to the technical field of wireless communication, in particular to a network content transmission method and device facing cloud edge collaboration and a storage medium.
Background
With the rapid growth of Internet traffic driven by multimedia services (e.g., YouTube, Netflix) and smart mobile terminals (e.g., smartphones, tablet computers), ultra-dense networks, a key technology of fifth-generation (5G) mobile communication systems, can effectively improve resource utilization and content transmission efficiency. However, with the dense deployment of base stations and the rising Quality of Service (QoS) demands driven by growing user numbers and communication data volumes, the energy consumption problem is increasingly prominent. Improving network energy efficiency while ensuring content transmission efficiency is therefore an urgent and challenging task.
Cloud computing, which appeared first, improves content delivery by shortening the distance between content requesters and providers, but incurs additional deployment and operational costs as well as significant energy consumption. As a lightweight extension of cloud computing, mobile edge computing (MEC) can further reduce transmission delay and network energy consumption by placing caching and computing capabilities at the edge of the mobile network (e.g., base stations, access routers and switches). Given the significant advantages of mobile edge computing over cloud computing, together with its limited service capability, cloud-edge collaborative schemes have recently attracted great attention in both academia and industry. Cloud-edge collaboration was initially applied in a two-layer network of edge devices and cloud, ignoring the computing and caching capabilities of base stations. Researchers have made many attempts: the problems of energy-saving wireless backhaul bandwidth allocation and power allocation were studied in heterogeneous small-cell networks, and an iteratively solvable optimal energy efficiency model was proposed. It was then proposed to exploit a distributed framework and centralized processing capability to realize a green network, in particular reducing energy consumption through a base station sleep strategy. On the other hand, researchers established a joint energy and user allocation method based on energy sharing between macro base stations (MBS) and small base stations (SBS) in a heterogeneous network. Researchers have also improved mobile network performance by jointly managing and allocating cache, computing and communication resources. However, current work focuses primarily on the joint optimization of only two of these resources.
Scholars have proposed a software-defined-networking-based IC-IoT network architecture and used DQN (Deep Q-Network) to optimize the allocation of computation and cache resources. Others have formulated the joint offloading and resource allocation problem as a Markov Decision Process (MDP) to maximize the number of offloaded tasks while reducing energy consumption.
Although cloud-edge collaboration can improve energy efficiency and content delivery, most research has been done in a three-tier topology of mobile users (MUs), base stations and the cloud, without considering the effects of in-network caching, request aggregation and network heterogeneity, and has mainly focused on the joint optimal configuration of only two resources.
Disclosure of Invention
The invention aims to provide a network content transmission method, a network content transmission device and a network content transmission storage medium for cloud-edge collaboration, which are used for solving the problems in the prior art.
In a first aspect, the present invention provides a network content transmission method facing cloud edge collaboration, including:
analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing a system energy consumption model based on these per-part energy consumption models;
and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
Further, the network model comprises a cloud, an MBS, an SBS and a mobile user terminal, wherein the MBS and the SBS have the caching and calculating capabilities, and the cloud stores the content requested by the mobile user terminal.
Further, the energy consumption model of each part of the system comprises: the energy consumption model of the mobile user terminal, the energy consumption model of the base station, the energy consumption model of the cloud and the energy consumption model of the link.
Further, the energy consumption model of the mobile user terminal comprises the energy consumption of the mobile user terminal directly connecting the SBS and the mobile user terminal directly connecting the MBS;
the base station energy consumption model comprises MBS energy consumption and SBS energy consumption;
the cloud energy consumption model comprises static energy consumption of a cloud end and energy consumption generated by meeting a request which cannot be processed by MBS;
the link energy consumption model comprises energy consumption between the MBS and the SBS and between the MBS and the cloud.
Further, the system energy consumption model includes link energy consumption and base station energy consumption for processing and transmitting requests in the network.
Further, the caching policy includes: the method comprises the steps of not considering the converged non-caching strategy, considering the offline caching strategy and considering the online caching strategy.
Further, the aggregation refers to processing identical requests arriving within one time slot as a single request, and the caching refers to caching appropriate content at the MBS and SBS.
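As an illustration of the aggregation and caching notions in the claims above, the following is a minimal Python sketch, not the patented algorithm itself; the names `aggregate` and `LRUCache` and the capacity value are hypothetical:

```python
from collections import OrderedDict

def aggregate(requests):
    """Collapse duplicate content requests within one time slot into a single
    upstream request, keeping a count so responses can be fanned back out."""
    counts = {}
    for content_id in requests:
        counts[content_id] = counts.get(content_id, 0) + 1
    return counts  # one upstream request per distinct content id

class LRUCache:
    """Online caching policy sketch: keep the most recently requested contents."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, content_id):
        if content_id in self.store:          # cache hit: refresh recency
            self.store.move_to_end(content_id)
            return True
        self.store[content_id] = True         # cache miss: insert
        if len(self.store) > self.capacity:   # evict least recently used
            self.store.popitem(last=False)
        return False
```

For example, `aggregate(["a", "a", "b"])` returns `{"a": 2, "b": 1}`, so only two upstream requests are transmitted instead of three.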
In a second aspect, the present invention provides a network content transmission device facing cloud edge collaboration, including:
the network model building module is used for analyzing and building a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module is used for analyzing and building energy consumption models of all parts of the system based on the network model, and building a system energy consumption model based on these per-part energy consumption models;
and the system energy consumption optimization module is used for solving the system energy consumption model based on the cache strategy and optimizing the network system based on the solution result so as to reduce energy consumption.
In a third aspect, the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the cloud edge collaboration-oriented network content transmission method according to the first aspect when executing the program.
In a fourth aspect, the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the cloud edge collaboration oriented network content transmission method according to the first aspect.
As can be seen from the above technical solutions, the network content transmission method, the device and the storage medium for cloud-edge cooperation provided by the present invention reasonably deploy in-network cache under the cooperation of the SBS and MBS base stations, and reduce redundant transmission of network content and improve content transmission efficiency, resource utilization rate and network energy efficiency of a heterogeneous network by requesting aggregation and cooperative allocation of computation, cache and communication (3C) resources.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a network content transmission method for cloud edge collaboration according to an embodiment of the present invention;
FIG. 2 is a diagram of a network topology model depicting a heterogeneous network system, according to an embodiment of the invention;
FIG. 3 illustrates the core parameter notation and significance employed by the model according to an embodiment of the present invention;
FIG. 4 is a flow diagram of an LRU policy according to an embodiment of the present invention;
FIG. 5 is a diagram of a performance description of a caching policy according to an embodiment of the invention;
fig. 6 is a schematic structural diagram of a network content transmission apparatus facing cloud edge collaboration according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention designs an energy-efficient, cloud-edge-collaboration-oriented network content transmission algorithm and minimizes the energy consumption of the whole network by optimizing the cooperative allocation of 3C resources and transmission over optimal routing paths. The invention focuses on the energy consumption optimization problem under the cooperation of MBS and SBS.
Fig. 1 is a flowchart of a network content transmission method facing cloud edge collaboration according to an embodiment of the present invention, and referring to fig. 1, the network content transmission method facing cloud edge collaboration according to the embodiment of the present invention includes:
step 110: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
step 120: analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing a system energy consumption model based on these per-part energy consumption models;
step 130: and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In the embodiment of the present invention, it should be noted that, the present invention first introduces a topology structure of a network according to SBS and MBS connection conditions under a heterogeneous network and a manner of accessing a user to a base station.
In order to effectively analyze the energy consumption of base stations in heterogeneous networks, fig. 2 shows a model describing the infrastructure of the network system. From top to bottom, the uppermost layer is the cloud, the middle layer is the MBS, and the layer closest to the user is the SBS. Each MBS or SBS has caching and computation capabilities, and the content requests of mobile user terminals can be satisfied either by a base station or by the cloud. The cloud stores all the content and resources requested by users; however, because the cloud is far from the users and accessing it consumes considerable energy, users send their requests through a directly connected MBS or SBS.
The cloud is connected to the N_M MBSs through the Internet, and the ith MBS is connected to N_{i,s} SBSs. Base stations are interconnected by wired links. Both MBS and SBS are edge nodes close to the mobile user terminals; both have caching and computing capabilities and can respond to and process part of the requests from mobile user terminals. An MBS has a larger cache than an SBS but incurs higher energy consumption; an SBS has limited capacity for processing user requests but consumes less power. Each MBS directly connects N_{i,m} mobile user terminals within its coverage area, limited by distance, and each SBS directly connects N_{ij,m} mobile user terminals within its coverage area, likewise limited by distance. Mobile user terminals connect to base stations wirelessly, and a terminal can only send content requests to a base station it is directly connected to, in order to retrieve content.
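The request routing just described (a request is served at the directly connected SBS if the content is cached there, otherwise escalated to the MBS, and finally to the cloud, which stores everything) can be sketched as a simple lookup chain; a hedged illustration with hypothetical cache contents:

```python
def resolve(content_id, sbs_cache, mbs_cache):
    """Return where a request is satisfied in the SBS -> MBS -> cloud chain.
    The cloud stores all content, so it always satisfies the request."""
    if content_id in sbs_cache:
        return "SBS"
    if content_id in mbs_cache:
        return "MBS"
    return "cloud"
```

Each escalation step adds link delay and energy, which is why the energy models below distinguish where a request is satisfied.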
In the embodiment of the present invention, it should be noted that the energy consumption models are energy consumption of the mobile user terminal, energy consumption of the base station (including MBS and SBS energy consumption models), cloud energy consumption, and link energy consumption, respectively. The invention also provides a system model and an optimization target of the model. Fig. 3 summarizes the core parameter symbols and meanings employed by the present invention.
Energy consumption models of the mobile user terminals are constructed for the two access cases: a terminal directly connected to an SBS and a terminal directly connected to an MBS.
Suppose there are N_M MBSs in the network, and MBS i directly connects N_{i,s} SBSs; requests are transmitted over the links to the directly connected base station to retrieve content. A mobile user terminal can directly access an MBS or SBS over a wireless link. In the system model, N_{i,m} mobile user terminals directly access the ith MBS, and N_{ij,m} mobile user terminals connect to the jth SBS under the ith MBS. Assume the number of distinct content categories available in the network system of this model is F.
The energy consumption of the mobile user terminals can be expressed as the sum over the terminals directly connected to SBSs and those directly connected to MBSs. Therefore, the energy consumption $E_{MU}$ of the mobile user terminals can be expressed as:

$$E_{MU} = \sum_{i=1}^{N_M} \sum_{j=1}^{N_{i,s}} \sum_{m=1}^{N_{ij,m}} \sum_{k=1}^{F} E^{s}_{ij,m,k} + \sum_{i=1}^{N_M} \sum_{m=1}^{N_{i,m}} \sum_{k=1}^{F} E^{m}_{i,m,k} \quad (1)$$

where $E^{s}_{ij,m,k}$ represents the energy consumed when the mth user sends request k to the jth SBS connected under the ith MBS, and $E^{m}_{i,m,k}$ represents the energy consumed when the mth user sends request k to the ith MBS. These correspond to the two cases in which a mobile user terminal accesses the system, wireless direct connection to an SBS and wireless direct connection to an MBS, described in detail below.
For SBS-direct users:
the energy consumption generated by the user directly connecting with the lowest SBS can be expressed as the product of the power consumption and the time delay generated during the period that the user sends a request and receives the content returned by the network. In the present model, the energy consumption generated by the mth user sending the request k to the jth SBS connected under the ith MBS is represented as
Figure BDA0003177393540000071
The product of the time delay between sending this request and receiving the corresponding data can be expanded to be written as:
Figure BDA0003177393540000072
wherein the content of the first and second substances,
Figure BDA0003177393540000073
represents the power consumption generated by the mth user sending a request k to the jth SBS connected under the ith MBS,
Figure BDA0003177393540000074
indicating the time delay generated by the mth user sending the request k to the jth SBS connected under the ith MBS.
Figure BDA0003177393540000075
Can be further expressed as:
Figure BDA0003177393540000076
wherein the content of the first and second substances,
Figure BDA0003177393540000077
indicating the delay that occurs when request k is fulfilled to the jth SBS connected under the ith MBS.
Figure BDA0003177393540000078
Indicating the delay incurred when request k is satisfied by the ith MBS,
Figure BDA0003177393540000079
representing the delay incurred when request k is satisfied to the cloud.
Figure BDA00031773935400000710
And
Figure BDA00031773935400000711
is a boolean variable indicating whether the request k is satisfied in the jth SBS and ith MBS connected under the ith MBS, respectively. If the content k is cached in the jth SBS connected under the ith MBS
Figure BDA00031773935400000712
Equal to 1, otherwise equal to 0. As can be seen from the same process, if the content k is cached in the ith MBS,
Figure BDA00031773935400000713
equal to 1, otherwise equal to 0.
The system delay can be described in three parts: response delay, processing delay and link delay. To describe the delays in equation (3) in detail, we use queuing theory. The queuing model considered here adopts an aggregation mechanism and assumes that requests at each base station follow an M/M/k queue. Let $\lambda_{ij,k}$, $k_{ij}$ and $\mu_{ij}$ respectively represent the arrival rate of request k at the jth SBS connected under the ith MBS after aggregation, the number of servers at that SBS, and the service rate of the base station. The utilization of the jth SBS connected under the ith MBS can then be expressed as:

$$\rho_{ij} = \frac{\lambda_{ij,k}}{k_{ij}\, \mu_{ij}} \quad (4)$$

However, a base station is not always idle when a request arrives. When all servers are occupied, a request from a mobile user terminal of the jth SBS connected under the ith MBS must queue at this SBS and wait until a server becomes free; this waiting probability can be written as:

$$P^{w}_{ij} = \frac{(k_{ij}\, \rho_{ij})^{k_{ij}}}{k_{ij}!\, (1 - \rho_{ij})}\, \pi_{0,ij} \quad (5)$$

where $\pi_{0,ij}$ denotes the steady-state probability that the jth SBS connected under the ith MBS holds no requested tasks. Based on equations (4) and (5), the response delay for content request k satisfied at the jth SBS connected under the ith MBS can be written as:

$$T^{res}_{ij,k} = \frac{P^{w}_{ij}}{k_{ij}\, \mu_{ij} - \lambda_{ij,k}} \quad (6)$$
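The utilization and waiting-probability expressions referenced in equations (4) and (5), and the resulting response delay, are the standard M/M/k (Erlang C) quantities. The following Python sketch evaluates them numerically (the function names are illustrative, not from the patent):

```python
from math import factorial

def erlang_c(k, lam, mu):
    """Erlang C: probability an arriving request finds all k servers busy
    and must wait, for an M/M/k queue with arrival rate lam, service rate mu."""
    a = lam / mu                       # offered load
    rho = a / k                        # utilization, cf. eq. (4)
    assert rho < 1, "queue must be stable (rho < 1)"
    pi0 = 1.0 / (sum(a**n / factorial(n) for n in range(k))
                 + a**k / (factorial(k) * (1 - rho)))   # empty-system probability
    return a**k / (factorial(k) * (1 - rho)) * pi0      # cf. eq. (5)

def response_delay(k, lam, mu):
    """Mean time a request waits for a free server:
    Erlang C probability over the spare service capacity."""
    return erlang_c(k, lam, mu) / (k * mu - lam)
```

For k = 1 this reduces to the familiar M/M/1 results: `erlang_c(1, lam, mu)` equals the utilization ρ, and the mean wait equals ρ/(μ − λ).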
when the request k is in the buffer of the jth SBS connected under the ith MBS, the total delay generated by the mobile user equipment includes the uplink and downlink transmission delay between the jth SBS connected under the ith MBS and the mobile user equipment, and the processing delay and the response delay in the jth SBS connected under the ith MBS, which can be expressed as:
Figure BDA0003177393540000083
wherein lij,upAnd lij,downRespectively representing the uplink and downlink transmission rates, v, of the mobile user terminal to the jth SBS connected under the ith MBSijCPU clock speed, s, representing the SBS process request content kkWhich represents the size of the content k and,
Figure BDA0003177393540000084
indicating the number of CPU revolutions, alpha, required to process the requestcIndicating the proportion of requested data in the total data generated by a requested task.
Let λ bei、kiAnd muiRespectively considering the request arrival rate, the number of servers and the service rate on the ith MBS after the convergence condition. Likewise, λC、kCAnd muCIndicating the request arrival rate, the number of servers, and the service rate of the cloud after considering the convergence situation. Therefore, the total time delay of the request k satisfied at the ith MBS and the cloud is:
Figure BDA0003177393540000085
Figure BDA0003177393540000086
wherein li,upAnd li,downFor the uplink and downlink transmission rate between the ith MBS and the jth SBS connected under the MBS lC,upAnd lC,downFor uplink and downlink transmission rate, v, between MBS and cloudiAnd vcFor the i MBS and the CPU clock rate of the cloud,
Figure BDA0003177393540000091
and
Figure BDA0003177393540000092
the response delays that content request k satisfies on the ith MBS and the cloud, respectively.
For MBS-direct users:
Based on the above analysis, the energy consumed when the mth user sends request k to the ith MBS is:

$$E^{m}_{i,m,k} = P^{m}_{i,m,k} \cdot T^{m}_{i,m,k} \quad (10)$$

where $P^{m}_{i,m,k}$ is the power drawn by the user while accessing request k at the ith MBS and $T^{m}_{i,m,k}$ is the corresponding delay. The delay can be expanded as:

$$T^{m}_{i,m,k} = x_{i,k}\, T^{MBS}_{i,k} + (1 - x_{i,k})\, x_{ij,k}\, T^{SBS}_{ij,k} + (1 - x_{i,k})(1 - x_{ij,k})\, T^{C}_{k} \quad (11)$$

where $T^{MBS}_{i,k}$ denotes the delay incurred when request k is satisfied at the ith MBS, $T^{SBS}_{ij,k}$ the delay incurred when request k is satisfied at the jth SBS connected under the ith MBS, and $T^{C}_{k}$ the delay incurred when request k is satisfied at the cloud. $x_{ij,k}$ and $x_{i,k}$ are the Boolean variables indicating whether request k is satisfied at the jth SBS connected under the ith MBS and at the ith MBS, respectively: if content k is cached at the jth SBS connected under the ith MBS, $x_{ij,k}$ equals 1, otherwise 0; likewise, if content k is cached at the ith MBS, $x_{i,k}$ equals 1, otherwise 0.
In a mobile network, the energy consumption of the base stations can be written as the sum of the MBS and SBS energy consumption, where each base station's energy consumption is the product of its power consumption and the running time. The total base station energy consumption $E_{BS}$ can be expressed as:

$$E_{BS} = T_s \left( \sum_{i=1}^{N_M} \sum_{j=1}^{N_{i,s}} P_{ij} + \sum_{i=1}^{N_M} P_i \right) \quad (12)$$

where $T_s$ is the running time of the system, and $P_{ij}$ and $P_i$ are respectively the power consumption of the jth SBS connected under the ith MBS and of the ith MBS.
SBS energy consumption model:
The total power consumption of the jth SBS connected under the ith MBS can be expressed as the sum of its conventional power consumption and its caching power consumption:

$$P_{ij} = P^{tr}_{ij} + P^{ca}_{ij} \quad (13)$$

where $P^{tr}_{ij}$ and $P^{ca}_{ij}$ respectively represent the conventional and caching power consumption of the jth SBS connected under the ith MBS. The conventional power consumption can be expressed as:

$$P^{tr}_{ij} = P_{0,ij} + \Delta P_{ij} \sum_{k=1}^{F} \frac{d_{ij,k}}{B_{ij} \log_2(1 + \gamma_{ij})} \quad (14)$$

where $\gamma_{ij}$ and $B_{ij}$ respectively represent the signal-to-noise ratio and maximum transmission rate of the jth SBS connected under the ith MBS, $d_{ij,k}$ denotes the amount of requested data for request k at that SBS, and $P_{0,ij}$ and $\Delta P_{ij}$ respectively represent the static power consumption of the jth SBS connected under the ith MBS and a slope parameter characterizing the influence of base-station load on power consumption.

The caching power consumption $P^{ca}_{ij}$ comprises two parts, cache retrieval power and content caching power, related respectively to the amount of data requested by users and to the size of the cached content. The caching power consumption of the jth SBS connected under the ith MBS can therefore be defined as:

$$P^{ca}_{ij} = \sum_{k=1}^{F} \lambda_{ij,k}\, p^{re}_{k} + w_{ca} \sum_{k=1}^{F} x_{ij,k}\, s_k \quad (15)$$

where $p^{re}_{k}$ represents the average retrieval power consumption of request k in the cache, and $w_{ca}$ represents a power efficiency parameter that depends on the storage hardware technology.
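The SBS power model above (conventional power = static part plus a load-dependent part; cache power = retrieval plus storage parts) can be sketched numerically; a hedged illustration in which the function name and all parameter values are hypothetical:

```python
def sbs_power(p_static, delta_p, load, retrieval_terms, w_ca, cached_sizes):
    """Total SBS power: conventional (static + load-dependent) plus cache
    (retrieval + storage) power.
    retrieval_terms: list of (arrival_rate, per-request retrieval power).
    cached_sizes:    sizes of contents currently cached at this SBS."""
    p_conventional = p_static + delta_p * load            # static + load slope
    p_retrieval = sum(lam * p_ret for lam, p_ret in retrieval_terms)
    p_storage = w_ca * sum(cached_sizes)                  # storage-hardware cost
    return p_conventional + p_retrieval + p_storage
```

The storage term grows with every cached content, which is the trade-off the caching strategies of the invention must balance against the transmission savings of a cache hit.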
MBS energy consumption model:
Similarly, the total power consumption of the ith MBS can be expressed as:

$$P_i = P^{tr}_{i} + P^{ca}_{i} \quad (16)$$

where $P^{tr}_{i}$ and $P^{ca}_{i}$ respectively represent the conventional and caching power consumption of the ith MBS. As for the SBS, they can be expanded as:

$$P^{tr}_{i} = P_{0,i} + \Delta P_{i} \sum_{k=1}^{F} \frac{d_{i,k}}{B_{i} \log_2(1 + \gamma_{i})} \quad (17)$$

$$P^{ca}_{i} = \sum_{k=1}^{F} \lambda_{i,k}\, p^{re}_{k} + w_{ca} \sum_{k=1}^{F} x_{i,k}\, s_k \quad (18)$$

where $\gamma_i$ and $B_i$ respectively represent the signal-to-noise ratio and maximum transmission rate of the ith MBS, $d_{i,k}$ denotes the amount of requested data for request k at the ith MBS, $P_{0,i}$ and $\Delta P_i$ respectively represent the static power consumption of the ith MBS and a slope parameter characterizing the influence of its load on power consumption, $p^{re}_{k}$ represents the average retrieval power consumption of request k in the cache, and $w_{ca}$ is the power efficiency parameter.
The energy consumption of the cloud, denoted P_c, is composed of the static energy consumption P_s of the cloud and the energy consumption of satisfying the requests that cannot be handled by the MBS:

P_c = P_s + Σ_k d_c^k · p_k^re

where p_k^re represents the average retrieval power consumption of request k and d_c^k denotes the amount of data of request k served by the cloud.
The MBS is connected with the SBS, and the MBS is connected with the cloud, through wired links. As illustrated in fig. 2, the total energy consumption of the wired links can be expressed as:

E_l = T_s · ( Σ_i Σ_j Σ_k P_ij,k^l + Σ_i Σ_k P_i,k^l )

where P_ij,k^l denotes the power consumption of transmitting request k over the link between the ith MBS and the jth SBS connected under it, P_i,k^l denotes the power consumption of transmitting request k over the link from the ith MBS to the cloud, and T_s represents the run time of the system.
The minimum energy consumption problem is formulated as a cloud-edge cooperative resource allocation model for the content delivery service. The model is then analyzed theoretically, and a method for obtaining the optimal solution is given. The total energy consumption of the system is the sum of the link energy consumption and the base-station energy consumption for processing and transmitting requests in the network, and the energy efficiency model can be expressed as:

min P_total = Σ_i ( P_i + Σ_j P_ij ) + P_c + P_l

s.t.

C1: Σ_k x_ij^k · s_k ≤ C_ij, ∀ i, j

C2: Σ_k x_i^k · s_k ≤ C_i, ∀ i

C3: ρ_ij ≤ 1, ∀ i, j

C4: ρ_i ≤ 1, ∀ i

C5: ρ_c ≤ 1

C6: x_ij^k, x_i^k ∈ {0, 1}

C7: α_c ∈ (0, 1)

where P_l denotes the total power consumption of the wired links.
where C1 and C2 are the maximum buffer-size constraints of the SBS and the MBS: x_ij^k and x_i^k respectively indicate whether request k hits the cache of the jth SBS connected under the ith MBS and that of the ith MBS, with 1 for a hit and 0 for a miss; s_k denotes the requested file size; C_ij denotes the buffer space of the jth SBS connected under the ith MBS, and C_i denotes the buffer space of the ith MBS. C3 to C5 are usage constraints of the base stations and the cloud, restricting their utilization to be no more than 1. C6 states that the Boolean variables associated with the cache decision can only be 0 or 1, and C7 limits the ratio α_c of the request data to the total flow generated by the requesting task to between 0 and 1.
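A minimal sketch of checking a candidate cache decision against the buffer-size constraint (C1 or C2) and the Boolean constraint (C6); the data layout and names are assumptions for illustration:

```python
# Sketch: feasibility check of a cache decision for one base station.
# cache_decision maps content id -> 0/1 (cache or not), file_sizes maps
# content id -> file size s_k, capacity is the buffer space C_ij or C_i.

def feasible(cache_decision, file_sizes, capacity):
    # C6: cache-decision variables must be Boolean (0 or 1).
    if any(x not in (0, 1) for x in cache_decision.values()):
        return False
    # C1 / C2: total cached size must fit within the buffer space.
    used = sum(file_sizes[f] for f, x in cache_decision.items() if x == 1)
    return used <= capacity

sizes = {"a": 4, "b": 3, "c": 5}
print(feasible({"a": 1, "b": 1, "c": 0}, sizes, 8))  # True  (4 + 3 <= 8)
print(feasible({"a": 1, "b": 1, "c": 1}, sizes, 8))  # False (12 > 8)
```

The utilization constraints C3 to C5 would be checked analogously once the load model fixes each ρ.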
The designed model is solved based on four strategies to optimize the system performance. The four strategies are: a non-cache scheme without considering aggregation, a non-cache scheme considering aggregation, an offline caching scheme, and an online caching scheme.

Non-cache scheme without considering aggregation:

A non-cache scheme means that neither the MBS nor the SBS has caching capability, so the requests of mobile users can only be transmitted to the cloud for processing. Aggregation means that identical requests arriving in the queue of a base station within one time slot are processed as a single request. In the non-cache scheme without considering aggregation, the base stations have no caching capability and identical requests are not aggregated during transmission, so every request must be carried over the links to the cloud.
Non-cache scheme considering aggregation:

The non-cache scheme considering aggregation accounts for the aggregation of identical requests during transmission while the base stations still lack caching capability. If identical requests enter the queue of a base station within one time slot, they are counted as a single request and forwarded to the next node for processing. Compared with the non-cache scheme without considering aggregation, this strategy reduces the response delay, the processing delay, and the load at the base station, and thus reduces energy consumption.
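The aggregation step described above, merging identical requests that arrive within one time slot, can be sketched as follows (names are illustrative):

```python
# Sketch: aggregate identical content requests arriving in one time slot at a
# base-station queue. Only one request per distinct content id is forwarded
# upstream; the per-content count is kept so responses can fan back out.

from collections import Counter

def aggregate(slot_requests):
    """Return (unique requests to forward upstream, count per content id)."""
    counts = Counter(slot_requests)          # preserves first-arrival order
    return list(counts), counts

forwarded, counts = aggregate(["f1", "f2", "f1", "f3", "f1"])
print(forwarded)       # ['f1', 'f2', 'f3']
print(counts["f1"])    # 3
```

Five arriving requests are thus reduced to three upstream transmissions, which is the source of the energy saving attributed to aggregation.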
Offline caching scheme:

The offline caching scheme, also called the optimal caching scheme, stores the most popular contents in the cache of each base station according to the popularity ranking; the cached contents are never replaced, regardless of whether a request hits.

Because the theoretically optimal contents are stored at the base stations, the hit probability of users at the bottom-layer base stations increases, the transmission delay drops significantly, and the theoretically optimal solution of the model can be obtained. The optimal solution of the hierarchical energy consumption problem proposed herein, based on the content popularity distribution characteristics, can therefore be derived; it provides the benchmark against which the online caching scheme in the heterogeneous network achieves near-optimal results.
Assuming that the popularity of the network content obeys a Zipf distribution, the base stations cache network content according to the ranking of content popularity. Thus, under optimal caching, the utilization of the jth SBS connected under the ith MBS and the utilization of the ith MBS can be written in closed form (expressions omitted here), where α_ij and α_i are the skewness factors of the content popularity of the jth SBS connected under the ith MBS and of the ith MBS, respectively. As the skewness factor increases, a larger share of the requests arriving at the base station is for popular content. According to the rewritten utilization expressions, the minimum delay is obtained under optimal caching.
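The Zipf assumption can be illustrated by the hit probability of an offline cache that holds the most popular items, as a function of the skewness factor; this is a sketch with illustrative names, not the patent's closed-form utilization expressions:

```python
# Sketch: Zipf content popularity and the hit probability of an offline cache
# that stores the cache_slots most popular of n_contents items. The parameter
# alpha plays the role of the skewness factors described above.

def zipf_hit_prob(n_contents, cache_slots, alpha):
    weights = [r ** -alpha for r in range(1, n_contents + 1)]
    return sum(weights[:cache_slots]) / sum(weights)

# Higher skewness concentrates requests on popular content, so the hit
# probability of the same cache grows with alpha.
print(zipf_hit_prob(1000, 100, 1.2) > zipf_hit_prob(1000, 100, 0.8))  # True
```

This monotonicity is what drives the simulation trend noted later: the larger the Zipf skewness parameter, the more requests are served directly from the edge caches.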
Online caching scheme:

The online caching scheme is implemented with the Least Recently Used (LRU) algorithm, which replaces the content that has gone unused for the longest time in the cache with the newly arriving request, the latter becoming the most recently stored entry; this replacement is shown in fig. 4. The approach ensures that the cache always holds the most recently requested contents. It is a cache replacement method that reflects real-life scenarios: content requested recently is likely to be requested again soon, while content that has not been requested for a long time has a low probability of being requested again.
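The LRU replacement just described can be sketched with a standard ordered-dictionary implementation (illustrative, not the patent's code):

```python
# Sketch of LRU cache replacement: on a miss with a full cache, the least
# recently used content is evicted and the new content becomes the most
# recently used entry.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # oldest entry first, newest last

    def request(self, content_id):
        """Return True on a cache hit; on a miss, insert and evict the LRU item."""
        if content_id in self.store:
            self.store.move_to_end(content_id)   # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        self.store[content_id] = True
        return False

cache = LRUCache(2)
print(cache.request("a"))   # False (miss)
print(cache.request("b"))   # False (miss)
print(cache.request("a"))   # True  (hit; "a" becomes most recent)
cache.request("c")           # miss; evicts "b", the least recently used
print(cache.request("b"))   # False (miss; "b" was evicted)
```

Requesting "a" before "c" arrives is what keeps "a" in the cache while "b" is evicted, matching the replacement behavior described above.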
The performance of the method of the present invention is then analyzed and compared with reference to simulation results.

According to the system model established herein, the network is a three-layer topology: the uppermost layer is the cloud, the middle layer is the MBS layer, and the bottom layer is the SBS layer. The number of SBSs is set to 4, the number of MBSs to 2, and the number of clouds to 1. Requests are transmitted between connected nodes over the connecting links; uplink transmissions are computation requests with small files, while the content requested by users is transmitted on the downlink with large files. Users send requests to the MBS and SBS layers at a ratio of 3:1. In this topology, the mobile user terminals have no caching capability, while the SBS, the MBS, and the cloud have computing and caching capabilities.
To demonstrate the system performance of the invention, and taking as reference the case where no edge cache is deployed in the wireless network, the invention compares the non-cache scheme without considering aggregation, the non-cache scheme considering aggregation, the online caching scheme, and the offline caching scheme, and obtains the optimal solution of the system power consumption model by discussing the system performance under each. The characteristics of the four strategies are shown in fig. 5.
Regarding the network energy consumption of the different strategies under different cache sizes, macroscopic trend analysis shows that the non-cache scheme without considering aggregation performs worst, the non-cache scheme considering aggregation is second worst, and the offline caching scheme performs best. The energy consumption of the non-cache scheme without considering aggregation remains unchanged because no cache is deployed in the access network. The non-cache scheme considering aggregation consumes less energy than the one without, because identical content requests are merged at the base stations. As the cache size increases, more popular content is cached at the network edge, narrowing the performance gap between offline caching and online caching. With further increases of the cache size, the energy efficiency of the base stations converges to a steady state in the model, since caching ever less popular content has a diminishing effect on reducing energy consumption.
Regarding the network energy consumption of the different strategies under different content popularity: as content popularity skew increases, the non-cache scheme without considering aggregation is unaffected, since every request must still be routed to the cloud to obtain the corresponding content. The performance of the non-cache scheme considering aggregation improves as content popularity increases, because more identical content requests are aggregated in the base-station queues. The energy consumption of the offline and online caching schemes decreases rapidly: the larger the Zipf skewness parameter, the more the mobile users request popular content, so most requests are satisfied directly from the caches in the SBS and the MBS.
Regarding the network energy consumption of the different strategies under different request arrival rates: the energy efficiency of all four schemes tends to decrease, because larger queuing delays increase energy consumption. The performance gap between offline caching and the other schemes widens as the request arrival rate increases, since popular content is always cached at the network edge under offline caching. The gap between the non-cache schemes with and without aggregation also widens, because the effect of request aggregation grows with the arrival rate.
Regarding the network energy consumption of the different strategies under different numbers of content types: energy consumption increases with the number of distinct contents, since the mobile users request more unpopular content, more requests go unsatisfied at the cache-limited base stations, and the corresponding content must be obtained from the cloud. The performance gap between offline and online caching shrinks, because with more unpopular requests the cache hit rate on hot content decreases for both. Meanwhile, the energy consumption of the non-cache scheme considering aggregation grows slowly, since the growing diversity of content has a limited impact on request aggregation compared with its impact on in-network caching.
To improve network energy efficiency and content transmission, the invention provides a novel energy-saving hierarchical cooperation scheme that considers the joint allocation of caching, computing, and communication resources across a hierarchical heterogeneous network of mobile user terminals, small base stations, macro base stations, and the cloud. The energy consumption problem is formulated as a centralized model, and the optimal solution is analyzed according to the distribution characteristics of content popularity at the base stations. Finally, the designed joint energy consumption model is analyzed by simulation under the various strategies, and the problem of achieving minimum energy consumption under the combined influence of the factors affecting energy consumption is discussed. Simulation results show that the performance of the proposed model is clearly superior to existing cloud-edge collaborative solutions that do not consider cache resource deployment, providing a feasible answer to the high energy consumption brought about by dense base-station deployment and the growth in the number of users and in communication data volume.
The energy efficiency problem is formulated as a queuing-based centralized model in the cloud-edge cooperative network, in which in-network caching is considered and identical content requests can be aggregated in the queue of each base station. The allocation of the 3C (caching, computing, communication) resources is jointly optimized while guaranteeing the quality of service of content delivery. The invention minimizes energy consumption while ensuring service quality, and has broad research and development prospects. An energy efficiency model is established to solve the energy consumption problem in the context of a heterogeneous network. Under the cooperation of the SBS and MBS base stations, the proposed scheme deploys in-network caches rationally, reduces redundant transmission of network content through request aggregation and the cooperative allocation of 3C resources, and improves the content transmission efficiency, resource utilization, and network energy efficiency of the heterogeneous network; it contributes positively to saving national energy and expenditure, accords with the green energy-saving concept, and has broad development prospects.
Fig. 6 is a schematic diagram of a network content transmission apparatus facing cloud edge collaboration provided in an embodiment of the present invention, and as shown in fig. 6, the network content transmission apparatus facing cloud edge collaboration provided in the embodiment of the present invention includes:
the network model building module 610 is configured to analyze and build a network model according to SBS and MBS connection conditions in the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module 620 is used for analyzing and building energy consumption models of all parts of the system based on the network model and building an energy consumption model of the system based on the energy consumption models of all parts of the system;
and the system energy consumption optimization module 630 is configured to solve the system energy consumption model based on a cache policy, and optimize the network system based on a solution result to reduce energy consumption.
Fig. 7 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 7, the electronic device may include: a processor 710, a communications interface 720, a memory 730, and a communication bus 740, wherein the processor 710, the communications interface 720, and the memory 730 communicate with each other via the communication bus 740. The processor 710 may invoke logic instructions in the memory 730 to perform a cloud-edge-collaboration-oriented network content transmission method comprising: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing an energy consumption model of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In addition, the logic instructions in the memory 730 can be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the cloud edge collaboration-oriented network content transmission method provided by the above methods, the method including: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to perform the cloud edge collaboration-oriented network content transmission method provided in the foregoing, the method including: analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station; analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system; and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A network content transmission method facing cloud edge collaboration is characterized by comprising the following steps:
analyzing and constructing a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
analyzing and constructing energy consumption models of all parts of the system based on the network model, and constructing energy consumption models of the system based on the energy consumption models of all parts of the system;
and solving the system energy consumption model based on a cache strategy, and optimizing the network system based on a solving result to reduce energy consumption.
2. The method for transmitting network contents facing cloud-edge collaboration as claimed in claim 1, wherein the network model comprises a cloud, an MBS, an SBS and a mobile user terminal, the MBS and the SBS have caching and computing capabilities, and the cloud stores contents requested by the mobile user terminal.
3. The cloud-edge-collaboration-oriented network content transmission method according to claim 1, wherein the system part energy consumption model comprises: the energy consumption model of the mobile user terminal, the energy consumption model of the base station, the energy consumption model of the cloud and the energy consumption model of the link.
4. The cloud-edge-collaboration-oriented network content transmission method as claimed in claim 3, wherein the mobile user terminal energy consumption model comprises energy consumption of the mobile user terminal directly connecting the SBS and the mobile user terminal directly connecting the MBS;
the base station energy consumption model comprises MBS energy consumption and SBS energy consumption;
the cloud energy consumption model comprises static energy consumption of a cloud end and energy consumption generated by meeting a request which cannot be processed by MBS;
the link energy consumption model comprises energy consumption between the MBS and the SBS and between the MBS and the cloud.
5. The cloud-edge-collaboration-oriented network content transmission method as claimed in claim 4, wherein the system energy consumption model comprises link energy consumption and base station energy consumption for processing and transmitting requests in the network.
6. The cloud-edge-collaboration-oriented network content transmission method according to claim 1, wherein the caching policy comprises: a non-caching strategy without considering aggregation, a non-caching strategy considering aggregation, an offline caching strategy, and an online caching strategy.
7. The cloud-edge-collaboration-oriented network content transmission method as claimed in claim 6, wherein the aggregation refers to processing identical requests within one time slot as a single request, and the caching refers to caching appropriate content at the MBS and the SBS.
8. A network content transmission apparatus facing cloud edge collaboration, comprising:
the network model building module is used for analyzing and building a network model according to SBS and MBS connection conditions under the heterogeneous network and a mode of accessing a user to a base station;
the energy consumption model building module is used for analyzing and building energy consumption models of all parts of the system based on the network model and building energy consumption models of the system based on the energy consumption models of all parts of the system;
and the system energy consumption optimization module is used for solving the system energy consumption model based on the cache strategy and optimizing the network system based on the solution result so as to reduce energy consumption.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the cloud edge collaboration oriented network content transmission method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the cloud edge collaboration oriented network content transmission method according to any one of claims 1 to 7.
CN202110836601.6A 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium Active CN113709853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110836601.6A CN113709853B (en) 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium


Publications (2)

Publication Number Publication Date
CN113709853A true CN113709853A (en) 2021-11-26
CN113709853B CN113709853B (en) 2022-11-15

Family

ID=78650361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110836601.6A Active CN113709853B (en) 2021-07-23 2021-07-23 Network content transmission method and device oriented to cloud edge collaboration and storage medium

Country Status (1)

Country Link
CN (1) CN113709853B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134382A (en) * 2022-06-06 2022-09-30 北京航空航天大学 Airport transport capacity flexible scheduling method based on cloud edge cooperation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140101312A1 (en) * 2012-10-09 2014-04-10 Transpacific Ip Management Group Ltd. Access allocation in heterogeneous networks
CN107659946A (en) * 2017-09-19 2018-02-02 北京工业大学 A kind of mobile communications network model building method based on edge cache
CN111124439A (en) * 2019-12-16 2020-05-08 华侨大学 Intelligent dynamic unloading algorithm with cloud edge cooperation
CN111885648A (en) * 2020-07-22 2020-11-03 北京工业大学 Energy-efficient network content distribution mechanism construction method based on edge cache
CN112020103A (en) * 2020-08-06 2020-12-01 暨南大学 Content cache deployment method in mobile edge cloud
AU2020103384A4 (en) * 2020-11-11 2021-01-28 Beijing University Of Technology Method for Constructing Energy-efficient Network Content Distribution Mechanism Based on Edge Intelligent Caches

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHAO FANG: "An Edge Cache-Based Content Delivery Scheme in Green Wireless Networks", 《2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM)》 *
CHAO FANG等: "An Edge Cache-based Power-Efficient Content Delivery Scheme in Mobile Wireless Networks", 《2019 19TH INTERNATIONAL SYMPOSIUM ON COMMUNICATIONS AND INFORMATION TECHNOLOGIES (ISCIT)》 *
HENGLIANG TANG等: "Optimal multilevel media stream caching in cloud-edge environment", 《THE JOURNAL OF SUPERCOMPUTING》 *
张天魁等: "信息中心网络缓存技术研究综述", 《北京邮电大学学报》 *

Also Published As

Publication number Publication date
CN113709853B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN111930436B (en) Random task queuing unloading optimization method based on edge calculation
CN112995950B (en) Resource joint allocation method based on deep reinforcement learning in Internet of vehicles
CN111475274B (en) Cloud collaborative multi-task scheduling method and device
Li et al. Distributed task offloading strategy to low load base stations in mobile edge computing environment
WO2023108718A1 (en) Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network
CN113810931B (en) Self-adaptive video caching method for mobile edge computing network
Zheng et al. 5G network-oriented hierarchical distributed cloud computing system resource optimization scheduling and allocation
Li et al. An optimized content caching strategy for video stream in edge-cloud environment
CN116963182A (en) Time delay optimal task unloading method and device, electronic equipment and storage medium
Yang et al. Optimal task scheduling in communication-constrained mobile edge computing systems for wireless virtual reality
CN113709853B (en) Network content transmission method and device oriented to cloud edge collaboration and storage medium
Li et al. DQN-enabled content caching and quantum ant colony-based computation offloading in MEC
Fang et al. Drl-driven joint task offloading and resource allocation for energy-efficient content delivery in cloud-edge cooperation networks
Sang et al. GCS: Collaborative video cache management strategy in multi-access edge computing
Fang et al. AI-driven energy-efficient content task offloading in cloud-edge-end cooperation networks
Peng et al. Value‐aware cache replacement in edge networks for Internet of Things
CN113766540B (en) Low-delay network content transmission method, device, electronic equipment and medium
Zhu et al. Computing offloading decision based on multi-objective immune algorithm in mobile edge computing scenario
Ren et al. Hierarchical resource distribution network based on mobile edge computing
Fang et al. Offloading strategy for edge computing tasks based on cache mechanism
Fang et al. Q-learning based delay-aware content delivery in cloud-edge cooperation networks
Tang et al. Optimal multilevel media stream caching in cloud-edge environment
CN108429919B (en) Caching and transmission optimization method of multi-rate video in wireless network
CN115051999B (en) Energy consumption optimal task unloading method, device and system based on cloud edge cooperation
Fang et al. Energy-Efficient Hierarchical Collaborative Scheme for Content Delivery in Mobile Edge Computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant