CN111447506B - Streaming media content placement method based on delay and cost balance in cloud edge environment

Streaming media content placement method based on delay and cost balance in cloud edge environment

Info

Publication number
CN111447506B
CN111447506B (application CN202010216284.3A; published as CN111447506A)
Authority
CN
China
Prior art keywords
file
delay
edge
cost
edge node
Prior art date
Legal status
Active
Application number
CN202010216284.3A
Other languages
Chinese (zh)
Other versions
CN111447506A (en)
Inventor
李春林
赵光艳
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202010216284.3A priority Critical patent/CN111447506B/en
Publication of CN111447506A publication Critical patent/CN111447506A/en
Application granted granted Critical
Publication of CN111447506B publication Critical patent/CN111447506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385Channel allocation; Bandwidth allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a streaming media content placement method based on delay and cost balance in a cloud edge environment, which comprises the following steps: dividing the streaming media content into a plurality of fixed-size segments, and ordering the edge nodes in the collaborative cache domain according to their distance from the user; initializing the delay and cost balance threshold of each streaming media file to 0, the number of coding segments in the content placement result set {c_i} to 1, and the delay and cost balance threshold list {τ_i} to empty; the cloud data center executes an active content placement algorithm under the constraint of the delay and cost balance thresholds; when the cache of an edge node is full, the edge node decides whether to replace an existing file in the cache with the requested new file so as to optimize the objective submodular function. The invention can reduce the average response delay of users and reduce the content placement cost.

Description

Streaming media content placement method based on delay and cost balance in cloud edge environment
Technical Field
The invention relates to the technical field of cloud computing and edge computing, in particular to a content placement method based on delay and cost balance in a cloud edge environment.
Background
In the big data era, cloud computing has become a major computing model for batch and stream data processing due to its powerful computing potential. However, with the exponential growth of big data, including large-scale long-term global data and small-scale short-term local data, cloud computing faces ever-increasing computing demands; according to Cisco's forecast, 507.5 ZB of data would be generated and join the Internet annually by 2020. Furthermore, the huge computing tasks on the cloud pose challenging issues with respect to cost, energy consumption and quality of service. With the wide deployment of intelligent terminal devices, the computing model is gradually moving towards the edge, closer to the user. Therefore, a cloud-edge cooperative architecture combining edge processing and centralized cloud processing has become one of the most promising trends of wireless network architectures towards the fifth generation (5G) system specification. While centralized cloud processing provides high spectral efficiency through coordinated transmission from the cloud, interference management capability is quite limited due to the potentially large delay of fronthaul link transmission and the decentralized processing of edge nodes. Modern wireless networks, including 5G systems, need to meet the wide quality-of-service requirements of mobile broadband communication and be optimized for spectrum efficiency and delay. In the cache-enabled cloud-edge cooperative architectures proposed by much recent research, edge nodes have caching capability and their cooperative transmission can be controlled from the central cloud. Such a cloud-edge cooperative caching architecture can fully exploit the key advantages of cloud data center processing and the low delay of edge caching, providing streaming media services with low-delay network transmission and strong computing power.
The content placement method in the cloud edge environment is currently an important research topic, and studying streaming media content placement in the cloud edge environment is of great significance. Selecting a proper content placement method in the cloud edge environment can reduce the content placement cost and the content transmission cost while effectively reducing the response delay experienced by users. In recent years, the problem of content placement in a cloud-edge environment has received wide attention from many scholars, and various content placement methods have been proposed. However, most cloud-edge coordination methods do not fully utilize the powerful computing power of the cloud data center and the storage capacity of the edge nodes, and the placement strategy for streaming media content is rarely considered. In view of the development trend of streaming media applications and the spread of the cloud-edge environment, it is necessary to adjust and optimize the conventional cloud-edge collaborative strategy. In addition, how to achieve the optimal trade-off between user response delay and the cost of the service provider has not been well solved: most research only optimizes the target through an active content placement algorithm and does not coordinate the active content placement strategy of the cloud data center with a passive cache replacement strategy in the edge nodes to maintain the optimal content placement objective. Therefore, traditional streaming media content placement methods in the cloud edge environment are limited in achieving the optimal balance between delay and cost.
Disclosure of Invention
The invention aims to provide a streaming media content placement method based on delay and cost balance in a cloud edge environment aiming at the defects of the prior art, and the method can reduce the average response delay of a user and reduce the content placement cost by fully utilizing edge node resources.
The technical scheme adopted by the invention is as follows: a streaming media content placement method based on delay and cost balance in a cloud edge environment is characterized by comprising the following steps:
1) Dividing the streaming media content into a plurality of fixed-size segments to form the streaming media file coding segment set {c_i};
2) Ordering the edge nodes in the collaborative cache domain according to their distance from the user, where edge node m denotes the m-th closest edge node to the user, and calculating the average file transmission rate η_m of each edge node m (a minimal code sketch of steps 1) and 2) is given after this list);
3) Initializing the delay and cost balance threshold of each streaming media file to 0, the number of coding segments in the content placement result set {c_i} to 1, and the delay and cost balance threshold list {τ_i} to empty;
4) The cloud data center executes an active content placement algorithm under the constraint of delay and cost balance thresholds;
5) Completing the initialization of content placement according to the above steps, utilizing unused backhaul bandwidth during off-peak traffic periods; when traffic is heavy, distributing the new file from the remote cloud data center to the edge node and forwarding it to the requesting user on a cache miss;
6) When the cache of an edge node is full, the edge node decides whether to replace an existing file in the cache with the requested new file so as to optimize the objective submodular function.
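As an illustration only, the following Python sketch shows how steps 1) and 2) above could look in code; the segment size, the helper names split_into_segments and order_edge_nodes, and the toy data are assumptions made for exposition and are not part of the claimed method.

# Minimal sketch of steps 1)-2): split streaming content into fixed-size
# coding segments and order edge nodes by their distance to the user.
# Segment size and all names are illustrative assumptions.
SEGMENT_SIZE = 1024 * 1024  # assumed 1 MiB per coding segment

def split_into_segments(file_bytes: bytes, segment_size: int = SEGMENT_SIZE) -> list:
    # Step 1: divide streaming content into the fixed-size segment set {c_i}.
    return [file_bytes[i:i + segment_size]
            for i in range(0, len(file_bytes), segment_size)]

def order_edge_nodes(nodes_with_distance: list) -> list:
    # Step 2: sort edge nodes so that node m is the m-th closest to the user.
    return sorted(nodes_with_distance, key=lambda node: node[1])

if __name__ == "__main__":
    content = bytes(5 * SEGMENT_SIZE + 100)                        # toy streaming file
    print(len(split_into_segments(content)), "coding segments")   # -> 6
    nodes = [("edge-A", 3.2), ("edge-B", 1.1), ("edge-C", 2.4)]
    print(order_edge_nodes(nodes))                                 # edge-B ranked first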
In the above technical solution, the step 4) specifically includes the following steps:
4.1) Initializing the content placement result set {c_i} and the set of bandwidth allocation proportions to empty, and setting the current coding segment number c_i and the bandwidth allocation proportion of each node to 0;
4.2) For each file i, obtaining the delay and cost balance threshold τ_i of the file and judging whether the current coding segment number c_i satisfies the condition c_i < τ_i; if not, executing step 4.5);
4.3) acquiring the coding segment index with the maximum content placement marginal gain;
4.4) Increasing the current coding segment number c_i by 1 and returning to step 4.2);
4.5) Adding the current coding segment number c_j to the content placement result set {c_i};
4.6) For each edge node m, calculating its bandwidth allocation proportion and the bandwidth allocation proportion of the cloud data center M+1 according to the content placement result set {c_i};
4.7) Adding the calculated bandwidth allocation proportions to the set of bandwidth allocation proportions.
In the above technical solution, the specific step of the edge node performing cache replacement in step 6) includes:
6.1) If the newly requested file r is not in the edge cache and the cache capacity of the edge node is full, executing step 6.2);
6.2) Initializing M_min to the maximum marginal gain value and initializing the index i_min of the coding segment with the minimum marginal gain in the edge cache to 0;
6.3) For each file in the content placement result set {c_i}, traversing all its coding segments: calculating the marginal gain M({c_i}, -i) of deleting the coding segment and judging whether M({c_i}, -i) < M_min; if the condition is satisfied, recording the file index i_min as the index of the current coding segment, and repeating step 6.3) until the traversal is complete;
6.4) If the marginal gain of adding the new file satisfies the corresponding condition, executing step 6.5); otherwise the algorithm ends and no cache replacement is performed;
6.5) Removing c_{i_min} from {c_i} and adding c_r to the new cache result set {c_i}.
In the above technical solution, the specific steps of calculating the delay and cost balance threshold in step 3) include:
3.1) For each file i, obtaining the total number C of coding segments of the file; if the number of coding segments c_i in the content placement result is greater than or equal to C, executing step 3.6);
3.2) If the number of coding segments c_i is less than C, calculating the load distribution {Ω_m} of each edge node and the load distribution Ω_{M+1} of the cloud data center, i.e. the proportion of file coding segments that user u obtains from edge node m and the proportion of the remaining file coding segments that user u obtains from the cloud data center M+1;
3.3) Calculating the marginal gain of adding one coding segment of file i;
3.4) Calculating the marginal gain of continuing to add one coding segment of file i;
3.5) If the condition M(S', j) - M(S, j) >= 0 or M(S, j) < 0 is met, executing step 3.6); otherwise, increasing the current number of coding segments c_i by 1 and returning to step 3.1);
3.6) Setting the delay and cost balance threshold τ to the current number of coding segments c_i;
3.7) Adding the delay and cost balance threshold τ to the delay and cost balance threshold list.
In the above technical solution, the objective submodular function for delay and cost balance in step 6) is given by the two expressions shown as equation images in the original publication, where S_i represents a set of coding segments sufficient to decode file i with |S_i| = s_i, and X_i represents the set of coding segments cached at the edge nodes with |X_i| = c_i.
In the above technical solution, the bandwidth allocation proportion of edge node m in step 4) is calculated by the formula shown as an equation image in the original publication, where η_m is the average signal transmission rate of the edge node, B is the system bandwidth, the requested files have an average length (each file having a fixed length of L bits), Ω_m and Ω_{M+1} respectively represent the average content hit rate and the average content miss rate of candidate edge nodes storing the video coding segments, β is a normalized constant of the content placement transmission cost, and λ_0 and λ_m are calculated from a further equation, also shown as an image in the original.
in the above technical solution, the signal transmission rate of each edge node in step 2) is independent of content placement and bandwidth allocation, depends on the overall traffic load and network resources, and is calculated in the following manner:
(equation image in the original)
wherein SINR is signal-to-noise ratio of transmission data, and λ and δ are edge node distribution density and user distribution density in the point-to-point network, respectively.
The traditional content placement method does not fully utilize the powerful computing capacity of the cloud data center and the storage capacity of the edge nodes, and the placement strategy for streaming media content is rarely considered. In addition, how to achieve the optimal balance between user response delay and the cost of the service provider has not been well solved: most research only optimizes the target through an active content placement algorithm and does not coordinate the active content placement strategy of the cloud data center with a passive cache replacement strategy in the edge nodes to maintain the optimal content placement objective, so traditional streaming media content placement methods in the cloud edge environment are limited in achieving the optimal balance of delay and cost. In a cloud-edge environment, the utilization of system resources, the transmission rate at which the user obtains the content, and the cost of content placement are considered key factors for providing better service to users. In the content placement process of the present invention, the streaming media content is divided into fixed-size coding segments; the average delay of content requests and the content placement and transmission costs are then modeled; with delay and cost balance as the objective, a balance threshold on the number of placed coding segments is calculated; the cloud data center executes active content placement under the constraint of this threshold, while the edge nodes execute a passive cache replacement method that works cooperatively, thereby achieving the optimal delay and cost balance. The invention provides a streaming media content placement method based on delay and cost balance, which makes the average delay for users to obtain all streaming media content coding segments and the data cost optimal.
Combining the characteristics of streaming media services with the characteristics of cloud-edge cooperative resources, the invention provides a streaming media content placement method based on delay and cost balance. The placement method is suitable for content placement in a cloud edge environment: the streaming media content is divided into coding segments for placement, content requests are modeled, an upper-limit threshold on the number of placed coding segments is calculated with delay and cost balance as the objective, the cloud data center executes active content placement under the constraint of this threshold, and the edge nodes execute a passive cache replacement method, so as to achieve the optimal delay and cost balance. The optimal placement method makes full use of system resources, shortens the average response delay for users to acquire content, and at the same time minimizes the cost of content placement.
Drawings
FIG. 1 is a flowchart of an active content placement method based on delay and cost balancing for a cloud data center in a cloud edge environment according to the present invention
FIG. 2 is a flowchart of a passive cache replacement method for an edge node based on latency and cost balancing in a cloud edge environment according to the present invention
FIG. 3 is a system architecture for content placement in a cloud-edge collaboration system
Detailed Description
The invention will be further described in detail with reference to the following drawings and specific examples, which are not intended to limit the invention, but are for clear understanding.
The streaming media content placement method based on delay and cost balance in the cloud edge environment is provided based on the content placement method in the current cloud edge environment and combined with the characteristics of streaming media service. As shown in fig. 1, the algorithm includes the following steps:
1) Dividing the streaming media content into a plurality of fixed-size segments to form the streaming media file coding segment set {c_i};
2) Ordering the edge nodes in the collaborative cache domain according to their distance from the user, where edge node m denotes the m-th closest edge node to the user, and calculating the average file transmission rate of each edge node m by the formula shown as an equation image in the original publication, which follows from the Shannon rate formula. The signal-to-noise ratio SINR of the transmitted data used in that formula is calculated as SINR = P_m/(N_0·B + x), where N_0 is the noise power spectral density constant, B is the system bandwidth, P_m is the average transmit power of edge node m, and x is the interference power constraint.
3) Initializing the delay and cost balance threshold of each streaming media file to 0, namely the upper limit of the number of placed coding segments; the number of coding segments in the content placement result set {c_i} is initialized to 1, and the delay and cost balance threshold list {τ_i} is empty.
The specific steps of the delay and cost balance threshold calculation comprise:
3.1) For each file i, obtaining the total number C of coding segments of the file; if the number of coding segments c_i in the content placement result is greater than or equal to C, executing step 3.6);
3.2) If the number of coding segments c_i is less than C, calculating the load distribution {Ω_m} of each edge node (equation image in the original), where P_{m,i} represents the proportion of coding segments of file i that user u obtains from edge node m and p_i is the probability of requesting file i, and calculating the load distribution Ω_{M+1} of the cloud data center by a further formula, also shown as an image in the original;
3.3) Calculating the marginal gain of adding one coding segment of file j (equation image in the original), where c_j represents the number of coding segments of file j cached at the edge nodes, s_i coding segments are sufficient for a user to decode streaming media file i, the resulting load change is expressed by a further equation, η_m is the average signal transmission rate of the edge node, the average backhaul delay for obtaining a file from the cloud data center server also enters the expression, B is the system bandwidth, the requested files have an average length (each file having a fixed length of L bits), Ω_m represents the average content hit rate of candidate edge nodes storing a video coding segment before the segment is added, and Ω'_m represents the average content hit rate after the segment is added.
3.4) Calculating the marginal gain of continuing to add one coding segment of file j (equation image in the original), where c_j represents the number of coding segments of file j cached at the edge nodes and c'_j the number after continuing to add one segment, s_i coding segments are sufficient for a user to decode streaming media file i, the load change is expressed by a further equation, η_m is the average signal transmission rate of the edge node, the average backhaul delay for obtaining a file from the cloud data center server also enters the expression, B is the system bandwidth, the requested files have an average length (each file having a fixed length of L bits), Ω'_m represents the average content hit rate of candidate edge nodes storing a video coding segment before the additional segment is added, and Ω''_m the average content hit rate after it is added.
3.5) If the condition M(S', j) - M(S, j) >= 0 or M(S, j) < 0 is met, executing step 3.6); otherwise, increasing the current number of coding segments c_i by 1 and returning to step 3.1);
3.6) Setting the delay and cost balance threshold τ to the current number of coding segments c_i;
3.7) Adding the delay and cost balance threshold τ to the delay and cost balance threshold list {τ_i} (a code-level sketch of this threshold search is given below).
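A minimal Python sketch of this stopping rule follows. Because the marginal-gain expressions of steps 3.3) and 3.4) are given only as equation images in the original, the gain function is passed in as a callable; only the control flow of steps 3.1)-3.7) (pseudo-code lines (6)-(16)) is illustrated, and all names are assumptions.

from typing import Callable, Dict, List

def delay_cost_threshold(total_segments: int,
                         marginal_gain: Callable[[int], float]) -> int:
    # Search for tau: the first segment count at which adding one more coding
    # segment no longer yields a strictly decreasing, positive marginal gain
    # (condition M(S', j) - M(S, j) >= 0 or M(S, j) < 0 in step 3.5).
    c = 1
    while c <= total_segments:
        gain_now = marginal_gain(c)        # M(S, j): gain of adding the c-th segment
        gain_next = marginal_gain(c + 1)   # M(S', j): gain of adding one more
        if gain_next - gain_now >= 0 or gain_now < 0:
            return c                       # tau = c_i (pseudo-code line (12))
        c += 1
    return total_segments

def build_threshold_list(files: List[str], total_segments: int,
                         gains: Dict[str, Callable[[int], float]]) -> Dict[str, int]:
    # Steps 3.6)-3.7): collect the per-file thresholds into the list {tau_i}.
    return {f: delay_cost_threshold(total_segments, gains[f]) for f in files}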
4) The cloud data center executes an active content placement algorithm under the constraint of a delay and cost balance threshold, and the specific steps comprise:
4.1) Initializing the content placement result set {c_i} and the set of bandwidth allocation proportions to empty, and setting the current coding segment number c_i and the bandwidth allocation proportion of each node to 0;
4.2) For each file i, obtaining the delay and cost balance threshold τ_i of the file and judging whether the current coding segment number c_i satisfies the condition c_i < τ_i; if not, executing step 4.5);
4.3) Obtaining the index of the coding segment with the maximum content placement marginal gain as j = argmax(M({c_i}, j)), where M({c_i}, j) represents the marginal gain of adding a coding segment of file j to the current content placement coding segment set {c_i};
4.4) Increasing the current coding segment number c_i by 1 and returning to step 4.2);
4.5) Adding the current coding segment number c_j to the content placement result set {c_i};
4.6) For each edge node m, calculating its bandwidth allocation proportion according to the content placement result set {c_i}, and calculating the bandwidth allocation proportion of the cloud data center M+1; both formulas are shown as equation images in the original publication, where η_m and η_{M+1} are respectively the average signal transmission rate of the edge node and of the cloud data center, B is the system bandwidth, the requested files have an average length (each file having a fixed length of L bits), Ω_m and Ω_{M+1} respectively represent the average content hit rate and the average content miss rate of candidate edge nodes storing the video coding segments, β is a normalized constant of the content placement transmission cost, and λ_0 and λ_m are calculated from a further equation, also shown as an image in the original;
4.7) Adding the calculated bandwidth allocation proportions to the set of bandwidth allocation proportions (a sketch of this greedy placement loop is given below).
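The greedy loop of steps 4.1)-4.5) can be sketched as below. The marginal-gain function M({c_i}, j) is again supplied by the caller, and the code interprets the loop as the standard greedy that repeatedly adds one coding segment of the file with the largest marginal gain while that file is still below its threshold τ_i; this interpretation and all names are assumptions, and the bandwidth proportions of steps 4.6)-4.7) would be evaluated afterwards from the patent's formulas.

from typing import Callable, Dict, List

def active_content_placement(files: List[str],
                             thresholds: Dict[str, int],
                             marginal_gain: Callable[[Dict[str, int], str], float]
                             ) -> Dict[str, int]:
    placement: Dict[str, int] = {f: 0 for f in files}          # step 4.1
    candidates = [f for f in files if placement[f] < thresholds[f]]
    while candidates:                                          # step 4.2: c_i < tau_i
        j = max(candidates, key=lambda f: marginal_gain(placement, f))  # step 4.3
        placement[j] += 1                                      # steps 4.4-4.5
        candidates = [f for f in files if placement[f] < thresholds[f]]
    return placement

# Steps 4.6-4.7 (not shown): compute each node's bandwidth allocation
# proportion from the resulting placement and add it to the proportion set.

if __name__ == "__main__":
    files = ["f1", "f2"]
    taus = {"f1": 3, "f2": 2}
    # toy diminishing-returns gain: the popular file f1 gains more per segment
    gain = lambda plc, f: (2.0 if f == "f1" else 1.0) / (1 + plc[f])
    print(active_content_placement(files, taus, gain))   # -> {'f1': 3, 'f2': 2}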
5) Initialization of content placement according to the above steps may be accomplished during off-peak traffic periods, such as at night, using unused backhaul bandwidth. When traffic is heavy during the day, a new file can be distributed from the remote cloud data center to the edge node and forwarded to the requesting user on a cache miss.
6) When the cache of an edge node is full, the edge node decides whether to replace an existing file in the cache with the requested new file so as to optimize the objective submodular function; the objective and its auxiliary expressions are shown as equation images in the original publication. In these expressions, S_i represents a set of coding segments sufficient to decode file i with |S_i| = s_i, X_i represents the set of coding segments cached at the edge nodes with |X_i| = c_i, c_i denotes the number of streaming media coding segments cached at the edge nodes, s_i coding segments are sufficient for a user to decode streaming media file i, τ is the delay and cost balance threshold, η_m and η_{M+1} are respectively the average signal transmission rate of the edge node and of the cloud data center, the average backhaul delay for obtaining a file from the cloud data center server and the average length of the requested file also enter the expressions, B is the system bandwidth, each file has a fixed length of L bits, and Ω_m and Ω_{M+1} respectively represent the average content hit rate and the average content miss rate of candidate edge nodes storing the video coding segments. The specific steps of the edge node for executing cache replacement comprise:
6.1) If the newly requested file r is not in the edge cache and the cache capacity of the edge node is full, executing step 6.2);
6.2) Initializing M_min to the maximum marginal gain value and initializing the index i_min of the coding segment with the minimum marginal gain in the edge cache to 0;
6.3) For each file in the content placement result set {c_i}, traversing all its coding segments: calculating the marginal gain M({c_i}, -i) of deleting the coding segment, where M({c_i}, -i) represents the marginal gain of deleting a coding segment of file i from the current content placement coding segment set {c_i}, and judging whether M({c_i}, -i) < M_min; if the condition is satisfied, recording the file index i_min as the index of the current coding segment, and repeating step 6.3) until the traversal is complete;
6.4) If the marginal gain of adding the new file satisfies the corresponding condition, executing step 6.5); otherwise the algorithm ends and no cache replacement is performed;
6.5) Removing c_{i_min} from the content placement result set {c_i} and adding the coding segment c_r of the newly requested file to the new cache result set {c_i} (a code-level sketch of this replacement rule is given below).
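A minimal Python sketch of this replacement decision (pseudo-code lines (30)-(42)) follows; the deletion-gain and addition-gain functions stand in for the patent's marginal-gain expressions, which are not reproduced, and all names are assumptions.

from typing import Callable, List, Optional

def cache_replacement(cached_segments: List[str],
                      new_segment: str,
                      deletion_gain: Callable[[str], float],   # M({c_i}, -i)
                      addition_gain: Callable[[str], float],   # M({c_i}, r)
                      cache_full: bool) -> List[str]:
    # Step 6.1: only consider replacement on a miss with a full cache.
    if new_segment in cached_segments or not cache_full:
        return cached_segments
    # Steps 6.2-6.3: find the cached segment whose deletion loses the least gain.
    m_min = float("inf")
    i_min: Optional[str] = None
    for seg in cached_segments:
        g = deletion_gain(seg)
        if g < m_min:
            m_min, i_min = g, seg
    # Steps 6.4-6.5: replace only if the new segment contributes more than the
    # least valuable cached segment; otherwise keep the cache unchanged.
    if i_min is not None and addition_gain(new_segment) > m_min:
        return [s for s in cached_segments if s != i_min] + [new_segment]
    return cached_segments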
The study procedure of the present invention is detailed below:
before the streaming media content is placed in the cloud edge environment, the characteristics of the streaming media content need to be analyzed, so that the storage capacity of the edge nodes is reasonably utilized, the average response delay of users is reduced, the content placement cost is controlled, and the task execution energy consumption is reduced. The problem of placing streaming media content in a cloud-edge environment has been studied by scholars, but few studies at present consider controlling the content placement cost while reducing the average response delay of a user for obtaining the content. The common streaming media content placement method in the cloud edge environment mainly aims at minimizing delay, analyzes content requests and content transmission models, and calculates content placement results, but the designs do not fully utilize the powerful computing power of a cloud data center and the storage capacity of edge nodes, and the characteristics of streaming media content are rarely considered. In view of the trend of streaming media application development and the popularization process of the cloud-edge environment, it is necessary to adjust and optimize the conventional cloud-edge collaborative strategy. In addition, how to achieve the optimal tradeoff between the user response delay and the cost of the service provider is not well solved, because most of research only optimizes the delay target through an active content placement algorithm, and does not cooperate the active content placement strategy of the cloud data center with a passive cache replacement strategy in the edge node to maintain the overall target of the delay and cost balance of content placement to be optimal, the traditional streaming media content placement method under the cloud edge environment is limited in achieving the optimal balance of delay and cost.
The model of the streaming media content placement method based on delay and cost balance in the cloud edge environment provided by the invention comprises two parts: (1) an active content placement algorithm is deployed in the cloud data center and actively distributes content to the edge nodes based on system attributes and content information; according to this algorithm, an optimal balance between cost and delay is achieved by maximizing the marginal gain of the submodular function, and its placement model is shown in fig. 1. (2) During off-peak traffic periods, such as nighttime, the initialization of content placement may be accomplished according to the active content placement, utilizing unused backhaul bandwidth. When traffic is heavy during the day, a new file can be distributed from the remote cloud data center to the edge node and forwarded to the requesting user on a cache miss. When the cache of the edge node is full, the edge node decides whether to replace an existing file in the cache with the requested new file, so as to optimize the delay and cost balance objective and to ensure that the cached placement result set always has the maximum marginal gain. The placement model is shown in fig. 2.
Related parameter definitions in the placement method
(1) File popularity distribution P: the invention considers a cloud-edge collaborative caching model with a cloud data center and edge nodes, which serves a group of users through a shared wireless channel. The edge nodes are linked to the cloud processor through a fronthaul link, and the cloud data center can store and access a database containing all content data. This patent assumes that the capacity of the edge wireless channel is fixed and that each edge node is equipped with a limited-size cache. Let the set of video stream files be I = {1, 2, ..., I}. Video popularity generally follows a Zipf distribution, so this patent defines the file popularity distribution as P and the probability of requesting file j as p_j; they satisfy conditions (1) and (2) below, respectively (a small numerical example follows the formulas). Here γ represents the popularity skew: the larger γ is, the more the requests are concentrated on the first few highly popular contents; the smaller γ is, the more the request probability is dispersed over all contents.
P = {p_1, p_2, ..., p_i, ..., p_I},  Σ_{i=1}^{I} p_i = 1    (1)
p_j = (1/j^γ) / Σ_{i=1}^{I} (1/i^γ)    (2)
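Under the standard Zipf form written out in conditions (1)-(2), the request probabilities can be computed as in the following short example; the file count and γ value are arbitrary toy inputs used only for illustration.

def zipf_popularity(num_files: int, gamma: float) -> list:
    # p_j proportional to 1 / j**gamma, normalized so the probabilities sum to 1.
    weights = [1.0 / (j ** gamma) for j in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    p = zipf_popularity(num_files=5, gamma=0.8)
    print([round(x, 3) for x in p])   # most mass on the first, most popular files
    print(round(sum(p), 6))           # -> 1.0, i.e. condition (1) holds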
(2) Transmission rate R_m at which a user obtains a file from edge node m: when a file i has been cached in one or more edge nodes, the user can download this file from the relevant edge node, the transfer rate being given by equation (3) (equation image in the original), where SINR is the signal-to-noise ratio of the transmitted data, B is the system bandwidth, N_m is the number of users served by edge node m, and the proportion of bandwidth allocated to edge node m also enters the expression.
(3) Average file transfer delay for all users downloading video files from the edge: the transfer delay is given by equation (4) (equation image in the original). In it, E_h{R_m} denotes the expected value of the transfer rate R_m with respect to the random variable N_m, the allocated bandwidth proportion and the channel coefficients; the average transmission distance between edge node m and the users it serves, the average backhaul delay for obtaining a file from the cloud data center server, and the average length of the requested file also enter the expression. Each file has a fixed length of L bits, and Ω_m and Ω_{M+1} respectively represent the average content hit rate and the average content miss rate of candidate edge nodes storing the video coding segments.
(4) Signal-to-noise ratio SINR: N_0 is the noise power spectral density constant, B is the system bandwidth, P_m is the average transmit power of edge node m, and x is the interference power constraint. The SINR of the transmitted data, used in the Shannon rate formula, is given by formula (5) (a small numerical example follows):
SINR = P_m/(N_0·B + x)    (5)
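For concreteness, formula (5) can be evaluated as below; the Shannon-style rate B·log2(1 + SINR) added after it is the textbook expression and is included only as an assumption, since the patent's own rate formulas (3)-(4) are shown as equation images. The numeric inputs are arbitrary toy values.

import math

def sinr(power_watts: float, noise_psd: float, bandwidth_hz: float,
         interference: float) -> float:
    # Formula (5): SINR = P_m / (N_0 * B + x); no path-loss model is applied here.
    return power_watts / (noise_psd * bandwidth_hz + interference)

def shannon_rate(bandwidth_hz: float, snr: float) -> float:
    # Textbook Shannon rate in bit/s (assumed form, not the patent's equation).
    return bandwidth_hz * math.log2(1.0 + snr)

if __name__ == "__main__":
    s = sinr(power_watts=1e-10,        # toy received-signal power in watts
             noise_psd=4e-21,          # roughly -174 dBm/Hz thermal noise
             bandwidth_hz=20e6, interference=1e-13)
    print(f"SINR = {s:.1f}, rate = {shannon_rate(20e6, s) / 1e6:.1f} Mbit/s")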
(5) Storage cost of content placement: the storage cost of content placement is given by equation (6) (equation image in the original), where μ is a normalized constant of the resource usage cost, c_i denotes the number of streaming media coding segments cached at the edge nodes, s_i coding segments are sufficient for a user to decode streaming media file i, and each file has a fixed length of L bits.
(6) Transmission cost of content placement: the transmission cost of content placement is given by equation (7) (equation image in the original), where β is a normalized constant of the content placement transmission cost, c_i denotes the number of streaming media coding segments cached at the edge nodes, s_i coding segments are sufficient for a user to decode streaming media file i, and each file has a fixed length of L bits (an illustrative cost sketch follows).
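Equations (6) and (7) themselves are not reproduced in this text. Purely to illustrate the structure described above - costs that grow with the number of cached coding segments and are scaled by the normalization constants μ and β - one plausible linear form is sketched below; it is an assumption for exposition, not the patent's exact formula.

def storage_cost(cached: dict, segments_needed: dict, file_bits: float,
                 mu: float) -> float:
    # Assumed form: mu times the total number of cached coded bits, where a
    # file needing s_i segments of an L-bit file stores c_i * L / s_i bits.
    return mu * sum(c * file_bits / segments_needed[f] for f, c in cached.items())

def transmission_cost(cached: dict, segments_needed: dict, file_bits: float,
                      beta: float) -> float:
    # Assumed form: beta times the bits pushed from the cloud to the edge caches.
    return beta * sum(c * file_bits / segments_needed[f] for f, c in cached.items())

if __name__ == "__main__":
    cached = {"f1": 3, "f2": 1}          # c_i cached coding segments per file
    needed = {"f1": 4, "f2": 4}          # s_i segments suffice to decode a file
    print(storage_cost(cached, needed, file_bits=8e6, mu=1e-9))
    print(transmission_cost(cached, needed, file_bits=8e6, beta=2e-9))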
According to the streaming media content placement method based on delay and cost balance in the cloud edge environment, a content request is first modeled according to content and system information; then, with the user average response delay represented by formula (4) and the content placement cost represented by formulas (6) and (7) as the objectives, the problem is converted into a submodular function optimization problem. A content placement result with optimal marginal gain is then searched for by the content placement algorithm based on delay and cost balance. The method is described in detail as follows:
1) Dividing the streaming media content into a plurality of fixed-size segments to form the streaming media file coding segment set {c_i};
2) Ordering the edge nodes in the collaborative cache domain according to their distance from the user, where edge node m denotes the m-th closest edge node to the user, and calculating the average file transmission rate η_m of each edge node m;
3) Calculating a delay and cost balance threshold value of each file i;
4) the cloud data center executes an active content placement algorithm under the constraint of delay and cost balance thresholds;
5) When traffic is heavy during the day and a cache miss occurs while the cache of the edge node is full, the edge node executes cache replacement;
Pseudo-code description of the placement method
(1) Acquire the video file set I, the file popularity distribution {p_i}, the file length s_i, the edge node cache capacity C, the edge node density σ, the user density ξ, the number of edge nodes M, and the average backhaul delay
(2) for all edge nodes m
(3) Calculate the average signal transmission rate η_m of the edge node
(4) end for
(5) Initialize the threshold τ, the coding segment count {c_i} to 1, and the delay and cost balance threshold list {τ_i} to empty
(6) for each streaming media file i
(7) while c_i <= C
(8) Calculate the edge node load distribution {Ω_m}
(9) Calculate the marginal gain M(S, j) of adding a coding segment
(10) Calculate the marginal gain M(S', j) of continuing to add a coding segment
(11) if (M(S', j) - M(S, j) >= 0 || M(S, j) < 0)
(12) τ = c_i; break
(13) else c_i++
(14) end if
(15) end while
(16) Add τ_i to the delay and cost balance threshold list {τ_i}
(17) end for
(18) end for
(19) [pseudo-code line shown as an equation image in the original publication]
(20) for 0 <= i <= I
(21) while c_i < τ_i do
(22) j = argmax(M({c_i}, j))
(23) c_i++
(24) end while
(25) Add c_j to the content placement result {c_i}
(26) for 0 <= m <= M
(27) Calculate the bandwidth allocation proportion of edge node m
(28) Add it to the bandwidth allocation result set
(29) end for
(30) if the newly requested file r is not in the edge cache and the cache of the edge node is full
(31) M_min = maximum marginal gain value, int i_min = 0
(32) for each {c_i}
(33) if (M({c_i}, -i) < M_min)
(34) i_min = i
(35) end if
(36) if (M({c_i}, r) > M_min)
(37) Remove c_{i_min} from {c_i}
(38) Add c_r to the new cache result set {c_i}
(39) end if
(40) end for
(41) Return the new cache result set {c'_i}
(42) end if
From the pseudo-code description of the algorithm: line 1 acquires the content size, popularity, and system information; lines 2 to 4 calculate the average signal transmission rate η_m of each edge node; lines 5 to 18 calculate the upper limit of the number of coding segments for delay and cost balance by computing the marginal gain of adding coding segments; lines 19 to 25 gradually add the coding segments that achieve the largest marginal gain to the content placement result set under the constraint of the delay and cost balance thresholds; lines 26 to 29 calculate the bandwidth proportion that each edge node should be allocated based on the content placement results; lines 30 to 42 search the cache for the segment with the smallest marginal gain, compare it with the newly requested file, and decide whether to replace the cached coding segment with the coding segment of the newly requested file. By ensuring that the marginal gain is maximized, the average delay and cost of the entire content placement are minimized.
Details not described in this specification are well known to those skilled in the art.

Claims (6)

1. A streaming media content placement method based on delay and cost balance in a cloud edge environment is characterized by comprising the following steps:
1) dividing the streaming media content into a plurality of fixed-size segments to form the streaming media file coding segment set {c_i};
2) ordering the edge nodes in the collaborative cache domain according to their distance from the user, wherein edge node m denotes the m-th closest edge node to the user, and calculating the average file transmission rate η_m of each edge node m;
3) initializing the delay and cost balance threshold of each streaming media file to 0, the number of coding segments in the content placement result set {c_i} to 1, and the delay and cost balance threshold list {T_i} to empty;
4) determining the bandwidth allocation proportion through a threshold on the current number of coding segments, and executing, by the cloud data center, an active content placement algorithm under the constraint of the delay and cost balance threshold;
5) completing the initialization of content placement according to the above steps, utilizing unused backhaul bandwidth during off-peak traffic periods; when traffic is heavy, distributing the new file from the remote cloud data center to the edge node and forwarding it to the requesting user on a cache miss;
6) when the cache of the edge node is full, traversing the coding segments in the edge cache by their marginal gain values to decide whether to replace an existing file in the cache with the requested new file, so as to optimize the objective submodular function.
2. The method for placing streaming media content based on delay and cost balance in cloud edge environment according to claim 1, wherein the step 4) specifically comprises the following steps:
4.1) initializing the content placement result set {c_i} and the set of bandwidth allocation proportions to empty, and setting the current coding segment number c_i and the bandwidth allocation proportion of each node to 0;
4.2) for each file i, obtaining the delay and cost balance threshold T_i of the file and judging whether the current coding segment number c_i satisfies the condition c_i < T_i; if not, executing step 4.5); if yes, executing step 4.3);
4.3) acquiring the coding segment index with the maximum content placement marginal gain;
4.4) increasing the current coding segment number c_i by 1 and returning to step 4.2);
4.5) adding the current coding segment number c_j to the content placement result set {c_i};
4.6) for each edge node m, calculating its bandwidth allocation proportion and the bandwidth allocation proportion of the cloud data center M+1 according to the content placement result set {c_i};
4.7) adding the calculated bandwidth allocation proportions to the set of bandwidth allocation proportions.
3. The method for placing streaming media content based on delay and cost balance in cloud edge environment according to claim 2, wherein the specific step of performing cache replacement by the edge node in step 6) includes:
6.1) if the newly requested file r is not in the edge cache and the cache capacity of the edge node is full, executing step 6.2);
6.2) initializing M_min to the maximum marginal gain value and initializing the index i_min of the coding segment with the minimum marginal gain in the edge cache to 0;
6.3) for each file in the content placement result set {c_i}, traversing all its coding segments: calculating the marginal gain M({c_i}, -i) of deleting the coding segment and judging whether M({c_i}, -i) < M_min; if the condition is satisfied, recording the file index i_min as the index of the current coding segment and repeating step 6.3); otherwise, executing step 6.4);
6.4) if the marginal gain of adding the new file satisfies the corresponding condition, executing step 6.5); otherwise, ending the algorithm without executing cache replacement;
6.5) removing c_{i_min} from {c_i} and adding c_r to the new cache result set {c_i}.
4. The method for placing streaming media content based on delay and cost balance in cloud edge environment according to claim 3, wherein the specific steps of calculating the delay and cost balance threshold in step 3) comprise:
3.1) for each file i, obtaining the total number C of coding segments of the file; if the number of coding segments c_i in the content placement result is greater than or equal to C, executing step 3.6); otherwise, executing step 3.2);
3.2) if the number of coding segments c_i is less than C, calculating the load distribution {Ω_m} of each edge node and the load distribution Ω_{M+1} of the cloud data center, i.e. the proportion of file coding segments that user u obtains from edge node m and the proportion of the remaining file coding segments that user u obtains from the cloud data center M+1;
3.3) calculating the marginal gain of adding one coding segment of file i;
3.4) calculating the marginal gain of continuing to add one coding segment of file i;
3.5) if the condition M(S', j) - M(S, j) >= 0 or M(S, j) < 0 is met, executing step 3.6); otherwise, increasing the current number of coding segments c_i by 1 and returning to step 3.1);
3.6) setting the delay and cost balance threshold T to the current number of coding segments c_i;
3.7) adding the delay and cost balance threshold T to the delay and cost balance threshold list.
5. The streaming media content placement method based on delay and cost balance in a cloud edge environment according to claim 4, characterized in that: the bandwidth allocation proportion of edge node m in step 4) is calculated by the formula shown as an equation image in the original publication, wherein η_m is the average signal transmission rate of the edge node, B is the system bandwidth, the requested files have an average length (each file having a fixed length of L bits), Ω_m and Ω_{M+1} respectively represent the average content hit rate and the average content miss rate of candidate edge nodes storing the video coding segments, β is a normalized constant of the content placement transmission cost, and λ_0 and λ_m are calculated from a further equation, also shown as an image in the original.
6. the method for placing streaming media content based on delay and cost balance in cloud edge environment according to claim 1, wherein: the signal transmission rate of each edge node in the step 2) is independent of content placement and bandwidth allocation, depends on the whole traffic load and network resources, and is calculated in the following way:
(equation image in the original)
wherein SINR is signal-to-noise ratio of transmission data, and λ and δ are edge node distribution density and user distribution density in the point-to-point network, respectively.
CN202010216284.3A 2020-03-25 2020-03-25 Streaming media content placement method based on delay and cost balance in cloud edge environment Active CN111447506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010216284.3A CN111447506B (en) 2020-03-25 2020-03-25 Streaming media content placement method based on delay and cost balance in cloud edge environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010216284.3A CN111447506B (en) 2020-03-25 2020-03-25 Streaming media content placement method based on delay and cost balance in cloud edge environment

Publications (2)

Publication Number Publication Date
CN111447506A CN111447506A (en) 2020-07-24
CN111447506B true CN111447506B (en) 2021-10-15

Family

ID=71652573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010216284.3A Active CN111447506B (en) 2020-03-25 2020-03-25 Streaming media content placement method based on delay and cost balance in cloud edge environment

Country Status (1)

Country Link
CN (1) CN111447506B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070859B (en) * 2021-11-29 2023-09-01 重庆邮电大学 Edge cloud cache cooperation method, device and system based on boundary cost benefit model
CN114679438B (en) * 2022-03-03 2024-04-30 上海艾策通讯科技股份有限公司 Streaming media data transmission method, device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979274A (en) * 2016-05-06 2016-09-28 上海交通大学 Distributive cache storage method for dynamic self-adaptive video streaming media
CN110012106A (en) * 2019-04-15 2019-07-12 北京邮电大学 A kind of coordination caching method, apparatus and system based on edge calculations
CN110730471A (en) * 2019-10-25 2020-01-24 重庆邮电大学 Mobile edge caching method based on regional user interest matching
US10567462B2 (en) * 2013-07-16 2020-02-18 Bitmovin Gmbh Apparatus and method for cloud assisted adaptive streaming

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10313470B2 (en) * 2016-09-06 2019-06-04 Integrated Device Technology, Inc. Hierarchical caching and analytics
US11395020B2 (en) * 2016-09-08 2022-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Bitrate control in a virtual reality (VR) environment
US10735778B2 (en) * 2018-08-23 2020-08-04 At&T Intellectual Property I, L.P. Proxy assisted panoramic video streaming at mobile edge

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10567462B2 (en) * 2013-07-16 2020-02-18 Bitmovin Gmbh Apparatus and method for cloud assisted adaptive streaming
CN105979274A (en) * 2016-05-06 2016-09-28 上海交通大学 Distributive cache storage method for dynamic self-adaptive video streaming media
CN110012106A (en) * 2019-04-15 2019-07-12 北京邮电大学 A kind of coordination caching method, apparatus and system based on edge calculations
CN110730471A (en) * 2019-10-25 2020-01-24 重庆邮电大学 Mobile edge caching method based on regional user interest matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Checko, "Cloud RAN for Mobile Networks - A Technology Overview", IEEE Communications Surveys & Tutorials, 2015-03-31, full text. *
Ahlehagh, "Video caching in radio access network: Impact on delay and capacity", 2012 IEEE 9th Wireless Communications and Networking Conference, 2012-11-09, full text. *

Also Published As

Publication number Publication date
CN111447506A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN112218337B (en) Cache strategy decision method in mobile edge calculation
CN111930436A (en) Random task queuing and unloading optimization method based on edge calculation
CN111935784A (en) Content caching method based on federal learning in fog computing network
CN111552564A (en) Task unloading and resource optimization method based on edge cache
CN111447506B (en) Streaming media content placement method based on delay and cost balance in cloud edge environment
CN113282786B (en) Panoramic video edge collaborative cache replacement method based on deep reinforcement learning
CN111491331B (en) Network perception self-adaptive caching method based on transfer learning in fog computing network
CN111432270B (en) Real-time service delay optimization method based on layered cache
CN113691598B (en) Cooperative caching method for satellite-ground converged network
CN115665804B (en) Cache optimization method for cooperative unmanned aerial vehicle-intelligent vehicle cluster
CN113810931B (en) Self-adaptive video caching method for mobile edge computing network
CN115344395B (en) Heterogeneous task generalization-oriented edge cache scheduling and task unloading method and system
CN110913239B (en) Video cache updating method for refined mobile edge calculation
CN113993168B (en) Collaborative caching method based on multi-agent reinforcement learning in fog wireless access network
CN112887314B (en) Time delay perception cloud and mist cooperative video distribution method
CN112702443B (en) Multi-satellite multi-level cache allocation method and device for satellite-ground cooperative communication system
CN113766540B (en) Low-delay network content transmission method, device, electronic equipment and medium
CN115720237A (en) Caching and resource scheduling method for edge network self-adaptive bit rate video
CN112954026B (en) Multi-constraint content cooperative cache optimization method based on edge calculation
CN113709853B (en) Network content transmission method and device oriented to cloud edge collaboration and storage medium
CN108429919B (en) Caching and transmission optimization method of multi-rate video in wireless network
CN109729510B (en) D2D content secure distribution method and system based on Stencoberg game
Tirupathi et al. HybridCache: AI-assisted cloud-RAN caching with reduced in-network content redundancy
CN114786137B (en) Cache-enabled multi-quality video distribution method
Dai et al. Joint resource optimization for adaptive multimedia services in MEC-based vehicular networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant