CN115720237A - Caching and resource scheduling method for edge network self-adaptive bit rate video

Info

Publication number: CN115720237A
Application number: CN202211421390.0A
Authority: CN (China)
Prior art keywords: video, user, bit rate, edge server, result
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张幸林, 田嘉琪
Current assignee: South China University of Technology (SCUT)
Original assignee: South China University of Technology (SCUT)
Priority/filing date: 2022-11-14
Publication date: 2023-02-28
Application filed by South China University of Technology (SCUT), priority to CN202211421390.0A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a caching and resource scheduling method for adaptive bitrate video in an edge network. The method optimizes user QoE under resource constraints, takes multiple fine-grained factors of the video service process into account to ensure the reliability of the scheme, and formulates the joint video caching and video processing problem as a nonlinear integer programming problem. The original problem is decomposed into two sub-problems: a cache placement problem in the caching phase and a resource scheduling problem in the delivery phase. In the caching phase, the nonlinear integer program is simplified and solved with an optimization solver, a rounding strategy then yields the final result, and segments at suitable bitrate levels are selected for caching, which ensures the feasibility of the scheme. In the delivery phase, user requests are scheduled according to a minimum-delay rule, and the original problem is solved to allocate bandwidth and power to users and transmit data, which ensures the operability of the scheme.

Description

Caching and resource scheduling method for edge network self-adaptive bit rate video
Technical Field
The invention relates to the technical field of edge computing and video services, and in particular to a caching and resource scheduling method for adaptive bitrate video in an edge network.
Background
In modern times, with the emergence of new mobile applications, technologies for mobile devices such as smartphones have also developed rapidly. However, when a computation-intensive program runs on a mobile device whose storage, performance and energy supply are severely limited, its resource consumption and computation delay seriously degrade the user experience. A common solution is to offload such resource-hungry applications to the cloud. This, however, introduces new problems: it adds considerable delay for computation and communication, and it generates heavy backhaul traffic on the core network, causing congestion. Solving these problems is key to realizing the 5G vision of connecting everything. Multi-access Edge Computing (MEC) has been proposed as one of the solutions: resources are deployed and users are served at the network edge, for example by performing part of the computation there, which reduces service delay and thus improves the quality of user experience.
Today, the development of mobile applications and the rise of video software of all kinds, such as short-video applications (e.g. Douyin and Bilibili) and video communication, have made video traffic a large portion of network traffic. According to Cisco's data, video traffic will account for 82% of network traffic by 2022. While providing abundant video services, these applications also increase the traffic burden on the core network and the service delay, which inevitably affects user experience.
To address this key issue, multi-access edge computing (MEC) has become a viable technology that can use edge servers near the user for video caching and transmission. However, recent research on video caching in edge networks mainly considers the popularity of videos and users' preferences for whole videos; few studies consider user behavior and users' preferences for different parts of a video, which in fact have a substantial impact on caching efficiency. Meanwhile, because the computational and wireless resources of an MEC network are limited, designing an efficient video caching and resource scheduling scheme that combines MEC with DASH technology to fully utilize the MEC network remains challenging and urgent. The present invention therefore combines these fine-grained factors to propose a new adaptive bitrate video caching and resource scheduling scheme.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art and provides a caching and resource scheduling method for edge network adaptive bitrate video that optimizes user QoE under resource constraints. The original problem is decomposed into two sub-problems: a cache placement problem in the caching phase and a resource scheduling problem in the delivery phase. In the caching phase, the nonlinear integer program is simplified and solved with an optimization solver, a rounding strategy then yields the final result, and segments at suitable bitrate levels are selected for caching, which ensures the feasibility of the scheme. In the delivery phase, user requests are scheduled according to a minimum-delay rule, and the original problem is solved to allocate bandwidth and power to users and transmit data, which ensures the operability of the scheme.
To achieve this purpose, the technical solution provided by the invention is as follows. The caching and resource scheduling method for edge network adaptive bitrate video comprises the following steps:
1) Acquiring information: in the offline stage, obtain and analyze the current popularity of each video and each user's preference for video segments from the video play counts and the users' play records;
2) Video service modeling: model the video service in the edge network according to the current video popularity and the users' preferences for video segments, establish a mathematical model that takes the quality of user experience as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and express it as a nonlinear integer programming problem;
3) Video caching: simplify the nonlinear integer programming problem through variable relaxation, solve it with an optimization solver, obtain the final result with a rounding strategy, and have the edge servers cache video segments at multiple bitrates according to this result;
4) Service delivery: at each moment, schedule each user request to a serviceable base station according to a minimum-delay rule; the edge server allocates bandwidth and power to the requesting users by solving the nonlinear integer programming problem; finally, according to the caching result and the resource allocation result, the edge server serves the user request, i.e. transmits the video data.
Further, in step 1), the current popularity of each video and each user's preference for video segments are obtained and analyzed as follows:
Popularity: the edge server uses the ratio of a video's accumulated play count to the total play count as its popularity Pr_v, the popularity of the v-th video, where v ∈ {1,2,...,V} and V is the number of videos; in most cases the play count of a popular video grows faster than that of an unpopular one. In some cases the play count of a video jumps suddenly, or a video has just been released and its current play count does not yet match its growth; this is only a short-term phenomenon, and after some time the growth stabilizes and again matches the play count.
User preference: the preference of user n for the v-th video, p^n_v, is obtained from the user's history as the proportion of that video's play count to the user's total play count over a period of time, and the preference for video segment v_k is denoted p^n_{v_k}, where v_k is the k-th segment of the v-th video, n ∈ {1,2,...,N}, k ∈ {1,2,...,K}, N is the number of users, and K is the number of segments per video.
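As an illustration only, the play-count ratios described above can be computed with a short Python sketch; the data layout and function names below are assumptions made for the example, not part of the patent.

```python
from collections import defaultdict

def popularity(play_counts):
    """play_counts[v] = accumulated play count of video v; returns Pr_v per video."""
    total = sum(play_counts.values())
    return {v: c / total for v, c in play_counts.items()} if total else {}

def user_preference(history):
    """history: list of (video_id, segment_id) plays of one user over a period.
    Returns per-video and per-segment preference as play-count proportions."""
    video_plays = defaultdict(int)
    segment_plays = defaultdict(int)
    for v, k in history:
        video_plays[v] += 1
        segment_plays[(v, k)] += 1
    total = len(history)
    pref_video = {v: c / total for v, c in video_plays.items()}
    pref_segment = {vk: c / total for vk, c in segment_plays.items()}
    return pref_video, pref_segment
```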
Further, the step 2) comprises:
Confirming model variables: the binary variable x^m_{v_k,l} indicates whether the k-th segment of the v-th video is cached at the l-th bitrate on the m-th edge server, where x^m_{v_k,l} = 0 means it is not and x^m_{v_k,l} = 1 means it is, l ∈ {1,2,...,L}, m ∈ {1,2,...,M}, L is the number of bitrate levels and M is the number of edge servers. The combination of popularity and preference is used as the request probability q^n_{v_k,l}, the probability that user n requests the k-th segment of the v-th video at the l-th bitrate level. The identifier y^m_{v_k,l} indicates whether a transcoding operation is performed: transcoding is performed if and only if the m-th edge server does not cache video segment v_k at the l-th bitrate but does cache it at a higher bitrate, where x^m_{v_k,i} indicates whether the k-th segment of the v-th video is cached on the m-th edge server at the i-th bitrate, i = l + 1.
Modeling the user experience quality: the quality of experience of each user n is computed from the average video quality AVQ_n, the video bitrate switching VS_n and the rebuffering time RT_n. AVQ_n is obtained by dividing the total quality of the video segments the user has watched (the sum of the bitrates BR_l of the requested segments) by the number of watched segments K_n, where BR_l is the l-th bitrate. VS_n is obtained by accumulating the absolute value of the bitrate difference of the historically requested video segments, i.e. the sum of |BR_l - BR_{l'}| over adjacent segments, where BR_{l'} is the l'-th bitrate and l' ∈ {1,2,...,L}. RT_n is determined by the amount of data currently held in the buffer of the user equipment, BF_n(t), measured in seconds: when the buffered data is exhausted, i.e. BF_n(t) = 0, the time from the playback stall until the video resumes counts as rebuffering, and these intervals are accumulated as the rebuffering time over the user's viewing session. The three terms are combined linearly into the user experience quality Q = AVQ_n - α·VS_n - β·RT_n, where α > 0 and β > 0 are weight parameters.
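For illustration, a minimal Python sketch of this QoE computation could look as follows; the function name and the default weight values are placeholders, not values specified by the patent.

```python
def qoe(bitrates_watched, rebuffer_intervals, alpha=1.0, beta=1.0):
    """bitrates_watched: BR_l values of the segments played, in order.
    rebuffer_intervals: durations (seconds) of the playback stalls."""
    if not bitrates_watched:
        return 0.0
    avq = sum(bitrates_watched) / len(bitrates_watched)        # average video quality AVQ_n
    vs = sum(abs(b2 - b1) for b1, b2 in
             zip(bitrates_watched, bitrates_watched[1:]))       # bitrate switching VS_n
    rt = sum(rebuffer_intervals)                                # rebuffering time RT_n
    return avq - alpha * vs - beta * rt                         # Q = AVQ_n - α·VS_n - β·RT_n
```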
Constraint modeling: the bandwidth constraint requires that the sum of the bandwidths B_n allocated to the users does not exceed the total bandwidth B, i.e. Σ_{n∈N} B_n ≤ B; likewise for power, the sum of the per-user powers P_n cannot exceed the power threshold P, i.e. Σ_{n∈N} P_n ≤ P; the amount of data cached by the m-th edge server cannot exceed its storage capacity CH_m, i.e. the sum of x^m_{v_k,l}·S_l over all cached segments is at most CH_m; and during service the computing resources used for transcoding cannot exceed the limit PC_m, where S_l is the data size of a video segment at the l-th bitrate.
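Purely as an illustration of these constraints, a feasibility check for one edge server could be sketched as follows; the argument layout is an assumption made for the example.

```python
def feasible(user_bandwidth, user_power, cached_sizes, transcode_load,
             B, P, CH_m, PC_m):
    """user_bandwidth/user_power: per-user B_n and P_n; cached_sizes: sizes S_l of the
    segments cached on the server; transcode_load: computing resources used for transcoding."""
    return (sum(user_bandwidth) <= B and          # Σ B_n <= B
            sum(user_power) <= P and              # Σ P_n <= P
            sum(cached_sizes) <= CH_m and         # cached data <= storage capacity
            transcode_load <= PC_m)               # transcoding load <= compute limit
```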
Energy consumption modeling: the energy consumption E_m of the m-th edge server covers the whole service process and consists of the caching energy E^ca_m, the transcoding energy E^tc_m and the transmission energy E^tr_m. The caching energy E^ca_m is proportional to the amount of data currently being cached. The transcoding energy E^tc_m is related to the video data size S_l: thanks to improved computing capability, so-called transcoding can convert a high-bitrate video into a low-bitrate video before delivering it to the user, in essence decoding the video and re-encoding it; according to the hardware configuration of the device, the transcoding energy of a video segment is computed from the CPU performance using the energy-consumption factor ω_tc, the video data size S_l, the transcoding intensity cp and the CPU frequency f_cpu. The transmission energy E^tr_m depends on the video data size and the network conditions, E^tr_m = P_n·S_l/R_n, where R_n is the transmission rate of user n and P_n is the power allocated to user n.
Further, in step 3), the nonlinear integer programming problem is simplified and solved with an optimization solver, and a rounding strategy then yields the final result. The solved cache placement result indicates whether a given video segment should be cached and at which bitrate; video segments at the indicated bitrates are then obtained from the data center and cached on the edge server. The details are as follows:
Obtaining a feasible solution: first, based on the available information combining popularity and user preference, a greedy algorithm selects the video segments with the highest preference in turn from all video segments and sets their identifiers to 1, i.e. x^m_{v_k,l} = 1, recording the size of the placed data, until the remaining space of the edge server is insufficient to hold the next video segment; the resulting array XL is a feasible solution.
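The greedy construction of the feasible solution XL can be sketched as follows; the segment-tuple layout is hypothetical, and the patent only specifies that segments are taken in decreasing preference until the next one no longer fits.

```python
def greedy_feasible_placement(segments, capacity):
    """segments: iterable of (video_id, segment_k, bitrate_level, size, preference).
    capacity: storage capacity CH_m of one edge server.
    Returns the set of (video_id, segment_k, bitrate_level) whose cache variable is 1."""
    placed, used = set(), 0.0
    for v, k, level, size, _pref in sorted(segments, key=lambda s: s[4], reverse=True):
        if used + size > capacity:        # stop once the next segment no longer fits
            break
        placed.add((v, k, level))
        used += size
    return placed
```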
Obtaining a current optimal solution: taking the feasible solution XL as the cache placement result, the bandwidth B_n and power P_n allocated to each user n at the current moment are computed; at the same time, based on the obtained feasible solution XL, the value range of the identifiers x^m_{v_k,l} in the original problem is relaxed from {0,1} to [0,1], and each identifier x^m_{v_k,l} is then computed by solving the original problem with an SLSQP optimizer, so that the user experience quality of the overall caching result is optimal and the constraints on bandwidth, power, cache space, computing resources and energy consumption are satisfied.
Obtaining a global optimal solution: for each edge server, bandwidth and power allocations are pre-computed from the feasible solution XL under the rule of maximizing user experience quality, the allocation result is substituted back into the original formulation, the video cache placement is computed again, and the computation of cache placement, bandwidth allocation and power allocation is iterated several times to obtain the optimal solution XU of the video cache placement.
Rounding to obtain the best feasible solution: starting from the optimal solution XU, the values of the identifiers x^m_{v_k,l} in the result are restored to 0 or 1. The process is as follows: the rounding of x^m_{v_k,l} is controlled by a parameter σ. Specifically, for each edge server and for each x^m_{v_k,l}, if x^m_{v_k,l} ≥ σ then x^m_{v_k,l} is set to 1, otherwise it is set to 0. The rounded set of cache variables is denoted X and the corresponding user experience quality value Q_X. If X exceeds the constraints on cache space and computing resources, σ is increased to σ + λ·|(Q_XU - Q_XL)/Q_XU|, where λ is the step size: when σ increases, the number of variables equal to 1 decreases, the cache space required by the video segments decreases, and the constraints can be satisfied. Conversely, if σ satisfies the constraints, the value of σ is decreased so that more video segments are cached. If Q_X ≥ Q_XL, then X is a feasible result and a better solution than XL, and the lower bound of the feasible solution is updated by setting XL ← X. The rounding process is repeated until the improvement of Q_X is less than a threshold ε. Note that Q_XL increases with the number of iterations; since λ is a constant and Q_XU is fixed, the magnitude of the change of σ gradually decreases as Q_XL grows, so the rounding process converges.
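The rounding step can be sketched in Python as below. The callbacks qoe_of and violates, the initial σ and the exact amount by which σ is decreased when the constraints hold are assumptions made for the example; the patent only fixes the increase rule σ ← σ + λ·|(Q_XU - Q_XL)/Q_XU| and the stopping threshold ε.

```python
def round_placement(xu_frac, q_xu, q_xl, qoe_of, violates,
                    sigma=0.5, lam=0.1, eps=1e-3, max_iter=100):
    """xu_frac: fractional cache variables of the relaxed optimum XU (dict -> [0,1]).
    qoe_of(X): user experience quality Q_X of a 0/1 placement X.
    violates(X): True if X breaks the cache-space or computing-resource constraints."""
    best_x, best_q = None, q_xl            # XL and Q_XL: current lower bound
    prev_q = float("-inf")
    for _ in range(max_iter):
        x = {key: 1 if val >= sigma else 0 for key, val in xu_frac.items()}
        q_x = qoe_of(x)
        step = lam * abs((q_xu - best_q) / q_xu)
        if violates(x):
            sigma += step                  # fewer ones -> less cache space needed
        else:
            sigma -= step                  # room to spare -> cache more segments
            if q_x >= best_q:
                best_x, best_q = x, q_x    # update the lower bound: XL <- X
        if abs(q_x - prev_q) < eps:        # improvement below threshold ε
            break
        prev_q = q_x
    return best_x, best_q
```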
Further, in step 4), at each moment a user request is scheduled to a serviceable base station according to the minimum-delay rule, the edge server allocates bandwidth and power to the requesting users so that the average quality of experience is maximized without exceeding the bandwidth and power constraints, and, according to the caching result and the resource allocation result, the edge server serves the user request, i.e. transmits the video data. The details are as follows:
User request scheduling: in a dense cellular network a user may be covered and served by several SBS, so the user request is redirected to an appropriate SBS in pursuit of high QoE. Specifically, given a user request, the set of SBS covering the request, and fixed bandwidth and power, the video transcoding delay tc and the transmission delay tr are computed for each SBS according to the cached results, and the SBS with the lowest delay is selected to serve the request, where tc is computed from the transcoding intensity cp and the CPU frequency f_cpu, and tr = Data_n/R_n, where Data_n is the size of the transmitted data and R_n is the user transmission rate.
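For illustration, selecting the serving SBS by minimum delay could look like the sketch below; the transcoding-delay form size·cp/f_cpu is an assumption (the text above only names cp and f_cpu), while tr = Data_n/R_n follows the text.

```python
def pick_sbs(request_size, candidates, cp):
    """candidates: iterable of (sbs_id, f_cpu, rate, needs_transcode) for the SBS
    covering the request. Returns the SBS with the lowest tc + tr and that delay."""
    best_sbs, best_delay = None, float("inf")
    for sbs_id, f_cpu, rate, needs_transcode in candidates:
        tc = request_size * cp / f_cpu if needs_transcode else 0.0   # assumed tc form
        tr = request_size / rate                                     # tr = Data_n / R_n
        if tc + tr < best_delay:
            best_sbs, best_delay = sbs_id, tc + tr
    return best_sbs, best_delay
```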
Bandwidth and power allocation: according to the request scheduling result and without exceeding the constraints, an SLSQP optimizer is used to solve the nonlinear integer programming problem that takes the user experience quality as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and the bandwidth and power currently pre-allocated to each user are computed so that the overall user experience quality is optimal.
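As an illustration of solving such an allocation with an SLSQP solver, the sketch below splits the total bandwidth B and power P across users with scipy; the objective is a simple stand-in concave utility, not the patent's QoE expression, and the variable layout is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def allocate(n_users, B, P):
    """Split the total bandwidth B and power P across n_users with SLSQP.
    The objective is a placeholder concave utility, not the patent's QoE formula."""
    x0 = np.concatenate([np.full(n_users, B / n_users),
                         np.full(n_users, P / n_users)])       # equal split as start point

    def neg_utility(x):
        bw, pw = x[:n_users], x[n_users:]
        return -np.sum(np.log1p(bw * pw))                      # stand-in for total QoE

    constraints = [
        {"type": "ineq", "fun": lambda x: B - np.sum(x[:n_users])},   # Σ B_n <= B
        {"type": "ineq", "fun": lambda x: P - np.sum(x[n_users:])},   # Σ P_n <= P
    ]
    bounds = [(0.0, None)] * (2 * n_users)
    res = minimize(neg_utility, x0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x[:n_users], res.x[n_users:]                    # pre-allocated B_n and P_n
```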
Service delivery: the user request is served, i.e. data is transmitted, according to the caching result and the resource allocation result. When a user sends a request to an edge server there are the following four cases, handled as sketched after this list:
(1) the requested video segment is cached at the requested bitrate on the edge server and can be transmitted directly; otherwise, the edge server checks whether a bitrate higher than the requested one is cached;
(2) if a higher bitrate is cached, the edge server can perform a transcoding operation and then send the transcoded version to the user;
(3) the requested segment is not cached on this edge server but on a neighbor server; the edge server stores information about neighbor servers for cooperation, and when this information changes it is exchanged by broadcast, so the edge server can obtain the video from the neighbor and decide whether to transcode;
(4) if none of the cooperating edge servers caches the video segment requested by the user, the video at the specific bitrate is obtained directly from the cloud.
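A minimal request-handling sketch of these four cases is given below; the server, neighbor and cloud objects and their helper methods (has, highest_level, fetch, transcode) are hypothetical names introduced for the example, not APIs defined by the patent.

```python
def serve_request(server, seg, level, neighbors, cloud):
    """seg: segment id, level: requested bitrate level. server/neighbors/cloud are
    assumed to expose has(), highest_level(), fetch() and transcode()."""
    if server.has(seg, level):                        # case (1): exact bitrate cached locally
        return server.fetch(seg, level)
    local_hi = server.highest_level(seg)
    if local_hi is not None and local_hi > level:     # case (2): higher bitrate cached locally
        return server.transcode(server.fetch(seg, local_hi), level)
    for nb in neighbors:                              # case (3): cooperate with neighbors
        if nb.has(seg, level):
            return nb.fetch(seg, level)
        nb_hi = nb.highest_level(seg)
        if nb_hi is not None and nb_hi > level:
            return server.transcode(nb.fetch(seg, nb_hi), level)
    return cloud.fetch(seg, level)                    # case (4): fall back to the cloud
```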
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention takes user preferences and edge caching into account at the finer granularity of the video-segment level in a dense cellular network.
2. The invention models the placement and processing of multi-bitrate video segments as a nonlinear integer programming problem, considers the influence of multiple factors, and ensures the reliability of the scheme.
3. The method adopts a divide-and-conquer strategy to decompose the original problem into a cache placement problem in the caching stage and an online request scheduling and resource allocation problem in the delivery stage, and then designs efficient algorithms based on techniques such as greedy heuristics, relaxation and rounding to solve the two problems, thereby ensuring the feasibility of the scheme.
4. The invention has been evaluated on real data sets to compare the performance of this strategy with conventional caching strategies. The results show that the strategy achieves better performance in terms of hit rate, backhaul traffic and quality of user experience.
Drawings
FIG. 1 is a diagram of a model of the present invention.
FIG. 2 is a logic flow diagram of the present invention.
Fig. 3 is a diagram of an application architecture of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the embodiments of the present invention are not limited thereto.
This embodiment discloses a caching and resource scheduling method for edge network adaptive bitrate video. Based on video streaming technology, it takes into account user behavior and users' preferences for different video segments, and models the fine-grained caching and resource scheduling problem in the edge network. We decouple the formulated problem into a caching problem and a resource scheduling problem. For the former we design two effective caching algorithms based on greedy heuristics and on relaxation-and-rounding of the nonlinear integer program; for the latter we propose an online algorithm for user request scheduling and power allocation. The invention uses this scheme to improve the quality of experience of users during the video service.
As shown in fig. 1, we consider the classical framework of video services in a dense cellular network, comprising a cloud or remote data center, small base stations (SBS) and users. M denotes the SBS set and N denotes the user set. Each SBS is attached to an edge server with storage capacity CH_m and computing resource PC_m. Each SBS serves the users within its coverage area; in practice the coverage areas of the SBS overlap to some extent so that no blind spots occur. Meanwhile, the cloud is assumed to store video segments at all bitrate levels.
The overall flow of the scheme is shown in figure 2. First, in the offline stage, the current popularity of each video and each user's preference for video segments are obtained. Based on this prior knowledge, including video popularity and user preference, and in combination with the video service model and the proposed caching scheme, video segments at multiple bitrates are cached on the servers of the edge network.
Second, in the online service phase, at each moment a user request is scheduled to a serviceable base station according to the minimum-delay rule. Based on resources such as bandwidth, the edge server allocates bandwidth and power to the requesting users so that the average quality of experience is maximized without exceeding the bandwidth and power constraints. According to the caching result and the resource allocation result, the edge server then serves the user request, i.e. transmits the video data.
When the offline stage starts again the cache is updated, i.e. the cache placement result is updated according to the current video popularity and the change in user preference, which again combines the acquisition of prior knowledge with video caching.
The above process is repeated to provide the video service to the users.
As shown in fig. 1 to fig. 3, the specific steps of the scheme for the fine-grained caching and resource scheduling problem in the edge network in this embodiment are as follows:
1) Acquiring information: in the offline stage, the current popularity of each video and each user's preference for video segments are obtained and analyzed from information such as the video play counts and the users' play records.
Popularity: the edge server uses the ratio of a video's accumulated play count to the total play count as its popularity Pr_v, the popularity of the v-th video, where v ∈ {1,2,...,V} and V is the number of videos; in most cases the play count of a popular video grows faster than that of an unpopular one. In some cases, for example when the play count of a video jumps suddenly or a video has just been released, the current play count does not match the growth; this is only a short-term phenomenon, and after some time the growth stabilizes and again matches the play count.
User preference: the preference of user n for the v-th video, p^n_v, is obtained from the user's history as the proportion of that video's play count to the user's total play count over a period of time, and the preference for video segment v_k is denoted p^n_{v_k}, where v_k is the k-th segment of the v-th video, n ∈ {1,2,...,N}, k ∈ {1,2,...,K}, N is the number of users, and K is the number of segments per video.
2) Video service modeling: the video service in the edge network is modeled according to the current video popularity and the users' preferences for video segments; a mathematical model is established that takes the quality of user experience as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and it is expressed as a nonlinear integer programming problem. The details are as follows:
Confirming model variables: the binary variable x^m_{v_k,l} indicates whether the k-th segment of the v-th video is cached at the l-th bitrate on the m-th edge server, where x^m_{v_k,l} = 0 means it is not and x^m_{v_k,l} = 1 means it is, l ∈ {1,2,...,L}, m ∈ {1,2,...,M}, L is the number of bitrate levels and M is the number of edge servers. The combination of popularity and preference is used as the request probability q^n_{v_k,l}, the probability that user n requests the k-th segment of the v-th video at the l-th bitrate level. The identifier y^m_{v_k,l} indicates whether a transcoding operation is performed: transcoding is performed if and only if the m-th edge server does not cache video segment v_k at the l-th bitrate but does cache it at a higher bitrate, where x^m_{v_k,i} indicates whether the k-th segment of the v-th video is cached on the m-th edge server at the i-th bitrate, i = l + 1.
Modeling the user experience quality: the quality of experience of each user n is computed from the average video quality AVQ_n, the video bitrate switching VS_n and the rebuffering time RT_n. AVQ_n is obtained by dividing the total quality of the video segments the user has watched (the sum of the bitrates BR_l of the requested segments) by the number of watched segments K_n, where BR_l is the l-th bitrate. VS_n is obtained by accumulating the absolute value of the bitrate difference of the historically requested video segments, i.e. the sum of |BR_l - BR_{l'}| over adjacent segments, where BR_{l'} is the l'-th bitrate and l' ∈ {1,2,...,L}. RT_n is determined by the amount of data currently held in the buffer of the user equipment, BF_n(t), measured in seconds: when the buffered data is exhausted, i.e. BF_n(t) = 0, the time from the playback stall until the video resumes counts as rebuffering, and these intervals are accumulated as the rebuffering time over the user's viewing session. The three terms are combined linearly into the user experience quality Q = AVQ_n - α·VS_n - β·RT_n, where α > 0 and β > 0 are weight parameters.
Constraint modeling: the bandwidth constraint requires that the sum of the bandwidths B_n allocated to the users does not exceed the total bandwidth B, i.e. Σ_{n∈N} B_n ≤ B; likewise for power, the sum of the per-user powers P_n cannot exceed the power threshold P, i.e. Σ_{n∈N} P_n ≤ P; the amount of data cached by the m-th edge server cannot exceed its storage capacity CH_m, i.e. the sum of x^m_{v_k,l}·S_l over all cached segments is at most CH_m; and during service the computing resources used for transcoding cannot exceed the limit PC_m, where S_l is the data size of a video segment at the l-th bitrate.
Energy consumption modeling: the energy consumption E_m covers the whole service process and consists of the caching energy E^ca_m, the transcoding energy E^tc_m and the transmission energy E^tr_m. The caching energy E^ca_m is proportional to the amount of data currently being cached. The transcoding energy E^tc_m is related to the video data size S_l: thanks to improved computing capability, so-called transcoding can convert a high-bitrate video into a low-bitrate video before delivering it to the user, in essence decoding the video and re-encoding it; according to the hardware configuration of the device, the transcoding energy of a video segment is computed from the CPU performance using the energy-consumption factor ω_tc, the video data size S_l, the transcoding intensity cp and the CPU frequency f_cpu. The transmission energy E^tr_m depends on the video data size and the network conditions, E^tr_m = P_n·S_l/R_n, where R_n is the transmission rate of user n and P_n is the power allocated to user n.
3) Video caching: according to the prior knowledge, including video popularity and user preference, video segments at multiple bitrates are cached on the servers of the edge network in combination with the proposed caching scheme. The edge server computes the optimal cache placement according to the scheme; the result indicates whether a given video segment should be cached and at which bitrate, and video segments at the indicated bitrates are then obtained from the data center and cached on the edge server. The scheme comprises the following steps.
Obtaining a feasible solution: first, based on the available information combining popularity and user preference, a greedy algorithm selects the video segments with the highest preference in turn from all video segments and sets their identifiers to 1, i.e. x^m_{v_k,l} = 1, recording the size of the placed data, until the remaining space of the edge server is insufficient to hold the next video segment; the resulting array XL is a feasible solution.
Obtaining a current optimal solution: taking the feasible solution XL as the cache placement result, the bandwidth B_n and power P_n allocated to each user n at the current moment are computed. Meanwhile, based on the obtained feasible solution XL, the value range of the identifiers x^m_{v_k,l} in the original problem is relaxed from {0,1} to [0,1], and each identifier x^m_{v_k,l} is then computed by solving the original problem with an SLSQP optimizer, so that the user experience quality of the overall caching result is optimal and the constraints on bandwidth, power, cache space, computing resources and energy consumption are satisfied.
Obtaining a global optimal solution: for each edge server, bandwidth and power allocations are pre-computed from the feasible solution XL under the rule of maximizing user experience quality, the allocation result is substituted back into the original formulation, the video cache placement is computed again, and the computation of cache placement, bandwidth allocation and power allocation is iterated several times to obtain the optimal solution XU of the video cache placement.
Rounding to obtain the best feasible solution: starting from the optimal solution XU, the values of the identifiers x^m_{v_k,l} in the result are restored to 0 or 1. The process is as follows: the rounding of x^m_{v_k,l} is controlled by a parameter σ. Specifically, for each edge server and for each x^m_{v_k,l}, if x^m_{v_k,l} ≥ σ then x^m_{v_k,l} is set to 1, otherwise it is set to 0. The rounded set of cache variables is denoted X and the corresponding user experience quality value Q_X. If X exceeds the constraints on cache space and computing resources, σ is increased to σ + λ·|(Q_XU - Q_XL)/Q_XU| (where λ is the step size): when σ increases, the number of variables equal to 1 decreases, the cache space required by the video segments decreases, and the constraints can be satisfied. Conversely, if σ satisfies the constraints, the value of σ is decreased so that more video segments are cached. In this example, if Q_X ≥ Q_XL, then X is a feasible result and a better solution than XL, and XL ← X is set to update the lower bound of the feasible solution. The rounding process is repeated until the improvement of Q_X is less than the threshold ε. Note that Q_XL increases with the number of iterations; since λ is a constant and Q_XU is fixed, the magnitude of the change of σ gradually decreases as Q_XL grows, so the rounding process converges.
4) In the online service phase, at each moment a user request is scheduled to a serviceable base station according to the minimum-delay rule. Based on resources such as bandwidth, the edge server allocates bandwidth and power to the requesting users so that the average quality of experience is maximized without exceeding the bandwidth and power constraints. According to the caching result and the resource allocation result, the edge server then serves the user request, i.e. transmits the video data. The details are as follows:
User request scheduling: in a dense cellular network a user may be covered and served by several SBS, so the user request is redirected to an appropriate SBS in pursuit of high QoE. Specifically, given a user request, the set of SBS covering the request, and fixed bandwidth and power, the video transcoding delay tc and the transmission delay tr are computed for each SBS according to the cached results, and the SBS with the lowest delay is selected to serve the request, where tc is computed from the transcoding intensity cp and the CPU frequency f_cpu, and tr = Data_n/R_n, where Data_n is the size of the transmitted data and R_n is the user transmission rate.
Bandwidth and power allocation: according to the request scheduling result and without exceeding the constraints, an SLSQP optimizer is used to solve the nonlinear integer programming problem that takes the user experience quality as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and the bandwidth and power currently pre-allocated to each user are computed so that the overall user experience quality is optimal.
Service delivery: the user request is served, i.e. data is transmitted, according to the caching result and the resource allocation result. Under this architecture there are four cases when a user sends a request to an edge server:
(1) the requested video segment is cached at the requested bitrate on the edge server and can be transmitted directly; otherwise, the edge server checks whether a bitrate higher than the requested one is cached.
(2) If a higher bitrate is cached, the edge server can perform a transcoding operation and then send the transcoded version to the user.
(3) The requested segment is not cached on this edge server but on a neighbor server. The edge server stores information about neighbor servers for cooperation; when this information changes it is exchanged by broadcast. The edge server can therefore obtain the video from the neighbor and decide whether to transcode.
(4) If none of the cooperating edge servers caches the video segment requested by the user, the video at the specific bitrate is obtained directly from the cloud.
Cache updating: the cache placement result is updated according to the current video popularity and the change in user preference, i.e. the acquisition of prior knowledge is combined again with video caching.
Cloud/data center: stores video segments at all bitrate levels, and transmits video segment data at a specific bitrate level to the base station when the edge server updates its cache or does not cache the video segment requested by the user.
Edge node/edge server: computes the pre-caching result from information such as video segment popularity and user preference, and fetches and caches data from the data center accordingly; receives user requests and computes the bandwidth and power allocation for the current users; transcodes the video and delivers it to the user when transcoding is more beneficial than fetching the data from a neighbor server or the data center; and exchanges information with neighbor servers, obtaining video data from a neighbor instead of the data center in specific scenarios.
The user: provides information such as preferences to the edge server; initiates requests for video segments at particular bitrate levels to the edge server, and plays the video and updates the buffer status after receiving the data.
In summary, the invention provides a new adaptive bitrate video caching and resource scheduling scheme that takes fine-grained factors of the cellular-network video service into account. The cloud/data center stores video segments at all bitrate levels and transmits segment data at a specific bitrate level to the base station when the edge server updates its cache or does not cache the segment requested by the user. The user provides information such as preferences to the edge server, initiates a request for a video segment at a specific bitrate level at a given moment, and plays the video after receiving the data. The edge server computes the pre-caching result from information such as video segment popularity and user preference, fetches and caches data from the data center accordingly, receives user requests, computes the bandwidth and power allocation for the current users, performs transcoding and delivers the data to the user when necessary, or exchanges information with neighbor servers and obtains video data from a neighbor instead of the data center in specific scenarios.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (5)

1. A caching and resource scheduling method for edge network self-adaptive bit rate video, characterized by comprising the following steps:
1) Acquiring information: in the offline stage, obtaining and analyzing the current popularity of each video and each user's preference for video segments from the video play counts and the users' play records;
2) Video service modeling: modeling the video service in the edge network according to the current video popularity and the users' preferences for video segments, establishing a mathematical model that takes the quality of user experience as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and expressing it as a nonlinear integer programming problem;
3) Video caching: simplifying the nonlinear integer programming problem through variable relaxation, solving it with an optimization solver, obtaining the final result with a rounding strategy, and caching video segments at multiple bitrates on the edge servers according to the result;
4) Service delivery: at each moment, scheduling each user request to a serviceable base station according to a minimum-delay rule; the edge server allocates bandwidth and power to the requesting users by solving the nonlinear integer programming problem; finally, according to the caching result and the resource allocation result, the edge server serves the user request, i.e. transmits the video data.
2. The caching and resource scheduling method for edge network self-adaptive bit rate video according to claim 1, characterized in that in step 1) the current popularity of each video and each user's preference for video segments are obtained and analyzed as follows:
popularity: the edge server uses the ratio of a video's accumulated play count to the total play count as its popularity Pr_v, the popularity of the v-th video, where v ∈ {1,2,...,V} and V is the number of videos; in most cases the play count of a popular video grows faster than that of an unpopular one; in some cases the play count of a video jumps suddenly, or a video has just been released and its current play count does not match the growth, but this is a short-term phenomenon, and after some time the growth stabilizes and matches the play count;
user preference: the preference of user n for the v-th video, p^n_v, is obtained from the user's history as the proportion of that video's play count to the user's total play count over a period of time, and the preference for video segment v_k is denoted p^n_{v_k}, where v_k is the k-th segment of the v-th video, n ∈ {1,2,...,N}, k ∈ {1,2,...,K}, N is the number of users, and K is the number of segments per video.
3. The caching and resource scheduling method for edge network self-adaptive bit rate video according to claim 2, characterized in that step 2) comprises:
confirming model variables: the binary variable x^m_{v_k,l} indicates whether the k-th segment of the v-th video is cached at the l-th bitrate on the m-th edge server, where x^m_{v_k,l} = 0 means it is not and x^m_{v_k,l} = 1 means it is, l ∈ {1,2,...,L}, m ∈ {1,2,...,M}, L is the number of bitrate levels and M is the number of edge servers; the combination of popularity and preference is used as the request probability q^n_{v_k,l}, the probability that user n requests the k-th segment of the v-th video at the l-th bitrate level; the identifier y^m_{v_k,l} indicates whether a transcoding operation is performed, a transcoding operation being performed if and only if the m-th edge server does not cache video segment v_k at the l-th bitrate but does cache it at a higher bitrate, where x^m_{v_k,i} indicates whether the k-th segment of the v-th video is cached on the m-th edge server at the i-th bitrate, i = l + 1;
modeling the user experience quality: the quality of experience of each user n is computed from the average video quality AVQ_n, the video bitrate switching VS_n and the rebuffering time RT_n; AVQ_n is obtained by dividing the total quality of the video segments the user has watched, i.e. the sum of the bitrates BR_l of the requested segments, by the number of watched segments K_n, where BR_l is the l-th bitrate; VS_n is obtained by accumulating the absolute value of the bitrate difference of the historically requested video segments, i.e. the sum of |BR_l - BR_{l'}| over adjacent segments, where BR_{l'} is the l'-th bitrate and l' ∈ {1,2,...,L}; RT_n is determined by the amount of data currently held in the buffer of the user equipment, BF_n(t), measured in seconds: when the user's buffered data is exhausted, i.e. BF_n(t) = 0, the time from the playback pause until the video resumes counts as rebuffering, and these intervals are accumulated as the rebuffering time over the user's viewing session; the three are combined linearly into the user experience quality Q = AVQ_n - α·VS_n - β·RT_n, where α > 0 and β > 0 are weight parameters;
constraint modeling: the bandwidth constraint requires that the sum of the bandwidths B_n allocated to the users does not exceed the total bandwidth B, i.e. Σ_{n∈N} B_n ≤ B; likewise for power, the sum of the per-user powers P_n cannot exceed the power threshold P, i.e. Σ_{n∈N} P_n ≤ P; the amount of data cached by the m-th edge server cannot exceed its storage capacity CH_m, i.e. the sum of x^m_{v_k,l}·S_l over all cached segments is at most CH_m; and during service the computing resources used for transcoding cannot exceed the limit PC_m, where S_l is the data size of a video segment at the l-th bitrate;
energy consumption modeling: the energy consumption E_m covers the whole service process and consists of the caching energy E^ca_m, the transcoding energy E^tc_m and the transmission energy E^tr_m; the caching energy E^ca_m is proportional to the amount of data currently being cached; the transcoding energy E^tc_m is related to the video data size S_l, since with improved computing capability so-called transcoding can convert a high-bitrate video into a low-bitrate video before delivering it to the user, in essence decoding the video and re-encoding it, and according to the hardware configuration of the device the transcoding energy of a video segment is computed from the CPU performance using the energy-consumption factor ω_tc, the video data size S_l, the transcoding intensity cp and the CPU frequency f_cpu; the transmission energy E^tr_m depends on the video data size and the network conditions, E^tr_m = P_n·S_l/R_n, where R_n is the transmission rate of user n and P_n is the power allocated to user n.
4. The caching and resource scheduling method for edge network self-adaptive bit rate video according to claim 3, characterized in that in step 3) the nonlinear integer programming problem is simplified and solved with an optimization solver, a rounding strategy then yields the final result, the solved cache placement result indicates whether a given video segment should be cached and at which bitrate, and video segments at the indicated bitrates are then obtained from the data center and cached on the edge server, the details being as follows:
obtaining a feasible solution: first, based on the available information combining popularity and user preference, a greedy algorithm selects the video segments with the highest preference in turn from all video segments and sets their identifiers to 1, i.e. x^m_{v_k,l} = 1, recording the size of the placed data, until the remaining space of the edge server is insufficient to hold the next video segment; the resulting array XL is a feasible solution;
obtaining a current optimal solution: taking the feasible solution XL as the cache placement result, the bandwidth B_n and power P_n allocated to each user n at the current moment are computed; meanwhile, based on the obtained feasible solution XL, the value range of the identifiers x^m_{v_k,l} in the original problem is relaxed from {0,1} to [0,1], and each identifier x^m_{v_k,l} is then computed by solving the original problem with an SLSQP optimizer, so that the user experience quality of the overall caching result is optimal and the constraints on bandwidth, power, cache space, computing resources and energy consumption are satisfied;
obtaining a global optimal solution: for each edge server, bandwidth and power allocations are pre-computed from the feasible solution XL under the rule of maximizing user experience quality, the allocation result is substituted back into the original formulation, the video cache placement is computed again, and the computation of cache placement, bandwidth allocation and power allocation is iterated several times to obtain the optimal solution XU of the video cache placement;
rounding to obtain the best feasible solution: starting from the optimal solution XU, the values of the identifiers x^m_{v_k,l} in the result are restored to 0 or 1 as follows: the rounding of x^m_{v_k,l} is controlled by a parameter σ; specifically, for each edge server and for each x^m_{v_k,l}, if x^m_{v_k,l} ≥ σ then x^m_{v_k,l} is set to 1, otherwise it is set to 0; the rounded set of cache variables is denoted X and the corresponding user experience quality value Q_X; if X exceeds the constraints on cache space and computing resources, σ is increased to σ + λ·|(Q_XU - Q_XL)/Q_XU|, where λ is the step size, because when σ increases the number of variables equal to 1 decreases, the cache space required by the video segments decreases, and the constraints can be satisfied; conversely, if σ satisfies the constraints, the value of σ is decreased so that more video segments are cached; if Q_X ≥ Q_XL, X is a feasible result and a better solution than XL, and XL ← X is set to update the lower bound of the feasible solution; the rounding process is repeated until the improvement of Q_X is less than the threshold ε; note that Q_XL increases with the number of iterations, and since λ is a constant and Q_XU is fixed, the magnitude of the change of σ gradually decreases as Q_XL grows, so the rounding process converges.
5. The caching and resource scheduling method for edge network self-adaptive bit rate video according to claim 4, characterized in that in step 4), at each moment a user request is scheduled to a serviceable base station according to the minimum-delay rule, the edge server allocates bandwidth and power to the requesting users so that the average quality of experience is maximized without exceeding the bandwidth and power constraints, and, according to the caching result and the resource allocation result, the edge server serves the user request, i.e. transmits the video data, as follows:
user request scheduling: in a dense cellular network a user may be covered and served by several SBS, so the user request is redirected to an appropriate SBS in pursuit of high QoE; specifically, given a user request, the set of SBS covering the request, and fixed bandwidth and power, the video transcoding delay tc and the transmission delay tr are computed for each SBS according to the cached results, and the SBS with the lowest delay is selected to serve the request, where tc is computed from the transcoding intensity cp and the CPU frequency f_cpu, and tr = Data_n/R_n, where Data_n is the size of the transmitted data and R_n is the user transmission rate;
bandwidth and power allocation: according to the request scheduling result and without exceeding the constraints, an SLSQP optimizer is used to solve the nonlinear integer programming problem that takes the user experience quality as the optimization objective and bandwidth, power, cache space, computing resources and energy consumption as constraints, and the bandwidth and power currently pre-allocated to each user are computed so that the overall user experience quality is optimal;
service delivery: the user request is served, i.e. data is transmitted, according to the caching result and the resource allocation result, and when a user sends a request to an edge server there are the following four cases:
(1) the requested video segment is cached at the requested bitrate on the edge server and can be transmitted directly; otherwise, the edge server checks whether a bitrate higher than the requested one is cached;
(2) if a higher bitrate is cached, the edge server can perform a transcoding operation and then send the transcoded version to the user;
(3) the requested segment is not cached on this edge server but on a neighbor server; the edge server stores information about neighbor servers for cooperation, and when this information changes it is exchanged by broadcast, so the edge server can obtain the video from the neighbor and decide whether to transcode;
(4) if none of the cooperating edge servers caches the video segment requested by the user, the video at the specific bitrate is obtained directly from the cloud.
CN202211421390.0A 2022-11-14 2022-11-14 Caching and resource scheduling method for edge network self-adaptive bit rate video Pending CN115720237A (en)

Priority Applications (1)

Application number: CN202211421390.0A; Priority/filing date: 2022-11-14; Title: Caching and resource scheduling method for edge network self-adaptive bit rate video

Publications (1)

Publication number: CN115720237A; Publication date: 2023-02-28

Family

ID=85255106

Country Status (1)

Country: CN; Link: CN115720237A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828226A (en) * 2023-08-28 2023-09-29 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain
CN116828226B (en) * 2023-08-28 2023-11-10 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain
CN117692338A (en) * 2024-02-01 2024-03-12 长城数字能源(西安)科技有限公司 Energy Internet of things data visualization method and system


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination