CN116056156A - MEC-assisted collaborative caching system supporting adaptive bitrate video - Google Patents

MEC-assisted collaborative caching system supporting adaptive bitrate video

Info

Publication number
CN116056156A
Authority
CN
China
Prior art keywords
video, executing, cache, edge, user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211570420.4A
Other languages
Chinese (zh)
Inventor
何豪佳
郭松涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202211570420.4A priority Critical patent/CN116056156A/en
Publication of CN116056156A publication Critical patent/CN116056156A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/10: Flow control between communication endpoints
    • H04W28/14: Flow control between communication endpoints using intermediate storage
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses an MEC-assisted collaborative caching system supporting adaptive bitrate video, which comprises edge servers, small base stations, a source content server and user equipment. Each edge server is deployed together with a small base station to form an edge node, and a plurality of edge nodes cover the same area at the same time. According to the requests of the user equipment, the source content server maximizes the user request hit benefit in the system and provides video content to the user equipment, the video content being available in a plurality of bitrate versions. The system comprehensively considers video popularity, version popularity and the collaboration among the edge servers, thereby maximizing the request hit benefit of the users in the caching system.

Description

MEC-assisted collaborative caching system supporting adaptive bitrate video
Technical Field
The invention belongs to the field of video caching in mobile edge computing, and particularly relates to an MEC-assisted collaborative caching system supporting adaptive bitrate video.
Background
With the continuous development of mobile networks and the popularization of intelligent devices, the enormous demand for video places tremendous pressure on mobile network operators, and the limited capacity of the backhaul link becomes a major bottleneck in meeting user demand, especially during traffic peaks. Mobile Edge Computing (MEC) sinks cloud-like capabilities to the edge of the radio access network and provides computing and storage resources close to the user. For example, by deploying edge servers at small base stations, video content with high popularity can be pre-cached closer to the user, so that requests from different users for the same content can be satisfied without repeated transmissions from the source content server.
The existing femtocaching scheme proposes uncoded and coded video caching in a distributed caching network formed by small base stations such as femtocells, but it considers only the cache of a single edge server and does not consider collaborative caching among edge servers.
An existing low-complexity collaborative hierarchical caching scheme for the cloud radio access network introduces a cloud cache as an intermediary between edge-based and core-based caching. Existing distributed caching strategies consider the trade-off between diversity and redundancy of the cached video content and achieve the optimal redundancy rate in each edge server, but they do not consider the application of adaptive bitrate video streaming techniques. The prior art also designs a popularity-based joint caching and processing scheme built on transcoding, but it does not consider collaborative caching in the overlapping coverage areas of small base stations.
Disclosure of Invention
Accordingly, it is an object of the present invention to provide an MEC-assisted collaborative caching system supporting adaptive bitrate video.
The object of the invention is achieved by the following technical solution:
a MEC-assisted collaborative caching system supporting adaptive bitrate video, comprising: an edge server, a small cell, a source content server and a user device, wherein,
the edge server and the small base stations are deployed together to form edge nodes, each edge node has the functions of calculation, storage and communication service, a plurality of edge nodes simultaneously cover the same area, namely, the request of the same user can be jointly completed by a plurality of base stations,
the cache system maximizes user request hit revenue in the system and provides video content to the user device based on the user device request, the video content being video content comprising multiple bit rate versions,
wherein maximizing user request hit revenue in the system includes:
actively placing video content in an edge server based on an active cache deployment algorithm with a user request hit gain increment maximization;
when a cache miss occurs, the active content server downloads new video content to the edge server, based on a passive cache replacement algorithm that maximizes the hit gain increment for the user request, determines whether to replace existing content in the cache with new video content.
Further, the active cache deployment algorithm based on maximizing the increment of the user request hit benefit includes:
Step 2-1: initializing the cache deployment sets of all edge servers, and proceeding to step 2-2;
Step 2-2: calculating the user request hit benefit gain produced by each video deployment that has not yet been executed, and proceeding to step 2-3;
Step 2-3: selecting the video deployment with the largest user request hit benefit gain, and proceeding to step 2-4;
Step 2-4: judging whether the gain is equal to zero; if so, proceeding to step 2-9; if not, proceeding to step 2-5;
Step 2-5: judging whether the cache capacity of the node targeted by the video deployment with the largest user request hit benefit gain is sufficient; if not, proceeding to step 2-6; if so, proceeding to step 2-7;
Step 2-6: removing that edge server from consideration, and proceeding to step 2-8;
Step 2-7: executing the cache deployment, and returning to step 2-2;
Step 2-8: judging whether all edge servers have been removed; if so, proceeding to step 2-9; if not, returning to step 2-2;
Step 2-9: finishing the deployment.
Further, the passive cache replacement algorithm based on maximizing the increment of the user request hit benefit includes:
Step 3-1: initializing the set to be processed, and proceeding to step 3-2;
Step 3-2: calculating the user request hit benefit of each edge server, and proceeding to step 3-3;
Step 3-3: selecting the edge server with the smallest user request hit benefit, and proceeding to step 3-4;
Step 3-4: judging whether the remaining cache capacity of the current edge server is sufficient to accommodate the new video content; if so, proceeding to step 3-7; if not, proceeding to step 3-5;
Step 3-5: selecting, among the video deployments cached on the current edge server, the deployment with the smallest user request hit benefit gain, and proceeding to step 3-6;
Step 3-6: calculating the space occupied by the selected video content, placing it into the set to be processed, and returning to step 3-4;
Step 3-7: judging whether replacing the video content in the set to be processed with the new video content increases the user request hit benefit; if so, proceeding to step 3-8; if not, proceeding to step 3-9;
Step 3-8: performing the replacement, and proceeding to step 3-9;
Step 3-9: finishing the deployment.
Further, the objective function f and the constraint condition for maximizing the user request hit benefit are respectively:

f: max_{S ⊆ 𝒟} ψ(S) = Σ_{u ∈ 𝒰} Σ_{v ∈ 𝒱} Σ_{r ∈ ℛ} ψ_u^{v,r}(S)

s.t. Σ_{(v,r,n) ∈ S_n} o_{v,r} ≤ C_n, ∀ n ∈ 𝒩,

wherein the set of all videos available to users in the caching system is 𝒱; each video v ∈ 𝒱 is encoded into R different bitrate versions, whose set is ℛ; the system contains N edge nodes, whose set is 𝒩; the U users form the set 𝒰. The ground set of cache deployments can be expressed as 𝒟 = {(v, r, n) : v ∈ 𝒱, r ∈ ℛ, n ∈ 𝒩}, where (v, r, n) denotes that the video representation v_r is cached at edge node n, and v_r denotes the representation of video v at the r-th bitrate. ψ(S) denotes the total user request hit benefit in the caching system under cache deployment scheme S, and ψ_u^{v,r}(S) denotes the request hit benefit received by user u for representation v_r; o_{v,l} denotes the size of representation v_l, the l-th bitrate version of video v, where l is the version of the video that actually satisfies user u's request for v_r under deployment S. For any r', the corresponding cache indicator takes the value 1 if v_{r'} is cached at some edge node in 𝒩_u and 0 otherwise, where 𝒩_u denotes the set of edge nodes adjacent to user u; p_{v,r} denotes the popularity of representation v_r, with 0 ≤ p_{v,r} ≤ 1; and for any v_r, a second indicator takes the value 1 if v_r itself is cached at some node in 𝒩_u and 0 otherwise. For each edge node EN, 𝒟 can be partitioned into N disjoint subsets 𝒟_1, 𝒟_2, ..., 𝒟_N with 𝒟 = 𝒟_1 ∪ 𝒟_2 ∪ ... ∪ 𝒟_N, so that the cache deployment S ⊆ 𝒟 can be expressed as S = {S_1, S_2, ..., S_N}, where S_n ⊆ 𝒟_n. The size o_{v,r} of v_r can be expressed as o_{v,r} = l_v · b_r, where l_v denotes the length of video v, b_r denotes the bitrate of representation v_r, and C_n denotes the storage capacity of edge node n.
The beneficial effects of the invention are as follows:
the invention considers the processing capacity of the edge server, and the edge server with real-time computing capacity can transcode one video into a plurality of versions with different bit rates so as to meet the requirements of users on the versions with different video bit rates. In addition, the invention introduces a collaboration mechanism among edge servers in the cache system, namely, the edge servers in a plurality of base stations covering the same area can jointly provide service for the request of users in the area. In addition, the popularity of the video, the popularity of the version and the cooperation of the edge server are comprehensively considered, the index of hit benefits of the user request is provided, the problem of self-adaptive bit rate video cooperation cache under the constraint of cache capacity of the edge server is built, the hit benefits of the user request are maximum, and therefore the optimal deployment scheme is obtained.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings, in which:
FIG. 1 is an application scenario diagram of a mobile edge computing network with joint video buffering and processing capabilities, according to one embodiment of the present application;
FIG. 2 is a schematic flow diagram of an active cache deployment algorithm based on user request hit gain maximization according to one embodiment of the present application;
FIG. 3 is a schematic flow diagram of a passive cache replacement algorithm based on user request hit gain maximization according to one embodiment of the present application.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
The application proposes an MEC-assisted collaborative caching system (hereinafter referred to as "caching system") supporting adaptive bitrate video.
FIG. 1 is an application scenario diagram of a mobile edge computing network with joint video caching and processing capabilities, according to one embodiment of the present application. The network comprises an MEC-assisted collaborative caching system supporting adaptive bitrate video, the caching system comprising a plurality of edge servers (Mobile Edge Server, MEC), a plurality of small base stations (Small Base Station, SBS), a source content server (Origin Server) and user equipment associated with mobile users.
In this scenario, the Edge server and the small base station are deployed together, forming an Edge Node (Edge Node, EN). Each edge node has the functions of computing, storage and communication services. Multiple edge nodes may cover the same area at the same time. Due to the limitation of the transmission distance of the edge nodes, each mobile user can only connect with its neighboring edge nodes through its user equipment. Each edge node may contain a cache unit (cache unit) and a transcoding unit (transcoding unit). When a user requests a specific version of video content, if the version of video content is already cached in the edge node, the caching unit may directly send the video content to an output buffer (output buffer) by means of a video stream and send the video content to the user equipment; otherwise, if the requested version of the video content does not exist, but a higher bit rate version thereof exists, the transcoding unit may transcode it into the desired version and send it into the output buffer.
Each edge node may also include a built-in RTP/RTSP client that may receive video content from a source content server over a backhaul link (backhaul link) and store the video content in an input buffer (input buffer). The incoming video stream may then be transferred directly to the buffering unit and/or the output buffer according to a specific buffering strategy.
Each edge node may also include a built-in RTP/RTSP server that may be used to send video content from the output buffer into the user device.
Since multiple edge nodes may cover the same area at the same time, in this case, the request of the same user may be commonly completed by multiple base stations, i.e. the edge nodes may cooperate with each other. In this way, a plurality of different edge nodes covering the same area may collectively serve user requests in that area.
The invention also considers the processing capability of the edge nodes, in particular their real-time transcoding capability. An edge node can transcode video content into versions with multiple bitrates to meet different user requirements. The scenario therefore also supports adaptive bitrate video-on-demand service, i.e., a user can select the most suitable bitrate version to play according to the device capability and the current network conditions. In such a scenario, where the same video content has multiple bitrate versions, a high bitrate version of the video content can be transcoded into a lower bitrate version. Thus, when a user requests a low bitrate version of a video, a cached high bitrate version can be transcoded to meet the user's needs.
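The serving logic described above (direct hit, transcoding from a cached higher-bitrate version, or fetching from the source content server on a miss) can be summarized in a minimal sketch. The class and method names below (EdgeNode, handle_request, and so on) are illustrative assumptions rather than part of the patent, and the sketch assumes that a larger bitrate index corresponds to a higher bitrate.

```python
# Minimal sketch of the edge-node serving logic described above.
# All names are illustrative assumptions; a larger index r means a higher bitrate.

class EdgeNode:
    def __init__(self, node_id, capacity):
        self.node_id = node_id
        self.capacity = capacity      # C_n, storage capacity of the node
        self.cache = {}               # (video, r) -> size o_{v,r}

    def has(self, video, r):
        return (video, r) in self.cache

    def handle_request(self, video, r, neighbors, origin):
        """Serve a request for representation v_r of `video`."""
        nodes = [self] + list(neighbors)
        # Case 1: the requested version is cached at this node or a cooperating neighbor.
        for node in nodes:
            if node.has(video, r):
                return ("direct", node.node_id)
        # Case 2: a higher-bitrate version is cached nearby and can be transcoded down.
        for node in nodes:
            higher = [rp for (v, rp) in node.cache if v == video and rp > r]
            if higher:
                return ("transcode", node.node_id, min(higher))
        # Case 3: cache miss -- the content must be downloaded from the source content server.
        return ("origin", origin)
```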
The aim of the invention is to maximize the request hit profit (RHP) of user requests in the caching system. The source content server maximizes the user request hit benefit in the system and provides video content to the user devices according to their requests. Thus, when a piece of video content is deployed to an edge node, the resulting improvement in hit benefit can be evaluated by comprehensively considering the influence of other cached versions and the deployment status of the other cooperating edge nodes. The user request hit benefit characterizes the probability that, when a video is requested, it can be obtained directly from the neighboring edge nodes. The user request hit benefit integrates video popularity, version popularity, and the collaboration among edge servers.
Under the constraint of the caching capacity of the edge nodes, a collaborative caching problem for adaptive bitrate video is formulated so that the user request hit benefit is maximized. This problem is NP-complete. In order to maximize the request hit benefit of the users in the caching system, two steps can be taken: first, video content is actively placed in the caches of the edge servers through an active caching strategy, so that the hit benefit of the received user requests is maximized; then, whenever a cache miss occurs, new video content is downloaded from the source content server to the edge server, and a decision must be made as to whether to replace existing content in the cache with this new content. For these two steps, the invention provides an active cache deployment algorithm based on maximizing the increment of the user request hit benefit and a passive cache replacement algorithm based on maximizing the increment of the user request hit benefit, respectively. Video content is actively placed in the edge servers based on the active cache deployment algorithm; when a cache miss occurs, the source content server downloads the new video content to the edge server, and the passive cache replacement algorithm determines whether to replace existing content in the cache with the new video content.
Assume that the set of all videos available to users in the caching system is 𝒱, and that each video v ∈ 𝒱 is encoded into R different bitrate versions forming the set ℛ. The system contains N edge nodes, whose set is 𝒩, and the U users form the set 𝒰. The ground set of cache deployments can be expressed as 𝒟 = {(v, r, n) : v ∈ 𝒱, r ∈ ℛ, n ∈ 𝒩}, where (v, r, n) denotes that the video representation v_r is cached at edge node n, and v_r denotes the representation of video v at the r-th bitrate. For each edge node EN, 𝒟 can be partitioned into N disjoint subsets 𝒟_1, 𝒟_2, ..., 𝒟_N with 𝒟 = 𝒟_1 ∪ 𝒟_2 ∪ ... ∪ 𝒟_N, so that a cache deployment S ⊆ 𝒟 can be expressed as S = {S_1, S_2, ..., S_N}, where S_n ⊆ 𝒟_n.

For each edge node, the caching capacity constraint can be expressed as:

Σ_{(v,r,n) ∈ S_n} o_{v,r} ≤ C_n,

where the size o_{v,r} of v_r can be expressed as o_{v,r} = l_v · b_r, l_v denotes the length of video v, b_r denotes the bitrate of representation v_r, and C_n denotes the storage capacity of edge node n.
When a user requests a piece of video content, a certain delay is incurred regardless of whether the content can be obtained directly from a neighboring edge node or must be fetched from the source content server. Minimizing the waiting delay can therefore be converted into maximizing the probability that the video is obtained from a neighboring edge node, either directly or after transcoding. An indicator called the request hit profit is defined in this application to reflect the probability that a user request is hit. When a user requests a particular video representation v_r, the user is said to obtain a hit benefit if the request is fulfilled either by directly obtaining the video from a neighboring edge node or by transcoding a version cached at a neighboring edge node.
The objective function for maximizing the user request hit benefit in the caching system can be expressed by formula (1):

max_S ψ(S) = Σ_{u ∈ 𝒰} Σ_{v ∈ 𝒱} Σ_{r ∈ ℛ} ψ_u^{v,r}(S),   (1)

where ψ(S) denotes the total user request hit benefit in the caching system under cache deployment scheme S, and ψ_u^{v,r}(S) denotes the request hit benefit received by user u for representation v_r. Here o_{v,l} denotes the size of representation v_l, the l-th bitrate version of video v, where l is the version of the video that actually satisfies user u's request for v_r under deployment S. For any r', the corresponding cache indicator takes the value 1 if v_{r'} is cached at some edge node in 𝒩_u and 0 otherwise, where 𝒩_u denotes the set of edge nodes adjacent to user u; p_{v,r} denotes the popularity of representation v_r, with 0 ≤ p_{v,r} ≤ 1; and for any v_r, a second indicator takes the value 1 if v_r itself is cached at some node in 𝒩_u and 0 otherwise.

The constraints for maximizing the user request hit benefit in the caching system are expressed by formulas (2)-(4):

Σ_{(v,r,n) ∈ S_n} o_{v,r} ≤ C_n, ∀ n ∈ 𝒩,   (2)

S_n ⊆ 𝒟_n, ∀ n ∈ 𝒩,   (3)

S = S_1 ∪ S_2 ∪ ... ∪ S_N.   (4)
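As a rough illustration of how the objective in formula (1) can be evaluated for a given deployment S, the following sketch counts a popularity-weighted hit whenever the requested representation, or a higher-bitrate representation that could be transcoded down, is cached at an edge node adjacent to the user, and also checks the capacity constraint of formula (2). The exact per-request weighting of ψ_u^{v,r}(S), which additionally involves the size o_{v,l} of the representation actually served, is simplified here, and all function and parameter names are assumptions rather than the patent's notation.

```python
# Sketch: evaluating a simplified, popularity-weighted request hit benefit
# for a deployment S = {S_1, ..., S_N}, plus the per-node capacity check.

def size(video_length, bitrate):
    """o_{v,r} = l_v * b_r : size of representation v_r."""
    return video_length * bitrate

def hit_benefit(S, popularity, neighbors, users, videos, versions):
    """S          : dict node -> set of (video, r) cached at that node
       popularity : dict (video, r) -> p_{v,r} in [0, 1]
       neighbors  : dict user -> set of adjacent edge nodes (the set N_u)
    """
    total = 0.0
    for u in users:
        # Representations reachable by user u through its adjacent edge nodes.
        reachable = set()
        for n in neighbors[u]:
            reachable |= S.get(n, set())
        for v in videos:
            for r in versions:
                # Hit if v_r itself, or any higher-bitrate version that can be
                # transcoded down to r, is cached at a neighboring node.
                if any((v, rp) in reachable for rp in versions if rp >= r):
                    total += popularity[(v, r)]
    return total

def feasible(S, sizes, capacity):
    """Capacity constraint (2): the sizes cached at node n must not exceed C_n."""
    return all(sum(sizes[item] for item in S.get(n, set())) <= capacity[n]
               for n in capacity)
```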
FIG. 2 is a schematic flow diagram of an active cache deployment algorithm based on user request hit gain maximization according to one embodiment of the present application. As shown in FIG. 2, the active cache deployment algorithm based on maximizing the increment of the user request hit benefit comprises the following steps (a code sketch is given after the step list):
Step 2-1: initializing the cache deployment sets of the edge servers in all edge nodes, and proceeding to step 2-2;
Step 2-2: calculating the user request hit benefit gain produced by each video deployment that has not yet been executed, and proceeding to step 2-3;
Step 2-3: selecting the video deployment with the largest user request hit benefit gain, and proceeding to step 2-4;
Step 2-4: judging whether the gain is equal to zero; if so, proceeding to step 2-9; if not, proceeding to step 2-5;
Step 2-5: judging whether the cache capacity of the node targeted by the video deployment with the largest user request hit benefit gain is sufficient; if not, proceeding to step 2-6; if so, proceeding to step 2-7;
Step 2-6: removing that edge server from consideration, and proceeding to step 2-8;
Step 2-7: executing the cache deployment, and returning to step 2-2;
Step 2-8: judging whether all edge servers have been removed; if so, proceeding to step 2-9; if not, returning to step 2-2;
Step 2-9: finishing the deployment.
The above active cache deployment algorithm based on maximizing the increment of the user request hit benefit can perform cache deployment for the edge node servers during off-peak traffic periods (e.g., at midnight) to avoid overloading the backhaul links. At run time, each cache miss means that new video content needs to be downloaded from the source content server to serve the user's request. Since the cache space of all edge node servers is full at that point, it is necessary to determine whether to replace the cached video content with the newly arrived content.
FIG. 3 is a schematic flow diagram of a passive cache replacement algorithm based on user request hit gain maximization according to one embodiment of the present application. The steps shown in FIG. 3 are executed when a request for new video content results in a cache miss. As shown in FIG. 3, when a mobile user U associated with an edge node requests new video content, the passive cache replacement algorithm based on maximizing the increment of the user request hit benefit comprises the following steps (a code sketch is given after the step list):
Step 3-1: initializing the set to be processed, and proceeding to step 3-2, wherein the set to be processed represents the set of cached video content in the edge nodes that may be replaced;
Step 3-2: calculating the user request hit benefit of each edge server adjacent to the mobile user U, and proceeding to step 3-3;
Step 3-3: selecting the edge server with the smallest user request hit benefit, and proceeding to step 3-4;
Step 3-4: judging whether the remaining cache capacity of the current edge server is sufficient to accommodate the new video content; if so, proceeding to step 3-7; if not, proceeding to step 3-5;
Step 3-5: selecting, among the video deployments cached on the current edge server, the deployment with the smallest user request hit benefit gain, and proceeding to step 3-6;
Step 3-6: calculating the space occupied by the selected video content, placing it into the set to be processed, and returning to step 3-4;
Step 3-7: judging whether replacing the video content in the set to be processed with the new video content increases the user request hit benefit; if so, proceeding to step 3-8; if not, proceeding to step 3-9;
Step 3-8: performing the replacement, and proceeding to step 3-9;
Step 3-9: finishing the deployment.
Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the claims of the present invention.

Claims (4)

1. An MEC-assisted collaborative caching system supporting adaptive bitrate video, comprising:
an edge server, a small base station, a source content server and a user device, wherein
the edge server and the small base station are deployed together to form an edge node, each edge node provides computing, storage and communication services, and a plurality of edge nodes cover the same area at the same time, so that the request of the same user can be jointly served by a plurality of base stations,
the caching system maximizes the user request hit benefit in the system and, according to the request of the user device, provides video content to the user device, the video content comprising a plurality of bitrate versions,
wherein maximizing the user request hit benefit in the system includes:
actively placing video content in the edge servers based on an active cache deployment algorithm that maximizes the increment of the user request hit benefit; and
when a cache miss occurs, downloading new video content from the source content server to the edge server, and determining, based on a passive cache replacement algorithm that maximizes the increment of the user request hit benefit, whether to replace existing content in the cache with the new video content.
2. The MEC-assisted collaborative caching system supporting adaptive bitrate video according to claim 1, wherein the active cache deployment algorithm based on maximizing the increment of the user request hit benefit comprises:
Step 2-1: initializing the cache deployment sets of all edge servers, and proceeding to step 2-2;
Step 2-2: calculating the user request hit benefit gain produced by each video deployment that has not yet been executed, and proceeding to step 2-3;
Step 2-3: selecting the video deployment with the largest user request hit benefit gain, and proceeding to step 2-4;
Step 2-4: judging whether the gain is equal to zero; if so, proceeding to step 2-9; if not, proceeding to step 2-5;
Step 2-5: judging whether the cache capacity of the node targeted by the video deployment with the largest user request hit benefit gain is sufficient; if not, proceeding to step 2-6; if so, proceeding to step 2-7;
Step 2-6: removing that edge server from consideration, and proceeding to step 2-8;
Step 2-7: executing the cache deployment, and returning to step 2-2;
Step 2-8: judging whether all edge servers have been removed; if so, proceeding to step 2-9; if not, returning to step 2-2;
Step 2-9: finishing the deployment.
3. The MEC-assisted collaborative caching system supporting adaptive bitrate video according to claim 1, wherein the passive cache replacement algorithm based on maximizing the increment of the user request hit benefit comprises:
Step 3-1: initializing the set to be processed, and proceeding to step 3-2;
Step 3-2: calculating the user request hit benefit of each edge server, and proceeding to step 3-3;
Step 3-3: selecting the edge server with the smallest user request hit benefit, and proceeding to step 3-4;
Step 3-4: judging whether the remaining cache capacity of the current edge server is sufficient to accommodate the new video content; if so, proceeding to step 3-7; if not, proceeding to step 3-5;
Step 3-5: selecting, among the video deployments cached on the current edge server, the deployment with the smallest user request hit benefit gain, and proceeding to step 3-6;
Step 3-6: calculating the space occupied by the selected video content, placing it into the set to be processed, and returning to step 3-4;
Step 3-7: judging whether replacing the video content in the set to be processed with the new video content increases the user request hit benefit; if so, proceeding to step 3-8; if not, proceeding to step 3-9;
Step 3-8: performing the replacement, and proceeding to step 3-9;
Step 3-9: finishing the deployment.
4. The MEC-assisted collaborative caching system supporting adaptive bitrate video according to claim 1, wherein the objective function f and the constraint condition for maximizing the user request hit benefit are respectively:

f: max_{S ⊆ 𝒟} ψ(S) = Σ_{u ∈ 𝒰} Σ_{v ∈ 𝒱} Σ_{r ∈ ℛ} ψ_u^{v,r}(S)

s.t. Σ_{(v,r,n) ∈ S_n} o_{v,r} ≤ C_n, ∀ n ∈ 𝒩,

wherein the set of all videos available to users in the caching system is 𝒱; each video v ∈ 𝒱 is encoded into R different bitrate versions, whose set is ℛ; the system contains N edge nodes, whose set is 𝒩; the U users form the set 𝒰; the ground set of cache deployments can be expressed as 𝒟 = {(v, r, n) : v ∈ 𝒱, r ∈ ℛ, n ∈ 𝒩}, where (v, r, n) denotes that the video representation v_r is cached at edge node n, and v_r denotes the representation of video v at the r-th bitrate; ψ(S) denotes the total user request hit benefit in the caching system under cache deployment scheme S, and ψ_u^{v,r}(S) denotes the request hit benefit received by user u for representation v_r; o_{v,l} denotes the size of representation v_l, the l-th bitrate version of video v, where l is the version of the video that actually satisfies user u's request for v_r under deployment S; for any r', the corresponding cache indicator takes the value 1 if v_{r'} is cached at some edge node in 𝒩_u and 0 otherwise, where 𝒩_u denotes the set of edge nodes adjacent to user u; p_{v,r} denotes the popularity of representation v_r, with 0 ≤ p_{v,r} ≤ 1; for any v_r, a second indicator takes the value 1 if v_r itself is cached at some node in 𝒩_u and 0 otherwise; for each edge node EN, 𝒟 can be partitioned into N disjoint subsets 𝒟_1, 𝒟_2, ..., 𝒟_N with 𝒟 = 𝒟_1 ∪ 𝒟_2 ∪ ... ∪ 𝒟_N, so that the cache deployment S ⊆ 𝒟 can be expressed as S = {S_1, S_2, ..., S_N}, where S_n ⊆ 𝒟_n; and the size o_{v,r} of v_r can be expressed as o_{v,r} = l_v · b_r, where l_v denotes the length of video v, b_r denotes the bitrate of representation v_r, and C_n denotes the storage capacity of edge node n.
CN202211570420.4A 2022-12-08 2022-12-08 MEC auxiliary collaborative caching system supporting self-adaptive bit rate video Pending CN116056156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211570420.4A CN116056156A (en) 2022-12-08 2022-12-08 MEC auxiliary collaborative caching system supporting self-adaptive bit rate video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211570420.4A CN116056156A (en) 2022-12-08 2022-12-08 MEC auxiliary collaborative caching system supporting self-adaptive bit rate video

Publications (1)

Publication Number Publication Date
CN116056156A (en) 2023-05-02

Family

ID=86115305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211570420.4A Pending CN116056156A (en) 2022-12-08 2022-12-08 MEC auxiliary collaborative caching system supporting self-adaptive bit rate video

Country Status (1)

Country Link
CN (1) CN116056156A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828226A (en) * 2023-08-28 2023-09-29 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain
CN116828226B (en) * 2023-08-28 2023-11-10 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain

Similar Documents

Publication Publication Date Title
CN113225584B (en) Cross-layer combined video transmission method and system based on coding and caching
CN110290507B (en) Caching strategy and spectrum allocation method of D2D communication auxiliary edge caching system
CN109673018B (en) Novel content cache distribution optimization method in wireless heterogeneous network
CN111432270B (en) Real-time service delay optimization method based on layered cache
WO2015076705A1 (en) Controlling the transmission of a video data stream over a network to a network user device
CN111698732B (en) Time delay oriented cooperative cache optimization method in micro-cellular wireless network
CN109451517B (en) Cache placement optimization method based on mobile edge cache network
CN116056156A (en) MEC auxiliary collaborative caching system supporting self-adaptive bit rate video
He et al. Cache-enabled coordinated mobile edge network: Opportunities and challenges
CN108769729B (en) Cache arrangement system and cache method based on genetic algorithm
CN111093213A (en) Hot content superposition pushing and distributing method and system and wireless communication system
US20130198330A1 (en) Cooperative catching method and apparatus for mobile communication system
WO2010033711A1 (en) System and method for determining a cache arrangement
CN110913239A (en) Video cache updating method for refined mobile edge calculation
CN110602722A (en) Design method for joint content pushing and transmission based on NOMA
CN111447506B (en) Streaming media content placement method based on delay and cost balance in cloud edge environment
CN111314349B (en) Code caching method based on joint maximum distance code division and cluster cooperation in fog wireless access network
CN111586439A (en) Green video caching method for cognitive content center network
CN108668288B (en) Method for optimizing small base station positions in wireless cache network
CN115720237A (en) Caching and resource scheduling method for edge network self-adaptive bit rate video
Yao et al. Joint caching in fronthaul and backhaul constrained C-RAN
CN112954026B (en) Multi-constraint content cooperative cache optimization method based on edge calculation
Kumar et al. Consolidated caching with cache splitting and trans-rating in mobile edge computing networks
CN108429919B (en) Caching and transmission optimization method of multi-rate video in wireless network
Noh et al. Cooperative and distributive caching system for video streaming services over the information centric networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination