CN115226075A - Video caching method and system based on context awareness - Google Patents

Video caching method and system based on context awareness

Info

Publication number
CN115226075A
CN115226075A (application CN202210853414.3A)
Authority
CN
China
Prior art keywords
video
user equipment
context
video segment
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210853414.3A
Other languages
Chinese (zh)
Inventor
夏秋粉
焦志伟
徐子川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202210853414.3A priority Critical patent/CN115226075A/en
Publication of CN115226075A publication Critical patent/CN115226075A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0917 Management thereof based on the energy state of entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0958 Management thereof based on metrics or performance parameters
    • H04W 28/0967 Quality of Service [QoS] parameters
    • H04W 28/0975 Quality of Service [QoS] parameters for reducing delays
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a video caching method and system based on context awareness, belonging to the technical field of mobile edge network computing. The method divides all videos in a target community into equal-length video segments, divides each user device into a plurality of virtual devices, and effectively predicts the actual capacity of all video segments in a prediction time period based on a multi-armed bandit algorithm. Then, taking the video segments as the devices of a UCKM framework, the actual capacity of the video segments as the device capacity, the virtual devices as the locations of the UCKM framework, and minimized transmission delay as the objective, it determines a video caching strategy for the prediction time period, yielding a caching decision that minimizes the average delay of the user devices performing video caching within the community. This addresses the problems that the probability of different users requesting videos of different content is uncertain and that blind caching causes unnecessary energy consumption and delay, thereby improving users' quality of service.

Description

Video caching method and system based on context awareness
Technical Field
The invention relates to the technical field of mobile edge network computing, in particular to a video caching method and system based on context awareness in a D2D mobile edge network.
Background
With the growth of the internet and the modernization of user equipment, the number of user devices worldwide is expected to reach 18.22 billion by 2025. Device-to-Device (D2D) communication is a promising paradigm that enables user devices to communicate with each other with or without network infrastructure such as access points and base stations. Meanwhile, watching video over the internet is one of the most popular activities worldwide, and video has become the most popular form of content consumption. According to Statista, by 2023 American adults watch 80 minutes of video each day, more than 75% of videos worldwide are played on user devices, and by 2022, 92% of mobile video viewers share content with others. People have become accustomed to playing video on demand from a variety of user devices, and these viewers have short attention spans and very low tolerance for delay: a delay in excess of 1000 milliseconds causes a noticeable interruption when viewing a video. Low latency ensures good interactivity and an engaging viewing experience. Caching in D2D networks is a natural way to speed up video access for user equipment: frequently accessed videos or video segments are temporarily stored on user devices in the D2D network close to the viewer. In this way the delay experienced by the viewer can be minimized, since the video no longer needs to be transmitted over a long distance through the network, and video access is less affected by a congested link between the base station and the user equipment.
However, in a D2D network, when watching short videos or live streams, community users may use different local area networks or network service providers, and different users may perceive video fluency, content preference, and video quality of service differently. Therefore, making video caching decisions in real time that minimize the average latency of the user devices performing video caching within a community is essential to improving users' quality of service.
Disclosure of Invention
The invention aims to provide a context-aware video caching method and system that address two problems: the probability of different users requesting videos of different content is uncertain, and, owing to the limited storage resources of user equipment, users' personalized preferences for video content, and other factors, blind caching generates unnecessary energy consumption and delay. Solving these problems improves users' quality of service.
In order to achieve the purpose, the invention provides the following scheme:
a video caching method based on context awareness, the video caching method comprising:
dividing all videos in the target community into video segments of equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment communicates directly through a D2D network; all the videos comprise the videos cached on the base station and on each user equipment;
determining the minimum value of the energy budgets, the maximum value of the transmission power and the minimum value of the storage resource budget of all the user equipment to obtain the minimum energy budget, the maximum transmission power and the minimum storage resource budget;
determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of the user equipment that has historically played the video segment, the video type of the video segment, the historical playing count, and the user equipment's ratings of the video segment;
for each user device, dividing the user device into a plurality of virtual devices according to the storage resource budget of the user device and the length of the video segment;
determining a video caching strategy for the prediction time period by taking the video segments as the devices of a UCKM framework, the actual capacity of the video segments as the device capacity, and the virtual devices as the locations of the UCKM framework, with minimized transmission delay as the objective; the video caching strategy comprises the caching correspondence between all the video segments and the user equipment.
A context-aware based video caching system, the video caching system comprising:
the video dividing module is used for dividing all videos in the target community into video segments of equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment communicates directly through a D2D network; all the videos comprise the videos cached on the base station and on each user equipment;
the actual capacity calculation module is used for determining the minimum value of the energy budgets, the maximum value of the transmission powers and the minimum value of the storage resource budgets of all the user equipment to obtain the minimum energy budget, the maximum transmission power and the minimum storage resource budget; determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; and calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of the user equipment that has historically played the video segment, the video type of the video segment, the historical playing count, and the user equipment's ratings of the video segment;
the user equipment dividing module is used for dividing each user equipment into a plurality of virtual equipment according to the storage resource budget of the user equipment and the length of the video segment;
a video caching strategy solving module, configured to determine a video caching strategy for the prediction time period by taking the video segments as the devices of the UCKM framework, the actual capacity of the video segments as the device capacity, and the virtual devices as the locations of the UCKM framework, with minimized transmission delay as the objective; the video caching strategy comprises the caching correspondence between all the video segments and the user equipment.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a video caching method and a system based on context awareness, which are characterized in that all videos in a target community are divided into video segments with equal length, each user device is divided into a plurality of virtual devices, the initial capacity of all the video segments is determined, the context of each video segment is extracted, the request probability of all the video segments in a prediction time period is determined by using a dobby tiger machine algorithm, the actual capacity of all the video segments is calculated according to the initial capacity and the request probability, so that the actual capacity of all the video segments in the prediction time period is effectively predicted, finally, the video segments are used as a device of a UCKM frame, the actual capacity of the video segments is used as the capacity of the device, the virtual devices are used as the position of the UCKM frame, the minimum transmission delay is used as a target, a video caching strategy of the prediction time period is determined, a video caching decision which can minimize the average delay of the user devices which can be used for video caching in the community can be made in real time, after the caching of the video segments is carried out according to the video caching strategy, the delay of the prediction time period can be greatly reduced, so as to solve the problem that the blind service is generated due to the individual video content with different energy consumption caused by the limitation of available storage resources of the user devices and the user devices, and the uncertain user.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a flowchart of a method for caching video according to embodiment 1 of the present invention;
fig. 2 is a flowchart of an online learning method of a video caching method according to embodiment 1 of the present invention;
fig. 3 is another flowchart of an online learning method of a video caching method according to embodiment 1 of the present invention;
fig. 4 is a system block diagram of a video caching system according to embodiment 2 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention aims to provide a context-aware video caching method and system that address two problems: the probability of different users requesting videos of different content is uncertain, and, owing to the limited storage resources of user equipment, users' personalized preferences for video content, and other factors, blind caching generates unnecessary energy consumption and delay. Solving these problems improves users' quality of service.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Example 1:
the present embodiment is configured to provide a video caching method based on context awareness, as shown in fig. 1, the video caching method includes:
s1: dividing all videos in a target community into video segments with equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment is in direct communication through a D2D network; the all videos comprise videos cached on the base station and each user equipment;
specifically, the communities are divided according to the geographic location of the user equipment, which may refer to a mobile device used by the user, such as a mobile phone. In this embodiment, the geographic range of the community is fixed, but the number of the user devices in the same community is not limited, and the user devices in the geographic range belonging to the community are the user devices included in the community, that is, the number of the user devices included in the community is not fixed. Each community is provided with a base station, video content which can be provided for all user equipment in the community is cached on the base station, video content which is watched by the user equipment in history is cached on the user equipment, and when the user equipment in the community has no video access record, videos are randomly obtained from the base station and serve as videos which are initially cached on the user equipment. The base station is provided with a controller which is used for processing parameters in the video cache decision process in the community. The user equipment in the same community can directly communicate through the D2D link, and the user equipment can also send a communication request to the base station for communication.
One community is selected as the target community according to the prediction requirement, and the controller divides all videos cached on the base station and on each user device of the target community into video segments of equal length (denoted δ) for indexing and caching. During this division, each video is divided according to the length δ; if the video cannot be divided evenly, the final remaining portion shorter than δ is also treated as a video segment on its own. Each video segment has the same geographic location as its original video, and the total data size of a video's segments equals the data size of the original video.
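The segmentation rule just described, equal pieces of length δ with any shorter remainder kept as its own segment, can be sketched as follows (the function name and integer units are illustrative, not from the patent):

```python
def split_into_segments(video_length, delta):
    """Split a video of total length video_length into segments of length
    delta; a final remainder shorter than delta becomes its own segment."""
    full, remainder = divmod(video_length, delta)
    segments = [delta] * full
    if remainder:
        segments.append(remainder)
    return segments
```

The total size of the segments equals the size of the original video, as the text requires; for example, `split_into_segments(10, 3)` yields `[3, 3, 3, 1]`.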
S2: determining the minimum value of the energy budgets, the maximum value of the transmission power and the minimum value of the storage resource budget of all the user equipment to obtain the minimum energy budget, the maximum transmission power and the minimum storage resource budget;
each user equipment in the target community has an energy budget EB, a transmission power P and a storage resource budget B, where the storage resource budget refers to a storage resource available for the user equipment to cache content. The controller selects the minimum value of the energy budgets of all the user equipment in the target community to obtain the maximum valueSmall energy budget EB min (ii) a Selecting the maximum value of the transmission power of all the user equipment to obtain the maximum transmission power P max (ii) a Selecting the minimum value of the storage resource budgets of all the user equipment to obtain the minimum storage resource budget B min
S3: determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of the user devices that have historically played the video segment, the video type of the video segment, the historical playing count, and the user devices' ratings of the video segment;
in S3, determining the initial capacity of all video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget, and the length of the video segments may include: and determining the initial capacity of all the video segments by using an initial capacity calculation formula according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments.
The initial capacity calculation formula used in this embodiment is:

C_Initial = min( EB_min / (P_max · δ), B_min / δ )

where C_Initial is the initial capacity, EB_min is the minimum energy budget, δ is the length of a video segment, P_max is the maximum transmission power, and B_min is the minimum storage resource budget. Based on the above formula, the initial capacity of all video segments is the same.
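The initial-capacity formula itself appears only as an image in the source. One plausible reading, consistent with the symbols listed above, is that the initial capacity is the tighter of an energy-limited bound and a storage-limited bound; the exact form below is an assumption, not the patent's confirmed equation:

```python
def initial_capacity(eb_min, p_max, b_min, delta):
    """Assumed reconstruction: capacity is bounded by how many segment
    transmissions the minimum energy budget affords (EB_min / (P_max * delta))
    and by how many segments the minimum storage budget holds (B_min / delta)."""
    return min(eb_min / (p_max * delta), b_min / delta)
```

Because only community-wide minima and maxima enter this computation, every video segment receives the same initial capacity, matching the statement above.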
In this embodiment, a combination of features of the user equipment, the video, and the historical playing information is defined as a context θ that has different influences on video segments. The context of each video segment includes: the user information of the user devices that have historically played the video segment, such as age and occupation; video information, such as the video type of the video segment; and historical playing information, such as the historical playing count and the user devices' ratings of the video segment.
The present embodiment expresses the online context-aware video caching problem as a contextual multi-armed bandit problem based on a greedy algorithm. Each user device acts as one arm of the bandit problem and makes a decision when predicting the request probabilities of its cached video segments. In each time slot t, the request probability of a video segment may change relative to its request probability in slot t-1, so the arm of each user device observes the historical request probability and the context of each cached video segment to predict the segment's future request probability. Based on this, in S3, determining the request probabilities of all video segments in the prediction time period using the context-based multi-armed bandit algorithm may include:
(1) According to the historical playing counts of the video segments, all video segments in the target community are divided into a plurality of levels; video segments in the same level have the same historical playing count.
All video segments in the target community are divided into Q levels according to historical playing count (historical access frequency); video segments with the same historical playing count belong to the same level. The q-th level (1 ≤ q ≤ Q) contains f_unit video segments, and the request frequency of that level's video segments is defined as f_unit · q.
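The level division can be sketched as a simple grouping by historical play count; the function name and data layout are illustrative, not from the patent:

```python
from collections import defaultdict

def divide_into_levels(play_counts):
    """Group video segments so that segments with the same historical play
    count share a level; levels are ordered by increasing play count."""
    by_count = defaultdict(list)
    for segment, count in play_counts.items():
        by_count[count].append(segment)
    return [by_count[c] for c in sorted(by_count)]
```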
(2) For each video segment, calculating the recommendation probability of the video segment according to the cumulative loss of its context; the cumulative loss is the reciprocal of the average historical request latency for requesting the video segment.
The cumulative loss of the context θ of a q-th level video segment s_{j,m} is denoted L_{θ,q}(s_{j,m}); it is the reciprocal of the average of all historical request latencies experienced when the video segment was requested. Each context has a different recommendation probability for the request probability of its corresponding video segment, and calculating the recommendation probability of a video segment according to the cumulative loss of its context may include: calculating the recommendation probability of the video segment using a recommendation probability calculation formula, based on the cumulative loss of the context of the video segment.
The recommendation probability calculation formula used in this embodiment is:

p_{θ,q}(s_{j,m}) = exp(-α_t · L_{θ,q}(s_{j,m})) / Σ_{q'=1..Q} exp(-α_t · L_{θ,q'}(s_{j,m}))

where p_{θ,q}(s_{j,m}) is the recommendation probability of context θ for the q-th level video segment s_{j,m}; α_t is the learning rate; L_{θ,q}(s_{j,m}) is the cumulative loss of the context θ of the q-th level video segment s_{j,m}; and L_{θ,q'}(s_{j,m}) is the cumulative loss of the context θ of the q'-th level video segment s_{j,m}.
Whether the recommendation probability prediction is accurate or not influences the delay experienced by the user and the resource utilization rate. By the method of the embodiment, the recommendation probability of each video segment in the prediction time period can be accurately predicted.
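If the recommendation probability follows an exponential-weights rule over the cumulative losses (a standard choice for bandit-style algorithms with a learning rate α_t, and consistent with the symbols listed above, though the patent's own formula is rendered only as an image), it can be computed as:

```python
import math

def recommendation_probabilities(losses, alpha_t):
    """Exponential-weights sketch: weight each level q by exp(-alpha_t * L_q)
    and normalize, so the probabilities over the Q levels sum to one."""
    weights = [math.exp(-alpha_t * loss) for loss in losses]
    total = sum(weights)
    return [w / total for w in weights]
```

Under this rule, levels with smaller cumulative loss receive larger recommendation probability.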
(3) For each user device within the target community, determining the ranges of a plurality of context groups and dividing the contexts of all first video segments cached on the user device into those context groups according to the cumulative losses of their contexts. Context group g_w has the value range [LB_w, UB_w]; each context belongs to exactly one context group, and the contexts within a group have similar influences on the request probabilities of different video segments. From each context group, the recommendation probability of the first video segment whose context has the minimum cumulative loss is selected as the representative probability of the group. With the user device as one arm of the multi-armed bandit algorithm, a context group is randomly selected based on the representative probabilities of all context groups to obtain a target context group. The recommendation probability of the second video segment corresponding to a context contained in the target context group is taken as the request probability of that second video segment, and the request probability of the second video segment is also used as the request probability of any third video segment belonging to the same level as the second video segment, yielding the current user device's predicted request probabilities for the video segments.
For example, if this embodiment divides the contexts into 2 groups, one with range 0-5 and one with range 6-10, and the cumulative losses of the contexts are 1, 4, 6, 7, and 9, then 1 and 4 belong to the first group and 6, 7, and 9 belong to the second group.
It should be noted that when the request probabilities are predicted for the first time, the context groups are determined as a plurality of equally spaced, non-overlapping intervals. In subsequent predictions, each context group g_w used in the previous prediction adjusts the contexts within the group through its expert exp_w, and the expert exp_w extends the interval with probability σ (0 ≤ σ ≤ 1). If the expert exp_w of context group g_w chooses to extend its interval range by ζ, then in the next time slot the new interval is [LB_w - ζ, UB_w + ζ], so that more contexts are included in the expert's context group g_w and the expert group has more opportunities to explore better contexts.
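The grouping of step (3) and the experts' interval expansion can be sketched together, reusing the example values from the text (cumulative losses 1, 4, 6, 7, 9 and groups 0-5 and 6-10); the function names are illustrative:

```python
def assign_context_groups(losses, bounds):
    """Assign each context (identified by its cumulative loss) to the first
    group whose interval [LB_w, UB_w] contains it."""
    groups = [[] for _ in bounds]
    for loss in losses:
        for w, (lb, ub) in enumerate(bounds):
            if lb <= loss <= ub:
                groups[w].append(loss)
                break
    return groups

def expand_interval(interval, zeta):
    """Expert exp_w widens its interval to [LB_w - zeta, UB_w + zeta] so that
    more contexts fall into the group in the next time slot."""
    lb, ub = interval
    return (lb - zeta, ub + zeta)
```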
(4) The controller collects and aggregates the predicted request probabilities of all user devices for the video segments, i.e., it summarizes the request probabilities of the second and third video segments corresponding to all user devices, and sets the request probability of each remaining video segment to the average of the request probabilities of all second and third video segments, obtaining the request probabilities of all video segments in the target community.
By the method, the request probability of all video segments in the target community can be effectively predicted.
After obtaining the initial capacity and the request probability, in S3, calculating the actual capacity of all video segments according to the initial capacity and the request probability may include: and calculating the product of the initial capacity of the video segment and the request probability for each video segment to obtain the actual capacity of the video segment.
S4: for each user device, dividing the user device into a plurality of virtual devices according to the storage resource budget of the user device and the length of the video segment;
s4 may include: and for each user equipment, calculating the number of virtual equipment included in each user equipment according to the storage resource budget of the user equipment and the length of the video segment, and dividing each user equipment according to the number to obtain a plurality of virtual equipment.
Calculating the number of virtual devices included in each user device according to the storage resource budget of the user device and the length of the video segment may include: calculating the ratio of the storage resource budget of the user device to the length of the video segment; this ratio is the number of virtual devices the user device comprises. The user device is then divided according to the length δ, and if it cannot be divided evenly, the final remaining part smaller than δ is also treated as a virtual device on its own. Each virtual device has the same geographic location as the original user device, and the sum of the storage resources of the virtual devices equals the available storage resources of the corresponding original user device. This embodiment only divides a real user device into a plurality of virtual devices; all other attributes remain consistent with the original real user device.
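The split rule above (ratio of storage budget to δ, with any remainder counted as one more virtual device) amounts to a ceiling division; a minimal sketch with illustrative names:

```python
import math

def num_virtual_devices(storage_budget, delta):
    """Number of virtual devices a user device is split into: one per delta
    of storage budget, plus one for any remainder smaller than delta."""
    return math.ceil(storage_budget / delta)
```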
S5: determining a video caching strategy of the prediction time period by taking the video segment as a device of a UCKM frame, taking the actual capacity of the video segment as the capacity of the device, taking the virtual device as the position of the UCKM frame and taking the minimized transmission delay as a target; the video caching strategy comprises the caching corresponding relation between all the video segments and the user equipment.
The UCKM framework of this embodiment is a k-median framework with uniform capacities and is commonly used to solve content-matching problems. It comprises a set of devices (facilities) and a set of locations at which a device may or may not be placed; each device has a capacity representing an upper bound on the number of times it can be requested, and each location can hold at most one device. The UCKM framework normalizes the problem into an integer linear program whose objective is to minimize the total delay Σ_i Σ_j d_ij·x_ij subject to the constraint d_i1·x_i1 + d_i2·x_i2 + ... + d_ij·x_ij + ... + d_iJ·x_iJ ≤ D_i for every device, where D_i is the delay constraint of the i-th device, i.e. the maximum delay the i-th device can accept; x_ij = 1 when device i is placed at location j, and d_ij is the corresponding delay; i = 1, 2, ..., I, with I the total number of devices, and j = 1, 2, ..., J, with J the total number of locations. Solving the objective function yields the optimal placement, i.e. the video caching strategy.
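For intuition, a toy stand-in for the integer program can simply enumerate placements; the sketch below is illustrative only (a real implementation would call an ILP solver, and the capacity constraint is simplified to one device per location):

```python
from itertools import product

def uckm_brute_force(delay, max_delay):
    """Place each device i (a video segment) at some location j (a
    virtual device), at most one device per location, requiring
    delay[i][j] <= max_delay[i], and minimising the total delay
    sum_i delay[i][assignment[i]].  Exponential time: toy sizes only."""
    n_dev, n_loc = len(delay), len(delay[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_loc), repeat=n_dev):
        if len(set(assign)) < n_dev:                  # a location is reused
            continue
        if any(delay[i][j] > max_delay[i] for i, j in enumerate(assign)):
            continue                                  # violates constraint D_i
        cost = sum(delay[i][j] for i, j in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# two segments, two virtual devices: segment 0 -> location 0, segment 1 -> location 1
assignment, total = uckm_brute_force([[1, 4], [3, 2]], [5, 5])
```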
In this embodiment each video segment corresponds to a facility (device) in the UCKM framework, and each virtual device corresponds to a facility location in the UCKM framework; virtual devices caching the same video segment cannot belong to the same user equipment. The actual capacity of a video segment represents the upper bound on the number of times the video segment can be requested by users, and corresponds to the capacity of a device in the UCKM framework. The video caching policy refers to the caching correspondence of which video segments are cached on which user equipment.
After the video caching strategy for the prediction time period is obtained, the video caching method of this embodiment further includes: caching the video segments on the corresponding user equipment according to the video caching strategy. After the video segments have been cached on the corresponding user equipment, the method further includes: updating the request delay of each video segment on the user equipment, adjusting the range of the context groups, returning to the step of dividing all videos in the target community into video segments of equal length, and predicting the video caching strategy of the next prediction time period.
Each context group g_w is provided with an expert exp_w for adjusting the contexts within the group, and ζ is defined as the adjustment step size of expert exp_w. During the update, the user equipment in the target community calculates the delay experienced by each video segment on the current user equipment according to the caching result, so as to update the cumulative losses of the contexts of the cached video segments; the expert exp_w then extends its interval with probability σ (0 ≤ σ ≤ 1). If the expert exp_w of context group g_w chooses to extend its interval range by ζ, then in each time slot t the new interval is [LB_w − ζ, UB_w + ζ], so that more contexts are included in the expert's context group g_w and the expert group has more opportunities to explore better contexts. In this embodiment, after the cache of time slot t the cumulative losses of the contexts are updated, the context grouping is adjusted, and the cumulative loss of each user equipment's context groups and the request probabilities of the representative video segments in the next prediction time period are recalculated according to the expert-adjusted grouping, yielding a feasible cache placement.
The scheme of this embodiment involves two algorithms, an offline algorithm and an online (OL) algorithm. The offline algorithm is an integer linear programming method that computes the optimal cache positions of the video segments with minimum delay as the target, yielding a video caching policy. The OL algorithm is built on top of the offline algorithm: after the request probability of a video segment is obtained, the product of the request probability and the initial capacity of the video segment is used as the new actual capacity of the video segment.
The offline algorithm is used to obtain cacheable positions that minimize the caching delay of all current video segments. In this process, a feature combination of a video segment, a user, and the historical access scores is first defined as a context θ. To predict the recommendation probability of a video segment, the video segments are first grouped by historical request frequency; each piece of context information has a recommendation probability, which facilitates the subsequent calculation of the request probability. The contexts are then grouped according to their cumulative losses, and the recommendation probability of the context with minimum cumulative loss in each group is taken as the representative probability of the group. Based on the multi-armed bandit algorithm, the user equipment dynamically selects a context group as the recommendation probability of the current user for that group of video segments, thus obtaining the request probability of the video segment.
After the caching strategy is obtained by the offline algorithm, the video segment request frequencies and the context information are updated, the grouping is adjusted, and the next round of predictive caching is performed.
When a user in the target community requests a video, the local device (i.e. the user equipment used by the user) is first checked for content that meets the requirement; if present, it is accessed directly. If not, a request is sent to nearby neighbor users, and if the request still cannot be satisfied, it is sent to the base station. The delays of these three paths increase in that order. By predicting the video caching strategy for the prediction time period, this embodiment reduces the delay of cached content for users, improving quality of service and user satisfaction.
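The three-tier lookup can be sketched as follows; the function and tier names are invented for illustration:

```python
def fetch(segment, local_cache, neighbor_caches):
    """Resolve a video request through the three tiers described above:
    local device, D2D neighbors, then the base station.  Returns the
    serving tier; delays increase in that order (local < D2D < base)."""
    if segment in local_cache:
        return "local"
    if any(segment in cache for cache in neighbor_caches):
        return "neighbor"          # served over the D2D link
    return "base_station"          # fall back to the cellular link

tier = fetch("s7", {"s1", "s2"}, [{"s7"}, {"s3"}])   # found on a neighbor
```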
In this embodiment, the video caching policy can be predicted in real time, and prediction is performed once in each time slot, specifically, as shown in fig. 2, the process may include:
Step 100: a controller deployed on the base station divides each video into a plurality of video segments of equal size;
Step 101: the controller divides all video segments in the target community into Q levels according to their historical access frequencies, wherein the q-th level (1 ≤ q ≤ Q) contains f_unit contents and the request frequency of the video segments at that level is defined as f_unit·q. A feature combination of a video segment, a user, and the historical viewing scores is defined as a context, and the recommendation probability of a context θ for a video segment at the q-th level is denoted p_θ,q(s_j,m); whether this recommendation probability is predicted accurately affects the delay experienced by the user and the resource utilization. The cumulative loss of the context θ of a video segment s_j,m at the q-th level is denoted L_θ,q(s_j,m); the cumulative loss is derived from the inverse of the average historical request latency experienced when requesting the video segment.
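The cumulative-loss bookkeeping described here reduces to a short helper; a hedged sketch (the function name is an assumption, not from the embodiment):

```python
def cumulative_loss(request_latencies):
    """Cumulative loss of a context for a video segment, defined in the
    text as the inverse of the average historical request latency
    experienced when requesting that segment."""
    return len(request_latencies) / sum(request_latencies)  # 1 / mean latency

loss = cumulative_loss([2.0, 4.0])   # mean latency 3.0, so loss 1/3
```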
Step 102: and judging whether the user equipment set U in the current community is empty or not.
If the user equipment set is not empty, step 103 is executed.
Step 103: the user equipment divides the context on the user equipment into w groups according to the accumulated loss of the context.
The cumulative loss of a context is defined as the inverse of the average historical request latency experienced by the video segment. The cumulative loss L_θ,q of the context θ of a video segment at the q-th level is calculated, and the contexts are divided into different groups g_w based on their cumulative losses.
Step 104: taking the recommendation probability of the context with the minimum accumulated loss in each context group as the representative probability of the context group.
Based on all context groups, for each context group g_w the context θ_min,w with the minimum cumulative loss is selected, and the recommendation probability of θ_min,w is taken as the representative probability of the context group.
Step 105: according to the representative probabilities of the context groups, the user equipment makes a random decision to obtain the predicted value of the current user equipment's request probability for the video segments.
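Steps 103–105 can be sketched together as follows; the names are illustrative, and since the embodiment does not fix the sampling distribution, a uniform random choice over the groups is assumed:

```python
import random

def predict_request_probability(context_groups, rng=random):
    """For each context group pick the context with minimum cumulative
    loss and use its recommendation probability as the group's
    representative; then the device, acting as a bandit player, picks
    one group at random and returns that representative probability as
    its predicted request probability."""
    representatives = {
        g: min(contexts)[1]          # (loss, prob) pairs: min loss wins
        for g, contexts in context_groups.items()
    }
    chosen_group = rng.choice(sorted(representatives))
    return representatives, representatives[chosen_group]

groups = {"g1": [(0.9, 0.4), (0.2, 0.7)], "g2": [(0.5, 0.1)]}
reps, predicted = predict_request_probability(groups)
```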
If there are no traversable user devices in the community, go to step 106.
Step 106: the controller collects and summarizes predicted values of the request probabilities of all the user devices for different video segment groups.
Step 107: the capacity of each video segment is set.
Step 108: each user equipment is divided into a plurality of equal capacity virtual devices.
Step 109: solving the matching problem between the video segments and the virtual devices with minimized delay as the target, thereby obtaining the caching scheme.
Step 110: caching the video segments according to the caching scheme, calculating the delay of each video segment on every user equipment u_i ∈ U, and updating the losses and probabilities of the contexts for the current time slot t.
Step 111: the expert expands the loss interval with probability σ, and the contexts are regrouped accordingly in the next prediction.
For each context group g_w an expert exp_w is set to adjust the contexts within the group, with ζ defined as the adjustment step size of expert exp_w. The expert exp_w extends the context interval with probability σ (0 ≤ σ ≤ 1). If the expert exp_w of context group g_w chooses to extend its interval range by ζ, then in each time slot t its new interval is [LB_w − ζ, UB_w + ζ], so that more contexts are included in the expert's context group g_w and the expert group has more opportunities to explore better contexts.
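A minimal sketch of this interval-widening rule (names are invented; passing σ = 1 forces the expansion in the example):

```python
import random

def expert_adjust_interval(lb_w, ub_w, zeta, sigma, rng=random):
    """With probability sigma (0 <= sigma <= 1) the expert of context
    group g_w widens the group's loss interval [LB_w, UB_w] by the step
    size zeta on both sides, letting more contexts fall into the group
    in the next prediction round; otherwise the interval is unchanged."""
    if rng.random() < sigma:
        return lb_w - zeta, ub_w + zeta
    return lb_w, ub_w

interval = expert_adjust_interval(2.0, 5.0, 0.5, 1.0)   # always widened
```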
In this embodiment, through targeted matching between video content and users, a comprehensive, performance-balanced matching over all video segment information, user equipment characteristics, and historical access information is performed based on the uniform-capacity k-median method, optimizing the content distribution caching problem and reducing the overall content transmission delay. Moreover, the online learning method using a multi-armed bandit in the D2D network performs content-preference predictive caching tailored to the variability of individual users, and introduces the concept of an expert to dynamically adjust the context grouping so that the prediction tracks changes in the access frequencies of the video segments. The method is efficient, fast, and highly scalable; it can satisfy variable user demands well and improves user satisfaction.
As shown in fig. 3, the online learning process of video caching based on context awareness in a D2D edge network provided by this embodiment is as follows:
Step 1: before predicting the request probabilities of the video segments online, divide the videos into video segments of equal size, and divide the user equipment into a plurality of virtual devices. The community edge base station divides all video segments in the community into different levels in advance according to their historical play counts, each level corresponding to a request frequency; the context information is formed by combining the user equipment, the video features, and the historical access information within the community.
Step 2: group the contexts on each user equipment according to their cumulative losses, take the recommendation probability of the video segment represented by the minimum-cumulative-loss context in each group as the representative probability of the context group, regard the user equipment as one arm based on the multi-armed bandit, randomly select a context group, and predict the request probabilities of the video segments online;
Step 3: take the product of the obtained request probability of a video segment and the capacity obtained in the offline algorithm as the upper bound on the number of times the video segment may be requested, and, with minimized transmission delay as the target, make the cache placement decision by means of the UCKM framework.
Step 4: cache the video segments at the obtained cache positions, update the request frequencies of the video segments, adjust the video groups, obtain the transmission delays so as to update the cumulative losses of the contexts, and let the expert of each context group dynamically adjust the group's range to adjust the contexts within the group, entering the next round of prediction.
Example 2:
This embodiment provides a video caching system based on context awareness. As shown in fig. 4, the video caching system includes:
the video dividing module M1 is used for dividing all videos in the target community into video segments with equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment is in direct communication through a D2D network; the all videos comprise videos cached on the base station and each user equipment;
an actual capacity calculation module M2, configured to determine a minimum value of energy budgets, a maximum value of transmission power, and a minimum value of storage resource budgets of all the user equipments, so as to obtain a minimum energy budget, a maximum transmission power, and a minimum storage resource budget; determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of a user device which plays the video segment historically, a video type of the video segment, historical playing times and scores of the user device on the video segment;
a user equipment dividing module M3, configured to, for each user equipment, divide the user equipment into a plurality of virtual devices according to a storage resource budget of the user equipment and a length of the video segment;
a video caching strategy solving module M4, configured to determine a video caching strategy for the prediction time period with the video segment as a device of the UCKM frame, with the actual capacity of the video segment as the capacity of the device, with the virtual device as the position of the UCKM frame, and with a minimum transmission delay as a target; the video caching strategy comprises the caching corresponding relation between all the video segments and the user equipment.
The same and similar parts in the various embodiments of the present specification may be referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the foregoing, the description is not to be taken in a limiting sense.

Claims (10)

1. A video caching method based on context awareness, the video caching method comprising:
dividing all videos in the target community into video segments with equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment is in direct communication through a D2D network; the all videos comprise videos cached on the base station and each user equipment;
determining the minimum value of the energy budgets, the maximum value of the transmission power and the minimum value of the storage resource budget of all the user equipment to obtain the minimum energy budget, the maximum transmission power and the minimum storage resource budget;
determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of user equipment which plays the video segment historically, a video type of the video segment, historical playing times and scores of the user equipment on the video segment;
for each user device, dividing the user device into a plurality of virtual devices according to the storage resource budget of the user device and the length of the video segment;
determining a video caching strategy of the prediction time period by taking the video segment as a device of a UCKM frame, taking the actual capacity of the video segment as the capacity of the device, taking the virtual device as the position of the UCKM frame and taking the minimized transmission delay as a target; the video caching strategy comprises the caching corresponding relation between all the video segments and the user equipment.
2. The method according to claim 1, wherein the determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments comprises: determining the initial capacity of all the video segments by using an initial capacity calculation formula according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments;
the initial capacity calculation formula includes:
Figure FDA0003737210560000011
wherein C_Initial is the initial capacity; EB_min is the minimum energy budget; δ is the length of the video segment; P_max is the maximum transmission power; and B_min is the minimum storage resource budget.
3. The method according to claim 1, wherein said determining the probability of all said video segments being requested during a prediction period using a multi-armed bandit algorithm based on said context comprises:
dividing all the video segments into a plurality of levels according to the historical playing times of the video segments, wherein the historical playing times of the video segments belonging to the same level are the same;
for each video segment, calculating a recommendation probability for the video segment based on the cumulative loss of context for the video segment; the accumulated loss is the inverse of the average historical request latency for requesting the video segment;
for each of the user devices, determining a range of a plurality of context groups, and dividing the contexts of all the first video segments into the plurality of context groups according to the accumulated loss of the contexts of all the first video segments cached on the user device, wherein each context belongs to one context group; selecting the recommended probability of the first video segment corresponding to the context with the minimum accumulated loss from each context group as the representative probability of the context group, using the user equipment as one arm in a multi-armed bandit algorithm, and randomly selecting one context group based on the representative probabilities of all the context groups to obtain a target context group; determining the recommendation probability of a second video segment corresponding to the context included in the target context group as the request probability of the second video segment, and taking the request probability of the second video segment as the request probability of a third video segment which belongs to the same level as the second video segment;
summarizing the request probabilities of the second video segments and the third video segments corresponding to all the user equipment, and determining that the request probabilities of the remaining video segments are the average value of the request probabilities of all the second video segments and the third video segments to obtain the request probabilities of all the video segments.
4. The video caching method of claim 3, wherein said calculating the recommendation probability for the video segment based on the cumulative loss of the context of the video segment comprises: calculating the recommendation probability of the video segment by using a recommendation probability calculation formula according to the cumulative loss of the context of the video segment;
the recommendation probability calculation formula includes:
Figure FDA0003737210560000021
wherein p_θ,q(s_j,m) is the recommendation probability of context θ for the video segment s_j,m at the q-th level; α_t is the learning rate; L_θ,q(s_j,m) is the cumulative loss of context θ for the video segment s_j,m at the q-th level; and L_θ,q′(s_j,m) is the cumulative loss of context θ for the video segment s_j,m at the q′-th level.
5. The method according to claim 1, wherein said calculating the actual volume of all the video segments according to the initial volume and the request probability specifically comprises:
and calculating the product of the initial capacity and the request probability for each video segment to obtain the actual capacity of the video segment.
6. The video caching method according to claim 1, wherein said dividing the user equipment into a plurality of virtual devices according to a storage resource budget of the user equipment and a length of the video segment specifically comprises:
calculating the number of virtual devices included in the user equipment according to the storage resource budget of the user equipment and the length of the video segment;
and dividing the user equipment according to the number to obtain a plurality of virtual equipment.
7. The video caching method according to claim 6, wherein said calculating, according to a storage resource budget of the user equipment and a length of the video segment, a number of virtual devices included in the user equipment specifically comprises:
and calculating the ratio of the storage resource budget of the user equipment to the length of the video segment, wherein the ratio is the number of the virtual equipment included in the user equipment.
8. The video buffering method according to claim 1, wherein after obtaining the video buffering policy for the predicted time period, the video buffering method further comprises: and caching the video segment on the corresponding user equipment according to the video caching strategy.
9. The video caching method according to claim 8, wherein after caching the video segment on the corresponding user device, the video caching method further comprises: and updating the request time delay of each video segment on the user equipment, adjusting the range of context grouping, returning to the step of dividing all videos in the target community into video segments with equal length, and predicting the video cache strategy of the next prediction time period.
10. A video caching system based on context awareness, the video caching system comprising:
the video dividing module is used for dividing all videos in the target community into video segments with equal length; the target community comprises a base station and a plurality of user equipment, and the user equipment is in direct communication through a D2D network; the all videos comprise videos cached on the base station and each user equipment;
the actual capacity calculation module is used for determining the minimum value of the energy budgets, the maximum value of the transmission power and the minimum value of the storage resource budget of all the user equipment to obtain the minimum energy budget, the maximum transmission power and the minimum storage resource budget; determining the initial capacity of all the video segments according to the minimum energy budget, the maximum transmission power, the minimum storage resource budget and the length of the video segments; extracting the context of each video segment, and determining the request probability of all the video segments in a prediction time period by using a multi-armed bandit algorithm based on the context; calculating the actual capacity of all the video segments according to the initial capacity and the request probability; the context comprises user information of user equipment which plays the video segment historically, a video type of the video segment, historical playing times and scores of the user equipment on the video segment;
a user equipment dividing module, configured to divide, for each user equipment, the user equipment into multiple virtual devices according to a storage resource budget of the user equipment and a length of the video segment;
a video caching strategy solving module, configured to determine a video caching strategy for the prediction time period with the video segment as a device of the UCKM framework, with the actual capacity of the video segment as the capacity of the device, with the virtual device as the position of the UCKM framework, and with a minimum transmission delay as a target; the video caching strategy comprises the caching corresponding relation between all the video segments and the user equipment.
CN202210853414.3A 2022-07-08 2022-07-08 Video caching method and system based on context awareness Pending CN115226075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210853414.3A CN115226075A (en) 2022-07-08 2022-07-08 Video caching method and system based on context awareness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210853414.3A CN115226075A (en) 2022-07-08 2022-07-08 Video caching method and system based on context awareness

Publications (1)

Publication Number Publication Date
CN115226075A true CN115226075A (en) 2022-10-21

Family

ID=83612668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210853414.3A Pending CN115226075A (en) 2022-07-08 2022-07-08 Video caching method and system based on context awareness

Country Status (1)

Country Link
CN (1) CN115226075A (en)

Similar Documents

Publication Publication Date Title
CN111414252B (en) Task unloading method based on deep reinforcement learning
CN111132077B (en) Multi-access edge computing task unloading method based on D2D in Internet of vehicles environment
Sun et al. Flocking-based live streaming of 360-degree video
US8832709B2 (en) Network optimization
WO2018010119A1 (en) Video service resource allocation method and device
CN110809167B (en) Video playing method and device, electronic equipment and storage medium
CN112637908B (en) Fine-grained layered edge caching method based on content popularity
Guo et al. Deep-Q-network-based multimedia multi-service QoS optimization for mobile edge computing systems
CN112714315B (en) Layered buffering method and system based on panoramic video
CN112752117B (en) Video caching method, device, equipment and storage medium
Kim et al. Traffic management in the mobile edge cloud to improve the quality of experience of mobile video
Sun et al. Optimal strategies for live video streaming in the low-latency regime
KR101966588B1 (en) Method and apparatus for receiving video contents
CN113207015B (en) Task scheduling strategy generation method and device, storage medium and computer equipment
CN112887314B (en) Time delay perception cloud and mist cooperative video distribution method
Kim et al. Multipath-based HTTP adaptive streaming scheme for the 5G network
CN114040257A (en) Self-adaptive video stream transmission playing method, device, equipment and storage medium
Yu et al. Efficient QoS provisioning for adaptive multimedia in mobile communication networks by reinforcement learning
CN115226075A (en) Video caching method and system based on context awareness
Lu et al. Optimizing stored video delivery for wireless networks: The value of knowing the future
CN114007113A (en) Video code rate self-adaptive adjusting method and device
Li et al. Adaptive mobile VR content delivery for industrial 5.0
ur Rahman et al. QoE optimization for HTTP adaptive streaming: Performance evaluation of MEC-assisted and client-based methods
Mehrabi et al. Cache-aware QoE-traffic optimization in mobile edge assisted adaptive video streaming
CN111935781A (en) Control method of data sharing network, network system and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination