WO2023125970A1 - Code rate allocation method and apparatus, storage method and apparatus, device, and storage medium - Google Patents


Info

Publication number
WO2023125970A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video segment
code rate
storage
value
Prior art date
Application number
PCT/CN2022/144132
Other languages
French (fr)
Chinese (zh)
Inventor
俞义方
华新海
曹洋
李瑾熙
朱佳欣
江涛
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2023125970A1 publication Critical patent/WO2023125970A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched

Definitions

  • The present application relates to the field of communications, for example, to a code rate allocation method and apparatus, a storage method and apparatus, a device, and a storage medium.
  • High-definition video is increasingly used in industries such as surveillance and security, remote conferencing, e-commerce live streaming, and VR games/video, and is favored by both content providers and consumers.
  • However, transmitting high-definition video consumes a large amount of bandwidth and storage resources.
  • In the related art, high-definition video is stored on a server, which causes a large loss during transmission. Edge nodes closer to users can therefore be used to cache high-definition video, so that users obtain the cached video directly from an edge node instead of from the server, which reduces the transmission loss.
  • The main purpose of the embodiments of the present application is to provide a code rate allocation method, a storage method, an apparatus, a device, and a storage medium that reduce transmission loss.
  • An embodiment of the present application provides a code rate allocation method, including: acquiring target video data, where the target video data includes the viewing probabilities of all video segments in a target transmission video; and allocating code rates to the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets a target value. The total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probabilities.
  • An embodiment of the present application also proposes a storage method, including: obtaining the viewing probabilities and code rate allocation results of all video segments in the target transmission video through the code rate allocation method; predicting the expected effective quality and the expected transmission loss according to the code rate allocation results and the viewing probabilities; performing storage optimization processing according to the expected effective quality, the expected transmission loss, and the maximum capacity of the edge node to obtain storage decision information, where the storage decision information includes a third storage decision value of each video segment and the third storage decision value indicates whether the video segment needs to be stored in an edge node; and storing the video segments according to the storage decision information.
  • An embodiment of the present application also proposes a code rate allocation apparatus, including: an acquisition module, configured to acquire target video data, where the target video data includes the viewing probabilities of all video segments in the target transmission video; and a processing module, configured to allocate code rates to the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets the target value, where the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probabilities.
  • An embodiment of the present application also proposes a storage apparatus, including: a determination module, configured to determine the viewing probabilities and code rate allocation results of all video segments in the target transmission video; a prediction module, configured to predict the expected effective quality and the expected transmission loss according to the code rate allocation results and the viewing probabilities; an optimization module, configured to perform storage optimization processing according to the expected effective quality, the expected transmission loss, and the maximum capacity of the edge node to obtain storage decision information, where the storage decision information includes a third storage decision value of each video segment and the third storage decision value indicates whether the video segment needs to be stored in an edge node; and a storage module, configured to store the video segments according to the storage decision information.
  • Determining the viewing probabilities and code rate allocation results of all video segments in the target transmission video includes: acquiring target video data, where the target video data includes the viewing probabilities of all video segments in the target transmission video; and allocating code rates to the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets the target value, where the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probabilities.
  • An embodiment of the present application also provides an electronic device, which includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the code rate allocation method or the storage method described above.
  • An embodiment of the present application also provides a computer-readable storage medium, which stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the code rate allocation method or the storage method described above.
  • Fig. 1 is a schematic flow chart of the steps of the code rate allocation method of the present application
  • Fig. 2 is a schematic flow chart of the steps of the storage method in a specific embodiment of the present application
  • FIG. 3 is a schematic flow diagram of a code rate allocation method and a storage method in one application scenario of a specific embodiment of the present application;
  • FIG. 4 is a schematic flowchart of a code rate allocation method and a storage method in another application scenario according to a specific embodiment of the present application;
  • FIG. 5 is a schematic diagram of a code rate allocation device according to a specific embodiment of the present application.
  • FIG. 6 is a schematic diagram of a storage device according to a specific embodiment of the present application.
  • FIG. 7 is a schematic diagram of an electronic device according to a specific embodiment of the present application.
  • the embodiment of the present application provides a code rate allocation method, at least including but not limited to steps S100-S200:
  • the target video data includes viewing probabilities of all video segments in the target transmission video.
  • The target video data is data of the video that the user wants to watch, that is, data related to the target transmission video, and can be obtained when the client generates a video request based on the user's input.
  • the cloud or server side can divide the target transmission video into blocks to generate multiple video clips in different spatial positions.
  • For example, the target transmission video with a duration of T is encoded into K quality levels in the cloud, and the video of each quality level is temporally divided into segments of Δt seconds (including but not limited to 2 seconds); each Δt-second segment is then spatially divided into M × N video segments, where M and N are the numbers of horizontal and vertical video segments respectively.
  • The viewing probability of each video segment can be determined from the trajectory of the user's attention point, and the viewing probabilities of the video segments located in the user's RoI sum to 1, i.e., they satisfy $\sum_{(m,n)\in\mathrm{RoI}} p(m,n)=1$, where (m,n) is the index position of the video segment and p(m,n) is the viewing probability (the probability of lying within the user's RoI) of video segment (m,n); the higher the viewing probability, the greater the user's attention.
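  • As a small illustration of the tiling and viewing-probability description above, the following sketch normalizes per-tile attention weights of an M × N grid so that the probabilities inside the user's RoI sum to 1 (the helper name viewing_probabilities is illustrative, not from the source).

```python
import numpy as np

def viewing_probabilities(attention_weights: np.ndarray) -> np.ndarray:
    """Normalize raw per-tile attention weights (M x N, zero outside the RoI)
    so that the probabilities of tiles inside the user's RoI sum to 1."""
    total = attention_weights.sum()
    if total == 0:
        # no RoI information: fall back to a uniform distribution
        return np.full_like(attention_weights, 1.0 / attention_weights.size, dtype=float)
    return attention_weights / total

# Example: a 4x4 tiling where the user's gaze covers four central tiles.
M, N = 4, 4
weights = np.zeros((M, N))
weights[1:3, 1:3] = [[1.0, 2.0], [2.0, 4.0]]   # higher weight = more attention
p = viewing_probabilities(weights)
assert abs(p.sum() - 1.0) < 1e-9               # sum of viewing probabilities is 1
```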
  • the target transmission video is immersive video, such as VR (Virtual Reality, virtual reality) video and AR (Augmented Reality, augmented reality) video
  • the user can see 360-degree video content by wearing a head-mounted display device
  • the RoI area is the user's actual field of view (Field of View, FoV) area in the head-mounted display
  • the viewing probability can be determined through the eye movement trajectory
  • When the target transmission video is another type of video, the viewing probability can be obtained by acquiring the real-time operation-area frequency of the mouse, or by predicting the importance of the positions the user is likely to attend to through video image processing.
  • For example, the salient regions of the video can be obtained from a saliency map, and the importance of each viewing area can be judged from the ratio of gray values of different areas to determine the viewing probability.
  • Code rate allocation refers to determining the target transmission code rate of each video segment, that is, the code rate at which the edge node transmits each video segment to the client; the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments.
  • The total utility of the target transmission video is characterized by a total utility function, in which Utility_t is the total utility at time t, that is, the utility of the code rate allocation system at time t; QoP_t(m,n) is the user viewing quality of video segment (m,n) at time t; TC_t(m,n) is the transmission loss of video segment (m,n) at time t; and a weight coefficient is applied to the transmission loss.
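  • A plausible reconstruction of the total utility function of formula (1), based only on the definitions above; the transmission-loss weight is written here as η since its original symbol was not preserved:

```latex
\mathrm{Utility}_t = \sum_{m=1}^{M}\sum_{n=1}^{N}\Big[\,QoP_t(m,n) - \eta\, TC_t(m,n)\,\Big]
```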
  • When a video segment is not stored in the edge node, its transmission loss includes the communication loss of the edge node obtaining the video segment from the cloud and the communication loss of the edge node transmitting the video segment to the client; when a video segment is stored in the edge node, its transmission loss includes the communication loss of the edge node transmitting the video segment to the client, and may also include the transcoding loss of the edge node.
  • Making the total utility of the target transmission video satisfy the target value includes, but is not limited to, making the total utility function reach its maximum value (max Utility_t); alternatively, a total utility target threshold can be set first, and the target value is considered to be met when the total utility is greater than or equal to that threshold.
  • Optimizing the total utility can improve the user viewing quality of the target transmission video and reduce the communication loss of obtaining video segments from the cloud, so that the viewing quality of the target transmission video is ensured under given transmission conditions.
  • B_max is the maximum bandwidth for transmission between the edge node and the client; the code rate at time t is, in one example, the transmission code rate allocated to video segment (m,n) at time t for the edge node to transmit video segment (m,n) to the client; c_{t-1}(m,n) is the storage decision value at time t-1.
  • When the storage decision value is 1, video segment (m,n) is stored in the edge node; when the storage decision value is 0, video segment (m,n) is not stored in the edge node. The code rate at time t-1 is, in one example, the storage code rate at which video segment (m,n) was stored by the edge node at time t-1.
  • The bandwidth constraint stipulates that the sum of all allocated transmission code rates cannot exceed the maximum bandwidth B_max; for video segments already stored in the edge node (i.e., those whose storage decision value is 1), the code rate constraint stipulates that the code rate allocated to a stored video segment at time t cannot be higher than its storage code rate at time t-1. In this way, the client can obtain such video segments directly from the edge node, reducing the additional communication loss caused by obtaining them from the cloud.
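  • The bandwidth and code rate constraints just described can be written as follows; R_t(m,n) for the transmission code rate allocated at time t and R^{store}_{t-1}(m,n) for the code rate stored at time t-1 are notations introduced here for illustration only:

```latex
\sum_{m=1}^{M}\sum_{n=1}^{N} R_t(m,n) \le B_{\max},
\qquad
R_t(m,n) \le R^{\mathrm{store}}_{t-1}(m,n) \ \ \text{whenever } c_{t-1}(m,n)=1 .
```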
  • q_K is the K-th quality level, which can be measured with the ffprobe tool (a tool in FFmpeg for viewing file format information). The quality level is closely related to the code rate of the video segment and to the content complexity o_t(m,n); F( ) represents this mapping relationship.
  • Solving the integer programming problem exactly, i.e., determining the target quality level of each video segment that maximizes the total utility, has a computational complexity of O(K^{M·N}).
  • Because the execution time would be too long, the present application uses a greedy algorithm to obtain an approximately optimal solution with reduced computational complexity, so that the total utility satisfies the target value; the target quality level of each video segment and the corresponding target transmission code rate are determined accordingly to realize code rate allocation.
  • the code rate is allocated to the video segment according to the viewing probability, including steps B201 and B202, and B203 and/or B204 :
  • the old utility value may be the current value of the total utility of the target video transmission determined according to the quality level and bit rate, which is denoted as oldU t .
  • the quality level and the code rate can be the quality level and the code rate currently assigned to the video segment, or both can be set values, and in the embodiment of the present application, the quality level is used as the set value as an example. description, without specific limitation.
  • For example, the quality level of each video segment (m,n) is set to the lowest quality level 1, and these values form the quality level matrix of all video segments (m,n).
  • The corresponding code rate can then be determined from the quality level, or an initial code rate can be set, so as to obtain the code rate matrix of all video segments (m,n).
  • oldU_t can then be calculated by substituting these values into formula (1) or (2).
  • B203: when the sum of the code rates is less than or equal to the bandwidth capacity, traverse the video segments to increase the quality level, update the quality level and the code rate of the video segment according to the increase processing result, the new utility value, and the old utility value, and return to the step of determining whether the sum of the code rates is less than or equal to the bandwidth capacity.
  • the quality level is incremented by traversing the video segments.
  • For example, in the video segment matrix composed of M × N video segments, the N-th video segment in the M-th column is used as the first video segment to start the traversal, and the traversal proceeds until the first video segment in the first column has been visited.
  • The quality level of a video segment is increased; for example, increasing the quality level of the video segment by one step yields the increase processing result, which is equivalent to the increased quality level. The increment can be adjusted as required; the present application uses an increment of 1 as an example, which does not constitute a specific limitation.
  • The quality level and code rate of the video segment are then updated according to the increase processing result, the new utility value, and the old utility value, and the process returns to step B202 until the bandwidth constraint is no longer satisfied, so that the target transmission code rate of each video segment is obtained.
  • B204: when the sum of the code rates is greater than the bandwidth capacity, traverse the video segments to reduce the quality level, update the quality level and the code rate of the video segment according to the reduction processing result, the new utility value, and the old utility value, and return to the step of judging whether the sum of the code rates is less than or equal to the bandwidth capacity, until the sum of the code rates is less than or equal to the bandwidth capacity, so as to obtain the target transmission code rate of each video segment.
  • That is, when the sum of the code rates exceeds the bandwidth capacity, the video segments are traversed to reduce the quality level.
  • For example, in the video segment matrix composed of M × N video segments, the N-th video segment in the M-th column is used as the first video segment to start the traversal, and the traversal proceeds until the first video segment in the first column has been visited.
  • The quality level of a video segment is reduced; for example, reducing the quality level of the video segment by one step yields the reduction processing result, which is equivalent to the reduced quality level. Similarly, the decrement can be adjusted as needed; the present application uses a decrement of 1 as an example, which does not constitute a specific limitation.
  • The quality level and code rate of the video segment are then updated according to the reduction processing result, the new utility value, and the old utility value, and the process returns to step B202 until the sum of the code rates is less than or equal to the bandwidth capacity, so that the target transmission code rate of each video segment is obtained.
  • In step B202 or B203, when the quality level and code rate of one or more video segments are updated, an updated quality level matrix of the M × N video segments is obtained, so that the target transmission code rate of each video segment can be determined according to the updated quality level matrix.
  • The target transmission code rate refers to the finally determined transmission code rate of each video segment; the finally obtained values form the target transmission code rate matrix.
  • step B203 update the quality level and code rate of the video segment according to the increase processing result, new utility value and old utility value, including step B211 or B212:
  • The quality level threshold is quality level K.
  • The quality level matrix obtained after the increase processing is used to compute the updated new utility value, that is, the updated newU_t.
  • The video segment at the position with the largest impact factor is used as the first updated video segment, and the quality level of the first updated video segment is updated to the increase processing result corresponding to the first updated video segment.
  • The code rate of the first updated video segment is then updated according to the increase processing result corresponding to the first updated video segment. Equivalently, a new quality level matrix is obtained according to the increase processing result corresponding to the first updated video segment, from which the updated code rate matrix is determined, and the process then returns to step B202.
  • F() stands for mapping.
  • When the current video segment is the video segment in the M-th row and N-th column, if its increase processing result exceeds the quality level threshold, the quality level and code rate of the video segment are kept unchanged, that is, its current quality level is used as the updated quality level and its current code rate as the updated code rate, and the traversal then moves on.
  • For the next video segment after the one in row M, column N, the relationship between its increase processing result and the quality level threshold is determined in the same way.
  • step B211 the new utility value is updated according to the increase processing result, including steps B221-B226:
  • the calculation steps of the transmission loss at the current moment include B2211-B2212, and B2212 includes step A1 or A2:
  • the current time is recorded as time t
  • the previous time is recorded as time t-1.
  • the previous time may be time t-2 or other time, which is not specifically limited.
  • the storage state of the video segment and the edge node at the previous moment includes whether it is stored in the edge node or not stored in the edge node.
  • the first storage decision value refers to the storage decision value corresponding to the storage state of the video segment at the previous moment.
  • When the first storage decision value indicates that the video segment is stored in the edge node, the storage code rate of the video segment is obtained and the code rate difference between the storage code rate and the code rate of the video segment is calculated; the first loss is then determined according to the first storage decision value, the code rate difference, and the first loss coefficient.
  • the second loss is determined according to the second loss coefficient and the code rate of the video segment
  • the transmission loss at the current moment is determined according to the sum of the first loss and the second loss.
  • the third loss is determined according to the third loss coefficient and the code rate of the video segment
  • the second loss is determined according to the second loss coefficient and the code rate of the video segment
  • c 1 is the first loss coefficient
  • c 2 is the second loss coefficient
  • c 3 is the third loss coefficient.
  • c t-1 (m,n) is the first storage decision value of the video segment at time t-1
  • The remaining quantities are the code rate of the video segment at time t and the code rate of the video segment at time t-1.
  • the sizes of the first loss coefficient, the second loss coefficient and the third loss coefficient can be adjusted as required.
  • the transmission loss of the target transmission video can be obtained by calculating the sum of the transmission losses of each video segment. In an example, it can be known from formula (6) that the transmission loss is divided into three parts, taking one of the video segments (m,n) as an example:
  • the first loss represents the transcoding loss of the edge node
  • the second loss represents the communication loss of the edge node transmitting video clips to the client
  • the third loss represents the communication loss for edge nodes to obtain video clips from the cloud.
  • When the video segment was stored in the edge node at the previous moment, its storage code rate is known; the code rate difference between the storage code rate and the code rate of the video segment is calculated, from which the above-mentioned first loss is determined. The second loss is determined according to the second loss coefficient c_2 and the code rate of the video segment, and the transmission loss at the current moment is determined as the sum of the first loss and the second loss.
  • When the video segment was not stored in the edge node at the previous moment, the third loss is determined according to the third loss coefficient c_3 and the code rate of the video segment.
  • The second loss is determined according to the second loss coefficient c_2 and the code rate of the video segment, and the transmission loss at the current moment is determined as the sum of the third loss and the second loss.
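  • A minimal sketch of the three-part transmission loss described above (formula (6)), assuming the first loss is proportional to the code rate difference for stored segments; the function and variable names are illustrative, not from the source:

```python
def transmission_loss(rate_t: float, stored: bool, stored_rate: float,
                      c1: float, c2: float, c3: float) -> float:
    """Transmission loss TC_t(m,n) of one video segment at time t.

    stored      -- first storage decision value c_{t-1}(m,n) == 1
    stored_rate -- code rate at which the segment was stored at time t-1
    c1, c2, c3  -- transcoding, edge->client and cloud->edge loss coefficients
    """
    if stored:
        first_loss = c1 * (stored_rate - rate_t)   # transcoding loss at the edge node
        second_loss = c2 * rate_t                  # edge node -> client communication loss
        return first_loss + second_loss
    third_loss = c3 * rate_t                       # cloud -> edge node communication loss
    second_loss = c2 * rate_t
    return third_loss + second_loss
```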
  • calculating the non-freshness at the current moment in step B221 includes steps B2213-B2215, and B2215 includes steps B1 or B2:
  • The non-freshness uf_{t-1}(m,n) at the previous moment is calculated by the non-freshness formula; if the video segment was not stored in the edge node at the previous moment, the non-freshness uf_{t-1}(m,n) is 0.
  • the non-freshness can be other values, such as a smaller value close to 0.
  • the second storage decision value is determined.
  • When the second storage decision value is 1, it indicates that the video segment is stored in the edge node; when the second storage decision value is 0, it indicates that the video segment is not stored in the edge node.
  • the second storage decision value refers to a storage decision value corresponding to the storage state of the video segment at the current moment.
  • uf_t(m,n) is the non-freshness of video segment (m,n) at the current moment, that is, at time t. When the second storage decision value is 0, the non-freshness is 0; when c_t(m,n) is 1, it is calculated by the above formula (7). p_t(m,n) is the viewing probability of video segment (m,n), a preset growth factor controls how quickly non-freshness accumulates, and uf_{t-1}(m,n) is the non-freshness of the video segment at the previous moment.
  • The non-freshness of each video segment can be calculated by formula (7); in one example, uf_0(m,n) is 0, and the non-freshness at each subsequent moment is obtained by iterating formula (7). It should be noted that the non-freshness reflects how outdated a video segment is: the freshness of video segments stored in edge nodes decreases over time, and outdated video segments weaken the quality of the user's viewing experience.
  • uf_t(m,n) is related to the viewing probability p_t(m,n) of the video segment and to the storage state of the video segment at the edge node. Users are sensitive to the quality of the picture content within the RoI, which needs to be updated constantly to present the freshest content to the user; therefore, the larger p_t(m,n) is, the faster the freshness of the video segment is lost.
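  • One plausible form of the non-freshness recurrence of formula (7) that is consistent with the description above (zero when the segment is not kept in the edge node, growing faster for larger viewing probability); the exact expression and the symbol ε for the growth factor are assumptions, since the original formula was not preserved:

```latex
uf_t(m,n) = c_t(m,n)\,\big(\,uf_{t-1}(m,n) + \varepsilon\, p_t(m,n)\,\big), \qquad uf_0(m,n) = 0 .
```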
  • B222 Determine the effective quality according to the non-freshness at the current moment, the viewing probability, and the quality level at the current moment.
  • EQ t (m,n) is the effective quality of the video segment (m,n) at time t
  • A coefficient serves as a factor of the effective quality. It should be noted that when the effective quality is calculated in the sub-steps of step B221, the quality level at the current moment is the quality level from step B201, or it can also be the result of the increase processing in step B203. The effective quality of each video segment can be calculated by the above formula (8).
  • The purpose of defining the effective quality is to assign a higher quality level to the video segments in the user's attention area while keeping their non-freshness small: if a video segment has a high quality level but its content is outdated, its effective quality becomes poor, and a high quality level needs to be assigned to the video segments in the user's region of interest so that the user can clearly see the content of those segments.
  • non-freshness is used as a non-freshness factor to calculate effective quality
  • effective quality is used to determine user viewing quality, that is, user viewing quality is determined according to non-freshness factor.
  • multimedia content such as popular videos is usually cached in the MEC server at the base station in advance to achieve the purpose of enhancing the viewing experience of users.
  • The non-freshness factor introduced in the embodiment of this application describes how outdated the content is, both for video segments already stored in the edge node and for video segments not present in the edge node.
  • Applying the non-freshness factor to the code rate allocation method and to the storage method (storage decision) can improve the quality of the video watched by users and reduce the transmission loss.
  • TQ_t(m,n) is the temporal quality loss of video segment (m,n) at time t.
  • Q_t(m,n) is the subjective quality (i.e., the first subjective quality) of video segment (m,n) at time t (the current moment).
  • Q_{t-1}(m,n) is the subjective quality (i.e., the second subjective quality) of video segment (m,n) at time t-1 (the previous moment).
  • The first subjective quality can be calculated by formula (9); the second subjective quality can likewise be calculated by formula (9) from the quality level of the video segment at the previous moment and the viewing probability at the previous moment (approximately, the viewing probability at the current moment can be used); combining them with formula (10) then yields the temporal quality loss of the video segment.
  • the subjective quality representation of a video clip depends on the user's sensitivity to the video clip.
  • The viewing probability factor of the video segment is normalized to the interval from 0 to 1 using an exponential function with base e.
  • Q t (m,n) is inversely proportional to p t (m,n).
  • B224 Calculate the subjective quality mean value according to the first subjective quality and the total number of video segments, and determine the spatial quality loss according to the difference between the first subjective quality and the subjective quality mean value.
  • SQ t (m,n) is the spatial quality loss of the video segment (m,n) at time t
  • M × N is the total number of video segments
  • Q t (m,n) at time t is the above-mentioned first subjective quality.
  • the calculation formula of user viewing quality QoP t (m, n) in formula (1) and formula (2) is:
  • The preset weights corresponding to the effective quality, the temporal quality loss, and the spatial quality loss may be the same or different. In the embodiment of the present application, they are referred to as the preset first weight, the preset second weight, and the preset third weight, respectively.
  • the viewing quality of the user corresponding to each video segment can be calculated respectively by formula (12).
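  • A small sketch of how the user viewing quality QoP of formula (12) can be assembled from the effective quality, the temporal quality loss, and the spatial quality loss defined above. The weighted combination (losses subtracted with weights phi1, phi2, phi3) and the use of absolute differences are assumptions, since formulas (10)–(12) were not preserved in the text:

```python
import numpy as np

def user_viewing_quality(EQ: np.ndarray, Q: np.ndarray, Q_prev: np.ndarray,
                         phi1: float, phi2: float, phi3: float) -> np.ndarray:
    """Per-segment viewing quality QoP_t(m,n), as M x N arrays.

    EQ     -- effective quality EQ_t(m,n), formula (8)
    Q      -- first subjective quality Q_t(m,n), formula (9)
    Q_prev -- second subjective quality Q_{t-1}(m,n)
    """
    TQ = np.abs(Q - Q_prev)    # temporal quality loss, formula (10) (assumed absolute difference)
    SQ = np.abs(Q - Q.mean())  # spatial quality loss, formula (11): deviation from the mean subjective quality
    return phi1 * EQ - phi2 * TQ - phi3 * SQ
```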
  • The updated new utility value, that is, the updated newU_t, is then calculated with the increased quality level (an increment of 1 is used as an example).
  • For each video segment, the per-segment difference is determined by the user viewing quality of that video segment, and the updated new utility value can be obtained from the sum of these differences.
  • step B204 according to the reduction processing result, the new utility value and the old utility value, the quality level and the code rate of the video segment are updated, including step B231 or B232:
  • the preset threshold can be set as required.
  • the preset threshold is 1, which is the lowest quality level, as an example.
  • The new utility value newU_t from step B201 is updated; according to the second difference between the updated new utility value and the old utility value, the second impact factor diff_t'(m,n) of each video segment is determined.
  • The video segment corresponding to the maximum value of the second impact factor is used as the second updated video segment, and the quality level of the second updated video segment is updated to the reduction processing result corresponding to the second updated video segment.
  • The code rate of the second updated video segment is then updated according to the reduction processing result corresponding to the second updated video segment.
  • The determination of the second impact factor diff_t'(m,n) is similar to that of the first impact factor diff_t(m,n): the video segments are traversed, and the second impact factor of each video segment is determined from the updated new utility value and the old utility value oldU_t, so as to obtain the second impact factor matrix.
  • Updating the quality level to the reduction processing result corresponding to the second updated video segment and updating its code rate accordingly can equivalently be expressed as: obtaining a new quality level matrix according to the reduction processing result, from which the updated code rate matrix is determined.
  • When the current video segment is the video segment in the M-th row and N-th column, if its reduction processing result is below the threshold, the quality level and code rate of the video segment are kept unchanged, that is, its current quality level is used as the updated quality level and its current code rate as the updated code rate, and the traversal then moves on.
  • For the next video segment after the one in row M, column N, the relationship between its reduction processing result and the threshold is determined in the same way.
  • The inputs of the allocation algorithm are the quality level matrix of the video segments at time t-1, the matrix formed by the storage decision values c_{t-1}(m,n) at time t-1, the matrix formed by the non-freshness values uf_{t-1}(m,n) at time t-1, and the maximum bandwidth B_max.
  • The final output is the final quality level matrix, and Utility() denotes evaluating formula (1) or (2). While the sum of the code rates is less than or equal to the bandwidth capacity, the algorithm traverses the video segments and increases quality levels, applies the update after each traversal, and re-checks the bandwidth condition, outputting the final quality level matrix when the condition no longer holds; when the sum of the code rates exceeds the bandwidth capacity, the algorithm traverses the video segments and decreases quality levels, applies the update after each traversal, and re-checks the condition, outputting the final quality level matrix once the sum of the code rates is within the bandwidth capacity (a sketch of this greedy procedure is given below).
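  • A sketch of the greedy allocation loop summarized above (steps B201–B204). The quality-to-rate mapping and the utility function are supplied as callables; the overall structure (raise levels while the bandwidth constraint holds, otherwise lower them, always picking the segment with the largest utility impact factor) follows the description, but termination details and tie-breaking are assumptions:

```python
import numpy as np

def greedy_rate_allocation(levels, rate_of, utility, B_max, K):
    """Greedy quality-level / code-rate allocation for an M x N tile grid.

    levels  -- initial integer quality-level matrix (e.g. all ones)
    rate_of -- callable: level matrix -> code-rate matrix (mapping F)
    utility -- callable: level matrix -> total utility, formula (1)/(2)
    B_max   -- maximum edge-to-client bandwidth
    K       -- highest quality level
    """
    levels = np.array(levels, dtype=int)
    while True:
        old_u = utility(levels)
        if rate_of(levels).sum() <= B_max:
            # try to raise one segment's quality: pick the largest impact factor
            best, best_diff = None, -np.inf
            for (m, n), lvl in np.ndenumerate(levels):
                if lvl + 1 > K:
                    continue                        # already at the quality-level threshold
                trial = levels.copy()
                trial[m, n] += 1
                diff = utility(trial) - old_u       # first impact factor diff_t(m, n)
                if diff > best_diff:
                    best, best_diff = (m, n), diff
            if best is None or best_diff <= 0:
                return levels, rate_of(levels)      # no further improvement possible
            levels[best] += 1
            if rate_of(levels).sum() > B_max:
                levels[best] -= 1                   # keep the bandwidth constraint satisfied
                return levels, rate_of(levels)
        else:
            # over budget: lower the quality of the segment that hurts utility least
            best, best_diff = None, -np.inf
            for (m, n), lvl in np.ndenumerate(levels):
                if lvl - 1 < 1:
                    continue                        # already at the lowest quality level
                trial = levels.copy()
                trial[m, n] -= 1
                diff = utility(trial) - old_u       # second impact factor diff_t'(m, n)
                if diff > best_diff:
                    best, best_diff = (m, n), diff
            if best is None:
                return levels, rate_of(levels)
            levels[best] -= 1
```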
  • the embodiment of the present application also provides a storage method, at least including but not limited to steps S300-S600:
  • the code rate allocation method refers to the code rate allocation method in the embodiment, and the code rate allocation result includes the target transmission code rate of each video segment (m, n) and the target quality level
  • time t is the current time
  • The code rate allocation result at time t, that is, the target transmission code rate and the target quality level of each video segment, is used together with the viewing probability for prediction.
  • The expected effective quality and the expected transmission loss of video segment (m,n) respectively predict the impact that storing the video segment at time t would have on the effective quality of the video segment at time t+1 and on the transmission loss of obtaining the video segment from the cloud.
  • step S400 includes steps B401-B403:
  • The predicted non-freshness is calculated from the following quantities:
  • uft (m,n) is the non-freshness at the current moment
  • c t (m,n) is the fourth storage decision value
  • is the growth factor of non-freshness
  • p t (m,n) is the viewing probability .
  • the fourth storage decision value refers to the storage decision value corresponding to the storage state of the video segment at the current moment, which is the same as the second storage decision value.
  • B402. Determine the expected effective quality according to the predicted non-freshness, viewing probability and target quality level.
  • The expected effective quality is calculated from the predicted non-freshness, the viewing probability, the target quality level, and a factor of the effective quality.
  • B403. Determine expected transmission loss according to the target transmission code rate, the fourth storage decision value, the second loss coefficient, and the third loss coefficient.
  • The expected transmission loss is calculated from the following quantities:
  • c 2 is the second loss coefficient
  • c 3 is the third loss coefficient
  • c t (m,n) is the fourth storage decision value.
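  • A compact sketch of the prediction step described above (B401–B403). The functional forms used for the predicted non-freshness, the expected effective quality, and the expected transmission loss are assumptions consistent with the quantities listed, since the original formulas were not preserved; parameter names eps and kappa are illustrative:

```python
def predict_segment(p, q_target, r_target, uf_now, c_t, eps, kappa, c2, c3):
    """Predict the expected effective quality and expected transmission loss of one segment.

    p        -- viewing probability p_t(m,n)
    q_target -- target quality level from the code rate allocation result
    r_target -- target transmission code rate from the code rate allocation result
    uf_now   -- non-freshness uf_t(m,n) at the current moment
    c_t      -- fourth storage decision value (1 = kept in the edge node)
    eps      -- non-freshness growth factor; kappa -- effective-quality factor (assumed)
    """
    uf_next = c_t * (uf_now + eps * p)                       # predicted non-freshness (assumed form of (7))
    expected_eq = kappa * p * q_target * (1.0 - uf_next)     # expected effective quality (assumed form)
    expected_tc = c2 * r_target + (1 - c_t) * c3 * r_target  # expected transmission loss (assumed form)
    return expected_eq, expected_tc
```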
  • S500 Perform storage optimization processing according to the expected effective quality, expected transmission loss, and maximum capacity of the edge node, to obtain storage decision information.
  • the storage decision information includes the third storage decision value c t '(m,n) of each video segment (m,n), and constitutes the third storage decision value matrix
  • the third storage decision value represents whether the video segment needs to be stored in the edge node, for example, the third storage decision value is 1, the representation needs to be stored in the edge node, and the third storage decision value is 0, the representation does not need to be stored in the edge node.
  • the third storage decision value refers to a storage decision value corresponding to the video segment obtained after storage optimization processing.
  • the goal of the storage method is to increase the expected effective quality and reduce the expected transmission loss, so the storage optimization processing problem can be expressed as:
  • ⁇ 1 is the preset first weight
  • s t (m,n) is the size of the video clip (m,n)
  • C max is the maximum storage capacity of the edge node, that is, the maximum capacity
  • is the weight of transmission loss.
  • the constraints stipulate that the sum of the sizes of the stored video clips cannot exceed the storage capacity of the edge nodes, and the cache strategy to be solved for the storage problem is a binary variable matrix.
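  • The storage optimization problem described above can be written as a 0-1 program of the following form, where \widehat{EQ} and \widehat{TC} denote the expected effective quality and expected transmission loss (which depend on the decision c_t(m,n)), φ1 is the preset first weight, and η is the transmission-loss weight; these symbols are introduced here for illustration only:

```latex
\max_{c_t(m,n)\in\{0,1\}}\ \sum_{m,n}\Big[\varphi_1\,\widehat{EQ}_{t+1}(m,n) - \eta\,\widehat{TC}_{t+1}(m,n)\Big]
\quad\text{s.t.}\quad \sum_{m,n} s_t(m,n)\, c_t(m,n) \le C_{\max}.
```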
  • the third storage decision value is solved by the branch and bound algorithm .
  • Step S500 includes steps B511-B513:
  • B511 Determine the expected utility according to the expected effective quality and expected transmission loss, determine the storage gain according to the expected utility corresponding to the video segment stored in the edge node and not stored in the edge node at the current moment, and determine the video value of each video segment according to the storage gain .
  • The expected utility is divided into two cases.
  • The expected utility of the first case is recorded as the first expected utility DC_t(m,n), that is, the expected utility when the video segment is stored in the edge node at the current moment.
  • The expected utility of the second case is recorded as the second expected utility DC_t'(m,n), that is, the expected utility when the video segment is not stored in the edge node at the current moment.
  • is a factor of effective quality
  • ⁇ 1 is the preset first weight
  • is the weight of transmission loss
  • is the growth factor of non-freshness
  • c 2 is the second loss coefficient
  • c 3 is the third loss coefficient.
  • the calculation formula of video value is:
  • v t (m,n) is the video value of the video segment (m,n)
  • DC t (m,n)-DC t '(m,n) is the storage gain.
  • The goal of obtaining the storage decision information is to keep the total size of the stored video segments less than or equal to C_max while making the sum of their video values as large as possible.
  • The value density ds_t(m,n) is the video value per unit size, i.e., ds_t(m,n) = v_t(m,n) / s_t(m,n).
  • the third storage decision value calculated through the branch and bound algorithm may be a first value or a second value, for example, the first value is 1, and the second value is 0.
  • When the third storage decision value of a video segment is 1, it means that the video segment needs to be stored in the edge node; when the third storage decision value of the video segment is 0, it means that the video segment does not need to be stored in the edge node.
  • the branch and bound algorithm actually generates a binary tree with multiple nodes, and the processing flow is as follows:
  • In the initialization, the queue pq contains one node initialized to 0 (referred to as the initial node). The data type of a node in the binary tree is a structure containing variables such as the storage quality (e.g., weight and value) and the allocated code rate; the storage quality of the initial node is 0, and the initial node is assigned as the parent node during initialization. In the subsequent traversal, pq.get() returns the head node of the queue pq as the new parent node and deletes it, and the remaining nodes in the queue pq move forward.
  • the left and right nodes are initialized to 0 in each round of iteration. Every time it is judged whether the left and right nodes may have a solution, a new node will be added to the queue, and the parent nodes will be updated in turn.
  • The initial node is used as the parent node, and the left node is then judged according to the video segment with the highest value density in the modesList: a tentative left node is generated, and the video segment with the highest value density is taken as the segment to be confirmed.
  • When the total capacity after adding the segment to be confirmed to the knapsack is less than C_max (the total capacity here refers to the sum of the sizes of all nodes on the branch of the binary tree where the tentative left node lies, for example the sum of the size of the segment to be confirmed and the size of the initial node), the tentative left node is confirmed as an actual left node, the fourth storage decision value of the segment to be confirmed is set to the first value, and the left node is added to the queue pq; the size and the video value of the segment to be confirmed are saved in the left node. The value threshold, that is, the upper bound value upprofit, is then updated using the left node to produce a new upprofit.
  • The upprofit value threshold represents the maximum video value that the current edge node can still hold; it is the maximum value of the remaining items that could be loaded into the knapsack, calculated by a greedy procedure while solving the 0-1 knapsack problem. It should be noted that when the total capacity after adding the segment to be confirmed to the knapsack is greater than or equal to C_max, that is, when the sum of the size of the segment to be confirmed and the size of the initial node is greater than or equal to C_max, no actual left node is generated and the upper bound value upprofit is not updated.
  • The video segment with the highest value density is also used as the segment to be confirmed when judging the right node: a tentative right node is generated, and it is judged whether the maximum utility value obtained when the segment to be confirmed is not added to the knapsack is greater than the updated upper bound value upprofit. If it is greater, the tentative right node is confirmed as an actually generated right node; the size and the video value of the segment to be confirmed are saved in the right node, the fourth storage decision value of the segment to be confirmed is marked as the second value, and the right node is added to the queue pq. If the maximum utility value without adding the segment to the knapsack is less than or equal to the updated upper bound value upprofit, no actual right node is generated.
  • The maximum utility value refers to the sum of the video values of all nodes on the branch of the binary tree where the tentative right node lies.
  • In this example, the maximum utility value is the difference between the video value of the segment to be confirmed and the video value of the initial node.
  • the queue adopts the queue data structure Queue(pq) of python, and the queue data structure Queue(pq) of python obeys the order of first-in-first-out, and pq.get() indicates the return queue The head node of pq, and delete the head node, and then determine the new parent node from the queue pq.
  • The parent node is then deleted, the first node in the queue (in queue order) becomes the new parent node, and the above steps of judging the left node and judging the right node are performed again. Each time a segment is confirmed, the video segment with the highest value density among those not yet used as a segment to be confirmed becomes the new segment to be confirmed, and the steps of judging the left node and judging the right node are iterated continuously.
  • the iterative process when the actual left node is generated, the corresponding segment to be confirmed is marked as the first value, and when the actual right node is generated, the corresponding segment to be confirmed is marked as the second value, and in In the process of continuous iteration, the iteration ends when the queue pq is empty, and the third storage decision value of each video segment is obtained.
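  • The branch-and-bound procedure above is essentially the classical 0-1 knapsack search (video segments as items, video value as profit, segment size as weight, edge capacity C_max as the knapsack). The following is a compact, self-contained sketch of that classical algorithm using Python's Queue, as the text mentions; it illustrates the left-node ("store the segment") / right-node ("do not store") branching and the greedy upper bound (the upprofit threshold), but it does not reproduce the exact node bookkeeping of the embodiment:

```python
from queue import Queue
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    level: int                 # index into the value-density-sorted segment list
    weight: float              # total size of segments taken on this branch
    value: float               # total video value of segments taken on this branch
    taken: List[int] = field(default_factory=list)  # original indices of segments taken

def fractional_bound(node, items, C_max):
    """Greedy (fractional) upper bound on the value still reachable from this node."""
    bound, weight = node.value, node.weight
    for size, value, _ in items[node.level:]:
        if weight + size <= C_max:
            weight += size
            bound += value
        else:
            bound += value * (C_max - weight) / size   # take a fraction of the next item
            break
    return bound

def branch_and_bound_storage(sizes, values, C_max):
    """0-1 knapsack by branch and bound: choose which segments to cache.

    sizes, values -- s_t(m,n) and v_t(m,n) flattened into lists (sizes assumed positive)
    Returns a 0/1 decision list in the original segment order.
    """
    items = sorted(((s, v, i) for i, (s, v) in enumerate(zip(sizes, values))),
                   key=lambda x: x[1] / x[0], reverse=True)   # sort by value density
    best_value, best_taken = 0.0, []
    pq = Queue()
    pq.put(Node(level=0, weight=0.0, value=0.0))              # initial node
    while not pq.empty():
        parent = pq.get()
        if parent.level == len(items):
            continue
        size, value, idx = items[parent.level]
        # left child: take the segment, if it still fits in the knapsack (edge capacity)
        if parent.weight + size <= C_max:
            left = Node(parent.level + 1, parent.weight + size,
                        parent.value + value, parent.taken + [idx])
            if left.value > best_value:
                best_value, best_taken = left.value, left.taken
            if fractional_bound(left, items, C_max) > best_value:
                pq.put(left)
        # right child: skip the segment; keep it only if its bound can still beat the best
        right = Node(parent.level + 1, parent.weight, parent.value, parent.taken)
        if fractional_bound(right, items, C_max) > best_value:
            pq.put(right)
    decision = [0] * len(sizes)
    for idx in best_taken:
        decision[idx] = 1
    return decision
```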
  • step B514 is also included:
  • The third storage decision value c_t(m,n) of any video segment (m,n) whose value is less than 0 is updated to 0, and the other third storage decision values remain unchanged; the storage decision information, that is, the target storage matrix, is thereby obtained.
  • The edge node can determine the storage policy according to the third storage decision values in the target storage matrix.
  • For example, the video segments whose third storage decision value is 1 are stored in the edge node, and the storage content of the edge node is updated and used at the next moment, that is, to serve the next request from the client for the target transmission video.
  • The code rate allocation method and the storage method of the embodiment of the present application can be used as an online algorithm that responds to client request information in real time, and the edge node storage content updated at the current moment can serve future user requests for video segments.
  • the process of obtaining storage decision information in the storage method is as follows:
  • code rate allocation method and the storage method of the embodiment of the present application are described in two specific application scenarios:
  • In the first application scenario, the server slices the video, that is, divides it into blocks to obtain the video segments, and records the movement track of the user's attention point. When the user sends a request through the client, the server calculates from the motion trajectory the probability that each video segment lies in the user's FOV, obtaining the viewing probabilities, and then performs code rate allocation (refer to step S200 for details). It then judges whether all video segments have been sent to the user: if so, it makes the storage decision, that is, it executes the above storage method to update the video segments stored in the edge node; if not, it determines whether the requested video segment is stored in the edge node. If the video segment is not stored in the edge node, the request is forwarded to the cloud, so that the cloud server sends the video segment to the edge node and the edge node then sends it to the client; if the video segment is stored in the edge node, the client reads the content of the video segment directly from the edge node.
  • In the second application scenario, the server slices the video into blocks to obtain the video segments and records the interaction trajectory between the user and the cloud desktop (for example, the operation-region frequency of the mouse or touch screen). When the user sends a request through the client, the server calculates from the cloud desktop interaction trajectory the probability that each video segment lies in the user's RoI, obtaining the viewing probabilities, and then performs code rate allocation (refer to step S200 for details). It then judges whether all video segments have been sent to the user: if so, it makes the storage decision, that is, it executes the above storage method to update the video segments stored in the edge node; if not, it determines whether the requested video segment is stored in the edge node. If the video segment is not stored in the edge node, the request is forwarded to the cloud, so that the cloud server sends the video segment to the edge node and the edge node then sends it to the client; if the video segment is stored in the edge node, the client reads the content of the video segment from the edge node.
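  • A simplified sketch of the request-handling flow common to both scenarios (cache lookup at the edge node, fallback to the cloud on a miss, storage update after all segments have been served); the class and method names are illustrative only:

```python
class EdgeNode:
    """Toy edge node: serves cached segments, fetches misses from the cloud."""

    def __init__(self, cloud, capacity):
        self.cloud = cloud            # object assumed to provide fetch(segment_id, rate)
        self.capacity = capacity      # maximum storage capacity C_max
        self.cache = {}               # segment_id -> segment data

    def serve(self, segment_id, rate):
        if segment_id in self.cache:                 # cache hit: read directly from the edge node
            return self.cache[segment_id]
        return self.cloud.fetch(segment_id, rate)    # cache miss: forward the request to the cloud

    def update_storage(self, decisions, fetch_rate):
        """Apply the storage decision information after all segments were sent."""
        for segment_id, keep in decisions.items():
            if keep and segment_id not in self.cache:
                self.cache[segment_id] = self.cloud.fetch(segment_id, fetch_rate[segment_id])
            elif not keep:
                self.cache.pop(segment_id, None)     # evict segments that should no longer be stored
```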
  • In the embodiment of the present application, the code rate allocation method is executed by the client terminal; in other embodiments it can also be executed by the cloud server. The storage method can be executed by the edge node or by the cloud server, and the edge node stores the video segments according to the storage decision information.
  • the edge node refers to a network node with caching capability close to the user side in the Internet of Things.
  • the embodiment of the present application also provides a code rate allocation device, including:
  • the obtaining module is used to obtain target video data;
  • the target video data includes the viewing probability of all video clips in the target transmission video;
  • the processing module is used to allocate the code rate to the video segment according to the viewing probability, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment, and the user's viewing quality is determined according to The non-freshness factor is determined, and the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
  • the embodiment of the present application also provides a storage device, including:
  • a determination module, configured to determine the viewing probabilities and code rate allocation results of all video segments in the target transmission video
  • a prediction module configured to predict expected effective quality and expected transmission loss according to the code rate allocation result and viewing probability
  • the optimization module is used to perform storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node to obtain storage decision information;
  • the storage decision information includes the third storage decision value of each video segment, and the third storage decision value indicates whether the video segment needs to be stored in an edge node;
  • a storage module configured to store video clips according to storage decision information
  • the target video data includes viewing probabilities of all video clips in the target transmission video
  • the code rate of the video segment is allocated so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment, and the user's viewing quality is determined according to the non-freshness factor , the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
  • The embodiment of the present application also provides an electronic device, which includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the code rate allocation method or storage method of the foregoing embodiments.
  • the electronic devices in the embodiments of the present application include, but are not limited to, mobile phones, tablet computers, computers, vehicle-mounted computers, servers, and the like.
  • the content in the above-mentioned method embodiment is applicable to this device embodiment.
  • the specific functions realized by this device embodiment are the same as those of the above-mentioned method embodiment, and the beneficial effects achieved are also the same as those achieved by the above-mentioned method embodiment.
  • the embodiment of the present application also provides a computer-readable storage medium. At least one instruction, at least one program, code set or instruction set is stored in the storage medium, and at least one instruction, at least one program, code set or instruction set is loaded by the processor. And execute to implement the code rate allocation or storage method of the foregoing embodiments.
  • the embodiment of the present application also provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the code rate allocation or storage method of the foregoing embodiments.
• the code rate allocation method, storage method, apparatus, device, and storage medium proposed in the embodiments of the present application acquire target video data, where the target video data includes the viewing probabilities of all video segments in the target transmission video; introducing the viewing probability helps to improve the user's viewing quality. The code rate is allocated to the video segments according to the viewing probability, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, and the user viewing quality is determined according to the non-freshness factor.
• the non-freshness factor is determined according to the storage status of the video segments at the edge node and the viewing probability, thereby achieving the goal of reducing transmission loss while improving the user's viewing quality of the video clips.
• the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
• Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit.
  • Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
• computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules, or other data.
• Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
• communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Abstract

The present application discloses a code rate allocation method and apparatus, a storage method and apparatus, a device, and a storage medium. The code rate allocation method comprises: obtaining target video data, the target video data comprising viewing probabilities of all video clips in a target transmission video (S100); and performing code rate allocation on the video clips according to the viewing probabilities, so that the total utility of the target transmission video satisfies a target value (S200). The total utility of the target transmission video is determined according to the user watching quality and transmission loss of the video clips, the user watching quality is determined according to non-freshness factors, and the non-freshness factors are determined according to the storage state and the viewing probabilities of the video clips in edge nodes.

Description

一种码率分配方法、存储方法、装置、设备及存储介质A code rate allocation method, storage method, device, equipment and storage medium
相关申请的交叉引用Cross References to Related Applications
本申请基于申请号为202111658162.0、申请日为2021年12月30日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。This application is based on a Chinese patent application with application number 202111658162.0 and a filing date of December 30, 2021, and claims the priority of this Chinese patent application. The entire content of this Chinese patent application is hereby incorporated by reference into this application.
技术领域technical field
本申请涉及通信领域,例如涉及一种码率分配方法、存储方法、装置、设备及存储介质。The present application relates to the communication field, for example, to a code rate allocation method, storage method, device, equipment and storage medium.
背景技术Background technique
基于5G网络对高清视频的良好支持,高清视频逐渐应用在监控安防、远程会议、电商直播、VR游戏/视频等多个行业中,受到内容提供商和消费者们的青睐。高清视频的传输过程中需要消耗大量的带宽和存储资源,而通常情况下由服务器进行高清视频的存储,导致传输过程的损耗大,为此可以利用距离用户较近的边缘节点来缓存高清视频,用户可以直接从该边缘节点获取缓存的高清视频即可而不需要不从服务器获取,减小了传输过程中的损耗。然而,当高清视频存储于边缘节点的时间增长,该高清视频对于用户的价值会降低,在带宽有限的条件下若该高清视频维持原有码率,会对其他高清视频造成影响,导致增加高清视频整体的传输损耗并响应用户观看体验。Based on the good support of 5G network for high-definition video, high-definition video is gradually applied in many industries such as surveillance security, remote conference, e-commerce live broadcast, VR game/video, etc., and is favored by content providers and consumers. The transmission process of high-definition video needs to consume a lot of bandwidth and storage resources. Usually, the storage of high-definition video is performed by the server, resulting in a large loss in the transmission process. Therefore, edge nodes closer to users can be used to cache high-definition video. The user can directly obtain the cached high-definition video from the edge node instead of obtaining it from the server, which reduces the loss in the transmission process. However, when the time for storing high-definition video at the edge node increases, the value of the high-definition video to users will decrease. Under the condition of limited bandwidth, if the high-definition video maintains the original bit rate, it will affect other high-definition videos, resulting in an increase in high-definition The overall transmission loss of the video and respond to the user viewing experience.
发明内容Contents of the invention
本申请实施例的主要目的在于提供一种减少传输损耗的码率分配方法、存储方法、装置、设备及存储介质。The main purpose of the embodiments of the present application is to provide a code rate allocation method, a storage method, a device, a device, and a storage medium that reduce transmission loss.
为至少在一定程度上实现上述目的,本申请实施例提供了一种码率分配方法,包括:获取目标视频数据;所述目标视频数据包括目标传输视频中所有视频片段的观看概率;根据所述观看概率对所述视频片段进行码率分配,以使所述目标传输视频的总效用满足目标值;所述目标传输视频的总效用根据所述视频片段的用户观看质量以及传输损耗确定,所述用户观看质量根据非新鲜度因子确定,所述非新鲜度因子根据所述视频片段在边缘节点的存储状态以及所述观看概率确定。In order to achieve the above purpose at least to a certain extent, an embodiment of the present application provides a code rate allocation method, including: acquiring target video data; the target video data includes viewing probabilities of all video segments in the target transmission video; according to the Viewing probability performs code rate allocation to the video segment, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment, and the The viewing quality of the user is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
本申请实施例还提出了一种存储方法,包括:通过所述的码率分配方法得到目标传输视频中所有视频片段的观看概率以及码率分配结果;根据所述码率分配结果以及所述观看概率,预测预期有效质量以及预期传输损耗;根据所述预期有效质量、所述预期传输损耗以及边缘节点的最大容量,进行存储优化处理,得到存储决策信息;所述存储决策信息包括每一所述视频片段的第三存储决策值,所述第三存储决策值表征所述视频片段是否需要存储于边缘节点;根据所述存储决策信息进行所述视频片段的存储。The embodiment of the present application also proposes a storage method, including: obtaining the viewing probability and code rate allocation results of all video clips in the target transmission video through the code rate allocation method; according to the code rate allocation results and the viewing Probability, predicting the expected effective quality and expected transmission loss; according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node, perform storage optimization processing to obtain storage decision information; the storage decision information includes each of the A third storage decision value of the video clip, where the third storage decision value represents whether the video clip needs to be stored in an edge node; and the video clip is stored according to the storage decision information.
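For illustration only, the following is a minimal Python sketch of the pipeline described by the above storage method (code rate allocation, prediction, storage optimization, storage); the function and variable names (allocate_bitrates, decide_storage, viewing probabilities, sizes, capacity) and the simple proportional/greedy rules inside them are assumptions introduced for this sketch and are not part of the original disclosure.

```python
import numpy as np

def allocate_bitrates(viewing_prob, rates_prev, stored_prev, b_max):
    """Stand-in for the code rate allocation step: spread the bandwidth budget
    in proportion to viewing probability, while segments already cached at the
    edge never exceed the bitrate at which they were stored."""
    alloc = b_max * viewing_prob / viewing_prob.sum()
    return np.where(stored_prev == 1, np.minimum(alloc, rates_prev), alloc)

def decide_storage(expected_quality, expected_loss, sizes, capacity):
    """Stand-in for the storage optimization step: greedily keep the segments
    with the best quality-minus-loss score until the edge capacity is used up,
    producing per-segment storage decision values (1 = store, 0 = do not)."""
    score = expected_quality - expected_loss
    decision = np.zeros_like(score, dtype=int)
    used = 0.0
    for flat in np.argsort(score, axis=None)[::-1]:
        m, n = np.unravel_index(flat, score.shape)
        if used + sizes[m, n] <= capacity:
            decision[m, n] = 1
            used += sizes[m, n]
    return decision

if __name__ == "__main__":
    M, N = 4, 6
    rng = np.random.default_rng(0)
    p = rng.random((M, N)); p /= p.sum()        # viewing probabilities, sum to 1
    r_prev = rng.uniform(1.0, 5.0, (M, N))      # bitrates stored at time t-1 (Mbps)
    c_prev = rng.integers(0, 2, (M, N))         # previous storage decisions
    r_t = allocate_bitrates(p, r_prev, c_prev, b_max=40.0)
    c_t = decide_storage(p * r_t, 0.1 * r_t, sizes=2.0 * r_t, capacity=60.0)
    print(r_t.round(2))
    print(c_t)
```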
本申请实施例还提出了一种码率分配装置,包括:获取模块,用于获取目标视频数据;所述目标视频数据包括目标传输视频中所有视频片段的观看概率;处理模块,用于根据所述观看概率对所述视频片段进行码率分配,以使所述目标传输视频的总效用满足目标值;所述 目标传输视频的总效用根据所述视频片段的用户观看质量以及传输损耗确定,所述用户观看质量根据非新鲜度因子确定,所述非新鲜度因子根据所述视频片段在边缘节点的存储状态以及所述观看概率确定。The embodiment of the present application also proposes a code rate allocation device, including: an acquisition module, configured to acquire target video data; the target video data includes viewing probabilities of all video segments in the target transmission video; a processing module, configured to According to the viewing probability, the code rate is allocated to the video segment, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment, so The user's viewing quality is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
本申请实施例还提出了一种存储装置,包括:确定模块,用于确定目标传输视频中所有视频片段的观看概率以及码率分配结果;预测模块,用于根据所述码率分配结果以及所述观看概率,预测预期有效质量以及预期传输损耗;优化模块,用于根据所述预期有效质量、所述预期传输损耗以及边缘节点的最大容量,进行存储优化处理,得到存储决策信息;所述存储决策信息包括每一所述视频片段的第三存储决策值,所述第三存储决策值表征所述视频片段是否需要存储于边缘节点;存储模块,用于根据所述存储决策信息进行所述视频片段的存储;所述确定目标传输视频中所有视频片段的观看概率以及码率分配结果,包括:获取目标视频数据;所述目标视频数据包括目标传输视频中所有视频片段的观看概率;根据所述观看概率对所述视频片段进行码率分配,以使所述目标传输视频的总效用满足目标值;所述目标传输视频的总效用根据所述视频片段的用户观看质量以及传输损耗确定,所述用户观看质量根据非新鲜度因子确定,所述非新鲜度因子根据所述视频片段在边缘节点的存储状态以及所述观看概率确定。The embodiment of the present application also proposes a storage device, including: a determination module, configured to determine viewing probabilities and code rate allocation results of all video segments in the target transmission video; The viewing probability is predicted to predict the expected effective quality and the expected transmission loss; the optimization module is used to perform storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node to obtain storage decision information; the storage The decision information includes a third storage decision value for each of the video clips, the third storage decision value representing whether the video clip needs to be stored in an edge node; a storage module configured to perform the video processing according to the storage decision information. The storage of segment; The viewing probability and code rate distribution result of all video clips in the described determination target transmission video, comprise: obtain target video data; Described target video data comprises the viewing probability of all video clips in target transmission video; According to the Viewing probability performs code rate allocation to the video segment, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment, and the The viewing quality of the user is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
本申请实施例还提出了一种电子设备,所述电子设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现上述码率分配方法或存储方法。The embodiment of the present application also provides an electronic device, the electronic device includes a processor and a memory, the memory stores at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the At least one segment of program, the code set or instruction set is loaded and executed by the processor to implement the code rate allocation method or storage method described above.
本申请实施例还提出了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现上述码率分配方法或存储方法。The embodiment of the present application also provides a computer-readable storage medium, the storage medium stores at least one instruction, at least one program, code set or instruction set, the at least one instruction, the at least one program, the The code set or instruction set is loaded and executed by the processor to implement the above code rate allocation method or storage method.
附图说明Description of drawings
图1为本申请码率分配方法的步骤流程示意图;Fig. 1 is a schematic flow chart of the steps of the code rate allocation method of the present application;
图2为本申请具体实施例存储方法的步骤流程示意图;Fig. 2 is a schematic flow chart of the steps of the storage method in a specific embodiment of the present application;
图3为本申请具体实施例其中一个应用场景的码率分配方法以及存储方法的流程示意图;FIG. 3 is a schematic flow diagram of a code rate allocation method and a storage method in one application scenario of a specific embodiment of the present application;
图4为本申请具体实施例另一个应用场景的码率分配方法以及存储方法的流程示意图;FIG. 4 is a schematic flowchart of a code rate allocation method and a storage method in another application scenario according to a specific embodiment of the present application;
图5为本申请具体实施例码率分配装置的示意图;FIG. 5 is a schematic diagram of a code rate allocation device according to a specific embodiment of the present application;
图6为本申请具体实施例存储装置的示意图;FIG. 6 is a schematic diagram of a storage device according to a specific embodiment of the present application;
图7为本申请具体实施例电子设备的示意图。FIG. 7 is a schematic diagram of an electronic device according to a specific embodiment of the present application.
具体实施方式Detailed ways
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。It should be understood that the specific embodiments described here are only used to explain the present application, and are not intended to limit the present application.
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本申请的说明,其本身没有特有的意义。因此,“模块”、“部件”或“单元”可以混合地使用。In the subsequent description, use of suffixes such as 'module', 'part' or 'unit' for denoting elements is only for facilitating the description of the present application and has no specific meaning by itself. Therefore, 'module', 'part' or 'unit' may be used in combination.
实施例一Embodiment one
如图1所示,本申请实施例提供一种码率分配方法,至少包括但不限于步骤S100-S200:As shown in Figure 1, the embodiment of the present application provides a code rate allocation method, at least including but not limited to steps S100-S200:
S100、获取目标视频数据。S100. Obtain target video data.
本申请实施例中,目标视频数据包括目标传输视频中所有视频片段的观看概率。在一示 例中,目标视频数据为用户需要观看的视频的数据,即目标传输视频的相关数据,用户端可以基于用户的输入指令生成视频请求而获取得到。需要说明的是,云端(或称为服务器端)能够对目标传输视频进行分块处理,生成多个不同空间位置的视频片段。在一示例中,时长为T的目标传输视频在云端被编码为K个质量级别,每个质量级别的视频在时间上被切分为长度为Δt秒(包括但不限于2秒)的视频段,每个长度为Δt秒的视频段在空间上为被分块处理成M×N个视频片段,其中,M、N分别为的水平和垂直方向的视频片段数量。本申请实施例中,为了获取用户对视频片段的感兴趣区域(Region of Interest,RoI),可以通过用户关注点轨迹确定每个视频片段的观看概率,而位于用户RoI内的观看概率之和为1,即满足
$$\sum_{m=1}^{M}\sum_{n=1}^{N} p(m,n)=1$$
m,n为视频片段的索引位置,p(m,n)为视频片段(m,n)对应的观看概率(位于用户RoI内的概率),观看概率越高代表用户的关注度越大。需要说明的是,当目标传输视频为沉浸式视频,例如VR(Virtual Reality,虚拟现实)视频和AR(Augmented Reality,增强现实)视频,用户通过佩戴头显设备,可以看到360度的视频内容,此时RoI区域为用户在头戴式显示器中实际的可观看视口(Field of View,FoV)区域,通过眼部运动轨迹可以确定观看概率;而当目标传输视频为其他类型的视频可以通过获取采集实时鼠标的操作区域频率或通过视频图像处理来预测用户的可能关注位置的重要程度来计算。例如,用Saliency map(显著图)预测视频显著性区域,通过不同区域的灰度值比率来判断观看区域的重要程度从而确定观看概率。
In this embodiment of the present application, the target video data includes viewing probabilities of all video segments in the target transmission video. In an example, the target video data is the data of the video that the user needs to watch, that is, the relevant data of the target transmission video, which can be obtained by the user end by generating a video request based on the user's input instruction. It should be noted that the cloud (or server side) can divide the target transmission video into blocks to generate multiple video clips in different spatial positions. In an example, the target transmission video with a duration of T is encoded into K quality levels in the cloud, and the video of each quality level is temporally divided into video segments with a length of Δt seconds (including but not limited to 2 seconds) , each video segment with a length of Δt seconds is spatially divided into M×N video segments, where M and N are the number of horizontal and vertical video segments respectively. In the embodiment of the present application, in order to obtain the region of interest (Region of Interest, RoI) of the user on the video segment, the viewing probability of each video segment can be determined through the user's attention point track, and the sum of the viewing probabilities located in the user's RoI is 1, which satisfies
$$\sum_{m=1}^{M}\sum_{n=1}^{N} p(m,n)=1$$
m,n is the index position of the video segment, p(m,n) is the viewing probability (probability within the user RoI) corresponding to the video segment (m,n), the higher the viewing probability, the greater the user's attention. It should be noted that when the target transmission video is immersive video, such as VR (Virtual Reality, virtual reality) video and AR (Augmented Reality, augmented reality) video, the user can see 360-degree video content by wearing a head-mounted display device , at this time, the RoI area is the user's actual field of view (Field of View, FoV) area in the head-mounted display, and the viewing probability can be determined through the eye movement trajectory; and when the target transmission video is other types of video, it can be passed Acquire real-time operation area frequency of the mouse or predict the importance of the user's possible attention position through video image processing. For example, use the Saliency map (saliency map) to predict the salient area of the video, and judge the importance of the viewing area through the gray value ratio of different areas to determine the viewing probability.
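As a concrete illustration of how per-segment viewing probabilities that sum to 1 might be obtained from a saliency (gray-value) map, a small Python sketch follows; the saliency array and the M×N tiling are assumed inputs introduced for this example and are not part of the original disclosure.

```python
import numpy as np

def tile_viewing_probabilities(saliency, m_tiles, n_tiles):
    """Split a per-pixel saliency map into M x N tiles and normalize the
    per-tile mass so that the probabilities satisfy sum_{m,n} p(m,n) = 1."""
    h, w = saliency.shape
    th, tw = h // m_tiles, w // n_tiles
    p = np.zeros((m_tiles, n_tiles))
    for m in range(m_tiles):
        for n in range(n_tiles):
            p[m, n] = saliency[m * th:(m + 1) * th, n * tw:(n + 1) * tw].sum()
    return p / p.sum()

saliency = np.random.default_rng(1).random((360, 640))   # stand-in saliency map
p = tile_viewing_probabilities(saliency, m_tiles=4, n_tiles=6)
print(round(float(p.sum()), 6))   # 1.0
```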
S200、根据观看概率对视频片段进行码率分配,以使视频片段的总效用满足目标值。S200. Perform code rate allocation on the video segment according to the viewing probability, so that the total utility of the video segment meets the target value.
本申请实施例中,码率分配指的是确定每一视频片段的目标传输码率,即边缘节点传输每一视频片段至用户端的码率;目标传输视频的总效用根据视频片段的用户观看质量以及传输损耗确定。在一示例中,目标传输视频的总效用通过总效用函数表征:In the embodiment of this application, code rate allocation refers to determining the target transmission code rate of each video segment, that is, the code rate at which the edge node transmits each video segment to the user end; the total utility of the target video transmission is based on the user viewing quality of the video segment and the transmission loss is determined. In one example, the total utility of the target transmission video is characterized by a total utility function:
$$\mathrm{Utility}_t=\sum_{m=1}^{M}\sum_{n=1}^{N}\left[\,QoP_t(m,n)-\delta\cdot TC_t(m,n)\,\right]\qquad(1)$$
其中,Utility t为t时刻的总效用,即t时刻的码率分配系统效用,QoP t(m,n)为t时刻视频片段(m,n)的用户观看质量,TC t(m,n)为t时刻视频片段(m,n)的传输损耗,δ为传输损耗的权重。从公式可知道,根据每一视频片段对应的用户观看质量减去传输损耗与传输损耗的权重的乘积可以得到每一视频片段的效用,所有视频片段的效用之和即为目标传输视频的总效用。在一示例中,一个视频片段的若存储于云端(云服务器),则一个视频片段的传输损耗包括边缘节点从云端获取视频片段的通信损耗以及边缘节点传输视频片段至用户端的通信损耗;若一个视频片段存储于边缘节点,视频片段的传输损耗包括边缘节点传输视频片段至用户端的通信损耗,或者还包括边缘节点的转码损耗。需要说明的是,目标传输视频的总效用满足目标值包括但不限于使得总效用函数达到最大值(MAX Utility t);或者,先设定总效用目标阈值,当利用总效用函数计算得到的总效用大于等于总效用目标阈值则认为满足目标值,在确定总效用满足目标值时,可以提高目标传输视频的用户观看质量并减小从云端获取视频片段的传输通信损耗,提高了用户端带宽受限条件下用户观看目标传输视频的质量。 Among them, Utility t is the total utility at time t, that is, the utility of the code rate distribution system at time t, QoP t (m,n) is the user viewing quality of video segment (m,n) at time t, TC t (m,n) is the transmission loss of the video segment (m,n) at time t, and δ is the weight of the transmission loss. It can be seen from the formula that the utility of each video segment can be obtained by subtracting the product of the transmission loss and the weight of the transmission loss from the user viewing quality corresponding to each video segment, and the sum of the utilities of all video segments is the total utility of the target transmission video . In one example, if a video segment is stored in the cloud (cloud server), the transmission loss of a video segment includes the communication loss of the edge node obtaining the video segment from the cloud and the communication loss of the edge node transmitting the video segment to the user end; if a The video clips are stored in the edge node, and the transmission loss of the video clip includes the communication loss of the edge node transmitting the video clip to the client, or also includes the transcoding loss of the edge node. It should be noted that the total utility of the target transmission video satisfies the target value, including but not limited to making the total utility function reach the maximum value (MAX Utility t ); or, first setting the total utility target threshold, when the total If the utility is greater than or equal to the total utility target threshold, it is considered to meet the target value. When the total utility is determined to meet the target value, it can improve the user viewing quality of the target video transmission and reduce the transmission and communication loss of video clips obtained from the cloud. Under certain conditions, users can watch the quality of the target transmission video.
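A minimal sketch of how the total utility of formula (1) could be evaluated for one time step, assuming the per-segment viewing-quality values QoP_t(m,n) and transmission losses TC_t(m,n) are already available (the two arrays below are illustrative stand-ins):

```python
import numpy as np

def total_utility(qop, tc, delta):
    """Utility_t = sum over all segments of [QoP_t(m,n) - delta * TC_t(m,n)]."""
    return float(np.sum(qop - delta * tc))

qop = np.array([[3.0, 2.5], [1.0, 0.5]])   # per-segment user viewing quality
tc  = np.array([[0.8, 0.6], [0.2, 0.1]])   # per-segment transmission loss
print(total_utility(qop, tc, delta=1.0))
```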
本申请实施例中,以满足目标值为使得总效用函数为最大值为例子进行说明,码率分配问题可以描述为:In the embodiment of the present application, an example is used to satisfy the target value so that the total utility function is the maximum value, and the code rate allocation problem can be described as:
$$\max\ \mathrm{Utility}_t=\max\sum_{m=1}^{M}\sum_{n=1}^{N}\left[\,QoP_t(m,n)-\delta\cdot TC_t(m,n)\,\right]\qquad(2)$$
$$\text{s.t.}\quad \sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)\le B_{\max}\qquad(3)$$
$$r_t(m,n)\le r_{t-1}(m,n),\quad\text{if } c_{t-1}(m,n)=1\qquad(4)$$
$$q_t(m,n)\in\{q_1,q_2,\dots,q_K\},\quad\forall\, m,n\qquad(5)$$
Here, $B_{\max}$ is the maximum bandwidth available for transmission between the edge node and the user end, and $r_t(m,n)$ is the code rate at time t; in one example it is the transmission code rate allocated at time t for the edge node to transmit video segment (m,n) to the user end. $c_{t-1}(m,n)$ is the storage decision value at time t-1: a value of 1 indicates that video segment (m,n) is stored at the edge node, and a value of 0 indicates that it is not. $r_{t-1}(m,n)$ is the code rate at time t-1; in one example it is the storage code rate at which video segment (m,n) was stored by the edge node at time t-1. In this embodiment of the present application, when calculating the total utility, the bandwidth constraint requires that the sum of all $r_t(m,n)$ cannot exceed the maximum bandwidth $B_{\max}$; for video segments already stored at the edge node (that is, $c_{t-1}(m,n)=1$), the code rate constraint stipulates that the code rate allocated to a stored video segment at time t cannot be higher than $r_{t-1}(m,n)$, so that the user can obtain the video segment directly from the edge node, reducing the additional communication loss caused by fetching it from the cloud. $q_t(m,n)$ is the quality level assigned to video segment (m,n) at time t, and $q_K$ is the K-th quality level, which can be measured with the ffprobe tool (a tool in FFmpeg for inspecting file format information); the quality level is closely related to the code rate of the video segment and its content complexity $o_t(m,n)$, that is, the code rate can be obtained from the quality level and the content complexity through a mapping, where $F(\cdot)$ denotes the mapping relationship. It should be noted that, when the total utility is computed through formula (2) as an integer programming problem, the computational complexity for determining the target quality level of each video segment that maximizes the total utility is $O(K^{M\cdot N})$; since the execution time is too long at a scale of $O(K^{M\cdot N})$, this embodiment uses a greedy algorithm to obtain an approximately optimal solution with reduced computational complexity, so that the total utility meets the target value, the target quality level and the corresponding target transmission code rate of each video segment are determined, and the code rate allocation is realized.
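The two constraints just described, the bandwidth budget (3) and the rate cap for segments already cached at the edge (4), can be checked for a candidate allocation with a short sketch like the one below; the array names and values are assumptions used only for illustration.

```python
import numpy as np

def is_feasible(r_t, r_prev, c_prev, b_max):
    """True if the candidate allocation r_t respects the bandwidth budget and
    does not exceed the stored bitrate of segments already cached at the edge."""
    within_bandwidth = r_t.sum() <= b_max                       # constraint (3)
    cached = c_prev == 1
    within_cache_rate = np.all(r_t[cached] <= r_prev[cached])   # constraint (4)
    return bool(within_bandwidth and within_cache_rate)

r_t    = np.array([[2.0, 1.5], [1.0, 0.5]])
r_prev = np.array([[2.5, 1.0], [1.2, 0.8]])
c_prev = np.array([[1, 0], [1, 1]])
print(is_feasible(r_t, r_prev, c_prev, b_max=6.0))   # True
```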
在一示例中,本申请实施例中为确定公式(2)的总效用函数为最大值,步骤S200中根据观看概率对视频片段进行码率分配,包括步骤B201和B202,以及B203和/或B204:In an example, in the embodiment of the present application, in order to determine that the total utility function of formula (2) is the maximum value, in step S200, the code rate is allocated to the video segment according to the viewing probability, including steps B201 and B202, and B203 and/or B204 :
B201、获取目标传输视频的总效用的新效用值和旧效用值,以及获取视频片段的质量级以及码率。B201. Obtain the new utility value and the old utility value of the total utility of the target transmission video, and acquire the quality level and bit rate of the video segment.
In this embodiment of the present application, the new utility value is the initial value of the total utility of the target transmission video, that is, the initial value of the total utility function in formula (1); this initial value can be a randomly determined value or a set value, denoted newU_t, for example newU_t = 0. The old utility value can be the current value of the total utility of the target transmission video determined according to the quality levels and code rates, denoted oldU_t. Taking the current moment as time t, the quality level and code rate can be those currently assigned to the video segments, or both can be set values; this embodiment takes the quality level being a set value as an example and does not impose a specific limitation. In one example, the quality level of each video segment (m,n) is set to the lowest quality level 1, denoted $q_t(m,n)$, which forms the quality level matrix of all video segments (m,n); the corresponding code rate can then be determined from the quality level, or an initial code rate $r_t(m,n)$ can be set, giving the code rate matrix of all video segments (m,n). Substituting these into formula (1) or (2), oldU_t can be calculated.
B202、判断码率的总和是否小于或等于带宽容量。B202. Determine whether the sum of code rates is less than or equal to the bandwidth capacity.
In one example, it is judged whether the sum of the code rates of all video segments is less than or equal to the bandwidth capacity, that is, whether $\sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)\le B_{\max}$ holds.
B203、当码率的总和小于或等于带宽容量,遍历视频片段对质量级进行增加处理,根据增加处理结果、新效用值和旧效用值,更新视频片段的质量级以及码率,返回判断码率的总和是否小于或等于带宽容量的步骤,直至码率的总和大于带宽容量,得到每一视频片段的目标传输码率。B203, when the sum of the code rates is less than or equal to the bandwidth capacity, traverse the video segment to increase the quality level, according to the increase processing result, new utility value and old utility value, update the quality level and the code rate of the video segment, and return to determine the code rate The step of whether the sum of the sum is less than or equal to the bandwidth capacity, until the sum of the code rate is greater than the bandwidth capacity, to obtain the target transmission code rate of each video segment.
In this embodiment of the present application, when the sum of the code rates of all video segments satisfies $\sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)\le B_{\max}$, the video segments are traversed and their quality levels are subjected to increase processing. For example, the traversal starts from the N-th video segment in the M-th column of the video segment matrix formed by the M×N video segments and continues until the first video segment in the first column; during the traversal, increase processing is performed on the quality level of a video segment, for example, increasing the quality level $q_t(m,n)$ of a video segment yields the increase processing result $q_t(m,n)+1$, which corresponds to the increased quality level (the increment can be adjusted as required; this application uses 1 as an example without limiting the increment). Then, after the increase processing, the quality level and code rate of the video segment are updated according to the increase processing result, the new utility value and the old utility value, and the process returns to step B202 until $\sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)>B_{\max}$, thereby obtaining the target transmission code rate of each video segment.
B204、当码率的总和非小于或非等于带宽容量,遍历视频片段对质量级进行减少处理,根据减少处理结果、新效用值和旧效用值,更新视频片段的质量级以及码率,返回判断码率的总和是否小于或等于带宽容量的步骤,直至码率的总和小于或等于带宽容量,得到每一视频片段的目标传输码率。B204, when the sum of the code rates is not less than or not equal to the bandwidth capacity, traverse the video segment to reduce the quality level, and update the quality level and the code rate of the video segment according to the reduction processing result, new utility value and old utility value, and return to judge The step of whether the sum of the code rates is less than or equal to the bandwidth capacity, until the sum of the code rates is less than or equal to the bandwidth capacity, to obtain the target transmission code rate of each video segment.
In one example, when the sum of the code rates is not less than or equal to the bandwidth capacity, that is, $\sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)>B_{\max}$, the video segments are traversed and their quality levels are subjected to decrease processing. For example, the traversal starts from the N-th video segment in the M-th column of the video segment matrix formed by the M×N video segments and continues until the first video segment in the first column; during the traversal, decrease processing is performed on the quality level of a video segment, for example, decreasing the quality level $q_t(m,n)$ of a video segment yields the decrease processing result $q_t(m,n)-1$, which corresponds to the decreased quality level (similarly, the decrement can be adjusted as required; this application uses 1 as an example without limiting the decrement). Then, after the decrease processing, the quality level and code rate of the video segment are updated according to the decrease processing result, the new utility value and the old utility value, and the process returns to step B202 until $\sum_{m=1}^{M}\sum_{n=1}^{N} r_t(m,n)\le B_{\max}$, thereby obtaining the target transmission code rate of each video segment.
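Steps B202 to B204 amount to a greedy search over quality levels: start every segment at the lowest level, and then repeatedly raise (or, if the budget is already exceeded, lower) the quality of the segment whose change helps most, until the bandwidth budget is respected. The Python sketch below illustrates that greedy idea under simplifying assumptions; the quality-to-rate mapping rate_of and the utility function are toy stand-ins introduced here, not the mapping or utility of the patent.

```python
import numpy as np

def rate_of(quality, complexity):
    # Assumed stand-in for the mapping F between quality level, content
    # complexity and code rate.
    return 0.5 * quality * complexity

def utility(quality, prob, rates, delta=1.0):
    # Toy stand-in for formula (1): reward quality where viewing probability
    # is high, penalize the transmission cost of the allocated rates.
    return float(np.sum(prob * quality - delta * 0.1 * rates))

def greedy_allocate(prob, complexity, b_max, k_levels):
    q = np.ones(prob.shape)                        # B201: start at the lowest level
    rates = rate_of(q, complexity)
    if rates.sum() <= b_max:                       # B202 passed
        while True:                                # B203: raise the most helpful segment
            base = utility(q, prob, rates)
            best_idx, best_gain = None, 0.0
            for idx in np.ndindex(q.shape):
                if q[idx] + 1 > k_levels:          # B212: already at the top level
                    continue
                q[idx] += 1
                cand = rate_of(q, complexity)
                gain = utility(q, prob, cand) - base
                q[idx] -= 1
                if cand.sum() <= b_max and gain > best_gain:
                    best_idx, best_gain = idx, gain
            if best_idx is None:
                break                              # no feasible increase improves utility
            q[best_idx] += 1                       # B211: commit the best single increase
            rates = rate_of(q, complexity)
    else:                                          # B204: shed quality until feasible
        while rates.sum() > b_max and (q > 1).any():
            candidates = np.where(q > 1, prob, np.inf)
            q[np.unravel_index(np.argmin(candidates), q.shape)] -= 1
            rates = rate_of(q, complexity)
    return q, rates

prob = np.array([[0.4, 0.3], [0.2, 0.1]])
complexity = np.array([[1.0, 1.2], [0.8, 1.1]])
q, rates = greedy_allocate(prob, complexity, b_max=6.0, k_levels=5)
print(q)
print(round(float(rates.sum()), 2))
```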
It should be noted that, after step B202 or B203 is executed and the quality level and code rate of one or more video segments have been updated, an updated video segment matrix of M×N video segments and an updated quality level matrix are obtained, and the target transmission code rate of each video segment is determined from the updated quality level matrix. It should also be noted that the target transmission code rate refers to the finally determined code rate $r_t(m,n)$ of the video segment at time t, and the finally obtained target transmission code rate matrix is formed by these values for all M×N video segments.
在一示例中,步骤B203中根据增加处理结果、新效用值和旧效用值,更新视频片段的质量级以及码率,包括步骤B211或者B212:In an example, in step B203, update the quality level and code rate of the video segment according to the increase processing result, new utility value and old utility value, including step B211 or B212:
B211、当增加处理结果小于或等于质量等级阈值,根据增加处理结果对新效用值进行更新,计算更新后的新效用值与旧效用值的第一差值,确定视频片段的第一影响因子,将第一影响因子的最大值对应的视频片段作为第一更新视频片段,将第一更新视频片段的质量级更新为第一更新视频片段对应的增加处理结果,并根据第一更新视频片段对应的增加处理结果更新第一更新视频片段的码率。B211. When the increase processing result is less than or equal to the quality level threshold, update the new utility value according to the increase processing result, calculate the first difference between the updated new utility value and the old utility value, and determine the first impact factor of the video segment, Taking the video segment corresponding to the maximum value of the first impact factor as the first updated video segment, updating the quality level of the first updated video segment to the increase processing result corresponding to the first updated video segment, and according to the corresponding The code rate of the first updated video segment is updated by increasing the processing result.
在一示例中,质量等级阈值为质量级别K,当增加处理结果小于或等于质量等级阈值即
Figure PCTCN2022144132-appb-000030
假设此时遍历过程中m=M,n=N,对当前的视频片段的质量级
Figure PCTCN2022144132-appb-000031
进行增加处理后得到的质量级矩阵为
Figure PCTCN2022144132-appb-000032
利用公式(1)或公式(2)可以计算得到更新后的新效用值,即更新后的newU t。然后,根据更新后的newU t与旧效用值oldU t的第一差值,确定当前的视频片段(m=M,n=N)的第一影响因子diff t(m,n)。需要说明的是,在遍历开始前,可以生成一个初始的第一影响因子矩阵,然后在遍历过程中更新初始的第一影响因子矩阵中的每一个初始的第一影响因子。可以理解的是,在m=M以及n=N至m=1以及n=1的遍历过程中每一视频片段通过相同的处理过程可以得到每一视频片段对应的更新后的第一影响因子,构成第一影响因子矩阵记为
Figure PCTCN2022144132-appb-000033
第一影响因子矩阵
Figure PCTCN2022144132-appb-000034
中每一视频片段对应一个第一影响因子,表示视频片段进行增加处理后总效用的变化程度。
In one example, the quality level threshold is quality level K, when the added processing result is less than or equal to the quality level threshold, namely
Figure PCTCN2022144132-appb-000030
Assuming that m=M, n=N in the traversal process at this time, the quality level of the current video segment
Figure PCTCN2022144132-appb-000031
The quality level matrix obtained after adding processing is
Figure PCTCN2022144132-appb-000032
The updated new utility value, that is, the updated newU t , can be calculated by formula (1) or formula (2). Then, according to the first difference between the updated newU t and the old utility value oldU t , the first impact factor diff t (m, n) of the current video segment (m=M, n=N) is determined. It should be noted that, before the traversal starts, an initial first influence factor matrix may be generated, and then each initial first influence factor in the initial first influence factor matrix is updated during the traversal process. It can be understood that, in the traversal process from m=M and n=N to m=1 and n=1, each video segment can obtain the updated first impact factor corresponding to each video segment through the same process, Form the first impact factor matrix and write it as
Figure PCTCN2022144132-appb-000033
First Impact Factor Matrix
Figure PCTCN2022144132-appb-000034
Each video segment corresponds to a first impact factor, indicating the degree of change in the total utility of the video segment after the increase processing.
在一示例中,在视频片段的遍历完成确定第一影响因子矩阵
Figure PCTCN2022144132-appb-000035
后,寻找最大值的位置索引
Figure PCTCN2022144132-appb-000036
例如:若第一影响因子最大的视频片段的位置为(M-3,N-3),则此时m=M-3,n=N-3,将视频片段矩阵中位于(M-3,N-3)位置的视频片段作为第一更新视频片段,将第一更新视频片段的质量级更新为第一更新视频片段对应的增加处理结果
Figure PCTCN2022144132-appb-000037
并根据第一更新视频片段对应的增加处理结果更新第一更新视频片段的码率,
Figure PCTCN2022144132-appb-000038
或者可以表示为:根据第一更新视频片段对应的增加处理结果得到新的质量级矩阵
Figure PCTCN2022144132-appb-000039
从而确定更新后的码率矩阵
Figure PCTCN2022144132-appb-000040
然后返回步骤B202。其中,F()代表映射。
In an example, after the traversal of the video segment is completed, the first influencing factor matrix is determined
Figure PCTCN2022144132-appb-000035
After that, find the position index of the maximum value
Figure PCTCN2022144132-appb-000036
For example: if the position of the largest video segment of the first impact factor is (M-3, N-3), then m=M-3 at this moment, n=N-3, be positioned at (M-3, N-3 in the video segment matrix N-3) The video clip at the position is used as the first updated video clip, and the quality level of the first updated video clip is updated as the corresponding increase processing result of the first updated video clip
Figure PCTCN2022144132-appb-000037
And update the code rate of the first updated video segment according to the increase processing result corresponding to the first updated video segment,
Figure PCTCN2022144132-appb-000038
Or it can be expressed as: get a new quality level matrix according to the increase processing result corresponding to the first updated video segment
Figure PCTCN2022144132-appb-000039
So as to determine the updated code rate matrix
Figure PCTCN2022144132-appb-000040
Then return to step B202. Among them, F() stands for mapping.
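The selection rule in step B211, namely recompute the utility with one segment tentatively raised, record the change as that segment's first impact factor, and then commit the raise for the segment with the largest factor, can be sketched as follows; the toy utility function and all variable names here are assumptions used only for illustration.

```python
import numpy as np

def first_impact_factors(q, utility_fn, k_levels):
    """diff_t(m,n): change in total utility if segment (m,n) alone is raised
    by one quality level (left at -inf when the segment is already at level K)."""
    base = utility_fn(q)
    diff = np.full(q.shape, -np.inf)
    for idx in np.ndindex(q.shape):
        if q[idx] + 1 <= k_levels:
            q[idx] += 1
            diff[idx] = utility_fn(q) - base
            q[idx] -= 1
    return diff

def apply_best_increase(q, utility_fn, k_levels):
    diff = first_impact_factors(q, utility_fn, k_levels)
    best = np.unravel_index(np.argmax(diff), diff.shape)
    if np.isfinite(diff[best]):
        q[best] += 1          # update the quality level of the first updated segment
    return q, best

prob = np.array([[0.5, 0.3], [0.15, 0.05]])
toy_utility = lambda q: float(np.sum(prob * q) - 0.02 * q.sum())
q = np.ones((2, 2))
q, chosen = apply_best_increase(q, toy_utility, k_levels=3)
print(chosen, q)
```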
B212、当增加处理结果大于质量等级阈值,将视频片段的质量级作为更新后的质量级以及将视频片段的码率作为更新后的码率。B212. When the increase processing result is greater than the quality level threshold, use the quality level of the video segment as the updated quality level and the bit rate of the video segment as the updated bit rate.
In this embodiment of the present application, for example, the current video segment is the video segment in row M and column N; if $q_t(m,n)+1>K$, the quality level and code rate of the video segment are kept unchanged at this time, that is, the quality level of the video segment is taken as the updated quality level and the code rate of the video segment is taken as the updated code rate, and then the traversal moves to the video segment following the one in row M and column N, and the relationship between its increase processing result and the quality level threshold is judged.
在一示例中,步骤B211中对根据增加处理结果对新效用值进行更新,包括步骤B221-B226:In an example, in step B211, the new utility value is updated according to the increase processing result, including steps B221-B226:
B221、计算当前时刻的传输损耗以及计算当前时刻的非新鲜度。B221. Calculate the transmission loss at the current moment and calculate the staleness at the current moment.
在一示例中,当前时刻的传输损耗的计算步骤包括B2211-B2212,B2212包括步骤A1或者A2:In an example, the calculation steps of the transmission loss at the current moment include B2211-B2212, and B2212 includes step A1 or A2:
B2211、根据前一时刻的视频片段在边缘节点的存储状态,确定第一存储决策值。B2211. Determine a first storage decision value according to the storage status of the video segment at the edge node at the previous moment.
本申请实施例中,当前时刻记为t时刻,前一时刻记为t-1时刻,其他实施例中前一时刻可以为t-2时刻或者其他时刻,不作具体限定。In the embodiment of the present application, the current time is recorded as time t, and the previous time is recorded as time t-1. In other embodiments, the previous time may be time t-2 or other time, which is not specifically limited.
在一示例中,前一时刻的视频片段与边缘节点的存储状态包括存储于边缘节点或者不存储于边缘节点,当存储于边缘节点此时对应的第一存储决策值数值为1,即第一存储决策值数值c t-1(m,n)=1时表征视频片段存储于边缘节点;相反当第一存储决策值数值c t-1(m,n)=0时表征视频片段不存储于边缘节点,此时边缘节点需要从云端获取该视频片段。需要说明的是,第一存储决策值指的是前一时刻的视频片段的存储状态对应的存储决策值。 In an example, the storage state of the video segment and the edge node at the previous moment includes whether it is stored in the edge node or not stored in the edge node. When it is stored in the edge node, the corresponding first storage decision value is 1, that is, the first When the storage decision value c t-1 (m,n)=1, the representative video segment is stored in the edge node; on the contrary, when the first storage decision value c t-1 (m,n)=0, the representative video segment is not stored in The edge node, at this time, the edge node needs to obtain the video clip from the cloud. It should be noted that the first storage decision value refers to the storage decision value corresponding to the storage state of the video segment at the previous moment.
A1、当第一存储决策值表征视频片段存储于边缘节点,获取视频片段的存储码率,计算存储码率与视频片段的码率的码率差,根据第一存储决策值、码率差以及第一损耗系数,确定第一损耗,并根据第二损耗系数与视频片段的码率确定第二损耗,根据第一损耗和第二损耗的和,确定当前时刻的传输损耗。A1. When the first storage decision value indicates that the video clip is stored in the edge node, obtain the storage code rate of the video clip, and calculate the code rate difference between the stored code rate and the code rate of the video clip, according to the first storage decision value, the code rate difference and The first loss coefficient is used to determine the first loss, and the second loss is determined according to the second loss coefficient and the code rate of the video segment, and the transmission loss at the current moment is determined according to the sum of the first loss and the second loss.
A2、当第一存储决策值表征视频片段不存储于边缘节点,根据第三损耗系数以及视频片段的码率确定第三损耗,并根据第二损耗系数与视频片段的码率确定第二损耗,根据第三损耗和第二损耗的和,确定当前时刻的传输损耗。A2. When the first storage decision value indicates that the video segment is not stored in the edge node, the third loss is determined according to the third loss coefficient and the code rate of the video segment, and the second loss is determined according to the second loss coefficient and the code rate of the video segment, According to the sum of the third loss and the second loss, the transmission loss at the current moment is determined.
在一示例中,为了便于描述假设δ=1,此时公式(2)中t时刻的视频片段(m,n)的传输损耗表示为TC t(m,n),公式为: In one example, in order to facilitate the description, it is assumed that δ=1, the transmission loss of the video segment (m,n) at time t in the formula (2) is expressed as TC t (m,n), and the formula is:
$$TC_t(m,n)=\begin{cases}c_1\,c_{t-1}(m,n)\bigl(r_{t-1}(m,n)-r_t(m,n)\bigr)+c_2\,r_t(m,n), & c_{t-1}(m,n)=1\\[2pt] c_3\,r_t(m,n)+c_2\,r_t(m,n), & c_{t-1}(m,n)=0\end{cases}\qquad(6)$$
Here, $c_1$ is the first loss coefficient, $c_2$ is the second loss coefficient, and $c_3$ is the third loss coefficient. It should be noted that, when calculating the transmission loss of a video segment, $c_{t-1}(m,n)$ is the first storage decision value of the video segment at time t-1, $r_t(m,n)$ is the code rate of the video segment at time t, and $r_{t-1}(m,n)$ is the code rate of the video segment at time t-1; the sizes of the first, second and third loss coefficients can be adjusted as required. It can be understood that the transmission loss of the target transmission video is obtained by summing the transmission losses of all video segments. In one example, it can be seen from formula (6) that the transmission loss is divided into three parts; taking one video segment (m,n) as an example:
1. $c_1\,c_{t-1}(m,n)\bigl(r_{t-1}(m,n)-r_t(m,n)\bigr)$, the first loss, which represents the transcoding loss of the edge node;
2. $c_2\,r_t(m,n)$, the second loss, which represents the communication loss of the edge node transmitting the video segment to the user end;
3. $c_3\,r_t(m,n)$, the third loss, which represents the communication loss of the edge node obtaining the video segment from the cloud.
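A small sketch of the per-segment transmission loss just decomposed, assuming the piecewise form reconstructed in formula (6) above; the coefficient values and bitrates below are illustrative only.

```python
def transmission_loss(r_t, r_prev, c_prev, c1=0.2, c2=0.05, c3=0.5):
    """TC_t(m,n) for one segment: transcoding loss plus edge-to-user loss when
    the segment is cached at the edge (c_prev = 1), cloud-fetch loss plus
    edge-to-user loss otherwise."""
    if c_prev == 1:
        return c1 * c_prev * (r_prev - r_t) + c2 * r_t   # first loss + second loss
    return c3 * r_t + c2 * r_t                           # third loss + second loss

print(transmission_loss(r_t=2.0, r_prev=3.0, c_prev=1))  # cached segment
print(transmission_loss(r_t=2.0, r_prev=0.0, c_prev=0))  # segment fetched from the cloud
```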
Therefore, in step A1, when $c_{t-1}(m,n)=1$, the video segment is stored at the edge node, so only the first loss or the second loss can exist. When $r_{t-1}(m,n)$ is greater than or equal to $r_t(m,n)$, the edge node can transmit the video segment to the user end directly, which improves storage efficiency; otherwise, when $r_{t-1}(m,n)$ is less than $r_t(m,n)$, the segment still needs to be re-obtained from the cloud. When $r_{t-1}(m,n)$ is equal to $r_t(m,n)$, the first loss reaches its minimum value 0; when $r_{t-1}(m,n)$ is greater than $r_t(m,n)$, the edge node needs to transcode the video segment into the code rate allocated by the system before transmitting it to the user end. In one example, the storage code rate at this time is $r_{t-1}(m,n)$, that is, the video segment has already been stored at the edge node at $r_{t-1}(m,n)$; the code rate difference $r_{t-1}(m,n)-r_t(m,n)$ is calculated to determine the above first loss, the second loss is determined from the second loss coefficient $c_2$ and the code rate $r_t(m,n)$ of the video segment, and the transmission loss at the current moment is determined from the sum of the first loss and the second loss, that is, $TC_t(m,n)=c_1\,c_{t-1}(m,n)\bigl(r_{t-1}(m,n)-r_t(m,n)\bigr)+c_2\,r_t(m,n)$.
Similarly, in step A2, when $c_{t-1}(m,n)=0$, the video segment is not stored at the edge node; the edge node obtains the video segment from the cloud and stores it at $r_t(m,n)$. In this case the first loss does not exist, while the second loss and the third loss exist. In one example, the third loss is determined from the third loss coefficient $c_3$ and the code rate $r_t(m,n)$ of the video segment, the second loss is determined from the second loss coefficient $c_2$ and the code rate $r_t(m,n)$ of the video segment, and the transmission loss at the current moment is determined from the sum of the third loss and the second loss, that is, $TC_t(m,n)=c_3\,r_t(m,n)+c_2\,r_t(m,n)$.
在一示例中,步骤B221中计算当前时刻的非新鲜度包括步骤B2213-B2215,B2215包括步骤B1或者B2:In an example, calculating the non-freshness at the current moment in step B221 includes steps B2213-B2215, and B2215 includes steps B1 or B2:
B2213、确定前一时刻的非新鲜度。B2213. Determine the non-freshness at the previous moment.
同样地,设前一时刻为t-1时刻,若前一时刻视频片段存储于边缘节点则通过非新鲜度公式计算前一时刻的非新鲜度uf t-1(m,n);若前一时刻视频片段不存储于边缘节点,则非新鲜度uf t-1(m,n)为0,在其他实施例中非新鲜度可以为其他数值,例如接近于0的较小数值。 Similarly, assuming that the previous moment is t-1, if the video segment at the previous moment is stored in the edge node, the non-freshness degree uf t-1 (m,n) at the previous moment is calculated by the non-freshness formula; if the previous When the video segment is not stored in the edge node, the non-freshness uft -1 (m,n) is 0. In other embodiments, the non-freshness can be other values, such as a smaller value close to 0.
B2214、根据当前时刻视频片段在边缘节点的存储状态,确定第二存储决策值。B2214. Determine a second storage decision value according to the storage status of the video segment at the edge node at the current moment.
在一示例中,如步骤B2211的方法,确定第二存储决策值,第二存储决策值为1时表征前一时刻视频片段存储于边缘节点,当第二存储决策值为0时表征前一时刻视频片段不存储于边缘节点。需要说明的是,第二存储决策值指的是当前时刻的视频片段的存储状态对应的存储决策值。In one example, as in the method of step B2211, the second storage decision value is determined. When the second storage decision value is 1, it indicates that the video segment is stored in the edge node at the previous moment, and when the second storage decision value is 0, it indicates the previous moment. Video clips are not stored on edge nodes. It should be noted that the second storage decision value refers to a storage decision value corresponding to the storage state of the video segment at the current moment.
B1、当第二存储决策值表征视频片段存储于边缘节点,计算预设新鲜度增长因子与观看概率的乘积,并根据乘积与前一时刻的非新鲜度的和,确定当前时刻的非新鲜度。B1. When the second storage decision value indicates that the video segment is stored in the edge node, calculate the product of the preset freshness growth factor and the viewing probability, and determine the non-freshness at the current moment according to the sum of the product and the non-freshness at the previous moment .
B2、或者,当第二存储决策值表征视频片段不存储于边缘节点,确定当前时刻的非新鲜度为0。B2. Alternatively, when the second storage decision value indicates that the video segment is not stored in the edge node, it is determined that the non-freshness at the current moment is 0.
本申请实施例中,非新鲜度的计算公式为:In the embodiment of the present application, the calculation formula of non-freshness is:
$$uf_t(m,n)=\begin{cases}uf_{t-1}(m,n)+\alpha\, p_t(m,n), & c_t(m,n)=1\\[2pt] 0, & c_t(m,n)=0\end{cases}\qquad(7)$$
其中,uf t(m,n)为当前时刻即t时刻的视频片段(m,n)的非新鲜度,当第二存储决策值为0时,非新鲜度为0;当c t(m,n)为1时通过上述公式(7)计算,p t(m,n)为视频片段(m,n)的观看概率;α为预设新鲜度增长因子,uf t-1(m,n)为前一时刻的视频片段的非新鲜度。可以理解的是,每一视频片段的非新鲜度可以通过公式(7)计算得到;在一示例中,uf 0(m,n)为0,前一时刻的非新鲜度可以通过公式(7)计算得到。需要说明的是,非新鲜度可以用于反映视频片段过时的程度,存储在边缘节点的视频片段内容新鲜度较低,过时的视频片段内容会削弱用户观看的体验质量。另外,从公式(7)可以知道uf t(m,n)与视频片段的观看概率p t(m,n)以及视频片段在边缘节点的存储状态有关,当用户对观看视野内的片段的画面质量敏感,在用户RoI内的画面内容需要不断更新以呈现用户最新鲜的内容,因此p t(m,n)越大,视频片段的新鲜度损耗越快。 Among them, uft (m, n) is the non-freshness of the video segment (m, n) at the current moment, that is, t moment, when the second storage decision value is 0, the non-freshness is 0; when c t (m, When n) is 1, it is calculated by the above formula (7), p t (m,n) is the viewing probability of the video clip (m,n); α is the preset freshness growth factor, uf t-1 (m,n) is the non-freshness of the video segment at the previous moment. It can be understood that the non-freshness of each video segment can be calculated by formula (7); in one example, uf 0 (m,n) is 0, and the non-freshness of the previous moment can be calculated by formula (7) calculated. It should be noted that the non-freshness can be used to reflect the outdated degree of video clips. The freshness of video clips stored in edge nodes is low, and outdated video clips will weaken the quality of user viewing experience. In addition, from formula (7), it can be known that uft (m, n) is related to the viewing probability p t (m, n) of the video segment and the storage status of the video segment at the edge node. Quality-sensitive, the picture content in the user RoI needs to be constantly updated to present the freshest content of the user, so the larger p t (m,n) is, the faster the freshness of the video clip will be lost.
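The recursive update of formula (7) can be sketched directly; the freshness growth factor α and the probability values below are illustrative assumptions.

```python
def update_non_freshness(uf_prev, p_t, c_t, alpha=0.1):
    """uf_t(m,n) = uf_{t-1}(m,n) + alpha * p_t(m,n) while the segment stays
    cached at the edge (c_t = 1); it is 0 when the segment is not stored."""
    return uf_prev + alpha * p_t if c_t == 1 else 0.0

uf = 0.0
for step, (p, cached) in enumerate([(0.3, 1), (0.3, 1), (0.1, 1), (0.1, 0)]):
    uf = update_non_freshness(uf, p, cached)
    print(step, round(uf, 3))   # grows while cached, resets once the segment is not stored
```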
B222、根据当前时刻的非新鲜度、观看概率以及当前时刻的质量级,确定有效质量。B222. Determine the effective quality according to the non-freshness at the current moment, the viewing probability, and the quality level at the current moment.
本申请实施例中,有效质量的计算公式为:In the embodiment of the present application, the calculation formula of the effective mass is:
Figure PCTCN2022144132-appb-000068
Figure PCTCN2022144132-appb-000068
Here, $EQ_t(m,n)$ is the effective quality of video segment (m,n) at time t, and μ is a factor of the effective quality. It should be noted that, when the sub-steps of step B221 calculate the effective quality, the quality level $q_t(m,n)$ at the current moment is the quality level in step B201, or it can also be the increase processing result in step B203. It can be understood that the effective quality of each video segment can be calculated by the above formula (8). In this embodiment of the present application, the purpose of the effective quality is to assign a higher quality level to the video segments in the user's region of attention while ensuring that their freshness loss is small: if the quality level of a video segment is high but its content is outdated, the effective quality of the segment deteriorates, so a high quality level needs to be assigned to the video segments in the user's region of interest to ensure that the user can clearly see their content.
需要说明的是,本申请实施例中,将非新鲜度作为非新鲜度因子用于计算有效质量,而有效质量用于确定用户观看质量,即用户观看质量根据非新鲜度因子所确定。相关技术中,考虑到移动边缘计算(Mobile Edge Computing,MEC)就近服务的优点,通常通过将多媒体内容如热门视频提前缓存至基站处的MEC服务器中,以达到增强用户观看体验的目的。但是这些策略适用于视频已经存在的视频,即在用户发送内容请求前,边缘节点已经获取视频片被观看概率的信息,因此不能适用于直播视频等实时视频,因为实时视频的信息(比如内容、被观看概率)实时更新,播放的同时被记录下来,不可提前获取。然而,一些视频内容通常在新鲜时价值最高,例如在自动驾驶中,实时了解机动车的位置、方向和速度是必不可少的,当视频片段存储在边缘节点时间较长时,视频片的新鲜度下降,这将影响用户的观看质量,因此本申请实施例中引入非新鲜度因子,可以描述已存储在边缘节点的视频片段以及未存在边缘节点的视频片段的内容过时程度,将非新鲜度因子应用于码率分配以及存储决策的存储方法中,能够提高用户观看视频的质量以及降低传输损耗。It should be noted that, in this embodiment of the present application, non-freshness is used as a non-freshness factor to calculate effective quality, and effective quality is used to determine user viewing quality, that is, user viewing quality is determined according to non-freshness factor. In related technologies, considering the advantages of mobile edge computing (Mobile Edge Computing, MEC) nearby services, multimedia content such as popular videos is usually cached in the MEC server at the base station in advance to achieve the purpose of enhancing the viewing experience of users. However, these strategies are applicable to videos that already exist, that is, before the user sends a content request, the edge node has already obtained the information about the probability of the video piece being watched, so it cannot be applied to real-time videos such as live videos, because real-time video information (such as content, Probability of being watched) is updated in real time, recorded while playing, and cannot be obtained in advance. However, some video content is usually the most valuable when it is fresh. For example, in autonomous driving, it is essential to know the position, direction and speed of the motor vehicle in real time. When the video clips are stored in the edge node for a long time, the freshness of the video clips This will affect the user's viewing quality. Therefore, the non-freshness factor is introduced in the embodiment of this application, which can describe the obsolete degree of the content of the video clips that have been stored in the edge node and the video clips that do not exist in the edge node. The non-freshness The factor is applied to the storage method of code rate allocation and storage decision, which can improve the quality of video watched by users and reduce transmission loss.
B223. Obtain the quality level of the video segment at the previous moment, calculate the first subjective quality at the current moment and the second subjective quality at the previous moment according to the quality level at the current moment, the viewing probability and the quality level at the previous moment, and determine the temporal quality loss according to the difference between the first subjective quality and the second subjective quality.
In the embodiment of the present application, the calculation formula of the subjective quality is:
Figure PCTCN2022144132-appb-000070
The calculation formula of the temporal quality loss is:
TQ_t(m,n) = |Q_t(m,n) - Q_{t-1}(m,n)|     (10)
Among them, Q_t(m,n) is the subjective quality (i.e., the first subjective quality) of the video segment (m,n) at time t (the current moment), TQ_t(m,n) is the temporal quality loss of the video segment (m,n) at time t, and Q_{t-1}(m,n) is the subjective quality (i.e., the second subjective quality) of the video segment (m,n) at time t-1 (the previous moment). Therefore, the first subjective quality can be calculated by formula (9); the second subjective quality can be calculated by formula (9) from the quality level of the video segment at the previous moment and the viewing probability at the previous moment (the viewing probability at the current moment can be used as an approximation); and the temporal quality loss of the video segment can then be calculated with formula (10). It should be noted that the subjective quality of a video segment reflects how sensitive the user is to that segment. In the embodiment of the present application, the viewing probability factor of the video segment is normalized to the interval 0-1 by an exponential mapping with base e, and Q_t(m,n) is inversely related to p_t(m,n): if the importance of a video segment is low, then even if that segment is assigned a lower
Figure PCTCN2022144132-appb-000071
the subjective quality Q_t(m,n) is still acceptable to the user. In addition, since users are sensitive to quality differences between adjacent moments, if the subjective quality differs greatly between adjacent moments, the quality difference will still negatively affect the viewing experience even when the video segment is assigned a high quality; the temporal quality loss is therefore defined as formula (10).
B224. Calculate the subjective quality mean according to the first subjective quality and the total number of video segments, and determine the spatial quality loss according to the difference between the first subjective quality and the subjective quality mean.
In the embodiment of the present application, the calculation formula of the spatial quality loss is:
Figure PCTCN2022144132-appb-000072
Among them, SQ_t(m,n) is the spatial quality loss of the video segment (m,n) at time t,
Figure PCTCN2022144132-appb-000073
is the subjective quality mean, M·N is the total number of video segments, and Q_t(m,n) at time t is the above-mentioned first subjective quality. It should be noted that, besides quality differences between adjacent moments weakening the viewing experience, spatial differences in the subjective quality of the video segments also affect the user's viewing experience, so the spatial quality loss needs to be considered.
B225. Obtain the user viewing quality according to the effective quality, the temporal quality loss, the spatial quality loss and the corresponding preset weights.
In an example, the user viewing quality QoP_t(m,n) in formulas (1) and (2) is calculated as:
QoP_t(m,n) = β_1·EQ_t(m,n) - β_2·TQ_t(m,n) - β_3·SQ_t(m,n)     (12)
It should be noted that the preset weights corresponding to the effective quality, the temporal quality loss and the spatial quality loss may be the same or different. In the embodiment of the present application, the preset weight corresponding to the effective quality is the preset first weight β_1, the preset weight corresponding to the temporal quality loss is the preset second weight β_2, and the preset weight corresponding to the spatial quality loss is the preset third weight β_3. In the embodiment of the present application, the user viewing quality corresponding to each video segment can be calculated separately by formula (12).
B226. Obtain the updated new utility value according to the difference between the user viewing quality and the transmission loss at the current moment.
In an example, the updated new utility value, i.e., the updated newU_t, can be calculated by formula (1). It should be noted that the embodiment of the present application takes δ = 1 as an example; when δ is not 1, the difference calculated for each video segment is the user viewing quality minus the product of δ and the transmission loss at the current moment, and the updated new utility value is obtained from the sum of these differences.
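As a concrete illustration of steps B225-B226, the following Python sketch combines formula (12) with the summation just described. It is a minimal sketch under stated assumptions: the effective quality EQ, the subjective qualities Q and Q_prev, and the per-segment transmission loss TL are taken as precomputed M×N arrays, because their own formulas (8), (9) and (11) appear above only as image placeholders; the spatial loss is assumed to be the absolute deviation from the mean subjective quality; and all function and variable names are illustrative rather than taken from the original.

```python
import numpy as np

def viewing_quality(EQ, Q, Q_prev, beta1, beta2, beta3):
    """Per-segment user viewing quality QoP_t(m,n), following formula (12).

    EQ, Q and Q_prev are MxN arrays of effective quality and subjective
    quality at t and t-1; their formulas (8) and (9) are not reproduced
    here, so they are treated as precomputed inputs.
    """
    TQ = np.abs(Q - Q_prev)       # temporal quality loss, formula (10)
    SQ = np.abs(Q - Q.mean())     # spatial quality loss, assumed |Q - mean(Q)|
    return beta1 * EQ - beta2 * TQ - beta3 * SQ

def new_utility(QoP, TL, delta=1.0):
    """Updated new utility value: the sum over all segments of QoP minus
    delta times the transmission loss at the current moment."""
    return float(np.sum(QoP - delta * TL))
```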
In an example, updating the quality level and the code rate of the video segment in step B204 according to the reduction processing result, the new utility value and the old utility value includes step B231 or step B232:
B231. When the reduction processing result is greater than or equal to the preset threshold, update the new utility value according to the reduction processing result, calculate the second difference between the updated new utility value and the old utility value, determine the second impact factor of each video segment, take the video segment corresponding to the maximum value of the second impact factor as the second updated video segment, update the quality level of the second updated video segment to the reduction processing result corresponding to the second updated video segment, and update the code rate of the second updated video segment according to the reduction processing result corresponding to the second updated video segment.
In an example, the preset threshold can be set as required; in the embodiment of the present application, the preset threshold of 1, i.e., the lowest quality level, is taken as an example. In an example, when the reduction processing result
Figure PCTCN2022144132-appb-000074
is greater than or equal to 1, the new utility value newU_t from step B201 is updated according to the reduction processing result
Figure PCTCN2022144132-appb-000075
and the second impact factor diff_t'(m,n) of each video segment is determined according to the second difference between the updated new utility value and the old utility value. The video segment corresponding to the maximum value of the second impact factor is taken as the second updated video segment, the quality level of the second updated video segment is updated to the reduction processing result corresponding to the second updated video segment, and the code rate of the second updated video segment is then updated according to that reduction processing result. It should be noted that the second impact factor diff_t'(m,n) is determined in a similar way to the first impact factor diff_t(m,n): by traversing the video segments and using the difference between the updated new utility value and the old utility value oldU_t, the second impact factor diff_t'(m,n) of each video segment is determined, thereby obtaining a new second impact factor matrix
Figure PCTCN2022144132-appb-000076
In the second impact factor matrix
Figure PCTCN2022144132-appb-000077
each video segment corresponds to one second impact factor, which indicates how much the total utility changes after that video segment undergoes the reduction processing. The position index of the maximum value is then found,
Figure PCTCN2022144132-appb-000078
for example m=M-3, n=N-3; in that case the video segment located at position (M-3, N-3) in the video segment matrix is taken as the second updated video segment, the quality level of the second updated video segment is updated to the reduction processing result corresponding to the second updated video segment
Figure PCTCN2022144132-appb-000079
and the code rate of the second updated video segment is updated according to that reduction processing result,
Figure PCTCN2022144132-appb-000080
Figure PCTCN2022144132-appb-000081
Equivalently, a new quality level matrix is obtained according to the reduction processing result corresponding to the second updated video segment
Figure PCTCN2022144132-appb-000082
so as to determine the updated code rate matrix
Figure PCTCN2022144132-appb-000083
and the flow then returns to step B202. Among them, F() denotes the mapping (from quality level to code rate).
B232. When the reduction processing result is less than the preset threshold, keep the quality level of the video segment as the updated quality level and the code rate of the video segment as the updated code rate.
In the embodiment of the present application, suppose the current video segment is the one in the M-th row and N-th column; if
Figure PCTCN2022144132-appb-000084
then the quality level and code rate of this video segment are kept unchanged, that is, the quality level of the video segment is taken as the updated quality level and the code rate of the video segment is taken as the updated code rate, and the traversal then moves on to the video segment following the one in the M-th row and N-th column to judge the relationship between its reduction processing result and the quality level threshold.
In the embodiment of the present application, the detailed flow of the code rate allocation is as follows:
Input:
Figure PCTCN2022144132-appb-000085
B_max (
Figure PCTCN2022144132-appb-000086
is the matrix formed by the quality level of each video segment at time t-1,
Figure PCTCN2022144132-appb-000087
is the matrix formed by c_{t-1}(m,n) of each video segment at time t-1,
Figure PCTCN2022144132-appb-000088
is the matrix formed by uf_{t-1} of each video segment at time t-1, and B_max is the maximum bandwidth)
Output:
Figure PCTCN2022144132-appb-000089
Figure PCTCN2022144132-appb-000090
Figure PCTCN2022144132-appb-000091
Among them, the final output/return value
Figure PCTCN2022144132-appb-000092
is the final quality level matrix
Figure PCTCN2022144132-appb-000093
and Utility() denotes a calculation using formula (1) or (2). It should be noted that when
Figure PCTCN2022144132-appb-000094
lines 6 to 11 of the above code are executed while traversing the video segments, lines 12 to 14 are executed after one traversal is completed, and the flow then returns to the judgment at line 5; once the judgment at line 5 yields
Figure PCTCN2022144132-appb-000095
the final quality level matrix
Figure PCTCN2022144132-appb-000096
is output. Conversely, when
Figure PCTCN2022144132-appb-000097
lines 17 to 22 of the above code are executed while traversing the video segments, lines 23 to 25 are executed after one traversal is completed, and the flow then returns to the judgment at line 16; once the judgment at line 16 yields
Figure PCTCN2022144132-appb-000098
the final quality level matrix
Figure PCTCN2022144132-appb-000099
is output.
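Since the pseudocode referenced above (lines 5-25) is available here only as image placeholders, the following Python sketch restates the overall allocation loop of steps B202-B204 in executable form. It is a simplified sketch under stated assumptions: utility stands for the total-utility calculation of formula (1)/(2), rate_of stands for the quality-level-to-code-rate mapping F, and l_min/l_max are the lowest and highest quality levels; how the final overshooting step is handled (for example whether the last increase is reverted) is not specified in the text and is left out here. All names are illustrative.

```python
import numpy as np

def _best_step(levels, utility, step, l_min, l_max):
    """Return (m, n, gain) for the single +/-1 quality-level change with the
    largest utility gain (the impact factor of steps B203/B231), or None if
    no segment can be changed in that direction."""
    base = utility(levels)
    best = None
    for (m, n), level in np.ndenumerate(levels):
        if l_min <= level + step <= l_max:
            trial = levels.copy()
            trial[m, n] = level + step
            gain = utility(trial) - base
            if best is None or gain > best[2]:
                best = (m, n, gain)
    return best

def allocate_bitrate(levels_prev, utility, rate_of, B_max, l_max, l_min=1):
    """Greedy quality-level adjustment sketch of steps B202-B204.

    levels_prev : MxN integer array of quality levels at time t-1
    utility     : callable mapping a level matrix to the total utility
    rate_of     : callable mapping a quality level to a code rate (mapping F)
    """
    levels = levels_prev.copy()

    def total_rate(lv):
        return np.vectorize(rate_of)(lv).sum()

    if total_rate(levels) <= B_max:
        # raise the most beneficial segment until the bandwidth is exceeded
        while total_rate(levels) <= B_max:
            step = _best_step(levels, utility, +1, l_min, l_max)
            if step is None:
                break
            levels[step[0], step[1]] += 1
    else:
        # lower the least harmful segment until the code rates fit the bandwidth
        while total_rate(levels) > B_max:
            step = _best_step(levels, utility, -1, l_min, l_max)
            if step is None:
                break
            levels[step[0], step[1]] -= 1
    return levels, np.vectorize(rate_of)(levels)
```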
Embodiment Two
As shown in Figure 2, the embodiment of the present application further provides a storage method, including at least but not limited to steps S300-S600:
S300. Obtain the viewing probabilities of all video segments in the target transmission video and the code rate allocation result by means of the code rate allocation method.
It should be noted that the code rate allocation method refers to the code rate allocation method of the foregoing embodiment, and the code rate allocation result includes the target transmission code rate of each video segment (m,n)
Figure PCTCN2022144132-appb-000100
as well as the target quality level
Figure PCTCN2022144132-appb-000101
S400. Predict the expected effective quality and the expected transmission loss according to the code rate allocation result and the viewing probability.
In the embodiment of the present application, when making a storage decision for the edge node, time t is assumed to be the current moment. Since the effective quality and transmission loss of the video segments at time t+1 (the next moment) are unknown, the code rate allocation result, the video segments and the target quality level at time t are used for prediction. Among them,
Figure PCTCN2022144132-appb-000102
and
Figure PCTCN2022144132-appb-000103
denote the expected effective quality and the expected transmission loss of the video segment (m,n), respectively; they predict the influence of the video segments stored at time t on the effective quality of the video segments at time t+1 and the transmission loss of obtaining the video segments from the cloud.
In an example, step S400 includes steps B401-B403:
B401. Obtain the non-freshness at the current moment and the fourth storage decision value of the video segment at the current moment, and determine the predicted non-freshness according to the non-freshness at the current moment, the viewing probability and the fourth storage decision value.
In an example, the formula of the predicted non-freshness
Figure PCTCN2022144132-appb-000104
is:
Figure PCTCN2022144132-appb-000105
Among them, uf_t(m,n) is the non-freshness at the current moment, c_t(m,n) is the fourth storage decision value, α is the growth factor of the non-freshness, and p_t(m,n) is the viewing probability. It should be noted that the fourth storage decision value refers to the storage decision value corresponding to the storage state of the video segment at the current moment, and is the same as the second storage decision value.
B402. Determine the expected effective quality according to the predicted non-freshness, the viewing probability and the target quality level.
In an example, the formula of the expected effective quality
Figure PCTCN2022144132-appb-000106
is:
Figure PCTCN2022144132-appb-000107
Among them, μ is a factor of the effective quality, and
Figure PCTCN2022144132-appb-000108
is the target quality level.
B403. Determine the expected transmission loss according to the target transmission code rate, the fourth storage decision value, the second loss coefficient and the third loss coefficient.
In an example, the formula of the expected transmission loss
Figure PCTCN2022144132-appb-000109
is:
Figure PCTCN2022144132-appb-000110
Among them, c_2 is the second loss coefficient, c_3 is the third loss coefficient,
Figure PCTCN2022144132-appb-000111
is the target transmission code rate, and c_t(m,n) is the fourth storage decision value.
S500. Perform storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node to obtain storage decision information.
In the embodiment of the present application, the storage decision information includes the third storage decision value c_t'(m,n) of each video segment (m,n), and these values form the third storage decision value matrix
Figure PCTCN2022144132-appb-000112
The third storage decision value indicates whether the video segment needs to be stored in the edge node; for example, a third storage decision value of 1 indicates that the segment needs to be stored in the edge node, and a third storage decision value of 0 indicates that it does not. It should be noted that the third storage decision value refers to the storage decision value of each video segment obtained after the storage optimization processing.
In an example, the goal of the storage method is to improve the expected effective quality and reduce the expected transmission loss, so the storage optimization problem can be expressed as:
Figure PCTCN2022144132-appb-000113
Figure PCTCN2022144132-appb-000114
Figure PCTCN2022144132-appb-000115
Among them, β_1 is the preset first weight, s_t(m,n) is the size of the video segment (m,n), C_max is the maximum storage capacity of the edge node (i.e., the maximum capacity), and δ is the weight of the transmission loss. It can be understood that after solving
Figure PCTCN2022144132-appb-000116
and
Figure PCTCN2022144132-appb-000117
for each video segment, formula (16) can be used for the storage optimization processing. The constraint stipulates that the total size of the stored video segments cannot exceed the storage capacity of the edge node, and the caching strategy to be solved in the storage problem is a binary variable matrix; in the embodiment of the present application, the third storage decision value is solved by a branch and bound algorithm.
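Because formula (16) and its constraints appear above only as image placeholders, the following LaTeX restatement is a reconstruction from the surrounding description (expected effective quality weighted by β_1, expected transmission loss weighted by δ, a capacity constraint over the sizes of the stored segments, and binary decision variables); the exact published expression may differ.

```latex
\max_{c'_t(m,n)\in\{0,1\}} \;\sum_{m=1}^{M}\sum_{n=1}^{N}
  \Big( \beta_1\,\widehat{EQ}_{t+1}(m,n) \;-\; \delta\,\widehat{TL}_{t+1}(m,n) \Big)
\qquad \text{s.t.} \quad \sum_{m=1}^{M}\sum_{n=1}^{N} s_t(m,n)\,c'_t(m,n) \le C_{\max}
```

Here the expected effective quality and expected transmission loss of each segment implicitly depend on its decision value c'_t(m,n), in the sense of the two cases given in formulas (17) and (18) below.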
In an example, step S500 includes steps B511-B513:
B511. Determine the expected utility according to the expected effective quality and the expected transmission loss, determine the storage gain according to the expected utilities corresponding to the video segment being stored and not being stored in the edge node at the current moment, and determine the video value of each video segment according to the storage gain.
In an example, the expected utility
Figure PCTCN2022144132-appb-000118
is divided into two cases. The expected utility in the first case is denoted the first expected utility DC_t(m,n), i.e., the expected utility when the video segment is stored in the edge node at the current moment; the expected utility in the second case is denoted the second expected utility DC_t'(m,n), i.e., the expected utility when the video segment is not stored in the edge node at the current moment. In an example:
1) When the video segment is stored in the edge node at time t (i.e., the current moment), that is, the fourth storage decision value c_t(m,n)=1:
Figure PCTCN2022144132-appb-000119
Among them, μ is a factor of the effective quality, β_1 is the preset first weight, δ is the weight of the transmission loss, α is the growth factor of the non-freshness, and c_2 is the second loss coefficient.
2) When the video segment is not stored in the edge node at time t (i.e., the current moment), that is, the fourth storage decision value c_t(m,n)=0:
Figure PCTCN2022144132-appb-000120
Among them, c_3 is the third loss coefficient.
In the embodiment of the present application, the calculation formula of the video value is:
Figure PCTCN2022144132-appb-000121
Among them, v_t(m,n) is the video value of the video segment (m,n), and DC_t(m,n)-DC_t'(m,n) is the storage gain. It should be noted that the embodiment of the present application formulates the problem as a 0-1 knapsack problem and solves it with a 0-1 branch and bound algorithm to obtain the storage decision information. C_max is therefore taken as the size of the knapsack, and each video segment corresponds to an item to be put into the knapsack, with its size and value defined as w_t(m,n) and v_t(m,n) respectively, where w_t(m,n)=s_t(m,n). The goal of solving for the optimal solution to obtain the storage decision information is to keep the total size of the selected video segments no larger than C_max while making the sum of their values as large as possible.
B512. Obtain the sizes of all the video segments and determine the value density according to the size of each video segment and the storage gain.
In the embodiment of the present application, the formula of the value density ds_t(m,n) is:
Figure PCTCN2022144132-appb-000122
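As a small illustration of steps B511-B512, the following Python sketch builds the knapsack items from the per-segment quantities above. It is a sketch under stated assumptions: DC_t(m,n) and DC_t'(m,n) (formulas (17)/(18)) are taken as precomputed arrays, the video value is taken to be the storage gain itself and the value density to be its ratio to the segment size, which matches the surrounding prose but assumes the exact forms of formulas (19) and (20); all names are illustrative.

```python
import numpy as np

def knapsack_items(DC_store, DC_skip, sizes):
    """Build 0-1 knapsack items from the per-segment expected utilities.

    DC_store, DC_skip : MxN arrays of DC_t(m,n) and DC_t'(m,n)
    sizes             : MxN array of segment sizes s_t(m,n)
    """
    gain = DC_store - DC_skip       # storage gain DC_t(m,n) - DC_t'(m,n)
    values = gain                   # video value v_t(m,n), assumed equal to the gain
    weights = sizes                 # w_t(m,n) = s_t(m,n)
    densities = values / weights    # value density ds_t(m,n), assumed value per unit size
    return values, densities, weights
```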
B513. Obtain the third storage decision value of each video segment through a branch and bound algorithm according to the video value and the value density.
In the embodiment of the present application, the third storage decision value calculated through the branch and bound algorithm may be a first value or a second value, for example the first value is 1 and the second value is 0. In an example, when the third storage decision value c_t'(m,n) of a video segment is 1, the video segment needs to be stored in the edge node; when the third storage decision value c_t'(m,n) is 0, the video segment does not need to be stored in the edge node.
In an example, the branch and bound algorithm actually generates a binary tree with multiple nodes, and the processing flow is as follows (a simplified code sketch is given after step 3):
1) Arrange the video segments in order of value density (for example from high to low: nodesList=upToLow(ds_t(m,n))), and initialize the upper bound value upprofit with nodesList and C_max, so that upprofit takes the maximum value obtainable from (nodesList, C_max) as the initial upper bound value.
2) Initialize the queue pq:
It should be noted that the initialized queue pq contains one node initialized to 0 (referred to as the initial node). The data type of a node in the binary tree is a structure containing variables such as the stored quantities (for example weight and value) and the allocated code rate, and the stored quantities of the initial node are all 0. The initial node is assigned to the parent node during initialization; in the subsequent traversals, pq.get() returns the head node of the queue pq as the new parent node and deletes that head node, and the remaining nodes in the queue pq move forward. The left and right nodes are first initialized to 0 in each round of iteration; each time it is judged whether the left and right nodes may contain a solution, new nodes are added to the queue, and the parent node is updated in turn.
3) Iterative process:
In an example, after initialization the initial node is taken as the parent node, and the left node is then judged according to the video segment with the highest value density in nodesList: a tentative left node is generated, the video segment with the highest value density is taken as the segment to be confirmed, and it is checked whether the total capacity after the segment to be confirmed is put into the knapsack is less than C_max. Here, the total capacity after putting it into the knapsack refers to the sum of the sizes of all nodes on the branch of the binary tree where the tentative left node is located; for example, at this point the total capacity is the sum of the size of the segment to be confirmed and the size of the initial node. When the total capacity after putting it into the knapsack is less than C_max, the tentative left node is confirmed as an actually generated left node, the storage decision value of the segment to be confirmed is marked as the first value, and the left node is added to the queue pq; at this point the left node stores the size and the video value of the segment to be confirmed. The value threshold, i.e., the upper bound value upprofit, is then updated using the left node to generate a new value threshold upprofit. The upprofit value threshold represents the maximum video value the current edge node can accommodate; in solving the 0-1 knapsack problem it is the maximum value of the nodes that can still be loaded into the remaining knapsack capacity, computed by a greedy algorithm. It should be noted that when the total capacity after putting the segment to be confirmed into the knapsack is greater than or equal to C_max, that is, the sum of the size of the segment to be confirmed and the size of the initial node is greater than or equal to C_max, no actual left node is generated and the upper bound value upprofit is not updated.
In an example, after judging the left node, the video segment with the highest value density is likewise taken as the segment to be confirmed and used to judge the right node: a tentative right node is generated, and it is judged whether the maximum utility value obtained when the segment to be confirmed is not put into the knapsack is greater than the updated upper bound value upprofit. If it is greater, the tentative right node is confirmed as an actually generated right node; at this point the right node stores the size and the video value of the segment to be confirmed, the storage decision value of the segment to be confirmed is marked as the second value, and the right node is added to the queue pq. If the maximum utility value obtained when the segment to be confirmed is not put into the knapsack is less than or equal to the updated upper bound value upprofit, no actual right node is generated. It should be noted that the maximum utility value refers to the sum of the video values of all nodes on the branch of the binary tree where the tentative right node is located; for example, in the above process the maximum utility value is the sum of the video values of the segment to be confirmed and the initial node.
It should be noted that, in the embodiment of the present application, the queue uses Python's queue data structure Queue (pq), which follows first-in, first-out order; pq.get() returns the head node of the queue pq and deletes it, and the new parent node is then determined from the queue pq. In the iterative process 3), the parent node is deleted, the first node in the queue becomes the new parent node, and the above steps of judging the left node and judging the right node are executed. Each time a segment to be confirmed is determined, the video segment with the highest value density among those that have not yet served as a segment to be confirmed is taken as the new segment to be confirmed, and the steps of judging the left node and judging the right node are carried out, iterating continuously. During the iteration, whenever an actual left node is generated the corresponding segment to be confirmed is marked with the first value, and whenever an actual right node is generated the corresponding segment to be confirmed is marked with the second value; the iteration ends when the queue pq becomes empty, yielding the third storage decision value of each video segment.
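For readers who want an executable reference point, the following Python sketch implements a generic breadth-first 0-1 knapsack branch and bound that follows the structure described in steps 1)-3): segments sorted by value density, a FIFO queue.Queue of tree nodes, a left child that puts the segment into the knapsack and a right child that skips it, and a greedy fractional bound used for pruning. It is a textbook-style sketch, not the patent's exact implementation; in particular the upprofit bookkeeping above is only paraphrased, and all names are illustrative.

```python
from queue import Queue

def fractional_bound(level, profit, weight, items, C_max):
    """Greedy (fractional-knapsack) upper bound on the profit reachable from a
    node that has already decided items[0:level]."""
    if weight >= C_max:
        return profit
    bound, total_w = profit, weight
    for value, w in items[level:]:
        if total_w + w <= C_max:
            total_w += w
            bound += value
        else:
            bound += value * (C_max - total_w) / w
            break
    return bound

def branch_and_bound_knapsack(values, weights, C_max):
    """0-1 knapsack via breadth-first branch and bound with a FIFO Queue.

    values, weights : flat lists of v_t(m,n) and w_t(m,n) = s_t(m,n)
    Returns a 0/1 decision list in the original order and the best profit.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    items = [(values[i], weights[i]) for i in order]

    best_profit, best_take = 0.0, [0] * len(items)
    pq = Queue()
    pq.put((0, 0.0, 0.0, []))  # (next item index, profit, weight, decisions so far)
    while not pq.empty():
        level, profit, weight, take = pq.get()
        if level == len(items):
            continue
        value, w = items[level]
        # left child: store the segment if it still fits into the capacity
        if weight + w <= C_max:
            new_take = take + [1]
            if profit + value > best_profit:
                best_profit = profit + value
                best_take = new_take + [0] * (len(items) - level - 1)
            pq.put((level + 1, profit + value, weight + w, new_take))
        # right child: skip the segment, explored only if its bound can still
        # beat the best profit found so far
        if fractional_bound(level + 1, profit, weight, items, C_max) > best_profit:
            pq.put((level + 1, profit, weight, take + [0]))

    decisions = [0] * len(values)  # map back to the original segment order
    for pos, idx in enumerate(order):
        decisions[idx] = best_take[pos]
    return decisions, best_profit
```

In practice the flat lists would come from flattening the M×N matrices produced by the knapsack_items sketch above, and the resulting decision list would be reshaped back into the third storage decision value matrix.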
In the embodiment of the present application, in order to make the storage gain non-negative, the storage decision value matrix is further updated, which includes step B514:
B514. Update the third storage decision value of each video segment whose third storage decision value is the first value and whose storage gain is less than 0 to the second value, so as to obtain the storage decision information.
In an example, the third storage decision values of the M·N video segments obtained in step B513 are searched; the third storage decision value of each video segment with c_t'(m,n)=1 and storage gain DC_t(m,n)-DC_t'(m,n)<0 is updated to 0, and the other third storage decision values remain unchanged, thereby obtaining the storage decision information, that is, the target storage matrix
Figure PCTCN2022144132-appb-000123
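Step B514 is simple enough to state directly in code; the sketch below reuses the illustrative array names from the earlier sketches and assumes the decisions are held in a NumPy array.

```python
import numpy as np

def enforce_nonnegative_gain(decisions, DC_store, DC_skip):
    """Step B514: clear the storage decision of any segment that was selected
    (decision == 1) but whose storage gain DC_t - DC_t' is negative."""
    decisions = np.asarray(decisions).copy()
    gain = DC_store - DC_skip
    decisions[(decisions == 1) & (gain < 0)] = 0
    return decisions
```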
S600. Store the video segments according to the storage decision information.
In an example, the edge node can determine the storage strategy according to the storage decision information, that is, according to the third storage decision values in the target storage matrix
Figure PCTCN2022144132-appb-000124
For example, the video segments whose third storage decision value is 1 are stored in the edge node, and the storage content of the edge node is updated so as to serve the next moment, i.e., future requests from the client to obtain the target transmission video. In the embodiment of the present application, when the target transmission video obtained by the client is a real-time video, the code rate allocation method and the storage method of the embodiment of the present application, being online algorithms, can respond to the client's request information in real time, and the storage content of the edge node updated at the current moment can serve the user's future acquisition of video segments.
In the embodiment of the present application, the flow of obtaining the storage decision information in the storage method is as follows:
Input:
Figure PCTCN2022144132-appb-000125
C_max
Output:
Figure PCTCN2022144132-appb-000126
Figure PCTCN2022144132-appb-000127
Among them, when the judgment at line 7 is executed and the queue is not empty, lines 8 to 15 are executed, and the flow then returns to the judgment at line 7 until the queue is empty; lines 16 to 20 are then executed, outputting
Figure PCTCN2022144132-appb-000128
It should be noted that
Figure PCTCN2022144132-appb-000129
is the storage decision matrix at time t, and the returned
Figure PCTCN2022144132-appb-000130
is
Figure PCTCN2022144132-appb-000131
In the following, the code rate allocation method and the storage method of the embodiments of the present application are described with two specific application scenarios:
As shown in Figure 3, when the target transmission video is an immersive video, the server slices the video, that is, divides it into blocks, to obtain the video segments, and then records the movement track of the user's focus. When the user sends a request through the client, the server calculates, according to the movement track of the user's focus, the probability that each video segment is located in the user's FoV to obtain the viewing probability, then performs code rate allocation (see step S200 for details), and judges whether all video pieces (i.e., video segments) have been sent to the user. If so, a storage decision is made, that is, the above storage method is executed, thereby updating the video pieces (i.e., video segments) stored in the edge node. If not, it is judged whether the video piece is stored in the edge node: if the video piece (i.e., video segment) is not stored in the edge node, the request needs to be forwarded to the cloud, so that the cloud server sends the video piece to the edge node and the edge node then sends the video piece (i.e., video segment) to the client; if the video piece (i.e., video segment) is stored in the edge node, the user can read the content of the video piece (i.e., video segment) from the edge node.
As shown in Figure 4, when the target transmission video is a video in a cloud desktop application scenario, the server slices the video, that is, divides it into blocks, to obtain the video segments, and then records the interaction track between the user and the cloud desktop (for example, the frequency of the operation areas of the mouse or touch screen). When the user sends a request through the client, the server calculates, according to the cloud desktop interaction track, the probability that each video piece is located in the user's ROI to obtain the viewing probability, then performs code rate allocation (see step S200 for details), and judges whether all video pieces (i.e., video segments) have been sent to the user. If so, a storage decision is made, that is, the above storage method is executed, thereby updating the video pieces stored in the edge node. If not, it is judged whether the video piece (i.e., video segment) is stored in the edge node: if the video piece (i.e., video segment) is not stored in the edge node, the request needs to be forwarded to the cloud, so that the cloud server sends the video piece (i.e., video segment) to the edge node and the edge node then sends the video piece (i.e., video segment) to the client; if the video piece (i.e., video segment) is stored in the edge node, the user can read the content of the video piece (i.e., video segment) from the edge node.
It should be noted that the code rate allocation method in the embodiment of the present application is executed by the terminal at the client side; in other embodiments it can also be executed by the cloud server in the cloud. The storage method can be executed by the edge node, or, after being executed by the cloud server in the cloud, the edge node stores the video segments according to the storage decision information. Here, the edge node refers to a network node with caching capability close to the user side in the Internet of Things.
As shown in Figure 5, the embodiment of the present application further provides a code rate allocation apparatus, including:
an obtaining module, configured to obtain target video data, the target video data including the viewing probabilities of all video segments in the target transmission video; and
a processing module, configured to perform code rate allocation on the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets a target value, where the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probability.
As shown in Figure 6, the embodiment of the present application further provides a storage apparatus, including:
a determining module, configured to determine the viewing probabilities of all video segments in the target transmission video and the code rate allocation result;
a prediction module, configured to predict the expected effective quality and the expected transmission loss according to the code rate allocation result and the viewing probabilities;
an optimization module, configured to perform storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node to obtain storage decision information, the storage decision information including the third storage decision value of each video segment, the third storage decision value indicating whether the video segment needs to be stored in the edge node; and
a storage module, configured to store the video segments according to the storage decision information;
obtaining target video data, the target video data including the viewing probabilities of all video segments in the target transmission video; and
performing code rate allocation on the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets a target value, where the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probability.
The content of the foregoing method embodiments is applicable to this apparatus embodiment; the functions specifically implemented by this apparatus embodiment are the same as those of the foregoing method embodiments, and the beneficial effects achieved are the same as those achieved by the foregoing method embodiments.
As shown in Figure 7, the embodiment of the present application further provides an electronic device. The electronic device includes a processor and a memory, and the memory stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the code rate allocation method or the storage method of the foregoing embodiments. The electronic device of the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet computer, a computer, a vehicle-mounted computer, a server, and the like.
The content of the foregoing method embodiments is applicable to this device embodiment; the functions specifically implemented by this device embodiment are the same as those of the foregoing method embodiments, and the beneficial effects achieved are the same as those achieved by the foregoing method embodiments.
The embodiment of the present application further provides a computer-readable storage medium. The storage medium stores at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the code rate allocation method or the storage method of the foregoing embodiments.
The embodiment of the present application further provides a computer program product or computer program, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the code rate allocation method or the storage method of the foregoing embodiments.
The code rate allocation method, storage method, apparatus, device and storage medium proposed in the embodiments of the present application obtain target video data, where the target video data includes the viewing probabilities of all video segments in the target transmission video, and introducing the viewing probability helps improve the user's viewing quality. Code rate allocation is performed on the video segments according to the viewing probabilities so that the total utility of the target transmission video meets the target value, where the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probability. The corresponding code rate allocation result is obtained when the total utility of the target transmission video meets the target value, which achieves the goal of improving the user's viewing quality of the video segments while reducing the transmission loss.
Those of ordinary skill in the art can understand that all or some of the steps of the methods disclosed above and the functional modules/units in the systems and devices can be implemented as software, firmware, hardware and appropriate combinations thereof.
In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Several embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of rights of the present application is not limited thereby. Any modifications, equivalent replacements and improvements made by those skilled in the art without departing from the scope and essence of the present application shall fall within the scope of rights of the present application.

Claims (15)

  1. 一种码率分配方法,包括:A code rate allocation method, comprising:
    获取目标视频数据;所述目标视频数据包括目标传输视频中所有视频片段的观看概率;Obtain target video data; the target video data includes the viewing probability of all video clips in the target transmission video;
    根据所述观看概率对所述视频片段进行码率分配,以使所述目标传输视频的总效用满足目标值;所述目标传输视频的总效用根据所述视频片段的用户观看质量以及传输损耗确定,所述用户观看质量根据非新鲜度因子确定,所述非新鲜度因子根据所述视频片段在边缘节点的存储状态以及所述观看概率确定。According to the viewing probability, the code rate is allocated to the video segment, so that the total utility of the target transmission video meets the target value; the total utility of the target transmission video is determined according to the user's viewing quality and transmission loss of the video segment , the viewing quality of the user is determined according to the non-freshness factor, and the non-freshness factor is determined according to the storage status of the video segment at the edge node and the viewing probability.
  2. 根据权利要求1所述码率分配方法,其中:所述根据所述观看概率对所述视频片段进行码率分配,包括:The code rate allocation method according to claim 1, wherein: said performing code rate allocation on said video clips according to said viewing probability comprises:
    获取所述目标传输视频的总效用的新效用值和旧效用值,以及获取所述视频片段的质量级以及码率;Obtaining the new utility value and the old utility value of the total utility of the target transmission video, and acquiring the quality level and bit rate of the video segment;
    判断所述码率的总和是否小于或等于带宽容量;Judging whether the sum of the code rates is less than or equal to the bandwidth capacity;
    当所述码率的总和小于或等于带宽容量,遍历所述视频片段对所述质量级进行增加处理,根据增加处理结果、所述新效用值和所述旧效用值,更新所述视频片段的质量级以及码率,返回所述判断所述码率的总和是否小于或等于带宽容量的步骤,直至所述码率的总和大于带宽容量,得到每一所述视频片段的目标传输码率;所述增加处理结果为增大后的质量级;When the sum of the code rates is less than or equal to the bandwidth capacity, traverse the video segment to increase the quality level, and update the video segment according to the increase processing result, the new utility value and the old utility value Quality level and code rate, return to the step of judging whether the sum of the code rates is less than or equal to the bandwidth capacity, until the sum of the code rates is greater than the bandwidth capacity, obtain the target transmission code rate of each of the video segments; The above-mentioned increase processing result is the increased quality level;
    当所述码率的总和非小于或非等于带宽容量,遍历所述视频片段对所述质量级进行减少处理,根据减少处理结果、所述新效用值和所述旧效用值,更新所述视频片段的质量级以及码率,返回所述判断所述码率的总和是否小于或等于带宽容量的步骤,直至所述码率的总和小于或等于带宽容量,得到每一所述视频片段的目标传输码率;所述减少处理结果为减少后的质量级。When the sum of the code rates is not less than or not equal to the bandwidth capacity, traverse the video segment to reduce the quality level, and update the video according to the reduction processing result, the new utility value and the old utility value The quality level and code rate of the segment, return to the step of judging whether the sum of the code rates is less than or equal to the bandwidth capacity, until the sum of the code rates is less than or equal to the bandwidth capacity, and obtain the target transmission of each video segment code rate; the reduction processing result is the reduced quality level.
  3. 根据权利要求2所述码率分配方法,其中:所述根据增加处理结果、所述新效用值和所述旧效用值,更新所述视频片段的质量级以及码率,包括:The code rate allocation method according to claim 2, wherein: updating the quality level and code rate of the video segment according to the increase processing result, the new utility value and the old utility value, comprising:
    当所述增加处理结果小于或等于质量等级阈值,根据所述增加处理结果对所述新效用值进行更新,计算更新后的新效用值与所述旧效用值的第一差值,确定所述视频片段的第一影响因子,将第一影响因子的最大值对应的视频片段作为第一更新视频片段,将所述第一更新视频片段的质量级更新为所述第一更新视频片段对应的增加处理结果,并根据所述第一更新视频片段对应的增加处理结果更新所述第一更新视频片段的码率。When the increase processing result is less than or equal to the quality level threshold, update the new utility value according to the increase processing result, calculate a first difference between the updated new utility value and the old utility value, and determine the The first impact factor of the video segment, the video segment corresponding to the maximum value of the first impact factor is used as the first updated video segment, and the quality level of the first updated video segment is updated to the corresponding increase of the first updated video segment processing results, and update the code rate of the first updated video segment according to the increase processing result corresponding to the first updated video segment.
  4. The code rate allocation method according to claim 3, wherein updating the new utility value according to the increase processing result comprises:
    calculating a transmission loss at the current moment and a non-freshness at the current moment;
    determining an effective quality according to the non-freshness at the current moment, the viewing probability and the quality level at the current moment;
    obtaining the quality level of the video segment at the previous moment, calculating a first subjective quality at the current moment and a second subjective quality at the previous moment according to the quality level at the current moment, the viewing probability and the quality level at the previous moment, and determining a temporal quality loss according to the difference between the first subjective quality and the second subjective quality;
    calculating a subjective quality mean according to the first subjective quality and the total number of video segments, and determining a spatial quality loss according to the difference between the first subjective quality and the subjective quality mean;
    obtaining a user viewing quality according to the effective quality, the temporal quality loss, the spatial quality loss and the corresponding preset weights; and
    obtaining the updated new utility value according to the difference between the user viewing quality and the transmission loss at the current moment.
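The claim fixes which quantities enter the utility (effective quality, temporal and spatial quality losses, weights, transmission loss) but not their functional forms. The sketch below is one plausible instantiation and makes assumptions: subjective quality is taken as the probability-weighted quality level, non-freshness enters as a multiplicative discount, and the weights w_e, w_t, w_s are placeholders.

```python
import statistics

def utility_at_t(p_view, q_now, q_prev, staleness, trans_loss,
                 w_e=1.0, w_t=0.5, w_s=0.5):
    """Total utility at one moment: user viewing quality minus transmission loss."""
    segs = q_now.keys()

    # Effective quality: each segment's quality, discounted by its non-freshness
    # and weighted by how likely it is to be watched.
    effective = sum(p_view[s] * q_now[s] * (1.0 - staleness[s]) for s in segs)

    # Subjective quality per segment at the current and previous moment.
    subj_now = {s: p_view[s] * q_now[s] for s in segs}
    subj_prev = {s: p_view[s] * q_prev[s] for s in segs}

    # Temporal quality loss: fluctuation between consecutive moments.
    temporal = sum(abs(subj_now[s] - subj_prev[s]) for s in segs)

    # Spatial quality loss: deviation of each segment from the mean subjective quality.
    mean_subj = statistics.mean(subj_now.values())
    spatial = sum(abs(subj_now[s] - mean_subj) for s in segs)

    viewing_quality = w_e * effective - w_t * temporal - w_s * spatial
    return viewing_quality - trans_loss
```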
  5. The code rate allocation method according to claim 4, wherein calculating the transmission loss at the current moment comprises:
    determining a first storage decision value according to the storage state of the video segment at the edge node at the previous moment;
    when the first storage decision value indicates that the video segment is stored at the edge node, obtaining a storage code rate of the video segment, calculating a code rate difference between the storage code rate and the code rate of the video segment, determining a first loss according to the first storage decision value, the code rate difference and a first loss coefficient, determining a second loss according to a second loss coefficient and the code rate of the video segment, and determining the transmission loss at the current moment according to the sum of the first loss and the second loss;
    or, when the first storage decision value indicates that the video segment is not stored at the edge node, determining a third loss according to a third loss coefficient and the code rate of the video segment, determining the second loss according to the second loss coefficient and the code rate of the video segment, and determining the transmission loss at the current moment according to the sum of the third loss and the second loss;
    wherein the first loss is a transcoding loss at the edge node, the second loss is a communication loss of the edge node transmitting the video segment to the user end, and the third loss is a communication loss of the edge node obtaining the video segment from the cloud.
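A compact sketch of this per-segment transmission loss. The coefficient names `c_transcode`, `c_edge_to_user` and `c_cloud_to_edge` stand in for the first, second and third loss coefficients; they are placeholders, not terms used by the application.

```python
def transmission_loss(stored_at_edge_prev, rate, stored_rate,
                      c_transcode, c_edge_to_user, c_cloud_to_edge):
    """Transmission loss of one segment at the current moment (claim 5 structure)."""
    if stored_at_edge_prev:
        # An edge copy exists: pay a transcoding cost proportional to the gap
        # between the stored code rate and the requested code rate ...
        loss = c_transcode * (stored_rate - rate)
    else:
        # ... otherwise fetch the segment from the cloud at the requested rate.
        loss = c_cloud_to_edge * rate
    # In both cases the edge node still streams the segment to the user.
    return loss + c_edge_to_user * rate
```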
  6. The code rate allocation method according to claim 4, wherein calculating the non-freshness at the current moment comprises:
    determining the non-freshness at the previous moment;
    determining a second storage decision value according to the storage state of the video segment at the edge node at the current moment;
    when the second storage decision value indicates that the video segment is stored at the edge node, calculating the product of a preset freshness growth factor and the viewing probability, and determining the non-freshness at the current moment according to the sum of the product and the non-freshness at the previous moment;
    or, when the second storage decision value indicates that the video segment is not stored at the edge node, determining the non-freshness at the current moment to be 0.
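The non-freshness recursion reduces to a few lines: a cached copy grows stale in proportion to how often it is (probably) watched, while an uncached segment is always fresh. The numeric value of the growth factor below is an arbitrary placeholder.

```python
def update_staleness(prev_staleness, stored_at_edge_now, p_view, growth=0.1):
    """Non-freshness at the current moment (claim 6). `growth` is the preset factor."""
    if stored_at_edge_now:
        return prev_staleness + growth * p_view
    return 0.0
```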
  7. The code rate allocation method according to claim 2, wherein updating the quality levels and code rates of the video segments according to the decrease processing result, the new utility value and the old utility value comprises:
    when the decrease processing result is greater than or equal to a preset threshold, updating the new utility value according to the decrease processing result, calculating a second difference between the updated new utility value and the old utility value, determining a second impact factor of each video segment, taking the video segment corresponding to the maximum value of the second impact factor as a second updated video segment, updating the quality level of the second updated video segment to the decrease processing result corresponding to the second updated video segment, and updating the code rate of the second updated video segment according to that decrease processing result.
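A mirror of the selection rule sketched after claim 3. Here the "second impact factor" is read, purely as an assumption, as the code rate saved per unit of utility given up by a one-level decrease; the claim itself only requires a factor derived from the second difference.

```python
def pick_segment_to_lower(q, quality_to_rate, utility, q_min=0):
    """Return the segment whose one-level decrease has the largest impact factor."""
    old_u = utility(q)                                 # old utility value
    best, best_factor = None, float("-inf")
    for s, level in q.items():
        if level - 1 < q_min:                          # preset threshold of the claim
            continue
        drop = old_u - utility({**q, s: level - 1})    # utility given up ("second difference")
        saved = quality_to_rate[s][level] - quality_to_rate[s][level - 1]
        factor = saved / max(drop, 1e-9)               # assumed form of the second impact factor
        if factor > best_factor:
            best, best_factor = s, factor
    return best
```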
  8. A storage method, comprising:
    obtaining the viewing probabilities of all video segments in a target transmission video and a code rate allocation result by means of the code rate allocation method according to any one of claims 1-7;
    predicting an expected effective quality and an expected transmission loss according to the code rate allocation result and the viewing probabilities;
    performing storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of an edge node to obtain storage decision information, the storage decision information comprising a third storage decision value of each video segment, the third storage decision value indicating whether the video segment needs to be stored at the edge node; and
    storing the video segments according to the storage decision information.
  9. The storage method according to claim 8, wherein the code rate allocation result comprises a target transmission code rate of each video segment, each target transmission code rate has a corresponding target quality level, and predicting the expected effective quality and the expected transmission loss according to the code rate allocation result and the viewing probabilities comprises:
    obtaining the non-freshness at the current moment and a fourth storage decision value of the video segment at the current moment, and determining a predicted non-freshness according to the non-freshness at the current moment, the viewing probability and the fourth storage decision value;
    determining the expected effective quality according to the predicted non-freshness, the viewing probability and the target quality level; and
    determining the expected transmission loss according to the target transmission code rate, the fourth storage decision value, the second loss coefficient and the third loss coefficient.
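An illustrative prediction step, reusing the staleness recursion sketched after claim 6 and the loss coefficients of claim 5. The aggregation over segments and the exact functional forms are assumptions; only the inputs match the claim.

```python
def predict(p_view, q_target, rate_target, staleness_now, cached_now,
            c_edge_to_user, c_cloud_to_edge, growth=0.1):
    """Expected effective quality and expected transmission loss (claim 9 inputs)."""
    segs = p_view.keys()

    # Predicted non-freshness for the next moment (claim 6 recursion).
    staleness_next = {
        s: staleness_now[s] + growth * p_view[s] if cached_now[s] else 0.0
        for s in segs
    }

    # Expected effective quality under the allocated target quality levels.
    exp_quality = sum(p_view[s] * q_target[s] * (1.0 - staleness_next[s])
                      for s in segs)

    # Expected transmission loss: cached segments only pay edge-to-user delivery,
    # uncached ones additionally pay the cloud-to-edge fetch.
    exp_loss = sum(
        rate_target[s] * (c_edge_to_user + (0 if cached_now[s] else c_cloud_to_edge))
        for s in segs
    )
    return exp_quality, exp_loss
```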
  10. The storage method according to claim 8, wherein the code rate allocation result comprises a target transmission code rate of each video segment, each target transmission code rate has a corresponding target quality level, and performing storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of the edge node to obtain the storage decision information comprises:
    determining an expected utility according to the expected effective quality and the expected transmission loss, determining a storage gain according to the expected utilities respectively corresponding to the video segment being stored at the edge node and not being stored at the edge node at the current moment, and determining a video value of each video segment according to the storage gain;
    obtaining the size of the video segment, and determining a value density of the video segment according to the size of the video segment and the storage gain; and
    obtaining the third storage decision value of each video segment through a branch and bound algorithm according to the video value and the value density, the third storage decision value being a first value or a second value, the first value indicating that the video segment needs to be stored at the edge node, and the second value indicating that the video segment does not need to be stored at the edge node.
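The claim fixes the inputs (video value, value density, edge capacity) and the solver family (branch and bound) but not the exact formulation. A common reading is a 0/1 knapsack over the edge capacity; the sketch below shows a storage-gain helper and a compact branch-and-bound knapsack with a fractional-relaxation bound, using illustrative names only.

```python
def storage_gain(exp_quality_if_cached, exp_loss_if_cached,
                 exp_quality_if_not, exp_loss_if_not):
    """Expected-utility difference between caching and not caching one segment."""
    return ((exp_quality_if_cached - exp_loss_if_cached)
            - (exp_quality_if_not - exp_loss_if_not))

def choose_segments_to_cache(sizes, values, capacity):
    """0/1 knapsack by branch and bound: which segments to keep at the edge node."""
    # Segments with non-positive storage gain are never worth caching
    # (claim 11 enforces the same rule after the fact).
    items = sorted((s for s in values if values[s] > 0),
                   key=lambda s: values[s] / sizes[s], reverse=True)  # value density order
    best_value, best_set = 0.0, set()

    def bound(i, cap_left, value_so_far):
        # Upper bound from the fractional relaxation, used to prune branches.
        b = value_so_far
        for s in items[i:]:
            if sizes[s] <= cap_left:
                cap_left -= sizes[s]
                b += values[s]
            else:
                return b + values[s] * cap_left / sizes[s]
        return b

    def dfs(i, cap_left, value_so_far, chosen):
        nonlocal best_value, best_set
        if value_so_far > best_value:
            best_value, best_set = value_so_far, set(chosen)
        if i == len(items) or bound(i, cap_left, value_so_far) <= best_value:
            return
        s = items[i]
        if sizes[s] <= cap_left:                       # branch 1: cache segment s
            dfs(i + 1, cap_left - sizes[s], value_so_far + values[s], chosen | {s})
        dfs(i + 1, cap_left, value_so_far, chosen)     # branch 2: skip segment s

    dfs(0, capacity, 0.0, set())
    return {s: 1 if s in best_set else 0 for s in sizes}   # third storage decision values
```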
  11. The storage method according to claim 10, further comprising:
    updating the third storage decision value of any video segment whose third storage decision value is the first value and whose storage gain is less than 0 to the second value, so as to obtain the storage decision information.
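The post-filter of this claim is a one-liner over the decision values produced above; `decisions` and `gains` are the hypothetical outputs of the claim 10 sketch.

```python
def drop_negative_gain(decisions, gains):
    """Never cache a segment whose storage gain is negative, even if selected."""
    return {s: (d if gains[s] >= 0 else 0) for s, d in decisions.items()}
```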
  12. A code rate allocation apparatus, comprising:
    an acquisition module, configured to acquire target video data, the target video data comprising the viewing probabilities of all video segments in a target transmission video; and
    a processing module, configured to perform code rate allocation on the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets a target value, wherein the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at an edge node and the viewing probabilities.
  13. A storage apparatus, comprising:
    a determination module, configured to determine the viewing probabilities of all video segments in a target transmission video and a code rate allocation result;
    a prediction module, configured to predict an expected effective quality and an expected transmission loss according to the code rate allocation result and the viewing probabilities;
    an optimization module, configured to perform storage optimization processing according to the expected effective quality, the expected transmission loss and the maximum capacity of an edge node to obtain storage decision information, the storage decision information comprising a third storage decision value of each video segment, the third storage decision value indicating whether the video segment needs to be stored at the edge node; and
    a storage module, configured to store the video segments according to the storage decision information;
    wherein determining the viewing probabilities of all video segments in the target transmission video and the code rate allocation result comprises:
    acquiring target video data, the target video data comprising the viewing probabilities of all video segments in the target transmission video; and
    performing code rate allocation on the video segments according to the viewing probabilities, so that the total utility of the target transmission video meets a target value, wherein the total utility of the target transmission video is determined according to the user viewing quality and the transmission loss of the video segments, the user viewing quality is determined according to a non-freshness factor, and the non-freshness factor is determined according to the storage state of the video segments at the edge node and the viewing probabilities.
  14. An electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the code rate allocation method according to any one of claims 1-7 or the storage method according to any one of claims 8-11.
  15. A computer-readable storage medium, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the code rate allocation method according to any one of claims 1-7 or the storage method according to any one of claims 8-11.
PCT/CN2022/144132 2021-12-30 2022-12-30 Code rate allocation method and apparatus, storage method and apparatus, device, and storage medium WO2023125970A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111658162.0 2021-12-30
CN202111658162.0A CN116419016A (en) 2021-12-30 2021-12-30 Code rate allocation method, storage method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023125970A1 true WO2023125970A1 (en) 2023-07-06

Family

ID=86998190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/144132 WO2023125970A1 (en) 2021-12-30 2022-12-30 Code rate allocation method and apparatus, storage method and apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN116419016A (en)
WO (1) WO2023125970A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110022685A1 (en) * 2008-03-27 2011-01-27 Matthew David Walker Device content management
US20180041788A1 (en) * 2015-02-07 2018-02-08 Zhou Wang Method and system for smart adaptive video streaming driven by perceptual quality-of-experience estimations
CN108551586A (en) * 2018-03-14 2018-09-18 上海交通大学 360 degree of video stream server end code check self-adapting distribution methods of multi-user and system
CN110248212A (en) * 2019-05-27 2019-09-17 上海交通大学 360 degree of video stream server end code rate adaptive transmission methods of multi-user and system
US20200204810A1 (en) * 2018-12-21 2020-06-25 Hulu, LLC Adaptive bitrate algorithm with cross-user based viewport prediction for 360-degree video streaming

Also Published As

Publication number Publication date
CN116419016A (en) 2023-07-11

Similar Documents

Publication Publication Date Title
US10819645B2 (en) Combined method for data rate and field of view size adaptation for virtual reality and 360 degree video streaming
Sun et al. Flocking-based live streaming of 360-degree video
WO2019037706A1 (en) Determining a future field of view (fov) for a particular user viewing a 360 degree video stream in a network
CN109286855B (en) Panoramic video transmission method, transmission device and transmission system
CN110198495B (en) Method, device, equipment and storage medium for downloading and playing video
US20110202633A1 (en) Cache server control device, content distribution system, method of distributing content, and program
WO2008013651A1 (en) Glitch-free media streaming
CN108881931B (en) Data buffering method and network equipment
US20110202596A1 (en) Cache server control device, content distribution system, method of distributing content, and program
WO2015120766A1 (en) Video optimisation system and method
CN111093094A (en) Video transcoding method, device and system, electronic equipment and readable storage medium
CN113282786B (en) Panoramic video edge collaborative cache replacement method based on deep reinforcement learning
KR20150067643A (en) Method and apparatus for sharing file in cloud storage service
CN110809167A (en) Video playing method and device, electronic equipment and storage medium
CN108777802B (en) Method and device for caching VR (virtual reality) video
WO2023125970A1 (en) Code rate allocation method and apparatus, storage method and apparatus, device, and storage medium
US20230396845A1 (en) Method for playing on a player of a client device a content streamed in a network
CN113473172B (en) VR video caching method and device, caching service device and storage medium
US20210268375A1 (en) Method for playing on a player of a client device a content streamed in a network
Kan et al. A server-side optimized hybrid multicast-unicast strategy for multi-user adaptive 360-degree video streaming
US10841490B2 (en) Processing method and processing system for video data
CN116366876A (en) Method and system for deploying and scheduling film and television resources under Bian Yun collaborative scene
US20190190843A1 (en) Arbitration of competing flows
US20150271440A1 (en) Information processing apparatus, information processing method, program, and information processing system
CN109962948B (en) P2P task processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22915217

Country of ref document: EP

Kind code of ref document: A1