WO2018080688A1 - Bitrate optimization for multi-representation encoding using playback statistics - Google Patents

Bitrate optimization for multi-representation encoding using playback statistics

Info

Publication number
WO2018080688A1
Authority
WO
WIPO (PCT)
Prior art keywords
segment
quality
encoding
bitrate
representations
Prior art date
Application number
PCT/US2017/053318
Other languages
French (fr)
Inventor
Chao Chen
Yao-Chung Lin
Anil Kokaram
Steve Benting
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc. filed Critical Google Inc.
Priority to CN201780072261.6A priority Critical patent/CN110268717B/en
Priority to EP17784732.4A priority patent/EP3533232B1/en
Publication of WO2018080688A1 publication Critical patent/WO2018080688A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/70 Media network packetisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, for generating different versions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/38 Flow control; Congestion control by adapting coding or compression rate
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS

Definitions

  • This disclosure relates to the field of video streaming and, in particular, to bitrate optimization for multi-representation encoding using playback statistics.
  • The streaming of multimedia (e.g., videos) to a client device over a network may be based on adaptive bitrate streaming. For example, bandwidth and processing capability of the client device may be detected in real time. In response to a change of the detected bandwidth and viewport size, the video stream accessed by the client device may be adjusted accordingly. As an example, a video may be encoded at different bitrates. The client device may switch from a first representation of the video to a second representation of the video in response to the changing resources or capabilities of the client device.
  • A method includes generating multiple versions of a segment of a source media item (such as, for example, a source video), the versions comprising encodings of the segment at different encoding bitrates for each resolution of the segment, measuring a quality metric for each version of the segment, generating rate-quality models for each resolution of the segment based on the measured quality metrics corresponding to the resolutions, generating a probability model to predict requesting probabilities that representations of the segment are requested, determining an encoding bitrate for each of the representations based on the rate-quality models and the probability model, and assigning the determined encoding bitrates to corresponding representations of the segment.
  • the segment may include the entire source media item (video).
  • The requesting probability for one of the representations is further based on the encoding bitrate of the representation and a relation of the encoding bitrate to network speed in the joint probability distribution, and the resolution of the representation and a relation of the resolution to viewport size in the joint probability distribution.
  • the client-side feedback statistics include playback traces transmitted from media players at client devices, the playback traces comprising network speed measurements and viewport sizes, wherein the joint probability distribution is generated from cumulative measurements of the network speeds determined from the playback traces and from cumulative measurements of the viewport sizes determined from the playback traces.
  • the playback traces may be collected from media players at client devices located in a first defined geographic region, and the joint probability distribution may be specific to the first defined geographic region.
  • Playback traces may be collected from media players at client devices located in a second defined geographic region, and there may be another joint probability distribution specific to the second defined geographic region. Additionally or alternatively, the playback traces may be collected for a type of the source media item (video), and the joint probability distribution may be specific to the type of the source media item (video).
  • Determining the encoding bitrate for each of the representations further comprises minimizing an average egress traffic for the segment such that an average quality of the segment (as determined by any suitable quality measure) is maintained at or above a defined quality level, wherein the average egress traffic is a function of the different encoding bitrates and the requesting probabilities, and wherein the average quality is a function of the quality metrics and the requesting probabilities.
  • Determining the encoding bitrate for each of the representations further comprises maximizing an average quality for the segment such that an average egress traffic of the segment is maintained at or below a defined media item (video) egress traffic level, wherein the average quality is a function of the quality metrics and the requesting probabilities, and wherein the average egress traffic is a function of the different encoding bitrates and the requesting probabilities.
  • Assigning the determined encoding bitrates to the corresponding representations may further include providing the selected encoding bitrates to at least one transcoding component for encoding the representations of the segment at the selected encoding bitrates.
  • the representation may include a bitrate/resolution combination of the segment, and wherein the segment may include one or more representations for each of the resolutions of the segment.
  • The quality metric may include a Peak Signal-to-Noise Ratio (PSNR) measurement or a Structural Similarity (SSIM) measurement.
  • A method includes determining a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems; generating rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment; estimating a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and determining a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to minimize the egress while maintaining the delivered quality at or above a quality threshold value.
  • In another aspect of the disclosure, a method includes determining a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems; generating rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment; estimating, by the processing device, a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and determining, by the processing device, a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to maximize the delivered quality while keeping the egress at or below an egress threshold value.
  • A system comprises a processing device configured to perform a method according to any aspect or implementation of the present disclosure.
  • the system may further comprise a memory, and the processing device may be coupled to the memory.
  • A machine-readable storage medium (which may be a non-transitory machine-readable storage medium, although the invention is not limited to this) stores instructions which, when executed, cause a processing device to perform operations comprising a method according to any aspect or implementation of the present disclosure.
  • Computing devices for performing the operations of the above described method and the various implementations described herein are disclosed.
  • Computer-readable media that store instructions for performing operations associated with the above described method and the various implementations described herein are also disclosed.
  • Figure 1 is a block diagram illustrating an exemplary network architecture in which implementations of the disclosure may be implemented.
  • Figure 2 is a block diagram of an encoding bitrate optimization component, in accordance with an implementation of the disclosure.
  • Figure 3 is a flow diagram illustrating a method for bitrate optimization for multi-representation encoding using playback statistics according to an implementation.
  • Figure 4 is a flow diagram illustrating a method for multi-representation encoding bitrate optimization to minimize egress based on playback statistics according to an implementation.
  • Figure 5 is a flow diagram illustrating a method for multi-representation encoding bitrate optimization to maximize quality based on playback statistics, according to an implementation.
  • Figure 6 is a block diagram illustrating one implementation of a computer system, according to an implementation.
  • Adaptive bitrate streaming may be used to stream multimedia (e.g., a video) from a server (e.g., an adaptive video streaming system) to a client system (e.g., a media player on a client device) over a network.
  • The adaptive video streaming system encodes a source video into several representations of different encoding bitrates and/or resolutions. That is, the segment of the media item may be converted (e.g., transcoded) into multiple resolutions (e.g., multiple spatial resolutions).
  • Each of the multiple resolutions of the segment may then be encoded at a plurality of different bitrates, to produce multiple "representations" of the segment of the media item.
  • a representation may refer to a result of encoding a video and/or a video segment at one resolution using one bitrate.
  • This set of encoded representations allows the client systems to adaptively select appropriate encoded representations according to the network bandwidth and viewport size during the video streaming. For example, a media player of a client device may switch from a first representation or encoding of a video to a second representation or encoding of the video of a different quality in response to changing conditions (e.g., CPU, network bandwidth, viewport size, etc.) associated with the client device.
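To make this adaptive selection concrete, the following is a minimal sketch (in Python) of how a media player might pick a representation from the available set given a measured bandwidth and viewport height. The representation list, field names, and fallback rule are illustrative assumptions, not behavior required by this disclosure.

```python
# Hypothetical sketch of client-side representation selection during adaptive
# bitrate streaming. Field names and the selection rule are illustrative.

def select_representation(representations, measured_bandwidth_bps, viewport_height):
    """Pick the highest-bitrate representation whose bitrate fits the measured
    bandwidth and whose resolution does not exceed the viewport height."""
    candidates = [
        r for r in representations
        if r["bitrate_bps"] <= measured_bandwidth_bps
        and r["height"] <= viewport_height
    ]
    if not candidates:
        # Fall back to the lowest-bitrate representation to avoid stalling.
        return min(representations, key=lambda r: r["bitrate_bps"])
    return max(candidates, key=lambda r: r["bitrate_bps"])

representations = [
    {"height": 360, "bitrate_bps": 700_000},
    {"height": 720, "bitrate_bps": 2_500_000},
    {"height": 1080, "bitrate_bps": 5_000_000},
]
print(select_representation(representations, 3_000_000, 1080))  # -> the 720p entry
```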
  • a video may be transcoded into multiple resolutions (e.g., 1080p, 720p, 480p, 360p, etc.) by the adaptive video streaming system.
  • each resolution of the video may be encoded at one or more encoding bitrates.
  • An encoding bitrate may also be referred to herein as a "bitrate."
  • Multi-representation encoding may refer to having one or more encoding bitrate representations for each resolution.
  • a bitrate may refer to an amount of information or data stored per unit of playback time of the video.
  • A bitrate in video streaming may vary between 400 kbit/s and 40 Mbit/s. In some cases, a higher bitrate correlates to better clarity and/or higher quality of the video to a viewer.
  • Conventional systems for adaptive bitrate streaming may utilize generic encoding configurations (e.g., an encoding bitrate to use for each representation) for encoding a video or video segment (also referred to as a "portion," "chunk," or "clip" of the video).
  • The generic encoding configurations may refer to a pre-defined bitrate selected for each representation of the video or video segment that is to be encoded.
  • The generic encoding configurations are selected to be "good on average" (e.g., to satisfy a determined video quality measurement based on aggregated quality measurements at multiple client devices) for videos or video segments of a particular resolution.
  • However, each video and/or video segment is different, and the encoding configurations for the encoder should be chosen such that the encoded versions created for each video segment are appropriate for the specific video segment.
  • The selection of encoding configurations has an impact on the delivered video quality and the cost for storage and transmission. For example, the selection of a higher encoding bitrate for a resolution may result in better video quality, but it may also increase the system resources needed for the adaptive video streaming system, because the system requires more resources to provide greater performance for delivering video traffic to client systems and for storing the data (i.e., the higher the bitrate, the more data to transfer and store, and the higher the requirements for the system resources, which additionally increases the financial costs of the video streaming system).
  • Moreover, if the encoding bitrate is set too high, the quality of the video delivered to user devices may deteriorate. This is because network capacity may be limited. If the encoding bitrate is higher than the network throughput of a user device, the video cannot be delivered to the user device without re-buffering, which negatively affects the quality of video experienced by the user.
  • different videos have different characteristics and a general and/or generic encoding setting is unlikely to be universally-optimal for all videos.
  • Implementations of the disclosure analyze the trade-offs between the cost (e.g., transmission and storage costs) and delivered video quality for an encoding configuration based on information about playback statistics received from client systems.
  • The playback statistics may refer to client-measured bandwidth (also referred to as network speed) and/or client viewport size.
  • These playback statistics are used to determine an optimal set of encoding configurations (e.g., a bitrate defined for each representation to be encoded) for each video and/or video segment.
  • This optimal set of encoding configurations is used to minimize egress traffic from the adaptive video streaming system to client systems, while maintaining delivered quality of the video and/or video segment (as compared to conventional systems).
  • "Egress traffic” or “egress” may refer to the rate that data is transmitted from a data source and/or a network (amount of data per unit of time).
  • Implementations of the disclosure provide a technical improvement for adaptive video streaming systems by improving the efficiency of the encoding process (via optimized encoding configuration selection), thus reducing size and/or number of transmissions (improves utility of transmission bandwidth) as well as storage used for adaptive bitrate streaming, while maintaining video quality for client systems.
  • the disclosure often references videos for simplicity and brevity.
  • teachings of the disclosure are applied to media items generally and can be applied to various types of content or media items, including for example, video, audio, text, images, etc.
  • FIG. 1 illustrates an example system architecture 100, in accordance with one implementation of the disclosure.
  • The system architecture 100 includes client devices 110A through 110Z, a network 105, a data store 106, a content sharing platform 120, and a server 130.
  • Network 105 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof.
  • The data store 106 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data.
  • the data store 106 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers).
  • The client devices 110A through 110Z may each include computing devices such as personal computers (PCs), laptops, mobile phones, smart phones, tablet computers, netbook computers, network-connected televisions, etc.
  • Client devices 110A through 110Z may also be referred to as "user devices."
  • Each client device includes a media viewer 111.
  • The media viewers 111 may be applications that allow users to view content, such as images, videos, web pages, documents, etc.
  • The media viewer 111 may be a web browser that can access, retrieve, present, and/or navigate content (e.g., web pages such as Hyper Text Markup Language (HTML) pages, digital media items, etc.) served by a web server.
  • The media viewer 111 may render, display, and/or present the content (e.g., a web page, a media viewer) to a user.
  • The media viewer 111 may also display an embedded media player (e.g., a Flash® player or an HTML5 player) that is embedded in a web page (e.g., a web page that may provide information about a product sold by an online merchant).
  • The media viewer 111 may be a standalone application (e.g., a mobile application or app) that allows users to view digital media items (e.g., digital videos, digital images, electronic books, etc.).
  • The media viewer 111 may be a content sharing platform application with bitrate optimization for multi-representation encoding using playback statistics.
  • The media viewers 111 may be provided to the client devices 110A through 110Z by the server 130 and/or content sharing platform 120.
  • The media viewers 111 may be embedded media players that are embedded in web pages provided by the content sharing platform 120.
  • The media viewers 111 may be applications that are downloaded from the server 130, and/or downloaded from a separate server (not shown).
  • It should be noted that functions described in one implementation as being performed by the content sharing platform 120 can also be performed on the client devices 110A through 110Z in other implementations, if appropriate.
  • the functionality attributed to a particular component can be performed by different or multiple components operating together.
  • The content sharing platform 120 can also be accessed as a service provided to other systems or devices through appropriate application programming interfaces, and thus is not limited to use in websites.
  • The content sharing platform 120 may be one or more computing devices (such as a rack mount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to provide a user with access to media items and/or provide the media items to the user.
  • the content sharing platform 120 may allow a user to consume, upload, search for, approve of ("like"), dislike, and/or comment on media items.
  • the content sharing platform 120 may also include a website (e.g., a webpage) or application back-end software that may be used to provide a user with access to the media items.
  • a "user” may be represented as a single individual.
  • other implementations of the disclosure encompass a "user” being an entity controlled by a set of users and/or an automated source.
  • a set of individual users federated as a community in a social network may be considered a "user”.
  • an automated consumer may be an automated ingestion pipeline, such as a topic channel, of the content sharing platform 120.
  • the content sharing platform 120 may host data content, such as media items 121.
  • the data content can be digital content chosen by a user, digital content made available by a user, digital content uploaded by a user, digital content chosen by a content provider, digital content chosen by a broadcaster, etc.
  • Examples of a media item 121 can include, but are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, Really Simple Syndication (RSS) feeds, electronic comic books, etc.
  • media item 121 is also referred to as a content item.
  • a media item 121 may be consumed via the Internet and/or via a mobile device application.
  • An online video (also hereinafter referred to as a video) is used as an example of a media item 121 throughout this document.
  • “Media,” “media item,” “online media item,” “digital media,” “digital media item,” “content,” and “content item” can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity.
  • The content sharing platform 120 may store the media items 121 using the data store 106.
  • The server 130 may be one or more computing devices (e.g., a rackmount server, a server computer, etc.). In one implementation, the server 130 may be included in the content sharing platform 120. As an example, users of the client devices 110A-110Z may each transmit a request to the server 130 over the network 105 for one or more videos stored at the data store 106.
  • The videos may be stored at the data store 106 in segments based on a resolution for each video and a determined optimal bitrate for each resolution of each video, as discussed in further detail below.
  • each segment of a video may be decoded separately for video playback.
  • The videos that have been divided into segments may be associated with the same segment boundaries (e.g., time boundaries) to enable switching between different bitrates and/or resolutions at the segment boundaries.
  • The data store 106 may store multiple videos where each video is divided into multiple segments.
  • The data store 106 may further include a manifest file that may be transmitted by the server 130 to the client devices 110A-110Z.
  • The manifest file may identify the available representations of the video (e.g., the available resolutions at available bitrates) and the segment boundaries for each segment of the video.
  • The manifest file may be transmitted by the server 130 in response to a request for the streaming of a video in the data store 106 by the client devices 110A-110Z.
  • Each of the client devices 110A-110Z may use the manifest file to switch between encoded versions of a stream from the server 130 based on the available resources (e.g., CPU and bandwidth) of the respective client device 110A-110Z.
  • A first encoded version of the stream of a video may be transmitted from the server 130 to the client device 110A based on the viewport size of the client device 110A and the network bandwidth associated with the client device 110A.
  • A second encoded version of the stream of the same video may be transmitted from the server 130 to a different client device 110Z based on the viewport size of the client device 110Z and the network bandwidth associated with the client device 110Z.
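The manifest contents are not spelled out here beyond identifying the representations and segment boundaries; the hypothetical structure below (all field names are assumptions) illustrates the kind of information a client could use to switch representations at segment boundaries.

```python
# Illustrative (hypothetical) manifest structure for one video: it lists the
# available representations (resolution/bitrate pairs) and the shared segment
# boundaries so a client can switch representations at segment edges.
manifest = {
    "video_id": "example-video",
    "segment_boundaries_sec": [0.0, 5.0, 10.0, 15.0],  # same boundaries for every representation
    "representations": [
        {"resolution": "1080p", "bitrate_bps": 5_000_000, "url_template": ".../1080p/seg_{n}.mp4"},
        {"resolution": "720p",  "bitrate_bps": 2_500_000, "url_template": ".../720p/seg_{n}.mp4"},
        {"resolution": "480p",  "bitrate_bps": 1_000_000, "url_template": ".../480p/seg_{n}.mp4"},
    ],
}
```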
  • the server 130 may include an encoding bitrate optimization component 140.
  • The encoding bitrate optimization component 140 determines a set of encoding bitrates (i.e., one or more bitrates for each resolution) for a source video.
  • the encoding bitrate optimization component 140 may determine the set of encoding bitrates for each segment of the video.
  • each bitrate in the determined set of encoding bitrates corresponds to a different resolution of the segment.
  • There may be multiple bitrates associated with a resolution (e.g., 3 versions of 720p, each at a different bitrate).
  • The video may then be stored at the data store 106 in segments based on a representation for each video segment and the optimal encoding bitrate(s) determined for the representation of the video segment.
  • optimal encoding bitrates may be determined for the entire video in addition to, and/or in lieu of, segments of the video.
  • the encoding bitrate optimization component 140 determines the set of encoding bitrates for a video based on playback statistics (e.g., media player feedback) and rate-quality characteristics of the video.
  • the playback statistics may refer to client-measured bandwidth and client viewport size.
  • Bandwidth may refer to the average rate of successful data transfer through a communication path, and may be measured in bits per second.
  • A bit stream's bandwidth is proportional to the average consumed signal bandwidth in Hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.
  • Viewport size may refer to an area (typically rectangular) expressed in rendering-device-specific coordinates, e.g. pixels for screen coordinates, in which the objects of interest are going to be rendered.
  • Implementations of the disclosure generate a rate-quality model for the video segment to be encoded based on quality characteristics of the video. This generated rate-quality model is used, along with the feedback statistics, to predict the egress and delivered quality of the video to client systems 110A-110Z. Based on these predictions, a non-linear optimization process may be applied to determine the optimized encoding bitrates for different representations of the source video.
  • the optimized encoding bitrates may then be used by a transcoding component 150 to encode a video and/or video segment(s) at the specific bitrate for each representation of the video and/or video segment(s).
  • The transcoding component 150 may be located in the same computing device (e.g., server 130) as encoding bitrate optimization component 140, or may be located remote to encoding bitrate optimization component 140 and/or server 130.
  • Transcoding component 150 includes components (e.g., hardware and/or instructions executable by a processing device) that convert media files from a source format into versions that can be played on a client device, such as a desktop computer, a mobile device, a tablet, and so on.
  • Transcoding component 150 may be a single master transcoder component 150 or may be multiple different transcoding components dispersed locally and/or remote to server 130.
  • Transcoding component 150 may utilize the optimized encoding bitrates as input to guide the transcoding operations performed for a video and/or video segment(s), and to generate a final stream for the video and/or video segment(s) that can be delivered to a media player (e.g., media viewer 111) at a client device 110A-110Z via an adaptive bitrate streaming process.
  • The optimized encoding bitrates determined by implementations of the disclosure minimize egress traffic from the server 130 to client systems 110A-110Z, without compromising delivered quality of the video.
  • Encoding bitrate optimization component 140 of server 130 may interact with content sharing platform 120 to provide multi-representation encoding bitrate optimization based on playback statistics. Further description of the encoding bitrate optimization component 140, as well as its specific functions, is described in more detail below with respect to FIG. 2.
  • Although implementations of the disclosure are discussed in terms of content sharing platforms and promoting social network sharing of a content item on the content sharing platform, implementations may also be generally applied to any type of social network providing connections between users. Implementations of the disclosure are not limited to content sharing platforms that provide channel subscriptions to users.
  • In situations in which the systems discussed here collect personal information about users (e.g., collection of feedback statistics from media viewers 111, collection of feedback data, etc.) or may make use of personal information, the users may be provided with an opportunity to control whether the content sharing platform 120 collects user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user.
  • certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed.
  • A user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined.
  • The user may have control over how information is collected about the user and used by the content sharing platform 120.
  • FIG. 2 is a block diagram illustrating encoding bitrate optimization component 140 in accordance with one implementation of the disclosure.
  • The encoding bitrate optimization component 140 may interact with a single social network, or may be utilized among multiple social networks (e.g., provided as a service of a content sharing platform that is utilized by other third party social networks).
  • The encoding bitrate optimization component 140 includes a probability distribution module 210, a rate-quality model generation module 220, a requesting probability determination module 230, and an encoding bitrate selection module 240. More or fewer components may be included in the encoding bitrate optimization component 140 without loss of generality.
  • two of the modules may be combined into a single module, or one of the modules may be divided into two or more modules.
  • one or more of the modules may reside on different computing devices (e.g., different server computers, on a single client device, or distributed among multiple client devices, etc.).
  • one or more of the modules may reside on different content sharing platforms, third party social networks, and/or external servers.
  • the encoding bitrate optimization component 140 is communicatively coupled to the data store 106.
  • the encoding bitrate optimization component 140 may be coupled to the data store 106 via a network (e.g., via network 105 as illustrated in FIG. 1).
  • the data store 106 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data.
  • The data store 106 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers).
  • The data store 106 includes media item data 290, feedback data 291, probability distribution data 292, rate-quality model data 293, quality data 295, and egress constraint data 296.
  • The encoding bitrate optimization component 140 determines and/or calculates a set of optimal encoding bitrates (i.e., one or more bitrates for each representation) for a video and/or segments of a video.
  • each bitrate in the determined set of encoding bitrates corresponds to a different representation of the video or video segment.
  • The video may then be stored at the data store 106 in segments based on a representation for each video segment, at the optimal one or more bitrate(s) determined for the representation of the video segment.
  • the encoding bitrate optimization component 140 is described as determining the set of optimal encoding bitrates for a segment of a video.
  • The encoding bitrate optimization component 140 initiates the probability distribution module 210 to generate probability distributions based on feedback statistics (also referred to herein as client-side feedback statistics) from media players.
  • A probability distribution may refer to an equation or function that links each outcome of a statistical experiment with its probability of occurrence.
  • the probability distribution module 210 utilizes playback statistics to estimate a first probability distribution of network speed and a second probability distribution of viewport size.
  • a joint probability distribution of bandwidth and viewport size may be utilized. With respect to the joint probability distribution, the bandwidth may be independent of the viewport size.
  • A product of the first and second probability distributions may be used as an approximation of the joint probability distribution.
  • P [X, V] may be referred to as the joint probability distribution.
  • the variable X may denote network speed at a client device and the variable V may denote viewport size of the client device.
  • The playback statistics may refer to client-measured bandwidth (also referred to herein as network speed) and client viewport size.
  • Probability distribution module 210 may access feedback data 291 at data store 106 to generate the first and second probability distributions.
  • Feedback data 291 may include statistics garnered from Quality of Experience (QoE) pings received from media players at client systems.
  • the QoE pings may include, for example, measurements of throughput (e.g., bandwidth and/or network speed) at the media player and the viewport size adopted at the media player.
  • the QoE pings may be provided at least once per playback at a media player.
  • the network speed data from feedback data 291 may be aggregated to estimate the first probability distribution for network speed X.
  • the viewport size data from feedback data 291 may be aggregated to estimate the second probability distribution for viewport size V.
  • The first and second probability distributions, and/or the joint probability distribution of X and V (i.e., P[X, V]), may be stored as probability distribution data 292 in data store 106.
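As a concrete illustration of the estimation just described, the sketch below aggregates network-speed and viewport-size samples from playback traces into marginal histograms and forms the joint distribution as their product (the independence approximation mentioned above). The bin edges, trace format, and function names are assumptions made for illustration.

```python
import numpy as np

# Sketch of estimating the joint distribution P[X, V] from QoE playback traces,
# using the independence approximation (product of the two marginals).

speed_bins = np.array([0, 0.5e6, 1e6, 2e6, 4e6, 8e6, 16e6, 40e6])  # network speed edges, bits/s
viewport_bins = np.array([240, 360, 480, 720, 1080])               # supported viewport heights

def joint_distribution(network_speeds, viewport_heights):
    # Marginal distribution of network speed X from the trace samples.
    p_x, _ = np.histogram(network_speeds, bins=speed_bins)
    p_x = p_x / p_x.sum()
    # Marginal distribution of viewport size V (snapped to the supported heights).
    idx = np.digitize(viewport_heights, viewport_bins, right=True)
    p_v = np.bincount(np.clip(idx, 0, len(viewport_bins) - 1),
                      minlength=len(viewport_bins)).astype(float)
    p_v = p_v / p_v.sum()
    # Independence approximation: P[X, V] ~= P[X] * P[V].
    return np.outer(p_x, p_v)
```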
  • probability distributions may be estimated based on different granularities, such as geographic locations, genres, channels, content type, and so on. For example, feedback data 291 from media players in a specific geographic region may be analyzed to estimate the first and second probability distributions, thus providing a view of aggregated probabilities of network speed and viewport sizes for the particular geographic region.
  • the rate-quality model generation module 220 generates one or more rate-quality models for a video that is to be encoded.
  • the video may be referred to herein as a source video, which may be stored in media item data 290 of data store 106.
  • the source video is first partitioned into segments, where each segment contains several seconds of video.
  • the rate-quality model generation module 220 may then process each video segment of the video.
  • The rate-quality model generation module 220 may encode a segment of the video into different bitrates at each supported resolution. For example, a segment can be encoded into different bitrates at each of the resolutions 240p, 360p, 480p, 720p and 1080p, respectively.
  • the rate-quality model generation module 220 may, for each resolution, encode the source video into versions with M bitrates. The versions can be indexed using the variable m, where a larger m corresponds to a higher encoding bitrate.
  • The encoding bitrate of the m'th version may be denoted as b_m.
  • Encoding bitrate b_m described herein may refer to encoding bitrates used as part of the rate-quality model generation process, while encoding bitrate x_i described herein may refer to encoding bitrates used as part of the rate-quality measurement process described below.
  • the rate-quality model generation module 220 may measure a corresponding quality metric.
  • a quality metric may be a measurement resulting from a video quality evaluation.
  • Video quality evaluation may be performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (e.g., by mathematical models) or subjectively (e.g., by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services), or in-service (to monitor and ensure a certain level of quality).
  • Some models that are used for video quality assessment are image quality models whose output is calculated for every frame of a video sequence. This quality measure of every frame can then be recorded over time to assess the quality of an entire video sequence.
  • q_m may denote the measured quality of the encoded version with encoding bitrate b_m.
  • the PSNR model may be used to measure the quality metric, where PSNR provides for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation.
  • the SSIM model may be used to measure the quality metric.
  • the SSIM model is an index that measures the similarity between two images where the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as reference.
  • A combination of the PSNR model and the SSIM model may be used to measure the quality metric. Any type of quality metric may be applied to measure the quality in implementations of the disclosure.
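For example, the per-version quality metric q_m could be obtained by averaging per-frame PSNR or SSIM scores between the source frames and the decoded frames of the encoded version, as in the sketch below. scikit-image is used here only as one convenient implementation; frame decoding and alignment are assumed to be handled elsewhere.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Sketch of measuring a per-version quality metric q_m by averaging per-frame
# PSNR (or SSIM) between source frames and decoded frames of the encoded version.
# Frames are assumed to be uint8 arrays with a trailing color channel.

def segment_quality(source_frames, encoded_frames, metric="psnr"):
    scores = []
    for ref, test in zip(source_frames, encoded_frames):
        if metric == "psnr":
            scores.append(peak_signal_noise_ratio(ref, test, data_range=255))
        else:  # "ssim"
            scores.append(structural_similarity(ref, test, channel_axis=-1, data_range=255))
    return float(np.mean(scores))
```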
  • The rate-quality model generation module 220 fits a rate-quality model Q_{r_i}(x_i), where Q_{r_i}(x_i) represents the quality measurement at the resolution r_i of the i'th representation when encoded at the encoding bitrate x_i.
  • Each resolution/bitrate version of the segment has a corresponding quality measurement represented in the fitted rate-quality model.
  • Each resolution of the segment has a quality measurement (represented in the rate-quality model Q_{r_i}(x_i)) corresponding to a potential encoding bitrate for the representation.
  • the rate-quality model of implementations of the disclosure may be obtained using processes different than the above-described process.
  • the rate-quality model may be obtained by analyzing the content of the video and creating a rate-quality model from the resulting analysis.
  • the generated rate-quality model for the source video may be stored in rate-quality model data 293 of data store 106.
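One simple way to realize a fitted per-resolution rate-quality model Q_{r_i}(x_i) from the measured (b_m, q_m) points is a parametric fit; the logarithmic form below is an assumption made for illustration, since the disclosure does not mandate a specific functional form, and the example measurement values are hypothetical.

```python
import numpy as np

# Sketch of fitting a per-resolution rate-quality model Q_r(x) from measured
# (b_m, q_m) points. Any monotone model fit to the measurements would do;
# a log-linear fit is used here for illustration.

def fit_rate_quality_model(bitrates_bps, qualities):
    log_b = np.log(np.asarray(bitrates_bps, dtype=float))
    a, c = np.polyfit(log_b, np.asarray(qualities, dtype=float), deg=1)
    return lambda x: a * np.log(x) + c   # Q_r(x): predicted quality at bitrate x

# Example (hypothetical) measurements for the 720p resolution of one segment.
q_720p = fit_rate_quality_model([500e3, 1e6, 2e6, 4e6], [34.1, 37.2, 40.0, 42.3])
print(q_720p(1.5e6))  # predicted quality (e.g., PSNR in dB) at 1.5 Mbit/s
```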
  • The requesting probability determination module 230 utilizes the probability distributions (generated by probability distribution module 210) to predict, for each representation i of the segment encoded at a particular bitrate, the probability of that representation being requested for streaming of the segment to a media player. This predicted probability may be represented as P_i. There are two cases in which the media player may request the i'th resolution.
  • The first case is when the bandwidth X falls between x_i and x_{i+1} and the viewport size is larger than r_i (the resolution of the i'th representation).
  • The second case is when the bandwidth is higher than x_{i+1} and r_{i+1} is larger than r_i, but the viewport size is equal to r_i.
  • 1(·) is an indicator function: it equals one if its condition is satisfied, and otherwise it equals zero.
  • P_i may be referred to herein as a requesting probability.
  • The requesting probabilities are computed from the probability distribution of X and V (e.g., the joint probability distribution P[X, V]) that was previously estimated by the probability distribution module 210 and that may be accessed as probability distribution data 292 in data store 106.
  • the probability distributions are estimated based on a particular granularity, such as geographic region, genre, and so on, then the predicted probability may represent the probability that the version of the segment is selected by client systems that are grouped into the particular granularity (e.g., client systems in the geographic region, client systems requesting that genre of video, etc.).
  • The requesting probability determination module 230 predicts the requesting probability at any encoding bitrate vector (x_1, ..., x_N) when these representations of the video segment are requested.
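The sketch below estimates the requesting probability P_i for each representation directly from playback-trace samples (a Monte-Carlo style estimate of the probabilities described above), following the two cases just described. How ties and boundary conditions are handled here is an interpretation made for illustration, not a verbatim formula from this description.

```python
import numpy as np

# Sketch of estimating requesting probabilities P_i from trace samples, for
# representations ordered by increasing bitrate x_i with resolutions r_i.

def requesting_probabilities(bitrates, resolutions, speed_samples, viewport_samples):
    bitrates = np.asarray(bitrates, dtype=float)        # x_1 <= ... <= x_N
    resolutions = np.asarray(resolutions, dtype=float)  # r_i (e.g., heights)
    X = np.asarray(speed_samples, dtype=float)
    V = np.asarray(viewport_samples, dtype=float)
    n = len(bitrates)
    p = np.zeros(n)
    for i in range(n):
        x_hi = bitrates[i + 1] if i + 1 < n else np.inf
        # Case 1: bandwidth only supports representation i; viewport is larger than r_i.
        case1 = (X >= bitrates[i]) & (X < x_hi) & (V > resolutions[i])
        # Case 2: bandwidth would allow the next representation, but its higher
        # resolution exceeds the viewport, which equals r_i.
        case2 = np.zeros_like(case1)
        if i + 1 < n and resolutions[i + 1] > resolutions[i]:
            case2 = (X >= bitrates[i + 1]) & (V == resolutions[i])
        p[i] = np.mean(case1 | case2)
    return p
```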
  • the encoding bitrate selection module 240 may utilize the requesting probabilities determined by requesting probability determination module 230, as well as the rate-quality model 293 generated by rate-quality model generation module 220, to determine an optimal set of encoding bitrates for the segment of the source video.
  • The encoding bitrate selection module 240 may apply a non-linear optimization process to determine the optimized encoding bitrates for the different representations of the segment of the source video.
  • The average quality of a segment (such as the segment of the source video) may be denoted Q_avg, and the average egress traffic for the segment may be denoted R_avg.
  • The encoding bitrate selection module 240 may implement a non-linear optimizer to minimize R_avg, such that Q_avg is greater than or equal to a quality threshold, Q (also referred to herein as the quality threshold value).
  • the quality threshold value may be configured by an administrator of the adaptive bitrate streaming system and stored in quality data 295 in data store 106.
  • The quality threshold Q is the average quality the server aims to achieve.
  • P_i is the encoding bitrate-resolution probability estimated by the requesting probability determination module 230.
  • Q_i is given by the rate-quality model generated by rate-quality model generation module 220 and stored in rate-quality model data 293.
  • the solution determined by the encoding bitrate selection module 240 provides the encoding bitrates that minimize the egress video traffic while allowing for the average video quality to be higher than quality threshold value Q.
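Written out, and consistent with the weighted-sum descriptions elsewhere in this disclosure (the notation here is an editorial reconstruction rather than a quotation), the quantities being optimized are:

```latex
R_{\mathrm{avg}} = \sum_{i=1}^{N} P_i \, x_i ,
\qquad
Q_{\mathrm{avg}} = \sum_{i=1}^{N} P_i \, Q_{r_i}(x_i),
\qquad
\min_{x_1,\dots,x_N} \; R_{\mathrm{avg}} \;\; \text{subject to} \;\; Q_{\mathrm{avg}} \ge Q .
```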
  • R is the constraint on average video egress traffic (also referred to herein as an egress threshold value or egress constraint threshold value), which may be found in egress constraint data 296 of data store 106.
  • The solution to this problem gives the encoding bitrates that maximize the average video quality while allowing the average video egress traffic to be no more than R.
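As one possible (hypothetical) realization of this non-linear optimization, the sketch below uses SciPy's SLSQP solver to minimize the average egress subject to the quality constraint. `rq_models` and `requesting_probabilities` are the illustrative helpers sketched earlier, and the bounds and starting point are assumptions. Because the requesting probabilities are estimated from discrete samples, the objective is not smooth, so in practice a derivative-free or smoothed formulation may be preferable.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_bitrates(rq_models, resolutions, speeds, viewports, q_target, x0):
    """Choose the bitrate vector (x_1, ..., x_N) that minimizes average egress
    R_avg while keeping average delivered quality Q_avg at or above q_target."""
    # `requesting_probabilities` is the helper sketched earlier in this description.
    def r_avg(x):
        p = requesting_probabilities(x, resolutions, speeds, viewports)
        return float(np.dot(p, x))

    def q_avg(x):
        p = requesting_probabilities(x, resolutions, speeds, viewports)
        return float(sum(p_i * rq_models[i](x_i)
                         for i, (p_i, x_i) in enumerate(zip(p, x))))

    result = minimize(
        r_avg, x0, method="SLSQP",
        constraints=[{"type": "ineq", "fun": lambda x: q_avg(x) - q_target}],
        bounds=[(100e3, 40e6)] * len(x0),   # assumed per-representation bitrate bounds
    )
    return result.x

# The maximize-quality variant swaps objective and constraint: minimize -q_avg(x)
# subject to R_target - r_avg(x) >= 0.
```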
  • Implementations of the disclosure are not limited to optimizing based on the above two examples. In further implementations, other optimizations may be solved.
  • the encoding bitrate selection module 240 may determine the optimized set of encoding bitrates that minimizes storage size while maintaining quality.
  • The encoding bitrate selection module 240 may add temporal direction constraint(s), such as the quality difference between adjacent video segments being less than a threshold.
  • the encoding bitrate selection module 240 may utilize other network metrics to select the optimized set of encoding bitrates, such as adding a constraint that the lowest representation should have bitrate less than (or, in some implementations, higher than) a threshold.
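In the same SLSQP constraint format used in the optimization sketch above, such additional constraints could be expressed as follows; the callables, thresholds, and default cap are hypothetical inputs supplied by the caller rather than values specified by this disclosure.

```python
def extra_constraints(q_avg_fn, prev_segment_q, max_quality_delta, lowest_rep_cap=300e3):
    """Hypothetical additional constraints for the bitrate optimization:
    cap the lowest representation's bitrate and bound the quality change
    relative to the previous (adjacent) segment."""
    return [
        # Lowest representation's bitrate stays at or below a cap (here 300 kbit/s).
        {"type": "ineq", "fun": lambda x: lowest_rep_cap - min(x)},
        # Average quality stays within max_quality_delta of the previous segment's quality.
        {"type": "ineq", "fun": lambda x: max_quality_delta - abs(q_avg_fn(x) - prev_segment_q)},
    ]
```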
  • the optimized encoding bitrates determined by the encoding bitrate selection module 240 may be used to minimize egress traffic from the server to client systems, without compromising delivered quality of the video.
  • the encoding bitrate optimization component 140 may apply to a single video as a whole and/or may apply to segments of the video. For example, the encoding bitrate optimization component 140 may partition the video into segments and apply the process described above to each segment to determine the optimal encoding bitrate selections adaptive to different contents of a video.
  • FIG. 3 is a flow diagram illustrating a method 300 for bitrate optimization for multi-representation encoding using playback statistics according to some implementations of the disclosure.
  • The method 300 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
  • Method 300 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2. Method 300 begins at block 310 where multiple versions of a segment of a source video are generated. The versions may include encodings of the segment at different encoding bitrates for each resolution associated with the segment. At block 320, a quality metric is measured for each version of the segment. The quality metric may be a PSNR measurement or an SSIM measurement, to name a few examples. In one implementation, the measured quality metrics are used to generate rate-quality models for each of the different resolutions.
  • A probability model is generated to predict requesting probabilities that representations of the segment are requested.
  • the probability model may be based on an empirical joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other videos.
  • The representation of the segment may refer to one of multiple encoding bitrates selected for a single resolution (e.g., 2 bitrates selected for 240p: 240p_100kbps and 240p_200kbps).
  • an encoding bitrate is determined for each of the representations of the segment.
  • The encoding bitrate is determined for a representation based on the rate-quality models (determined at block 320) and the probability model (determined at block 330).
  • A non-linear optimizer is used to determine the set of encoding bitrates for the representations of the segment.
  • The determined encoding bitrates are assigned to the corresponding representations of the segment. This bitrate/representation assignment may be used by an encoder as an encoding configuration for the video segment.
  • FIG. 4 is a flow diagram illustrating a method 400 for multi-representation encoding bitrate optimization to minimize egress based on playback statistics, according to an implementation of the disclosure.
  • The method 400 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
  • method 400 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2.
  • Method 400 begins at block 410 where a joint probability distribution for network speed and viewport size is determined.
  • The joint probability distribution may be based on feedback statistics received from client systems.
  • The joint probability distribution may be estimated for geographic regions and/or other categories by utilizing feedback statistics corresponding to those categories (e.g., feedback statistics gathered from media players located in the geographic region, etc.).
  • rate-quality models for resolutions of a segment of a video are generated based on quality metrics measured for the segment.
  • A delivered quality and egress are estimated for the encodings based on the generated rate-quality models and based on requesting probabilities that the representations are requested.
  • the requesting probabilities may be based on the joint probability distribution.
  • The delivered quality is a weighted sum of the quality of the encodings, where the weight is the probability that the encoding is requested.
  • The egress may be the weighted sum of the bitrates, where the weight is the probability that the encoding is requested.
  • An arbitrary bitrate for each representation may be selected to generate the estimated delivered quality and egress, resulting in different possible delivered quality and egress results.
  • A set of bitrates is determined that minimizes the egress (determined at block 430) while maintaining the delivered quality (determined at block 430) at or above a quality threshold value.
  • the set of bitrates includes a bitrate that corresponds to each representation of the segment.
  • A non-linear optimizer is used to determine the set of bitrates for the corresponding representations that minimizes the determined egress while maintaining the determined quality.
  • FIG. 5 is a flow diagram illustrating a method 500 for multi-representation encoding bitrate optimization to maximize quality based on playback statistics, according to an implementation of the disclosure.
  • the method 500 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
  • method 500 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2.
  • Blocks 510 through 530 of method 500 are similar to blocks 410 through 430 of method 400.
  • A set of bitrates is determined that maximizes the delivered quality (determined at block 530) while keeping the determined egress (determined at block 530) at or below an egress threshold value.
  • the set of bitrates includes a bitrate that corresponds to each representation of the segment.
  • A non-linear optimizer is used to determine the set of bitrates for the corresponding representations that maximizes the delivered quality while keeping the determined egress at or below the egress threshold value.
  • FIG. 6 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet.
  • The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • Computer system 600 may be representative of a server, such as server 130, executing an encoding bitrate optimization component 140, as described with respect to FIGS. 1 and 2.
  • The exemplary computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) (such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM)), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 618, which communicate with each other via a bus 630.
  • Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses.
  • the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses.
  • Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute processing logic 626 for performing the operations and steps discussed herein.
  • the computer system 600 may further include a network interface device 608.
  • The computer system 600 also may include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), and a signal generation device 616 (e.g., a speaker).
  • The data storage device 618 may include a computer-readable storage medium 628 (also referred to as a machine-readable storage medium), on which is stored one or more sets of instructions 622 (e.g., software) embodying any one or more of the methodologies of functions described herein.
  • The instructions 622 may also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600; the main memory 604 and the processing device 602 also constituting machine-readable storage media.
  • The instructions 622 may further be transmitted or received over a network 620 via the network interface device 608.
  • The computer-readable storage medium 628 may also be used to store instructions to perform a method for bitrate optimization for multi-representation encoding using playback statistics, as described herein. While the computer-readable storage medium 628 is shown in an exemplary implementation to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • a machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
  • the machine-readable medium may include, but i s not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM ); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
  • magnetic storage medium e.g., floppy diskette
  • optical storage medium e.g., CD-ROM
  • magneto-optical storage medium e.g., read-only memory (ROM); random-access memory (RAM ); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
  • ROM read-only memory
  • RAM random-access memory
  • EPROM and EEPROM erasable programmable memory
  • flash memory or another type of medium suitable for storing electronic instructions.
  • implementation means that a particular feature, structure, or characteri stic described in connection with the implementation is included in at least one implementation.
  • appearances of the phrase “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same

Abstract

Implementations disclose bitrate optimization for multi-representation encoding using playback statistics. A method includes generating multiple versions of a segment of a source video, the versions comprising encodings of the segment at different encoding bitrates for each resolution of the segment, measuring a quality metric for each version of the segment, generating rate-quality models for each resolution of the segment based on the measured quality metrics corresponding to the resolutions, generating a probability model to predict requesting probabilities that representations of the segment are requested, the probability model based on a joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other videos, determining an encoding bitrate for each of the representations of the segment based on the rate-quality models and the probability model, and assigning determined encoding bitrates to corresponding representations of the segment.

Description

BITRATE OPTIMIZATION FOR MULTI-REPRESENTATION
ENCODING USING PLAYBACK STATISTICS
TECHNICAL FIELD
[001] This disclosure relates to the field of video streaming and, in particular, to bitrate optimization for multi-representation encoding using playback statistics.
BACKGROUND
[002] The streaming of multimedia (e.g., videos) to a client device over a network may be based on adaptive bitrate streaming. For example, bandwidth and processing capability of the client device may be detected in real time. In response to a change of the detected bandwidth and viewport size, the video stream accessed by the client device may be adjusted accordingly. As an example, a video may be encoded at different bitrates. The client device may switch from a first representation of the video to a second representation of the video in response to the changing resources or capabilities of the client device.
SUMMARY
[003] The following is a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is intended to neither identify key or critical elements of the disclosure, nor delineate any scope of the particular implementations of the disclosure or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
[004] In an aspect of the disclosure, a method includes generating multiple versions of a segment of a source media item (such as, for example, a source video), the versions comprising encodings of the segment at different encoding bitrates for each resolution of the segment, measuring a quality metric for each version of the segment, generating rate-quality models for each resolution of the segment based on the measured quality metrics corresponding to the resolutions, generating a probability model to predict requesting probabilities that representations of the segment are requested, the probability model based on a joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other media items (videos), determining an encoding bitrate for each of the representations of the segment based on the rate-quality models and the probability model, and assigning determined encoding bitrates to corresponding representations of the segment.
[005] In one implementation, the segment may include the entire source media item (video). In addition, the requesting probability for one of the representations is further based on the encoding bitrate of the representation and a relation of the encoding bitrate to network speed in the joint probability distribution, and the resolution of the representation and a relation of the resolution to viewport size in the joint probability distribution.
[006] In some implementations, the client-side feedback statistics include playback traces transmitted from media players at client devices, the playback traces comprising network speed measurements and viewport sizes, wherein the joint probability distribution is generated from cumulative measurements of the network speeds determined from the playback traces and from cumulative measurements of the viewport sizes determined from the playback traces. Furthermore, the playback traces may be collected from media players at client devices located in a first defined geographic region, and the joint probability distribution may be specific to the first defined geographic region. (In a further implementation, playback traces may be collected from media players at client devices located in a second defined geographic region, and there may be another joint probability distribution specific to the second defined geographic region.) Additionally or alternatively, the playback traces may be collected for a type of the source media item (video), and the joint probability distribution may be specific to the type of the source media item (video).
[007] In one implementation, determining the encoding bitrate for each of the representations further comprises minimizing an average egress traffic for the segment such that an average quality of the segment (as determined by any suitable quality measure) is maintained at or above a defined quality level, wherein the average egress traffic is a function of the different encoding bitrates and the requesting probabilities, and wherein the average quality is a function of the quality metrics and the requesting probabilities. In some implementations, determining the encoding bitrate for each of the representations further comprises maximizing an average quality for the segment such that an average egress traffic of the segment is maintained at or below a defined media item (video) egress traffic level, wherein the average quality is a function of the quality metrics and the requesting probabilities, and wherein the average egress traffic is a function of the multiple bitrates and the requesting probabilities.
[008] Furthermore, assigning the determined encoding bitrates to the corresponding representations may further include providing the selected encoding bitrates to at least one transcoder for encoding of each of the representations of the segment at the corresponding bitrate. In addition, the representation may include a bitrate/resolution combination of the segment, and wherein the segment may include one or more representations for each of the resolutions of the segment. In some implementations, the quality metric may include a Peak Signal-to-Noise Ratio (PSNR) measurement or a Structural Similarity (SSIM) measurement.
[009] In another aspect of the disclosure, a method includes determining a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems; generating rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment; estimating a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and determining a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to minimize the egress while maintaining the delivered quality at or above a quality threshold value.
[0010] In another aspect of the disclosure, a method includes determining a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems; generating rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment; estimating, by the processing device, a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and determining, by the processing device, a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to maximize the delivered quality while keeping the egress at or below an egress threshold value.
[0011] In other aspects of the disclosure, a system comprises a processing device configured to perform a method according to any aspect or implementation of the present disclosure. The system may further comprise a memory, and the processing device may be coupled to the memory.
[0012] In other aspects of the disclosure, a machine-readable storage medium (which may be a non-transitory machine-readable storage medium, although the invention is not limited to this) stores instructions which, when executed, cause a processing device to perform operations comprising a method according to any aspect or implementation of the present disclosure.
[0013] Computing devices for performing the operations of the above described method and the various implementations described herein are disclosed. Computer-readable media that store instructions for performing operations associated with the above described method and the various implementations described herein are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
[0015] Figure 1 is a block diagram illustrating an exemplary network architecture in which implementations of the disclosure may be implemented.
[0016] Figure 2 is a block diagram of an encoding bitrate optimization component, in accordance with an implementation of the disclosure.
[0017] Figure 3 is a flow diagram illustrating a method for bitrate optimization for multi-representation encoding using playback statistics according to an implementation.
[0018] Figure 4 is a flow diagram illustrating a method for multi-representation encoding bitrate optimization to minimize egress based on playback statistics according to an implementation.
[0019] Figure 5 is a flow diagram illustrating a method for multi-representation encoding bitrate optimization to maximize quality based on playback statistics, according to an implementation.
[0020] Figure 6 is a block diagram illustrating one implementation of a computer system, according to an implementation.
DETAILED DESCRIPTION
[0021] Aspects and implementations of the disclosure are described for bitrate optimization for multi-representation encoding using playback statistics. Adaptive bitrate streaming may be used to stream multimedia (e.g., a video) from a server (e.g., an adaptive video streaming system) to a client system (e.g., a media player on a client device) over a network. The adaptive video streaming system encodes a source video into several representations of different encoding bitrates and/or resolutions. That is, the segment of the media item may be converted (e.g., transcoded) into multiple resolutions (e.g., multiple spatial resolutions). Each of the multiple resolutions of the segment may then be encoded at a plurality of different bitrates, to produce multiple "representations" of the segment of the media item. A representation may refer to a result of encoding a video and/or a video segment at one resolution using one bitrate. This set of encoded representations allows the client systems to adaptively select appropriate encoded representations according to the network bandwidth and viewport size during the video streaming. For example, a media player of a client device may switch from a first representation or encoding of a video to a second representation or encoding of the video of a different quality in response to changing conditions (e.g., CPU, network bandwidth, viewport size, etc.) associated with the client device.
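To make the relationship between resolutions, bitrates, and representations concrete, the following is a minimal sketch (not part of the disclosed system) that simply enumerates the representations produced for one segment; the specific resolutions and bitrate values in the ladder are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Representation:
    resolution: str    # spatial resolution of this representation
    bitrate_kbps: int  # encoding bitrate assigned to it

# Illustrative ladder: each resolution may carry one or more encoding bitrates.
LADDER = {
    "360p": [400],
    "480p": [800],
    "720p": [1500, 2500],
    "1080p": [4000, 6000],
}

def representations_for_segment(ladder: dict) -> list:
    """Enumerate every (resolution, bitrate) pair the segment will be encoded into."""
    return [Representation(res, kbps)
            for res, bitrates in ladder.items()
            for kbps in bitrates]

if __name__ == "__main__":
    for rep in representations_for_segment(LADDER):
        print(rep)
```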
[0022] To support this switching between quality levels or formats, a video (or individual segments of the video) may be transcoded into multiple resolutions (e.g., 1080p, 720p, 480p, 360p, etc.) by the adaptive video streaming system. Furthermore, each resolution of the video may be encoded at one or more encoding bitrates (an encoding bitrate may also be referred to herein as a "bitrate"). Multi-representation encoding may refer to having one or more encoding bitrate representations for each resolution. A bitrate may refer to an amount of information or data stored per unit of playback time of the video. For example, a bitrate in video streaming may vary between 400 kbit/s and 40 Mbit/s. In some cases, a higher bitrate correlates to better clarity and/or higher quality of the video to a viewer.
[0023] Conventional systems for adaptive bitrate streaming may utilize generic encoding configurations (e.g., an encoding bitrate to use for each representation) for encoding a video or video segment (also referred to as a "portion," "chunk," or "clip" of the video). The generic encoding configurations may refer to a pre-defined bitrate selected for each representation of the video or video segment that is to be encoded. In the conventional systems, the generic encoding configurations are selected to be "good on average" (e.g., satisfying a determined video quality measurement based on aggregated quality measurements at multiple client devices) for videos or video segments of a particular resolution.
[0024] However, contrary to the above conventional approach, it has been realized that each video and/or video segment is different, and the encoding configurations for the encoder should be chosen such that the encoded versions created for each video segment are appropriate for the specific video segment. The selection of encoding configurations (e.g., resolutions, bitrates, etc.) has an impact on the delivered video quality and the cost for storage and transmission. For example, the selection of a higher encoding bitrate for a resolution may result in better video quality, but it may also increase the system resources needed for the adaptive video streaming system because the system requires more resources to provide greater performance for delivering video traffic to client systems and for storing the data (i.e., the higher the bitrate, the more data to transfer and store, and the higher the requirements for the system resources, additionally also increasing the financial costs of the video streaming system). Furthermore, as the encoding bitrate is increased, the quality of the video delivered to user devices may deteriorate. This is because network capacity may be limited. If the encoding bitrate is higher than the network throughput of a user device, the video cannot be delivered to the user device without re-buffering, which negatively affects the quality of video experienced by the user. In addition, different videos have different characteristics, and a general and/or generic encoding setting is unlikely to be universally optimal for all videos.
[0025] Implementations of the disclosure analyze the trade-offs between the cost (e.g., transmission and storage costs) and delivered video quality for an encoding configuration based on information about playback statistics received from client systems. The playback statistics may refer to client-measured bandwidth (also referred to as network speed) and/or client viewport size. These playback statistics are used to determine an optimal set of encoding configurations (e.g., bitrate defined for each representation to be encoded) for each video and/or video segment. This optimal set of encoding configurations is used to minimize egress traffic from the adaptive video streaming system to client systems, while maintaining delivered quality of the video and/or video segment (as compared to conventional systems). "Egress traffic" or "egress" may refer to the rate that data is transmitted from a data source and/or a network (amount of data per unit of time).
[0026] Conventional systems did not consider client-side feedback in order to determine optimal encoding bitrates for specific videos and/or video segments. Implementations of the disclosure provide a technical improvement for adaptive video streaming systems by improving the efficiency of the encoding process (via optimized encoding configuration selection), thus reducing size and/or number of transmissions (improves utility of transmission bandwidth) as well as storage used for adaptive bitrate streaming, while maintaining video quality for client systems.
[0027] The disclosure often references videos for simplicity and brevity. However, the teachings of the disclosure are applied to media items generally and can be applied to various types of content or media items, including for example, video, audio, text, images, etc.
[0028] FIG. 1 illustrates an example system architecture 100, in accordance with one implementation of the disclosure. The system architecture 100 includes client devices 110A through 110Z, a network 105, a data store 106, a content sharing platform 120, and a server 130. In one implementation, network 105 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. In one implementation, the data store 106 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 106 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers).
[0029] The client devices 110A through 110Z may each include computing devices such as personal computers (PCs), laptops, mobile phones, smart phones, tablet computers, netbook computers, network-connected televisions, etc. In some implementations, client devices 110A through 110Z may also be referred to as "user devices." Each client device includes a media viewer 111. In one implementation, the media viewers 111 may be applications that allow users to view content, such as images, videos, web pages, documents, etc. For example, the media viewer 111 may be a web browser that can access, retrieve, present, and/or navigate content (e.g., web pages such as Hyper Text Markup Language (HTML) pages, digital media items, etc.) served by a web server. The media viewer 111 may render, display, and/or present the content (e.g., a web page, a media viewer) to a user. The media viewer 111 may also display an embedded media player (e.g., a Flash® player or an HTML5 player) that is embedded in a web page (e.g., a web page that may provide information about a product sold by an online merchant). In another example, the media viewer 111 may be a standalone application (e.g., a mobile application or app) that allows users to view digital media items (e.g., digital videos, digital images, electronic books, etc.). According to aspects of the disclosure, the media viewer 111 may be a content sharing platform application with bitrate optimization for multi-representation encoding using playback statistics.
[0030] The media viewers 111 may be provided to the client devices 110A through 110Z by the server 130 and/or content sharing platform 120. For example, the media viewers 111 may be embedded media players that are embedded in web pages provided by the content sharing platform 120. In another example, the media viewers 111 may be applications that are downloaded from the server 130, and/or downloaded from a separate server (not shown).
[0031] It should be noted that functions described in one implementation as being performed by the content sharing platform 120 can also be performed on the client devices 110A through 110Z in other implementations, if appropriate. In addition, the functionality attributed to a particular component can be performed by different or multiple components operating together. The content sharing platform 120 can also be accessed as a service provided to other systems or devices through appropriate application programming interfaces, and thus is not limited to use in websites.
[0032] In one implementation, the content sharing platform 120 may be one or more computing devices (such as a rack mount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases), networks, software components, and/or hardware components that may be used to provide a user with access to media items and/or provide the media items to the user. For example, the content sharing platform 120 may allow a user to consume, upload, search for, approve of ("like"), dislike, and/or comment on media items. The content sharing platform 120 may also include a website (e.g., a webpage) or application back-end software that may be used to provide a user with access to the media items.
[0033] In implementations of the disclosure, a "user" may be represented as a single individual. However, other implementations of the disclosure encompass a "user" being an entity controlled by a set of users and/or an automated source. For example, a set of individual users federated as a community in a social network may be considered a "user". In another example, an automated consumer may be an automated ingestion pipeline, such as a topic channel, of the content sharing platform 120.
[0034] The content sharing platform 120 may host data content, such as media items 121. The data content can be digital content chosen by a user, digital content made available by a user, digital content uploaded by a user, digital content chosen by a content provider, digital content chosen by a broadcaster, etc. Examples of a media item 121 can include, and are not limited to, digital video, digital movies, digital photos, digital music, website content, social media updates, electronic books (ebooks), electronic magazines, digital newspapers, digital audio books, electronic journals, web blogs, real simple syndication (RSS) feeds, electronic comic books, etc. In some implementations, media item 121 is also referred to as a content item.
[0035] A media item 121 may be consumed via the Internet and/or via a mobile device application. For brevity and simplicity, an online video (also hereinafter referred to as a video) is used as an example of a media item 121 throughout this document. As used herein, "media," "media item," "online media item," "digital media," "digital media item," "content," and "content item" can include an electronic file that can be executed or loaded using software, firmware or hardware configured to present the digital media item to an entity. In one implementation, the content sharing platform 120 may store the media items 121 using the data store 106.
[0036] In one implementation, the server 130 may be one or more computing devices (e.g., a rackmount server, a server computer, etc.). In one implementation, the server 130 may be included in the content sharing platform 120. As an example, users of the client devices 110A-110Z may each transmit a request to the server 130 over the network 105 for one or more videos stored at the data store 106. In some implementations, the videos may be stored at the data store 106 in segments based on a resolution for each video and determined optimal bitrate for each resolution of each video, as discussed in further detail below. For example, each segment of a video may be decoded separately for video playback. Furthermore, the videos that have been divided into segments may be associated with the same segment boundaries (e.g., time boundaries) to enable switching between different bitrates and/or resolutions at the segment boundaries.
[0037] Thus, the data store 106 may store multiple videos where each video is divided into multiple segments. In some implementations, the data store 106 may further include a manifest file that may be transmitted by the server 130 to the client devices 110A-110Z. In some implementations, the manifest file may identify the available representations of the video (e.g., the available resolutions at available bitrates) and the segment boundaries for each segment of the video. The manifest file may be transmitted by the server 130 in response to a request for the streaming of a video in the data store 106 by the client devices 110A-110Z. Each of the client devices 110A-110Z may use the manifest file to switch between encoded versions of a stream from the server 130 based on the available resources (e.g., CPU and bandwidth) of the respective client device 110A-110Z. For example, a first encoded version of the stream of a video may be transmitted from the server 130 to the client device 110A based on the viewport size of the client device 110A and the network bandwidth associated with the client device 110A. Furthermore, a second encoded version of the stream of the same video may be transmitted from the server 130 to a different client device 110Z based on the viewport size of the client device 110Z and the network bandwidth associated with the client device 110Z.
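As one illustration of the manifest-driven switching described above, the sketch below shows a hypothetical manifest structure and a toy client-side selection rule; the field names, URL templates, and selection logic are assumptions made for illustration and do not reflect the actual format used by the server 130 or the media viewer 111.

```python
# Hypothetical manifest: shared segment boundaries plus one entry per representation.
manifest = {
    "video_id": "example_video",
    "segment_boundaries_sec": [0.0, 5.0, 10.0, 15.0],
    "representations": [
        {"resolution": "480p",  "bitrate_kbps": 800,  "url_template": "seg_{idx}_480p_800.mp4"},
        {"resolution": "720p",  "bitrate_kbps": 1500, "url_template": "seg_{idx}_720p_1500.mp4"},
        {"resolution": "1080p", "bitrate_kbps": 4000, "url_template": "seg_{idx}_1080p_4000.mp4"},
    ],
}

def pick_representation(manifest, measured_kbps, viewport_height):
    """Toy selection: highest-bitrate representation that fits the measured
    bandwidth and whose resolution does not exceed the viewport height."""
    heights = {"480p": 480, "720p": 720, "1080p": 1080}
    candidates = [r for r in manifest["representations"]
                  if r["bitrate_kbps"] <= measured_kbps
                  and heights[r["resolution"]] <= viewport_height]
    if not candidates:
        return manifest["representations"][0]  # fall back to the lowest representation
    return max(candidates, key=lambda r: r["bitrate_kbps"])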
[0038] In implementations of the disclosure, the server 130 may include an encoding bitrate optimization component 140. The encoding bitrate optimization component 140 determines a set of encoding bitrates (i.e., one or more bitrates for each resolution) for a source video. In some implementations, the encoding bitrate optimization component 140 may determine the set of encoding bitrates for each segment of the video. In one implementation, each bitrate in the determined set of encoding bitrates corresponds to a different resolution of the segment. In some implementations, there may be multiple bitrates associated with a resolution (e.g., 3 versions of 720p each at different bitrates). The video may then be stored at the data store 106 in segments based on a representation for each video segment and the optimal encoding bitrate(s) determined for the representation of the video segment. In some implementations, optimal encoding bitrates may be determined for the entire video in addition to, and/or in lieu of, segments of the video.
[0039] In some implementations, the encoding bitrate optimization component 140 determines the set of encoding bitrates for a video based on playback statistics (e.g., media player feedback) and rate-quality characteristics of the video. The playback statistics may refer to client-measured bandwidth and client viewport size. Bandwidth may refer to the average rate of successful data transfer through a communication path, and may be measured in bits per second. A bit stream's bandwidth is proportional to the average consumed signal bandwidth in Hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval. Viewport size may refer to an area (typically rectangular) expressed in rendering-device-specific coordinates, e.g., pixels for screen coordinates, in which the objects of interest are going to be rendered.
[0040] Implementations of the disclosure generate a rate-quality model for the video segment to be encoded based on quality characteristics of the video. This generated rate-quality model is used, along with the feedback statistics, to predict the egress and delivered quality of the video to client systems 110A-110Z. Based on these predictions, a non-linear optimization process may be applied to determine the optimized encoding bitrates for different representations of the source video.
[0041] The optimized encoding bitrates may then be used by a transcoding component 150 to encode a video and/or video segment(s) at the specific bitrate for each representation of the video and/or video segment(s). The transcoding component 150 may be located in the same computing device (e.g., server 130) as encoding bitrate optimization component 140, or may be located remote to encoding bitrate optimization component 140 and/or server 130. Transcoding component 150 includes components (e.g., hardware and/or instructions executable by a processing device) that convert media files from a source format into versions that can be played on a client device, such as a desktop computer, a mobile device, a tablet, and so on. Transcoding component 150 may be a single master transcoder component 150 or may be multiple different transcoding components dispersed locally and/or remote to server 130. Transcoding component 150 may utilize the optimized encoding bitrates as input to guide the transcoding operations performed for a video and/or video segment(s), and to generate a final stream for the video and/or video segment(s) that can be delivered to a media player (e.g., media viewer 111) at a client device 110A-110Z via an adaptive bitrate streaming process. The optimized encoding bitrates determined by implementations of the disclosure minimize egress traffic from the server 130 to client systems 110A-110Z, without compromising delivered quality of the video.
[0042] In some implementations, encoding bitrate optimization component 140 of server 130 may interact with content sharing platform 120 to provide multi-representation encoding bitrate optimization based on playback statistics. The encoding bitrate optimization component 140, as well as its specific functions, is described in more detail below with respect to FIG. 2.
[0043] Although implementations of the disclosure are discussed in terms of content sharing platforms and promoting social network sharing of a content item on the content sharing platform, implementations may also be generally applied to any type of social network providing connections between users. Implementations of the disclosure are not limited to content sharing platforms that provide channel subscriptions to users.
[0044] In situations in which the systems discussed here collect personal information about users (e.g., collection of feedback statistics from media viewers 111, collection of feedback data, etc.) or may make use of personal information, the users may be provided with an opportunity to control whether the content sharing platform 120 collects user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the content server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the content sharing platform 120.
[0045] FIG. 2 is a block diagram illustrating encoding bitrate optimization component 140 in accordance with one implementation of the disclosure. As discussed above, the encoding bitrate optimization component 140 may interact with a single social network, or may be utilized among multiple social networks (e.g., provided as a service of a content sharing platform that is utilized by other third party social networks). In one implementation, the encoding bitrate optimization component 140 includes a probability distribution module 210, a rate-quality model generation module 220, a requesting probability determination module 230, and an encoding bitrate selection module 240. More or less components may be included in the encoding bitrate optimization component 140 without loss of generality. For example, two of the modules may be combined into a single module, or one of the modules may be divided into two or more modules. In one implementation, one or more of the modules may reside on different computing devices (e.g., different server computers, on a single client device, or distributed among multiple client devices, etc.). Furthermore, one or more of the modules may reside on different content sharing platforms, third party social networks, and/or external servers.
[0046] The encoding bitrate optimization component 140 is communicatively coupled to the data store 106. For example, the encoding bitrate optimization component 140 may be coupled to the data store 106 via a network (e.g., via network 105 as illustrated in FIG. 1). The data store 106 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. The data store 106 may also include multiple storage components (e.g., multiple drives or multiple databases) that may also span multiple computing devices (e.g., multiple server computers). The data store 106 includes media item data 290, feedback data 291, probability distribution data 292, rate-quality model data 293, quality data 295, and egress constraint data 296.
[0047] As discussed above, the encoding bitrate optimization component 140 determines and/or calculates a set of optimal encoding bitrates (i.e., one or more bitrates for each representation) for a video and/or segments of a video. In one implementation, each bitrate in the determined set of encoding bitrates corresponds to a different representation of the video or video segment. In some implementations, there may be multiple bitrates associated with a resolution. The video may then be stored at the data store 106 in segments based on a representation for each video segment, at the optimal one or more bitrate(s) determined for the representation of the video segment. For ease of the following discussion, the encoding bitrate optimization component 140 is described as determining the set of optimal encoding bitrates for a segment of a video.
[0048] Initially, to determine the set of encoding bitrates, the encoding bitrate optimization component 140 initiates the probability distribution module 210 to generate probability distributions based on feedback statistics (also referred to herein as client-side feedback statistics) from media players. A probability distribution may refer to an equation or function that links each outcome of a statistical experiment with its probability of occurrence. For example, the probability distribution module 210 utilizes playback statistics to estimate a first probability distribution of network speed and a second probability distribution of viewport size. In some implementations, a joint probability distribution of bandwidth and viewport size may be utilized. With respect to the joint probability distribution, the bandwidth may be independent of the viewport size. Furthermore, a product of the first and second probability distributions may be used as an approximation of the joint probability distribution. In one implementation, P[X, V] may be referred to as the joint probability distribution. The variable X may denote network speed at a client device and the variable V may denote viewport size of the client device.
[0049] The playback statistics may refer to client-measured bandwidth (also referred to herein as network speed) and client viewport size. In one implementation, the variable X may denote network speed at a client and the variable V may denote viewport size of the client. Probability distribution module 210 may access feedback data 291 at data store 106 to generate the first and second probability distributions. Feedback data 291 may include statistics garnered from Quality of Experience (QoE) pings received from media players at client systems. The QoE pings may include, for example, measurements of throughput (e.g., bandwidth and/or network speed) at the media player and the viewport size adopted at the media player. The QoE pings may be provided at least once per playback at a media player. The network speed data from feedback data 291 may be aggregated to estimate the first probability distribution for network speed X. Similarly, the viewport size data from feedback data 291 may be aggregated to estimate the second probability distribution for viewport size V. The first and second probability distributions, and/or the joint probability distribution of X and V (i.e., P[X, V]), may be stored as probability distribution data 292 in data store 106.
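The aggregation of QoE pings into distributions can be sketched as follows; the bucket edges and sample values are illustrative assumptions, and the histogram-based estimate is just one way of turning playback traces into an empirical P[X, V].

```python
import numpy as np

# Hypothetical QoE pings aggregated from feedback data 291:
# one (network_speed_kbps, viewport_height_px) pair per playback.
pings = np.array([
    (3200, 720), (900, 480), (5400, 1080), (1500, 720),
    (700, 360), (4100, 1080), (2500, 720), (1100, 480),
], dtype=float)

speed_edges = np.array([0.0, 1000.0, 2000.0, 4000.0, 1e9])           # kbps buckets
viewport_edges = np.array([0.0, 360.0, 480.0, 720.0, 1080.0, 1e9])   # px buckets

def joint_distribution(pings, speed_edges, viewport_edges):
    """Empirical joint distribution P[X, V] over bandwidth/viewport buckets."""
    counts, _, _ = np.histogram2d(pings[:, 0], pings[:, 1],
                                  bins=[speed_edges, viewport_edges])
    return counts / counts.sum()

P_xv = joint_distribution(pings, speed_edges, viewport_edges)
P_x = P_xv.sum(axis=1)   # marginal distribution of network speed X
P_v = P_xv.sum(axis=0)   # marginal distribution of viewport size V
# If X and V are treated as independent, np.outer(P_x, P_v) approximates P_xv.
```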
[0050] In some implementations, probability distributions may be estimated based on different granularities, such as geographic locations, genres, channels, content type, and so on. For example, feedback data 291 from media players in a specific geographic region may be analyzed to estimate the first and second probability distributions, thus providing a view of aggregated probabilities of network speed and viewport sizes for the particular geographic region.
[0051] Subsequent to, or in parallel with, the estimation of probability distributions for network speed and viewport size, the rate-quality model generation module 220 generates one or more rate-quality models for a video that is to be encoded. The video may be referred to herein as a source video, which may be stored in media item data 290 of data store 106. In some implementations, the source video is first partitioned into segments, where each segment contains several seconds of video. The rate-quality model generation module 220 may then process each video segment of the video.
[0052] The rate-quality model generation module 220 may encode a segment of the video into different bitrates at each supported resolution. For example, a segment can be encoded into different bitrates at each of the resolutions 240p, 360p, 480p, 720p and 1080p, respectively. For purposes of discussion, the rate-quality model generation module 220 may, for each resolution, encode the source video into versions with M bitrates. The versions can be indexed using the variable m, where a larger m corresponds to a higher encoding bitrate. The encoding bitrate of the m'th version may be denoted as b_m. Encoding bitrate b_m described herein may refer to encoding bitrates used as part of the rate-quality model generation process, while encoding bitrate x_i described herein may refer to encoding bitrates used as part of the rate-quality measurement process described below.
[0053] For each encoded version of the segment, the rate-quality model generation module 220 may measure a corresponding quality metric. A quality metric may be a measurement resulting from a video quality evaluation. Video quality evaluation may be performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (e.g., by mathematical models) or subjectively (e.g., by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services), or in-service (to monitor and ensure a certain level of quality). Some models that are used for video quality assessment (including, but not limited to, peak signal-to-noise ratio (PSNR) or structural similarity (SSIM)) are image quality models whose output is calculated for every frame of a video sequence. This quality measure of every frame can then be recorded over time to assess the quality of an entire video sequence. For the purpose of discussion, for a given resolution, q_m may denote the measured quality of the encoded version with encoding bitrate b_m.
[0054] In one example, the PSNR model may be used to measure the quality metric, where PSNR provides for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. In another example, the SSIM model may be used to measure the quality metric. The SSIM model is an index that measures the similarity between two images, where the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as reference. In other examples, a combination of the PSNR model and the SSIM model may be used to measure the quality metric. Any type of quality metric may be applied to measure the quality in implementations of the disclosure.
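As one concrete example of the frame-level measurement, a minimal PSNR computation is sketched below (decoding of frames into pixel arrays is not shown); averaging the per-frame PSNR over the segment is one possible way to obtain q_m, though other aggregations or metrics such as SSIM could be used instead.

```python
import numpy as np

def psnr(reference: np.ndarray, encoded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference frame and its encoded version."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * np.log10((max_val ** 2) / mse)

def segment_quality(reference_frames, encoded_frames) -> float:
    """One possible q_m: the average per-frame PSNR over the whole segment."""
    return float(np.mean([psnr(r, e) for r, e in zip(reference_frames, encoded_frames)]))
```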
[0055] Using the resulting bitrate-quality measurements (b_1, q_1), ..., (b_M, q_M), the rate-quality model generation module 220 fits a rate-quality model Q_r_i(x_i), where Q_r_i represents the quality measurement at the resolution of the i'th representation (i.e., r_i) encoded at the x_i encoding bitrate. As such, each resolution/bitrate version of the segment has a corresponding quality measurement represented in the fitted rate-quality model. Or, in other words, each resolution of the segment has a quality measurement (represented in rate-quality model Q_r_i(x_i)) corresponding to a potential encoding bitrate for the representation. The rate-quality model of implementations of the disclosure may be obtained using processes different than the above-described process. For example, the rate-quality model may be obtained by analyzing the content of the video and creating a rate-quality model from the resulting analysis. In one implementation, the generated rate-quality model for the source video may be stored in rate-quality model data 293 of data store 106.
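The fitting step can be sketched as a simple least-squares fit. The logarithmic functional form and the sample measurements below are assumptions (the disclosure does not prescribe a particular model family); the logarithm is chosen here only because it yields a monotone, concave Q_r_i(x_i).

```python
import numpy as np

# Illustrative bitrate-quality measurements (b_1, q_1), ..., (b_M, q_M) for one resolution.
b = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])  # encoding bitrates b_m (kbps)
q = np.array([32.1, 35.4, 38.2, 40.6, 42.3])          # measured qualities q_m (e.g., PSNR in dB)

def fit_rate_quality(bitrates, qualities):
    """Fit q ~= a * log(x) + c and return the fitted model Q_r_i(x_i)."""
    a, c = np.polyfit(np.log(bitrates), qualities, deg=1)
    return lambda x: a * np.log(x) + c

Q_720p = fit_rate_quality(b, q)   # rate-quality model for one resolution, say 720p
predicted = Q_720p(1500.0)        # predicted quality at a candidate encoding bitrate
```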
[0056] The requesting probability determination module 230 utilizes the probability distributions (generated by probability distribution module 210) to predict the probability, for each representation i of the segment encoded at a particular bitrate, of being requested for streaming of the segment to a media player. This predicted probability may be represented as P_i. There are two cases in which the media player may request the i'th resolution.
[0057] The first case is when the bandwidth X falls between x_i and x_i+1 and the viewport size is larger than r_i (the resolution at "i"). The probability for this case is P[x_i <= X < x_i+1, V >= v_i].
[0058] The second case is when the bandwidth is higher than x_i+1 and r_i+1 is larger than r_i, but the viewport size is equal to v_i. The probability for this case is thus 1(r_i+1 > r_i) P[X >= x_i+1, V = v_i]. 1(.) is an indicator function, where the function operates to equal one if a condition is satisfied, and otherwise it equals zero. In sum, P_i = P[x_i <= X < x_i+1, V >= v_i] + 1(r_i+1 > r_i) P[X >= x_i+1, V = v_i]. P_i may be referred to herein as a requesting probability.
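The two-case formula for P_i can be evaluated directly against empirical playback samples. The sketch below uses 0-based indexing and assumes that reported viewport sizes are quantized to the viewport values v_i matched to the resolutions, so that the equality V == v_i is meaningful; both are assumptions made for illustration.

```python
import numpy as np

def requesting_probability(i, x, r, v, samples):
    """Estimate P_i for representation i (0-based index here).

    x: encoding bitrates x_1..x_N, sorted by representation index (kbps).
    r: resolutions r_1..r_N (e.g., heights in pixels).
    v: viewport sizes v_1..v_N matched to the resolutions.
    samples: array of shape (K, 2) with one (network speed, viewport size) row
             per playback trace.
    """
    X, V = samples[:, 0], samples[:, 1]
    x_next = x[i + 1] if i + 1 < len(x) else np.inf
    # Case 1: bandwidth supports representation i but not i+1, viewport at least v_i.
    p1 = np.mean((X >= x[i]) & (X < x_next) & (V >= v[i]))
    # Case 2: bandwidth supports i+1 and r_{i+1} > r_i, but the viewport caps playback at v_i.
    p2 = 0.0
    if i + 1 < len(r) and r[i + 1] > r[i]:
        p2 = np.mean((X >= x_next) & (V == v[i]))
    return float(p1 + p2)
```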
[0059] As discussed above, the joint probability distribution of X and V (i.e., P[X, V]) was previously estimated by the probability distribution module 210 and may be accessed as probability distribution data 292 in data store 106. In one implementation, if the probability distributions are estimated based on a particular granularity, such as geographic region, genre, and so on, then the predicted probability may represent the probability that the version of the segment is selected by client systems that are grouped into the particular granularity (e.g., client systems in the geographic region, client systems requesting that genre of video, etc.).
[0060] P_i (i.e., the requesting probability) is a function of the encoding bitrates {x_i: 1 <= i <= N}. Using the above equation, P_i can be estimated by the requesting probability determination module 230 given any arbitrary encoding bitrate setting {x_i: 1 <= i <= N}. In some implementations, the requesting probability determination module 230 predicts the requesting probability at any encoding bitrate vector (x_1, ..., x_N) when these representations of the video segment are requested.
[0061] The encoding bitrate selection module 240 may utilize the requesting probabilities determined by requesting probability determination module 230, as well as the rate-quality model 293 generated by rate-quality model generation module 220, to determine an optimal set of encoding bitrates for the segment of the source video. The encoding bitrate selection module 240 may apply a non-linear optimization process to determine the optimized encoding bitrates for the different representations of the segment of the source video.
[0062] In one example, the average quality of a segment, such as the segment of the source video, delivered to client systems can be estimated by: Q_avg = Q_1 * P_1 + Q_2 * P_2 + ... + Q_N * P_N, where Q_i = Q_r_i(x_i) is the quality of encoded version i. Similarly, the average video bitrate (egress traffic) from a server is R_avg = x_1 * P_1 + x_2 * P_2 + ... + x_N * P_N, where x_i is the encoding bitrate of version i. The encoding bitrate selection module 240 may implement a non-linear optimizer to minimize R_avg, such that Q_avg is greater than or equal to a quality threshold, Q (also referred to herein as the quality threshold value). The quality threshold value may be configured by an administrator of the adaptive bitrate streaming system and stored in quality data 295 in data store 106.
[0063] In the above example, the quality threshold, Q, is the average quality the server aims to achieve. P_i is the encoding bitrate-resolution probability estimated by the requesting probability determination module 230. Q_i is given by the rate-quality model generated by rate-quality model generation module 220 and stored in rate-quality model data 293. To solve this optimization, the encoding bitrate selection module 240 may implement a generic non-linear programming solver (also referred to as a non-linear optimizer) to obtain the set of x_i values, which represent the optimal encoding bitrates at representations i = 1, ..., N. The solution determined by the encoding bitrate selection module 240 provides the encoding bitrates that minimize the egress video traffic while allowing for the average video quality to be higher than quality threshold value Q.
[0064] In some implementations, the encoding bitrate selection module 240 may further solve the dual problem: maximize Q_1 * P_1 + Q_2 * P_2 + ... + Q_N * P_N, such that (x_1 * P_1 + x_2 * P_2 + ... + x_N * P_N) <= R. In the above example, R is the constraint on average video egress traffic (also referred to herein as an egress threshold value or egress constraint threshold value), which may be found in egress constraint data 296 of data store 106. The solution to this problem gives the encoding bitrates that maximize the average video quality while allowing the average video traffic egress to be no more than R.
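A minimal sketch of the minimization is shown below using a generic non-linear solver. The rate-quality models, the fixed stand-in requesting probabilities, the quality target, and the starting bitrates are all illustrative assumptions; in the described system, P_i would be re-evaluated from the joint distribution for every candidate bitrate vector.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative rate-quality models Q_r_i(x_i), one per representation.
QUALITY_MODELS = [
    lambda x: 6.0 * np.log(x) - 5.0,   # e.g., 480p
    lambda x: 5.5 * np.log(x) - 2.0,   # e.g., 720p
    lambda x: 5.0 * np.log(x) + 1.0,   # e.g., 1080p
]

def requesting_probabilities(x):
    # Stand-in: fixed P_i.  In the described system these depend on x via P[X, V].
    return np.array([0.5, 0.3, 0.2])

def average_egress(x):    # R_avg = x_1*P_1 + ... + x_N*P_N
    return float(np.dot(x, requesting_probabilities(x)))

def average_quality(x):   # Q_avg = Q_1*P_1 + ... + Q_N*P_N
    q = np.array([model(xi) for model, xi in zip(QUALITY_MODELS, x)])
    return float(np.dot(q, requesting_probabilities(x)))

Q_TARGET = 40.0           # quality threshold Q (an illustrative configured value)

result = minimize(
    average_egress,
    x0=np.array([800.0, 2000.0, 5000.0]),   # starting bitrates in kbps
    constraints=[{"type": "ineq", "fun": lambda x: average_quality(x) - Q_TARGET}],
    bounds=[(100.0, 20000.0)] * len(QUALITY_MODELS),
    method="SLSQP",
)
optimal_bitrates = result.x  # one encoding bitrate per representation
```

The dual problem of paragraph [0064] is obtained by swapping the objective and the constraint, for example by minimizing the negative of average_quality(x) subject to average_egress(x) <= R.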
[0065] Implementations of the disclosure are not limited to optimizing based on the above two examples. In further implementations, other optimizations may be solved. For example, the encoding bitrate selection module 240 may determine the optimized set of encoding bitrates that minimizes storage size while maintaining quality. In another example, the encoding bitrate selection module 240 may add temporal direction constraint(s), such as the quality difference between adjacent video segments being less than a threshold. In one example, the encoding bitrate selection module 240 may utilize other network metrics to select the optimized set of encoding bitrates, such as adding a constraint that the lowest representation should have a bitrate less than (or, in some implementations, higher than) a threshold.
[0066] The optimized encoding bitrates determined by the encoding bitrate selection module 240 may be used to minimize egress traffic from the server to client systems, without compromising delivered quality of the video. The encoding bitrate optimization component 140 may apply to a single video as a whole and/or may apply to segments of the video. For example, the encoding bitrate optimization component 140 may partition the video into segments and apply the process described above to each segment to determine the optimal encoding bitrate selections adaptive to different contents of a video.
[0067] FIG. 3 is a flow diagram illustrating a method 300 for bitrate optimization for multi-representation encoding using playback statistics according to some implementations of the disclosure. The method 300 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
[0068] For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art should understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term "article of manufacture," as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, method 300 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2.
[0069] Method 300 begins at block 310 where multiple versions of a segment of a source video are generated. The versions may include encodings of the segment at different encoding bitrates for each resolution associated with the segment. At block 320, a quality metric is measured for each version of the segment. The quality metric may be a PSNR measurement or an SSIM measurement, to name a few examples. In one implementation, the measured quality metrics are used to generate rate-quality models for each of the different resolutions.
[0070] Subsequently, at block 330, a probability model is generated to predict requesting probabilities that representations of the segment are requested. The probability model may be based on an empirical joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other videos. As discussed above, the representation of the segment may refer to one of multiple encoding bitrates selected for a single resolution (e.g., two bitrates selected for 240p: 240p_100kbps and 240p_200kbps).
[0071] At block 340, an encoding bitrate is determined for each of the representations of the segment. The encoding bitrate is determined for a representation based on the rate-quality models (determined at block 320) and the probability model (determined at block 330). In one implementation, a non-linear optimizer is used to determine the set of encoding bitrates for the representations of the segment. Lastly, at block 350, the determined encoding bitrates are assigned to the corresponding representations of the segment. This bitrate/representation assignment may be used by an encoder as an encoding configuration for the video segment.
[0072] FIG. 4 is a flow diagram illustrating a method 400 for multi-representation encoding bitrate optimization to minimize egress based on playback statistics, according to an implementation of the disclosure. The method 400 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one implementation, method 400 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2.
[0073] Method 400 begins at block 410 where a joint probability distribution for network speed and viewport size is determined. The joint probability distribution may be based on feedback statistics received from client systems. In some implementations, the joint probability distribution may be estimated for geographic regions and/or other categories by utilizing feedback statistics corresponding to those categories (e.g., feedback statistics gathered from media players located in the geographic region, etc.).
[0074] At block 420, rate-quality models for resolutions of a segment of a video are generated based on quality metrics measured for the segment.
[0075] Subsequently, at block 430, a delivered quality and egress are estimated for the encodings based on the generated rate-quality models and based on requesting probabilities that the representations are requested. The requesting probabilities may be based on the joint probability distribution. In one implementation, the delivered quality is a weighted sum of the quality of the encodings, where the weight is the probability that the encoding is requested. The egress may be the weighted sum of the bitrates, where the weight is the probability that the encoding is requested. An arbitrary bitrate for each representation may be selected to generate the estimated delivered quality and egress, resulting in different possible delivered quality and egress results.
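As a sketch, the estimation at block 430 is just the pair of weighted sums below; the optimizer at block 440 would evaluate them for many candidate bitrate vectors. The numeric values here are illustrative only.

```python
import numpy as np

qualities = np.array([34.0, 39.5, 42.0])      # Q_i for each representation at the candidate bitrates
bitrates = np.array([800.0, 2000.0, 5000.0])  # candidate x_i (kbps)
p_request = np.array([0.5, 0.3, 0.2])         # requesting probabilities P_i from the joint distribution

delivered_quality = float(np.dot(qualities, p_request))  # weighted sum of qualities
egress = float(np.dot(bitrates, p_request))               # weighted sum of bitrates
```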
[0076] Lastly, at block 440, a set of bitrates is determined that minimizes the egress (at block 430) while maintaining the delivered quality (at block 430) at or above a quality threshold value. The set of bitrates includes a bitrate that corresponds to each representation of the segment. In one implementation, a non-linear optimizer is used to determine the set of bitrates for the corresponding representations that minimizes the determined egress while maintaining the determined quality.
[0077] FIG. 5 is a flow diagram illustrating a method 500 for multi-representation encoding bitrate optimization to maximize quality based on playback statistics, according to an implementation of the disclosure. The method 500 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof. In one implementation, method 500 may be performed by encoding bitrate optimization component 140 as shown in FIG. 2.
[0078] Blocks 510 through 530 of method 500 are similar to blocks 410 through 430 of method 400. The description provided above for blocks 410 through 430 may similarly apply to blocks 510 through 530 of method 500. At block 540 of method 500, a set of bitrates is determined that maximizes the delivered quality (estimated at block 530) while keeping the determined egress (estimated at block 530) at or below an egress threshold value. The set of bitrates includes a bitrate that corresponds to each representation of the segment. In one implementation, a non-linear optimizer is used to determine the set of bitrates for the corresponding representations that maximizes the delivered quality while keeping the determined egress at or below the egress threshold value.
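The complementary formulation at block 540 can be sketched the same way: maximize the expected delivered quality while holding expected egress at or below the threshold. As before, the solver, bounds, and starting point are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np
from scipy.optimize import minimize

def maximize_quality(rate_quality_models, request_probs, egress_threshold,
                     lower=50.0, upper=10000.0):
    probs = np.asarray(request_probs, dtype=float)

    def negative_quality(bitrates):  # negate so that minimizing maximizes quality
        return -sum(p * m(r) for p, m, r in zip(probs, rate_quality_models, bitrates))

    def egress_slack(bitrates):  # must remain >= 0 (egress ceiling constraint)
        return egress_threshold - float(np.dot(probs, bitrates))

    x0 = np.full(len(probs), lower)  # start from the lowest allowed bitrates
    result = minimize(negative_quality, x0, method="SLSQP",
                      bounds=[(lower, upper)] * len(probs),
                      constraints=[{"type": "ineq", "fun": egress_slack}])
    return result.x  # one bitrate per representation
```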
[0079] FIG. 6 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one implementation, computer system 600 may be representative of a server, such as server 102, executing an encoding bitrate optimization component 140, as described with respect to FIGS. 1 and 2.
[0080] The exemplary computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 618, which communicate with each other via a bus 630. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses.
[0081] Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 602 is configured to execute processing logic 626 for performing the operations and steps discussed herein.
[0082] The computer system 600 may further include a network interface device 608. The computer system 600 also may include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), and a signal generation device 616 (e.g., a speaker).
[0083] The data storage device 618 may include a computer-readable storage medium 628 (also referred to as a machine-readable storage medium), on which is stored one or more sets of instructions 622 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 622 may also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The instructions 622 may further be transmitted or received over a network 620 via the network interface device 608.
[0084] The computer-readable storage medium 628 may also be used to store instructions to perform a method for bitrate optimization for multi-representation encoding using playback statistics, as described herein. While the computer-readable storage medium 628 is shown in an exemplary implementation to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
[0085] The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several implementations of the disclosure. It should be apparent to one skilled in the art, however, that at least some implementations of the disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the disclosure. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the disclosure.
[0086] Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrase "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or."
[0087] Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another implementation, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner. Additionally or alternatively, features described herein with reference to one aspect or implementation may be applied to any other aspect or implementation described herein.

Claims

1. A method comprising:
generating multiple versions of a segment of a source media item, the versions comprising encodings of the segment at different encoding bitrates for each resolution of the segment;
measuring a quality metric for each version of the segment;
generating rate-quality models for each resolution of the segment based on the measured quality metrics corresponding to the resolutions;
generating, by a processing device, a probability model to predict requesting probabilities that representations of the segment are requested, the probability model based on a joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other media items;
determining, by the processing device, an encoding bitrate for each of the representations of the segment based on the rate-quality models and the probability model; and
assigning determined encoding bitrates to corresponding representations of the segment.
2. The method of claim 1, wherein the segment comprises the entire source media item.
3. The method of claim 1 or 2, wherein the requesting probability for one of the representations is further based on:
the encoding bitrate of the representation and a relation of the encoding bitrate to network speed in the joint probability distribution; and
the resolution of the representation and a relation of the resolution to viewport size in the joint probability distribution.
4. The method of claim 1 , 2, or 3, wherein the client-side feedback statistics comprise playback traces transmitted from media players at client devices, the playback traces comprising network speed measurements and viewport sizes, and wherein the joint probability distribution is generated from cumulative measurements of the network speeds determined from the playback traces and from cumulative measurements of the viewport sizes determined from the playback traces.
5. The method of claim 4, wherein the playback traces are collected from a geographic region of the source media item, and wherein the joint probability distribution is specific to the geographic region of the source media item.
6. The method of claim 4 or 5, wherein the playback traces are collected for a type of the source media item, and wherein the joint probability distribution is specific to the type of the source media item.
7. The method of any preceding claim, wherein determining the encoding bitrate for each of the representations further comprises minimizing an average egress traffic for the segment such that an average quality of the segment is maintained at or above a defined quality level, wherein the average egress traffic is a function of the different encoding bitrates and the requesting probabilities, and wherein the average quality is a function of the quality metrics and the requesting probabilities.
8. The method of any one of claims 1 to 6, wherein determining the encoding bitrate for each of the representations further comprises maximizing an average quality for the segment such that an average egress traffic of the segment is maintained at or below a defined media item egress traffic level, wherein the average quality is a function of the quality metrics and the requesting probabilities, and wherein the average egress traffic is a function of the multiple bitrates and the requesting probabilities.
9. The method of any preceding claim, wherein assigning the determined encoding bitrates to the corresponding representations further comprises providing the determined encoding bitrates to at least one transcoder for encoding of each of the representations of the segment at the corresponding bitrate.
10. The method of any preceding claim, wherein the representation comprises a bitrate/resolution combination of the segment, and wherein the segment comprises one or more representations for each of the resolutions of the segment.
11. The method of any preceding claim, wherein the quality metric comprises at least one of a Peak Signal-to-Noise Ratio (PSNR) measurement or a Structural Similarity (SSIM) measurement.
12. A system comprising:
a memory; and
a processing device coupled to the memory, wherein the processing device is to:
determine a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems;
generate rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment;
estimate a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and
determine a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to minimize the egress while maintaining the delivered quality at or above a quality threshold value.
13. The system of claim 12, wherein the requesting probability that one of the representations is requested is further based on:
the bitrate of the representation and a relation of the bitrate to network speed in the joint probability distribution; and
the resolution of the representation and a relation of the resolution to viewport size in the joint probability distribution.
14. The system of claim 12 or 13, wherein the feedback statistics comprise playback traces transmitted from media players of the client systems, the playback traces comprising network speed measurements and viewport sizes, and wherein the joint probability distribution is generated from cumulative measurements of the network speeds determined from the playback traces and from cumulative measurements of the viewport sizes determined from the playback traces.
15. The system of claim 14, wherein the playback traces are collected from a geographic region of the media item, and wherein the joint probability distribution is specific to the geographic region of the media item.
16. The system of claim 14 or 15, wherein the playback traces are collected for a type of the media item, and wherein the joint probability distribution is specific to the type of the media item.
17. The system of any one of claims 12 to 16, wherein the processing device is further to provide the determined set of bitrates to at least one transcoder for encoding of each of the representations of the segment at the corresponding bitrate.
18. The system of any one of claims 12 to 17, wherein the delivered quality is based on at least one of Peak Signal-to-Noise Ratio (PSNR) measurements of the encodings or Structural Similarity (SSIM) measurements of the encodings.
19. A machine-readable storage medium storing instructions which, when executed, cause a processing device to perform operations comprising:
determining a joint probability distribution for network speed and viewport size based on feedback statistics received from client systems;
generating rate-quality models for resolutions of a segment of a media item based on quality metrics measured for the segment;
estimating, by the processing device, a delivered quality and egress for representations of the segment based on the generated rate-quality models and based on requesting probabilities that the representations are requested, wherein the requesting probabilities are based on the joint probability distribution; and
determining, by the processing device, a set of bitrates comprising a bitrate to correspond to each of the representations of the segment, the set of bitrates determined to maximize the delivered quality while keeping the egress at or below an egress threshold value.
20. The machine-readable storage medium of claim 19, wherein the requesting probability that one of the representations is requested is further based on:
the bitrate of the representation and a relation of the bitrate to network speed in the joint probability distribution; and
the resolution of the representation and a relation of the resolution to viewport size in the joint probability distribution.
21. The machine-readable storage medium of claim 19 or 20, wherein the feedback statistics comprise playback traces transmitted from media players of the client systems, the playback traces comprising network speed measurements and viewport sizes, and wherein the joint probability distribution is generated from cumulative measurements of the network speeds determined from the playback traces and from cumulative measurements of the viewport sizes determined from the playback traces.
22. The machine-readable storage medium of claim 21, wherein the playback traces are collected from a geographic region of the media item, and wherein the joint probability distribution is specific to the geographic region of the media item.
23. The machine-readable storage medium of claim 21 or 22, wherein the playback traces are collected for a type of the media item, and wherein the joint probability distribution is specific to the type of media item.
24. The machine-readable storage medium of any one of claims 19 to 23, wherein the processing device is further to provide the determined set of bitrates to at least one transcoder for encoding of each of the representations of the segment at the corresponding bitrate.
25. The machine-readable storage medium of any one of claims 19 to 24, wherein the delivered quality is based on at least one of Peak Signal-to-Noise Ratio (PSNR) measurements of the segment or Structural Similarity (SSIM) measurements of the segment.
PCT/US2017/053318 2016-10-28 2017-09-25 Bitrate optimization for multi-representation encoding using playback statistics WO2018080688A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780072261.6A CN110268717B (en) 2016-10-28 2017-09-25 Bit rate optimization for encoding multiple representations using playback statistics
EP17784732.4A EP3533232B1 (en) 2016-10-28 2017-09-25 Bitrate optimization for multi-representation encoding using playback statistics

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/337,806 US10454987B2 (en) 2016-10-28 2016-10-28 Bitrate optimization for multi-representation encoding using playback statistics
US15/337,806 2016-10-28

Publications (1)

Publication Number Publication Date
WO2018080688A1 true WO2018080688A1 (en) 2018-05-03

Family

ID=60117754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/053318 WO2018080688A1 (en) 2016-10-28 2017-09-25 Bitrate optimization for multi-representation encoding using playback statistics

Country Status (4)

Country Link
US (1) US10454987B2 (en)
EP (1) EP3533232B1 (en)
CN (1) CN110268717B (en)
WO (1) WO2018080688A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10148989B2 (en) * 2016-06-15 2018-12-04 Divx, Llc Systems and methods for encoding video content
US10834406B2 (en) 2016-12-12 2020-11-10 Netflix, Inc. Device-consistent techniques for predicting absolute perceptual video quality
US11019349B2 (en) 2017-01-20 2021-05-25 Snap Inc. Content-based client side video transcoding
US10306250B2 (en) * 2017-06-16 2019-05-28 Oath Inc. Video encoding with content adaptive resource allocation
US11146608B2 (en) 2017-07-20 2021-10-12 Disney Enterprises, Inc. Frame-accurate video seeking via web browsers
TW201931866A (en) * 2017-12-29 2019-08-01 圓剛科技股份有限公司 Video/audio stream control device and control method thereof
US10778938B2 (en) * 2018-12-20 2020-09-15 Hulu, LLC Video chunk combination optimization
US10887660B2 (en) * 2018-12-27 2021-01-05 Comcast Cable Communications, Llc Collaborative media quality determination
JP6980162B2 (en) * 2019-07-26 2021-12-15 三菱電機株式会社 Sub-channel coding device, sub-channel decoding device, sub-channel coding method, sub-channel decoding method and sub-channel multiplex optical communication system
US11343567B1 (en) * 2019-08-07 2022-05-24 Meta Platforms, Inc. Systems and methods for providing a quality metric for media content
JP7472286B2 (en) 2019-12-11 2024-04-22 グーグル エルエルシー Method, system, and medium for selecting a format for streaming a media content item - Patents.com
US11425184B2 (en) * 2020-04-21 2022-08-23 Google Llc Initial bitrate for real time communication
WO2021236059A1 (en) * 2020-05-19 2021-11-25 Google Llc Dynamic parameter selection for quality-normalized video transcoding
WO2022013326A1 (en) * 2020-07-16 2022-01-20 Nokia Technologies Oy Viewport dependent delivery methods for omnidirectional conversational video
CN113382241A (en) * 2021-06-08 2021-09-10 北京奇艺世纪科技有限公司 Video encoding method, video encoding device, electronic equipment and storage medium
CN113747245A (en) * 2021-09-06 2021-12-03 北京字跳网络技术有限公司 Multimedia resource uploading method and device, electronic equipment and readable storage medium
US20230089154A1 (en) * 2021-09-22 2023-03-23 Netflix, Inc. Virtual and index assembly for cloud-based video processing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130007263A1 (en) * 2011-06-29 2013-01-03 Divx, Llc Systems and Methods for Estimating Available Bandwidth and Performing Initial Stream Selection When Streaming Content
US20130223509A1 (en) * 2012-02-28 2013-08-29 Azuki Systems, Inc. Content network optimization utilizing source media characteristics
US20150032854A1 (en) * 2013-07-24 2015-01-29 Futurewei Technologies Inc. System and method for network-assisted adaptive streaming
US20150089557A1 (en) * 2013-09-25 2015-03-26 Verizon Patent And Licensing Inc. Variant playlist optimization
US20160073106A1 (en) * 2014-09-08 2016-03-10 Apple Inc. Techniques for adaptive video streaming
CA2975904A1 (en) * 2015-02-07 2016-08-11 Zhou Wang Method and system for smart adaptive video streaming driven by perceptual quality-of-experience estimations

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8875208B1 (en) * 2007-11-21 2014-10-28 Skype High quality multimedia transmission from a mobile device for live and on-demand viewing
US20110249954A1 (en) * 2010-04-09 2011-10-13 Microsoft Corporation Capturing presentations in online conferences
US20110262102A1 (en) * 2010-04-13 2011-10-27 Lahr Nils B System and methods for optimizing buffering heuristics in media
US8396983B1 (en) * 2012-03-13 2013-03-12 Google Inc. Predictive adaptive media streaming
EP2680527A1 (en) * 2012-06-28 2014-01-01 Alcatel-Lucent Adaptive streaming aware node, encoder and client enabling smooth quality transition
US20150341411A1 (en) * 2013-01-10 2015-11-26 Telefonaktiebolaget L M Ericsson (Publ) Apparatus and Method for Controlling Adaptive Streaming of Media
US9106934B2 (en) * 2013-01-29 2015-08-11 Espial Group Inc. Distribution of adaptive bit rate live streaming video via hyper-text transfer protocol
US20140281000A1 (en) * 2013-03-14 2014-09-18 Cisco Technology, Inc. Scheduler based network virtual player for adaptive bit rate video playback
US9904936B2 (en) * 2013-11-19 2018-02-27 Adobe Systems Incorporated Method and apparatus for identifying elements of a webpage in different viewports of sizes
US11451798B2 (en) 2015-01-05 2022-09-20 Arris Enterprises Llc Method of encoding video with film grain
CN109155861B (en) * 2016-05-24 2021-05-25 诺基亚技术有限公司 Method and apparatus for encoding media content and computer-readable storage medium

Also Published As

Publication number Publication date
EP3533232A1 (en) 2019-09-04
CN110268717A (en) 2019-09-20
US10454987B2 (en) 2019-10-22
CN110268717B (en) 2021-08-27
EP3533232B1 (en) 2021-06-09
US20180124146A1 (en) 2018-05-03

Similar Documents

Publication Publication Date Title
EP3533232B1 (en) Bitrate optimization for multi-representation encoding using playback statistics
EP3542537B1 (en) Leveraging aggregated network statistics for enhancing quality and user experience for live video streaming from mobile devices
CN103999471B (en) The rate-distortion of the Video coding guided by video presentation length-complexity optimization
US8806519B2 (en) Method to evaluate the geographic popularity of geographically located user-generated content items
Toni et al. Optimal set of video representations in adaptive streaming
Laghari et al. Impact of video file format on quality of experience (QoE) of multimedia content
Darwich et al. Cost efficient repository management for cloud-based on-demand video streaming
Zabrovskiy et al. ComplexCTTP: complexity class based transcoding time prediction for video sequences using artificial neural network
Erfanian et al. LwTE: Light-weight transcoding at the edge
US10271103B2 (en) Relevance table aggregation in a database system for providing video recommendations
Wang et al. Data analysis on video streaming QoE over mobile networks
Taha et al. An automated model for the assessment of QoE of adaptive video streaming over wireless networks
EP4027616A1 (en) Global constraint-based content delivery network (cdn) selection in a video streaming system
Amour et al. Q2ABR: QoE‐aware adaptive video bit rate solution
Bulkan et al. Predicting quality of experience for online video service provisioning
Lee et al. Multimedia contents adaptation by modality conversion with user preference in wireless network
Ben Letaifa WBQoEMS: Web browsing QoE monitoring system based on prediction algorithms
EP3631752B1 (en) Mutual noise estimation for videos
Tan et al. An engagement model based on user interest and QoS in video streaming systems
Carmona et al. Video loss prediction model in wireless networks
Kim et al. Efficient video quality assessment for on-demand video transcoding using intensity variation analysis
Zeng et al. Towards secure and network state aware bitrate adaptation at IoT edge
Wang et al. Implementation and Demonstration of QoE measurement platform
Darwich et al. Video quality adaptation using CNN and RNN models for cost-effective and scalable video streaming Services
Rodrigues et al. Audiovisual quality of live music streaming over mobile networks using MPEG-DASH

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17784732

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017784732

Country of ref document: EP

Effective date: 20190528