WO2014066975A1 - Methods and systems for controlling quality of a media session - Google Patents

Methods and systems for controlling quality of a media session

Info

Publication number
WO2014066975A1
Authority
WO
WIPO (PCT)
Prior art keywords
qoe
control point
media session
target
quality
Application number
PCT/CA2013/000283
Other languages
English (en)
Inventor
Anthony P. Joch
Roman C. Kordasiewicz
Michael Gallant
Kevin Goertz
Original Assignee
Avvasi Inc.
Application filed by Avvasi Inc. filed Critical Avvasi Inc.
Publication of WO2014066975A1 publication Critical patent/WO2014066975A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/20 Traffic policing
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2408 Traffic characterised by specific attributes for supporting different services, e.g. a differentiated services [DiffServ] type of service
    • H04L 47/2416 Real-time traffic
    • H04L 47/38 Flow control; Congestion control by adapting coding or compression rate
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 65/75 Media network packet handling
    • H04L 65/762 Media network packet handling at the source
    • H04L 65/80 Responding to QoS
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; control signalling between clients, server and network components
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; network processes for video distribution between server and clients or between remote clients
    • H04N 21/637 Control signals issued by the client directed to the server or network components
    • H04N 21/6377 Control signals issued by the client directed to the server
    • H04N 21/6379 Control signals issued by the client directed to the server encoder, e.g. for requesting a lower encoding rate

Definitions

  • the described embodiments relate to controlling quality of experience of a media session.
  • the described embodiments relate to controlling quality of a media session to correspond to a target quality of experience.
  • Multimedia content on the Internet tends to be diverse and unmanaged.
  • Internet multimedia content is diverse across many variables, such as, formats, quality levels, resolution, bit rates etc. and is consumed on a wide range of devices.
  • the diversity can be better managed by organizing and delivering multimedia content according to a common quality metric that normalizes across such variables.
  • the described embodiments may use one or more models to predict one or more perceptual quality metrics for, and which reflect a viewer's satisfaction or quality of experience (QoE) with, a media session.
  • the models may operate over "prediction horizons".
  • the models may be based on content complexity (motion, texture), quantization level, frame rate, resolution, and target device.
  • the models may also be based on network conditions such as expected throughput, expected encoding bit rate, and the state of the encoder output and client playback buffers.
  • a quantization level, frame rate, resolution for a given content complexity and target device can largely determine the quality level which generally correlates to a QoE.
  • a particular set of values for each of these parameters may define an operating point or control point for a media session.
  • a control point may be selected from a set of possible control points via filtering such that only those that can achieve the target quality level are considered.
  • the filtered control points are each considered and a best control point is selected based on criteria that include: minimizing the bit rate, minimizing transcoding resource requirements, satisfying additional policy constraints (e.g., subscriber X may be prohibited from receiving an HD resolution video), etc.
  • Calculation of the predicted quality level may be influenced by the viewing client device, content characteristics, subscriber preferences, etc. For example, a larger screen at the client device typically requires a higher resolution for equivalent quality level as compared to a smaller screen. Likewise, high action (e.g., sports) content typically requires a high frame rate to achieve adequate quality level. Subscribers may have preferences for finer quantization levels, e.g. less blocking, at the cost of lower frame rate and/or resolution (or vice-versa).
  • Once a control point is selected for a media session, it may be periodically reevaluated. To minimize frequent changes in control point, the selection of a new or updated control point may be made with an eye on a "prediction horizon" (e.g., a predetermined time window for which the control point is expected to be suitable).
  • a "prediction horizon" e.g., a predetermined time window for which the control point is expected to be suitable.
  • Initial immutable or fixed parameters for a media session may be selected by anticipating the range of bit rates/quality-levels that are likely to be encountered in a media session lifetime and making static (session start time) decisions based on this knowledge. Such parameters may be selected to provide most flexibility (optimize quality over likely range of conditions) over the life of a media session.
  • consistent perceptual quality can be provided by re-using quantization information from the input bitstream. This generally produces variable bit rate (VBR) streams, since more complex scenes require a higher bit rate than less complex scenes in order to achieve the same perceptual quality. More complex scenes can also be encoded with higher levels of quantization than less complex scenes while achieving similar levels of perceptual quality. Reuse of quantization information from the input bitstream produces a more consistent perceptual quality because the input bitstreams generally use VBR encoding and have been produced using multi-pass encoding, which leads to optimal bit allocation from scene-to-scene.
  • the quantization level pattern of the input bitstream from scene to scene can be leveraged during transcoding, in order to benefit from the optimal bit rate allocation determined by the original multi-pass encoding. For example, if an input bitstream has a quantization level pattern of 30-20-40, the transcoded quantization level pattern may follow a similar pattern of 15-10-20.
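  • A minimal sketch of this idea (not the patent's algorithm; the scaling approach is illustrative): scale the per-scene quantization pattern of the input bitstream so the transcoded stream preserves the relative allocation chosen by the original multi-pass encoder, as in the 30-20-40 to 15-10-20 example above.

        # Illustrative sketch: preserve the input's scene-to-scene QP pattern
        # while hitting a new target average quantization level.
        def scale_qp_pattern(input_qps, target_avg_qp):
            input_avg = sum(input_qps) / len(input_qps)
            ratio = target_avg_qp / input_avg
            return [round(qp * ratio) for qp in input_qps]

        print(scale_qp_pattern([30, 20, 40], 15))  # -> [15, 10, 20]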
  • a method of controlling transcoding of a media session by a transcoder on a network comprising: selecting a target quality of experience (QoE) for the media session; for each of a plurality of control points, computing a predicted QoE associated with the control point, wherein each control point has a plurality of transcoding parameters associated therewith; selecting an initial control point of the plurality of control points, wherein the predicted QoE for the initial control point substantially corresponds with the target QoE; and signaling the transcoder to use the initial control point for the media session.
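  • A hedged sketch of this claimed flow (names such as predict_qoe and set_control_point are assumptions, not from the source): select a target QoE, predict a QoE for each candidate control point, pick a point whose prediction substantially corresponds with the target, and signal the transcoder.

        def select_initial_control_point(control_points, target_qoe, predict_qoe):
            # predict_qoe(cp) stands in for the QoE model; "substantially
            # corresponds" is approximated here as meeting or exceeding the target.
            scored = [(cp, predict_qoe(cp)) for cp in control_points]
            feasible = [(cp, q) for cp, q in scored if q >= target_qoe]
            if not feasible:
                return None
            # Prefer the smallest overshoot; a simple stand-in for the
            # optimization function described later in this document.
            return min(feasible, key=lambda item: item[1] - target_qoe)[0]

        def control_media_session(transcoder, control_points, target_qoe, predict_qoe):
            initial = select_initial_control_point(control_points, target_qoe, predict_qoe)
            if initial is not None:
                transcoder.set_control_point(initial)  # hypothetical signaling call
            return initial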
  • the initial control point may be selected based on an optimization function.
  • the method further comprises determining that a real-time QoE for the media session does not substantially correspond with the target QoE; for each of the plurality of control points, re-computing the predicted QoE, wherein the predicted QoE is based on a real-time QoE for the media session; selecting an updated control point from the plurality of control points, wherein the predicted QoE for the updated control point substantially corresponds with the target QoE; and signaling the transcoder to use the updated control point for the media session.
  • the method further comprises determining a client buffer condition, wherein the updated control point is selected based on the client buffer condition.
  • the updated control point may be selected based on an optimization function.
  • a policy rule may be an input to the optimization function.
  • At least one device capability of a device receiving the media session may be an input to the optimization function.
  • a bit rate of the media session may be an input to the optimization function.
  • Transcoding resource requirements may be an input to the optimization function.
  • the plurality of transcoding parameters may comprise at least one parameter selected from the group consisting of: quantization level, resolution, and frame rate.
  • the predicted QoE may be computed for a predetermined forward window, and wherein the selected control point is selected to substantially correspond with the target QoE over the length of the predetermined forward window.
  • the target QoE may comprise a QoE range.
  • QoE may be computed based on at least one of a presentation quality score and a delivery quality score.
  • an apparatus for controlling transcoding of a media session by a transcoder on a network comprising: a memory; a network interface; a processor, the processor configured to carry out the methods described herein, comprising: select a target quality of experience (QoE) for the media session; for each of a plurality of control points, compute a predicted QoE associated with the control point, wherein each control point has a plurality of transcoding parameters associated therewith; select an initial control point of the plurality of control points, wherein the predicted QoE for the initial control point substantially corresponds with the target QoE; and signal the transcoder to use the initial control point for the media session.
  • the processor is further configured to: determine that a real-time QoE for the media session does not substantially correspond with the target QoE; for each of the plurality of control points, re-compute the predicted QoE, wherein the predicted QoE is based on a real-time QoE for the media session; select an updated control point from the plurality of control points, wherein the predicted QoE for the updated control point substantially corresponds with the target QoE; and signal the transcoder to use the updated control point for the media session.
  • the processor is further configured to determine a client buffer condition, wherein the updated control point is selected based on the client buffer condition.
  • the processor is further configured to select the updated control point based on an optimization function.
  • the processor is configured to compute the predicted QoE for a predetermined forward window, and the processor is configured to select the updated control point to substantially correspond with the target QoE over the length of the predetermined forward window.
  • a non-transitory computer-readable medium storing computer-executable instructions, the instructions for causing a processor to perform a method of controlling transcoding of a media session by a transcoder on a network as described herein, the method comprising, for example: selecting a target quality of experience (QoE) for the media session; for each of a plurality of control points, computing a predicted QoE associated with the control point, wherein each control point has a plurality of transcoding parameters associated therewith; selecting an initial control point of the plurality of control points, wherein the predicted QoE for the initial control point substantially corresponds with the target QoE; and signaling the transcoder to use the initial control point for the media session.
  • FIG. 1 is a block diagram of a network with a media session control system in accordance with an example embodiment
  • FIG. 2A is a block diagram of a media session control system in accordance with an example embodiment
  • FIG. 2B is an example process flow that may be followed by an evaluator of a QoE controller
  • FIG. 3 is an example process flow that may be followed by a QoE controller
  • FIG. 4 is another example process flow that may be followed by a QoE controller.
  • the embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. These embodiments may be implemented in computer programs executing on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface.
  • the various programmable computers may be a server, network appliance, set-top box, embedded device, computer expansion module, personal computer, laptop, mobile telephone, smartphone or any other computing device capable of being configured to carry out the methods described herein.
  • Each program may be implemented in a high level procedural or object oriented programming or scripting language, or both, to communicate with a computer system. However, alternatively the programs may be implemented in assembly or machine language, if desired. The language may be a compiled or interpreted language. Each such computer program may be stored on a non-transitory computer readable storage medium (e.g. read-only memory, magnetic disk, optical disc). The storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • a module includes a functional block that is implemented in hardware or software, or both, that performs one or more functions such as the processing of an input signal to produce an output signal.
  • a module may contain submodules that themselves are modules.
  • the described methods and systems generally allow the quality of a media session to be adjusted or controlled in order to correspond to a target quality.
  • the quality of the media session can be controlled by encoding the media session.
  • Encoding is the operation of converting a media signal, such as, an audio and/or a video signal from a source format, typically an uncompressed format, to a compressed format.
  • a format is defined by characteristics such as bit rate, sampling rate (frame rate and spatial resolution), coding syntax, etc.
  • the quality of the media session can be controlled by transcoding the media session.
  • Transcoding is the operation of converting a media signal, such as, an audio signal and/or a video signal, from one format into another. Transcoding may be applied, for example, in order to change the encoding format (e.g., from H.264 to VP8), or for bit rate reduction to adapt media content to an allocated bandwidth.
  • the quality of a media session that is delivered using an adaptive streaming protocol can be controlled using methods applicable specifically to such protocols.
  • Methods of adaptive streaming control include request-response modification, manifest editing, conventional shaping or policing, and may include transcoding.
  • request-response modification may cause client segment requests for high definition content to be replaced with similar requests for standard definition content.
  • Manifest editing may include modifying the media stream manifest files that are sent in response to a client request to modify or reduce the available operating points in order to control the operating points that are available to the client. Accordingly, the client may make further requests based on the altered manifest.
  • Conventional shaping or policing may be applied to adaptive streaming to limit the media session bandwidth, thereby forcing the client to remain at or below a certain operating point.
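  • As an illustration of manifest editing (assuming an HLS-style master playlist; this is not code from the source), variants above a bandwidth cap can be dropped so that the client can only request operating points at or below the cap:

        def edit_hls_manifest(manifest_text, max_bandwidth_bps):
            output, skip_next_uri = [], False
            for line in manifest_text.splitlines():
                if line.startswith("#EXT-X-STREAM-INF"):
                    bandwidth = 0
                    for attr in line.split(":", 1)[1].split(","):
                        if attr.strip().startswith("BANDWIDTH="):
                            bandwidth = int(attr.strip().split("=", 1)[1])
                    if bandwidth > max_bandwidth_bps:
                        skip_next_uri = True   # drop this variant's tag line
                        continue
                elif skip_next_uri and not line.startswith("#"):
                    skip_next_uri = False      # drop the variant's URI line too
                    continue
                output.append(line)
            return "\n".join(output)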
  • Media content is typically encoded or transcoded by selecting a target bit rate.
  • quality is assessed based on factors such as format, encoding options, resolutions and bit rates.
  • the large variety of options, coupled with the wide range of devices on which content may be viewed, has conventionally resulted in widely varying quality across sessions and across viewers.
  • Adaptation based purely on bit rate reduction does little to improve this situation. It is generally beneficial if the adaptation is based on one or more targets for one or more quality metrics that can normalize across these options.
  • the described methods and systems may control quality of the media session by selecting a target quality level in a more comprehensive quality metric, for example based on quality of experience.
  • the quality metric may be in the form of a numerical score.
  • the quality metric may be in some other form, such as, for example, a letter score, a descriptive label (e.g. 'high', 'medium', 'low'), etc.
  • the quality metric may be expressed as a range of scores or an absolute score.
  • a Quality of Experience (QoE) measurement on a Mean Opinion Score (MOS) scale is one example of a perceptual quality metric, which reflects a viewer's opinion of the quality of the media session.
  • a QoE score or measurement can be considered a subjective way of describing how well a user is satisfied with a media presentation.
  • a QoE measurement may reflect a user's actual or anticipated opinion of the viewing quality of the media session. Such a calculation may be based on events that impact viewing experience, such as network induced re-buffering events wherein the playback stalls.
  • a model of human dissatisfaction may be used to provide QoE measurement.
  • a user model may map a set of video buffer state events to a level of subjective satisfaction for a media session.
  • QoE may reflect an objective score where an objective session model may map a set of hypothetical video buffer state events to an objective score for a media session.
  • a QoE score may in some cases consist of two separate scores, for example a Presentation Quality Score (PQS) and a Delivery Quality Score (DQS).
  • PQS generally measures the quality level of a media session, taking into account the impact of media encoding parameters and optionally device-specific parameters on the user experience, while ignoring the impact of delivery.
  • relevant audio, video and device key performance indicators (KPIs) may be considered from each media session. These parameters may be incorporated into a no-reference bitstream model of satisfaction with the quality level of the media session.
  • KPIs that can be used to compute the PQS may include codec type, resolution, bits per pixel, frame rate, device type, display size, and dots per inch. Additional KPIs may include coding parameters parsed from the bitstream, such as macroblock mode, macroblock quantization parameter, coded macroblock size in bits, intra prediction mode, motion compensation mode, motion vector magnitude, transform coefficient size, transform coefficient distribution and coded frame size etc.
  • the PQS may also be based, at least in part, on content complexity and content type (e.g., movies, news, sports, music videos etc.). The PQS can be computed for the entirety of a media session, or computed periodically throughout a media session.
  • DQS measures the success of the network in streaming delivery, reflecting the impact of network delivery on QoE while ignoring the source quality.
  • DQS calculation may be based on a set of factors, such as, the number, frequency and duration of re-buffering events, the delay before playback begins at the start of the session or following a seek operation, buffer fullness measures (such as average, minimum and maximum values over various intervals), and durations of video downloaded/streamed and played/watched.
  • additional factors may include a number of stream switch events, a location in the media stream, duration of the stream switch event, and a change in operating point for the stream switch event.
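  • As a toy illustration only (the actual DQS model is not given here), delivery impairments such as startup delay and re-buffering events can be mapped onto a 1-5 MOS-style score; the penalty values below are assumptions chosen purely for illustration.

        def toy_dqs(startup_delay_s, rebuffer_durations_s):
            score = 5.0
            score -= 0.1 * max(0.0, startup_delay_s - 2.0)   # assumed 2 s grace period
            for duration in rebuffer_durations_s:
                score -= 0.5 + 0.1 * duration                # fixed + per-second penalty
            return max(1.0, min(5.0, score))

        print(toy_dqs(startup_delay_s=4.0, rebuffer_durations_s=[3.0, 1.5]))  # -> 3.35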
  • the described methods and systems may enable service providers to provide their subscribers with assurance that content accessed by the subscribers conforms to one or more agreed upon quality levels. This may enable creation of pricing models based on the quality of the subscriber experiences.
  • the described methods and systems may also enable service providers to provide multimedia content providers and aggregators with assurance that the content is delivered at one or more agreed upon quality levels. This may also enable creation of pricing models based on the assured level of content quality.
  • the described methods and systems may further enable service providers to deliver the same or similar multimedia quality across one or more disparate sessions in a given network location.
  • Referring now to FIG. 1, there is illustrated a simplified block diagram of a network system with an example media session control system.
  • System 1 generally includes a data network 10, such as the Internet, which connects a media server 30, a personal computer 25 and a media session control system 100.
  • Media session control system 100 is further connected to one or more access networks 15 for client devices 20, which may be mobile computing devices such as smartphones, for example.
  • access networks 15 may include radio access networks (RANs) and backhaul networks, in the case of a wireless data network.
  • Although the exemplary embodiments are shown primarily in the context of mobile data networks, it will be appreciated that the described systems and methods are also applicable to other network configurations.
  • the described systems and methods could be applied to data networks using satellite, digital subscriber line (DSL) or data over cable service interface specification (DOCSIS) technology in lieu of, or in addition to, a mobile data network.
  • Media session control system 100 is generally configured to forward data packets associated with the data sessions of each client device 20 to and from network 10, preferably with minimal latency. In some cases, as described herein further, media session control system 100 may modify the data sessions, particularly in the case of media sessions (e.g., streaming video or audio).
  • Client devices 20 generally communicate with one or more servers 30 accessible via network 10. It will be appreciated that servers 30 may not be directly connected to network 10, but may be connected via intermediate networks or service providers. In some cases, servers 30 may be edge nodes of a content delivery network (CDN).
  • It will be appreciated that network system 1 shows only a subset of a larger network, and that data networks will generally have a plurality of networks, such as network 10 and access networks 15.
  • Control system 100 generally has a transcoder 105, a QoE controller 110, a policy engine 115, a network resource model module 120, and a client buffer model module 125.
  • Control system 100 is generally in communication with a client device which is receiving data into its client buffer 135, via a network 130.
  • Policy Engine 115 may maintain a set of policies, and other configuration settings in order to perform active control and management of media sessions.
  • the policy engine 115 is configurable by the network operator.
  • the configuration of the policy engine 115 may be dynamically changed by the network operator.
  • policy engine 115 may be implemented as part of a Policy Charging and Rules Function (PCRF) server.
  • Policy engine 115 provides policy rules and constraints 182 to the QoE controller 110 to be used for a media session under management by system 100.
  • Policy rules and constraints 182 may include one or more of a quality metric and an associated target quality level, a policy action, scope or constraints associated with the policy action, preferences for the media session characteristics, etc. Policy rules and constraints 182 can be based on the subscriber or client device, or may be based on other factors.
  • the target quality level may be an absolute quality level, such as, a numerical value on a MOS scale.
  • the target quality level may alternatively be a QoE range, i.e., a range of values with a minimum level and a maximum level.
  • Policy engine 115 may specify a wide variety of quality metrics and associated target quality levels.
  • the quality metric may be based on an acceptable encoding and display quality, or a presentation QoE score (PQS).
  • the quality metric may be based on an acceptable network transmission and stalling impact on quality, or a delivery QoE score (DQS).
  • the quality metric may be based on the combination of the presentation and the delivery QoE scores, or a combined QoE score (CQS).
  • Policy engine 115 may determine policy actions for media session, which may include a plurality of actions.
  • a policy action may include a transcoding action, an adaptive streaming action which may also include a transcoding action, or some combination thereof.
  • Policy engine 115 may specify the scope or constraints associated with policy actions.
  • policy engine 115 may specify constraints associated with a transcoding action. Such constraints may include specifying the scope of one or more individual or aggregate media session characteristics. Examples of media session characteristics may include bit rate, resolution, frame rate, etc. Policy engine 115 may specify one or more of a target value, a minimum value and a maximum value for the media session characteristics.
  • Policy engine 115 may also specify the preference for the media session characteristic as an absolute value, a range of values and/or a value with qualifiers.
  • policy engine 115 may specify a preference with qualifiers for the media session characteristic by providing that the minimum frame rate value of 10 is a 'strong' preference.
  • policy engine 115 may specify that the minimum frame rate value is a 'medium' or a 'weak' preference.
  • Network Resource Model (NRM) module 120 may implement a hierarchical subscriber and network model and a load detection system that receives location and bandwidth information from the rest of the system (e.g., networks 10 and 15 of system 1) or from external network nodes, such as radio access network (RAN) probes, to generate and update a real-time model of the state of a mobile data network, in particular congested domains, e.g. sectors.
  • NRM module 120 may update and maintain a NRM based on data from at least one network domain, where the data may be collected by a network event collector (not shown) using one or more node feeds or reference points.
  • the NRM module may implement a location-level congestion detection algorithm using measurement data, including location, RTT, throughput, packet loss rates, window sizes, and the like.
  • NRM module 120 may receive updates to map subscribers and associated traffic and media sessions to locations.
  • NRM module 120 provides network statistics 184 to the QoE controller 110.
  • Network statistics 184 may include one or more of the following statistics, such as, for example, current bit rate/throughput for session, current sessions for location, predicted bit rate/throughput for session, and predicted sessions for location, etc.
  • Client buffer model module 125 may use network feedback and video packet timing information specific to a particular ongoing media session to estimate the amount of data in a client device's playback buffer at any point in time in the media session.
  • Client buffer model module 125 generally uses the estimates regarding amount of data in a client device's playback buffer, such as client buffer 135, to model location, duration and frequency of stall events.
  • the client buffer model module 125 may directly provide raw data to the QoE controller 110 so that it may select a setting that minimizes the likelihood of stalling, with the goal of achieving better streaming media performance and improved QoE metric, where the QoE metric can include presentation quality, delivery quality or other metrics.
  • Client buffer model module 125 generally provides client buffer statistics 186 to the QoE controller 110.
  • Client buffer statistics 186 may include one or more of statistics such as current buffer fullness, buffer fill rate, a playback indicator/time stamp at the client buffer, and an input indicator/timestamp at the client buffer, etc.
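  • A minimal sketch of such a model (field names are assumptions): buffer fullness can be estimated as the difference between the latest media time delivered into the client buffer and the current playback position, and a time-to-stall follows from the fill and drain rates.

        def estimate_buffer_fullness(input_timestamp_s, playback_timestamp_s):
            # Media seconds currently sitting in the client buffer.
            return max(0.0, input_timestamp_s - playback_timestamp_s)

        def estimate_time_to_stall(buffer_fullness_s, fill_rate, drain_rate=1.0):
            # fill_rate: media seconds buffered per wall-clock second
            # (network throughput / media bit rate); drain_rate: playback speed.
            net_rate = fill_rate - drain_rate
            if net_rate >= 0:
                return None  # no stall expected under these assumptions
            return buffer_fullness_s / -net_rate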
  • Transcoder 105 generally includes a decoder 150 and an encoder 155. Decoder 150 has an associated decoder input buffer 160 and encoder 155 has an associated encoder output buffer 165, each of which may contain bitstream data.
  • Decoder 150 may process the input video stream at an application and/or a container layer level and, as such, may include a demuxer. Decoder 150 provides input stream statistics 188 to the QoE controller 110. Input stream statistics 188 may include one or more statistics or information about the input stream.
  • the input stream may be a video stream, an audio stream, or a combination of the video and the audio streams.
  • Input stream statistics 188 provided to the QoE controller 110 may include one or more of streaming protocol, container type, device type, codec, quantization parameter values, frame rate, resolution, scene complexity estimate, picture complexity estimate, Group of Pictures (GOP) structure, picture type, bits per GOP, bits per picture, etc.
  • Encoder 155 may be a conventional video or audio encoder and, in some cases, may include a muxer or remuxer. Encoder 155 typically receives decoded pictures 140 and encodes them according to one or more encoding parameters. Encoder 155 typically handles picture type selection, bit allocation within the picture to achieve the overall quantization level selected by control point evaluation, etc. Encoder 155 may include a look-ahead buffer to enable such decision making. Encoder 155 may also include a scaler/resizer for resolution and frame rate reduction. Encoder 155 may make decisions based on encoder settings 190 received from the QoE controller 110.
  • Encoder 155 provides output stream statistics 192 to the QoE controller 110.
  • Output stream statistics 192 may include one or more of the following statistics or information about the transcoded/output stream, such as, for example, container type, streaming protocol, codec, quantization parameter values, scene complexity estimate, picture complexity estimate, GOP structure, picture type, frame rate, resolution, bits/GOP, bits/picture, etc.
  • QoE Controller 110 is generally configured to select one control point from a set of control points during a control point evaluation process.
  • a control point is a set of attributes that define a particular operating point for a media session, which may be used to guide an encoder, such as encoder 155, and/or a transcoder, such as transcoder 105.
  • the set of attributes that make up a control point may be transcoding parameters, such as, for example, resolution, frame rate, quantization level etc.
  • In some cases, the QoE controller 110 generates various control points. In some other cases, QoE controller 110 receives various control points via network 130.
  • the QoE controller 110 may receive the control points, or constraints for control points, from the policy engine 115 or some external processor.
  • the media streams that represent a particular control point may already exist on a server (e.g. for adaptive streams) and these control points may be considered as part of the control point evaluation process. Selecting one of the control points for which a corresponding media stream already exists may eliminate the need for transcoding to achieve the control point. In such cases, other mechanisms such as shaping, policing, and request modification may be applied to deliver the media session at the selected control point.
  • Control point evaluation may occur at media session initiation as well as dynamically throughout the course of the session. In some cases, some of the parameters associated with a control point may be immutable once selected (e.g., resolution in some formats).
  • QoE controller 110 provides various encoder settings 190 to the transcoder 105 (or encoder or adaptive stream controller).
  • Encoder settings 190 may include resolution, frame rate, quantization level (i.e., what amount of quantization to apply to the stream, scene, or picture), bits/frame, etc.
  • QoE controller 110 may include various modules to facilitate the control point evaluation process. Such modules generally include an evaluator 170, an estimator 175 and a predictor 180.
  • Predictor 180 - which may also be referred to as stall predictor 180 - is generally configured to predict a "stalling" bit rate for a media session over a certain "prediction horizon". Predictor 180 may predict the "stall" bit rate by using some or all of the expected bit rate for a given control point, the amount of transcoded data currently buffered within the system (waiting to be transmitted), the amount of data currently buffered on the client (from the Client Buffer Model module 125), and the current and predicted network throughput.
  • the "stall” bit rate is the output media bit rate at which a client buffer model expects that playback on the client will stall given its current state and a predicted network throughput, over a given "prediction horizon".
  • the "stall" bit rate may be used by the evaluator 170 as described herein.
  • Estimator 175 - which may also be referred to as visual quality estimator 175 - is generally configured to estimate encoding results for a given control point and the associated visual or coding and device impact on QoE for each control point. This may be achieved using a function or model which estimates a QoE metric, e.g. PQS, as well as the associated bit rate.
  • Estimator 175 may also be generally configured to estimate transmission results for a given control point and the associated stalling or delivery impact on QoE for each control point. This may be achieved using a function or model which estimates the impact of delivery impairments on a QoE metric (e.g. DQS). Estimator 175 may also model, for each control point, a combined or overall score, which considers all of visual, device and delivery impact on QoE.
  • Evaluator 170 is generally configured to evaluate a set of control points based on their ability to satisfy policy rules and constraints, such as policy rules and constraints 182 and achieve a target QoE for the media session. Control points may be reevaluated periodically throughout the session.
  • a change in control point is typically implemented by a change in the quantization level, which is a key factor in determining quality level (and associated bit rate) of the encoded or transcoded video.
  • the controller may also change the frame rate, which affects the temporal smoothness of the video as well as the bit rate.
  • the controller may also change the video resolution if permitted by the format, which affects the spatial detail as well as the bit rate.
  • the evaluator 170 detects that network throughput is degraded, resulting in degraded QoE.
  • Current or imminent degradation in DQS may be detected by identifying client buffer fullness (for example by using a buffer fullness model), TCP retries, RTT, window size, etc.
  • the evaluator 170 may select control points with a reduced bit rate to ensure uninterrupted playback, thereby maximizing overall QoE score.
  • control point evaluation is carried out in two stages.
  • a first stage may include filtering of control points based on absolute criteria, such as removing control points that do not meet all constraints (e.g., policy rules and constraints 182).
  • a second stage may include scoring and ranking of the set of the filtered control points that meet all constraints, that is, selecting the best control point based on certain optimization criteria.
  • control points are removed if they do not meet applicable policies, PQS targets, DQS targets, or a combination thereof. For example, if the operator has specified a minimum frame rate (e.g. 12 frames per second), then points with a frame rate less than the minimum fail this selection.
  • evaluator 170 may evaluate the estimated PQS for the control points based on parameters such as, for example, resolution, frame rate, quantization level, client device characteristics (estimated viewing distance and screen size), estimated scene complexity (based on input bitstream characteristics), etc.
  • evaluator 170 may estimate a bit rate that a particular control point will produce based on similar parameters such as, for example, resolution, frame rate, quantization level, estimated scene complexity (based on input bitstream characteristics), etc. If the estimated bit rate is higher than what is expected or predicted to be available on the network (in a particular sector or network node), the control point may be excluded.
  • In some cases, evaluator 170 may estimate bit rate based on previously generated statistics from previous encodings at one or more of the different control points, if such statistics are available.
  • an optimization score is computed for each of the qualified control points that meet the constraints of the first stage.
  • the score may be computed based on a weighted sum of a number of penalties.
  • penalties may be assigned based on an operator preference expressed in a policy. For example, an operator could specify a strong, moderate, or weak preference to avoid frame rates below 10 fps. Such a preference can be specified in a policy and used in the computation of the penalties for each control point.
  • other ways of computing a score for the control points may be used.
  • the score is computed based on the penalties
  • factors determining optimality of each control point in a system may be considered. Such factors may include expected output bit rate, the amount of computational resources required in the system, and operator preferences expressed as a policy.
  • the computational resources required in the system may be computed using the number of output macroblocks per second of the output configuration. In general, the use of fewer computational resources (e.g., number of cycles required) is preferred, as this may use less power and/or allow simultaneous transcoding of more channels or streams.
  • the penalty for each control point may be computed as a weighted sum of the output bit rate (e.g., estimated kilobits per second), amount of computational resources (e.g., number of cycles required, output macroblocks per second, etc.), or operator preferences expressed as policy (e.g., frame rate penalty, resolution penalty, quantization penalty, etc.).
  • This example penalty calculation can also be expressed by way of the following optimization function:
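  • A sketch of such a weighted-sum penalty, using illustrative weight names (W_b, W_c, W_f, W_r, W_q) consistent with the description above; the exact function and weight values are not reproduced here.

        def control_point_penalty(est_bitrate_kbps, output_mb_per_sec,
                                  frame_rate_penalty, resolution_penalty,
                                  quantization_penalty,
                                  W_b=1.0, W_c=1.0, W_f=1.0, W_r=1.0, W_q=1.0):
            # Weighted sum of output bit rate, computational cost and
            # policy-preference penalties; setting a weight to 0 removes that
            # term from consideration, as in the weighting examples below.
            return (W_b * est_bitrate_kbps
                    + W_c * output_mb_per_sec
                    + W_f * frame_rate_penalty
                    + W_r * resolution_penalty
                    + W_q * quantization_penalty)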
  • Each part of the penalty may have a weight W determining how much the part contributes to the overall penalty.
  • the frame rate, resolution and quantization may only contribute if they are outside the range of preference as specified in a policy.
  • the frame rate penalty, for example, may be computed as outlined in the pseudocode sketch below, which follows the discussion of preference strengths:
  • the resolution preference may be expressed in terms of the image width. In some further cases, the resolution preferences may be expressed in terms of the overall number of macroblocks.
  • the strength of the preference specified in the policy may determine how much each particular element contributes to the scoring of the control points that are not in the desired range.
  • values of the Strong, Moderate, and Weak Penalty values might be 300, 200, and 100, respectively.
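  • A sketch of the frame rate penalty computation described above, assuming the example strength values of 300, 200 and 100:

        STRENGTH_PENALTY = {"strong": 300, "moderate": 200, "weak": 100}

        def frame_rate_penalty(candidate_fps, min_preferred_fps, strength):
            if candidate_fps >= min_preferred_fps:
                return 0                       # within the preferred range
            return STRENGTH_PENALTY[strength]  # outside the range

        print(frame_rate_penalty(8, 10, "strong"))   # -> 300
        print(frame_rate_penalty(12, 10, "strong"))  # -> 0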
  • the operator may specify penalties in other ways, having any suitable number of levels where any suitable range of values may be associated with those levels.
  • scoring is based on penalties
  • scoring may instead be based on "bonuses", in which case higher scores would be more desirable. It will be appreciated that various other scoring schemes also can be used.
  • the evaluator 170 selects the control point with the best score (e.g., lowest overall penalty).
  • Process flow 200 may be carried out by evaluator 170 of the QoE controller 110. The steps of the process flow 200 are illustrated by way of an example input bitstream with resolution 854x480 and frame rate 24 fps, although it will be appreciated that the process flow may be applied to an input bitstream of any other resolution and frame rate.
  • the evaluator 170 of the QoE controller 110 determines various candidate output resolutions and frame rates.
  • the various combinations of the candidate resolutions and frame rates may be referred to as candidate control points 230.
  • the various candidate output resolutions may include resolutions of 854x480, 640x360, 572x320, 428x240, 288x160, 216x120, computed by multiplying the width and the height of the input bitstream by multipliers 1, 0.75, 0.667, 0.5, 0.333, 0.25.
  • the various candidate output frame rates may include frame rates of 24, 12, 8, 6, 4.8, 4, derived by dividing the input frame rate by divisors 1, 2, 3, 4, 5, 6.
  • Various combinations of candidate resolutions and candidate frame rates can be used to generate candidate control points. In this example, there are 36 such control points. Other parameters may also be used in generating candidate control points as described herein, although these are omitted in this example to aid understanding.
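  • A sketch of this candidate generation for the 854x480 @ 24 fps example (the exact rounding of odd dimensions in the source may differ):

        from itertools import product

        def candidate_control_points(in_width, in_height, in_fps,
                                     multipliers=(1, 0.75, 0.667, 0.5, 0.333, 0.25),
                                     divisors=(1, 2, 3, 4, 5, 6)):
            resolutions = [(int(in_width * m) // 2 * 2, int(in_height * m) // 2 * 2)
                           for m in multipliers]   # rounded down to even dimensions
            frame_rates = [in_fps / d for d in divisors]
            return [{"resolution": r, "frame_rate": f}
                    for r, f in product(resolutions, frame_rates)]

        print(len(candidate_control_points(854, 480, 24)))  # -> 36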
  • the evaluator 170 determines which of the candidate control points 230 satisfy the policy rules and constraints 282 received from a policy engine, such as the policy engine 115. The control points that do not satisfy the policy rules and constraints 282 are excluded from further analysis at 225. The remaining control points are further processed at 210.
  • the QoE controller can determine if the remaining control points satisfy a quality level target (e.g., target PQS).
  • the estimated quality level is received from a QoE estimator, such as the estimator 175.
  • Control points that fail to meet the target quality level are excluded 225 from the analysis.
  • the remaining control points are further processed at 215.
  • the determination of whether or not the remaining control points satisfy the target PQS is made by predicting a PQS for each one of the remaining control points and comparing the predicted PQS with the target PQS to determine the control points to be excluded and control points to be further analyzed.
  • the PQS for the control points may be predicted as follows. First, a maximum PQS or a maximum spatial PQS that is achievable or reproducible at the client device may be determined based on the device type and the candidate resolution. Here, it is assumed that there are no other impairments and other factors that may affect video quality, such as reduced frame rate, quantization level, etc., are ideal. For example, a resolution of 640x360 on a tablet may yield a maximum PQS score of 4.3.
  • the maximum spatial PQS score may be adjusted for the candidate frame rate of the control point to yield a frame rate adjusted PQS score.
  • a resolution of 640x360 on a tablet with a frame rate of 12 fps may yield a frame rate adjusted PQS score of 3.2.
  • a quantization level may be selected that most closely achieves the target PQS given a particular resolution and frame rate. For example, if the target PQS is 2.7 and the control point has a resolution of 640x360 and frame rate of 12 fps, selecting an average quantization parameter of 30 (e.g., in the H.264 codec) achieves a PQS of 2.72. If the quantization parameter is increased to 31 (in the H.264 codec), the PQS estimate is 2.66.
  • Evaluator 170 can repeat the PQS prediction steps for one or more (and typically all) of the remaining control points. In some cases, one or more of the remaining control points may be incapable of achieving the target PQS.
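  • A hedged sketch of these PQS prediction steps; the model functions max_spatial_pqs, frame_rate_factor and pqs_at_qp are placeholders for models that are not disclosed in this text.

        def choose_qp_for_target(target_pqs, resolution, fps, device,
                                 max_spatial_pqs, frame_rate_factor, pqs_at_qp,
                                 qp_range=range(20, 52)):
            # 1. Maximum spatial PQS reproducible on the device at this resolution.
            max_pqs = max_spatial_pqs(device, resolution)
            # 2. Adjust for the candidate frame rate.
            fr_adjusted_pqs = max_pqs * frame_rate_factor(fps)
            if fr_adjusted_pqs < target_pqs:
                return None  # this resolution/frame rate can never reach the target
            # 3. Pick the quantization level whose predicted PQS is closest to target.
            return min(qp_range,
                       key=lambda qp: abs(pqs_at_qp(fr_adjusted_pqs, qp) - target_pqs))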
  • Among the candidate control points in the example of FIG. 2B, there may be resolution and frame rate combinations that can never achieve the target PQS irrespective of the quantization level.
  • For example, control points with frame rates of 8 fps or lower, and all resolutions of 288x160 or below, would yield a PQS that is below the target PQS of 2.7 regardless of the quantization parameter.
  • Evaluator 170 determines which of the control points would never achieve the target PQS, such as, for example, the target PQS of 2.7, and excludes 225 such control points.
  • At 215, the QoE controller determines whether the remaining control points from 210 satisfy a delivery quality target or other such stalling metric (e.g., a target DQS).
  • the delivery quality target is received from a stall rate predictor, such as predictor 180.
  • the control points that do not satisfy the delivery quality target are excluded 225 from the analysis. The remaining control points are considered at 220.
  • bit rate that would be produced by the remaining control points is predicted.
  • the following model based on the resolution, frame rate, quantization level and characteristics of the input bitstream (e.g. the input bit rate) may be used to predict the output bit rate:
  • bitsPerSecond = InputFactor * ((A * log(MBPF) + B) * (e^(-C * FPS) + D)) / ((E - MBPF * F)^Qp)
  • InputFactor is an estimate of the complexity of the input content. This estimate may be based on the input bit rate. For example, an InputFactor with a value of 1.0 may mean average complexity.
  • MBPF is an estimate of output macroblocks per frame.
  • FPS is an estimate of output frames per second.
  • Values A through F may be constants based on the characteristics of the encoder being used, which can be determined based on past encoding runs with the encoder.
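  • A sketch implementing the bit rate model as reconstructed above (the exponent signs are part of that reconstruction, and the constants A through F would be fit from past encoding runs with the encoder rather than hard-coded):

        import math

        def predict_output_bitrate(input_factor, mbpf, fps, qp, A, B, C, D, E, F):
            # input_factor: input complexity estimate (1.0 = average complexity)
            # mbpf: estimated output macroblocks per frame; fps: output frame rate
            # qp: average output quantization parameter
            spatial_term = A * math.log(mbpf) + B
            temporal_term = math.exp(-C * fps) + D
            qp_term = (E - mbpf * F) ** qp
            return input_factor * spatial_term * temporal_term / qp_term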
  • control points that have an estimated bit rate that is at or near the bandwidth estimated to be available to the client on the network may be excluded 225 from the set of possible control points. This is because the predicted DQS may be too low to meet the overall QoE target.
  • control points are scored and ranked to select the best control point.
  • the criteria for determining whether a control point is the best may be a penalty based model as discussed herein.
  • one or more of 205, 210 and 215 may be omitted to provide a simplified evaluation.
  • a target QoE may be based on PQS alone, and evaluator 170 may only perform target PQS evaluation, omitting policy evaluation and target DQS evaluation.
  • Table I illustrates example control points and associated parameter values to illustrate the scoring and ranking that may be performed by the evaluator 170.
  • Control points 1 to 3 in Table I are control points that, for example, meet the policy rules and constraints 282, and target QoE constraints.
  • Evaluator 170 can compute scores (e.g., penalty values) for these remaining control points.
  • Output macroblocks per second may be computed directly from the output resolution and frame rate based on an average or estimated number of macroblocks for a given quantization level.
  • the penalty values are computed based on the weighted-sum optimization function discussed herein.
  • control point 2 is determined to have the best balance of bit rate and complexity, as it has the lowest total penalty.
  • both the bit rate and the frame rate preferences can be taken into account.
  • all the weights other than W_b and W_c may be set to 0.
  • Table III illustrates example control points where the operator has specified a strong preference to avoid frame rates below 15 fps.
  • both the W_b and the W_f may be set to 1 to determine the control point with the best balance of bit rate and frame rate.
  • Both the control points 1 and 2 may have a frame rate penalty of 300 applied due to the "strong" preference and the fact that their frame rates are below 15 fps. In this case, control point 2 may be the selected option.
  • FIG. 3 illustrates a process flow diagram 300 that may be executed by an exemplary QoE controller 110.
  • Process flow 300 begins at 305 by receiving a media stream, for example at the commencement of a media session.
  • the control system may select a target quality level - or target QoE - for the media session.
  • the target QoE may be a composite value computed based on PQS, DQS or combinations thereof.
  • the target QoE may be a tuple comprising individual target scores.
  • target QoE may generally be weighted in favor of PQS, since this is easier to control.
  • the target QoE may be provided to the QoE controller by the policy engine.
  • the target QoE may be calculated based on factors such as the viewing device, the content characteristics, subscriber preference, etc.
  • the QoE controller may calculate the target QoE based on policy received from the policy engine.
  • the QoE controller may receive the policy that a larger viewing device screen requires a higher resolution for equivalent QoE than a smaller screen. In this case, the QoE controller may determine the target QoE based on this policy and the device size. It will be appreciated that in some cases the term QoE is not limited to values based on PQS or DQS. In some cases, QoE may be determined based on various one or more other objective or subjective metrics for determining a quality level.
  • a policy may state that high action content, such as, for example, sports, requires a higher frame rate to achieve adequate QoE.
  • the QoE controller may then determine the target QoE based on this policy and the content type.
  • the policy may provide that the subscriber receiving the media session has a preference for better quantization at the cost of lower frame rate and/or resolution, or vice-versa.
  • the QoE controller may then determine the target QoE based on this policy.
  • a predicted quality level - or predicted QoE - associated with each control point may be computed as described herein.
  • Each control point has a plurality of transcoding parameters, such as, for example, resolution, frame rate, quantization level, etc. associated with it.
  • QoE controller may generate a plurality of control points based on the input media session.
  • the incoming media session may be processed by a decoder, such as decoder 150.
  • the media session may be processed at an application and/or a container level to generate input stream statistics, such as the input stream statistics 188.
  • the input stream statistics may be used by the QoE controller to generate a plurality of candidate control points.
  • the plurality of candidate control points may, in addition or alternatively, be generated based on the policy rules and constraints, such as policy rules and constraints 182, 282.
  • an initial control point may be selected from the plurality of control points.
  • the initial control point may be selected so that the predicted QoE associated with the initial control point substantially corresponds to the target QoE.
  • the initial control point may be selected based on the evaluation carried out by evaluator 170.
  • the optimization function model to calculate penalties may be used by the evaluator 170 to select the initial control point as described herein. Selection of the optimal control point may be based on one or more criteria such as minimizing bit rate, minimizing transcoding resource requirements and satisfying additional policy constraints, for example, device type, subscriber tier, service plan, time of day, etc.
  • the QoE controller may compute the target QoE and/or the predicted QoE for a media stream in a media session for a range or duration of time, referred to as a "prediction horizon".
  • the duration of time for which the QoE is predicted or computed may be based on content complexity (motion, texture), quantization level, frame rate, resolution, and target device.
  • the QoE controller may anticipate the range of bit rates/quality-levels that are likely to be encountered in a session lifetime. Based on this anticipation, the QoE controller may select initial parameters, such as the initial control point, to provide the most flexibility over the life of the session. In some cases, some or all of the initial parameters selected by the QoE controller may be set to be unchangeable over the life of the session.
  • the media session is encoded based on the initial control point.
  • the media session may be encoded by an encoder, such as encoder 155.
  • FIG. 4 illustrates a process flow diagram 400 that may be executed by an exemplary QoE controller 110.
  • Process flow 400 begins at 405 by receiving a media stream, for example while a media session is in progress. In some cases, process flow 400 may continue from 325 of process flow 300 in FIG. 3.
  • the QoE controller determines whether the real-time QoE of the media session substantially corresponds to the target QoE.
  • the target QoE may be provided to the QoE controller by a policy engine, such as the policy engine 115.
  • the target QoE may be set by the network operator.
  • the target QoE may be calculated by the QoE controller as described herein.
  • a predicted QoE associated with each control point may be re-computed using a process similar to 315 of process flow 300.
  • the predicted QoE may be based on the real-time QoE of the media stream. In various cases, the interval for re-evaluation or re-computation is much shorter than the prediction horizon used by the QoE controller.
  • an updated control point may be selected from the plurality of control points using a process similar to 320 of process flow 300.
  • the updated control point is selected so that the predicted QoE associated with the updated control point substantially corresponds to the target QoE.
  • the updated control point may be selected based on the evaluation carried out by evaluator 170.
  • the optimization function model to calculate penalties may be used by the evaluator 170 to select the updated control point.
  • the media session may be encoded based on the updated control point.
  • the media session may be encoded by an encoder, such as encoder 155. Accordingly, if the media session was initially being encoded using an initial control point, the encoder may switch to using an updated control point following its selection at 420 (a sketch of this run-time re-evaluation loop is given after this list).
  • the target and the predicted QoE computed in process flows 300 and 400 may be based on the visual presentation quality of the media session, such as that determined by a PQS score.
  • the target and the predicted QoE may be based on the delivery network quality, such as that determined by the DQS score.
  • the target and the predicted QoE correspond to a combined presentation and network delivery score, as determined by CQS.
  • the elements related to network delivery may be optional.
  • the network resource model 120 and the client buffer model 125 of system 100 may be optional.
  • predictor 180 of the QoE controller 110 may be optional.
  • the target PQS and target DQS may be combined into a single target score, or CQS.
  • the CQS may be computed according to the following formula, for example:
  • CQS = C0 + C1 * (PQS + DQS) + C2 * (PQS * DQS) + C3 * (PQS^2) * (DQS^2)
  • the constants may be given different values by, for example, a network operator.
  • CQS scores give more influence to the lower of the two component scores, PQS and DQS (a sketch after this list evaluates this formula directly).
  • audio and video stream quality may be combined to compute an overall PQS, for example, according to a weighted formula (one plausible form is sketched after this list).
  • Video_weight and Audio_weight may be selected so that their sum is 1, and may be adjusted according to the relative importance of the audio and the video. For example, if video is deemed more important, Video_weight may be 2/3 and Audio_weight may be 1/3.
  • the value of p may determine how much influence the lower of the two input values has on the final score.
  • the described embodiments generally enable service providers to provide their subscribers with assurance that content they access will conform to one or more agreed upon quality levels, permitting creation of pricing models based on the quality of their subscribers' experiences.
  • the described embodiments also enable service providers to provide content providers and aggregators with assurances that their content will be delivered at one or more agreed upon quality levels, permitting creation of pricing models based on an assured level of content quality.
  • the described embodiments enable service providers to deliver the same or similar video quality across one or more disparate media sessions in a given network location.
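
The policy-driven determination of a target QoE described above (device screen size, content type, subscriber preference or tier) can be illustrated with a short sketch. The rules, thresholds, and the 1.0-5.0 QoE scale used below are illustrative assumptions for this sketch only; they are not values taken from the disclosure.

```python
# Hypothetical sketch: deriving a target QoE from policy rules and session
# attributes. All rules and numeric values below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SessionAttributes:
    screen_diagonal_inches: float   # viewing device screen size
    content_type: str               # e.g. "sports", "news", "film"
    subscriber_tier: str            # e.g. "premium", "basic"

def target_qoe(attrs: SessionAttributes, base_target: float = 3.5) -> float:
    """Return a target QoE score (assumed 1.0-5.0 scale, higher is better)."""
    target = base_target

    # Policy: larger screens need a higher target for equivalent perceived QoE.
    if attrs.screen_diagonal_inches >= 9.0:
        target += 0.5
    elif attrs.screen_diagonal_inches <= 4.5:
        target -= 0.25

    # Policy: high-action content (e.g. sports) needs a higher target,
    # which in turn favours control points with higher frame rates.
    if attrs.content_type == "sports":
        target += 0.25

    # Policy: subscriber tier may raise or lower the assured quality level.
    if attrs.subscriber_tier == "premium":
        target += 0.25

    return min(max(target, 1.0), 5.0)

if __name__ == "__main__":
    attrs = SessionAttributes(10.1, "sports", "premium")
    print(f"target QoE: {target_qoe(attrs):.2f}")
```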
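The generation of candidate control points and the selection of an initial control point whose predicted QoE substantially corresponds to the target can be sketched as a penalty minimization. The candidate parameter grid, the predicted-QoE and predicted-bit-rate models, and the penalty weights below are placeholder assumptions made for the sketch; they are not the disclosed predictor 180 or evaluator 170.

```python
# Hypothetical sketch of candidate control point generation and initial
# selection. The quality and bit-rate models are illustrative stand-ins.

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ControlPoint:
    width: int
    height: int
    frame_rate: float
    quantizer: int          # e.g. an H.264-style QP

def generate_control_points(src_width: int, src_height: int) -> list[ControlPoint]:
    """Enumerate candidate transcoding parameter sets no larger than the source."""
    resolutions = [(1280, 720), (854, 480), (640, 360), (426, 240)]
    frame_rates = [30.0, 24.0, 15.0]
    quantizers = [26, 30, 34]
    return [
        ControlPoint(w, h, fps, qp)
        for (w, h), fps, qp in product(resolutions, frame_rates, quantizers)
        if w <= src_width and h <= src_height
    ]

def predict_qoe(cp: ControlPoint) -> float:
    """Toy predicted presentation quality on a 1-5 scale (placeholder model)."""
    res_term = (cp.width * cp.height) / (1280 * 720)
    fps_term = cp.frame_rate / 30.0
    qp_term = max(0.0, 1.0 - (cp.quantizer - 22) / 30.0)
    return 1.0 + 4.0 * (0.5 * res_term + 0.2 * fps_term + 0.3 * qp_term)

def predict_bitrate_kbps(cp: ControlPoint) -> float:
    """Toy predicted bit rate (placeholder model)."""
    pixels_per_sec = cp.width * cp.height * cp.frame_rate
    return pixels_per_sec * (52 - cp.quantizer) * 1e-5

def select_initial_control_point(candidates, target_qoe,
                                 qoe_weight=10.0, bitrate_weight=0.001):
    """Pick the candidate minimising a penalty: come as close as possible to
    the target QoE while also keeping the predicted bit rate low."""
    def penalty(cp: ControlPoint) -> float:
        return (qoe_weight * abs(predict_qoe(cp) - target_qoe)
                + bitrate_weight * predict_bitrate_kbps(cp))
    return min(candidates, key=penalty)

if __name__ == "__main__":
    candidates = generate_control_points(1920, 1080)
    best = select_initial_control_point(candidates, target_qoe=3.75)
    print(best, round(predict_qoe(best), 2))
```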
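The run-time behaviour described for process flow 400, in which the real-time QoE is periodically compared with the target and the encoder is switched to an updated control point when they diverge, can be sketched as a simple loop. The QoE measurement, the candidate list, and the re-selection heuristic are assumptions made for the sketch.

```python
# Hypothetical sketch of the run-time control loop: measure real-time QoE at a
# short interval and, when it no longer substantially corresponds to the
# target, re-select a control point and switch the encoder to it.

import random
import time

TOLERANCE = 0.25             # allowed drift of real-time QoE from the target
REEVALUATION_PERIOD_S = 0.5  # kept short for the demo; in practice the
                             # interval is much shorter than the prediction horizon

def measure_realtime_qoe() -> float:
    """Stand-in for a real-time QoE measurement of the in-progress stream."""
    return random.uniform(2.5, 4.5)

def select_control_point(candidates, aim):
    """Pick the candidate whose predicted QoE is closest to the aim point.
    Each candidate is a (name, predicted_qoe) pair in this sketch."""
    return min(candidates, key=lambda c: abs(c[1] - aim))

def control_loop(candidates, target_qoe, iterations=5):
    current = select_control_point(candidates, target_qoe)  # initial control point
    print("initial control point:", current[0])
    for _ in range(iterations):
        time.sleep(REEVALUATION_PERIOD_S)
        realtime_qoe = measure_realtime_qoe()
        if abs(realtime_qoe - target_qoe) <= TOLERANCE:
            continue  # real-time QoE substantially corresponds to the target
        # Re-evaluate: shift the aim point by the observed error, so a session
        # running below target is pushed toward higher-quality candidates and
        # one running above target toward cheaper ones (a stand-in heuristic).
        updated = select_control_point(candidates, 2 * target_qoe - realtime_qoe)
        if updated != current:
            print(f"realtime QoE {realtime_qoe:.2f}: switching to {updated[0]}")
            current = updated

if __name__ == "__main__":
    candidates = [("720p30_q26", 4.4), ("480p30_q30", 3.6), ("360p15_q34", 2.6)]
    control_loop(candidates, target_qoe=3.75)
```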
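The CQS formula given above can be evaluated directly. The constant values C0 through C3 used below are illustrative defaults chosen for the sketch; as noted, a network operator may assign different values.

```python
# Direct evaluation of the CQS formula above with illustrative constants.

def cqs(pqs: float, dqs: float,
        c0: float = 0.0, c1: float = 0.4, c2: float = 0.05, c3: float = 0.001) -> float:
    return c0 + c1 * (pqs + dqs) + c2 * (pqs * dqs) + c3 * (pqs ** 2) * (dqs ** 2)

if __name__ == "__main__":
    # With the same PQS + DQS sum, the unbalanced pair scores lower: the
    # weaker of the two component scores dominates the combined result.
    print(round(cqs(4.5, 2.0), 2))    # ~3.13
    print(round(cqs(3.25, 3.25), 2))  # ~3.24
```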
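The combination of audio and video quality into an overall PQS is described above in terms of Video_weight and Audio_weight (summing to 1) and an exponent p that controls how strongly the lower of the two inputs pulls down the result. The exact formula is not reproduced in this text, so the weighted power mean below is only one plausible reading, offered as an assumption.

```python
# Hypothetical sketch: a weighted power mean as one plausible way to combine
# video and audio PQS. Video_weight + Audio_weight = 1, and smaller (more
# negative) values of p give the lower of the two inputs more influence.
# The disclosed formula itself is not reproduced here; this form is an
# assumption made for illustration.

def overall_pqs(video_pqs: float, audio_pqs: float,
                video_weight: float = 2.0 / 3.0, audio_weight: float = 1.0 / 3.0,
                p: float = -2.0) -> float:
    assert abs(video_weight + audio_weight - 1.0) < 1e-9, "weights must sum to 1"
    return (video_weight * video_pqs ** p
            + audio_weight * audio_pqs ** p) ** (1.0 / p)

if __name__ == "__main__":
    # As p decreases, the combined score is pulled toward the lower input.
    for p in (1.0, -1.0, -4.0):
        print(p, round(overall_pqs(4.5, 2.5, p=p), 2))  # 3.83, 3.55, 3.15
```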

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to methods and systems for controlling the quality of a media stream in a media session. The described methods and systems control the quality of the media stream by controlling the transcoding of the media session. Transcoding is controlled at the start of the media session and dynamically over the lifetime of the media session. Transcoding is controlled by selecting a target quality of experience (QoE) for the media session, computing a predicted QoE for each of a plurality of control points, each control point having a plurality of transcoding parameters associated with it, selecting a control point from the plurality of control points such that the predicted QoE for the selected control point substantially corresponds to the target QoE, and notifying the transcoder to use the selected control point for the media session.
PCT/CA2013/000283 2012-10-30 2013-03-28 Procédés et systèmes pour contrôler la qualité d'une session multimédia WO2014066975A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261719989P 2012-10-30 2012-10-30
US61/719,989 2012-10-30

Publications (1)

Publication Number Publication Date
WO2014066975A1 true WO2014066975A1 (fr) 2014-05-08

Family

ID=50626242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2013/000283 WO2014066975A1 (fr) 2012-10-30 2013-03-28 Procédés et systèmes pour contrôler la qualité d'une session multimédia

Country Status (1)

Country Link
WO (1) WO2014066975A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050157660A1 (en) * 2002-01-23 2005-07-21 Davide Mandato Model for enforcing different phases of the End-to-End Negotiation Protocol (E2ENP) aiming QoS support for multi-stream and multimedia applications
CA2632885A1 (fr) * 2005-12-09 2007-06-14 Eyecon Technologies, Inc Controleur et procede de commande pour recherche, routage et lecture multimedia
US20080205389A1 (en) * 2007-02-26 2008-08-28 Microsoft Corporation Selection of transrate and transcode processes by host computer
CA2737476A1 (fr) * 2008-12-12 2010-06-17 Ecole De Technologie Superieure Procede et systeme de transcodage a faible complexite d'images avec une qualite presque optimale
US8290038B1 (en) * 2009-11-30 2012-10-16 Google Inc. Video coding complexity estimation

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104702666A (zh) * 2015-01-30 2015-06-10 北京邮电大学 用户体验质量确定方法及系统
CN104702666B (zh) * 2015-01-30 2019-05-28 北京邮电大学 用户体验质量确定方法及系统
JP2021513274A (ja) * 2018-02-09 2021-05-20 華為技術有限公司Huawei Technologies Co.,Ltd. データ処理方法、サーバ、およびデータ収集デバイス
JP7059382B2 (ja) 2018-02-09 2022-04-25 華為技術有限公司 データ処理方法およびサーバ
US11936930B2 (en) 2018-02-09 2024-03-19 Huawei Technologies Co., Ltd. Data processing method, server, and data collection device
US10623788B2 (en) 2018-03-23 2020-04-14 At&T Intellectual Property I, L.P. Methods to estimate video playback buffer
US11109079B2 (en) 2018-03-23 2021-08-31 At&T Intellectual Property I, L.P. Methods to estimate video playback buffer
US11533524B2 (en) 2018-03-23 2022-12-20 At&T Intellectual Property I, L.P. Methods to estimate video playback buffer
EP3972259A4 (fr) * 2019-05-17 2022-08-17 Beijing Dajia Internet Information Technology Co., Ltd. Procédé et appareil de transcodage de vidéos, dispositif électronique et support de stockage
WO2022256145A1 (fr) * 2021-06-03 2022-12-08 Microsoft Technology Licensing, Llc Mesure de la qualité d'expérience vidéo sur la base d'une fréquence image décodée
US11558668B2 (en) 2021-06-03 2023-01-17 Microsoft Technology Licensing, Llc Measuring video quality of experience based on decoded frame rate

Similar Documents

Publication Publication Date Title
US20130304934A1 (en) Methods and systems for controlling quality of a media session
US20140181266A1 (en) System, streaming media optimizer and methods for use therewith
US11076187B2 (en) Systems and methods for performing quality based streaming
EP3338455B1 (fr) Système et procédé pour gérer la livraison des segments et largeur de bande en réponse à les métriques de complexité de codage
US10298985B2 (en) Systems and methods for performing quality based streaming
CN106537923B (zh) 自适应视频流的技术
US20150082345A1 (en) System for generating enhanced advertizements and methods for use therewith
US20150026309A1 (en) Systems and methods for adaptive streaming control
US20150163273A1 (en) Media bit rate estimation based on segment playback duration and segment data length
Ramakrishnan et al. SDN based QoE optimization for HTTP-based adaptive video streaming
CN109729437B (zh) 流媒体自适应传输方法、终端和系统
US20170347159A1 (en) Qoe analysis-based video frame management method and apparatus
WO2013184374A1 (fr) Techniques pour une lecture en flux vidéo adaptative
KR101982290B1 (ko) 적응적 스트리밍 서비스의 체감 품질 향상을 위한 콘텐츠 특성 기반 스트리밍 시스템 및 방법
EP3563540B1 (fr) Procédé et système permettant de fournir des services vidéo en continu de qualité variable dans des réseaux de communication mobile
EP3993419A1 (fr) Transcodage adaptatif d'échelle de profil de vidéos
JP2011019068A (ja) 品質制御装置、品質制御システム、品質制御方法およびプログラム
US20160028594A1 (en) Generating and Utilizing Contextual Network Analytics
WO2014066975A1 (fr) Procédés et systèmes pour contrôler la qualité d'une session multimédia
KR102652518B1 (ko) 비디오 스트리밍을 위한 세션 기반 적응적 재생 프로파일 판정
US9313243B2 (en) Video streaming over data networks
JP2022545623A (ja) ビデオプレイバックにおける予測ベースドロップフレームハンドリング論理
Martini et al. QoE control, monitoring, and management strategies
He et al. Favor: Fine-grained video rate adaptation
Zhang et al. A QOE-driven approach to rate adaptation for dynamic adaptive streaming over http

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13850098

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2013850098

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE