WO2018058587A1 - Method, apparatus and system for evaluating video quality - Google Patents

Method, apparatus and system for evaluating video quality

Info

Publication number
WO2018058587A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
parameter
initial
quality
duration
Prior art date
Application number
PCT/CN2016/101227
Other languages
English (en)
French (fr)
Inventor
王斌
聂章艳
赵其勇
李鹏
季莉
史成龙
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2016/101227
Publication of WO2018058587A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a method, apparatus, and system for evaluating video quality.
  • KPI Key Performance Indicator
  • Embodiments of the present invention describe a method, apparatus, and system for evaluating video quality, which are used to reduce the troubles caused by existing evaluation methods for product design and experience presentation.
  • an embodiment of the present invention provides a method for evaluating video quality.
  • the method comprises: determining a video source quality parameter and a video buffering parameter of a video signal, wherein the video buffering parameter is related to a video playing duration of the video signal; and determining a video quality evaluation result of the video signal according to the video source quality parameter and the video buffering parameter .
  • the solution provided by the embodiment of the present invention can determine the video quality evaluation result of the video signal by determining the video source quality parameter and the video buffer parameter of the video signal, so that the video quality can be evaluated based on the video quality evaluation result.
  • This evaluation method can use a unified indicator to evaluate the video quality, which is convenient for product design and experience presentation.
  • the video source quality parameter is used to indicate the quality of the video source;
  • the video buffer parameter is a parameter related to the initial buffering and stalling of the video, and the video quality evaluation result is a unified indicator for evaluating the video quality.
  • the video buffering parameter of the video signal may be determined by determining an initial buffering parameter and a video stalling parameter of the video signal, and then determining the video buffering parameter based on the initial buffering parameter and the video stalling parameter.
  • the initial buffering parameter is a parameter related to the initial buffering delay, where the initial buffering delay may be the delay from the initiation of the viewing request to the appearance of the video picture, and the unit may be seconds; the video stalling parameter is a parameter related to video stalling.
  • the video buffering parameter satisfies:
  • Y1 = max(0, 1 - α(5 - A) - β(5 - B)), where Y1 is the video buffer parameter, A is the initial buffering parameter, B is the video stalling parameter, α is the quality loss weight coefficient of the initial buffering, and β is the quality loss weight coefficient of stalling.
  • α and β are attenuated as the experience deteriorates.
  • the video quality assessment results are:
  • F1 = (X - 1) × Y1 + 1, where F1 is the video quality evaluation result, X is the video source quality parameter, and Y1 is the video buffer parameter.
  • the video buffering parameter satisfies:
  • Y2 = 1 - α(5 - A) - β(5 - B), where Y2 is the video buffer parameter, A is the initial buffering parameter, B is the video stalling parameter, α is the initial buffering quality loss weight coefficient, and β is the stalling quality loss weight coefficient. As above, α and β are attenuated as the experience deteriorates.
  • the video quality assessment results are:
  • F2 = (X - 1) × max(0, Y2) + 1, where F2 is the video quality evaluation result, X is the video source quality parameter, and Y2 is the video buffer parameter.
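The two scoring variants above (Y1/F1 and Y2/F2) can be sketched in Python. This is a minimal illustration under our own function names, not the patented implementation; A, B, and X are assumed to be scores on the 1-to-5 scale defined later in the text.

```python
def buffer_score_y1(a, b, alpha, beta):
    """Variant 1: Y1 = max(0, 1 - alpha*(5 - A) - beta*(5 - B))."""
    return max(0.0, 1 - alpha * (5 - a) - beta * (5 - b))

def buffer_score_y2(a, b, alpha, beta):
    """Variant 2: Y2 = 1 - alpha*(5 - A) - beta*(5 - B) (may go negative)."""
    return 1 - alpha * (5 - a) - beta * (5 - b)

def quality_f1(x, y1):
    """F1 = (X - 1) * Y1 + 1; a perfect buffer score (Y1 = 1) returns X."""
    return (x - 1) * y1 + 1

def quality_f2(x, y2):
    """F2 = (X - 1) * max(0, Y2) + 1; clamping happens here instead of in Y2."""
    return (x - 1) * max(0.0, y2) + 1
```

With A = B = 5 (no initial-buffering or stalling penalty) both variants reduce to F = X, and with a sufficiently poor buffering experience both floor at 1, the worst score.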
  • α and β can be determined in the following manner:
  • v17 and v18 are the weighting factors of the initial buffering; v19 is a constant and can take a fixed value; v20 and v21 are the weighting factors of stalling.
  • the initial buffering parameter can be determined by determining the initial buffering delay of the video signal and the duration of the video playback; and determining the initial buffering parameter based on the initial buffering delay and the length of the video playback.
  • the video playing duration refers to the pure playing time during which the user watches the video, excluding the initial buffering delay and the video stalling duration; the unit may be seconds.
  • the initial buffering parameter satisfies:
  • A is the initial buffering parameter
  • C is the original initial buffering score without considering the influence of the playing duration
  • D is the maximum score correction interval for different initial buffering delays
  • E is the influence factor of different video playing durations
  • v6, v7, v8, and v9 are calculation factors of the initial buffering parameter and are constants
  • v10 and v11 are correction factors of the initial buffering parameter and are constants
  • IBL is the initial buffering delay
  • VPD is the video playback duration.
  • the video stalling parameter can be determined by determining the stalling duration ratio and the video playback duration of the video signal, and then determining the video stalling parameter according to the stalling duration ratio and the video playback duration.
  • the stalling duration ratio may be obtained by dividing the video stalling duration by the video playback duration.
  • the video stalling parameter satisfies:
  • B is the video stalling parameter
  • G is the original video stalling score when the video playback duration is not considered
  • H is the influence factor of different video playback durations
  • v12, v13, v14, v15, and v16 are calculation factors of the video stalling parameter and are constants
  • SR is the stalling duration ratio
  • VPD is the duration of the video playback.
  • the video source quality parameter of the video signal may be determined by: determining a video resolution coefficient, a video coding algorithm coefficient, a video coding level coefficient, and an audio/video mixed bit rate parameter of the video signal; and then determining the video source quality parameter according to the video resolution coefficient, the video coding algorithm coefficient, the video coding level coefficient, and the audio/video mixed bit rate parameter.
  • the video resolution coefficient is a coefficient related to video resolution, and common video resolutions are 360P, 480P, 720P, 1080P, 2K, 4K, etc.;
  • the video coding algorithm coefficient is a coefficient related to a video coding algorithm, and the video The coding algorithm may also be referred to as a video coding mode.
  • Common video coding algorithms include H.264, H.265, VP9, etc.; the video coding level coefficient is a coefficient related to the video coding profile or level.
  • common video coding profiles include the base profile, the main profile, the high profile, etc.;
  • the audio/video mixed bit rate parameter is a parameter related to the audio/video mixed bit rate, where the audio/video mixed bit rate refers to the data transmission rate of audio and/or video per unit time and can be expressed, for example, as a stream rate, code rate, or bit rate.
  • the video resolution coefficient includes a first resolution coefficient and a second resolution coefficient. The first resolution coefficient can be used to represent the best quality score of video at a specified resolution and is determined by the video resolution, video coding algorithm, video coding profile, and video bit rate. The second resolution coefficient, also known as the coding attenuation coefficient, is used to indicate the degree of coding attenuation and is affected by the video resolution, video coding algorithm, video coding profile, and the bit rate of the video.
  • the video source quality parameter satisfies:
  • X is the video source quality parameter
  • v1 is the first resolution coefficient
  • v2 is the second resolution coefficient
  • v3 is the video coding algorithm coefficient
  • v4 is the video coding level coefficient
  • v5 is a constant
  • MBR is the audio and video mixture. Rate parameter.
  • part or all of the following video information may be acquired: initial buffering delay, video playback duration, stalling duration ratio, video resolution, audio/video mixed bit rate, video coding algorithm, video coding profile, etc.
  • the above method examples may be performed by a terminal, a base station or a server; or may be performed by a function module in a terminal, a base station or a server.
  • an embodiment of the present invention provides an apparatus for evaluating video quality, the evaluation apparatus having a function of implementing the above method example.
  • the functions may be implemented by hardware or by corresponding software implemented by hardware.
  • the hardware or software includes one or more modules corresponding to the functions described above.
  • the evaluation device includes a processor configured to support the evaluation device to perform a corresponding function in the above method. Further, the evaluation device may further include a transmitter and a receiver for supporting communication between the evaluation device and other devices. Further, the evaluation device may further include a memory for coupling with the processor, which stores program instructions and data necessary for the evaluation device.
  • the above evaluation device may be a terminal, a base station or a server; or may be a function module in a terminal, a base station or a server, such as the Mobile MOS SDK.
  • an embodiment of the present invention provides a communication system, which includes the video quality evaluation apparatus described in the above aspect.
  • an embodiment of the present invention provides a computer storage medium for storing program instructions for use by the foregoing evaluation apparatus.
  • the solution of the embodiment of the present invention can determine the video quality evaluation result of the video signal by determining the video source quality parameter and the video buffer parameter of the video signal, so that the video quality can be evaluated based on the video quality evaluation result.
  • This evaluation method can use a unified indicator to evaluate the video quality, which is convenient for product design and experience presentation.
  • FIG. 1 is a schematic diagram of a possible application scenario according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a possible network architecture according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a video quality evaluation method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a video playing process according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a Mobile MOS SDK according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart diagram of another method for evaluating video quality according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart diagram of still another method for evaluating video quality according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic block diagram of an apparatus for evaluating video quality according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a base station according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • the network architecture and the service scenario described in the embodiments of the present invention are used to more clearly illustrate the technical solutions of the embodiments of the present invention, and do not constitute a limitation of the technical solutions provided by the embodiments of the present invention.
  • With the evolution of the network architecture and the emergence of new service scenarios, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems.
  • FIG. 1 shows an application scenario that may be applicable to an embodiment of the present invention.
  • the terminal accesses a carrier Internet Protocol (IP) service network through a Radio Access Network (RAN) and a Core Network (CN), as shown in FIG. 1A, for example an IP Multimedia System (IMS) network, a Packet Switched Streaming Service (PSS) network, and the like.
  • IP Internet Protocol
  • RAN Radio Access Network
  • CN Core Network
  • IMS IP Multimedia System
  • PSS Packet Switched Streaming Service
  • LTE Long Term Evolution
  • CDMA Code Division Multiple Access
  • FDMA Frequency Division Multiple Access
  • TDMA Time Division Multiple Access
  • OFDMA Orthogonal Frequency Division Multiple Access
  • SC-FDMA Single Carrier Frequency Division Multiple Access
  • FIG. 2 illustrates a possible network architecture provided by an embodiment of the present invention.
  • the network architecture includes: a terminal, a base station, a core network device, the Internet, and a server.
  • the video stream is transmitted between the terminal and the server via the base station, the core network device, and the Internet.
  • the quality of the video stream can be evaluated by the terminal, base station or server.
  • a video application (Application, APP) may be installed on the terminal.
  • the terminal can evaluate the quality of the video stream through the video APP.
  • the base station can evaluate the quality of the video stream.
  • the video app provider's server can evaluate the quality of the video stream as the user views the video.
  • the above evaluation of the quality of the video stream can be understood as an assessment of the quality of the video signal.
  • the terminal involved in the embodiments of the present invention may include various handheld devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
  • UE User Equipment
  • MS mobile station
  • For convenience of description, the devices mentioned above are collectively referred to as terminals.
  • a base station (BS) according to an embodiment of the present invention is a device deployed in a radio access network to provide a wireless communication function for a terminal.
  • the base station may include various forms of macro base stations, micro base stations, relay stations, access points, and the like.
  • In different systems, the name of a device with the base station function may differ; for example, in a Long Term Evolution (LTE) system it is called an evolved NodeB (eNB or eNodeB), while in a 3G communication system it is called a NodeB, and so on.
  • LTE Long Term Evolution
  • eNB evolved Node B
  • NodeB base station in a 3G communication system
  • the foregoing apparatus for providing a wireless communication function to a terminal is collectively referred to as a base station or a BS.
  • embodiments of the present invention provide a method for evaluating video quality, and an apparatus and system based on the method.
  • the method comprises: determining a video source quality parameter and a video buffering parameter of a video signal, wherein the video buffering parameter is related to a video playing duration of the video signal; and determining a video quality evaluation result of the video signal according to the video source quality parameter and the video buffering parameter .
  • the solution provided by the embodiment of the present invention can determine the video quality evaluation result of the video signal by determining the video source quality parameter and the video buffer parameter of the video signal, so that the video quality can be evaluated based on the video quality evaluation result.
  • This evaluation method can use a unified indicator to evaluate the video quality, which is convenient for product design and experience presentation.
  • a video source quality parameter and a video buffer parameter of the video signal are determined, wherein the video buffer parameter is related to a video playback duration of the video signal.
  • the video source quality parameter can be determined by first determining a video resolution coefficient, a video coding algorithm coefficient, a video coding level coefficient, and an audio/video mixed bit rate parameter of the video signal, and then determining the video source quality parameter according to the video resolution coefficient, the video coding level coefficient, the video coding algorithm coefficient, and the audio/video mixed bit rate parameter.
  • the video buffering parameter may be determined by first determining an initial buffering parameter and a video stalling parameter of the video signal, and then determining the video buffering parameter based on the initial buffering parameter and the video stalling parameter.
  • the initial buffering parameter may be determined by first determining an initial buffering delay of the video signal and a video playing duration; and determining the initial buffering parameter according to the initial buffering delay and the video playing duration.
  • the video stalling parameter may be determined by first determining the stalling duration ratio and the video playback duration of the video signal, and then determining the video stalling parameter according to the stalling duration ratio and the video playback duration.
  • a video quality assessment result of the video signal is determined based on the video source quality parameter and the video buffer parameter.
  • Video quality evaluation result a unified metric for evaluating video quality, such as the Video Mean Opinion Score (Video MOS) or Mobile Mean Opinion Score (Mobile MOS).
  • Video MOS or Mobile MOS may include different score levels, and different score levels represent different degrees of video quality.
  • Video MOS can be defined to include 1 to 5 points. The higher the score, the better the video quality. Among them, 1 point means the worst video quality, and 5 points means the best video quality.
  • Video source quality parameter used to indicate the quality of the video source, for example, can be represented by a video source quality score, where the video source score can be referred to as sQuality.
  • sQuality may include different score levels, and different score levels represent different degrees of quality of the video source.
  • sQuality can be defined to include 1 to 5 points. The higher the score, the better the quality of the video source. Among them, 1 point means that the quality of the video source is the worst, and 5 points means that the quality of the video source is the best.
  • Video resolution coefficient a coefficient related to Video Resolution (VR).
  • VR is used to describe the resolution of video detail, commonly measured in pixels per inch (Pixels per Inch, PPI). VR is often used to describe the clarity of the video. The higher the VR, the more detail it can show. Common VRs are 360P, 480P, 720P, 1080P, 2K, 4K, etc.
  • Video Coding Algorithm Coefficient A coefficient related to the Video Coding (VC) algorithm.
  • the video coding algorithm may also be referred to as a video coding method, which refers to a method of converting a video format file into another video format file by using a specific compression technique.
  • Common video coding algorithms or video coding methods are H.264, H.265, VP9, and so on.
  • Video coding level coefficient a coefficient related to the video coding profile or level.
  • the video coding profile or level refers to the "profile" or "level" in a video coding standard, used to meet the needs of different applications by limiting the bit rate after video coding.
  • lower-level video has more limitations than higher-level video.
  • If a decoder conforms to a given profile and level, the decoder is required to be able to decode video encoded at that profile and level, or at any profile and level below it.
  • Common video coding profiles include the base profile, the main profile, and the high profile. The base profile can also be expressed as PROFILE_BASE, the main profile as PROFILE_MAIN, and the high profile as PROFILE_HIGH.
  • Audio/video mixed bit rate parameter a parameter related to the audio/video mixed bit rate.
  • The audio/video mixed bit rate refers to the data transmission rate of audio and/or video per unit time, expressed for example as a stream rate, code rate, or bit rate; the unit can be kilobits per second (kbps).
  • the bit rate is generally related to several factors: the sampling frequency of the original material, the number of bits used for sampling, the encoding of the data, the algorithm or degree of information compression, and the like.
  • Video buffer parameter a parameter related to the initial buffering and stalling of the video; for example, it can be represented by a video buffer score, which can be called sBuffering.
  • sBuffering can include different score levels, and different score levels represent different degrees of experience of the initial buffering and stalling of the video.
  • sBuffering can be defined to include 1 to 5 points. The higher the score, the better the experience. Among them, 1 point means the worst experience, and 5 points means the best experience.
  • Initial buffering parameter a parameter related to the initial buffering delay; for example, it may be represented by an initial buffering score, which may be referred to as sLoading.
  • sLoading can include different score levels, and different score levels represent different degrees of experience of the initial buffer.
  • sLoading can be defined to include 1 to 5 points. The higher the score, the better the experience. Among them, 1 point means the worst experience, and 5 points means the best experience.
  • Video stalling parameter a parameter related to the video stalling duration; for example, it can be represented by a video stalling score, which can be called sStalling.
  • sStalling can include different score levels, and different score levels represent different degrees of experience brought by video stalling.
  • sStalling can be defined to include 1 to 5 points. The higher the score, the better the experience. Among them, 1 point means the worst experience, and 5 points means the best experience.
  • the video stalling duration is the length of time during which the video is stalled while being played, and the unit can be seconds (s).
  • the initial buffering delay may refer to a delay from the initiation of the viewing request by the terminal to the appearance of the video picture, and the unit may be seconds.
  • Video playback duration the pure playback duration of the video that the user watches; it does not include the initial buffering delay and the video stalling duration, and the unit can be seconds.
  • Stalling duration ratio the ratio of the video stalling duration to the video playback duration, which can be obtained by dividing the video stalling duration by the video playback duration.
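As a minimal illustration of the definition above (ours, not part of the patent), the stalling duration ratio SR is simply the stalling duration divided by the pure video playback duration, both in seconds:

```python
def stalling_duration_ratio(stalling_duration_s, playback_duration_s):
    # SR = video stalling duration / video playback duration.
    # Note that the denominator is the pure playback duration, which
    # excludes both the initial buffering delay and the stalling
    # duration itself.
    return stalling_duration_s / playback_duration_s
```

For example, 6 s of stalling over 120 s of pure playback gives SR = 0.05.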
  • Before implementing the method shown in FIG. 3, some or all of the following video information may be acquired: initial buffering delay, video playback duration, stalling duration ratio, video resolution, audio/video mixed bit rate, video coding algorithm, video coding profile, etc.
  • the video resolution, the audio/video mixed code rate, the video encoding algorithm, or the video encoding level may be obtained by parsing the source of the video through a video player.
  • FIG. 4 shows a video playing process provided by an embodiment of the present invention.
  • At time t0 in FIG. 4, the terminal initiates the viewing request, for example when the user clicks a video play button on the terminal.
  • At time t1, the initial buffer download starts.
  • At time t2, video playback begins.
  • At time t5, video playback ends.
  • a video stall may occur between time t2 and time t5; for example, as shown in FIG. 4, at time t3 the stall starts, and at time t4 the stall ends.
  • Although stalling appears only once in the example of FIG. 4, in practice it may occur two or more times; the embodiment of the present invention is not limited in this respect.
  • the time from t0 to t1 is the preparation phase.
  • In the preparation phase, the terminal initiates the viewing request and the video source server returns the real address file of the video source, for example a Hypertext Transfer Protocol (HTTP) Live Streaming (HLS) m3u8 file, a Dynamic Adaptive Streaming over HTTP (DASH) Media Presentation Description (MPD), and so on.
  • HTTP Hypertext Transfer Protocol
  • HLS HTTP Live Streaming
  • DASH Dynamic Adaptive Streaming over HTTP
  • MPD Media Presentation Description
  • This stage may include downloads of advertisements, pictures, etc., as well as resolution of video anti-leech (hotlink protection) links.
  • the length of the preparation phase can be called the preparation delay.
  • RTT refers to round trip time
  • N refers to the multiple obtained by dividing the preparation delay by the RTT.
  • For example, N can be 8, and the RTT can take the average of the remaining four round-trip times after removing the maximum value among 5 measured round-trip times.
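The trimming rule above can be sketched as follows. This is our illustrative reading of the text, assuming 5 round-trip measurements in milliseconds and the example value N = 8:

```python
def estimate_rtt(round_trips_ms):
    # Drop the largest of the 5 measurements, then average the remaining 4,
    # as described in the text.
    trimmed = sorted(round_trips_ms)[:-1]
    return sum(trimmed) / len(trimmed)

def estimate_preparation_delay(round_trips_ms, n=8):
    # The preparation delay is approximated as N times the RTT (N = 8 here).
    return n * estimate_rtt(round_trips_ms)
```

With measurements [10, 12, 11, 50, 13] ms, the 50 ms outlier is discarded and the estimated RTT is 11.5 ms, giving an estimated preparation delay of 92 ms.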
  • The time from t1 to t2 is the initial buffer download phase, which can be understood as the phase from when the client starts requesting the video file from the real address of the video source until the video starts to play.
  • the length of the initial buffer download phase can be referred to as the initial buffer download time.
  • the polling period is 10 milliseconds (ms).
  • the timer is terminated and the duration of the timer is recorded.
  • the duration of the timer running is the initial buffer download time.
  • initial buffering delay = preparation delay + initial buffer download time.
  • the initial buffering delay is the length of time from time t0 to time t2.
  • the times t2 to t3 and t4 to t5 are video playback phases, i.e., phases of pure video playback that do not include video stalling.
  • the duration of the video playback phase can be referred to as the video playback duration.
  • the time from t3 to t4 is the stalling phase.
  • the stalling phase shown in FIG. 4 contains only one stall; in practice, the stalling phase may include one or at least two stalls.
  • the duration of a single stall can be determined by the following method: when the video buffer becomes empty, the stall starts and a timer starts timing; when the buffer is full again, the video picture resumes playing, the timer is stopped, and its running time is recorded.
  • the running time of this timer is the duration of a single stall.
  • The duration of the stalling phase can be called the stalling duration. When the stalling phase contains only one stall, the stalling duration is the duration of that stall; when the stalling phase contains at least two stalls, the stalling duration is the sum of the durations of the at least two stalls.
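The FIG. 4 timeline can be summarized with a short helper (ours, for illustration only): given the request time t0, the playback start t2, the playback end t5, and the stall intervals in between, it derives the quantities defined above.

```python
def decompose_timeline(t0, t2, t5, stall_intervals):
    """All times in seconds; stall_intervals is a list of (start, end)
    pairs lying between t2 and t5, e.g. [(t3, t4)] in FIG. 4."""
    initial_buffering_delay = t2 - t0                      # t0 -> t2
    stalling_duration = sum(end - start for start, end in stall_intervals)
    playback_duration = (t5 - t2) - stalling_duration      # pure playback only
    stalling_ratio = stalling_duration / playback_duration
    return (initial_buffering_delay, stalling_duration,
            playback_duration, stalling_ratio)
```

With t0 = 0 s, t2 = 3 s, t5 = 123 s, and a single stall from 50 s to 56 s, this yields an initial buffering delay of 3 s, a stalling duration of 6 s, and a pure playback duration of 114 s.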
  • The methods of determining the video buffer parameter, the initial buffering parameter, the video stalling parameter, the video quality evaluation result, and the video source quality parameter are further explained below.
  • the video buffer parameter can be determined based on the initial buffering parameter and the video stalling parameter.
  • the video buffer parameters satisfy:
  • Y1 represents the video buffer parameter
  • A represents the initial buffer parameter
  • B represents the video stalling parameter
  • α represents the initial buffering quality loss weight coefficient
  • β represents the stalling quality loss weight coefficient, where α and β are attenuated as the experience deteriorates.
  • the video buffer parameter satisfies:
  • Y2 represents the video buffer parameter
  • A represents the initial buffer parameter
  • B represents the video stalling parameter
  • α represents the initial buffering quality loss weight coefficient
  • β represents the stalling quality loss weight coefficient
  • α and β can be determined in the following manner:
  • v17 and v18 are the weighting factors of the initial buffering; v19 is a constant and can take a fixed value; v20 and v21 are the weighting factors of stalling.
  • v17 = 0.092
  • v20 = 0.108
  • v18, v19 and v21 may take values according to actual conditions.
  • α and β may respectively satisfy: 0.092 ≤ α ≤ 0.25, and 0.108 ≤ β ≤ 0.3.
  • v17, v18, v19, v20, and v21 may also take other values, which are not limited in the embodiment of the present invention.
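Since the text bounds the weight coefficients (0.092 ≤ α ≤ 0.25 and 0.108 ≤ β ≤ 0.3), a fitted or configured pair can be kept inside those ranges with a simple clamp. This helper is our illustration, not part of the patent:

```python
ALPHA_RANGE = (0.092, 0.25)  # bounds on the initial-buffering weight alpha
BETA_RANGE = (0.108, 0.3)    # bounds on the stalling weight beta

def clamp(value, lower, upper):
    return max(lower, min(upper, value))

def constrain_weights(alpha, beta):
    # Force alpha and beta into the ranges quoted in the text.
    return clamp(alpha, *ALPHA_RANGE), clamp(beta, *BETA_RANGE)
```

A pair already inside the ranges passes through unchanged; out-of-range values are snapped to the nearest bound.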
  • the initial buffering parameter can be determined based on the initial buffering delay and the length of video playback.
  • the initial buffering parameters are:
  • A represents the initial buffering parameter
  • C represents the original initial buffering score that does not take into account the influence of the playing duration
  • D represents the maximum score correction interval for different initial buffering delays
  • E represents the influence factor of different video playback durations.
  • C, D, and E can be determined as follows:
  • v6, v7, v8, and v9 are calculation factors of the initial buffering parameter and are constants, and IBL represents the initial buffering delay.
  • v10 and v11 are correction factors of the initial buffer parameter and are constant.
  • VPD indicates the duration of video playback.
  • v6, v7, v8, v9, v10, and v11 may be coefficients that are fitted according to subjective experimental results. For example, in subjective experiments, different users give their own subjective experience scores when watching a video (eg, may include scoring for initial buffering). Then, fitting can be performed based on these subjective experience scores to obtain v6, v7, v8, v9, v10, and v11. This method can comprehensively consider the user experience, and is beneficial for the operator or network provider to plan or expand the network according to the evaluation result, thereby improving the user experience.
  • v6, v7, v8, v9, v10, and v11 may also take other values, which are not limited in the embodiment of the present invention.
  • the video stalling parameter can be determined according to the stalling duration ratio and the video playback duration.
  • the video stalling parameter satisfies: B = G − v12 × (H − 1) × (5 − G), where
  • B is the video stalling parameter
  • G is the original video stalling score when the video playback duration is not considered
  • H is the influence factor of different video playback durations
  • v12 is a calculation factor of the video stalling parameter and is constant.
  • G and H can be determined as follows: G = max(0, IF(SR < 0.15, v13 − v14 × SR, v16 − v17 × SR)), and H = max(1, VPD/60), where
  • v13, v14, v15, and v16 are also calculation factors of the video stalling parameter and are constant, and SR represents the stalling duration ratio.
  • VPD indicates the video playback duration.
  • v12, v13, v14, v15, and v16 may be coefficients fitted from subjective experimental results.
  • For example, in subjective experiments, different users give their own subjective experience scores when watching a video (e.g., these may include a score for video stalling).
  • fitting can be performed based on these subjective experience scores to obtain v12, v13, v14, v15, and v16.
  • This method can comprehensively consider the user experience, and helps the operator or network provider plan or expand the network according to the evaluation result, improving the user experience.
  • v12, v13, v14, v15, and v16 may take other values or value ranges, which are not limited in the embodiments of the present invention.
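The stalling-parameter relations given in the description (B = G − v12 × (H − 1) × (5 − G), with G piecewise in SR at a 0.15 breakpoint and H = max(1, VPD/60)) can be sketched as follows. All coefficient values here are illustrative placeholders standing in for the fitted constants, and the coefficient names follow the formula as printed in the source:

```python
def stalling_parameter(sr, vpd, v12=0.5, v13=5.0, v14=20.0, v16=4.0, v17=10.0):
    """Video stalling parameter B from the stalling-duration ratio SR and the
    video playback duration VPD (seconds). Coefficient values here are
    illustrative placeholders, not the fitted constants from the patent.
    """
    # Raw stalling score, piecewise in SR with a breakpoint at 0.15:
    g = max(0.0, v13 - v14 * sr if sr < 0.15 else v16 - v17 * sr)
    # Influence factor of the playback duration (videos under 60 s get H = 1):
    h = max(1.0, vpd / 60.0)
    # Longer playback amplifies the penalty relative to the best score of 5:
    return g - v12 * (h - 1.0) * (5.0 - g)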
  • the video source quality parameters may be determined based on video resolution coefficients, video coding level coefficients, video coding algorithm coefficients, and audio and video mixed code rate parameters.
  • the video resolution coefficients include a first resolution factor and a second resolution factor.
  • the first resolution coefficient can be used to represent the best quality score of video at a specified resolution, and can be determined, for example, by the video resolution, video coding algorithm, video coding level, and video code rate;
  • the second resolution coefficient can be used to indicate the degree of encoding attenuation; for example, it may be affected by the video resolution, video coding algorithm, video coding level, and video code rate. The second resolution coefficient may also be referred to as an encoding attenuation coefficient.
  • the video source quality parameter satisfies an expression (reproduced only as an image in the original) in the following quantities:
  • X represents the video source quality parameter
  • v1 represents the first resolution coefficient
  • v2 represents the second resolution coefficient
  • v3 represents the video coding algorithm coefficient
  • v4 represents the video coding level coefficient
  • v5 is a constant
  • MBR represents the audio and video mixed code rate parameter.
  • v1, v2, v3, v4, and v5 may be determined according to the following manners, respectively:
  • v1 can be determined according to the video resolution.
  • Its value can be set according to the correspondence shown in Table 1.
  • The different video resolutions may include: 360P, 480P, 720P, 1080P, 2K (1440P), or 4K (2160P).
  • the value of v2 may be related to the video resolution: for example, 1800 ≤ v2 ≤ 3000, where a higher video resolution corresponds to a larger v2; alternatively, v2 may be calculated from the video resolution.
  • the value of v3 may be related to coding efficiency: for example, 0 < v3 ≤ 3, where higher coding efficiency corresponds to a larger v3.
  • v4 can take values within a certain range, for example, 0 < v4 ≤ 1.
  • v5 is a constant.
  • v1, v2, v3, v4, and v5 may take other values or value ranges, which are not limited in the embodiments of the present invention.
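A minimal sketch of how the coefficient lookup described above might be organized. The actual values in Table 1 are not reproduced in this excerpt, so every number below is hypothetical; only the constraints stated in the text (v2 within 1800–3000 and growing with resolution, v3 within (0, 3] and growing with coding efficiency) are honored:

```python
# Hypothetical coefficient tables -- the real values come from Table 1 and
# from fitting, which this excerpt does not reproduce.
V1_BY_RESOLUTION = {"360P": 3.0, "480P": 3.5, "720P": 4.0,
                    "1080P": 4.5, "1440P": 4.8, "2160P": 5.0}
V2_BY_RESOLUTION = {"360P": 1800, "480P": 2000, "720P": 2300,
                    "1080P": 2600, "1440P": 2800, "2160P": 3000}
V3_BY_CODEC = {"H.264": 1.0, "H.265": 2.0, "VP9": 2.0}  # higher = more efficient

def source_quality_coefficients(resolution, codec):
    """Look up (v1, v2, v3): v2 stays within the 1800-3000 range stated in
    the text and grows with resolution; v3 stays within (0, 3] and grows
    with coding efficiency."""
    return (V1_BY_RESOLUTION[resolution],
            V2_BY_RESOLUTION[resolution],
            V3_BY_CODEC[codec])
```

Since the expression for X itself appears only as an image in the original, computing X from these coefficients is left out of the sketch.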
  • the video quality evaluation result of the video signal can be determined according to the video source quality parameter and the video buffer parameter.
  • the video quality evaluation result satisfies:
  • F1 = (X − 1) × Y1 + 1, where F1 represents the video quality evaluation result, X represents the video source quality parameter, and Y1 represents the video buffering parameter; or
  • F2 = (X − 1) × max(0, Y2) + 1, where F2 represents the video quality evaluation result, X represents the video source quality parameter, and Y2 represents the video buffering parameter.
  • the solution of the embodiment of the present invention may be further implemented according to at least one of the above (1) to (5).
  • FIG. 5 is a schematic diagram of a Mobile MOS SDK according to an embodiment of the present invention.
  • The scheme shown in FIG. 5 is explained taking as an example that the video quality evaluation result is Mobile MOS, the video source quality parameter is sQuality, the video buffering parameter is sBuffering, the initial buffering parameter is sLoading, and the video stalling parameter is sStalling.
  • the Mobile MOS SDK includes an input module, a calculation module, and an output module.
  • the input module is configured to obtain the video resolution, audio and video mixed code rate, video coding algorithm, video coding level, initial buffering delay, stalling duration ratio, and video playback duration.
  • For the specific manners of obtaining these items, reference may be made to the schemes shown in FIG. 3 and FIG. 4, and details are not described here.
  • the calculation module is used to calculate sQuality, sLoading, sStalling, sBuffering and Mobile MOS.
  • the calculation module can calculate sQuality, sLoading, and sStalling first; then calculate sBuffering based on sLoading and sStalling; finally calculate Mobile MOS based on sQuality and sBuffering.
  • the output module is used to output Mobile MOS.
  • the output module is further configured to output at least one of sQuality, sLoading, sStalling, and sBuffering.
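The input → calculation → output flow of the Mobile MOS SDK described above can be sketched as a small pipeline. The score functions are passed in as stubs, since the real sQuality, sLoading, and sStalling formulas depend on the fitted coefficients; only the calculation order (component scores first, then sBuffering, then Mobile MOS) is taken from the description:

```python
# Skeleton of the input -> calculation -> output flow of the Mobile MOS SDK.
# The five callables are stubs standing in for the patent's fitted formulas.
def evaluate(video_info, score_quality, score_loading, score_stalling,
             combine_buffering, combine_mos):
    """Run the SDK calculation order: sQuality, sLoading, and sStalling
    first, then sBuffering from sLoading and sStalling, and finally
    Mobile MOS from sQuality and sBuffering."""
    s_quality = score_quality(video_info)
    s_loading = score_loading(video_info)
    s_stalling = score_stalling(video_info)
    s_buffering = combine_buffering(s_loading, s_stalling)
    mos = combine_mos(s_quality, s_buffering)
    return {"sQuality": s_quality, "sLoading": s_loading,
            "sStalling": s_stalling, "sBuffering": s_buffering,
            "MobileMOS": mos}
```

Returning the intermediate scores alongside Mobile MOS mirrors the output module's option to expose sQuality, sLoading, sStalling, and sBuffering.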
  • the Mobile MOS SDK shown in FIG. 5 described above can be applied to the terminal, base station or server shown in FIG. 2.
  • the method for evaluating the video quality by the terminal, the base station or the server will be described below based on the schemes shown in FIGS. 3 and 4.
  • FIG. 6 is a diagram showing another method for evaluating video quality according to an embodiment of the present invention, in which a video quality evaluation result is taken as an example of Mobile MOS.
  • the above evaluation method may include parts 601 to 604 from the beginning to the end.
  • part 601 can be executed by the terminal;
  • parts 602 to 604 can be executed by the terminal or the server.
  • The following takes the terminal executing parts 601 to 604 as an example; the implementation in which parts 602 to 604 are executed by the server is similar and is not described here.
  • In part 601, a viewing request is initiated.
  • the user clicks on the play button of the video APP on the terminal to cause the terminal to initiate a viewing request.
  • the above video information may include some or all of the following: audio and video mixed code rate, video resolution, video coding algorithm, video coding level, initial buffering delay, stalling duration ratio, or video playback duration.
  • this video information can be collected periodically during video playback. For example, the entire video can be sliced, and the video information in each period can then be acquired based on the slices.
  • Alternatively, the video information can be collected after video playback ends.
  • the server may receive the video information from the terminal after the terminal acquires the video information.
  • sQuality, sLoading, and sStalling can be calculated first; then sBuffering is calculated based on sLoading and sStalling; and Mobile MOS is calculated based on sQuality and sBuffering.
  • For the specific calculation of sQuality, sLoading, sStalling, sBuffering, and Mobile MOS, reference may be made, respectively, to the video source quality parameter, initial buffering parameter, video stalling parameter, video buffering parameter, and video quality evaluation result in the method shown in FIG. 3, and to the corresponding examples described in (1) to (5) above; details are not described here.
  • the above evaluation results include Mobile MOS.
  • the foregoing evaluation result may further include at least one of sQuality, sLoading, sStalling, or sBuffering.
  • sections 603 and 604 can be implemented by calling the Mobile MOS SDK shown in FIG. 5.
  • the calculation module executes part 603, and the output module executes part 604.
  • the network operator or the video APP provider can locate areas with lower Mobile MOS scores according to the output evaluation results, so as to improve the provided video quality in a targeted manner. Further, the network operator or the video APP provider can locate problems according to sQuality, sLoading, sStalling, or sBuffering and then optimize accordingly, for example, by providing a clearer source, increasing network bandwidth, or optimizing the network architecture.
  • FIG. 7 shows still another method for evaluating video quality according to an embodiment of the present invention, in which the video quality evaluation result is Mobile MOS as an example.
  • the method shown in Figure 7 can be performed by a base station.
  • the above evaluation method may include parts 701 to 706 from beginning to end, where one of parts 702 and 703 is selected for execution.
  • In part 701, the base station determines whether the video stream has ended.
  • In part 702, the base station performs complete-video dotting.
  • Here, "dotting" can be understood as recording video parameters or video information.
  • The video information recorded by the base station in complete-video dotting may include the video OTT (over the top) name, audio and video mixed code rate, initial buffering delay, stalling duration ratio, and video playback duration.
  • OTT refers to providing various application services to users through the Internet.
  • the video OTT name can be obtained by domain name resolution based on the Domain Name System (DNS).
  • the audio and video mixed code rate may be obtained through a rate identification scheme for encrypted scenarios (for example, a Service Classification (SC) scheme), a rate identification scheme for unencrypted scenarios (for example, a Deep Packet Inspection (DPI) scheme), or by other means.
  • the entire video can be periodically sliced during video playback.
  • The base station can combine the slices of each period to perform complete-video dotting.
  • In part 703, the base station performs periodic dotting of the video stream.
  • Part 703 is similar to part 702. The difference is that the audio and video mixed code rate, initial buffering delay, stalling duration, and video playback duration in the video information of the periodic dotting are, respectively, the audio and video mixed code rate, initial buffering delay, stalling duration, and video playback duration within the current slice period.
  • In part 704, the base station performs OTT code rate mapping to complete the video information.
  • The base station can obtain the video resolution, video coding algorithm, and video coding level through OTT code rate mapping.
  • The base station can perform OTT code rate mapping based on the above video OTT name.
  • the base station can then obtain all of the following video information: audio and video mixed code rate, video resolution, video coding algorithm, video coding level, initial buffering delay, stalling duration ratio, and video playback duration.
  • In part 705, the base station calculates Mobile MOS.
  • In part 706, the base station outputs the evaluation result.
  • the above parts 705 and 706 are similar to the parts of 603 and 604 in FIG. 6, respectively, and the detailed descriptions of parts 603 and 604 can be referred to, and are not described herein.
  • parts 705 and 706 can be implemented by calling the Mobile MOS SDK shown in FIG. 5.
  • In this case, the calculation module executes part 705, and the output module executes part 706.
  • the video quality indicators at the network level, the cell level or the grid level can be presented using the output evaluation results to evaluate and optimize the video quality in real time.
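As an illustration of cell-level presentation (an assumption about how the output might be grouped, not a procedure from the patent), session-level Mobile MOS scores could be aggregated per cell so that low-scoring cells can be located for targeted optimization:

```python
from collections import defaultdict

def cell_level_mos(session_scores):
    """Average session-level Mobile MOS per cell so that low-scoring cells
    can be located for targeted optimization. `session_scores` is an
    iterable of (cell_id, mos) pairs."""
    totals = defaultdict(lambda: [0.0, 0])
    for cell_id, mos in session_scores:
        totals[cell_id][0] += mos   # running sum of scores per cell
        totals[cell_id][1] += 1     # session count per cell
    return {cell: s / n for cell, (s, n) in totals.items()}
```

The same grouping applied to grid or network identifiers would yield the grid-level and network-level views mentioned above.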
  • a Radio Resource Management (RRM) algorithm or solution for a related video may be scheduled or optimized according to the evaluation result to achieve an ideal video quality.
  • An embodiment of the present invention further provides an apparatus for evaluating video quality.
  • the evaluation device includes corresponding hardware structures and/or software modules for performing the respective functions in order to implement the above functions.
  • the embodiments of the present invention can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the technical solutions of the embodiments of the present invention.
  • in the embodiments of the present invention, the functional units of the above evaluation apparatus may be divided according to the above method examples.
  • each functional unit may be divided according to each function, or two or more functions may be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 8 shows a schematic block diagram of a video quality evaluation apparatus provided in an embodiment of the present invention.
  • the evaluation device 800 includes a processing unit 802.
  • the processing unit 802 is configured to control and manage the actions of the evaluation device 800; for example, the processing unit 802 is configured to support the evaluation device 800 in performing the methods illustrated in FIG. 3, FIG. 6, and FIG. 7, and/or other processes of the techniques described herein.
  • the evaluation device may also include a communication unit 803 for supporting communication between the evaluation device 800 and other network elements.
  • The evaluation device 800 may also include a storage unit 801 for storing program codes and data of the evaluation device 800.
  • the processing unit 802 can be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication unit 803 can be a communication interface, a transceiver or a transceiver circuit, etc., wherein the communication interface is a collective name.
  • the communication interface can include one or at least two interfaces.
  • the storage unit 801 can be a memory.
  • the above evaluation device may be a terminal, a base station, or a server; or may be a functional module in a terminal, base station, or server, for example, the Mobile MOS SDK shown in FIG. 5 above, which is not limited in the embodiments of the present invention.
  • the embodiment of the present invention will be further described with reference to FIGS. 9-11, taking the evaluation device as a terminal, a base station or a server as an example.
  • FIG. 9 shows a simplified schematic diagram of one possible design structure of a terminal provided in an embodiment of the present invention.
  • the terminal 900 includes a transmitter 901, a receiver 902, and a processor 903.
  • the processor 903 may also be a controller, and is represented as "controller/processor 903" in FIG.
  • the terminal 900 may further include a modem processor 905, wherein the modem processor 905 may include an encoder 907, a modulator 907, a decoder 908, and a demodulator 909.
  • the transmitter 901 conditions (e.g., analog converts, filters, amplifies, and upconverts) the output samples and generates an uplink signal, which is transmitted via an antenna to the base station described in the above embodiments.
  • the antenna receives the downlink signal transmitted by the base station in the above embodiment.
  • Receiver 902 conditions (eg, filters, amplifies, downconverts, digitizes, etc.) the signals received from the antenna and provides input samples.
  • encoder 907 receives the traffic data and signaling messages to be transmitted on the uplink and processes (e.g., formats, codes, and interleaves) the traffic data and signaling messages.
  • Modulator 907 further processes (e.g., symbol maps and modulates) the encoded traffic data and signaling messages and provides output samples.
  • Demodulator 909 processes (e.g., demodulates) the input samples and provides symbol estimates.
  • Decoder 908 processes (e.g., deinterleaves and decodes) the symbol estimates and provides the decoded data and signaling messages sent to the terminal 900.
  • Encoder 907, modulator 907, demodulator 909, and decoder 908 may be implemented by an integrated modem processor 905. These units perform processing according to the radio access technology employed by the radio access network (e.g., the access technologies of LTE and other evolved systems). It should be noted that when the terminal 900 does not include the modem processor 905, the above functions of the modem processor 905 can also be performed by the processor 903.
  • the processor 903 performs control management on the actions of the terminal 900 for performing the processing performed by the terminal 900 in the above embodiment of the present invention.
  • the processor 903 is further configured to perform the processes related to the terminal in the method shown in FIG. 3 and FIG. 6 and/or other processes of the technical solutions described in the embodiments of the present invention.
  • the terminal 900 may further include a memory 904 for storing program codes and data for the terminal 900.
  • FIG. 10 is a schematic diagram showing a possible structure of a base station according to an embodiment of the present invention.
  • Base station 1000 includes a processor 1002 and a communication interface 1004.
  • the processor 1002 may also be a controller, and is represented as "controller/processor 1002" in FIG.
  • the communication interface 1004 is configured to support the base station to communicate with other network elements (eg, other base stations, core network devices, etc.).
  • the base station 1000 may further include a transmitter/receiver 1001.
  • the transmitter/receiver 1001 is configured to support transmission and reception of information between the base station and the terminal in the above embodiment, and to support radio communication between the terminal and other terminals.
  • the processor 1002 performs various functions for communicating with the terminal.
  • On the uplink, an uplink signal from the terminal is received via an antenna, demodulated by the receiver 1001 (e.g., the high-frequency signal is demodulated into a baseband signal), and further processed by the processor 1002 to recover the traffic data and signaling messages sent by the terminal.
  • On the downlink, traffic data and signaling messages are processed by the processor 1002 and modulated by the transmitter 1001 (e.g., the baseband signal is modulated into a high-frequency signal) to generate a downlink signal, which is transmitted to the terminal via the antenna.
  • the above demodulation or modulation function may also be completed by the processor 1002.
  • the processor 1002 is further configured to perform the processes related to the base station in the methods shown in FIG. 3 and FIG. 7 and/or other processes of the technical solutions described in the embodiments of the present invention.
  • the base station 1000 may further include a memory 1003 for storing program codes and data of the base station 1000.
  • FIG. 10 only shows a simplified design of base station 1000.
  • the base station 1000 can include any number of transmitters, receivers, processors, controllers, memories, communication units, etc., and all of the base stations that can implement the embodiments of the present invention are within the protection scope of the embodiments of the present invention.
  • FIG. 11 is a schematic structural diagram of a server provided by an embodiment of the present invention.
  • the server 1100 includes a processor 1102, a communication interface 1103, and a memory 1101.
  • the server 1100 may further include a bus 1104.
  • the communication interface 1103, the processor 1102, and the memory 1101 may be connected to each other through a bus 1104.
  • the bus 1104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 1104 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 11, but it does not mean that there is only one bus or one type of bus.
  • the processor 1102 controls and manages the actions of the server 1100 for performing the processing performed by the server 1100 in the above-described embodiment of the present invention.
  • the processor 1102 is configured to perform the processes related to the terminal in the method shown in FIG. 3 and FIG. 6 and/or other processes of the technical solutions described in the embodiments of the present invention.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in an evaluation device for video quality. Of course, the processor and the storage medium can also exist as discrete components in the evaluation device of the video quality.
  • the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions can be stored on a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

In a video quality evaluation method, a video source quality parameter and a video buffering parameter of a video signal are determined, the video buffering parameter being related to the video playback duration of the video signal (301); and a video quality evaluation result of the video signal is determined according to the video source quality parameter and the video buffering parameter (302). When determining the video buffering parameter, an initial buffering parameter and a video stalling parameter of the video signal may be determined first, and the video buffering parameter is then determined according to the initial buffering parameter and the video stalling parameter. The provided solution can determine the above video quality evaluation result so as to evaluate video quality. This evaluation approach can use a unified indicator to evaluate video quality, which facilitates product design and experience presentation.

Description

Video quality evaluation method, evaluation apparatus, and system

Technical Field

The present invention relates to the field of communications technologies, and in particular, to a video quality evaluation method, apparatus, and system.

Background Art

With the development of mobile broadband networks, video services are developing faster and faster. At present, the industry evaluates video quality using the traditional Key Performance Indicator (KPI) approach. However, the traditional KPI approach needs to use many indicators at the same time to evaluate video quality, and, because application characteristics differ, the definitions of the indicators for different applications differ from one another. This evaluation approach, which uses many indicators whose definitions differ, brings considerable trouble to product design and experience presentation.
Summary of the Invention

Embodiments of the present invention describe a video quality evaluation method, apparatus, and system, to reduce the trouble that existing evaluation approaches bring to product design and experience presentation.

In one aspect, an embodiment of the present invention provides a video quality evaluation method. The method includes: determining a video source quality parameter and a video buffering parameter of a video signal, where the video buffering parameter is related to the video playback duration of the video signal; and determining a video quality evaluation result of the video signal according to the video source quality parameter and the video buffering parameter. With the solution provided by the embodiments of the present invention, the video quality evaluation result of a video signal can be determined by determining the video source quality parameter and the video buffering parameter of the video signal, so that video quality can be evaluated based on the video quality evaluation result. This evaluation approach can use a unified indicator to evaluate video quality, which facilitates product design and experience presentation.

Here, the above video source quality parameter is used to indicate the quality of the video source; the above video buffering parameter is a parameter related to the initial buffering and stalling of the video; and the above video quality evaluation result is a unified indicator for evaluating video quality.

In a possible design, the video buffering parameter of the video signal can be determined as follows: determining an initial buffering parameter and a video stalling parameter of the video signal; and determining the video buffering parameter according to the initial buffering parameter and the video stalling parameter.

Here, the above initial buffering parameter is a parameter related to the initial buffering delay, where the initial buffering delay may refer to the delay from initiating a viewing request until the video picture appears, and its unit may be seconds; the above video stalling parameter is a parameter related to the video stalling duration, where the video stalling duration refers to the duration of stalls that occur during video playback, and its unit may be seconds.
In a possible implementation, the video buffering parameter satisfies:

Y1 = max(0, (1 − α × (5 − A) − β × (5 − B))), where Y1 is the video buffering parameter, A is the initial buffering parameter, B is the video stalling parameter, α is the quality-loss weight coefficient of initial buffering, and β is the quality-loss weight coefficient of stalling. Here, α and β decay as the experience deteriorates.

In this case, the video quality evaluation result satisfies:

F1 = (X − 1) × Y1 + 1, where F1 is the video quality evaluation result, X is the video source quality parameter, and Y1 is the video buffering parameter.

In another possible implementation, the video buffering parameter satisfies:

Y2 = 1 − α × (5 − A) − β × (5 − B), where Y2 is the video buffering parameter, A is the initial buffering parameter, B is the video stalling parameter, α is the quality-loss weight coefficient of initial buffering, and β is the quality-loss weight coefficient of stalling. Here, α and β decay as the experience deteriorates.

In this case, the video quality evaluation result satisfies:

F2 = (X − 1) × max(0, Y2) + 1, where F2 is the video quality evaluation result, X is the video source quality parameter, and Y2 is the video buffering parameter.

In the above two possible implementations, α and β may be determined as follows:

α = v17 + v18 × e^(v19 × A), β = v20 + v21 × e^(v19 × B),

where v17 and v18 are weight factors of initial buffering; v19 is a constant and may take a fixed value; and v20 and v21 are weight factors of stalling.
In a possible design, the initial buffering parameter can be determined as follows: determining the initial buffering delay and the video playback duration of the video signal; and determining the initial buffering parameter according to the initial buffering delay and the video playback duration. Here, the video playback duration refers to the pure playback duration during which the user watches the video, excluding the initial buffering delay and the video stalling duration; its unit may be seconds.

In a possible implementation, the initial buffering parameter satisfies:

A = max(1, min(5, (C + D × E))), where

D = max(0, v10 − v11 × C),

[the expressions for C and E appear in the original only as an image: PCTCN2016101227-appb-000002]

where A is the initial buffering parameter, C is the original initial buffering score without considering the influence of the playback duration, D is the maximum score-correction range for different initial buffering delays, E is the influence factor of different video playback durations, v6, v7, v8, and v9 are calculation factors of the initial buffering parameter and are constants, v10 and v11 are correction factors of the initial buffering parameter and are constants, IBL is the initial buffering delay, and VPD is the video playback duration.
In a possible design, the video stalling parameter can be determined as follows: determining the stalling duration ratio and the video playback duration of the video signal; and determining the video stalling parameter according to the stalling duration ratio and the video playback duration. Here, the stalling duration ratio is the ratio of the video stalling duration to the video playback duration, and can be obtained by dividing the video stalling duration by the video playback duration.

In a possible implementation, the video stalling parameter satisfies:

B = G − v12 × (H − 1) × (5 − G), where

G = max(0, IF(SR < 0.15, v13 − v14 × SR, v16 − v17 × SR)),

H = max(1, VPD/60),

where B is the video stalling parameter, G is the original video stalling score without considering the video playback duration, H is the influence factor of different video playback durations, v12, v13, v14, v15, and v16 are calculation factors of the video stalling parameter and are constants, SR is the stalling duration ratio, and VPD is the video playback duration.
In a possible design, the video source quality parameter of the video signal can be determined as follows: determining a video resolution coefficient, a video coding algorithm coefficient, a video coding level coefficient, and an audio and video mixed code rate parameter of the video signal; and determining the video source quality parameter according to the video resolution coefficient, the video coding algorithm coefficient, the video coding level coefficient, and the audio and video mixed code rate parameter.

Here, the above video resolution coefficient is a coefficient related to the video resolution; common video resolutions include 360P, 480P, 720P, 1080P, 2K, 4K, and so on. The above video coding algorithm coefficient is a coefficient related to the video coding algorithm; the video coding algorithm may also be called the video coding scheme, and common video coding algorithms include H.264, H.265, VP9, and so on. The above video coding level coefficient is a coefficient related to the video coding level or video coding tier; common video coding levels or tiers include base profile, main profile, high profile, and so on. The above audio and video mixed code rate parameter is a parameter related to the audio and video mixed code rate, where the audio and video mixed code rate refers to the data transmission rate of audio and/or video per unit time, and may be expressed, for example, as a stream rate, code rate, or bitrate.

In a possible implementation, the video resolution coefficient includes a first resolution coefficient and a second resolution coefficient, where the first resolution coefficient can be used to represent the best quality score of video at a specified resolution, and is determined by the video resolution, video coding algorithm, video coding level, and video code rate; the second resolution coefficient, which may also be called the coding attenuation coefficient, is used to represent the degree of coding attenuation, and is affected by the video resolution, video coding algorithm, video coding level, and video code rate. In this implementation, the video source quality parameter satisfies:

[the expression for the video source quality parameter appears in the original only as an image: PCTCN2016101227-appb-000003]

where X is the video source quality parameter, v1 is the first resolution coefficient, v2 is the second resolution coefficient, v3 is the video coding algorithm coefficient, v4 is the video coding level coefficient, v5 is a constant, and MBR is the audio and video mixed code rate parameter.
In a possible design, before the method provided by the present invention is implemented, some or all of the following video information may first be obtained: initial buffering delay, video playback duration, stalling duration ratio, video resolution, audio and video mixed code rate, video coding algorithm, video coding level, and so on.

The above method examples may be executed by a terminal, a base station, or a server, or by a functional module in a terminal, base station, or server.

In another aspect, an embodiment of the present invention provides a video quality evaluation apparatus that has functions for implementing the above method examples. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.

In a possible design, the evaluation apparatus includes a processor configured to support the evaluation apparatus in performing the corresponding functions of the above methods. Further, the evaluation apparatus may also include a transmitter and a receiver for supporting communication between the evaluation apparatus and other devices. Further, the evaluation apparatus may also include a memory coupled to the processor, which stores the program instructions and data necessary for the evaluation apparatus.

In a possible design, the above evaluation apparatus may be a terminal, a base station, or a server; or a functional module in a terminal, base station, or server, for example, a Mobile MOS SDK.

In still another aspect, an embodiment of the present invention provides a communication system that includes the video quality evaluation apparatus described in the above aspects.

In yet another aspect, an embodiment of the present invention provides a computer storage medium for storing the computer software instructions used by the above video quality evaluation apparatus, which contains programs designed to execute the above aspects.

Compared with the prior art, the solution of the embodiments of the present invention can determine the video quality evaluation result of a video signal by determining the video source quality parameter and the video buffering parameter of the video signal, so that video quality can be evaluated based on the video quality evaluation result. This evaluation approach can use a unified indicator to evaluate video quality, which facilitates product design and experience presentation.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings described below are some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a possible application scenario according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a possible network architecture according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a video quality evaluation method according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of a video playback process according to an embodiment of the present invention;

FIG. 5 is a schematic diagram of a Mobile MOS SDK according to an embodiment of the present invention;

FIG. 6 is a schematic flowchart of another video quality evaluation method according to an embodiment of the present invention;

FIG. 7 is a schematic flowchart of still another video quality evaluation method according to an embodiment of the present invention;

FIG. 8 is a schematic block diagram of a video quality evaluation apparatus according to an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of a base station according to an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings in the embodiments of the present invention.

The network architectures and service scenarios described in the embodiments of the present invention are intended to illustrate the technical solutions of the embodiments of the present invention more clearly, and do not constitute a limitation on the technical solutions provided by the embodiments of the present invention. A person of ordinary skill in the art will understand that, as network architectures evolve and new service scenarios emerge, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems.

Some possible application scenarios and network architectures applicable to the embodiments of the present invention are first introduced below with reference to FIG. 1 and FIG. 2.

FIG. 1 shows an application scenario to which an embodiment of the present invention may be applicable. As shown in FIG. 1, a terminal accesses an operator Internet Protocol (IP) service network, for example, an IP Multimedia Subsystem (IMS) network or a Packet Switched Streaming Service (PSS) network, through a radio access network (Radio Access Network, RAN) and a core network (Core Network, CN). The technical solutions described in the present invention may be applicable to a Long Term Evolution (LTE) system, or to other wireless communication systems using various radio access technologies, for example, systems using access technologies such as Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiple Access (OFDMA), or Single Carrier Frequency Division Multiple Access (SC-FDMA). In addition, they may also be applicable to subsequent evolved systems of the LTE system, such as the fifth-generation (5th Generation, 5G) system. For clarity, only the LTE system is used as an example here. In the LTE system, the Evolved Universal Terrestrial Radio Access Network (E-UTRAN) serves as the radio access network, and the Evolved Packet Core (EPC) serves as the core network. A terminal accesses the IMS network through the E-UTRAN and the EPC. It should be noted that, when the solutions of the embodiments of the present invention are applied to a 5G system or other systems that may emerge in the future, the names of the base station and the terminal may change, but this does not affect the implementation of the solutions of the embodiments of the present invention.

Based on the above application scenario, FIG. 2 shows a possible network architecture provided by an embodiment of the present invention. As shown in FIG. 2, the network architecture includes a terminal, a base station, core network equipment, the Internet, and a server. A video stream is transmitted between the terminal and the server through the base station, the core network equipment, and the Internet. In FIG. 2, the terminal, the base station, the server, and so on can evaluate the quality of the video stream. For example, a video application (Application, APP) is installed in the terminal; when a user watches a video using the video APP on the terminal, the terminal can evaluate the quality of the video stream through the video APP. As another example, while the user watches a video, the base station can evaluate the quality of the video stream. As yet another example, while the user watches a video, the server of the video APP provider can evaluate the quality of the video stream. Evaluating the quality of the video stream above can be understood as evaluating the quality of the video signal.

In the embodiments of the present invention, the terms "network" and "system" are often used interchangeably, but a person skilled in the art can understand their meanings. The terminals involved in the embodiments of the present invention may include various handheld devices, vehicle-mounted devices, wearable devices, and computing devices that have wireless communication functions, other processing devices connected to a wireless modem, and various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices (terminal device), and so on. For ease of description, the devices mentioned above are collectively referred to as terminals. The base station (Base Station, BS) involved in the embodiments of the present invention is an apparatus deployed in the radio access network to provide terminals with wireless communication functions. The base station may include various forms of macro base stations, micro base stations, relay stations, access points, and so on. In systems using different radio access technologies, the names of devices with base station functions may differ; for example, in the Long Term Evolution (LTE) system such a device is called an evolved NodeB (evolved NodeB, eNB or eNodeB), and in the 3G communication system it is called a Node B (Node B), and so on. For ease of description, in the embodiments of the present invention, the above apparatuses providing terminals with wireless communication functions are collectively referred to as base stations or BSs.

The embodiments of the present invention are further described in detail below based on the common aspects of the embodiments of the present invention described above.

In traditional evaluation approaches, many indicators are used, and the definitions of the indicators for different applications differ from one another, which brings considerable trouble to product design and experience presentation.

In view of this, an embodiment of the present invention provides a video quality evaluation method, and an apparatus and system based on this method. The method includes: determining a video source quality parameter and a video buffering parameter of a video signal, where the video buffering parameter is related to the video playback duration of the video signal; and determining a video quality evaluation result of the video signal according to the video source quality parameter and the video buffering parameter. With the solution provided by the embodiments of the present invention, the video quality evaluation result of a video signal can be determined by determining the video source quality parameter and the video buffering parameter of the video signal, so that video quality can be evaluated based on the video quality evaluation result. This evaluation approach can use a unified indicator to evaluate video quality, which facilitates product design and experience presentation.
The solution provided by the embodiments of the present invention is described below with reference to FIG. 3.

In part 301, a video source quality parameter and a video buffering parameter of a video signal are determined, where the video buffering parameter is related to the video playback duration of the video signal.

In an example, the video source quality parameter can be determined as follows: first determining a video resolution coefficient, a video coding algorithm coefficient, a video coding level coefficient, and an audio and video mixed code rate parameter of the video signal; and then determining the above video source quality parameter according to the video resolution coefficient, the video coding level coefficient, the video coding algorithm coefficient, and the audio and video mixed code rate parameter.

In an example, the video buffering parameter can be determined as follows: first determining an initial buffering parameter and a video stalling parameter of the video signal; and then determining the above video buffering parameter according to the initial buffering parameter and the video stalling parameter.

In an example, the initial buffering parameter can be determined as follows: first determining the initial buffering delay and the video playback duration of the video signal; and then determining the above initial buffering parameter according to the initial buffering delay and the video playback duration.

In an example, the video stalling parameter can be determined as follows: first determining the stalling duration ratio and the video playback duration of the video signal; and then determining the video stalling parameter according to the stalling duration ratio and the video playback duration.

In part 302, a video quality evaluation result of the video signal is determined according to the video source quality parameter and the video buffering parameter.

The parameters, coefficients, and other concepts involved in the embodiments of the present invention are briefly introduced below:

Video quality evaluation result: a unified indicator for evaluating video quality, which can be expressed, for example, as a Video Mean Opinion Score (Video MOS) or a Mobile Mean Opinion Score (Mobile MOS). Optionally, Video MOS or Mobile MOS may include different score levels, with different score levels representing different degrees of video quality. For example, Video MOS may be defined to include scores from 1 to 5, with a higher score indicating better video quality, where 1 indicates the worst video quality and 5 indicates the best.
Video source quality parameter: used to indicate the quality of the video source, and can be expressed, for example, as a video source quality score, where the video source quality score may be called sQuality. Optionally, sQuality may include different score levels, with different score levels representing different degrees of quality of the video source. For example, sQuality may be defined to include scores from 1 to 5, with a higher score indicating better video source quality, where 1 indicates the worst video source quality and 5 indicates the best.

Video resolution coefficient: a coefficient related to the video resolution (Video Resolution, VR). VR describes the ability of a video to resolve detail, and is commonly measured in pixels per inch (Pixels per Inch, PPI). VR is usually used to describe the clarity of a video; the higher the VR, the more detail can be shown. Common VRs include 360P, 480P, 720P, 1080P, 2K, 4K, and so on.

Video coding algorithm coefficient: a coefficient related to the video coding (Video Coding, VC) algorithm. The video coding algorithm, which may also be called the video coding scheme, refers to the way a file in one video format is converted into a file in another video format through a specific compression technique. Common video coding algorithms or schemes include H.264, H.265, VP9, and so on.

Video coding level coefficient: a coefficient related to the video coding level or video coding tier. The video coding level or tier refers to the "level" or "tier" defined in video coding standards to meet the needs of different applications, so as to restrict the bitrate after video coding. In general, a lower-tier video is subject to more restrictions than a higher-tier video, and a lower-level video is subject to more restrictions than a higher-level video. Usually, a decoder conforming to a given tier and level means that the decoder is required to be able to decode video encoded according to that tier and level, or according to tiers and levels below it. Common video coding levels or tiers include base profile, main profile, high profile, and so on. Here, base profile may also be denoted PROFILE_BASE, main profile may also be denoted PROFILE_MAIN, and high profile may also be denoted PROFILE_HIGH.

Audio and video mixed code rate parameter: a parameter related to the audio and video mixed code rate. The audio and video mixed code rate refers to the data transmission rate of audio and/or video per unit time, expressed, for example, as a stream rate, code rate, or bitrate, and its unit may be kilobits per second (kilo bit per second, kbps). The bitrate is generally related to the following factors: the sampling frequency of the original material, the number of bits used in sampling, the way the data is encoded, and the algorithm or degree of information compression.
视频缓冲参数:与视频的初始缓冲和卡顿有关的参数,例如可以用视频缓冲得分来表示,其中,视频缓冲得分可以称为sBuffering。可选的,sBuffering可以包括不同的分数等级,不同的分数等级代表视频的初始缓冲和卡顿带来的体验好坏的不同程度。例如,sBuffering可定义包括1~5分,分数越高表示体验越好,其中,1分表示体验最差,5分表示体验最好。
初始缓冲参数:与初始缓冲时延有关的参数,例如可用初始缓冲得分来表示,其中,初始缓冲得分可以称为sLoading。可选的,sLoading可以包括不同的分数等级,不同的分数等级代表初始缓冲带来的体验好坏的不同程度。例如,sLoading可定义包括1~5分,分数越高表示体验越好,其中,1分表示体验最差,5分表示体验最好。
视频卡顿参数:与视频卡顿时长有关的参数,例如可用视频卡顿得分来表示,其中,视频卡顿得分可以称为sStalling。可选的,sStalling可以包括不同的分数等级,不同的分数等级代表视频卡顿带来的体验好坏的不同程度。例如,sStalling可定义包括1~5分,分数越高表示体验越好,其中,1分表示体验最差,5分表示体验最好。视频卡顿时长是指视频播放过程中,出现卡顿的时长,其单位可以为秒(s)。
初始缓冲时延:初始缓冲时延可以指从终端发起观看请求到出现视频画面的时延,其单位可以为秒。
视频播放时长:用户观看视频的纯播放时长,不包含初始缓冲时延及视频卡顿时长,其单位可以为秒。
卡顿时长占比:视频卡顿时长与视频播放时长的比值,可由视频卡顿时长除以视频播放时长得到。
在实施上述图3所示的方法之前,可以先获取以下视频信息的部分或全部:初始缓冲时延、视频播放时长、卡顿时长占比、视频分辨率、音视频混合码率、视频编码算法、视频编码等级等。其中,视频分辨率、音视频混合码率、视频编码算法或视频编码等级可以通过视频播放器对片源进行解析得到。下面结合图4对初始缓冲时延、视频播放时长及卡顿时长占比的获取方式进行介绍。
图4示出了本发明实施例提供的一种视频播放过程。如图4所示,在t0时刻,终端发起观看请求,例如可以是终端上的视频播放按钮被用户所点击。在t1时刻,初始缓冲下载开始。在t2时刻,视频播放开始。在t5时刻,视频播放结束。其中,在t2时刻到t5时刻之间可能出现视频卡顿,例如图4所示的,在t3时刻,卡顿开始,在t4时刻,卡顿结束。可以理解的是,图4中的示例虽仅出现一次卡顿,但实际中,可能出现两次或多次卡顿,本发明实施例并不限定。
t0时刻~t1时刻为准备阶段,可以理解为终端发起观看请求到视频片源服务器返回存放视频片源真实地址文件(例如超文本传输协议(Hypertext Transfer Protocol,HTTP)直播流(HTTP Live Streaming,HLS)的m3u8文件,HTTP动态自适应流(Dynamic Adaptive Streaming over HTTP,DASH)的媒体描述文件(Media Presentation Description,MPD)等)的阶段。此阶段可能包含广告、图片等下载以及视频防盗链的解析等。实际中由于不同的商业客户端的交互设计不同,其交互过程可能存在一定差异。准备阶段的时长可以称为准备时延。准备时延可以通过以下方式确定:准备时延=N×RTT,其中RTT指往返时间(round trip time),N为准备时延相对于RTT的倍数,可根据经验取值。例如,N可以为8,RTT可以取5次往返时间中去掉最大值后剩余四次往返时间的平均值。
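上述准备时延的计算方式(准备时延=N×RTT,其中RTT取5次往返时间中去掉最大值后剩余四次的平均值)可以用如下Python代码示意。其中N=8仅为文中给出的示例取值,函数名为便于说明而假设:

```python
def estimate_rtt(samples):
    """按文中方式估计RTT:取5次往返时间中去掉最大值后剩余四次的平均值。"""
    assert len(samples) == 5
    trimmed = sorted(samples)[:-1]  # 去掉最大值
    return sum(trimmed) / len(trimmed)

def preparation_delay(rtt_samples, n=8):
    """准备时延 = N × RTT。"""
    return n * estimate_rtt(rtt_samples)

# 例如:5次RTT采样(单位:秒),其中0.20为异常大值,会被去掉
delay = preparation_delay([0.04, 0.05, 0.06, 0.05, 0.20])
```

该示意中去掉最大值再取平均,可以降低偶发网络抖动对准备时延估计的影响。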
t1时刻~t2时刻为初始缓冲下载阶段,可以理解为客户端开始请求存放视频片源真实地址文件到视频开始播放的阶段。此阶段中主要对视频进行初始缓存下载。初始缓冲下载阶段的时长可以称为初始缓冲下载时间。初始缓冲下载时间可以通过以下方式确定:首先设定播放器缓冲区,假定缓冲N秒(例如2秒)视频,则缓冲区大小(字节数)=N×码率/8。当程序开始下载视频流或者下载m3u8文件时,启动监控播放器流量的定时器,其中轮询周期为10毫秒(ms)。当播放器的下行流量达到缓冲区大小(字节数)时,终止定时器,并记录定时器运行的时长。该定时器运行的时长即为初始缓冲下载时间。
初始缓冲时延可以通过以下方式确定:初始缓冲时延=准备时延+初始缓冲下载时间。例如,图4所示的示例中,初始缓冲时延为t0时刻~t2时刻的时长。
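上述初始缓冲时延的计算(初始缓冲时延=准备时延+初始缓冲下载时间,缓冲区大小(字节数)=N×码率/8)可以示意如下。其中假定码率单位为bit/s,变量名和函数名均为便于说明而假设:

```python
def buffer_size_bytes(n_seconds, bitrate_bps):
    """缓冲N秒视频所需的缓冲区大小(字节数)= N × 码率 / 8。"""
    return n_seconds * bitrate_bps / 8

def initial_buffering_delay(prep_delay_s, initial_download_time_s):
    """初始缓冲时延 = 准备时延 + 初始缓冲下载时间。"""
    return prep_delay_s + initial_download_time_s

# 例如:缓冲2秒、码率2 Mbit/s的视频,所需缓冲区为500000字节
size = buffer_size_bytes(2, 2_000_000)
# 准备时延0.4秒、初始缓冲下载时间1.1秒,则初始缓冲时延为1.5秒
ibl = initial_buffering_delay(0.4, 1.1)
```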
t2时刻~t3时刻及t4时刻~t5时刻为视频播放阶段,即纯视频播放的阶段,不包含视频卡顿。视频播放阶段的时长可以称为视频播放时长。
t3时刻~t4时刻为卡顿阶段。图4所示的卡顿阶段仅包含一次卡顿,实际中,卡顿阶段可以包括一次或至少两次卡顿。其中,单次卡顿的时间可以通过以下方式确定:当视频缓冲区为空时,则卡顿开始,此时启动计时器开始计时;当缓冲区满,视频画面开始播放时,终止计时器,并记录计时器的运行时长。该计时器的运行时长即为单次卡顿的时间。卡顿阶段的时长可以称为卡顿时长。当卡顿阶段仅包含一次卡顿时,卡顿时长即为该次卡顿的时间;当卡顿阶段包含至少两次卡顿时,卡顿时长即为该至少两次卡顿的时间的总和。
卡顿时长占比可以通过以下方式确定:卡顿时长占比=卡顿时长/视频播放时长。例如,图4所示的示例中,卡顿时长占比=(t4-t3)/((t3-t2)+(t5-t4))。
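以图4的各时间点为例,卡顿时长占比的计算可以示意如下(函数名为便于说明而假设):

```python
def stall_ratio(t2, t3, t4, t5):
    """卡顿时长占比 = 卡顿时长 / 视频播放时长。
    视频播放时长为纯播放时长 (t3-t2)+(t5-t4),卡顿时长为 (t4-t3)。"""
    stall = t4 - t3
    play = (t3 - t2) + (t5 - t4)
    return stall / play

# 例如:t2=10, t3=40, t4=44, t5=130(秒),卡顿4秒,纯播放116秒
ratio = stall_ratio(10, 40, 44, 130)
```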
下面在图3和图4所示方案的基础上,结合具体的示例,对视频缓冲参数、初始缓冲参数、视频卡顿参数、视频质量评估结果或视频源质量参数的确定方式做进一步说明。
(一)视频缓冲参数的确定方式
可以根据初始缓冲参数和视频卡顿参数来确定视频缓冲参数。
在一个示例中,视频缓冲参数满足:
Y1=max(0,(1-α×(5-A)-β×(5-B)))       (1)
其中,Y1表示视频缓冲参数,A表示初始缓冲参数,B表示视频卡顿参数,α表示初始缓冲的质量损失权重系数,β表示卡顿的质量损失权重系数,其中,α和β随体验变差而衰减。
在另一个示例中,视频缓冲参数满足:
Y2=1-α×(5-A)-β×(5-B)        (2)
其中,Y2表示视频缓冲参数,A表示初始缓冲参数,B表示视频卡顿参数,α表示初始缓冲的质量损失权重系数,β表示卡顿的质量损失权重系数,其中,α和β随体验变差而衰减。
在上述两个示例中,α和β可以根据以下方式确定:
α=v17+v18×e^(v19×A)          (3)
β=v20+v21×e^(v19×B)          (4)
其中,v17和v18为初始缓冲的权重因子;v19为常量,可以取固定值;v20和v21为卡顿的权重因子。
在一种可能的实施方式中,v17=0.092,v20=0.108,v18、v19和v21可以根据实际情况取值。进一步的,根据v17、v18、v19、v20和v21的取值,α和β可以分别为:0.092≤α≤0.25,0.108≤β≤0.3。
可以理解的是,本发明实施例中,v17、v18、v19、v20和v21也可以有其他的取值情况或取值方式,本发明实施例并不限定。
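基于公式(1)~(4),视频缓冲参数的计算可以示意如下。其中v17=0.092、v20=0.108取自上文的一种可能实施方式,v18、v19、v21的取值为便于演示而假设:

```python
import math

def buffering_score(A, B, v17=0.092, v18=0.05, v19=0.2, v20=0.108, v21=0.06,
                    clip=True):
    """按公式(1)/(2)计算视频缓冲参数Y。
    A为初始缓冲参数,B为视频卡顿参数,取值范围均为1~5分。
    v18、v19、v21为演示用的假设取值。"""
    alpha = v17 + v18 * math.exp(v19 * A)  # 公式(3):初始缓冲的质量损失权重系数
    beta = v20 + v21 * math.exp(v19 * B)   # 公式(4):卡顿的质量损失权重系数
    y = 1 - alpha * (5 - A) - beta * (5 - B)
    # clip=True对应公式(1)(截断为非负),clip=False对应公式(2)
    return max(0.0, y) if clip else y

y1 = buffering_score(4.2, 3.8)
```

当A、B均为满分5分时,Y取最大值1;体验越差(A、B越低),Y越小,并在公式(1)下被截断为0。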
(二)初始缓冲参数的确定方式
可以根据初始缓冲时延和视频播放时长来确定初始缓冲参数。
在一个示例中,初始缓冲参数满足:
A=max(1,min(5,(C+D×E)))        (5)
其中,A表示初始缓冲参数,C表示未考虑播放时长影响的原始初始缓冲得分,D表示不同初始缓冲时延最大的得分修正区间,E表示不同视频播放时长的影响因子。
在这个示例中,C、D和E可以根据以下方式确定:
(公式(6)在原文中以图片形式给出,此处未能提取:C由计算因子v6、v7、v8、v9和初始缓冲时延IBL确定)
其中,v6、v7、v8和v9为初始缓冲参数的计算因子且为常量,IBL表示初始缓冲时延。
D=max(0,v10-v11×C)        (7)
其中,v10和v11为初始缓冲参数的修正因子且为常量。
(公式(8)在原文中以图片形式给出,此处未能提取:E由视频播放时长VPD确定)
其中,VPD表示视频播放时长。
在一种可能的实施方式中,v6、v7、v8、v9、v10和v11可以为根据主观实验结果拟合得到的系数。例如,在主观实验中,不同用户在观看视频时给出自身的主观体验打分(例如可以包括针对初始缓冲的打分)。然后,可以根据这些主观体验打分进行拟合,得到v6、v7、v8、v9、v10和v11。这种方式能够综合考虑用户的体验,有利于运营商或网络提供商根据评估结果对网络进行规划或扩缩容,改善用户体验。
可以理解的是,本发明实施例中,v6、v7、v8、v9、v10和v11也可以有其他的取值情况或取值方式,本发明实施例并不限定。
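公式(6)和(8)在原文中以图片形式给出,这里将C和E作为输入,仅示意公式(5)和(7)的组合方式(v10、v11的取值为便于演示而假设):

```python
def initial_buffering_score(C, E, v10=1.5, v11=0.3):
    """按公式(5)和(7)计算初始缓冲参数A。
    C为未考虑播放时长影响的原始初始缓冲得分,E为视频播放时长的影响因子;
    二者的计算公式(6)、(8)在原文中以图片形式给出,此处作为输入。
    v10、v11为演示用的假设取值。"""
    D = max(0.0, v10 - v11 * C)           # 公式(7):得分修正区间
    return max(1.0, min(5.0, C + D * E))  # 公式(5):结果限制在1~5分

A = initial_buffering_score(C=3.0, E=0.5)  # D=0.6,A=3.3
```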
(三)视频卡顿参数的确定方式
可以根据卡顿时长占比和视频播放时长来确定视频卡顿参数。
在一个示例中,视频卡顿参数满足:
B=G-v12×(H-1)×(5-G)         (9)
其中,B表示视频卡顿参数,G表示未考虑视频播放时长时的原始视频卡顿得分,H代表不同视频播放时长的影响因子,v12为视频卡顿参数的计算因子且为常量。
在这个示例中,G和H可以根据以下方式确定:
G=max(0,IF(SR<0.15,v13-v14×SR,v15-v16×SR))  (10)
其中,v13、v14、v15和v16也为视频卡顿参数的计算因子且为常量,SR表示卡顿时长占比。
H=max(1,VPD/60)          (11)
其中,VPD表示视频播放时长。
在一种可能的实施方式中,v12、v13、v14、v15和v16可以为根据主观实验结果拟合得到的系数。例如,在主观实验中,不同用户在观看视频时给出自身的主观体验打分(例如可以包括针对视频卡顿的打分)。然后,可以根据这些主观体验打分进行拟合,得到v12、v13、v14、v15和v16。这种方式能够综合考虑用户的感受体验,有利于运营商或网络提供商根据评估结果对网络进行规划或扩缩容,改善用户体验。
可以理解的是,本发明实施例中v12、v13、v14、v15和v16也可以有其他的取值情况或取值方式,本发明实施例并不限定。
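基于公式(9)~(11),视频卡顿参数的计算可以示意如下(v12~v16的取值为便于演示而假设,且使公式(10)的两个分段在SR=0.15处连续):

```python
def stalling_score(SR, VPD, v12=0.1, v13=4.8, v14=20.0, v15=3.0, v16=8.0):
    """按公式(9)~(11)计算视频卡顿参数B。
    SR为卡顿时长占比,VPD为视频播放时长(秒)。
    v12~v16为演示用的假设取值。"""
    # 公式(10):未考虑播放时长时的原始卡顿得分,按SR是否小于0.15分段
    G = max(0.0, v13 - v14 * SR if SR < 0.15 else v15 - v16 * SR)
    H = max(1.0, VPD / 60.0)            # 公式(11):视频播放时长的影响因子
    return G - v12 * (H - 1) * (5 - G)  # 公式(9)

B = stalling_score(SR=0.05, VPD=120)  # G=3.8,H=2,B=3.68
```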
(四)视频源质量参数的确定方式
可以根据视频分辨率系数、视频编码等级系数、视频编码算法系数和音视频混合码率参数来确定视频源质量参数。
在一个示例中,视频分辨率系数包括第一分辨率系数和第二分辨率系数。其中,第一分辨率系数可用于表示指定分辨率视频的最好质量得分,例如可以由视频分辨率、视频编码算法、视频编码等级和视频的码率确定;第二分辨率系数可用于表示编码衰减的程度,例如可能受视频分辨率、视频编码算法、视频编码等级和视频的码率的影响,其中第二分辨率系数也可以称为编码衰减系数。视频源质量参数满足:
(公式(12)在原文中以图片形式给出,此处未能提取:X由v1、v2、v3、v4、v5和MBR确定)
其中,X表示视频源质量参数,v1表示第一分辨率系数,v2表示第二分辨率系数,v3表示视频编码算法系数,v4表示视频编码等级系数,v5为常量,MBR表示音视频混合码率参数。
在一种可能的实施方式中,v1、v2、v3、v4和v5可以分别根据以下方式进行确定:
v1可以根据视频分辨率进行确定,例如,可以按照表1所示的对应关系进行取值,其中,不同的视频分辨率可以包括:360P、480P、720P、1080P、2K(1440P)或4K(2160P)。
表1 v1与视频分辨率的对应关系
视频分辨率 v1
360P 2.8
480P 3.6
720P 4
1080P 4.5
2K(1440P) 4.8
4K(2160P) 4.9
v2的取值可以与视频分辨率相关:例如,1800≤v2≤3000,其中,当视频分辨率越高时,v2的取值越大;又例如,v2也可以根据视频分辨率进行计算得到。
v3的取值可以与编码效率相关:例如,0<v3≤3,其中,当编码效率越高时,v3的取值越大。
v4可以在一定范围内取值,例如,0<v4≤1。v5为常量。
可以理解的是,本发明实施例中,v1、v2、v3、v4和v5也可以有其他的取值情况或取值方式,本发明实施例并不限定。
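表1所示的v1与视频分辨率的对应关系可以用简单的查表实现示意(字典和函数的名称为便于说明而假设):

```python
# 表1:v1与视频分辨率的对应关系
V1_BY_RESOLUTION = {
    "360P": 2.8,
    "480P": 3.6,
    "720P": 4.0,
    "1080P": 4.5,
    "2K": 4.8,   # 即1440P
    "4K": 4.9,   # 即2160P
}

def lookup_v1(resolution):
    """根据视频分辨率查表得到第一分辨率系数v1。"""
    return V1_BY_RESOLUTION[resolution]

v1 = lookup_v1("1080P")  # 4.5
```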
(五)视频质量评估结果的确定方式
可以根据视频源质量参数和视频缓冲参数来确定视频信号的视频质量评估结果。
在一个示例中,当视频缓冲参数满足公式(1)时,视频质量评估结果满足:
F1=(X-1)×Y1+1       (13)
其中,F1表示视频质量评估结果,X表示视频源质量参数,Y1表示视频缓冲参数。
在这种情况下,可以得到:
F1=(X-1)×max(0,(1-α×(5-A)-β×(5-B)))+1   (14)
在另一个示例中,当视频缓冲参数满足公式(2)时,视频质量评估结果满足:
F2=(X-1)×max(0,Y2)+1         (15)
其中,F2表示视频质量评估结果,X表示视频源质量参数,Y2表示视频缓冲参数。
在这种情况下,可以得到:
F2=(X-1)×max(0,(1-α×(5-A)-β×(5-B)))+1   (16)
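公式(13)~(16)表明,视频质量评估结果由视频源质量参数与视频缓冲参数相乘后映射回1~5分区间,可以示意如下:

```python
def video_mos(X, Y):
    """按公式(15)计算视频质量评估结果 F = (X-1) × max(0, Y) + 1。
    X为视频源质量参数(1~5分),Y为视频缓冲参数。
    当Y已按公式(1)截断为非负时,该式与公式(13)等价。"""
    return (X - 1) * max(0.0, Y) + 1

f = video_mos(X=4.5, Y=0.8)  # (4.5-1)×0.8+1 = 3.8
```

可以看出,当缓冲体验最好(Y=1)时结果等于片源质量得分X;当缓冲体验极差(Y≤0)时结果取下限1分。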
需要说明的是,在实现本发明实施例的方案时,在图3所示方法的基础上,可以进一步根据上述(一)~(五)中的至少一项对本发明实施例的方案进行实施。
基于上文所描述的技术方案,本发明实施例还可以设计一种Mobile MOS软件开发工具包(Software Development Kit,SDK),用于确定视频质量评估结果。图5为本发明实施例提供的一种Mobile MOS SDK的示意图。下面,以视频质量评估结果为Mobile MOS,视频源质量参数为sQuality,视频缓冲参数为sBuffering,初始缓冲参数为sLoading,视频卡顿参数为sStalling为例,对图5所示的方案进行说明。
如图5所示,Mobile MOS SDK包括输入模块、计算模块和输出模块。其中,输入模块用于获取视频分辨率、音视频混合码率、视频编码算法、视频编码等级、初始缓冲时延、卡顿时长占比和视频播放时长,具体获取方式可以参照图3和图4所示方案中这些内容的获取方式,此处不作赘述。计算模块用于计算sQuality、sLoading、sStalling、sBuffering和Mobile MOS。例如,计算模块可以先计算sQuality、sLoading和sStalling;然后基于sLoading和sStalling计算sBuffering;最后基于sQuality和sBuffering计算Mobile MOS。这些参数的具体计算方式可以参照图3所示方案中这些参数的计算方式,此处不作赘述。输出模块用于输出Mobile MOS。可选的,输出模块还可用于输出sQuality、sLoading、sStalling和sBuffering中的至少一项。
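图5所示Mobile MOS SDK的输入、计算、输出流程可以用如下骨架代码示意。各子得分的具体计算函数按前文公式实现,此处以可替换的函数参数表示,接口均为假设:

```python
def mobile_mos_pipeline(video_info, f_quality, f_loading, f_stalling,
                        f_buffering, f_mos):
    """Mobile MOS SDK计算流程骨架:
    先计算sQuality、sLoading、sStalling,
    再由sLoading、sStalling得到sBuffering,
    最后由sQuality、sBuffering得到Mobile MOS。"""
    s_quality = f_quality(video_info)
    s_loading = f_loading(video_info)
    s_stalling = f_stalling(video_info)
    s_buffering = f_buffering(s_loading, s_stalling)
    mos = f_mos(s_quality, s_buffering)
    return {"sQuality": s_quality, "sLoading": s_loading,
            "sStalling": s_stalling, "sBuffering": s_buffering,
            "MobileMOS": mos}
```

这种把各子得分计算函数作为参数传入的写法,便于在不同实施方式下替换具体公式而不改变整体流程。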
上述图5所示的Mobile MOS SDK可以应用到图2所示的终端、基站或服务器。下面基于图3和图4所示的方案,对终端、基站或服务器对视频质量进行评估的方法进行说明。
图6示出了本发明实施例提供的另一种视频质量的评估方法,其中,以视频质量评估结果为Mobile MOS为例进行描述。如图6所示,对于某一次视频播放过程,上述评估方法从开始到结束可以包括601~604部分。其中,601部分可以由终端执行;602~604部分可以由终端或服务器执行。下面以601~604部分由终端执行为例进行描述,602~604部分由服务器执行的情况可以参照下面的描述进行实施,此处不作赘述。
在601部分,发起观看请求。
在一个示例中,用户点击终端上视频APP的播放按钮,使得终端发起观看请求。
在602部分,统计视频信息。
上述视频信息可以包括以下内容中的部分或全部:音视频混合码率、视频分辨率、视频编码算法、视频编码等级、初始缓冲时延、卡顿时长占比或视频播放时长。
在一个示例中,可以在视频播放过程中周期性地统计这些视频信息。例如,可以对整个视频进行分片,然后基于分片获取各个周期内的视频信息。
在另一个示例中,可以在视频播放结束后统计这些视频信息。
上述视频信息的具体获取方式可以参考图3和图4所示方案中的对应获取方式进行实施,此处不作赘述。
在一个示例中,当602部分由服务器执行时,服务器可以在终端获取上述视频信息后,从终端接收这些视频信息。
在603部分,计算Mobile MOS。
在一个示例中,可以先计算sQuality、sLoading和sStalling;然后基于sLoading和sStalling计算sBuffering;最后基于sQuality和sBuffering计算Mobile MOS。
上述sQuality、sLoading、sStalling、sBuffering和Mobile MOS的具体计算方式可以分别参考图3所示方法中的视频源质量参数、初始缓冲参数、视频卡顿参数、视频缓冲参数和视频质量评估结果的确定方式,以及可以参考上文中(一)~(五)所描述的对应示例进行实施,此处不作赘述。
在604部分,输出评估结果。
在一个示例中,上述评估结果包括Mobile MOS。
在一种可能的实施方式中,上述评估结果还可以包括sQuality、sLoading、sStalling或sBuffering中的至少一项。
图6所示的方法中,603和604部分可以通过调用图5所示的Mobile MOS SDK来实施。例如,通过调用Mobile MOS SDK,将上述视频信息输入到输入模块后,由计算模块执行603部分,由输出模块执行604部分。
通过图6所示的方案,网络运营商或视频APP的提供商可以根据输出的评估结果,将Mobile MOS得分较低的区域定位出来,以针对性地提高所提供的视频质量。进一步的,网络运营商或视频APP的提供商可以根据sQuality、sLoading、sStalling或sBuffering进行问题定位,进而进行相应优化,例如提供更清晰的片源、增加网络带宽及优化网络架构等。
图7示出了本发明实施例提供的又一种视频质量的评估方法,其中,以视频质量评估结果为Mobile MOS为例进行描述。图7所示的方法可以由基站执行。如图7所示,对于某一次视频播放过程,上述评估方法从开始到结束可以包括701~706部分,其中,702部分和703部分择其一进行执行。
在701部分,基站判断视频流是否结束。
若是,则执行702部分;若否,则执行703部分。执行完702部分或703部分后,执行704部分。
在702部分,基站进行完整视频打点。其中,“打点”可以理解为对视频参数或视频信息进行记录。
在一个示例中,基站进行完整视频打点的视频信息可以包括视频OTT(over the top)名称、音视频混合码率、初始缓冲时延、卡顿时长占比和视频播放时长。OTT是指通过互联网向用户提供各种应用服务。上述视频OTT名称可以基于域名系统(Domain Name System,DNS)进行域名解析得到。上述音视频混合码率可以通过加密场景码率识别方案(例如,服务分类(Service Classification,SC)方案)或非加密场景码率识别方案(例如,深度包检测(Deep Packet Inspection,DPI)方案)等方式获取。
在一个示例中,在一个视频播放过程中,可以对整个视频进行周期性地分片。基站可以综合各个分片周期进行完整视频打点。
在703部分,基站进行视频流周期打点。
703部分与702部分相似,区别在于,基站进行视频流周期打点的视频信息所包括的音视频混合码率、初始缓冲时延、卡顿时长占比和视频播放时长分别为当前分片周期内的音视频混合码率、初始缓冲时延、卡顿时长占比和视频播放时长。
在704部分,基站进行OTT码率映射,补全视频信息。
在一个示例中,基站可以通过OTT码率映射得到视频分辨率、视频编码算法和视频编码等级。例如,基站可以基于上述视频OTT名称进行OTT码率映射。
执行完704部分后,基站可以获取下列所有视频信息:音视频混合码率、视频分辨率、视频编码算法、视频编码等级、初始缓冲时延、卡顿时长占比和视频播放时长。
在705部分,基站计算Mobile MOS。
在706部分,基站输出评估结果。
上述705部分和706部分分别与图6中的603部分和604部分相似,可以参考603部分和604部分的详细描述,此处不做赘述。
图7所示的方法中,705和706部分可以通过调用图5所示的Mobile MOS SDK来实施。例如,通过调用Mobile MOS SDK,将上述视频信息输入到输入模块后,由计算模块执行705部分,由输出模块执行706部分。
通过图7所示的方案,可以利用所输出的评估结果呈现网络级、小区级或栅格级的视频质量指标,以对视频质量进行实时评估并优化。例如,可以根据评估结果对相关视频的无线资源管理(Radio Resource Management,RRM)算法或解决方案进行相应调度或优化,以达到理想的视频质量。
上述主要从方法示例的角度对本发明实施例提供的方案进行了介绍。本发明实施例还提供一种视频质量的评估装置。可以理解的是,该评估装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。结合本发明中所公开的实施例描述的各示例的单元及算法步骤,本发明实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同的方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的技术方案的范围。本发明实施例可以根据上述方法示例对上述评估装置等进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本发明实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用集成的单元的情况下,图8示出了本发明实施例中提供的一种视频质量的评估装置的示意性框图。该评估装置800包括:处理单元802。处理单元802用于对评估装置800的动作进行控制管理,例如,处理单元802用于支持评估装置800执行图3、图6和图7所示的方法,和/或用于本文所描述的技术的其它过程。该评估装置还可以包括通信单元803,通信单元803用于支持评估装置800和其他网元之间的通信。评估装置800还可以包括存储单元801,用于存储评估装置800的程序代码和数据。
其中,处理单元802可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信单元803可以是通信接口、收发器或收发电路等,其中,该通信接口是统称,在具体实现中,该通信接口可以包括一个或至少两个接口。存储单元801可以是存储器。
上述评估装置可以是终端、基站或服务器;也可以是终端、基站或服务器中的功能模块,例如可以是上述图5所示的Mobile MOS SDK,本发明实施例并不限制。下面结合附图9~11,以该评估装置为终端、基站或服务器为例,对本发明实施例做进一步说明。
图9示出了本发明实施例中提供的终端的一种可能的设计结构的简化示意图。所述终端900包括发射器901,接收器902和处理器903。其中,处理器903也可以为控制器,图9中表示为“控制器/处理器903”。可选的,所述终端900还可以包括调制解调处理器905,其中,调制解调处理器905可以包括编码器906、调制器907、解码器908和解调器909。
在一个示例中,发射器901调节(例如,模拟转换、滤波、放大和上变频等)该输出采样并生成上行链路信号,该上行链路信号经由天线发射给上述实施例中所述的基站。在下行链路上,天线接收上述实施例中基站发射的下行链路信号。接收器902调节(例如,滤波、放大、下变频以及数字化等)从天线接收的信号并提供输入采样。在调制解调处理器905中,编码器906接收要在上行链路上发送的业务数据和信令消息,并对业务数据和信令消息进行处理(例如,格式化、编码和交织)。调制器907进一步处理(例如,符号映射和调制)编码后的业务数据和信令消息并提供输出采样。解调器909处理(例如,解调)该输入采样并提供符号估计。解码器908处理(例如,解交织和解码)该符号估计并提供发送给终端900的已解码的数据和信令消息。编码器906、调制器907、解调器909和解码器908可以由合成的调制解调处理器905来实现。这些单元根据无线接入网采用的无线接入技术(例如,LTE及其他演进系统的接入技术)来进行处理。需要说明的是,当终端900不包括调制解调处理器905时,调制解调处理器905的上述功能也可以由处理器903完成。
处理器903对终端900的动作进行控制管理,用于执行上述本发明实施例中由终端900进行的处理过程。例如,处理器903还用于执行图3和图6所示方法中涉及终端的处理过程和/或本发明实施例所描述的技术方案的其他过程。
进一步的,终端900还可以包括存储器904,存储器904用于存储用于终端900的程序代码和数据。
图10示出了本发明实施例提供的基站的一种可能的结构示意图。基站1000包括处理器1002和通信接口1004。其中,处理器1002也可以为控制器,图10中表示为“控制器/处理器1002”。通信接口1004用于支持基站与其他网元(例如其它基站、核心网设备等)进行通信。进一步的,基站1000还可以包括发射器/接收器1001。所述发射器/接收器1001用于支持基站与上述实施例中的所述终端之间收发信息,以及支持所述终端与其他终端之间进行无线电通信。所述处理器1002执行各种用于与终端通信的功能。在上行链路,来自所述终端的上行链路信号经由天线接收,由接收器1001进行解调(例如将高频信号解调为基带信号),并进一步由处理器1002进行处理来恢复终端所发送的业务数据和信令消息。在下行链路上,业务数据和信令消息由处理器1002进行处理,并由发射器1001进行调制(例如将基带信号调制为高频信号)来产生下行链路信号,并经由天线发射给终端。需要说明的是,上述解调或调制的功能也可以由处理器1002完成。
例如,处理器1002还用于执行图3和图7所示方法中涉及基站的处理过程和/或本发明实施例所描述的技术方案的其他过程。
进一步的,基站1000还可以包括存储器1003,存储器1003用于存储基站1000的程序代码和数据。
可以理解的是,图10仅仅示出了基站1000的简化设计。在实际应用中,基站1000可以包含任意数量的发射器,接收器,处理器,控制器,存储器,通信单元等,而所有可以实现本发明实施例的基站都在本发明实施例的保护范围之内。
图11示出了本发明实施例提供的服务器的一种可能的结构示意图。服务器1100包括:处理器1102、通信接口1103、存储器1101。可选的,服务器1100还可以包括总线1104。其中,通信接口1103、处理器1102以及存储器1101可以通过总线1104相互连接;总线1104可以是外设部件互连标准(Peripheral Component Interconnect,简称PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,简称EISA)总线等。所述总线1104可以分为地址总线、数据总线、控制总线等。为便于表示,图11中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
处理器1102对服务器1100的动作进行控制管理,用于执行上述本发明实施例中由服务器1100进行的处理过程。例如,处理器1102用于执行图3和图6所示方法中涉及服务器的处理过程和/或本发明实施例所描述的技术方案的其他过程。
结合本发明实施例公开内容所描述的方法或者算法的步骤可以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(Random Access Memory,RAM)、闪存、只读存储器(Read Only Memory,ROM)、可擦除可编程只读存储器(Erasable Programmable ROM,EPROM)、电可擦可编程只读存储器(Electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于视频质量的评估装置中。当然,处理器和存储介质也可以作为分立组件存在于视频质量的评估装置中。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本发明实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
以上所述的具体实施方式,对本发明实施例的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明实施例的具体实施方式而已,并不用于限定本发明实施例的保护范围,凡在本发明实施例的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本发明实施例的保护范围之内。

Claims (24)

  1. 一种视频质量的评估方法,其特征在于,包括:
    确定视频信号的视频源质量参数和视频缓冲参数,其中,所述视频缓冲参数与所述视频信号的视频播放时长有关;
    根据所述视频源质量参数和所述视频缓冲参数,确定所述视频信号的视频质量评估结果。
  2. 根据权利要求1所述的方法,其特征在于,所述确定视频信号的视频缓冲参数,包括:
    确定所述视频信号的初始缓冲参数和视频卡顿参数;
    根据所述初始缓冲参数和所述视频卡顿参数,确定所述视频缓冲参数。
  3. 根据权利要求2所述的方法,其特征在于,所述视频缓冲参数满足:
    Y1=max(0,(1-α×(5-A)-β×(5-B))),其中,所述Y1为所述视频缓冲参数,所述A为所述初始缓冲参数,所述B为所述视频卡顿参数,所述α为初始缓冲的质量损失权重系数,所述β为卡顿的质量损失权重系数。
  4. 根据权利要求3所述的方法,其特征在于,所述视频质量评估结果满足:
    F1=(X-1)×Y1+1,其中,所述F1为所述视频质量评估结果,所述X为所述视频源质量参数,所述Y1为所述视频缓冲参数。
  5. 根据权利要求2所述的方法,其特征在于,所述视频缓冲参数满足:
    Y2=1-α×(5-A)-β×(5-B),其中,所述Y2为所述视频缓冲参数,所述A为所述初始缓冲参数,所述B为所述视频卡顿参数,所述α为初始缓冲的质量损失权重系数,所述β为卡顿的质量损失权重系数。
  6. 根据权利要求5所述的方法,其特征在于,所述视频质量评估结果满足:
    F2=(X-1)×max(0,Y2)+1,其中,所述F2为所述视频质量评估结果,所述X为所述视频源质量参数,所述Y2为所述视频缓冲参数。
  7. 根据权利要求2至6中任一项所述的方法,其特征在于,所述确定所述初始缓冲参数,包括:
    确定所述视频信号的初始缓冲时延和所述视频播放时长;
    根据所述初始缓冲时延和所述视频播放时长,确定所述初始缓冲参数。
  8. 根据权利要求7所述的方法,其特征在于,所述初始缓冲参数满足:
    A=max(1,min(5,(C+D×E))),其中,
    (该公式在原文中以图片形式给出,此处未能提取:所述C由所述v6、所述v7、所述v8、所述v9和所述IBL确定)
    D=max(0,v10-v11×C),
    (该公式在原文中以图片形式给出,此处未能提取:所述E由所述VPD确定)
    其中,所述A为所述初始缓冲参数,所述C为未考虑播放时长影响的原始初始缓冲得分,所述D为不同初始缓冲时延最大的得分修正区间,所述E为不同视频播放时长的影响因子,所述v6、所述v7、所述v8和所述v9为所述初始缓冲参数的计算因子且为常量,所述v10和所述v11为所述初始缓冲参数的修正因子且为常量,所述IBL为所述初始缓冲时延,所述VPD为所述视频播放时长。
  9. 根据权利要求2至8中任一项所述的方法,其特征在于,所述确定所述视频卡顿参数,包括:
    确定所述视频信号的卡顿时长占比和所述视频播放时长;
    根据所述卡顿时长占比和所述视频播放时长,确定所述视频卡顿参数。
  10. 根据权利要求9所述的方法,其特征在于,所述视频卡顿参数满足:
    B=G-v12×(H-1)×(5-G),其中,
    G=max(0,IF(SR<0.15,v13-v14×SR,v15-v16×SR)),
    H=max(1,VPD/60),
    其中,所述B为所述视频卡顿参数,所述G为未考虑视频播放时长时的原始视频卡顿得分,所述H为不同视频播放时长的影响因子,所述v12、所述v13、所述v14、所述v15和所述v16为所述视频卡顿参数的计算因子且为常量,所述SR为所述卡顿时长占比,所述VPD为所述视频播放时长。
  11. 根据权利要求1至10中任一项所述的方法,其特征在于,所述确定视频信号的视频源质量参数,包括:
    确定所述视频信号的视频分辨率系数、视频编码算法系数、视频编码等级系数和音视频混合码率参数;
    根据所述视频分辨率系数、所述视频编码算法系数、所述视频编码等级系数和所述音视频混合码率参数,确定所述视频源质量参数。
  12. 根据权利要求11所述的方法,其特征在于,所述视频源质量参数满足:
    (该公式在原文中以图片形式给出,此处未能提取:所述X由所述v1、所述v2、所述v3、所述v4、所述v5和所述MBR确定)
    其中,所述X为所述视频源质量参数,所述v1为所述第一分辨率系数,所述v2为所述第二分辨率系数,所述v3为所述视频编码算法系数,所述v4为所述视频编码等级系数,所述v5为常量,所述MBR为所述音视频混合码率参数。
  13. 一种视频质量的评估装置,其特征在于,包括:
    处理单元,用于确定视频信号的视频源质量参数和视频缓冲参数,其中,所述视频缓冲参数与所述视频信号的视频播放时长有关;以及用于根据所述视频源质量参数和所述视频缓冲参数,确定所述视频信号的视频质量评估结果。
  14. 根据权利要求13所述的评估装置,其特征在于,所述处理单元具体用于:确定所述视频信号的初始缓冲参数和视频卡顿参数;以及根据所述初始缓冲参数和所述视频卡顿参数,确定所述视频缓冲参数。
  15. 根据权利要求14所述的评估装置,其特征在于,所述视频缓冲参数满足:
    Y1=max(0,(1-α×(5-A)-β×(5-B))),其中,所述Y1为所述视频缓冲参数,所述A为所述初始缓冲参数,所述B为所述视频卡顿参数,所述α为初始缓冲的质量损失权重系数,所述β为卡顿的质量损失权重系数。
  16. 根据权利要求15所述的评估装置,其特征在于,所述视频质量评估结果满足:
    F1=(X-1)×Y1+1,其中,所述F1为所述视频质量评估结果,所述X为所述视频源质量参数,所述Y1为所述视频缓冲参数。
  17. 根据权利要求14所述的评估装置,其特征在于,所述视频缓冲参数满足:
    Y2=1-α×(5-A)-β×(5-B),其中,所述Y2为所述视频缓冲参数,所述A为所述初始缓冲参数,所述B为所述视频卡顿参数,所述α为初始缓冲的质量损失权重系数,所述β为卡顿的质量损失权重系数。
  18. 根据权利要求17所述的评估装置,其特征在于,所述视频质量评估结果满足:
    F2=(X-1)×max(0,Y2)+1,其中,所述F2为所述视频质量评估结果,所述X为所述视频源质量参数,所述Y2为所述视频缓冲参数。
  19. 根据权利要求14至18中任一项所述的评估装置,其特征在于,所述处理单元具体用于:确定所述视频信号的初始缓冲时延和所述视频播放时长;以及根据所述初始缓冲时延和所述视频播放时长,确定所述初始缓冲参数。
  20. 根据权利要求19所述的评估装置,其特征在于,所述初始缓冲参数满足:
    A=max(1,min(5,(C+D×E))),其中,
    (该公式在原文中以图片形式给出,此处未能提取:所述C由所述v6、所述v7、所述v8、所述v9和所述IBL确定)
    D=max(0,v10-v11×C),
    (该公式在原文中以图片形式给出,此处未能提取:所述E由所述VPD确定)
    其中,所述A为所述初始缓冲参数,所述C为未考虑播放时长影响的原始初始缓冲得分,所述D为不同初始缓冲时延最大的得分修正区间,所述E为不同视频播放时长的影响因子,所述v6、所述v7、所述v8和所述v9为所述初始缓冲参数的计算因子且为常量,所述v10和所述v11为所述初始缓冲参数的修正因子且为常量,所述IBL为所述初始缓冲时延,所述VPD为所述视频播放时长。
  21. 根据权利要求14至20中任一项所述的评估装置,其特征在于,所述处理单元具体用于:确定所述视频信号的卡顿时长占比和所述视频播放时长;根据所述卡顿时长占比和所述视频播放时长,确定所述视频卡顿参数。
  22. 根据权利要求21所述的评估装置,其特征在于,所述视频卡顿参数满足:
    B=G-v12×(H-1)×(5-G),其中,
    G=max(0,IF(SR<0.15,v13-v14×SR,v15-v16×SR)),
    H=max(1,VPD/60),
    其中,所述B为所述视频卡顿参数,所述G为未考虑视频播放时长时的原始视频卡顿得分,所述H为不同视频播放时长的影响因子,所述v12、所述v13、所述v14、所述v15和所述v16为所述视频卡顿参数的计算因子且为常量,所述SR为所述卡顿时长占比,所述VPD为所述视频播放时长。
  23. 根据权利要求13至22中任一项所述的评估装置,其特征在于,所述处理单元具体用于:确定所述视频信号的视频分辨率系数、视频编码算法系数、视频编码等级系数和音视频混合码率参数;以及根据所述视频分辨率系数、所述视频编码算法系数、所述视频编码等级系数和所述音视频混合码率参数,确定所述视频源质量参数。
  24. 根据权利要求23所述的评估装置,其特征在于,所述视频源质量参数满足:
    (该公式在原文中以图片形式给出,此处未能提取:所述X由所述v1、所述v2、所述v3、所述v4、所述v5和所述MBR确定)
    其中,所述X为所述视频源质量参数,所述v1为所述第一分辨率系数,所述v2为所述第二分辨率系数,所述v3为所述视频编码算法系数,所述v4为所述视频编码等级系数,所述v5为常量,所述MBR为所述音视频混合码率参数。
PCT/CN2016/101227 2016-09-30 2016-09-30 一种视频质量的评估方法、评估装置和系统 WO2018058587A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101227 WO2018058587A1 (zh) 2016-09-30 2016-09-30 一种视频质量的评估方法、评估装置和系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101227 WO2018058587A1 (zh) 2016-09-30 2016-09-30 一种视频质量的评估方法、评估装置和系统

Publications (1)

Publication Number Publication Date
WO2018058587A1 true WO2018058587A1 (zh) 2018-04-05

Family

ID=61763594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/101227 WO2018058587A1 (zh) 2016-09-30 2016-09-30 一种视频质量的评估方法、评估装置和系统

Country Status (1)

Country Link
WO (1) WO2018058587A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971891A (zh) * 2018-09-30 2020-04-07 北京奇虎科技有限公司 视频质量评估方法、装置及电子设备
CN113411218A (zh) * 2021-06-22 2021-09-17 北京金山云网络技术有限公司 即时通信质量的评价方法、装置和电子设备
CN113489745A (zh) * 2021-07-29 2021-10-08 百果园技术(新加坡)有限公司 视频数据发送方法、装置、设备和存储介质
CN114598924A (zh) * 2022-03-10 2022-06-07 恒安嘉新(北京)科技股份公司 客户端综合视频播放状态的检测方法、装置、设备及介质
CN114760461A (zh) * 2022-03-31 2022-07-15 中国信息通信研究院 音视频通话业务用户体验测试方法及装置
CN114760461B (zh) * 2022-03-31 2024-05-31 中国信息通信研究院 音视频通话业务用户体验测试方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103813160A (zh) * 2012-11-12 2014-05-21 中国电信股份有限公司 视频质量监控方法和装置
CN104023232A (zh) * 2014-06-27 2014-09-03 北京邮电大学 基于层次分析和多元线性回归的移动视频质量评估方法
CN104427402A (zh) * 2013-09-03 2015-03-18 中国科学院声学研究所 一种无线网络流媒体质量获取方法及系统
CN106131597A (zh) * 2016-07-19 2016-11-16 武汉烽火网络有限责任公司 流媒体在网络上高效传输的方法


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917337

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16917337

Country of ref document: EP

Kind code of ref document: A1