WO2018196434A1 - Video quality assessment method and device

Video quality assessment method and device

Info

Publication number
WO2018196434A1
WO2018196434A1 (PCT/CN2017/120446)
Authority
WO
WIPO (PCT)
Prior art keywords
video
source block
fec
evaluated
data packet
Application number
PCT/CN2017/120446
Other languages
English (en)
French (fr)
Inventor
熊婕
张彦芳
冯力刚
程剑
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to EP17907272.3A (EP3609179A1)
Publication of WO2018196434A1
Priority to US16/664,194 (US11374681B2)

Classifications

    • H04L1/0009 — Systems modifying transmission characteristics according to link quality, e.g. power backoff, by adapting the channel coding
    • H04L65/80 — Responding to QoS
    • H04L1/0041 — Forward error control: arrangements at the transmitter end
    • H04L1/0057 — Forward error control: block codes
    • H04L9/40 — Network security protocols
    • H04N17/00 — Diagnosis, testing or measuring for television systems or their details
    • H04N17/004 — Diagnosis, testing or measuring for digital television systems
    • H04N19/154 — Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/188 — Adaptive coding where the coding unit is a video data packet, e.g. a network abstraction layer [NAL] unit
    • H04N19/89 — Pre-/post-processing for video compression involving detection of transmission errors at the decoder
    • H04N21/24 — Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2402 — Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H04N21/442 — Monitoring of client processes or resources, e.g. downstream bandwidth
    • H04N21/44209 — Monitoring of the downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network
    • H04N21/6375 — Control signals issued by the client directed to the server or network components for requesting retransmission, e.g. of data packets lost or corrupted during transmission from server
    • H04N21/6437 — Real-time Transport Protocol [RTP]
    • H04N21/6473 — Monitoring network processes errors
    • H04N21/64738 — Monitoring network characteristics, e.g. bandwidth, congestion level
    • H04N21/64776 — Control signals issued by the network directed to the server for requesting retransmission, e.g. of data packets lost or corrupted during transmission from server
    • H04N21/64792 — Controlling the complexity of the content stream, e.g. by dropping packets
    • H04N2017/006 — Diagnosis, testing or measuring for television sound
    • H04N2017/008 — Diagnosis, testing or measuring for television teletext

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a video quality assessment method and device.
  • In an Internet Protocol Television (IPTV) video service, the operator deploys a content delivery network (CDN), set-top boxes (STB), and other network devices to ensure the normal operation of the IPTV video service, and is responsible for the quality of the delivered content and of the IPTV video service.
  • The IPTV monitoring solution captures the live video stream by deploying a video quality assessment device (i.e., a probe) in the network, parses video-related parameters, detects and counts video packet loss, and then evaluates the video mean opinion score (Mean Opinion Score of Video, MOSV).
  • MOSV is an evaluation standard for measuring the quality of network video. The standard detects the source compression impairment and the network transmission impairment of the received video, and comprehensively models the impact of these impairments on the user's experience of watching the video.
  • The scoring scale comes from ITU-T (International Telecommunication Union - Telecommunication Standardization Sector) Recommendation P.910. Generally speaking, a score of 4 or above is good, a score between 3 and 4 is fair, and a score of 3 or below is poor.
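  • As a purely illustrative sketch of the scoring bands just described (the function name and labels are editorial, not part of ITU-T P.910 or of the application):

```python
def mosv_rating(mosv: float) -> str:
    """Map a MOSV score on the 1-5 scale to the qualitative bands described above.

    Band boundaries follow the text: >= 4 is good, 3 to 4 is fair, below 3 is poor.
    """
    if mosv >= 4.0:
        return "good"
    if mosv >= 3.0:
        return "fair"
    return "poor"
```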
  • The IPTV video service is carried over the Real-time Transport Protocol (RTP) and the User Datagram Protocol (UDP). Because UDP is an unreliable transport protocol, media transmission is unreliable and packet loss is prone to occur, leading to mosaics and garbled frames, a poor user experience, and a serious impact on the development of IPTV services. Therefore, the operator usually deploys Forward Error Correction (FEC) to implement service assurance, thereby reducing the adverse effects of packet loss and errors during video transmission.
  • The embodiments of the present application provide a video quality evaluation method and device, which are used to improve the accuracy of video quality assessment so that the calculated MOSV better matches the user's real video experience.
  • the embodiment of the present application provides the following technical solutions:
  • An embodiment of the present application provides a video quality evaluation method, including: acquiring a video to be evaluated, where the video to be evaluated includes forward error correction (FEC) redundant data packets; when the number of lost data packets of a first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, generating a first digest message for the non-lost data packets of the first source block and generating a second digest message for the lost data packets of the first source block; and calculating a video mean opinion score (MOSV) of the video to be evaluated according to the first digest message and the second digest message.
  • Because the packet loss of the first source block in the video to be evaluated and the recovery of lost packets by the FEC redundant data packets are both taken into account, the MOSV of the video to be evaluated calculated from the first digest message and the second digest message reflects the ability of the remaining data packets to recover the lost packets. This increases the accuracy of the video quality assessment and makes it better match the user's real video experience.
  • In a possible implementation, the number of lost data packets of the first source block is calculated as follows: a starting Real-time Transport Protocol (RTP) sequence number and an ending RTP sequence number are obtained from the FEC redundant data packet, and the RTP sequence numbers of the non-lost data packets of the first source block are obtained; the number of lost data packets of the first source block is then calculated according to the starting RTP sequence number, the ending RTP sequence number, and the RTP sequence numbers of the non-lost data packets of the first source block.
  • The starting RTP sequence number and the ending RTP sequence number determine the total number of RTP data packets in the first source block; by excluding the RTP sequence numbers of the non-lost data packets, the number of lost data packets of the first source block can be calculated.
  • The method may further include: acquiring the FEC source block size and the FEC redundancy of the first source block.
  • The video quality evaluation device may obtain the FEC source block size and the FEC redundancy, so that the number of FEC redundant data packets of the first source block can be determined from the FEC source block size and the FEC redundancy.
  • Acquiring the FEC source block size and the FEC redundancy of the first source block includes: obtaining the FEC source block size and the FEC redundancy from the MRF or from the receiving device of the video to be evaluated; or parsing the control packets exchanged between the receiving device of the video to be evaluated and the video server, thereby obtaining the FEC source block size and the FEC redundancy; or parsing the FEC redundant data packet of the first source block to obtain the FEC source block size and the FEC redundancy.
  • The video quality evaluation device can obtain the FEC source block size and the FEC redundancy in any of the foregoing three ways, and then determine the number of FEC redundant data packets of the first source block.
  • The method may further include: when the number of lost data packets of the first source block is 0, or when the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block, generating the first digest message for the non-lost data packets of the first source block, and calculating the MOSV of the video to be evaluated according to the first digest message.
  • When the number of lost data packets of the first source block is 0, no packet loss has occurred in the first source block. When the number of lost data packets of the first source block is greater than the number of FEC redundant data packets, the lost data packets of the first source block cannot be recovered through the FEC redundant data packets.
  • In either case, the first digest message is generated only for the non-lost data packets of the first source block, and the MOSV of the video to be evaluated is calculated according to the first digest message; the MOSV calculated using only the first digest message can still represent the user's real video experience.
  • The method may further include: receiving a retransmission request sent by the receiving device of the video to be evaluated to a retransmission (RET) server, where the retransmission request is used to request the RET server to retransmit data packets that are lost and cannot be recovered by FEC; and, when a retransmission response returned by the RET server is received, generating a third digest message for the data packets that are lost and cannot be recovered by FEC. In this case, calculating the video mean opinion score (MOSV) of the video to be evaluated according to the first digest message and the second digest message includes: calculating the MOSV of the video to be evaluated according to the first digest message, the second digest message, and the third digest message.
  • Because the video quality evaluation device considers the recovery of lost data packets of the video to be evaluated by both the FEC technology and the RET technology, the MOSV calculated according to the first digest message, the second digest message, and the third digest message increases the accuracy of the video quality assessment and better matches the user's real experience.
  • The second digest message includes: the RTP sequence numbers of the lost data packets of the first source block, the payload size, and digest information of the video transport stream (TS) packets in the lost data packets of the first source block.
  • An embodiment of the present application further provides a video quality evaluation device, including: a video acquisition module, configured to acquire a video to be evaluated, where the video to be evaluated includes FEC redundant data packets; a digest generation module, configured to, when the number of lost data packets of a first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, generate a first digest message for the non-lost data packets of the first source block and generate a second digest message for the lost data packets of the first source block; and a video evaluation module, configured to calculate the video mean opinion score (MOSV) of the video to be evaluated according to the first digest message and the second digest message.
  • Because the packet loss of the first source block in the video to be evaluated and the recovery of lost packets by the FEC redundant data packets are both taken into account, the MOSV of the video to be evaluated calculated from the first digest message and the second digest message reflects the ability of the remaining data packets to recover the lost packets, which increases the accuracy of the video quality assessment and makes it better match the user's real video experience.
  • The digest generation module includes: a sequence number acquiring module, configured to obtain a starting Real-time Transport Protocol (RTP) sequence number and an ending RTP sequence number from the FEC redundant data packet, and to obtain the RTP sequence numbers of the non-lost data packets of the first source block; and a packet loss statistics module, configured to calculate the number of lost data packets of the first source block according to the starting RTP sequence number, the ending RTP sequence number, and the RTP sequence numbers of the non-lost data packets of the first source block.
  • The starting RTP sequence number and the ending RTP sequence number determine the total number of RTP data packets in the first source block; by excluding the RTP sequence numbers of the non-lost data packets, the number of lost data packets of the first source block can be calculated.
  • The video quality evaluation device may further include: an FEC information acquiring module, configured to acquire the FEC source block size and the FEC redundancy of the first source block.
  • The video quality evaluation device may obtain the FEC source block size and the FEC redundancy, so that the number of FEC redundant data packets of the first source block can be determined from the FEC source block size and the FEC redundancy.
  • The FEC information acquiring module is specifically configured to: obtain the FEC source block size and the FEC redundancy from the MRF or from the receiving device of the video to be evaluated; or parse the control packets exchanged between the receiving device of the video to be evaluated and the video server, thereby obtaining the FEC source block size and the FEC redundancy; or parse the FEC redundant data packet of the first source block to obtain the FEC source block size and the FEC redundancy.
  • The video quality evaluation device can obtain the FEC source block size and the FEC redundancy in any of the foregoing three ways, and then determine the number of FEC redundant data packets of the first source block.
  • The digest generation module is further configured to, when the number of lost data packets of the first source block is 0, or when the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block, generate the first digest message for the non-lost data packets of the first source block; and the video evaluation module is further configured to calculate the MOSV of the video to be evaluated according to the first digest message.
  • When the number of lost data packets of the first source block is 0, no packet loss has occurred in the first source block. When the number of lost data packets of the first source block is greater than the number of FEC redundant data packets, the lost data packets of the first source block cannot be recovered through the FEC redundant data packets.
  • In either case, the first digest message is generated only for the non-lost data packets of the first source block, and the MOSV of the video to be evaluated is calculated according to the first digest message; the MOSV calculated using only the first digest message can still represent the user's real video experience.
  • The video quality evaluation device may further include a receiving module, where the receiving module is configured to receive a retransmission request sent by the receiving device of the video to be evaluated to a retransmission (RET) server, where the retransmission request is used to request the RET server to retransmit data packets that are lost and cannot be recovered by FEC; the digest generation module is further configured to, when a retransmission response returned by the RET server is received, generate a third digest message for the data packets that are lost and cannot be recovered by FEC; and the video evaluation module is further configured to calculate the MOSV of the video to be evaluated according to the first digest message, the second digest message, and the third digest message.
  • Because the video quality evaluation device considers the recovery of lost data packets of the video to be evaluated by both the FEC technology and the RET technology, the MOSV calculated according to the first digest message, the second digest message, and the third digest message increases the accuracy of the video quality assessment and better matches the user's real experience.
  • The second digest message includes: the RTP sequence numbers of the lost data packets of the first source block, the payload size, and digest information of the video transport stream (TS) packets in the lost data packets of the first source block.
  • An embodiment of the present application further provides another video quality evaluation device, including a processor, a memory, a receiver, a transmitter, and a bus. The processor, the receiver, the transmitter, and the memory communicate with each other through the bus. The receiver is configured to receive data; the transmitter is configured to transmit data; the memory is configured to store instructions; and the processor is configured to execute the instructions in the memory to perform the method described above.
  • The component modules of the video quality assessment device may also perform the steps described in the foregoing first aspect and its various possible implementations; for details, refer to the foregoing descriptions of the first aspect and the various possible implementations.
  • a fifth aspect of the present application provides a computer readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the methods described in the above aspects.
  • a sixth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the methods described in the various aspects above.
  • FIG. 1 is a schematic structural diagram of an IPTV video system applied to a video quality assessment method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of another IPTV video system applied to a video quality assessment method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an FEC implementation process according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a RET implementation process according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application scenario of an IPTV video system according to an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of an STB according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of a video quality evaluation method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of an interaction process between network elements in an IPTV video system according to an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of another interaction process between network elements in an IPTV video system according to an embodiment of the present disclosure.
  • FIG. 10-a is a schematic structural diagram of a video quality evaluation apparatus according to an embodiment of the present application.
  • FIG. 10-b is a schematic structural diagram of a summary generating module according to an embodiment of the present disclosure.
  • FIG. 10-c is a schematic structural diagram of another video quality evaluation apparatus according to an embodiment of the present application.
  • FIG. 10-d is a schematic structural diagram of another video quality evaluation apparatus according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another video quality evaluation apparatus according to an embodiment of the present application.
  • The embodiments of the present application provide a video quality evaluation method and device, which are used to improve the accuracy of video quality assessment so that it better matches the user's real video experience.
  • As shown in FIG. 1, the system architecture to which the video quality assessment method provided by the embodiment of the present application is applied includes: a video quality assessment device 101, a video quality monitoring system 102, and a Multimedia Relay Function (MRF) 103. The video quality monitoring system 102 is configured to send a monitoring instruction to the video quality assessment device 101, and the video quality assessment device 101 can monitor the video quality of the video to be evaluated according to the monitoring instruction delivered by the video quality monitoring system 102. The video to be evaluated can be, for example, live video data or video-on-demand data.
  • The monitoring instruction delivered by the video quality monitoring system 102 may include a video identifier of the video to be evaluated. The video identifier may be a channel number of the live video, or the video identifier may be a combination of a multicast address and a multicast port of the live video.
  • For video on demand, the video identifier can also be quintuple (five-tuple) data.
  • the quintuple data refers to a source Internet Protocol (IP) address, a source port, a destination IP address, a destination port, and a transport layer protocol.
  • the live video stream sent on the video transmission channel is sent by the MRF to the receiving device of the video to be evaluated, and the receiving device of the video to be evaluated is a device for video decoding and playing on the user side, such as an STB.
  • the receiving device of the video to be evaluated decodes the received live video stream and presents it to the user, thereby providing a video service for the user.
  • The FEC technology can be used on the MRF 103 to implement service assurance, thereby reducing the adverse effects of packet loss and errors during video transmission.
  • The video quality evaluation device 101 provided in the embodiment of the present application can perform simulated FEC decoding on the video data, so that the video quality evaluation device 101 can perform video quality evaluation based on the FEC-decoded video data, improving the accuracy of the video quality assessment and making it more in line with the user's real video experience.
  • In the following embodiments, the MOSV calculation for video data in a live video scenario is taken as an example; the MOSV of video data in a video-on-demand scenario can also be calculated with reference to the live video scenario.
  • the service quality assurance system deployed by the operator may use a retransmission (RET) technology in addition to the FEC encoding technology, thereby reducing packet loss and error in the transmission process of the video data.
  • As shown in FIG. 2, another system architecture to which the video quality assessment method provided by the embodiment of the present application is applied may include, in addition to the video quality assessment device 101, the video quality monitoring system 102, and the MRF 103, a RET server 104; the RET server 104 can be connected to the MRF 103 and to the video quality evaluation device 101.
  • In this architecture, the video quality evaluation device 101 provided in the embodiment of the present application also needs to consider the RET retransmission capability for lost data packets, so that the video quality can be evaluated according to the FEC-decoded video data and the data packets recovered after successful RET retransmission. This improves the accuracy of the video quality evaluation, and the calculated MOSV is more in line with the user's real video experience.
  • Application-layer FEC is mainly based on erasure codes: at the transmitting end, the source video data is divided into equal-sized data packets, and every k data packets are grouped into a source block for which redundant data packets are generated, so that the receiving end can recover lost packets of the block from the packets it has received.
  • the application layer FEC technology plays a fault-tolerant role in the error control of streaming media transmission.
  • The basic scheme of the FEC is to insert FEC redundant data packets into each live video stream on the MRF; when the STB detects packet loss, it tries to recover the lost packets according to the FEC redundant data packets and the received video data packets.
  • the MRF sends two data streams, one of which is the FEC redundant stream and the other is the live video stream.
  • The destination addresses of the FEC redundant stream and the live video stream are the same, and the destination ports of the FEC redundant stream and the live video stream have a specific relationship: the destination port number of the FEC redundant stream is the destination port number of the live video stream minus 1. For example, if the destination port number of the live video stream is 2345, the destination port number of the FEC redundant stream is 2344.
  • The RTP payload type field of the FEC redundant stream is different from the RTP payload type of the live video stream, so that the STB can recognize the FEC redundant stream.
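  • As an editorial illustration of the two relationships just described (same destination address, destination port minus 1, and a distinct RTP payload type), the following sketch classifies a captured packet; the `pkt` attribute names and the known FEC payload-type value are assumptions, not part of the application:

```python
def is_fec_redundant_packet(pkt, live_dst_port: int, fec_payload_type: int) -> bool:
    """Classify a packet as belonging to the FEC redundant stream.

    Assumes `pkt` exposes `dst_port` and `rtp_payload_type` attributes and that
    the FEC payload type value has been learned in advance.
    """
    # FEC redundant stream: destination port of the live stream minus 1,
    # and an RTP payload type different from that of the live video stream.
    return pkt.dst_port == live_dst_port - 1 and pkt.rtp_payload_type == fec_payload_type
```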
  • The header data structure of the FEC redundant data packet is shown in Table 1 (structure of the FEC redundant data packet).
  • the multicast server sends a live video stream.
  • the MRF turns on the FEC and inserts the FEC redundant data packet.
  • the STB receives the live video stream and the FEC redundant stream.
  • the STB recovers the lost data packet of the live video stream by using the FEC redundant stream, and decodes the recovered data packet and the live video stream.
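  • The application does not fix a particular erasure code. As a minimal sketch only, assuming a simple XOR-parity scheme with one redundant packet per source block and equal-sized packets (an illustrative stand-in, not the scheme mandated by the MRF), a single lost packet of the block can be rebuilt like this:

```python
def recover_single_loss(received: list[bytes], parity: bytes) -> bytes:
    """Rebuild one lost equal-sized packet of a source block.

    Assumes XOR parity: the parity packet is the XOR of all source packets of
    the block, so XOR-ing it with every received packet yields the missing one.
    """
    missing = bytearray(parity)
    for pkt in received:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)
```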
  • the implementation process of the RET provided by the embodiment of the present application is described.
  • the basic solution of the RET is that the RET server caches the video data of each video channel. After the STB detects the packet loss, it sends an RTP Control Protocol (RTCP) retransmission request to the RET server, and the RET server sends the retransmitted RTP packet to the STB.
  • RFC 4585 of the Internet Engineering Task Force (IETF) defines the specific implementation form of the retransmission request.
  • IETF RFC 4588 specifies the RTP encapsulation format of the retransmitted data packet. As shown in Figure 4, the detailed RET process is as follows:
  • the multicast server sends a live video stream.
  • the multicast server sends the live video stream to the MRF, and the MRF forwards it to the RET server.
  • the MRF is not shown in Figure 4.
  • the live video stream sent by the MRF to the RET server is used for caching of the RET server.
  • the RET server receives the live video stream and caches the live video stream.
  • An RET session for retransmission is established between the STB and the RET server.
  • The STB performs error and loss checking on the received video data. If a lost or erroneous packet is found, a retransmission request is sent to the RET server; according to RFC 4585, a single retransmission request may request retransmission of multiple data packets (see the sketch after these steps).
  • The RET server searches for the data packets in the video data buffer of the corresponding channel according to the received retransmission request and sends the found data packets to the STB; the retransmitted data packets can be encapsulated according to RFC 4588.
  • the STB receives the retransmitted data packet and then decodes and displays the received data packet.
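  • For illustration of the retransmission request format defined in RFC 4585, the following sketch builds a single Generic NACK feedback message (transport-layer feedback, FMT=1, payload type 205) with one PID/BLP entry; in practice an STB may pack several entries into one request, and this sketch is not the application's own signalling:

```python
import struct

def build_generic_nack(sender_ssrc: int, media_ssrc: int, pid: int, blp: int = 0) -> bytes:
    """Build an RFC 4585 Generic NACK carrying one feedback control entry.

    `pid` is the RTP sequence number of a lost packet; `blp` is a bitmask of the
    16 following sequence numbers that are also reported lost.
    """
    version_fmt = (2 << 6) | 1          # V=2, P=0, FMT=1 (Generic NACK)
    pt = 205                            # RTPFB (transport-layer feedback)
    length = 3                          # (4-byte header + 2 SSRCs + 1 FCI) / 4 - 1
    return struct.pack("!BBHIIHH", version_fmt, pt, length,
                       sender_ssrc, media_ssrc, pid & 0xFFFF, blp & 0xFFFF)
```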
  • FIG. 5 is a structural diagram of an IPTV network system applied to a video quality assessment method according to an embodiment of the present application.
  • Video HE is the IPTV video headend.
  • the video data transcoded into a constant bit rate is sent live or on-demand.
  • When the video data is transmitted from the video headend to the destination set-top box, it is affected by the network status, and anomalies such as packet loss, delay, jitter, and out-of-order delivery may occur. These anomalies may cause defects such as garbled frames and stalling in the video played on the terminal screen, resulting in a degraded video viewing experience for the user.
  • The video quality assessment device is deployed in an end-to-end IPTV service scenario, and the video quality assessment device may implement a MOSV probe by using software or hardware, where the video quality assessment device is used to monitor and calculate the video experience evaluation score of the IPTV service at a network node or for a certain end user.
  • The video quality assessment device can be deployed as a probe on a core router (CR), a Broadband Remote Access Server (BRAS), an Optical Line Terminal (OLT), or on a network node connected to network nodes such as the CR, BRAS, and OLT.
  • the video quality assessment device can also be deployed on a terminal device such as an IPTV STB.
  • The CR consists of a central processing unit (CPU), random access memory/dynamic random access memory (RAM/DRAM), flash memory (Flash), non-volatile random access memory (NVRAM), read-only memory (ROM), and various interfaces.
  • the MOSV probe can be deployed on the CPU.
  • the BRAS includes: a service management module and a service forwarding module, and the MOSV probe can be deployed on the main control CPU of the service management module.
  • the OLT includes a main control board, a service board, an interface board, and a power board. The MOSV probe can be deployed on the main control board CPU.
  • the composition of the STB may include five modules: a receiving front end module, a main module, a cable modem module, an audio and video output module, and a peripheral interface module.
  • the receiving front end module includes a tuner and a Quadrature Amplitude Modulation (QAM) demodulator, and the part can demodulate the video transmission stream from the radio frequency signal
  • The main module is the core part of the entire STB. It includes a decoding part, an embedded CPU, and memory, where the decoding part performs operations such as decoding, demultiplexing, and descrambling of the transport stream, and the embedded CPU and the memory are used to run and store the software system and to control each module.
  • the MOSV probe is deployed on the embedded CPU.
  • The cable modem module includes: a bidirectional tuner, a downlink QAM demodulator, a Quadrature Phase Shift Keying (QPSK)/QAM modulator, and a media access control module, which together implement all cable modem functions.
  • the audio and video output module performs digital/analog (D/A) conversion on the audio and video signals to restore the analog audio and video signals and output them on the television.
  • The peripheral interface module provides a rich set of external interfaces, such as a high-speed serial interface and a Universal Serial Bus (USB) interface.
  • The following describes the video quality evaluation method performed by the video quality evaluation device in the embodiment of the present application.
  • the video quality evaluation method provided by the embodiment of the present application mainly includes the following steps:
  • the video quality evaluation device can capture the video to be evaluated from the MRF, and the MRF can insert the FEC redundant data packet into the to-be-evaluated video.
  • the insertion process of the FEC redundant data packet is described in the foregoing embodiment.
  • the video quality assessment method provided by the embodiment of the present application may further include the following steps:
  • 700A Receive a monitoring instruction sent by a video quality monitoring system, where the monitoring instruction includes: a video identifier of the video to be evaluated.
  • The video quality evaluation device first receives a monitoring instruction sent by the video quality monitoring system to the video quality evaluation device, where the monitoring instruction includes a channel number of the live video, or the monitoring instruction includes a multicast address and a multicast port of the live video.
  • the video identifier may also be a video-on-demand quintuple data.
  • the video quality assessment device may indicate a video channel to be evaluated by using a Synchronization Source Identifier (SSRC), a multicast address, and a multicast port.
  • the SSRC can be used to capture the video to be evaluated, or the multicast address and multicast port can be used to capture the video to be evaluated.
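  • A minimal sketch of the two capture criteria just described; the attribute names on `pkt` are assumptions made only for illustration:

```python
def matches_channel(pkt, ssrc=None, mcast_addr=None, mcast_port=None) -> bool:
    """Return True if a captured packet belongs to the monitored live channel.

    Either criterion from the text can be used: the RTP SSRC, or the multicast
    destination address and port of the channel.
    """
    if ssrc is not None:
        return pkt.rtp_ssrc == ssrc
    return pkt.dst_addr == mcast_addr and pkt.dst_port == mcast_port
```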
  • Video service information includes: video coding configuration parameters and FEC capability information.
  • The video quality evaluation device may perform video quality monitoring on the video to be evaluated according to the monitoring instruction. The video quality evaluation device first obtains video service information, where the video service information is service information of the video that the video quality evaluation device is to evaluate.
  • the video service information includes: video coding configuration parameters and FEC capability information.
  • the video coding configuration parameter may be a video coding type, a frame rate, and a resolution used by the multicast server to send a live video stream.
  • the FEC capability information refers to the FEC information used by the MRF to perform FEC encoding on the live video stream.
  • the FEC capability information may include: FEC source block size and FEC redundancy.
  • the video quality evaluation device can obtain the video to be evaluated according to the video service information.
  • the video to be evaluated may be lost during the transmission process.
  • the processing of the first source block in the video to be evaluated is taken as an example.
  • the first source block is the first video source block.
  • Data packets of the first source block may be lost during transmission, so the number of lost data packets of the first source block is first acquired. Because the MRF inserted FEC redundant data packets into the first source block, the number of FEC redundant data packets of the first source block is also counted. If the number of lost data packets of the first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, the lost data packets of the first source block can be recovered by using the FEC redundant data packets.
  • a digest message is generated for each of the non-lost data packet and the lost data packet of the first source block.
  • the structure of the digest message can refer to the structure of the stripped packets in the ITU-T P.1201.2 standard.
  • the method for generating the digest message can refer to the calculation method of stripped packets in the ITU-T P.1201.2 standard.
  • The digest message generated for the non-lost data packets of the first source block is defined as the "first digest message", and the digest message generated for the lost data packets of the first source block is defined as the "second digest message".
  • the number of lost data packets of the first source block is calculated as follows:
  • A1. Obtain the starting RTP sequence number and the ending RTP sequence number from the FEC redundant data packet, and obtain the RTP sequence numbers of the non-lost data packets of the first source block;
  • A2. Calculate the number of lost data packets of the first source block according to the starting RTP sequence number, the ending RTP sequence number, and the RTP sequence numbers of the non-lost data packets of the first source block.
  • As described above for the FEC redundant data packet, the starting RTP sequence number and the ending RTP sequence number determine the total number of RTP data packets in the first source block; by excluding the RTP sequence numbers of the non-lost data packets, the number of lost data packets of the first source block can be calculated, as sketched below.
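  • A minimal sketch of steps A1/A2; the explicit handling of 16-bit RTP sequence-number wraparound is an assumption added for completeness and is not spelled out in the text:

```python
def count_lost_packets(start_seq: int, end_seq: int, received_seqs: set[int]) -> int:
    """Count lost packets of one source block.

    `start_seq` and `end_seq` come from the FEC redundant data packet;
    `received_seqs` holds the RTP sequence numbers of the non-lost packets.
    """
    total = ((end_seq - start_seq) & 0xFFFF) + 1            # packets in the block
    expected = {(start_seq + i) & 0xFFFF for i in range(total)}
    return len(expected - received_seqs)                    # exclude non-lost packets
```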
  • the video quality assessment method provided by the embodiment of the present application may include the following steps in addition to the foregoing implementation steps:
  • C1. Acquire FEC capability information of the first source block, where the FEC capability information includes: the FEC source block size and the FEC redundancy.
  • the video quality assessment device in the embodiment of the present application may obtain the FEC capability information, and use the FEC capability information to determine the number of FEC redundant data packets of the first source block.
  • Step C1 of acquiring the FEC source block size and the FEC redundancy of the first source block includes, for example:
  • parsing the control packets exchanged between the receiving device of the video to be evaluated and the video server to obtain the FEC source block size and the FEC redundancy;
  • the video quality assessment device may obtain the FEC capability information in multiple manners.
  • the video quality assessment device may obtain the FEC capability information from the MRF or the receiving device of the video to be evaluated.
  • the receiving device of the video to be evaluated may specifically be an STB.
  • the video quality assessment device can view the MRF configuration file, the STB manual, and the like to obtain the FEC source block size and the FEC redundancy, and the FEC redundancy refers to the number of redundant packets in each FEC source block.
  • Alternatively, the video quality assessment device obtains the FEC capability information by parsing the control packets exchanged between the STB and the video server.
  • the STB and the video server usually interact with each other through Real Time Streaming Protocol (RTSP) or Hypertext Transfer Protocol (HTTP).
  • The FEC information carried in these control messages includes the FEC source block size and the number of FEC redundant packets.
  • the FEC capability information can be obtained by parsing such control messages.
  • the video quality evaluation device can also obtain the FEC capability information by parsing the FEC redundant data packet, and the FEC redundant data packet includes the FEC source block size and the FEC redundancy.
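  • A hypothetical parsing sketch only: the real header layout is the one given in Table 1 of the application and is not reproduced here, so the field offset and sizes below are assumptions made purely for illustration:

```python
import struct

def parse_fec_capability(fec_payload: bytes) -> tuple[int, int]:
    """Extract (FEC source block size, FEC redundancy) from an FEC redundant packet.

    Assumes, for illustration only, that the two values are carried as
    consecutive 16-bit fields at a fixed offset in the FEC packet payload.
    """
    FEC_CAPABILITY_OFFSET = 4                               # assumed offset
    block_size, redundancy = struct.unpack_from("!HH", fec_payload, FEC_CAPABILITY_OFFSET)
    return block_size, redundancy
```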
  • The second digest message generated for the lost data packets of the first source block includes: the RTP sequence numbers of the lost data packets of the first source block, the payload size, and digest information of the video transport stream (TS) packets in the lost data packets of the first source block.
  • The payload size of the lost data packets of the first source block may be obtained by multiplying the number of lost data packets by the size of a lost data packet; for the manner of generating the digest information of the video TS packets, refer to the way that summary information is generated from stripped packets in the ITU-T P.1201.2 standard.
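  • A small sketch of assembling the second digest message as described above; the field names and the assumption of a constant packet size are editorial, and the TS-packet digest itself would be produced as in ITU-T P.1201.2:

```python
def build_second_digest(lost_seqs: list[int], packet_payload_size: int, ts_digest: dict) -> dict:
    """Assemble the second digest message for the lost packets of one source block.

    The payload size follows the text: number of lost packets multiplied by the
    (assumed constant) payload size of a lost packet.
    """
    return {
        "lost_rtp_seqs": lost_seqs,
        "payload_size": len(lost_seqs) * packet_payload_size,
        "ts_packet_digest": ts_digest,
    }
```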
  • The video quality evaluation device considers the recovery of the lost data packets of the video to be evaluated by the FEC technology, so the MOSV of the video to be evaluated can be calculated according to the first digest message and the second digest message; for the specific MOSV calculation method, refer to the MOSV calculation method in the ITU-T P.1201.2 standard.
  • Because the recovery of the lost data packets by the FEC technology is taken into account, the accuracy of the video quality assessment is increased, and the result better matches the user's real experience.
  • the video quality assessment method provided by the embodiment of the present application may include the following steps in addition to the foregoing implementation steps:
  • D1. When the number of lost data packets of the first source block is 0, or the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block, generate the first digest message for the non-lost data packets of the first source block.
  • D2. Calculate the MOSV of the video to be evaluated according to the first digest message.
  • When the number of lost data packets of the first source block is 0, no packet loss has occurred in the first source block; when the number of lost data packets of the first source block is greater than the number of FEC redundant data packets, the lost data packets in the first source block cannot be recovered by using the FEC redundant data packets.
  • In both cases, the first digest message is generated for the non-lost data packets of the first source block, and the MOSV of the video to be evaluated is finally calculated according to the first digest message.
  • Optionally, when the number of lost data packets of the first source block is 0, or the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block, the video quality assessment device may also discard the FEC redundant data packets of the first source block, thereby reducing the occupation of the FEC buffer queue.
  • Because the number of lost data packets of the first source block is 0, or the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block, the FEC technology cannot recover the lost data packets of the first source block, so the MOSV calculated using only the first digest message can represent the user's real video experience. The sketch below summarizes this per-source-block decision.
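  • The following sketch condenses the decision logic described above (function and field names are assumptions for illustration): a source block with no loss, or with more lost packets than FEC redundant packets, contributes only a first digest message, while a block whose loss is within the FEC recovery capability contributes both a first and a second digest message.

def classify_source_block(num_lost: int, num_fec_redundant: int) -> dict:
    """Decide how one source block contributes to the MOSV digest messages."""
    if num_lost == 0:
        # No loss: nothing to recover, the buffered FEC packets are not needed.
        return {"first_digest": True, "second_digest": False, "discard_fec": True}
    if num_lost > num_fec_redundant:
        # Loss exceeds the FEC recovery capability: evaluate with the
        # non-lost packets only.
        return {"first_digest": True, "second_digest": False, "discard_fec": True}
    # Loss is within the FEC recovery capability: the lost packets are treated
    # as recoverable and are also described by a second digest message.
    return {"first_digest": True, "second_digest": True, "discard_fec": False}

print(classify_source_block(0, 5))    # no loss
print(classify_source_block(8, 5))    # loss beyond FEC capability
print(classify_source_block(3, 5))    # recoverable loss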
  • In some embodiments of the present application, as shown in FIG. 2, the IPTV video system is further provided with a RET server.
  • the video quality evaluation method provided by the embodiment of the present application may include the following steps in addition to the foregoing implementation steps:
  • E1. Receive a retransmission request sent by the receiving device of the video to be evaluated to the RET server, where the retransmission request is used to request the RET server to retransmit the data packets that are lost and cannot be recovered by FEC.
  • E2. When a retransmission response returned by the RET server is received, generate a third digest message for the data packets that are lost and cannot be recovered by FEC.
  • When FEC and RET are both deployed in the IPTV video system, the multicast server sends the video to be evaluated to the MRF, and the MRF generates the corresponding FEC redundant data packets for the video to be evaluated.
  • After generating the FEC redundant data packets, the MRF sends the video to be evaluated and the FEC redundant data packets to the receiving device of the video to be evaluated, and also sends the video to be evaluated and the FEC redundant data packets to the RET server.
  • After receiving the video to be evaluated and the FEC redundant data packets from the MRF, the RET server saves them in a video data buffer, which is used to cache the video to be evaluated and the FEC redundant data packets so that the RET server can retransmit the lost video data at the request of the receiving device of the video to be evaluated.
  • For example, the video to be evaluated includes a first source block and a second source block; if the lost data packets of the first source block can be successfully recovered by the FEC technology but the lost data packets of the second source block cannot, the receiving device of the video to be evaluated may request the RET server to retransmit the lost data packets of the second source block.
  • When the video quality assessment device receives the retransmission response returned by the RET server, the video quality assessment device generates a third digest message for the data packets that are lost and cannot be recovered by FEC.
  • the manner in which the third digest message is generated may refer to the calculation method using the stripped packets in the ITU-T P.1201.2 standard.
  • In the foregoing implementation scenario in which steps E1 and E2 are performed, step 703 of calculating the video average experience score (MOSV) of the video to be evaluated according to the first digest message and the second digest message specifically includes: F1. Calculate the MOSV of the video to be evaluated according to the first digest message, the second digest message, and the third digest message.
  • In this embodiment of the present application, the video quality assessment device takes into account how the FEC technology and the RET technology recover the lost data packets of the video to be evaluated, so the MOSV of the video to be evaluated can be calculated according to the first digest message, the second digest message, and the third digest message.
  • For the specific calculation method of the MOSV, refer to the ITU-T P.1201.2 standard.
  • Because the MOSV calculated from the first digest message, the second digest message, and the third digest message in this embodiment of the present application considers the recovery of lost data packets by both the FEC technology and the RET technology, the accuracy of the video quality assessment is increased, and the result better matches the user's real experience.
  • For example, after the STB receives the video to be evaluated and the FEC redundant data packets, the STB uses them to detect whether packets have been lost. After detecting packet loss, the STB uses the FEC redundant data packets to recover the lost data packets as far as possible; if too many data packets are lost, FEC decoding fails, and the STB sends a retransmission request to the RET server. The video quality assessment device obtains the retransmission request sent by the STB.
  • The RET server sends a retransmission response to the STB according to the retransmission request, and the video quality assessment device obtains this retransmission response. From the retransmission response, the video quality assessment device determines that the lost data packets were successfully recovered, and can therefore generate the third digest message for the data packets successfully retransmitted by the RET server.
  • The MOSV is then calculated based on the first digest message, the second digest message, and the third digest message; because this embodiment of the present application considers the recovery of lost data packets by both the FEC technology and the RET technology, the accuracy of the video quality assessment is increased, and the result better matches the user's real experience.
  • As can be seen from the foregoing description of the embodiments of the present application, the video to be evaluated is first obtained, where the video to be evaluated includes FEC redundant data packets; when the number of lost data packets of the first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, a first digest message is generated for the non-lost data packets of the first source block, and a second digest message is generated for the lost data packets of the first source block; finally, the video average experience score (MOSV) of the video to be evaluated is calculated according to the first digest message and the second digest message.
  • Because this embodiment of the present application considers the packet loss of the first source block in the video to be evaluated and the recovery of the lost packets by the FEC redundant data packets, the first digest message and the second digest message are generated, and the MOSV of the video to be evaluated can be calculated from them. Compared with the prior art, in which the MOSV is evaluated only on the captured video data, this embodiment of the present application considers the ability of the FEC redundant data packets to recover lost data packets, which increases the accuracy of the video quality assessment and makes it better match the user's real video experience.
  • It should be noted that, in the foregoing embodiments of the present application, video quality impairment is mainly caused by video compression and by packet loss during network transmission.
  • Video compression impairment is related to the video bitrate, resolution, frame rate, and content complexity, so the video compression impairment can be calculated based on the video bitrate, resolution, frame rate, and content complexity.
  • When the video is transmitted over the network, packets may be lost; packet loss damages the quality of the affected frames, and if a damaged frame is a reference frame, the impairment keeps propagating to later frames. Therefore, when evaluating the network transmission impairment, the frame size, frame type, and frame-loss events of the video must first be determined, the degree of impairment is evaluated according to the size of each frame-loss event and the position where the loss occurs, and the network transmission impairment is then calculated according to the impairment condition.
  • In addition, the video quality assessment device in this embodiment of the present application can determine which lost data packets are successfully recovered by the FEC technology and the RET technology, so the final MOSV can be calculated by combining the video compression impairment, the network transmission impairment, and the recovery of the lost packets. For the MOSV calculation in this embodiment of the present application, refer to the way the MOSV is calculated using stripped packets in the ITU-T P.1201.2 standard; a rough sketch of how these pieces fit together follows.
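  • The sketch below only illustrates the structure of such a calculation: a compression-impairment term and a transmission-impairment term reduce a perfect score, and only the loss that FEC/RET could not repair feeds the transmission term. The placeholder functions and constants are assumptions; the actual model and its coefficients are defined by ITU-T P.1201.2, which is not reproduced here.

def compression_impairment(bitrate_kbps, resolution, frame_rate, complexity):
    """Placeholder for the P.1201.2 compression-impairment term (not the real formula)."""
    return 0.5  # illustrative constant

def transmission_impairment(unrecovered_loss_ratio):
    """Placeholder for the P.1201.2 transmission-impairment term (not the real formula)."""
    return 20.0 * unrecovered_loss_ratio  # illustrative scaling

def estimate_mosv(bitrate_kbps, resolution, frame_rate, complexity, unrecovered_loss_ratio):
    """Rough structural sketch: impairments are subtracted from a perfect score
    and the result is clipped to the 1..5 MOS range."""
    mosv = 5.0
    mosv -= compression_impairment(bitrate_kbps, resolution, frame_rate, complexity)
    mosv -= transmission_impairment(unrecovered_loss_ratio)
    return max(1.0, min(5.0, mosv))

# Residual loss (loss not repaired by FEC/RET) lowers the score:
print(estimate_mosv(8000, (1920, 1080), 25, 0.5, 0.00))  # -> 4.5
print(estimate_mosv(8000, (1920, 1080), 25, 0.5, 0.05))  # -> 3.5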
  • FIG. 8 is a schematic diagram of an interaction process between network elements in an IPTV video system provided by an embodiment of the present application.
  • the video quality evaluation device is deployed as a MOSV probe on a network device (for example, a router), and the video quality evaluation device can obtain video data sent from the multicast server to the receiving device of the video to be evaluated through port mirroring.
  • the IPTV video system has the FEC function.
  • The multicast server sends the live video stream to the MRF; the MRF enables the FEC function and inserts the FEC redundant stream; the STB receives the live video stream and the FEC redundant stream, uses the FEC redundant data packets to recover the lost data packets of the video source blocks in the live video stream, and then decodes and displays the live video stream.
  • This embodiment of the present application describes a method for optimizing the IPTV video quality assessment in this scenario; the specific method includes the following steps:
  • 1. The video quality monitoring system issues a monitoring command for a specific video channel to the MOSV probe.
  • The monitoring command may include the SSRC of the specific video channel, and the MOSV probe may use the SSRC to capture the live video stream.
  • Alternatively, the monitoring command may include the multicast address and multicast port of the specific video channel, and the MOSV probe may also capture the live video stream by using the multicast address and the multicast port.
  • 2. The MOSV probe acquires video service information.
  • The video service information may include the video coding configuration parameters of the head end and the FEC capability information.
  • The MOSV probe may obtain the video service information after step 1, or may obtain it in advance before step 1; this is not limited here.
  • the head end video coding configuration parameters may include: an encoding type, a frame rate, a resolution, and the like.
  • The MOSV probe may obtain the FEC capability information in several ways. It may obtain the FEC source block size and the FEC redundancy from the MRF or the STB, for example by checking the MRF configuration file or the STB manual. The STB and the multicast server also usually advertise their FEC capabilities and parameters to each other through RTSP or HTTP interaction, including the FEC source block size and the FEC redundancy, so the FEC capability information can be obtained by parsing such control packets.
  • For example, in channel list acquisition, a ChannelFECPort parameter may be added to support the FEC error correction function for a channel; ChannelFECPort indicates the port number on which the channel supports FEC, and is filled with the port number if the channel supports FEC and left empty otherwise (see the sketch after this paragraph).
  • The FEC capability information can also be obtained by parsing the FEC redundant data packets, whose headers contain the FEC source block size and the FEC redundancy.
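  • As a small illustration (the dictionary representation of a parsed channel-list entry and the URL in the example are assumptions, not the actual message format), a probe or STB could check the ChannelFECPort parameter as follows:

def channel_fec_port(channel_entry: dict):
    """Return the FEC port of a channel-list entry, or None if FEC is not supported.

    ChannelFECPort carries the port number when the channel supports FEC and is
    empty otherwise, as described above.
    """
    port = str(channel_entry.get("ChannelFECPort", "")).strip()
    return int(port) if port else None

print(channel_fec_port({"ChannelURL": "igmp://239.1.1.1:2345", "ChannelFECPort": "2344"}))  # -> 2344
print(channel_fec_port({"ChannelURL": "igmp://239.1.1.2:2345", "ChannelFECPort": ""}))      # -> None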
  • 3. The MOSV probe may set up two buffers to capture the live video stream and the FEC redundant stream separately. First, the live video stream and the FEC redundant stream of the channel to be evaluated are extracted according to the multicast address, and the two streams are then distinguished by port or by RTP payload type. The original video data packets in the live stream are parsed to generate original video digest messages, and the original video digest messages are placed in a buffer queue to solve the out-of-order problem.
  • 4. Parse the FEC redundant data packets and simulate the FEC decoding process. First, the starting and ending RTP sequence numbers contained in the structure of the FEC redundant data packet are used to synchronize the original video data packets with the FEC redundant data packets.
  • In the digest message queue generated from the original video data packets, the RTP sequence numbers of the packets are used to count the number n of lost packets of the source block. If the source block has no packet loss, the corresponding FEC data packets in the FEC buffer queue are discarded, and step 6 is performed. If the source block has packet loss and the number of received FEC redundant data packets is m, then when m < n the loss exceeds the FEC recovery capability and the data packets lost by the current source block cannot be recovered; in this case the corresponding FEC data packets in the FEC buffer queue are discarded, and step 6 is performed for evaluation based on the original video digest messages. When m ≥ n, the lost data packets of the current source block can be recovered by the FEC technology, and step 5 below is performed to generate, for the lost data packets, digest messages similar to the original video digest messages.
  • 5. Generate recoverable digest messages for the lost data packets that FEC can recover. A recoverable digest message mainly includes the RTP sequence numbers of the lost data packets, the payload size, the summary information of the video TS packets, and so on. The total number l of data packets lost in one transmission is determined from the RTP sequence numbers of the original video digest messages; the total number k of lost video TS packets is determined from the number of lost data packets and the continuity counter (cc) flag in the video TS packets; the number of lost video TS packets per RTP packet is then k/l, and assuming a payloadLength of 184 for each video TS packet, the payloadLength of each lost RTP packet is 184*k/l.
  • The digest format of the original video digest messages and the digest format of the recoverable digest messages are given in Table 2 below. For example, the last video TS packet in digest message a has cc = 3 and the first video TS packet in digest message c has cc = 9; the recoverable digest message b is contiguous with a and c, and the cc numbers of the video TS packets in b are consecutive with the cc numbers of the video TS packets in a and c, so the cc numbers of the video TS packets in b run from 4 to 8 and the number of lost video TS packets is 9 - 3 - 1 = 5. a and c are two consecutive digest messages, and subtracting their RTP sequence numbers gives the total number of data packets lost in this transmission, l = 4552 - 4550 - 1 = 1; the total number of lost video TS packets is k = 5, and the RTP payloadLength of the lost packet is 184*5/1 = 920. This information is used to generate the digest message b for the lost data packet.
  • 6. After the digest messages to be evaluated are taken out of the queue, the MOSV is calculated according to the digest messages to be evaluated, which may include the original video digest messages and the recoverable digest messages.
  • Table 2 shows the content of the digest messages; the sketch below reproduces the worked example above in code.
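  • The following sketch reconstructs the recoverable digest message b of the example above from its two neighbouring digest messages a and c (the dictionary layout and field names are assumptions for illustration; the 4-bit wrap of the TS continuity counter is standard MPEG-TS behaviour).

TS_PAYLOAD_LENGTH = 184  # payload bytes per video TS packet, as assumed above

def recoverable_digest(prev_digest, next_digest):
    """Build a recoverable digest message between two received digest messages."""
    lost_rtp = (next_digest["rtp_seq"] - prev_digest["rtp_seq"] - 1) % 65536    # l
    lost_ts = (next_digest["first_cc"] - prev_digest["last_cc"] - 1) % 16       # k (cc is 4-bit)
    return {
        "rtp_sequence_numbers": [prev_digest["rtp_seq"] + i + 1 for i in range(lost_rtp)],
        "ts_cc_numbers": [(prev_digest["last_cc"] + i + 1) % 16 for i in range(lost_ts)],
        "rtp_payload_length": TS_PAYLOAD_LENGTH * lost_ts // max(lost_rtp, 1),  # 184*k/l
    }

a = {"rtp_seq": 4550, "last_cc": 3}   # digest message a: last TS packet has cc = 3
c = {"rtp_seq": 4552, "first_cc": 9}  # digest message c: first TS packet has cc = 9
print(recoverable_digest(a, c))
# -> rtp_sequence_numbers=[4551], ts_cc_numbers=[4, 5, 6, 7, 8], rtp_payload_length=920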
  • FIG. 9 is a schematic diagram of an interaction process between network elements in an IPTV video system provided by an embodiment of the present application.
  • When both FEC and RET are used in the IPTV video system and the STB detects packet loss, the STB first uses the FEC redundant data packets to recover the lost data packets; if too many packets are lost and FEC decoding fails, the STB then sends a retransmission request to the RET server to obtain the lost data packets.
  • When evaluating the user's video experience in this case, the compensation capability of both FEC and RET for packet loss must be considered; the specific method is as follows:
  • 1. The video quality monitoring system issues a monitoring command for a specific video channel to the MOSV probe. The monitoring command may include the SSRC of the specific video channel and a specific user (for example, the IP address of the user equipment), and the MOSV probe may use the SSRC to capture the live video stream.
  • Alternatively, the monitoring command may include the multicast address and multicast port of the specific video channel and a specific user (for example, the IP address of the user equipment), and the MOSV probe may also capture the live video stream by using the multicast address and the multicast port.
  • the IP address of the user equipment is used to capture the unicast retransmission request and the retransmission response stream.
  • 2. The MOSV probe acquires RET information.
  • the RET information may include whether the STB enables RET, RET server address, and the like.
  • 3. The MOSV probe sets up two buffers to capture the live video stream and the RET retransmission stream separately.
  • The live video stream of the channel to be evaluated is first filtered according to the multicast address, and the RET retransmission stream, including retransmission requests and retransmission responses, is then captured by the IP address of the user equipment.
  • 4. The MOSV probe generates digest messages for the data packets that can be recovered or that are successfully retransmitted. After detecting packet loss, the STB first uses the FEC redundant data packets to recover the lost data packets; if the FEC redundant data packets cannot recover a lost data packet, the STB sends a retransmission request to the RET server. The MOSV probe can therefore use the retransmission requests to determine whether a lost data packet was recovered by the FEC redundant data packets: if a retransmission request for the lost data packet is received, the packet could not be recovered by FEC; if no retransmission request for it is received, the packet has already been recovered by the FEC redundant data packets (see the sketch below).
  • For the lost data packets recovered by the FEC redundant data packets, step 5 in the foregoing embodiment may be performed to generate recoverable digest messages.
  • For the lost data packets for which a retransmission request and the corresponding retransmission response are received, retransmission-success digest messages may likewise be generated as in step 5 of the foregoing embodiment.
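  • The sketch below shows the probe-side classification of a lost packet described above; the function name, the set-based bookkeeping, and the handling of the "request without response" case are assumptions made for illustration.

def classify_lost_packet(rtp_seq, retransmission_requests, retransmission_responses):
    """Classify one lost packet from the MOSV probe's point of view."""
    if rtp_seq not in retransmission_requests:
        return "recovered_by_fec"   # generate a recoverable digest message (step 5)
    if rtp_seq in retransmission_responses:
        return "recovered_by_ret"   # generate a retransmission-success digest message
    return "unrecovered"            # loss remains visible to the MOSV calculation

requests, responses = {4551, 4600}, {4551}
print([classify_lost_packet(s, requests, responses) for s in (4500, 4551, 4600)])
# -> ['recovered_by_fec', 'recovered_by_ret', 'unrecovered']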
  • 5. After the digest messages to be evaluated are taken out of the queue, the MOSV is calculated according to the digest messages to be evaluated.
  • The digest messages to be evaluated may include the original video digest messages, the recoverable digest messages, and the retransmission-success digest messages.
  • As can be seen from the foregoing examples, in this embodiment of the present application the video quality assessment device obtains the relevant parameters of the channel to be evaluated, including the multicast address, the multicast port, the FEC source block size, the number of FEC redundant packets, the RET server address, and so on. If the system enables only the FEC function, the video quality assessment device needs to obtain the FEC redundant stream in addition to the live video stream; for each video source block, it counts the number of received FEC redundant data packets and the number of lost data packets, and determines from these whether the lost data packets in the current source block can be recovered.
  • When the lost data packets can be recovered using the FEC redundant data packets, corresponding recoverable digest messages need to be created for the lost packets and inserted into the original video digest message queue, to be used together in the MOSV calculation of the video to be evaluated.
  • If the system enables both the FEC and RET functions, the video quality assessment device also needs to monitor the RET retransmission stream.
  • For the successfully retransmitted data packets, corresponding retransmission-success digest messages need to be generated and inserted into the original video digest message queue, to be used together in the MOSV calculation of the video to be evaluated.
  • The video quality assessment device takes the digest messages out of the digest message queue, analyzes them to obtain the relevant video attributes, and finally calculates the MOSV value of the channel.
  • The technical solution of the present application considers the FEC and RET technologies used by the service quality assurance system.
  • On top of the original assessment method, the influence of the FEC redundant data packets and the RET retransmission data packets on the lost data packets is taken into account, which increases the accuracy of the video quality assessment and makes it better match the user's real experience.
  • Because the FEC and RET fault tolerance capabilities are considered when the video quality is evaluated at a network node, the accuracy of the video quality assessment can be improved and the user experience can be truly reflected.
  • Referring to FIG. 10-a, a video quality assessment device 1000 provided in an embodiment of the present application may include a video acquisition module 1001, a digest generation module 1002, and a video evaluation module 1003, where:
  • a video acquisition module 1001 configured to acquire a video to be evaluated, where the video to be evaluated includes a forward error correction FEC redundant data packet;
  • the digest generation module 1002 is configured to: when the number of lost data packets of the first source block in the to-be-evaluated video is less than or equal to the number of FEC redundant data packets of the first source block, Generating a first digest message from a non-lost data packet of a source block, and generating a second digest message for the lost data packet of the first source block;
  • the video evaluation module 1003 is configured to calculate a video average experience score (MOSV) of the to-be-evaluated video according to the first summary message and the second summary message.
  • the digest generating module 1002 includes:
  • The sequence number obtaining module 10021 is configured to obtain the starting Real-time Transport Protocol (RTP) sequence number and the ending RTP sequence number from the FEC redundant data packets, and to obtain the RTP sequence numbers of the non-lost data packets of the first source block.
  • The packet loss statistics module 10022 is configured to calculate the number of lost data packets of the first source block according to the starting RTP sequence number, the ending RTP sequence number, and the RTP sequence numbers of the non-lost data packets of the first source block.
  • the video quality assessment apparatus 1000 further includes:
  • the FEC information obtaining module 1004 is configured to acquire an FEC source block size and an FEC redundancy of the first source block.
  • The FEC information obtaining module 1004 is specifically configured to: obtain the FEC source block size and the FEC redundancy from the MRF or from the receiving device of the video to be evaluated; or parse the control packets exchanged between the receiving device of the video to be evaluated and the video server to obtain the FEC source block size and the FEC redundancy; or parse the FEC redundant data packets of the first source block to obtain the FEC source block size and the FEC redundancy.
  • The digest generation module 1002 is further configured to generate the first digest message for the non-lost data packets of the first source block when the number of lost data packets of the first source block is 0, or when the number of lost data packets of the first source block is greater than the number of FEC redundant data packets of the first source block;
  • the video evaluation module 1003 is further configured to calculate the MOSV of the video to be evaluated according to the first digest message.
  • In some embodiments, the video quality assessment device further includes a receiving module 1005, configured to receive a retransmission request sent by the receiving device of the video to be evaluated to the retransmission RET server, where the retransmission request is used to request the RET server to retransmit the data packets that are lost and cannot be recovered by FEC;
  • the digest generating module 1002 is further configured to: when receiving the retransmission response returned by the RET server, generate a third digest message for the data packet that is lost and cannot be recovered by FEC;
  • the video evaluation module 1003 is specifically configured to calculate the MOSV of the video to be evaluated according to the first digest message, the second digest message, and the third digest message.
  • the second digest message includes an RTP sequence number of the lost data packet of the first source block, a payload size, and a video transport stream in the lost data packet of the first source block. Summary information of the TS package.
  • From the foregoing description of this embodiment of the present application, the video to be evaluated is first obtained, where the video to be evaluated includes FEC redundant data packets.
  • When the number of lost data packets of the first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, a first digest message is generated for the non-lost data packets of the first source block and a second digest message is generated for the lost data packets of the first source block; finally, the video average experience score (MOSV) of the video to be evaluated is calculated according to the first digest message and the second digest message.
  • Because this embodiment of the present application considers the packet loss of the first source block in the video to be evaluated and the recovery of the lost packets by the FEC redundant data packets, the first digest message and the second digest message are generated, and the MOSV of the video to be evaluated can be calculated from them. Compared with the prior art, in which the MOSV is evaluated only on the captured video data, this embodiment of the present application considers the ability of the FEC redundant data packets to recover lost data packets, which increases the accuracy of the video quality assessment and makes it better match the user's real video experience.
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a program that, when executed, performs some or all of the steps described in the foregoing method embodiments.
  • Referring to FIG. 11, another video quality assessment device 1100 provided in an embodiment of the present application includes a receiver 1101, a transmitter 1102, a processor 1103, and a memory 1104 (the number of processors 1103 in the video quality assessment device 1100 may be one or more; one processor is taken as an example in FIG. 11).
  • In some embodiments of the present application, the receiver 1101, the transmitter 1102, the processor 1103, and the memory 1104 may be connected by a bus or in another manner; FIG. 11 takes a bus connection as an example.
  • Memory 1104 can include read only memory and random access memory and provides instructions and data to processor 1103. A portion of the memory 1104 may also include a non-volatile random access memory (English name: Non-Volatile Random Access Memory, English abbreviation: NVRAM).
  • the memory 1104 stores operating systems and operational instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, wherein the operational instructions can include various operational instructions for implementing various operations.
  • the operating system can include a variety of system programs for implementing various basic services and handling hardware-based tasks.
  • the processor 1103 controls the operation of the video quality evaluation device.
  • the processor 1103 may also be referred to as a central processing unit (English name: Central Processing Unit, English abbreviation: CPU).
  • the components of the video quality evaluation device are coupled together by a bus system.
  • the bus system may include a power bus, a control bus, and a status signal bus in addition to the data bus.
  • the various buses are referred to as bus systems in the figures.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 1103 or implemented by the processor 1103.
  • the processor 1103 can be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1103 or an instruction in a form of software.
  • the processor 1103 may be a general-purpose processor, a digital signal processor (English full name: digital signal processing, English abbreviation: DSP), an application specific integrated circuit (English name: Application Specific Integrated Circuit, English abbreviation: ASIC), field programmable Gate array (English name: Field-Programmable Gate Array, English abbreviation: FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 1104, and the processor 1103 reads the information in the memory 1104 and performs the steps of the above method in combination with its hardware.
  • The receiver 1101 may be configured to receive input digital or character information and to generate signal inputs related to the settings and function control of the video quality evaluation device; the transmitter 1102 may include a display device such as a display screen, and the transmitter 1102 may be configured to output digital or character information through an external interface.
  • the processor 1103 is configured to execute the instruction in the memory, and perform the video quality evaluation method described in the foregoing embodiment.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the connection relationship between the modules indicates that there is a communication connection between them, and specifically may be implemented as one or more communication buses or signal lines.
  • The technical solutions of the present application may essentially be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk of a computer, a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc, and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or in a wireless manner (for example, over infrared, radio, or microwave).
  • The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center that integrates one or more usable media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).


Abstract

An embodiment of the present application discloses a video quality assessment method and device, which are used to improve the accuracy of video quality assessment so that the calculated MOSV better matches the user's real video experience. An embodiment of the present application provides a video quality assessment method, including: obtaining a video to be evaluated, where the video to be evaluated includes forward error correction (FEC) redundant data packets; when the number of lost data packets of a first source block in the video to be evaluated is less than or equal to the number of FEC redundant data packets of the first source block, generating a first digest message for the non-lost data packets of the first source block and generating a second digest message for the lost data packets of the first source block; and calculating a video average experience score (MOSV) of the video to be evaluated according to the first digest message and the second digest message.

Description

一种视频质量评估方法和设备 技术领域
本申请涉及计算机技术领域,尤其涉及一种视频质量评估方法和设备。
背景技术
网络协定电视(Internet Protocol Television,IPTV)是用宽频网络作为介质传送电视信息的一种系统,用户付费给电信运营商,电信运营商给用户提供IPTV视频头端(headend)、内容分发网络(英文:content delivery network,简称:CDN)、机顶盒(英文:set-top box,简称:STB)以及其他用于保证IPTV视频业务正常运行的网络设备等,并对用户的消费内容及IPTV视频业务的质量负责。
现有技术中,IPTV的监控方案通过在网络中部署视频质量评估设备(即探针)捕获视频直播流,解析视频相关参数,统计视频丢包情况,进而评估视频平均体验得分(Mean Opinion Score of Video,MOSV)。MOSV是一种衡量网络视频质量好坏的评价标准,此标准通过检测所接收的视频的片源压缩损伤以及网络传输损伤,将这些损伤对用户观看视频的体验造成的影响进行综合建模打分,打分标准来自国际电信联盟-电信标准(International Telecommunication Union-Telecommunication,ITU-T)P.910,一般来说,4分以上为好,4分至3分为一般,3分以下为差。
IPTV视频业务是采用实时传输协议(Real-Time Transport Protocol,RTP)和用户数据报协议(User Datagram Protocol,UDP)承载的,因为UDP是不可靠传输协议,所以媒体传输不可靠,容易出现丢包导致马赛克、花屏,导致用户体验较差,严重影响了IPTV业务的发展。因此运营商通常都会部署前向纠错(Front Error Correction,FEC)实现业务保障,从而减少视频数据在传输过程中的丢包、误码等情况对解码的不良影响。
现有技术中,为了提升IPTV网络的业务质量,网络运营商往往会采用各种差错恢复机制来增强IPTV视频系统对网络丢包的容错能力。而现有技术在捕获视频数据后,使用捕获到的该视频数据进行视频质量评估,导致计算出的MOSV不准确,该MOSV并不能反映用户的真实视频体验。
发明内容
本申请实施例提供了一种视频质量评估方法和设备,用于提高视频质量评估的准确性,计算出的MOSV更加符合用户的真实视频体验。
解决上述技术问题,本申请实施例提供以下技术方案:
第一方面,本申请实施例提供一种视频质量评估方法,包括:获取待评估视频,所述待评估视频包括前向纠错FEC冗余数据包;当所述待评估视频中的第一源块的丢失数据包的个数小于或等于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文,以及为所述第一源块的丢失数据包生成第二摘要报文;根据所述 第一摘要报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV。
在本申请实施例中,由于考虑到了待评估视频中第一源块的数据包丢失情况以及FEC冗余数据包对丢包的恢复情况,从而计算出了第一摘要报文和第二摘要报文,通过该第一摘要报文和第二摘要报文可计算出待评估视频的MOSV,相比较于现有技术中只对捕获到的视频数据评估MOSV,本申请实施例中考虑到了FEC冗余数据包对丢失数据包的恢复能力,因此增加了视频质量评估的准确性,使之更加符合用户的真实视频体验。
结合第一方面,在第一方面的第一种可能的实现方式中,所述第一源块的丢失数据包的个数通过如下方式计算:从所述FEC冗余数据包中获取起始实时传输协议RTP序列号和结束RTP序列号,以及获取所述第一源块的未丢失数据包的RTP序列号;根据所述起始RTP序列号和所述结束RTP序列号、所述第一源块的未丢失数据包的RTP序列号计算所述第一源块的丢失数据包的个数。起始RTP序列号和结束RTP序列号可以确定出第一源块中的总RTP数据包个数,再通过对未丢失数据包的RTP序列号的排除,从而可以计算出第一源块的丢失数据包的个数。
结合第一方面或第一方面的第一种可能的实现方式,在第一方面的第二种可能的实现方式中,所述方法还包括:获取所述第一源块的FEC源块大小和FEC冗余度。本申请实施例中视频质量评估设备可以获取到FEC源块大小和FEC冗余度,从而使用该FEC源块大小和FEC冗余度可以确定出第一源块的FEC冗余数据包的个数。
结合第一方面的第二种可能的实现方式,在第一方面的第三种可能的实现方式中,所述获取所述第一源块的FEC源块大小和FEC冗余度,包括:从所述MRF或所述待评估视频的接收设备获取到所述FEC源块大小和FEC冗余度;或,对所述待评估视频的接收设备与视频服务器之间交互的控制报文进行解析,从而得到所述FEC源块大小和FEC冗余度;或,对所述第一源块的FEC冗余数据包进行解析,从而得到所述FEC源块大小和FEC冗余度。视频质量评估设备通过前述的三种方式可以获取到FEC源块大小和FEC冗余度,进而确定出第一源块的FEC冗余数据包的个数。
结合第一方面或第一方面的第一种可能或第二种可能或第三种可能的实现方式,在第一方面的第四种可能的实现方式中,所述方法还包括:当所述第一源块的丢失数据包的个数为0,或所述第一源块的丢失数据包的个数大于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文;根据所述第一摘要报文计算所述待评估视频的MOSV。第一源块的丢包个数为0时,说明第一源块中没有发生数据包丢失,第一源块的丢失数据包的个数大于FEC冗余数据包的个数时,说明第一源块无法通过FEC冗余数据包恢复出第一源块中的丢失数据包。在这两种情况下,为第一源块的未丢失数据包生成第一摘要报文,最后根据该第一摘要报文计算待评估视频的MOSV,因此只使用第一摘要报文计算出的MOSV可以表示用户的真实视频体验。
结合第一方面或第一方面的第一种可能或第二种可能或第三种可能或第四种可能的实现方式,在第一方面的第五种可能的实现方式中,所述方法还包括:接收所述待评估视频的接收设备发送给重传RET服务器的重传请求,所述重传请求用于向所述RET服务器请求重传丢失且无法通过FEC恢复的数据包;当接收到所述RET服务器返回的重传响应时,为所述丢失且无法通过FEC恢复的数据包生成第三摘要报文;所述根据所述第一摘要 报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV,具体包括:根据所述第一摘要报文、所述第二摘要报文和所述第三摘要报文计算所述待评估视频的MOSV。在本申请的实施例中,视频质量评估设备考虑到了FEC技术和RET技术对待评估视频的丢失数据包的恢复情况,从而可以根据第一摘要报文、第二摘要报文和第三摘要报文计算待评估视频的MOSV,考虑了FEC技术和RET技术对丢失数据包的恢复情况,增加了视频质量评估的准确性,使之更加符合用户的真实体验。
结合第一方面或第一方面的第一种可能或第二种可能或第三种可能或第四种可能或第五种可能的实现方式,在第一方面的第六种可能的实现方式中,所述第二摘要报文包括:所述第一源块的丢失数据包的RTP序列号、负载大小和所述第一源块的丢失数据包中的视频传输流视频TS包的摘要信息。
第二方面,本申请实施例还提供一种视频质量评估设备,包括:视频获取模块,用于获取待评估视频,所述待评估视频包括前向纠错FEC冗余数据包;摘要生成模块,用于当所述待评估视频中的第一源块的丢失数据包的个数小于或等于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文,以及为所述第一源块的丢失数据包生成第二摘要报文;视频评估模块,用于根据所述第一摘要报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV。
在本申请实施例中,由于考虑到了待评估视频中第一源块的数据包丢失情况以及FEC冗余数据包对丢包的恢复情况,从而计算出了第一摘要报文和第二摘要报文,通过该第一摘要报文和第二摘要报文可计算出待评估视频的MOSV,相比较于现有技术中只对捕获到的视频数据评估MOSV,本申请实施例中考虑到了FEC冗余数据包对丢失数据包的恢复能力,因此增加了视频质量评估的准确性,使之更加符合用户的真实视频体验。
结合第二方面,在第二方面的第一种可能的实现方式中,所述摘要生成模块,包括:序列号获取模块,用于从所述FEC冗余数据包中获取起始实时传输协议RTP序列号和结束RTP序列号,以及获取所述第一源块的未丢失数据包的RTP序列号;丢包统计模块,用于根据所述起始RTP序列号和所述结束RTP序列号、所述第一源块的未丢失数据包的RTP序列号计算所述第一源块的丢失数据包的个数。起始RTP序列号和结束RTP序列号可以确定出第一源块中的总RTP数据包个数,再通过对未丢失数据包的RTP序列号的排除,从而可以计算出第一源块的丢失数据包的个数。
结合第二方面或第二方面的第一种可能的实现方式,在第二方面的第二种可能的实现方式中,所述视频质量评估设备,还包括:FEC信息获取模块,用于获取所述第一源块的FEC源块大小和FEC冗余度。本申请实施例中视频质量评估设备可以获取到FEC源块大小和FEC冗余度,从而使用该FEC源块大小和FEC冗余度可以确定出第一源块的FEC冗余数据包的个数。
结合第二方面的第二种可能的实现方式,在第二方面的第三种可能的实现方式中,所述FEC信息获取模块,具体用于从所述MRF或所述待评估视频的接收设备获取到所述FEC源块大小和FEC冗余度;或,对所述待评估视频的接收设备与视频服务器之间交互的控制报文进行解析,从而得到所述FEC源块大小和FEC冗余度;或,对所述第一源块的FEC冗余数据包进行解析,从而得到所述FEC源块大小和FEC冗余度。视频质量评估设备通 过前述的三种方式可以获取到FEC源块大小和FEC冗余度,进而确定出第一源块的FEC冗余数据包的个数。
结合第二方面或第二方面的第一种可能或第二种可能或第三种可能的实现方式,在第二方面的第四种可能的实现方式中,所述摘要生成模块,还用于当所述第一源块的丢失数据包的个数为0,或所述第一源块的丢失数据包的个数大于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文;所述视频评估模块,还用于根据所述第一摘要报文计算所述待评估视频的MOSV。第一源块的丢包个数为0时,说明第一源块中没有发生数据包丢失,第一源块的丢失数据包的个数大于FEC冗余数据包的个数时,说明第一源块无法通过FEC冗余数据包恢复出第一源块中的丢失数据包。在这两种情况下,为第一源块的未丢失数据包生成第一摘要报文,最后根据该第一摘要报文计算待评估视频的MOSV,因此只使用第一摘要报文计算出的MOSV可以表示用户的真实视频体验。
结合第二方面或第二方面的第一种可能或第二种可能或第三种可能或第四种可能的实现方式,在第二方面的第五种可能的实现方式中,所述视频质量评估设备,还包括:接收模块,其中,所述接收模块,用于接收所述待评估视频的接收设备发送给重传RET服务器的重传请求,所述重传请求用于向所述RET服务器请求重传丢失且无法通过FEC恢复的数据包;所述摘要生成模块,还用于当接收到所述RET服务器返回的重传响应时,为所述丢失且无法通过FEC恢复的数据包生成第三摘要报文;所述视频评估模块,具体用于根据所述第一摘要报文、所述第二摘要报文和所述第三摘要报文计算所述待评估视频的MOSV。在本申请的实施例中,视频质量评估设备考虑到了FEC技术和RET技术对待评估视频的丢失数据包的恢复情况,从而可以根据第一摘要报文、第二摘要报文和第三摘要报文计算待评估视频的MOSV,考虑了FEC技术和RET技术对丢失数据包的恢复情况,增加了视频质量评估的准确性,使之更加符合用户的真实体验。
结合第二方面或第二方面的第一种可能或第二种可能或第三种可能或第四种可能或第五种可能的实现方式,在第二方面的第六种可能的实现方式中,所述第二摘要报文包括:所述第一源块的丢失数据包的RTP序列号、负载大小和所述第一源块的丢失数据包中的视频传输流视频TS包的摘要信息。
第三方面,本申请实施例还提供另一种视频质量评估设备,处理器,存储器,接收器、发射器和总线;所述处理器、接收器、发射器、存储器通过所述总线相互的通信;所述接收器,用于接收数据;所述发射器,用于发送数据;所述存储器,用于存储指令;所述处理器,用于执行所述存储器中的所述指令,执行如前述第一方面中任一项所述的方法。
在本申请的第四方面中,视频质量评估设备的组成模块还可以执行前述第一方面以及各种可能的实现方式中所描述的步骤,详见前述对第一方面以及各种可能的实现方式中的说明。
本申请的第五方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
本申请的第六方面提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
附图说明
图1为本申请实施例提供的视频质量评估方法所应用的一种IPTV视频系统架构示意图;
图2为本申请实施例提供的视频质量评估方法所应用的另一种IPTV视频系统架构示意图;
图3为本申请实施例提供的FEC实现过程示意图;
图4为本申请实施例提供的RET实现过程示意图;
图5为本申请实施例提供的IPTV视频系统的应用场景示意图;
图6为本申请实施例提供的STB的组成结构示意图;
图7为本申请实施例提供的一种视频质量评估方法的流程方框示意图;
图8为本申请实施例提供的IPTV视频系统内各个网元之间的一种交互流程示意图;
图9为本申请实施例提供的IPTV视频系统内各个网元之间的另一种交互流程示意图;
图10-a为本申请实施例提供的一种视频质量评估设备的组成结构示意图;
图10-b为本申请实施例提供的一种摘要生成模块的组成结构示意图;
图10-c为本申请实施例提供的另一种视频质量评估设备的组成结构示意图;
图10-d为本申请实施例提供的另一种视频质量评估设备的组成结构示意图;
图11为本申请实施例提供的另一种视频质量评估设备的组成结构示意图。
具体实施方式
本申请实施例提供了一种视频质量评估方法和设备,用于提高视频质量评估的准确性,更加符合用户的真实视频体验。
下面结合附图,对本申请的实施例进行描述。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
首先介绍本申请实施例的方法适用的系统架构,请参阅图1所示,为本申请实施例提供的视频质量评估方法所应用的一种系统架构图,主要包括:视频质量评估设备101、视频质量监控系统102和多媒体中继功能体(Multimedia Relay Function,MRF)103。其中,视频质量监控系统102用于向视频质量评估设备101下发监控指令,视频质量评估设备101可以根据该视频质量监控系统102下发的监控指令对待评估视频进行视频质量的监控,例如该待评估视频可以是视频直播数据或者视频点播数据。视频质量监控系统102下发的监控指令可以包括待评估视频的视频标识,该视频标识可以是视频直播的频道号,或该视频标识可以是视频直播的组播地址和组播端口的组合,该视频标识也可以是视频点播的五元组数据。例如,该五元组数据是指源互联网协议(Internet Protocol,IP)地址、源端口、目的IP地址、目的端口和传输层协议。例如,视频传输通道上发送的视频直播流由MRF 发送给待评估视频的接收设备,该待评估视频的接收设备为用户侧用于视频解码及播放的设备,如STB等。待评估视频的接收设备将接收到的视频直播流解码后呈现给用户,从而为用户提供视频服务。MRF103上可以使用FEC技术用于实现业务保障,从而减少视频数据在传输过程中的丢包、误码等情况对解码的不良影响。本申请实施例提供的视频质量评估设备101中可以对视频数据进行FEC模拟解码,从而视频质量评估设备101可以根据FEC解码恢复后的视频数据进行视频质量的评估,提高视频质量评估的准确性,更加符合用户的真实视频体验。在后续实施例中,以视频直播场景下的视频数据的MOSV计算为例进行举例说明,对于视频点播场景下的视频数据的MOSV也可以参照后续场景来实现。
在本申请的另一些实施例中,运营商部署的业务质量保障系统,除了使用FEC编码技术,还可以使用重传(retransmission,RET)技术,从而减少视频数据在传输过程中的丢包、误码等情况对解码的不良影响。请参阅图2所示,为本申请实施例提供的视频质量评估方法所应用的另一种系统架构图,除了包括视频质量评估设备101、视频质量监控系统102和MRF103之外,还可以包括:RET服务器104,该RET服务器104可以分别连接MRF103和视频质量评估设备101,本申请实施例中提供的视频质量评估设备101还需要考虑RET对丢失数据包的重传能力,从而视频质量评估设备101可以根据FEC解码恢复后的视频数据以及RET重传成功后恢复的数据包进行视频质量的评估,提高视频质量评估的准确性,计算出的MOSV更加符合用户的真实视频体验。
请参阅图3所示,接下来对本申请实施例提供的FEC的实现过程进行说明,应用层FEC主要是基于纠删码,在发送端把源视频数据分割成大小相等的数据包,把k 1个源数据包编码为n 1(n 1>k 1)个数据包并发送,经过网络传输后在接收端收到m 1(n 1>=m 1)个数据包,只要接收到的数据包个数大于或等于源数据包的个数,即m 1>=k 1,就可以用接收到的数据包恢复出k 1个源数据包。应用层FEC技术在流媒体传输的差错控制中起到了容错的作用。
本申请的一些实施例中,FEC的基本方案是在MRF上给每个视频直播流插入FEC冗余数据包,当STB检测到丢包时,根据FEC冗余数据包和接收到的视频数据包尽量恢复出丢失数据包。当FEC功能生效时,MRF会发送两路的数据流,其中一路数据流为FEC冗余流,另一路为视频直播流,FEC冗余流和视频直播流的目的地址相同,FEC冗余流和视频直播流的目的端口具有特定的关系,例如FEC冗余流的目的端口号为视频直播流的目的端口号减1。假如视频直播流的目的端口号为2345端口,则FEC冗余流的目的端口号为2344端口。当采用RTP承载时,FEC冗余流的RTP的净荷类型(payload type)字段与视频直播流的RTP的payload type不同,以便STB识别出FEC冗余流。FEC冗余数据包的头数据结构如下:
FEC_DATA_STRUCT
{
UINT16rtp_begin_seq;//源码流FEC编码起始RTP序列号
UINT16rtp_end_seq;//源码流FEC编码结束RTP序列号
UINT8redund_num;//FEC报文数
UINT8redund_idx;//FEC报文索引序列号,从0开始
UINT16fec_len;//FEC荷载字节数
UINT16rtp_len;//被编码RTP报文的最大长度,字节数
UINT16rsv;//备用,全填充为0
UINT8fec_data[x];//fec数据,对rtp报文进行fec编码
}
以下为一个FEC冗余数据包的举例说明,如表1所示为FEC冗余数据包的组成结构:
Figure PCTCN2017120446-appb-000001
接下来对FEC源块大小和FEC冗余度的计算方式进行举例说明,FEC源块大小可以用该源块被封装为多少个RTP数据包来衡量。如上表1所示,根据起始RTP序列号和结束RTP序列号,可计算得到源块大小为56980-56881+1=100,FEC冗余度=5/100=5%。
如图3所示,FEC的详细实现流程如下:
1、组播服务器发送视频直播流。
2、MRF开启FEC,插入FEC冗余数据包。
3、STB接收视频直播流与FEC冗余流。
4、STB利用FEC冗余流对视频直播流的丢失数据包进行恢复,并对恢复出的数据包和视频直播流进行解码。
在本申请的一些实施例中,请参阅图4所示,接下来对本申请实施例提供的RET的实现过程进行说明,RET的基本方案是RET服务器(server)缓存每个视频频道的视频数据,当STB检测到丢包后,向RET服务器发送RTP控制协议(RTP Control Protocol,RTCP)重传请求,RET服务器向STB发送重传的RTP数据包。国际互联网工程任务组(Internet Engineering Task Force,IETF)的RFC 4585定义了重传请求的具体实现形式,IETF RFC 4588规定了重传数据包的RTP封装格式。如图4所示,RET详细流程如下:
1、组播服务器发送视频直播流,例如,组播服务器将视频直播流发送给MRF,MRF再转发给RET服务器,图4中没有给出MRF。MRF发给RET服务器的视频直播流用于RET服务器的缓存。
2、RET服务器接收到视频直播流,将该视频直播流缓存下来。
3、STB与RET服务器之间建立用于重传的RET会话。
4、STB对于收到的视频数据进行错误和丢失检查,如果发现有丢包或错包,则向RET服务器发出重传请求,根据标准RFC4585,一个重传请求可以请求对多个数据包进行重传。
5、RET服务器根据收到的重传请求,查找相应频道的视频数据缓存区中的数据包,将找到的数据包发送给STB,发送数据包可以按照RFC4588封装。
6、STB接收重传的数据包,然后对接收到的数据包进行解码显示。
本申请实施例中提供的视频质量评估设备可以作为探针部署在图3或图4所示的路由 器中,该视频质量评估设备也可以作为一个独立的网络节点部署在视频传输网络中。本申请实施例中在网络节点上评估视频质量时,考虑到了FEC编码对丢失数据包的恢复能力,使得计算得到的MOSV能够反映用户的真实视频体验。请参阅图5所示,为本申请实施例提供的视频质量评估方法所应用的IPTV网络系统架构图。video HE是IPTV视频头端,将转码成恒定码率的视频数据以直播或点播的方式发送下来,当视频数据从视频头端传送到目的机顶盒时,由于所经过的网络状态的变化,会导致视频数据出现丢包、时延、抖动、乱序等异常现象。这些异常现象会造成终端屏幕上所播放的视频画面出现花屏、卡顿等缺陷,导致用户的视频观看体验下降。
本发明实施例中视频质量评估设备部署在端到端的IPTV业务场景中,该视频质量评估设备可以通过软件或硬件实现MOSV探针,该视频质量评估设备用来监控并计算网络节点处或某一终端用户的IPTV业务的视频体验评估分值。如图5所示,视频质量评估设备可以作为探针部署在核心路由器(Core Router,CR)、宽带远程接入服务器(Broadband Remote Access Server,BRAS)、光线路终端(Optical Line Terminal,OLT)等网络节点,或旁挂在CR、BRAS、OLT等网络节点上。该视频质量评估设备也可以部署在IPTV STB等终端设备上。
其中,CR是由中央处理器(Central Processing Unit,CPU)、随机存取存储器/动态随机存取存储器(Random Access Memory/Dynamic Random Access Memory,RAM/DRAM)、闪存(Flash)、非易失性随机存取存储器(Non-Volatile Random Access Memory,NVRAM)、只读存储器(Read Only Memory,ROM)及各类接口组成,MOSV探针可以部署在CPU上。BRAS包括:业务管理模块和业务转发模块,MOSV探针可以部署在业务管理模块的主控CPU上。OLT包括主控板、业务单板、接口板、电源板等,其中MOSV探针可以部署在主控板CPU上。
如6图所示,STB的组成结构可以包括五个模块:接收前端模块、主模块、电缆调制解调器模块、音视频输出模块、外围接口模块。其中,接收前端模块包括调谐器和正交振幅调制(Quadrature amplitude modulation,QAM)解调器,该部分可以从射频信号中解调出视频传输流,主模块是整个STB的核心部分,主模块包括:解码部分、嵌入式CPU和存储器,其中,解码部分可以对传输流进行解码、解码复用、解扰等操作,嵌入式CPU和存储器用来运行和存储软件系统,并对各个模块进行控制,MOSV探针就部署在嵌入式CPU上。电缆调制解调器模块包括:双向调谐器、下行QAM解调器、上行正交相移键控(Quadrature Phase Shift Keyin,QPSK)/QAM调制器和媒体访问控制模块,该部分实现电缆调制解调的所有功能。音视频输出模块对音视频信号进行数字/模拟(Digital/Analog,D/A)转换还原出模拟音视频信号,并在电视机上输出。外围接口模块包括丰富的外部接口,包括高速串行接口、通用串行接口USB等。
接下来对本申请实施例中视频质量评估设备所执行的视频质量评估方法进行详细说明,请参阅图7所示,本申请实施例提供的视频质量评估方法主要包括如下步骤:
701、获取待评估视频,待评估视频包括FEC冗余数据包。
其中,视频质量评估设备可以从MRF捕获到待评估视频,MRF可以在该待评估视频中插入FEC冗余数据包,FEC冗余数据包的插入过程详见前述实施例的说明。
在本申请的一些实施例中,视频质量评估设备在执行步骤701之前,本申请实施例提供的视频质量评估方法还可以包括如下步骤:
700A、接收视频质量监控系统发送的监控指令,监控指令包括:待评估视频的视频标识。
在本申请的实施例中,视频质量评估设备首先接收视频质量监控系统向视频质量评估设备下发的监控指令,该监控指令中包括有视频直播的频道号,或该监控指令包括有组播地址和组播端口的组合,该视频标识也可以是视频点播的五元组数据。例如,视频质量评估设备可以通过同步源标识符(Synchronization Source Identifier,SSRC)、组播地址和组播端口指示待评估的视频频道。其中,SSRC可用于捕获待评估视频,也可以用组播地址和组播端口捕获待评估视频。
700B、根据监控指令获取视频业务信息,视频业务信息包括:视频编码配置参数和FEC能力信息。
在本申请的实施例中,视频质量评估设备可以根据监控指令对待评估的视频进行视频质量监控,视频质量评估设备首先获取到视频业务信息,该视频业务信息是视频质量评估设备对待评估视频进行视频质量监控所需要使用的业务信息。具体的,该视频业务信息包括:视频编码配置参数和FEC能力信息。举例说明,视频编码配置参数可以是组播服务器发送视频直播流所使用的视频编码类型、帧率、分辨率等,FEC能力信息是指MRF对视频直播流进行FEC编码时所采用的FEC信息,例如FEC能力信息可以包括:FEC源块大小和FEC冗余度。视频质量评估设备可以根据该视频业务信息获取到待评估视频。
702、当待评估视频中的第一源块的丢失数据包的个数小于或等于第一源块的FEC冗余数据包的个数时,为第一源块的未丢失数据包生成第一摘要报文,以及为第一源块的丢失数据包生成第二摘要报文。
其中,待评估视频在传输过程中可能会发生丢失,本申请实施例以待评估视频中的第一源块的处理为例进行说明,该第一源块即第一视频源块。第一源块在传输过程中有的数据包会发生丢失,首先获取到第一源块中第一源块的丢失数据包的个数,由于该第一源块由MRF传输时插入有相应的FEC冗余数据包,因此统计该第一源块的FEC冗余数据包的个数。若待评估视频中的第一源块的丢失数据包的个数小于或等于第一源块的FEC冗余数据包的个数,则说明利用FEC冗余数据包可以恢复出第一源块中丢失的数据包。针对第一源块的未丢失数据包和丢失数据包分别生成摘要报文。其中,摘要报文的结构可以参阅ITU-T P.1201.2标准中stripped packets的结构。摘要报文的生成方式可以参照ITU-T P.1201.2标准中利用stripped packets的计算方式。为便于描述,将为第一源块的未丢失数据包生成的摘要报文定义为“第一摘要报文”,将为第一源块的丢失数据包生成的摘要报文定义为“第二摘要报文”。
在本申请的一些实施例中,第一源块的丢失数据包的个数通过如下方式计算:
A1、从FEC冗余数据包中获取起始RTP序列号和结束RTP序列号,以及获取第一源块的未丢失数据包的RTP序列号;
A2、根据起始RTP序列号和结束RTP序列号、第一源块的未丢失数据包的RTP序列号计算第一源块的丢失数据包的个数。
其中,由前述对FEC冗余数据包的举例说明可知,起始RTP序列号和结束RTP序列号可以确定出第一源块中的总RTP数据包个数,再通过对未丢失数据包的RTP序列号的排除,可以计算出第一源块的丢失数据包的个数。
在本申请的一些实施例,本申请实施例提供的视频质量评估方法除了包括前述的实现步骤之外,还可以包括如下步骤:
C1、获取第一源块的FEC能力信息,该FEC能力信息包括:FEC源块大小和FEC冗余度。
其中,本申请实施例中视频质量评估设备可以获取到FEC能力信息,从而使用该FEC能力信息确定第一源块的FEC冗余数据包的个数。
在本申请的一些实施例中,步骤C1获取第一源块的FEC源块大小和FEC冗余度,包括:
C11、从MRF或待评估视频的接收设备获取到FEC源块大小和FEC冗余度;或,
C12、对待评估视频的接收设备与视频服务器之间交互的控制报文进行解析,从而得到FEC源块大小和FEC冗余度;或,
C13、对第一源块的FEC冗余数据包进行解析,从而得到FEC源块大小和FEC冗余度。
其中,视频质量评估设备获取FEC能力信息有多种方式,例如,视频质量评设备可以从MRF或待评估视频的接收设备获得FEC能力信息,该待评估视频的接收设备具体可以为STB。例如视频质量评估设备可以查看MRF配置文件,STB手册等获取FEC源块大小及FEC冗余度,该FEC冗余度是指每个FEC源块中的冗余包个数。又如,视频质量评估设备通过解析控制报文获取,STB与视频服务器通常会通过实时流协议(Real Time Streaming Protocol,RTSP)或超文本传输协议(HTTP,HyperText Transfer Protocol)交互,互相通告FEC能力信息,包括FEC源块大小及FEC冗余包个数,通过解析这类控制报文可获取FEC能力信息。又如,视频质量评估设备还可以通过解析FEC冗余数据包来获取FEC能力信息,FEC冗余数据包中包含了FEC源块大小及FEC冗余度。
在本申请的一些实施例中,为第一源块的丢失数据包生成的第二摘要报文包括:第一源块的丢失数据包的RTP序列号、负载大小和第一源块的丢失数据包中的视频传输流(Transport Stream,TS)包的摘要信息。其中,第一源块的丢失数据包的负载大小可以通过丢失数据包的个数与丢失数据包的大小相乘得到,视频TS包的摘要信息的生成方式可以参阅ITU-T P.1201.2标准中利用stripped packets生成摘要信息的方式。
703、根据第一摘要报文和第二摘要报文计算待评估视频的MOSV。
在本申请的实施例中,视频质量评估设备考虑到了FEC技术对待评估视频的丢失数据包的恢复情况,从而可以根据第一摘要报文和第二摘要报文计算待评估视频的MOSV,MOSV的具体计算方法可参看ITU-T P.1201.2标准中的MOSV的计算方式。按照本申请实施例中第一摘要报文和第二摘要报文计算出的MOSV,考虑了FEC技术对丢失数据包的恢复情况,增加了视频质量评估的准确性,使之更加符合用户的真实体验。
在本申请的一些实施例,本申请实施例提供的视频质量评估方法除了包括前述的实现步骤之外,还可以包括如下步骤:
D1、当第一源块的丢失数据包的个数为0,或第一源块的丢失数据包的个数大于第一源块的FEC冗余数据包的个数时,为第一源块的未丢失数据包生成第一摘要报文;
D2、根据第一摘要报文计算待评估视频的MOSV。
其中,第一源块的丢包个数为0时,说明第一源块中没有发生数据包丢失,第一源块的丢失数据包的个数大于FEC冗余数据包的个数时,说明第一源块无法通过FEC冗余数据包恢复出第一源块中的丢失数据包。在这两种情况下,为第一源块的未丢失数据包生成第一摘要报文,最后根据该第一摘要报文计算待评估视频的MOSV。可选的,当第一源块的丢失数据包的个数为0,或第一源块的丢失数据包的个数大于第一源块的FEC冗余数据包的个数时,视频质量评估设备还可以丢弃第一源块的FEC冗余数据包,从而减少FEC缓冲队列的占用。由于第一源块的丢失数据包的个数为0,或第一源块的丢失数据包的个数大于第一源块的FEC冗余数据包的个数,FEC技术均无法恢复出第一源块的丢失数据包,因此只使用第一摘要报文计算出的MOSV可以表示用户的真实视频体验。
在本申请的一些实施例中,如图2所示,若在IPTV视频系统还设置有RET服务器。本申请实施例提供的视频质量评估方法除了包括前述的实现步骤之外,还可以包括如下步骤:
E1、接收待评估视频的接收设备发送给RET服务器的重传请求,重传请求用于向RET服务器请求重传丢失且无法通过FEC恢复的数据包;
E2、当接收到RET服务器返回的重传响应时,为丢失且无法通过FEC恢复的数据包生成第三摘要报文。
其中,当IPTV视频系统中同时部署了FEC和RET,组播服务器向MRF发送待评估视频,MRF会根据该待评估视频生成相应的FEC冗余数据包,MRF在生成FEC冗余数据包之后,MRF将待评估视频和FEC冗余数据包发送给待评估视频的接收设备,同时MRF会向RET服务器发送待评估视频和FEC冗余数据包。RET服务器从MRF接收到待评估视频和FEC冗余数据包之后,将接收到的待评估视频和FEC冗余数据包保存在视频数据缓存区中,该视频数据缓存区用于缓存待评估视频和FEC冗余数据包,以便于RET服务器在待评估视频的接收设备的请求下重传丢失的视频数据。举例说明,待评估视频包括:第一源块和第二源块,若第一源块的丢失数据包可以通过FEC技术恢复成功,第二源块的丢失数据包无法通过FEC技术恢复成功,待评估视频的接收设备可以向RET服务器请求重传该第二源块的丢失数据包,当视频质量评估设备接收到RET服务器返回的重传响应时,视频质量评估设备为丢失且无法通过FEC恢复的数据包生成第三摘要报文。其中,第三摘要报文的生成方式可以参照ITU-T P.1201.2标准中利用stripped packets的计算方式。
在前述执行步骤E1至E2的实现场景下,步骤703根据第一摘要报文和第二摘要报文计算待评估视频的视频平均体验得分MOSV,具体包括:
F1、根据第一摘要报文、第二摘要报文和第三摘要报文计算待评估视频的MOSV。
在本申请的实施例中,视频质量评估设备考虑到了FEC技术和RET技术对待评估视频的丢失数据包的恢复情况,从而可以根据第一摘要报文、第二摘要报文和第三摘要报文计算待评估视频的MOSV,MOSV的具体计算方法可参看ITU-T P.1201.2标准。按照本申请实施例中第一摘要报文、第二摘要报文和第三摘要报文计算出的MOSV,考虑了FEC 技术和RET技术对丢失数据包的恢复情况,增加了视频质量评估的准确性,使之更加符合用户的真实体验。
举例说明如下,当STB接收到RET服务器转发的待评估视频和FEC冗余数据包之后,STB使用待评估视频和FEC冗余数据包进行检测是否丢包,在检测到丢包后,STB利用FEC冗余数据包尽量恢复出丢失数据包,如果因丢失数据包太多,FEC解码失败,STB再向RET服务器发送重传请求,视频质量评估设备获取STB发送的重传请求。RET服务器根据STB发送的重传请求向STB发送重传响应,视频质量评估设备获取该重传响应,通过该重传响应,视频质量评估设备确定第一源块的丢失数据包恢复成功,从而可以计算出通过RET服务器重传成功的数据包的第三摘要报文,此时需要基于第一摘要报文、第二摘要报文和第三摘要报文计算出的MOSV,本申请实施例考虑了FEC技术和RET技术对丢失数据包的恢复情况,增加了视频质量评估的准确性,使之更加符合用户的真实体验。
通过前述实施例对本申请实施例的举例说明可知,首先获取待评估视频,待评估视频包括FEC冗余数据包,当待评估视频中的第一源块的丢失数据包的个数小于或等于第一源块的FEC冗余数据包的个数时,为第一源块的未丢失数据包生成第一摘要报文,以及为第一源块的丢失数据包生成第二摘要报文,最后根据第一摘要报文和第二摘要报文计算待评估视频的视频平均体验得分MOS。由于本申请实施例中考虑到了待评估视频中第一源块的数据包丢失情况以及FEC冗余数据包对丢包的恢复情况,从而计算出了第一摘要报文和第二摘要报文,通过该第一摘要报文和第二摘要报文可计算出待评估视频的MOSV,相比较于现有技术中只对捕获到的视频数据评估MOSV,本申请实施例中考虑到了FEC冗余数据包对丢失数据包的恢复能力,因此增加了视频质量评估的准确性,使之更加符合用户的真实视频体验。
需要说明的是,在本申请的前述实施例中,视频质量损伤主要是由于视频压缩和网络传输丢包引起的。视频压缩损伤与视频码率、分辨率、帧率及内容复杂度相关,因此可根据视频码率、分辨率、帧率及内容复杂度计算视频压缩损伤。视频经网络传输时会丢包,丢包会导致视频的帧质量受损,且如果受损帧为参考帧时,损伤会持续向后传播,因此在评估网络传输损伤时,需先确定视频的帧大小、帧类型、帧丢包事件,根据帧丢包事件的大小和丢包发生位置评估受损程度,然后根据损伤情况计算网络传输损伤。另外本申请实施例中视频质量评估设备还可以通过FEC技术和RET技术获取到能恢复成功的丢失数据包,使得最后可以综合视频压缩损伤、网络传输损伤和丢失报文的恢复情况计算得到最终的MOSV,本申请实施例中MOSV的计算方法可以参照ITU-T P.1201.2标准中利用stripped packets计算MOSV的方式。
为便于更好的理解和实施本申请实施例的上述方案,下面举例相应的应用场景来进行具体说明。
请参阅如图8所示,为本申请实施例提供的IPTV视频系统内各个网元之间的一种交互流程示意图。本申请实施例中视频质量评估设备作为MOSV探针部署于网络设备(例如路由器)上,视频质量评估设备可以通过端口镜像,获得从组播服务器发送给待评估视频的接收设备的视频数据。IPTV视频系统中具有FEC功能,组播服务器发送视频直播流至MRF,MRF开启FEC功能,插入FEC冗余流,STB接收视频直播流及FEC冗余流,利 用FEC冗余数据包恢复出视频直播流中的视频源块的丢失数据包,并解码显示视频直播流。本申请实施例描述了在该场景下优化IPTV视频质量评估的方法,具体方法包括如下步骤:
1、视频质量监控系统向MOSV探针下达对特定视频频道的监控指令,该监控指令可以包括:特定视频频道的SSRC,MOSV探针可以利用该SSRC捕获视频直播流。或者该监控指令可以包括:特定视频频道的组播地址和组播端口,MOSV探针也可以用组播地址和组播端口捕获视频直播流。
2、MOSV探针获取视频业务信息。该视频业务信息可以包括头端的视频编码配置参数和FEC能力信息。MOSV探针可以在步骤1之后获取到视频业务信息,也可以在步骤1之前就预先获取到视频业务信息,此处不做限定。其中,头端视频编码配置参数可以包括:编码类型、帧率、分辨率等。MOSV探针获取FEC能力信息有三种获取手段:其一,MOSV探针从MRF或STB获得FEC能力信息,例如查看MRF配置文件,STB手册等,获取FEC源块大小及FEC冗余度。其二,通过解析控制报文获取,STB与组播服务器通常会通过RTSP或HTTP交互,互相通告FEC能力及参数,包括FEC源块大小及FEC冗余度,通过解析这类控制报文可获取FEC能力信息,例如:在频道列表获取中,增加ChannelFECPort参数以支持对于频道的FEC纠错功能,ChannelFECPort表示频道支持FEC的端口号,若该频道支持FEC则填写端口号,否则为空。其三,FEC能力信息还可以通过解析FEC冗余数据包来获取,FEC冗余数据包的报文头中包含了FEC源块大小及FEC冗余度。
3、MOSV探针在可以设置两个缓冲(buffer),分别捕获视频直播流和FEC冗余流。首先根据组播地址提取待评估的频道的视频直播流和FEC冗余流,再通过端口或RTP负载类型区分视频直播流和FEC冗余流。将视频直播流中的原始视频数据包解析后生成原始视频摘要报文,将原始视频摘要报文放到一个缓冲队列中,以解决乱序问题。
4、解析FEC冗余数据包,模拟FEC解码过程。首先利用FEC冗余数据包的结构里包含的起止RTP序列号,实现原始视频数据包与FEC冗余数据包的同步。从原始视频数据包生成的摘要报文队列中,利用报文RTP序列号统计源块的丢包个数n。若源块没有丢包,则丢弃FEC缓冲队列中对应的FEC数据包,然后执行步骤6。若源块有丢包,则统计收到的FEC冗余数据包的个数为m,当m<n,则表示丢包超出了FEC的恢复能力,当前源块丢失的数据包无法恢复,此时丢弃FEC缓冲队列中对应的FEC数据包,并根据该原始视频摘要报文执行步骤6进行评估;当m≥n时,说明通过FEC技术可以恢复出当前源块的丢失数据包,执行如下的步骤5,可以为丢失数据包生成与原始视频摘要报文相似的摘要报文。
5、为FEC可恢复的丢失数据包生成可恢复摘要报文。可恢复摘要报文主要包括:丢失数据包的RTP序列号、负载大小,视频TS包的摘要信息等。根据原始视频摘要报文的RTP序列号确定在一次传输过程中的丢失数据包的总数l;根据丢失数据包的总数以及视频TS包中的连续计数器标志cc,确定丢失的总视频TS包数k;计算出每个RTP中的丢失视频TS包的个数为k/l;假设每个视频TS包的payloadLength=184,则每个RTP的payloadLength=184*k/l。根据原始视频摘要报文与可恢复摘要报文执行后续步骤6进行评估。如下表格2中给出了原始视频摘要报文的摘要格式和可恢复摘要报文的摘要格式, 举例说明:摘要报文a中最后一个视频TS包的cc=3,摘要报文c中的第一个视频TS包的cc=9,可恢复摘要报文b与a、c连续,摘要报文b中的视频TS包的cc编号与a、c中的视频TS包的cc编号连续,所以摘要报文b中的视频TS包cc编号从4~8,丢失的视频TS包个数是9-3-1=5。a、c为连续的两个摘要报文,根据摘要报文的RTP序列号相减可知,在一次传输过程中的丢失数据包的总数l=4552-4550-1=1,丢失的视频TS包总数k=9-3-1=5,丢失报文的RTP的payloadLength=184*5/1=920,利用上述信息可生成丢失数据包的摘要报文b。
6、将待评估摘要报文从队列中取出后,根据待评估摘要报文计算MOSV,该待评估摘要报文可包括:原始视频摘要报文、可恢复摘要报文。
7、向视频质量监控系统上报视频质量评估结果。
其中,表格2为摘要报文的内容
Figure PCTCN2017120446-appb-000002
请参阅如图9所示,为本申请实施例提供的IPTV视频系统内各个网元之间的一种交互流程示意图。当IPTV视频系统中同时使用FEC和RET,当STB检测到丢包后,首先是利用FEC冗余数据包尽量恢复出丢失数据包,如果因丢包太多,FEC解码失败,STB 再向RET服务器发送重传请求,获取丢失数据包。此时评估用户视频体验时,需同时考虑FEC与RET对丢包的补偿能力,具体方法如下:
1、视频质量监控系统向MOSV探针下达对特定视频频道的监控指令,该监控指令可以包括:特定视频频道的SSRC和特定用户(例如用户设备的IP地址),MOSV探针可以利用该SSRC捕获视频直播流。或者该监控指令可以包括:特定视频频道的组播地址、组播端口和特定用户(例如用户设备的IP地址),MOSV探针也可以用组播地址和组播端口捕获视频直播流。用户设备的IP地址用于捕获单播重传请求和重传响应流。
2、MOSV探针获取RET信息。该RET信息可以包括STB是否使能RET,RET服务器地址等。
3、MOSV探针设置两个缓冲buffer,分别捕获视频直播流和RET重传流。首先根据组播地址过滤待评估频道的视频直播流,再通过用户设备的IP地址捕获RET重传流,包括重传请求和重传响应。
4、MOSV探针为可恢复或重传成功的数据包分别生成摘要报文。STB在检测到丢包后,首先会利用FEC冗余数据报恢复出丢失数据包,如果FEC冗余数据包不能恢复出丢失数据包,STB向RET服务器发起重传请求。因此MOSV探针可以通过重传请求判断丢失数据包是否被FEC冗余数据包恢复,若收到该丢失数据包的重传请求则表明该丢失数据包不能被恢复,若未收到丢失数据包的重传请求则表明该丢失数据包已经被FEC冗余数据包恢复。对于被FEC冗余数据包恢复的丢失数据包,可以指执行前述实施例中的步骤5,生成可恢复摘要报文。对于收到重传请求且收到相应地重传响应的丢失数据包,也可以执行前述实施例中的步骤5中生成重传成功摘要报文。
5、将待评估摘要报文从队列中取出后,根据待评估摘要报文计算MOSV,该待评估摘要报文可包括:原始视频摘要报文、可恢复摘要报文和重传成功摘要报文。
6、向视频质量监控系统上报视频质量评估结果。
通过前述的举例说明可知,本申请实施例中,视频质量评估设备获取待评估频道的相关参数,包括组播地址、组播端口、FEC源块大小、FEC冗余包个数、RET服务器地址等。如果系统只开启了FEC功能,视频质量评估设备除了获取视频直播流,还需获取FEC冗余流,针对每个视频源块,统计接收到的FEC冗余数据包的个数和丢失数据包的个数,并据此判断当前源块中丢失数据包能否恢复。当丢失数据包能够利用FEC冗余数据包恢复时,需要为丢失数据包创建相应的可恢复摘要报文,并将其插入原始视频摘要报文队列中,一起用于待评估视频的MOSV计算。如果系统同时开启了FEC和RET功能时,视频质量评估设备需要同时监控RET重传流,对于重传成功的数据包需要生成相应地的重传成功摘要报文,并将其插入原始视频摘要报文队列中,一起用于待评估视频的MOSV计算。视频质量评估设备从摘要报文队列中取出摘要报文,分析得到相关视频属性,最后计算该频道的MOSV值。本申请技术方案考虑了业务质量保障系统采用的FEC、RET技术,在原有评估方法的基础之上,考虑了FEC冗余数据包及RET重传数据包对丢失数据包的影响,增加了视频质量评估的准确性,使之更加符合用户的真实体验。本申请实施例在网络节点上评估视频质量时,考虑FEC、RET容错能力,因此可以提高视频质量评估准确性,并且可以真实反应用户体验。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
为便于更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关装置。
请参阅图10-a所示,本申请实施例提供的一种视频质量评估设备1000,可以包括:视频获取模块1001、摘要生成模块1002、视频评估模块1003,其中,
视频获取模块1001,用于获取待评估视频,所述待评估视频包括前向纠错FEC冗余数据包;
摘要生成模块1002,用于当所述待评估视频中的第一源块的丢失数据包的个数小于或等于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文,以及为所述第一源块的丢失数据包生成第二摘要报文;
视频评估模块1003,用于根据所述第一摘要报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV。
在本申请的一些实施例中,请参阅图10-b所示,所述摘要生成模块1002,包括:
序列号获取模块10021,用于从所述FEC冗余数据包中获取起始实时传输协议RTP序列号和结束RTP序列号,以及获取所述第一源块的未丢失数据包的RTP序列号;
丢包统计模块10022,用于根据所述起始RTP序列号和所述结束RTP序列号、所述第一源块的未丢失数据包的RTP序列号计算所述第一源块的丢失数据包的个数。
在本申请的一些实施例中,请参阅图10-c所示,所述视频质量评估设备1000,还包括:
FEC信息获取模块1004,用于获取所述第一源块的FEC源块大小和FEC冗余度。
进一步的,在本申请的一些实施例中,所述FEC信息获取模块1004,具体用于从所述MRF或所述待评估视频的接收设备获取到所述FEC源块大小和FEC冗余度;或,对所述待评估视频的接收设备与视频服务器之间交互的控制报文进行解析,从而得到所述FEC源块大小和FEC冗余度;或,对所述第一源块的FEC冗余数据包进行解析,从而得到所述FEC源块大小和FEC冗余度。
在本申请的一些实施例中,所述摘要生成模块1002,还用于当所述第一源块的丢失数据包的个数为0,或所述第一源块的丢失数据包的个数大于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文;
所述视频评估模块1003,还用于根据所述第一摘要报文计算所述待评估视频的MOSV。
在本申请的一些实施例中,请参阅图10-d所示,所述视频质量评估设备,还包括:接收模块1005,用于接收所述待评估视频的接收设备发送给重传RET服务器的重传请求,所述重传请求用于向所述RET服务器请求重传丢失且无法通过FEC恢复的数据包;
所述摘要生成模块1002,还用于当接收到所述RET服务器返回的重传响应时,为所 述丢失且无法通过FEC恢复的数据包生成第三摘要报文;
所述视频评估模块1003,具体用于根据所述第一摘要报文、所述第二摘要报文和所述第三摘要报文计算所述待评估视频的MOSV。
在本申请的一些实施例中,所述第二摘要报文包括所述第一源块的丢失数据包的RTP序列号、负载大小和所述第一源块的丢失数据包中的视频传输流TS包的摘要信息。
通过前述实施例对本申请的举例说明可知,首先获取待评估视频,待评估视频包括FEC冗余数据包,当待评估视频中的第一源块的丢失数据包的个数小于或等于第一源块的FEC冗余数据包的个数时,为第一源块的未丢失数据包生成第一摘要报文,以及为第一源块的丢失数据包生成第二摘要报文,最后根据第一摘要报文和第二摘要报文计算待评估视频的视频平均体验得分MOS。由于本申请实施例中考虑到了待评估视频中第一源块的数据包丢失情况以及FEC冗余数据包对丢包的恢复情况,从而计算出了第一摘要报文和第二摘要报文,通过该第一摘要报文和第二摘要报文可计算出待评估视频的MOSV,相比较于现有技术中只对捕获到的视频数据评估MOSV,本申请实施例中考虑到了FEC冗余数据包对丢失数据包的恢复能力,因此增加了视频质量评估的准确性,使之更加符合用户的真实视频体验。
需要说明的是,上述装置各模块/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其带来的技术效果与本申请方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储有程序,该程序执行包括上述方法实施例中记载的部分或全部步骤。
接下来介绍本申请实施例提供的另一种视频质量评估设备,请参阅图11所示,视频质量评估设备1100包括:接收器1101、发射器1102、处理器1103和存储器1104(其中视频质量评估设备1100中的处理器1103的数量可以一个或多个,图11中以一个处理器为例)。在本申请的一些实施例中,接收器1101、发射器1102、处理器1103和存储器1104可通过总线或其它方式连接,其中,图11中以通过总线连接为例。
存储器1104可以包括只读存储器和随机存取存储器,并向处理器1103提供指令和数据。存储器1104的一部分还可以包括非易失性随机存取存储器(英文全称:Non-Volatile Random Access Memory,英文缩写:NVRAM)。存储器1104存储有操作系统和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。操作系统可包括各种系统程序,用于实现各种基础业务以及处理基于硬件的任务。
处理器1103控制视频质量评估设备的操作,处理器1103还可以称为中央处理单元(英文全称:Central Processing Unit,英文简称:CPU)。具体的应用中,视频质量评估设备的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
上述本申请实施例揭示的方法可以应用于处理器1103中,或者由处理器1103实现。 处理器1103可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1103中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1103可以是通用处理器、数字信号处理器(英文全称:digital signal processing,英文缩写:DSP)、专用集成电路(英文全称:Application Specific Integrated Circuit,英文缩写:ASIC)、现场可编程门阵列(英文全称:Field-Programmable Gate Array,英文缩写:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1104,处理器1103读取存储器1104中的信息,结合其硬件完成上述方法的步骤。
接收器1101可用于接收输入的数字或字符信息,以及产生与视频质量评估设备的相关设置以及功能控制有关的信号输入,发射器1102可包括显示屏等显示设备,发射器1102可用于通过外接接口输出数字或字符信息。
本申请实施例中,处理器1103,用于执行所述存储器中的所述指令,执行前述实施例中所描述的视频质量评估方法。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是 通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。

Claims (17)

  1. 一种视频质量评估方法,其特征在于,包括:
    获取待评估视频,所述待评估视频包括前向纠错FEC冗余数据包;
    当所述待评估视频中的第一源块的丢失数据包的个数小于或等于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文,以及为所述第一源块的丢失数据包生成第二摘要报文;
    根据所述第一摘要报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV。
  2. 根据权利要求1所述的方法,其特征在于,所述第一源块的丢失数据包的个数通过如下方式计算:
    从所述FEC冗余数据包中获取起始实时传输协议RTP序列号和结束RTP序列号,以及获取所述第一源块的未丢失数据包的RTP序列号;
    根据所述起始RTP序列号和所述结束RTP序列号、所述第一源块的未丢失数据包的RTP序列号计算所述第一源块的丢失数据包的个数。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    获取所述第一源块的FEC源块大小和FEC冗余度。
  4. 根据权利要求3所述的方法,其特征在于,所述获取所述第一源块的FEC源块大小和FEC冗余度,包括:
    从所述MRF或所述待评估视频的接收设备获取到所述FEC源块大小和FEC冗余度;或,
    对所述待评估视频的接收设备与视频服务器之间交互的控制报文进行解析,从而得到所述FEC源块大小和FEC冗余度;或,
    对所述第一源块的FEC冗余数据包进行解析,从而得到所述FEC源块大小和FEC冗余度。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    当所述第一源块的丢失数据包的个数为0,或所述第一源块的丢失数据包的个数大于所述第一源块的FEC冗余数据包的个数时,为所述第一源块的未丢失数据包生成第一摘要报文;
    根据所述第一摘要报文计算所述待评估视频的MOSV。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    接收所述待评估视频的接收设备发送给重传RET服务器的重传请求,所述重传请求用于向所述RET服务器请求重传丢失且无法通过FEC恢复的数据包;
    当接收到所述RET服务器返回的重传响应时,为所述丢失且无法通过FEC恢复的数据包生成第三摘要报文;
    所述根据所述第一摘要报文和所述第二摘要报文计算所述待评估视频的视频平均体验得分MOSV,具体包括:
    根据所述第一摘要报文、所述第二摘要报文和所述第三摘要报文计算所述待评估视频的MOSV。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述第二摘要报文包括: 所述第一源块的丢失数据包的RTP序列号、负载大小和所述第一源块的丢失数据包中的视频传输流视频TS包的摘要信息。
  8. A video quality assessment device, comprising:
    a video obtaining module, configured to obtain a to-be-assessed video, wherein the to-be-assessed video comprises forward error correction (FEC) redundancy data packets;
    a digest generation module, configured to: when a quantity of lost data packets of a first source block in the to-be-assessed video is less than or equal to a quantity of FEC redundancy data packets of the first source block, generate a first digest packet for non-lost data packets of the first source block, and generate a second digest packet for the lost data packets of the first source block; and
    a video assessment module, configured to calculate a video mean opinion score (MOSV) of the to-be-assessed video according to the first digest packet and the second digest packet.
  9. The device according to claim 8, wherein the digest generation module comprises:
    a sequence number obtaining module, configured to obtain a start Real-time Transport Protocol (RTP) sequence number and an end RTP sequence number from the FEC redundancy data packets, and obtain RTP sequence numbers of the non-lost data packets of the first source block; and
    a packet loss counting module, configured to calculate the quantity of lost data packets of the first source block according to the start RTP sequence number, the end RTP sequence number, and the RTP sequence numbers of the non-lost data packets of the first source block.
  10. The device according to claim 8 or 9, wherein the video quality assessment device further comprises:
    an FEC information obtaining module, configured to obtain an FEC source block size and an FEC redundancy of the first source block.
  11. The device according to claim 10, wherein the FEC information obtaining module is specifically configured to: obtain the FEC source block size and the FEC redundancy from the MRF or from a receiving device of the to-be-assessed video; or parse control packets exchanged between the receiving device of the to-be-assessed video and a video server, to obtain the FEC source block size and the FEC redundancy; or parse the FEC redundancy data packets of the first source block, to obtain the FEC source block size and the FEC redundancy.
  12. The device according to any one of claims 8 to 11, wherein the digest generation module is further configured to: when the quantity of lost data packets of the first source block is 0, or the quantity of lost data packets of the first source block is greater than the quantity of FEC redundancy data packets of the first source block, generate a first digest packet for the non-lost data packets of the first source block; and
    the video assessment module is further configured to calculate the MOSV of the to-be-assessed video according to the first digest packet.
  13. The device according to any one of claims 8 to 12, wherein the video quality assessment device further comprises a receiving module, wherein
    the receiving module is configured to receive a retransmission request sent by the receiving device of the to-be-assessed video to a retransmission (RET) server, wherein the retransmission request is used to request the RET server to retransmit data packets that are lost and cannot be recovered through FEC;
    the digest generation module is further configured to: when a retransmission response returned by the RET server is received, generate a third digest packet for the data packets that are lost and cannot be recovered through FEC; and
    the video assessment module is specifically configured to calculate the MOSV of the to-be-assessed video according to the first digest packet, the second digest packet, and the third digest packet.
  14. The device according to any one of claims 8 to 13, wherein the second digest packet comprises: the RTP sequence numbers and payload sizes of the lost data packets of the first source block, and digest information of the video transport stream (TS) packets in the lost data packets of the first source block.
  15. A video quality assessment device, wherein the video quality assessment device comprises: a processor, a memory, a receiver, a transmitter, and a bus, and the processor, the receiver, the transmitter, and the memory communicate with one another through the bus;
    the receiver is configured to receive data;
    the transmitter is configured to send data;
    the memory is configured to store instructions; and
    the processor is configured to execute the instructions in the memory to perform the method according to any one of claims 1 to 8.
  16. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
  17. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
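
The loss count described in claim 2 lends itself to a short illustration. The Python sketch below is only one illustrative reading of that step, not the patented implementation: the start and end RTP sequence numbers delimiting a source block are taken from an FEC redundancy data packet, and every sequence number in that range that is not among the received packets is counted as lost; because RTP sequence numbers are 16-bit values, the walk is performed modulo 65536. All names and the example data are assumptions introduced here for illustration.

RTP_SEQ_MODULO = 1 << 16  # RTP sequence numbers are 16-bit and wrap around


def count_lost_packets(start_seq, end_seq, received_seqs):
    """Count the sequence numbers in [start_seq, end_seq] that were never received."""
    span = (end_seq - start_seq) % RTP_SEQ_MODULO + 1  # block length, wrap-safe
    lost = 0
    for offset in range(span):
        seq = (start_seq + offset) % RTP_SEQ_MODULO
        if seq not in received_seqs:
            lost += 1
    return lost


# Example: a source block spanning sequence numbers 65530..3 (wrapping past 65535)
# with packets 65531 and 2 missing yields a loss count of 2.
received = {65530, 65532, 65533, 65534, 65535, 0, 1, 3}
assert count_lost_packets(65530, 3, received) == 2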
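
The way claims 1, 5, and 6 combine for a single source block can likewise be sketched. The following Python is a minimal, hypothetical reading rather than the patented code: first digest packets always describe the received packets; second digest packets are added only when the losses do not exceed the FEC redundancy and are therefore FEC-recoverable; when they do exceed it, third digest packets describe packets returned by the RET server, provided a retransmission response was observed. The MOSV model itself is not specified in this text, so compute_mosv is left as a placeholder; all class, field, and function names are assumptions.

from dataclasses import dataclass


@dataclass
class SourceBlock:
    received_packets: list       # non-lost RTP packets of this source block
    lost_seqs: list              # RTP sequence numbers counted as lost
    fec_redundancy_count: int    # number of FEC redundancy data packets of the block


def first_digest(packet):
    return {"kind": "first", "seq": packet["seq"], "payload_size": packet["size"]}


def second_digest(seq):
    # Per claim 7, a second digest carries the lost packet's RTP sequence number,
    # payload size, and TS-packet digest info; only the sequence number is kept here.
    return {"kind": "second", "seq": seq}


def third_digest(seq):
    return {"kind": "third", "seq": seq}


def block_digests(block, ret_recovered_seqs):
    """Digest packets for one source block, following claims 1, 5, and 6."""
    digests = [first_digest(p) for p in block.received_packets]
    n_lost = len(block.lost_seqs)
    if 0 < n_lost <= block.fec_redundancy_count:
        # Losses are FEC-recoverable: also describe the lost packets (claim 1).
        digests += [second_digest(s) for s in block.lost_seqs]
    elif n_lost > block.fec_redundancy_count:
        # FEC cannot recover; packets retransmitted by the RET server, if a
        # retransmission response was observed, get third digests (claim 6).
        digests += [third_digest(s) for s in block.lost_seqs if s in ret_recovered_seqs]
    # n_lost == 0: the first digests alone feed the MOSV calculation (claim 5).
    return digests


def compute_mosv(digests):
    """Placeholder for the MOSV model that consumes the digest packets."""
    raise NotImplementedError
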
PCT/CN2017/120446 2017-04-27 2017-12-31 Video quality assessment method and device WO2018196434A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17907272.3A EP3609179A1 (en) 2017-04-27 2017-12-31 Video quality evaluation method and device
US16/664,194 US11374681B2 (en) 2017-04-27 2019-10-25 Video quality assessment method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710288618.6 2017-04-27
CN201710288618.6A CN108809893B (zh) 2017-04-27 2017-04-27 一种视频质量评估方法和设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/664,194 Continuation US11374681B2 (en) 2017-04-27 2019-10-25 Video quality assessment method and device

Publications (1)

Publication Number Publication Date
WO2018196434A1 true WO2018196434A1 (zh) 2018-11-01

Family

ID=63919355

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120446 WO2018196434A1 (zh) 2017-04-27 2017-12-31 一种视频质量评估方法和设备

Country Status (4)

Country Link
US (1) US11374681B2 (zh)
EP (1) EP3609179A1 (zh)
CN (1) CN108809893B (zh)
WO (1) WO2018196434A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110139167B (zh) * 2018-02-09 2022-02-25 华为技术有限公司 Data processing method and server
CN112532341A (zh) * 2019-09-17 2021-03-19 青岛海信宽带多媒体技术有限公司 Media data playing method and apparatus
CN111586475B (zh) * 2020-05-27 2022-05-06 飞思达技术(北京)有限公司 IPTV and OTT live-streaming audio/video quality and perception evaluation system
CN111741295B (zh) * 2020-08-14 2020-11-24 北京全路通信信号研究设计院集团有限公司 Monitoring system and method for continuously monitoring end-to-end QoS indicators of a video network
US11557025B2 (en) * 2020-08-17 2023-01-17 Netflix, Inc. Techniques for training a perceptual quality model to account for brightness and color distortions in reconstructed videos
US11532077B2 (en) 2020-08-17 2022-12-20 Netflix, Inc. Techniques for computing perceptual video quality based on brightness and color components
US11601533B1 (en) * 2020-09-28 2023-03-07 Amazon Technologies, Inc. Source port adaptive multi-path (SAM) protocol
US11962868B2 (en) * 2020-11-30 2024-04-16 Verizon Patent And Licensing Inc. Detecting a quality issue associated with a video stream delivery
CN112752123B (zh) * 2020-12-28 2022-03-25 上海哔哩哔哩科技有限公司 Network quality assessment method and apparatus

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003199128A (ja) * 2001-12-25 2003-07-11 Ando Electric Co Ltd Moving picture distribution test apparatus
US8539532B2 (en) * 2007-11-23 2013-09-17 International Business Machines Corporation Retransmission manager and method of managing retransmission
CN101442400B (zh) 2007-11-23 2012-03-07 国际商业机器公司 Quality manager and method for use in a digital content delivery system
EP2296379A4 (en) * 2008-07-21 2011-07-20 Huawei Tech Co Ltd METHOD, SYSTEM AND DEVICE FOR EVALUATING A VIDEO QUALITY
JP4544435B2 (ja) * 2009-02-10 2010-09-15 日本電気株式会社 Video quality estimation apparatus, video quality estimation method, and program
PL2432161T3 (pl) 2010-09-16 2016-02-29 Deutsche Telekom Ag Method and system for measuring the quality of audio and video bit stream transmissions over a transmission chain
JP2014517559A (ja) 2011-04-11 2014-07-17 ノキア シーメンス ネットワークス オサケユキチュア Quality of experience
US9246842B2 (en) * 2012-04-27 2016-01-26 Intel Corporation QoE-aware radio access network architecture for http-based video streaming
CN104780369B (zh) * 2012-08-21 2018-04-17 华为技术有限公司 Method and apparatus for obtaining video coding compression quality

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5600663A (en) * 1994-11-16 1997-02-04 Lucent Technologies Inc. Adaptive forward error correction system
CN101686106A (zh) * 2008-09-28 2010-03-31 华为技术有限公司 Adaptive forward error correction method, apparatus, and system
US20110001833A1 (en) * 2009-07-01 2011-01-06 Spirent Communications, Inc. Computerized device and method for analyzing signals in a multimedia over coax alliance (moca) network and similar tdm / encrypted networks
CN102056004A (zh) * 2009-11-03 2011-05-11 华为技术有限公司 Video quality assessment method, device, and system
CN103166808A (zh) * 2011-12-15 2013-06-19 华为技术有限公司 IPTV service quality monitoring method, apparatus, and system
CN104349220A (zh) * 2014-11-25 2015-02-11 复旦大学 Service quality monitoring system for smart television terminals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3609179A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163901A (zh) * 2019-04-15 2019-08-23 福州瑞芯微电子股份有限公司 Post-processing evaluation method and system
CN111131756A (zh) * 2019-12-26 2020-05-08 视联动力信息技术股份有限公司 Video-network-based anomaly detection method, apparatus, device, and medium
CN111131756B (zh) * 2019-12-26 2022-11-01 视联动力信息技术股份有限公司 Video-network-based anomaly detection method, apparatus, device, and medium

Also Published As

Publication number Publication date
US20200067629A1 (en) 2020-02-27
EP3609179A4 (en) 2020-02-12
CN108809893B (zh) 2020-03-27
CN108809893A (zh) 2018-11-13
US11374681B2 (en) 2022-06-28
EP3609179A1 (en) 2020-02-12

Similar Documents

Publication Publication Date Title
WO2018196434A1 (zh) 一种视频质量评估方法和设备
US8644316B2 (en) In-band media performance monitoring
US9565482B1 (en) Adaptive profile switching system and method for media streaming over IP networks
US9577682B2 (en) Adaptive forward error correction (FEC) system and method
US9641588B2 (en) Packets recovery system and method
US9781488B2 (en) Controlled adaptive rate switching system and method for media streaming over IP networks
US9363684B2 (en) Determining loss of IP packets
WO2020006912A1 (zh) 网络传输质量分析方法、装置、计算机设备和存储介质
KR102188222B1 (ko) 비디오 서비스 품질 평가 방법 및 장치
US20160219318A1 (en) Information presentation device and method
KR20130040090A (ko) 복합 네트워크에서 멀티미디어 데이터를 전송하기 위한 장치 및 그 방법
US9647951B2 (en) Media stream rate reconstruction system and method
JP2010161550A (ja) 映像コンテンツ受信装置、および映像コンテンツ受信方法
WO2013098812A1 (en) Transport over udp system and method
KR100720592B1 (ko) 인트라 프레임의 전송오류 복구 방법
Fernando MMT: the next-generation media transport standard
Nagel et al. Demonstration of TVoIP services in a multimedia broadband enabled access network
Elmer Interoperability for professional video on IP networks
WO2009002304A1 (en) Method and apparatus for remote stream reconstruction and monitoring
Porter Evaluating and improving the performance of video content distribution in lossy networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017907272

Country of ref document: EP

Effective date: 20191107