CN108574840B - Method and device for evaluating video experience quality - Google Patents


Info

Publication number
CN108574840B
CN108574840B (application CN201710141023.8A)
Authority
CN
China
Prior art keywords
video
uplink
quality
network element
control network
Prior art date
Legal status
Active
Application number
CN201710141023.8A
Other languages
Chinese (zh)
Other versions
CN108574840A (en)
Inventor
江亮亮
陈现迎
柳亮亮
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710141023.8A
Publication of CN108574840A
Application granted
Publication of CN108574840B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H04N7/152 Multipoint control units therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application relates to the field of communication technologies, and discloses a method and an apparatus for evaluating video quality of experience, used by a receiving end device to evaluate the quality of experience of a video service from a transmitting end device to the receiving end device. The method includes: the receiving end device receives an uplink video impairment parameter sent by a control network element, where the uplink video impairment parameter characterizes the degree of quality impairment of uplink video data sent by a transmitting end device to the control network element; and the receiving end device evaluates the video quality of experience from the transmitting end device to the receiving end device based on the uplink video impairment parameter and a downlink video impairment parameter, where the downlink video impairment parameter characterizes the degree of quality impairment of downlink video data sent by the control network element to the receiving end device.

Description

Method and device for evaluating video experience quality
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a method and a device for evaluating video experience quality.
Background
With the development of communication technologies, user demand for video services is growing rapidly, and improving the quality of experience of video services has become a focus of attention in the industry. Several schemes for measuring the quality of experience of video services have been proposed, which evaluate video quality of experience by measuring factors that influence it. The video Mean Opinion Score (vMOS) is commonly used in the industry to characterize video quality of experience, and operators improve the video service experience by improving the vMOS index.
In a video conference application scenario, when more than two conference sites participate in a conference, the sites are controlled by a Multipoint Control Unit (MCU), and the terminals of all sites connect to the MCU through standard interfaces. The terminals of all conference sites perform video mixing or switching through the MCU. In the prior art, video quality of experience in a video conference system is evaluated as follows: a functional module for evaluating video quality of experience is integrated in the receiving end device of the video conference system, and this module periodically calculates the video quality of experience during the conference.
In a video conference scenario with an MCU, at least one stream of uplink video data reaches the MCU through an uplink between a transmitting end device and the MCU; after the MCU performs video mixing or switching, the downlink video data is sent to a receiving end device through a downlink between the MCU and the receiving end device. Because the MCU must perform a transcoding operation when mixing or switching video, and the functional module integrated in the receiving end device can only measure the factors affecting quality of experience on the downlink, the video quality of experience evaluated by that module cannot accurately reflect the real video quality of experience from the transmitting end device to the receiving end device.
Disclosure of Invention
The embodiments of the application provide a method and an apparatus for evaluating video quality of experience, to solve the problem that existing methods cannot accurately evaluate the real video quality of experience from a transmitting end device to a receiving end device.
The embodiment of the application provides the following specific technical scheme:
In a first aspect, a receiving end device receives an uplink video impairment parameter sent by a control network element, and evaluates the video quality of experience from a transmitting end device to the receiving end device based on the uplink video impairment parameter and a downlink video impairment parameter, where the uplink video impairment parameter characterizes the degree of quality impairment of uplink video data sent by the transmitting end device to the control network element, and the downlink video impairment parameter characterizes the degree of quality impairment of downlink video data sent by the control network element to the receiving end device. In this way, the video impairment parameters of the uplink and the downlink can be combined to correctly evaluate the end-to-end video quality of experience from the transmitting end device to the receiving end device, improving the accuracy of video quality-of-experience evaluation in a video communication system, which in turn facilitates timely adjustment and safeguards the user's video communication experience.
In one possible design, the uplink video impairment parameter includes a video quality attenuation factor and an uplink transmission delay, and the downlink video impairment parameter includes the downlink video source quality, the interactive experience quality, and the downlink transmission delay, where the uplink is the transmission link from the transmitting end device to the control network element and the downlink is the transmission link from the control network element to the receiving end device. The receiving end device may evaluate the video quality of experience from the transmitting end device to the receiving end device as follows: it determines the video source quality from the transmitting end device to the receiving end device by combining the downlink video source quality with the video quality attenuation factor; determines the transmission delay from the transmitting end device to the receiving end device by combining the uplink transmission delay with the downlink transmission delay; determines the viewing experience quality from that end-to-end transmission delay; and finally determines the video quality of experience from the transmitting end device to the receiving end device from the end-to-end video source quality, the interactive experience quality, and the viewing experience quality. Optionally, the receiving end device may also combine further uplink and downlink video impairment parameters when evaluating the video quality of experience; the above is merely an example.
In one possible design, the video quality attenuation factor conforms to the following formula:

F = sQuality_c / sQuality_o

where F is the video quality attenuation factor, sQuality_c is the video source quality of the uplink, and sQuality_o is the reference video source quality, i.e., the quality of the video source data of the transmitting end device when it is not impaired by video coding. In one possible implementation, the receiving end device multiplies the downlink video source quality by the video quality attenuation factor and takes the resulting product as the video source quality from the transmitting end device to the receiving end device.
In one possible design, the receiving end device adds the uplink transmission delay to the downlink transmission delay and takes the resulting sum as the transmission delay from the transmitting end device to the receiving end device.
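As a minimal sketch of the two combination rules above (Python; function and parameter names are illustrative, not from the patent): the end-to-end source quality is the downlink source quality scaled by the attenuation factor, and the end-to-end delay is the sum of the two link delays.

```python
def end_to_end_source_quality(downlink_source_quality: float,
                              attenuation_factor: float) -> float:
    # End-to-end video source quality: downlink source quality
    # multiplied by the uplink video quality attenuation factor F.
    return downlink_source_quality * attenuation_factor


def end_to_end_delay(uplink_delay_ms: float, downlink_delay_ms: float) -> float:
    # End-to-end transmission delay: uplink delay plus downlink delay.
    return uplink_delay_ms + downlink_delay_ms
```

For example, with a downlink source quality of 4.0 and F = 0.9, the end-to-end source quality is 3.6.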
In one possible design, the transmitting end device includes a first transmitting end device and a second transmitting end device, the uplink video data includes first uplink video data and second uplink video data, the control network element is a lower-level control network element in a cascade mode, and the lower-level control network element is connected to an upper-level control network element. The receiving end device receives the uplink video impairment parameter sent by the lower-level control network element, where that parameter is determined from a first uplink video impairment parameter and a second uplink video impairment parameter: the first characterizes the degree of quality impairment of the first uplink video data sent by the first transmitting end device to the lower-level control network element, and the second characterizes the degree of quality impairment of the second uplink video data sent by the second transmitting end device to the lower-level control network element through the upper-level control network element. Optionally, the cascade mode may include two or more upper-level control network elements, and the method still applies. The current control network element aggregates all video impairment parameters and sends the aggregated parameters to the next-level control network element or to the receiving end device, so that the receiving end device can combine the uplink and downlink video impairment parameters and evaluate the video quality of experience from the transmitting end device to the receiving end device.
In a second aspect, a method for evaluating video quality of experience is provided: a control network element receives uplink video data sent by a transmitting end device, obtains an uplink video impairment parameter, and sends the uplink video impairment parameter to a receiving end device, where the uplink video impairment parameter characterizes the degree of quality impairment of the uplink video data and is used by the receiving end device to evaluate the video quality of experience from the transmitting end device to the receiving end device. In this way, the receiving end device can combine the uplink and downlink video impairment parameters, correctly evaluate the end-to-end video quality of experience from the transmitting end device to the receiving end device, and improve the accuracy of video quality-of-experience evaluation in a video communication system, which facilitates timely adjustment and safeguards the user's video communication experience.
In one possible design, the uplink video impairment parameter includes a video quality attenuation factor and an uplink transmission delay, where the uplink is the transmission link between the transmitting end device and the control network element. The control network element obtains the video source quality sQuality_c of the uplink and the resolution of the uplink video data, and determines the reference video source quality sQuality_o corresponding to that resolution from a mapping between resolution and reference video source quality, where the reference video source quality represents the quality of the video source data of the transmitting end device when it is not impaired by video coding. The control network element then calculates the video quality attenuation factor F according to the formula:

F = sQuality_c / sQuality_o

The control network element also obtains the uplink transmission delay from the uplink video data.
In one possible design, if there is a single transmitting end device, the uplink video impairment parameter includes the video quality attenuation factor of that transmitting end device and the transmission delay of the uplink between that transmitting end device and the control network element. The control network element obtains the video source quality sQuality_c of the uplink from the uplink video data sent by the transmitting end device, along with the resolution of the uplink video data; it then determines the reference video source quality sQuality_o corresponding to that resolution from the mapping between resolution and reference video source quality, where the reference video source quality characterizes the quality of the video source data of the transmitting end device without video coding impairment. The control network element calculates the video quality attenuation factor F according to the formula:

F = sQuality_c / sQuality_o

The control network element also obtains the uplink transmission delay between the transmitting end device and the control network element from the uplink video data.
In one possible design, if there are at least two transmitting end devices, the uplink video impairment parameters include the average of the video quality attenuation factors of the at least two uplink video streams sent by those devices, and the maximum of the transmission delays of the at least two uplinks between those devices and the control network element. The control network element obtains the video source quality sQuality_c of each of the at least two uplinks from the corresponding uplink video stream, along with the resolution of each stream, and determines the reference video source quality sQuality_o corresponding to each resolution from the mapping between resolution and reference video source quality. It then calculates the video quality attenuation factor of each uplink stream according to the formula:

F = sQuality_c / sQuality_o

yielding at least two values F_1, ..., F_n, and computes their average:

F_avg = (F_1 + ... + F_n) / n
The control network element also obtains the transmission delay of each of the at least two uplinks from the uplink video data and determines the maximum of these delays.
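For the multi-sender case just described, a small sketch of the aggregation rule (Python; names are illustrative): the mean of the per-stream attenuation factors and the maximum of the per-stream uplink delays.

```python
def aggregate_uplink_impairments(attenuation_factors, uplink_delays_ms):
    # With two or more transmitting end devices, the control network
    # element reports the average of the per-stream attenuation
    # factors F and the maximum of the per-stream uplink delays.
    mean_f = sum(attenuation_factors) / len(attenuation_factors)
    max_delay = max(uplink_delays_ms)
    return mean_f, max_delay
```

For three streams with F values 0.8, 0.9, 1.0 and delays 40, 60, 50 ms, the reported parameters would be F_avg = 0.9 and 60 ms.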
In one possible design, the transmitting end device includes a first transmitting end device and a second transmitting end device, the uplink video data includes first uplink video data and second uplink video data, the control network element is a lower-level control network element in a cascade mode, and the lower-level control network element is connected to an upper-level control network element. The lower-level control network element receives the first uplink video data sent by the first transmitting end device, and receives the second uplink video data sent by the second transmitting end device through the upper-level control network element. The lower-level control network element obtains a first uplink video impairment parameter from the first uplink video data, where the first uplink video impairment parameter characterizes the degree of quality impairment of the first uplink video data. It also receives an upper-level uplink video impairment parameter sent by the upper-level control network element, which characterizes the degree of quality impairment of the upper-level uplink video data from the second transmitting end device to the upper-level control network element. The lower-level control network element then calculates the second uplink video impairment parameter based on the upper-level uplink video impairment parameter and the upper-to-lower uplink video impairment parameter, where the upper-to-lower uplink video impairment parameter characterizes the degree of quality impairment of the video data on the link from the upper-level control network element to the lower-level control network element, and the second uplink video impairment parameter characterizes the degree of quality impairment of the second uplink video data. Finally, the lower-level control network element determines the overall uplink video impairment parameter from the first and second uplink video impairment parameters.
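The cascade computation above can be sketched as follows (Python). The patent does not specify the combination rule, so the choices here are assumptions for illustration only: attenuation factors multiply and delays add along concatenated links, and the two streams are then combined like the multi-sender case (mean of the factors, maximum of the delays).

```python
def second_uplink_impairment(f_upper, delay_upper_ms,
                             f_upper_lower, delay_upper_lower_ms):
    # Assumed rule: across the upper-level uplink and the
    # upper-to-lower link, attenuation factors multiply and delays add.
    return f_upper * f_upper_lower, delay_upper_ms + delay_upper_lower_ms


def overall_uplink_impairment(f_first, delay_first_ms,
                              f_second, delay_second_ms):
    # Combine the two uplink streams like the multi-sender case:
    # mean of the attenuation factors, maximum of the delays.
    return (f_first + f_second) / 2.0, max(delay_first_ms, delay_second_ms)
```

A lower-level control network element would first compute the second uplink impairment from the upper-level report and its own measurement of the upper-to-lower link, then merge it with the first uplink impairment before reporting to the receiving end device.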
In a third aspect, an apparatus for evaluating video quality of experience is provided, the apparatus having the function of implementing the behavior of the receiving end device in the first aspect or any possible implementation thereof. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function.
In a fourth aspect, an apparatus for evaluating video quality of experience is provided, the apparatus having the function of implementing the behavior of the control network element in the second aspect or any possible implementation thereof. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function.
In a fifth aspect, an apparatus for evaluating video quality of experience is provided, the apparatus including a transceiver, a processor, and a memory, the processor and the memory being connected by a bus system, the processor being configured to execute code in the memory, where execution of the code causes the processor to perform the method of the first aspect or any possible implementation thereof.
In a sixth aspect, an apparatus for evaluating video quality of experience is provided, the apparatus including a transceiver, a processor, and a memory, the processor and the memory being connected by a bus system, the processor being configured to execute code in the memory, where execution of the code causes the processor to perform the method of the second aspect or any possible implementation thereof.
In a seventh aspect, there is provided a system for evaluating quality of experience of video, the system comprising the apparatus of the third aspect or the fifth aspect, and the apparatus of the fourth aspect or the sixth aspect.
In an eighth aspect, a computer storage medium is provided for storing a computer program, the computer program including instructions for performing the method of the first aspect, the second aspect, or any possible implementation of the first or second aspect.
In a ninth aspect, embodiments of the present invention provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above aspects.
Drawings
FIG. 1 is a block diagram of a system for evaluating video quality of experience according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for evaluating video quality of experience according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a cascade mode architecture in an embodiment of the present application;
FIG. 4 is a schematic diagram of a video conference system architecture according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a method for evaluating video quality of experience according to scenario one in the present application;
FIG. 6 is a schematic diagram illustrating a method for evaluating video quality of experience in scenario two according to an embodiment of the present application;
fig. 7 is a schematic diagram of a method for evaluating video quality of experience in scene three in the embodiment of the present application;
fig. 8 is a schematic diagram illustrating one of a signaling flow and an execution method for evaluating video quality of experience according to an embodiment of the present application;
FIG. 9 is a second schematic diagram illustrating a signaling flow and an execution method for evaluating video quality of experience according to an embodiment of the present application;
fig. 10 to fig. 13 are schematic structural diagrams of an apparatus for evaluating video quality of experience according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, in the embodiment of the present application, a system 100 for evaluating quality of experience of a video includes at least a control network element 101, an originating device 102, and a receiving device 103. The originating device 102 sends the uplink video data to the controlling network element 101 via an uplink, which is a transmission link between the originating device 102 and the controlling network element 101. The control network element 101 sends downlink video data to the sink device 103 through a downlink, where the downlink is a transmission link between the control network element 101 and the sink device 103. In the embodiment of the present application, a module having a video experience quality assessment function is integrated in both the control network element 101 and the receiving end device 103, and the module may be configured to assess video experience quality according to received video data.
Based on the system 100 for evaluating video quality of experience shown in fig. 1, as shown in fig. 2, in the embodiment of the present application, the method for evaluating video quality of experience has the following flow.
Step 201, the sending end device sends the uplink video data to the control network element, and the control network element receives the uplink video data sent by the sending end device.
Step 202, the control network element obtains an uplink video damage parameter according to the received uplink video data.
The uplink video damage parameter is used for representing the quality damage degree of the uplink video data. The uplink video impairment parameters include a video quality degradation factor and an uplink transmission delay.
Step 203, the control network element sends the uplink video damage parameter to the receiving end device, and the receiving end device receives the uplink video damage parameter sent by the control network element.
Step 204, the receiving end device evaluates the video quality of experience from the transmitting end device to the receiving end device based on the uplink video impairment parameter and the downlink video impairment parameter.
The downlink video damage parameter is used for representing the quality damage degree of downlink video data sent to the receiving end device by the control network element. The downlink video impairment parameters comprise video source quality of a downlink, interactive experience quality and transmission delay of the downlink.
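To make the two parameter sets concrete, a minimal sketch (Python; the field names are illustrative, not from the patent):

```python
from dataclasses import dataclass


@dataclass
class UplinkImpairment:
    # Reported by the control network element; characterizes the
    # quality impairment of the uplink video data.
    attenuation_factor: float     # video quality attenuation factor F
    transmission_delay_ms: float  # uplink transmission delay


@dataclass
class DownlinkImpairment:
    # Measured at the receiving end device; characterizes the
    # quality impairment of the downlink video data.
    source_quality: float         # video source quality of the downlink
    interaction_quality: float    # interactive experience quality
    transmission_delay_ms: float  # downlink transmission delay
```

The receiving end device would hold one instance of each when performing step 204.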
Specifically, the video quality attenuation factor conforms to the following formula:

F = sQuality_c / sQuality_o

where F is the video quality attenuation factor, sQuality_c is the video source quality of the uplink, and sQuality_o is the reference video source quality of the video source data of the transmitting end device when it is not impaired by video coding.
In step 202, the control network element obtains the uplink video impairment parameter; specifically, it needs to obtain both the video quality attenuation factor F and the uplink transmission delay. The uplink transmission delay is obtained from the uplink video data. As for the attenuation factor:
control network element obtaining video source quality sQuality of uplinkcAnd the resolution of the uplink video data, and determining the reference video source quality sQuality corresponding to the resolution of the uplink video data according to the corresponding relation between the resolution and the reference video source qualityoThen according to the formula
Figure BDA0001242803460000052
A video quality attenuation factor F is calculated.
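A sketch of this attenuation-factor computation (Python). The resolution-to-reference-quality mapping values are placeholders; the patent only states that such a correspondence exists:

```python
# Hypothetical mapping from video resolution to reference video
# source quality sQuality_o (placeholder values, not from the patent).
REFERENCE_SOURCE_QUALITY = {
    "1280x720": 4.3,
    "1920x1080": 4.6,
}


def attenuation_factor(uplink_source_quality: float, resolution: str) -> float:
    # F = sQuality_c / sQuality_o, where sQuality_o is the reference
    # quality of an unimpaired source at this resolution.
    squality_o = REFERENCE_SOURCE_QUALITY[resolution]
    return uplink_source_quality / squality_o
```

An unimpaired 1080p stream (sQuality_c equal to the reference) yields F = 1.0; a degraded stream yields F < 1.0.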
It should be noted here that the number of the originating device may be one or at least two.
If there is a single transmitting end device, the uplink video impairment parameter includes the video quality attenuation factor of that device and the transmission delay of the uplink between that device and the control network element. The control network element obtains the uplink transmission delay between the transmitting end device and itself from the uplink video data. It also obtains the video source quality sQuality_c of the uplink between the transmitting end device and itself from the uplink video data sent by the transmitting end device, along with the resolution of the uplink video data, and determines the reference video source quality sQuality_o corresponding to that resolution from the mapping between resolution and reference video source quality. The control network element then calculates the video quality attenuation factor according to the formula:

F = sQuality_c / sQuality_o
If there are at least two transmitting end devices, the uplink video impairment parameters include the average of the video quality attenuation factors of the at least two uplink video streams sent by those devices, and the maximum of the transmission delays of the at least two uplinks between those devices and the control network element. The control network element obtains the transmission delay of each of the at least two uplinks from the uplink video data and determines the maximum of these delays. It also obtains the video source quality sQuality_c of each of the at least two uplinks from the corresponding uplink video stream sent by each transmitting end device, along with the resolution of each stream, and determines the reference video source quality sQuality_o corresponding to each resolution from the mapping between resolution and reference video source quality. The control network element then calculates the video quality attenuation factor of each uplink stream according to the formula:

F = sQuality_c / sQuality_o

yielding at least two values F_1, ..., F_n, whose average it computes as:

F_avg = (F_1 + ... + F_n) / n
In step 204, the receiving end device evaluates the video quality of experience from the transmitting end device to the receiving end device, specifically through the following steps S1 to S3.
S1, the receiving end device determines the video source quality from the transmitting end device to the receiving end device by combining the downlink video source quality with the video quality attenuation factor in the uplink video impairment parameter.
The receiving end device multiplies the video source quality of the downlink by a video quality attenuation factor F, and determines the product as the video source quality from the transmitting end device to the receiving end device.
S2, the terminating device determines the transmission delay from the originating device to the terminating device according to the transmission delay of the uplink and the transmission delay of the downlink, and determines the viewing experience quality according to the determined transmission delay from the originating device to the terminating device.
The receiving end device adds the transmission delay of the uplink and the transmission delay of the downlink, and determines the sum as the transmission delay from the transmitting end device to the receiving end device.
S3, the receiving end device determines the video quality of experience from the transmitting end device to the receiving end device from the determined end-to-end video source quality, the interactive experience quality, and the viewing experience quality.
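Putting S1 to S3 together as one sketch (Python). The patent does not give concrete formulas for the viewing-experience mapping or the final combination, so the delay-to-score mapping and the simple average below are illustrative placeholders only:

```python
def evaluate_video_qoe(downlink_source_quality: float,
                       attenuation_factor: float,
                       uplink_delay_ms: float,
                       downlink_delay_ms: float,
                       interaction_quality: float) -> float:
    # S1: end-to-end video source quality.
    source_quality = downlink_source_quality * attenuation_factor
    # S2: end-to-end delay, mapped to a viewing-experience score on a
    # 1..5 scale (hypothetical mapping: lose one point per 200 ms).
    total_delay_ms = uplink_delay_ms + downlink_delay_ms
    viewing_quality = max(1.0, 5.0 - total_delay_ms / 200.0)
    # S3: combine source, interaction, and viewing quality
    # (a plain average stands in for the unspecified rule).
    return (source_quality + interaction_quality + viewing_quality) / 3.0
```

With an unimpaired uplink (F = 1.0), a downlink source quality of 4.0, 200 ms of total delay, and an interaction quality of 4.0, this placeholder yields a score of 4.0.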
In practical applications, if only a few terminals participate in communication in the video communication system, each terminal needs to connect to only one control network element to complete the media exchange. However, when many terminals participate in communication, especially in a multi-site application scenario, control network elements in a cascade mode are often required. In one possible implementation, the cascade-mode architecture includes an upper-level control network element and a lower-level control network element. The upper-level control network element is connected to the lower-level control network element, and terminals may be connected to either. The lower-level control network element may perform the method performed by the control network element 101 described above. Taking any terminal connected to the lower-level control network element as the receiving end device, the transmitting end devices may be of two types: a first transmitting end device and a second transmitting end device. The first transmitting end device is any terminal that is connected to the lower-level control network element and sends video data, and the second transmitting end device is any terminal that is connected to the upper-level control network element and sends video data. Accordingly, the uplink video data described in the method shown in fig. 2 may include first uplink video data and second uplink video data, where the first uplink video data is sent by the first transmitting end device and the second uplink video data is sent by the second transmitting end device.
The video data received by the upper-level control network element that was sent by a second sending end device may be referred to as upper-level uplink video data, and the video data received by the lower-level control network element that was sent by the upper-level control network element may be referred to as upper-to-lower uplink video data.
The upper-level control network element also integrates a module with a video experience quality evaluation function, which is used to collect an upper-level uplink video damage parameter. The upper-level uplink video damage parameter is used to characterize the degree of quality damage of the upper-level uplink video data from the second sending end device to the upper-level control network element.
Specifically, based on the method for evaluating video quality of experience shown in fig. 2, as shown in fig. 3, in the cascade mode, the method for evaluating video quality of experience is as follows.
Step 301, a first sending end device sends first uplink video data to the lower-level control network element, and the lower-level control network element receives the first uplink video data sent by the first sending end device; a second sending end device sends second uplink video data to the lower-level control network element, and the lower-level control network element receives the second uplink video data sent by the second sending end device.
The second sending end device sends the second uplink video data to the lower-level control network element through the upper-level control network element. In practical applications, the second sending end device sends upper-level uplink video data to the upper-level control network element; the upper-level control network element performs a video transcoding and mixing operation on the upper-level uplink video data sent by each second sending end device, and sends the transcoded and mixed data (which may be called the upper-to-lower uplink video data) to the lower-level control network element.
Step 302, the lower-level control network element obtains a first uplink video damage parameter according to the first uplink video data, and receives the upper-level uplink video damage parameter sent by the upper-level control network element.
The first uplink video damage parameter is used for representing the quality damage degree of the first uplink video data.
Step 303, the lower-level control network element calculates a second uplink video damage parameter based on the upper-level uplink video damage parameter and an upper-to-lower uplink video damage parameter. The upper-to-lower uplink video damage parameter is used to characterize the degree of quality damage of the upper-to-lower uplink video data from the upper-level control network element to the lower-level control network element, and the second uplink video damage parameter is used to characterize the degree of quality damage of the second uplink video data.
Step 304, the lower-level control network element sends the uplink video damage parameter to the receiving end device, and the receiving end device receives the uplink video damage parameter sent by the lower-level control network element.
The uplink video damage parameter is determined from the first uplink video damage parameter and the second uplink video damage parameter. For example, the uplink video damage parameter includes the first uplink video damage parameter and the second uplink video damage parameter; as another example, the uplink video damage parameter is calculated by the lower-level control network element from the first uplink video damage parameter and the second uplink video damage parameter.
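The two alternatives above can be sketched as follows. This is an illustrative sketch only; representing the "calculated" option as an average is purely an assumption, since the embodiment does not fix the calculation.

```python
# Illustrative sketch of step 304's two alternatives. Assumption: the
# combined parameter is an average of the first and second uplink video
# damage parameters; the embodiment does not define this calculation.

def params_to_forward(first, second, combine=False):
    """Return either both parameter lists unchanged, or one value
    calculated from them."""
    if not combine:
        return first + second            # forward both parameters as-is
    merged = first + second
    return sum(merged) / len(merged)     # forward one combined value
```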
Step 305, the receiving end device evaluates the video experience quality from the sending end device to the receiving end device based on the uplink video damage parameter and the downlink video damage parameter.
For a specific evaluation method, reference may be made to the method for evaluating video experience quality shown in fig. 2; repeated details are not described again.
It should be noted that, in a specific implementation, the video communication system may also include at least two upper-level control network elements, and the method may also be adopted in such a network architecture. The current control network element integrates all received video damage parameters and sends the integrated video damage parameter to the next-level control network element or to the receiving end device, so that the receiving end device can integrate the video damage parameters of the uplink and the downlink and evaluate the video experience quality from the sending end device to the receiving end device.
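The hop-by-hop integration just described can be sketched as follows. This is an illustrative sketch only: the class and function names, and the concrete combination operators (multiplying attenuation factors, summing link delays), are assumptions rather than definitions from this embodiment.

```python
from dataclasses import dataclass

@dataclass
class DamageParams:
    attenuation: float  # video quality attenuation factor (1.0 = undamaged)
    delay_ms: float     # accumulated transmission delay

def integrate(received: DamageParams, own_link: DamageParams) -> DamageParams:
    """One control network element folds its own measured link damage
    into the parameters received from the previous network element."""
    return DamageParams(received.attenuation * own_link.attenuation,
                        received.delay_ms + own_link.delay_ms)

def forward_through_cascade(source: DamageParams, links: list) -> DamageParams:
    """Integrated parameter after passing every cascaded control
    network element on the way to the receiving end device."""
    params = source
    for link in links:
        params = integrate(params, link)
    return params
```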
To sum up, in the method for evaluating video experience quality provided in the embodiment of the present application, the control network element obtains the uplink video damage parameter and sends the uplink video damage parameter to the receiving end device, and the receiving end device can comprehensively consider the uplink video damage parameter and the downlink video damage parameter, and correctly evaluate the video experience quality from the transmitting end device to the receiving end device.
Embodiments of the present application also provide a computer storage medium for storing a computer program including instructions for performing the method for evaluating video quality of experience as shown in fig. 2.
The method for evaluating the video experience quality provided by the embodiment of the application can be applied to various video communication systems with control network elements, and the control network elements can be used for executing video transcoding operation in the video communication systems.
The method for evaluating the video experience quality provided by the embodiment of the present application is further described in detail with reference to specific application scenarios.
Taking application to a video conference system as an example, as shown in fig. 4, in the video conference system 400 a terminal 407 establishes a signaling channel and a bearer channel with the network side, is scheduled by the network side to access the video conference, and performs data transmission. The links shown by solid lines in fig. 4 represent signaling channels, which are responsible for media negotiation control; the links shown by dashed lines represent bearer channels, which are responsible for transmission of the media streams. The chairman terminal 401 is used for scheduling the video conference and sending control signaling in the video conference. A conference Application Server (AS) 402 belongs to the service capability layer network elements; it provides a multimedia conference service based on an IP Multimedia Subsystem (IMS) network, completes the scheduling of voice, data, or video conferences, and provides web-interface configuration and management services. A Multimedia Resource Function Controller (MRFC) 403 controls media resources on a Multimedia Resource Function Processor (MRFP) through the H.248 protocol; the MRFC 403 parses Session Initiation Protocol (SIP) resource control commands from the AS 402, converts them into control commands for the MRFP, and generates the corresponding charging information. A Multipoint Control Unit (MCU) 404 is located at the core exchange position on the bearer channel of the media streams; during video processing, the MCU 404 decodes the video code stream received from a sending end device, adjusts the video resolution, mixes multiple channels of video, re-encodes the video, and sends it to the corresponding receiving end device. The call control network element 405 performs signaling operations for call session control, manages SIP sessions, and cooperates with other network entities for session control, service control, and resource allocation. The Access Gateway 406 provides an analog subscriber line interface and directly encapsulates user data and subscriber line signaling in IP packets, so as to connect ordinary telephone users to a softswitch or an IMS network and provide services such as telephony and dial-up access.
The video conference system 400 includes at least two terminals 407; one terminal 407 is illustrated in fig. 4 as an example. The terminals 407 may be divided into sending end devices and receiving end devices according to the transmission direction of the video data. The connections of the sending end devices and the receiving end devices to the network side both conform to the network architecture shown in fig. 4, and the working modes between each device and the network side conform to the description above. In the following description, the control network element may be the MCU 404 in fig. 4; of course, the control network element may also be another type of network element that undertakes the video transcoding and mixing function in a video communication system, for example, a Universal Media Gateway (UMG), which is not limited in this embodiment.
The method for evaluating the video experience quality is described in further detail below, taking its application to the video conference system 400 as an example.
In the video conference system 400, the control network element is an MCU, and the video conference system has different application scenarios according to the number and layout of conference sites. Several application scenarios are respectively described below.
Scenario one:
Few conference sites participate in the video conference, and media exchange among the sites can be completed through a single MCU; that is, the media streams between the terminals in the sites are converted by only one MCU. The conference sites are divided into a main site and sub-sites.
As shown in fig. 5, assume that the conference sites are divided into a main site and sub-sites, the terminal in the main site is the sending end device, and the terminal in each sub-site is a receiving end device. The MCU decodes the video received from terminal 1 in the main site and adjusts the resolution according to the requirements of terminals 2, 3, and 4 in the sub-sites; for example, if the resolution of the video data sent by terminal 1 is 1080P, the resolutions of the video data sent to terminals 2, 3, and 4 are adjusted to 360P, 480P, and 720P respectively, and the video is compressed and then sent to each sub-site through the network. In this embodiment of the application, the MCU acquires the uplink video damage parameter according to the uplink video data sent by terminal 1 and sends the uplink video damage parameter to the terminals in the sub-sites through a control packet, so that terminals 2, 3, and 4 can calculate the video experience quality from terminal 1 to themselves according to the uplink video damage parameter and their respective downlink video damage parameters. The MCU may send the uplink video damage parameter to the terminals in the sub-sites periodically or aperiodically, which is not limited in this application.
Scenario two: few conference sites participate in the video conference, and media exchange among the sites can be completed through a single MCU; that is, the media streams between the terminals in the sites are converted by only one MCU.
As shown in fig. 6, the conference sites are not divided into a main site and sub-sites. The terminals in the sites (terminal 1, terminal 2, terminal 3, and terminal 4) send uplink video data to the MCU; the MCU decodes each channel of received uplink video data, adjusts the video resolution, combines the channels of uplink video data, compresses the combined video, and sends it to each site through the network. Each terminal is thus both a sending end device and a receiving end device. In this embodiment of the application, the MCU periodically sends the integrated uplink video damage parameter of all channels of uplink video data to the terminals in the sites through a control packet, so that terminals 1, 2, 3, and 4 can calculate the video experience quality from the sending end devices to themselves according to the integrated uplink video damage parameter and their respective downlink video damage parameters.
Scenario three: many conference sites participate in the video conference, and one MCU is not enough to complete the media exchange among the sites; cascaded MCUs are required to schedule the conference. Assume that the MCU cascade includes an upper-level MCU and a lower-level MCU.
As shown in fig. 7, the upper-level MCU is connected to terminal A1, terminal A2, terminal A3, and terminal A4, and the lower-level MCU is connected to terminal B1, terminal B2, terminal B3, and terminal B4.
In the MCU cascade mode, each MCU integrates a module with a video experience quality evaluation function. The current MCU acquires the video damage parameters between itself and the previous network element, where the previous network element may be the upper-level MCU of the current MCU or a terminal connected to the current MCU. The current MCU integrates the video damage parameters and sends the integrated video damage parameter to the receiving end device. Finally, the receiving end device comprehensively evaluates the video experience quality from the sending end device to the receiving end device according to the received video damage parameters sent by the MCU and the video damage parameters detected by the receiving end device itself.
Assume that terminal B4 is the receiving end device and the other terminals are sending end devices. Terminal A1, terminal A2, terminal A3, and terminal A4 each send upper-level uplink video data to the upper-level MCU; the upper-level MCU obtains the upper-level uplink video damage parameters according to the upper-level uplink video data sent by these terminals and sends the upper-level uplink video damage parameters to the lower-level MCU. The upper-level MCU also sends the upper-to-lower uplink video data to the lower-level MCU, and the lower-level MCU obtains the upper-to-lower uplink video damage parameters according to the upper-to-lower uplink video data. The lower-level MCU also receives the uplink video data (denoted first uplink video data) sent by terminal B1, terminal B2, and terminal B3, and obtains the first uplink video damage parameters of the first uplink video data. The lower-level MCU then obtains the second uplink video damage parameters according to the upper-level uplink video damage parameters and the upper-to-lower uplink video damage parameters. The lower-level MCU sends the first uplink video damage parameters and the second uplink video damage parameters to terminal B4; or, the lower-level MCU calculates the uplink video damage parameter according to the first uplink video damage parameters and the second uplink video damage parameters and sends the calculated uplink video damage parameter to terminal B4.
As the receiving end device, terminal B4 receives the downlink video data sent by the lower-level MCU and obtains the downlink video damage parameter according to the downlink video data. Terminal B4 then evaluates the video experience quality from the sending end devices to the receiving end device according to the received first uplink video damage parameters, the second uplink video damage parameters, and the downlink video damage parameter; or, terminal B4 evaluates the video experience quality from the sending end devices to the receiving end device according to the received uplink video damage parameter and the downlink video damage parameter.
The following describes the method for evaluating the video experience quality in scenario one, scenario two, and scenario three in further detail.
For scenario one and scenario two, as shown in fig. 8, when evaluating the quality of experience of the video, the signaling flow and the execution method between the network elements are as follows.
Step 801, the conference application server initiates a SIP request to the call control network element to invite the terminal corresponding to the called number; the call control network element receives the SIP request sent by the conference application server and sends it to the terminal through the access network element.
It should be noted that the terminal may be a sending end device or a receiving end device; both access the conference through the process described here.
The SIP request does not carry a Session Description Protocol (SDP) body; the purpose is to let the terminal provide the SDP offer first, and then negotiate media capabilities with the conference port according to the SDP offer. SDP is a protocol used to describe multimedia session announcements, multimedia session invitations, and other forms of multimedia session initialization; it typically includes session information and media information.
Step 802, the terminal replies 200 OK to the conference application server through the access network element and the call control network element, indicating a response to the SIP request, and carries the SDP offer in the 200 OK message. The conference application server receives the 200 OK from the terminal.
In this embodiment of the application, an indication is added to the original SDP offer to indicate whether periodic reporting of the video experience quality is supported. Taking vMOS as the measure of video experience quality as an example, the indication shown in Table 1 is added to the SDP offer. The parameter name of the indication is vMOS_report_flag, which indicates whether periodic reporting of video vMOS is supported: TRUE indicates supported, and FALSE indicates not supported.
TABLE 1

Parameter name | Parameter description | Example
vMOS_report_flag | Whether periodic reporting of video vMOS is supported | TRUE or FALSE
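For illustration only, an SDP offer carrying such an indication might look like the fragment below. The parameter name vMOS_report_flag comes from Table 1, but its encoding as an a= attribute line, and all addresses and payload types shown, are hypothetical assumptions rather than a format defined by the embodiment.

```text
v=0
o=- 20518 0 IN IP4 192.0.2.1
s=conference
c=IN IP4 192.0.2.1
t=0 0
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000
a=vMOS_report_flag:TRUE
```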
Step 803, after receiving the 200OK response, the conference application server initiates a request to the multimedia resource function controller, and the request carries the SDP offer, and the multimedia resource function controller receives the request sent by the conference application server.
And step 804, after receiving the request, the multimedia resource function controller sends a command to the MCU to instruct the MCU to add the terminal, and the MCU receives the command sent by the multimedia resource function controller.
Step 805, after receiving the command, the MCU adds the terminal, allocates media resources to the terminal, and returns a 200OK to the multimedia resource function controller, which indicates that the terminal successfully allocates media resources, and the multimedia resource function controller receives the 200OK returned by the MCU.
Step 806, the multimedia resource function controller returns 200 OK to the conference application server; the 200 OK carries the SDP answer (SDP Answer). The conference application server receives the 200 OK returned by the multimedia resource function controller.
In step 807, the conference application server replies an ACK message to the terminal through the call control network element and the access gateway, and the terminal receives the ACK message returned by the conference application server, where the ACK message carries an SDP Answer.
After the terminal receives the ACK message, a media stream is established between the MCU and the terminal, and the MCU adds the terminal into the interaction of the video conference.
If the terminal corresponding to the sending end device declared support for periodic reporting of the video experience quality during the media negotiation, the MCU starts to periodically acquire the uplink video damage parameters from the sending end device to the MCU and sends them to the receiving end device.
The specific process is as follows:
Step 808, the sending end device sends uplink video data to the MCU, and the MCU receives the uplink video data sent by the sending end device.
Step 809, the MCU determines the video source quality sQuality_c of the uplink and the video quality attenuation factor of the uplink.

The MCU analyzes the uplink video data, extracts the video resolution and coding format of the uplink video data, and periodically counts parameters such as the bit rate and frame rate of the video; from these parameters it calculates the video source quality sQuality_c of the uplink. sQuality_c may be calculated by any method known in the art.
For scenario one, the MCU only needs to acquire the video source quality of the uplink from terminal 1 in the main site to the MCU. For scenario two, the MCU needs to acquire the video source quality of each uplink from each of terminals 1, 2, 3, and 4 to the MCU.
The MCU maintains, for videos of different resolutions without video coding impairment, a reference video source quality. Assuming tabular maintenance as shown in Table 2, CIF, 360P, VGA, 4CIF, 720P, and 1080P are resolutions commonly used in video conference systems. Each resolution corresponds to a reference video source quality, and the MCU queries the corresponding reference video source quality sQuality_o according to the resolution of the uplink video data. The video quality attenuation factor is then calculated according to the formula

F = sQuality_c / sQuality_o.
TABLE 2

Resolution | sQuality_o
CIF | 4.06
360P | 4.12
VGA | 4.24
4CIF | 4.31
720P | 4.58
1080P | 4.78
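Under the reading that the attenuation factor is the ratio of the measured uplink source quality sQuality_c to the Table 2 reference sQuality_o for the same resolution, the computation of step 809 can be sketched as follows; this is an illustrative sketch, and the function names are not from the embodiment.

```python
# Illustrative sketch of step 809. Assumption: the video quality
# attenuation factor is F = sQuality_c / sQuality_o, where sQuality_o is
# the Table 2 reference for the uplink's resolution.

REFERENCE_QUALITY = {  # Table 2: reference sQuality_o per resolution
    "CIF": 4.06, "360P": 4.12, "VGA": 4.24,
    "4CIF": 4.31, "720P": 4.58, "1080P": 4.78,
}

def attenuation_factor(squality_c: float, resolution: str) -> float:
    """Video quality attenuation factor of one uplink."""
    return squality_c / REFERENCE_QUALITY[resolution]

def average_attenuation(factors: list) -> float:
    """Scenario two: average of the per-uplink attenuation factors."""
    return sum(factors) / len(factors)
```

For scenario one only the main-site uplink's factor is needed; for scenario two the per-uplink factors of terminals 1 to 4 would be averaged with average_attenuation.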
For scenario one, the MCU only needs to calculate the video quality attenuation factor of the uplink from terminal 1 in the main site to the MCU. For scenario two, the MCU needs to calculate the video quality attenuation factor of each uplink from each of terminals 1, 2, 3, and 4 to the MCU, and calculate the average value F̄ of the per-uplink video quality attenuation factors.
Step 810, the MCU detects the uplink transmission delay.
The uplink transmission delay may be the round-trip propagation delay of a control packet; for example, the control packet is a Real-time Transport Control Protocol (RTCP) packet. For scenario one, the uplink transmission delay is the transmission delay RTT of the uplink from terminal 1 in the main site to the MCU. For scenario two, the uplink transmission delay is the maximum value MaxRTT among the transmission delays of the uplinks from each of terminals 1, 2, 3, and 4 to the MCU.
It should be noted that step 810 may be performed before, after, or in parallel with step 809.
Step 811, the MCU sends the video quality attenuation factor and uplink transmission delay of the uplink to the receiving end device, and the receiving end device receives the video quality attenuation factor and uplink transmission delay of the uplink sent by the MCU.
For example, the MCU carries these parameters in the extension information of the RTCP packet sent to the receiving end device; an example is shown in Table 3.
TABLE 3 (provided as an image in the original; it shows the RTCP extension fields carrying the video quality attenuation factor and transmission delay of the uplink)
Step 812, the receiving end device evaluates the video experience quality from the sending end device to the receiving end device.
Specifically, the receiving end device calculates the video source quality sQuality_d of the downlink according to parameters of the downlink video data such as the video resolution, coding format, bit rate, frame rate, and screen size.

The receiving end device then combines the video quality attenuation factor of the uplink with the video source quality sQuality_d of the downlink to calculate the video source quality sQuality' from the sending end device to the receiving end device.

For scenario one, sQuality' = sQuality_d × F, where F is the video quality attenuation factor of the uplink; for scenario two, sQuality' = sQuality_d × F̄, where F̄ is the average of the per-uplink video quality attenuation factors.
The receiving end device also calculates the interaction experience quality sInteraction.
The receiving end device combines the downlink transmission delay RTT_d with the uplink transmission delay to calculate the transmission delay RTT_a from the sending end device to the receiving end device.

For scenario one, RTT_a = RTT_d + RTT; for scenario two, RTT_a = RTT_d + MaxRTT.

The receiving end device calculates the viewing experience quality sView according to RTT_a.
The receiving end device calculates the video experience quality from the sending end device to the receiving end device, for example vMOS, according to the calculated video source quality sQuality', the interaction experience quality sInteraction, and the viewing experience quality sView.
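A minimal sketch of the receiving-end combination of step 812 follows, assuming the end-to-end source quality is the downlink source quality scaled by the uplink attenuation factor and that transmission delays add along the path. The function names are illustrative, and the final vMOS combination is model-specific, so a plain average stands in for it here.

```python
# Illustrative sketch of step 812 at the receiving end device.
# Assumptions: sQuality' = sQuality_d * F (uplink factor or its average)
# and RTT_a = RTT_d + uplink RTT (RTT in scenario one, MaxRTT in
# scenario two). The vmos() combination is a placeholder only.

def end_to_end_quality(squality_d: float, uplink_factor: float) -> float:
    return squality_d * uplink_factor

def end_to_end_delay(rtt_d_ms: float, uplink_rtt_ms: float) -> float:
    return rtt_d_ms + uplink_rtt_ms

def vmos(squality: float, sinteraction: float, sview: float) -> float:
    # Placeholder: stands in for the model-specific vMOS formula,
    # which is not given in this description.
    return (squality + sinteraction + sview) / 3
```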
For scenario three, as shown in fig. 9, when evaluating the quality of experience of the video, the signaling flow and the execution method between the network elements are as follows.
In fig. 9, the terminals are taken as including both the sending end devices and the receiving end device; both access the video conference using the method shown in fig. 9. Steps 901 to 907 are similar to steps 801 to 807, and repeated details are not described here.
After the media negotiation of steps 901 to 907, the terminal establishes media streams with the MCU1 and the MCU2. If the terminal corresponding to the sending end device declared support for reporting the video experience quality during the media negotiation of steps 901 to 907, the upper-level MCU acquires the upper-level uplink video damage parameters from the sending end devices (including terminal A1, terminal A2, terminal A3, and terminal A4) to the upper-level MCU and sends them to the lower-level MCU. The lower-level MCU acquires the first uplink video damage parameters from the sending end devices (including terminal B1, terminal B2, and terminal B3) to the lower-level MCU and the upper-to-lower uplink video damage parameters from the upper-level MCU to the lower-level MCU, calculates the second uplink video damage parameters according to the received upper-level uplink video damage parameters and the upper-to-lower uplink video damage parameters, and sends the first uplink video damage parameters and the second uplink video damage parameters to the receiving end device (i.e., terminal B4); or, it calculates the uplink video damage parameter according to the first uplink video damage parameters and the second uplink video damage parameters, and sends the calculated uplink video damage parameter to the receiving end device (i.e., terminal B4).
The specific process is as follows:
Step 908, the MCU1 calculates the average video quality attenuation factor of the upper-level uplinks from each sending end device (including terminal A1, terminal A2, terminal A3, and terminal A4) to the MCU1, denoted F̄1, and detects the maximum value among the transmission delays of the upper-level uplinks, denoted MaxRTT1.

The methods for calculating F̄1 and MaxRTT1 are as described in steps 808 to 811 and are not described here again.
Step 909, the MCU1 sends F̄1 and MaxRTT1 to the MCU2, and the MCU2 receives F̄1 and MaxRTT1 sent by the MCU1.
Specifically, the MCU1 may carry the above parameters in the extension information of the RTCP packet, an example of which is shown in table 4.
TABLE 4 (provided as an image in the original; it shows the RTCP extension fields carrying the average video quality attenuation factor and the maximum transmission delay sent by the MCU1)
Step 910, the MCU2 calculates the average video quality attenuation factor from each sending end device (including terminal A1, terminal A2, terminal A3, terminal A4, terminal B1, terminal B2, and terminal B3) to the MCU2.
Specifically, the MCU2 calculates the video quality attenuation factor of each uplink from each directly connected sending end device (including terminal B1, terminal B2, and terminal B3) to the MCU2, denoted F_B1, F_B2, and F_B3 respectively, and calculates the video quality attenuation factor of the upper-to-lower uplink from the MCU1 to the MCU2, denoted F_MCU1.
The step in which the MCU2 calculates the video quality attenuation factors of the uplinks from terminal B1, terminal B2, and terminal B3 to the MCU2 may be performed before, after, or in parallel with steps 908 and 909.
The MCU2 calculates the video quality attenuation factor from each of terminal A1, terminal A2, terminal A3, and terminal A4 to the MCU2, denoted F'_MCU1, where F'_MCU1 = F̄1 × F_MCU1 and F̄1 is the average video quality attenuation factor of the upper-level uplinks received from the MCU1.
the average video quality attenuation factor between each originating device to the MCU2 is noted
Figure BDA0001242803460000133
Then
Figure BDA0001242803460000134
Step 911, the MCU2 calculates the maximum value of the transmission delays from each sending end device (including terminal A1, terminal A2, terminal A3, terminal A4, terminal B1, terminal B2, and terminal B3) to the MCU2.
Specifically, the MCU2 calculates the transmission delays from the MCU1, terminal B1, terminal B2, and terminal B3 to the MCU2, denoted RTT_MCU1, RTT_B1, RTT_B2, and RTT_B3 respectively. It should be noted that this step may be performed before, after, or in parallel with steps 908 to 910.
The MCU2 calculates the transmission delay from terminal A1, terminal A2, terminal A3, and terminal A4 to the MCU2, denoted RTT'_MCU1, where RTT'_MCU1 = RTT_MCU1 + MaxRTT1.
The maximum value of the transmission delays from each sending end device to the MCU2 is denoted MaxRTT2; then MaxRTT2 = MAX(RTT'_MCU1, RTT_B1, RTT_B2, RTT_B3), where MAX denotes taking the maximum value.
Steps 910 and 911 may be performed in either order.
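Assuming attenuation factors multiply across cascaded hops and transmission delays add, and that the seven-uplink average weights terminals A1 to A4 by their common MCU1-path factor, the MCU2-side aggregation of steps 910 and 911 can be sketched as follows (names are illustrative):

```python
# Illustrative sketch of steps 910-911 at the lower-level MCU (MCU2).
# Assumptions: F'_MCU1 = F_MCU1 * F1_avg (factors multiply across hops),
# RTT'_MCU1 = RTT_MCU1 + MaxRTT1 (delays add), and the average over the
# seven originating terminals counts F'_MCU1 once per A-side terminal.

def mcu2_avg_attenuation(f_mcu1: float, f1_avg: float, f_b: list) -> float:
    """Average attenuation over terminals A1-A4 (via MCU1) and B1-B3."""
    f_mcu1_prime = f_mcu1 * f1_avg            # F'_MCU1 for A1-A4
    factors = [f_mcu1_prime] * 4 + list(f_b)  # A1-A4, then B1-B3
    return sum(factors) / len(factors)

def mcu2_max_rtt(rtt_mcu1: float, max_rtt1: float, rtt_b: list) -> float:
    """MaxRTT2 = MAX(RTT'_MCU1, RTT_B1, RTT_B2, RTT_B3)."""
    return max([rtt_mcu1 + max_rtt1] + list(rtt_b))
```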
Step 912, the MCU2 sends the average video quality attenuation factor F̄2 and the maximum transmission delay MaxRTT2 to the receiving end device (i.e., terminal B4), and the receiving end device receives F̄2 and MaxRTT2 sent by the MCU2.
Specifically, the MCU2 may carry the above parameters in the extension information of the RTCP packet, an example of which is shown in table 5.
TABLE 5 (provided as an image in the original; it shows the RTCP extension fields carrying the average video quality attenuation factor and the maximum transmission delay sent by the MCU2)
Step 913, after receiving the average video quality attenuation factor F̄2 and the maximum transmission delay MaxRTT2 sent by the MCU2, the receiving end device evaluates the video experience quality from the sending end device to the receiving end device.
It should be noted that scenario one, scenario two, and scenario three are merely examples given for convenience of explaining the method of this embodiment of the application; the method may be applied to various similar application scenarios, for example, an MCU cascade scenario having at least two upper-level MCUs.
Based on the same inventive concept as the method for evaluating video quality of experience shown in fig. 2, as shown in fig. 10, an embodiment of the present application further provides an apparatus 1000 for evaluating video quality of experience, where the apparatus 1000 for evaluating video quality of experience is configured to perform the method for evaluating video quality of experience shown in fig. 2. The apparatus 1000 for evaluating video experience quality comprises:
a receiving unit 1001, configured to receive an uplink video damage parameter sent by a control network element, where the uplink video damage parameter is used to characterize a quality damage degree of uplink video data sent to the control network element by a sending-end device;
a processing unit 1002, configured to evaluate the quality of experience of the video from the originating device to the apparatus based on the uplink video impairment parameter received by the receiving unit 1001 and a downlink video impairment parameter, where the downlink video impairment parameter is used to characterize a quality impairment degree of downlink video data sent by the control network element to the apparatus.
Optionally, the uplink video impairment parameter includes a video quality attenuation factor and an uplink transmission delay, and the uplink is a transmission link from the originating device to the control network element; the downlink video damage parameters comprise video source quality, interactive experience quality and transmission delay of a downlink, and the downlink is a transmission link between a control network element and a device;
the processing unit 1002 is specifically configured to:
determining a video source quality from the originating device to the apparatus in combination with the video source quality of the downlink and the video quality degradation factor;
determining the transmission delay from the transmitting end equipment to the device by combining the transmission delay of the uplink and the transmission delay of the downlink, and determining the viewing experience quality according to the determined transmission delay from the transmitting end equipment to the device;
determining a quality of experience of the video from the originating device to the device based on the determined quality of the video source, the determined quality of experience of the interaction, and the determined quality of experience of the viewing from the originating device to the device.
The processing unit 1002 is configured to determine the quality of a video source from the originating device to the apparatus, and specifically includes:
the video source quality of the downlink is multiplied by a video quality attenuation factor, and the resulting product is determined as the video source quality from the originating device to the apparatus.
Optionally, the processing unit 1002 is configured to determine a transmission delay from the originating device to the apparatus, and specifically includes:
the transmission delay of the uplink and the transmission delay of the downlink are added, and the resulting sum is determined as the transmission delay from the originating device to the apparatus.
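The two combination steps above (multiply the quality factors, add the link delays) can be sketched as follows; the final mapping of the total delay to a viewing-experience score is not shown, and all names are illustrative:

```python
def end_to_end_params(downlink_quality, attenuation_factor,
                      uplink_delay_ms, downlink_delay_ms):
    """Combine uplink and downlink impairment parameters at the receiving end.

    downlink_quality   -- video source quality of the downlink
    attenuation_factor -- uplink video quality attenuation factor F
    Returns (end-to-end video source quality, end-to-end transmission delay).
    """
    # Video source quality from originating device to apparatus: product
    source_quality = downlink_quality * attenuation_factor
    # Transmission delay from originating device to apparatus: sum
    total_delay = uplink_delay_ms + downlink_delay_ms
    return source_quality, total_delay

quality, delay = end_to_end_params(4.2, 0.9, 60, 40)
```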
Optionally, the originating device includes a first originating device and a second originating device, the uplink video data includes a first uplink video data and a second uplink video data, the control network element is a lower control network element in a cascade mode, and the lower control network element is connected to the upper control network element;
receiving unit 1001 is specifically configured to:
receiving an uplink video damage parameter sent by a lower control network element, wherein the uplink video damage parameter is determined by a first uplink video damage parameter and a second uplink video damage parameter, the first uplink video damage parameter is used for representing the quality damage degree of first uplink video data sent to the lower control network element by a first sending end device, and the second uplink video damage parameter is used for representing the quality damage degree of second uplink video data sent to the lower control network element by a second sending end device through a higher control network element.
Based on the same inventive concept as the method for evaluating video quality of experience shown in fig. 2, as shown in fig. 11, an embodiment of the present application further provides an apparatus 1100 for evaluating video quality of experience, where the apparatus 1100 for evaluating video quality of experience is configured to perform the method for evaluating video quality of experience shown in fig. 2, where the apparatus 1100 for evaluating video quality of experience includes:
a receiving unit 1101, configured to receive uplink video data sent by a sending end device;
a processing unit 1102, configured to obtain an uplink video damage parameter, where the uplink video damage parameter is used to represent a quality damage degree of uplink video data;
a sending unit 1103, configured to send the uplink video impairment parameter obtained by the processing unit 1102 to a receiving end device, where the uplink video impairment parameter is used by the receiving end device to evaluate the quality of experience of the video from the sending end device to the receiving end device.
Optionally, the uplink video impairment parameter includes a video quality attenuation factor and an uplink transmission delay, and the uplink is a transmission link from the originating device to the apparatus;
the processing unit 1102 is specifically configured to:
obtaining the video source quality sQuality_c of the uplink and the resolution of the uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, where the reference video source quality characterizes the quality of the video source data of the originating device when the video source data is not impaired by video coding;
calculating the video quality attenuation factor F according to the following formula:
F = sQuality_c / sQuality_o;
and acquiring the transmission delay of the uplink according to the uplink video data.
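The computation of F can be sketched as follows; the resolution-to-reference-quality table is invented for the example, since the patent does not publish concrete sQuality_o values:

```python
# Assumed sQuality_o per resolution: the quality of the originating device's
# video source data when it is not impaired by video coding (example values).
REFERENCE_QUALITY = {
    (1280, 720): 4.5,
    (1920, 1080): 4.7,
}

def attenuation_factor(measured_quality, resolution):
    """Compute F = sQuality_c / sQuality_o for one uplink.

    measured_quality -- sQuality_c, the measured uplink video source quality
    resolution       -- (width, height) of the uplink video data
    """
    reference = REFERENCE_QUALITY[resolution]  # sQuality_o for this resolution
    return measured_quality / reference

f = attenuation_factor(4.0, (1920, 1080))  # < 1.0 when coding impaired quality
```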
Optionally, if the number of originating devices is one, the uplink video impairment parameter includes a video quality attenuation factor of the originating device and the transmission delay of the uplink between the originating device and the apparatus; and
The processing unit 1102 is specifically configured to:
obtaining, based on the uplink video data sent by the one originating device, the video source quality sQuality_c of the uplink between the originating device and the apparatus, and obtaining the resolution of the uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, where the reference video source quality characterizes the quality of the video source data of the originating device when the video source data is not impaired by video coding;
calculating the video quality attenuation factor F according to the following formula:
F = sQuality_c / sQuality_o;
and acquiring the transmission delay of the uplink between the originating device and the apparatus according to the uplink video data.
optionally, if the number of the originating devices is at least two, the uplink video impairment parameter includes: the average value of the video quality attenuation factors of at least two paths of uplink video data sent by at least two sending end devices and the maximum value of the transmission time delay of at least two paths of uplink between the at least two sending end devices and the device;
the processing unit 1102 is specifically configured to:
respectively acquiring, according to the at least two paths of uplink video data sent by the at least two originating devices, the video source quality sQuality_c of each of the at least two uplinks between the at least two originating devices and the apparatus, and respectively acquiring the resolution of the at least two paths of uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of each path of uplink video data;
calculating the video quality attenuation factor F of each path of uplink video data according to the following formula:
F = sQuality_c / sQuality_o;
obtaining at least two values of F and calculating their average value;
and respectively acquiring the transmission delays of the at least two uplinks according to the uplink video data, and determining the maximum value of the transmission delays of the at least two uplinks.
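The multi-source aggregation above (mean of the per-path attenuation factors, maximum of the per-path uplink delays) can be sketched as:

```python
def aggregate_uplink_impairment(factors, delays_ms):
    """Aggregate per-path uplink impairment parameters at the control network element.

    factors   -- video quality attenuation factor F of each uplink path
    delays_ms -- transmission delay of each uplink path, in milliseconds
    Returns (average F, maximum uplink delay).
    """
    avg_factor = sum(factors) / len(factors)  # mean of the per-path F values
    max_delay = max(delays_ms)                # worst-case uplink delay
    return avg_factor, max_delay

# Three originating devices with individual F values and uplink delays:
avg_f, worst_delay = aggregate_uplink_impairment([0.8, 0.9, 1.0], [40, 75, 55])
```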
Optionally, the originating device includes a first originating device and a second originating device, the uplink video data includes a first uplink video data and a second uplink video data, the apparatus is a lower level control network element in a cascade mode, and the lower level control network element is connected to the upper level control network element;
the receiving unit 1101 is specifically configured to: receiving first uplink video data sent by first sending end equipment;
the sending unit 1103 is specifically configured to: receiving second uplink video data sent by second sending end equipment through a superior control network element;
the processing unit 1102 is specifically configured to: acquiring a first uplink video damage parameter according to the first uplink video data, wherein the first uplink video damage parameter is used for representing the quality damage degree of the first uplink video data;
the receiving unit 1101 is specifically configured to: receiving a superior uplink video damage parameter sent by a superior control network element, wherein the superior uplink video damage parameter is used for representing the quality damage degree of superior uplink video data from a second transmitting terminal device to the superior control network element;
the processing unit 1102 is specifically configured to: calculating a second uplink video damage parameter based on the upper-level uplink video damage parameter and the upper-level and lower-level uplink video damage parameters, wherein the upper-level and lower-level uplink video damage parameters are used for representing the quality damage degree of the upper-level and lower-level uplink video data from the upper-level control network element to the lower-level control network element, and the second uplink video damage parameter is used for representing the quality damage degree of the second uplink video data; and determining the uplink video damage parameters according to the first uplink video damage parameters and the second uplink video damage parameters.
Based on the same inventive concept as the method for evaluating video quality of experience shown in fig. 2, as shown in fig. 12, an embodiment of the present application further provides an apparatus 1200 for evaluating video quality of experience, where the apparatus 1200 can be used to execute the method shown in fig. 3. The apparatus 1200 includes a transceiver 1201, a processor 1202, a memory 1203 and a bus 1204, where the processor 1202 and the memory 1203 are connected via the bus 1204; the processor 1202 is configured to execute code in the memory 1203, and when the code is executed, the processor 1202 performs the method for evaluating video quality of experience shown in fig. 2.
The processor 1202 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 1202 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The aforementioned PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory 1203 may include a volatile memory, such as a random-access memory (RAM); the memory 1203 may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 1203 may also include a combination of the above types of memories.
Based on the same inventive concept as the method for evaluating video quality of experience shown in fig. 2, as shown in fig. 13, an embodiment of the present application further provides an apparatus 1300 for evaluating video quality of experience, where the apparatus 1300 for evaluating video quality of experience can be used to perform the method shown in fig. 3. The apparatus 1300 for evaluating video quality of experience includes a transceiver 1301, a processor 1302, a memory 1303 and a bus 1304, where the processor 1302 and the memory 1303 are connected via the bus 1304, and the processor 1302 is configured to execute codes in the memory 1303, and when the codes are executed, the execution causes the processor 1302 to execute the method for evaluating video quality of experience shown in fig. 2.
The processor 1302 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 1302 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The aforementioned PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory 1303 may include a volatile memory, such as a random-access memory (RAM); the memory 1303 may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 1303 may also include a combination of the above types of memories.
It should be noted that the apparatuses provided in fig. 10 and fig. 11 can be used to implement the method for evaluating the quality of experience of video shown in fig. 2. In a specific implementation, the processing unit 1002 in fig. 10 may be implemented by the processor 1202 in fig. 12, and the receiving unit 1001 may be implemented by the transceiver 1201 in fig. 12. The processing unit 1102 in fig. 11 may be implemented by the processor 1302 in fig. 13, and the receiving unit 1101 and the sending unit 1103 may be implemented by the transceiver 1301 in fig. 13.
In the system 100 for evaluating quality of experience of a video provided in fig. 1 in the embodiment of the present application, the receiving device 103 may be a device provided in the embodiment corresponding to fig. 10 and fig. 12. The controlling network element 101 may be the device provided in the embodiments corresponding to fig. 11 and fig. 13. The system for evaluating the quality of experience of the video is used for executing the method of the corresponding embodiment of the figure 2.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (16)

1. A method for assessing video quality of experience, comprising:
receiving an uplink video damage parameter sent by a control network element by a receiving end device, wherein the uplink video damage parameter is used for representing the quality damage degree of uplink video data sent to the control network element by a sending end device;
the receiving end device evaluates the video experience quality from the sending end device to the receiving end device based on the uplink video damage parameter and the downlink video damage parameter, wherein the downlink video damage parameter is used for representing the quality damage degree of downlink video data sent to the receiving end device by the control network element;
the uplink video damage parameter comprises a video quality attenuation factor and the transmission delay of an uplink, wherein the uplink is a transmission link between the transmitting terminal equipment and the control network element; the downlink video impairment parameter comprises video source quality, interactive experience quality and transmission delay of a downlink, and the downlink is a transmission link between the control network element and the receiving end equipment;
the receiving end device evaluates the quality of experience of the video from the sending end device to the receiving end device based on the uplink video damage parameter and the downlink video damage parameter, and comprises the following steps:
the receiving end device determines the video source quality from the transmitting end device to the receiving end device by combining the video source quality of the downlink with the video quality attenuation factor;
the receiving end device determines the transmission delay from the transmitting end device to the receiving end device by combining the transmission delay of the uplink and the transmission delay of the downlink, and determines the viewing experience quality according to the determined transmission delay from the transmitting end device to the receiving end device;
and the receiving end equipment determines the video experience quality from the transmitting end equipment to the receiving end equipment according to the determined video source quality from the transmitting end equipment to the receiving end equipment, the interaction experience quality and the watching experience quality.
2. The method of claim 1, wherein the video quality degradation factor conforms to the following equation:
F = sQuality_c / sQuality_o
wherein F is the video quality attenuation factor, sQuality_c is the video source quality of the uplink, and sQuality_o is the reference video source quality of the video source data of the originating device when the video source data is not impaired by video coding;
the receiving device determining the video source quality from the originating device to the receiving device in combination with the video source quality of the downlink and the video quality degradation factor, comprising:
the sink device multiplies the video source quality of the downlink by the video quality attenuation factor, and determines the resulting product as the video source quality from the originating device to the sink device.
3. The method of claim 1 or 2, wherein the determining, by the sink device, the transmission delay from the originating device to the sink device in conjunction with the transmission delay of the uplink and the transmission delay of the downlink comprises:
and the receiving end equipment adds the transmission delay of the uplink and the transmission delay of the downlink, and determines the sum as the transmission delay from the transmitting end equipment to the receiving end equipment.
4. The method of claim 1 or 2, wherein the originating device comprises a first originating device and a second originating device, the upstream video data comprises a first upstream video data and a second upstream video data, the controlling network element is a subordinate controlling network element in a cascaded mode, and the subordinate controlling network element is connected to a superior controlling network element;
the receiving end equipment receives the uplink video damage parameters sent by the control network element, and the method comprises the following steps:
the receiving end equipment receives an uplink video damage parameter sent by the lower control network element, wherein the uplink video damage parameter is determined by a first uplink video damage parameter and a second uplink video damage parameter; the first uplink video impairment parameter is used to represent a quality impairment degree of first uplink video data sent by the first sending end device to the subordinate control network element, and the second uplink video impairment parameter is used to represent a quality impairment degree of second uplink video data sent by the second sending end device to the subordinate control network element via the superior control network element.
5. A method for assessing video quality of experience, comprising:
the method comprises the steps that a control network element receives uplink video data sent by a sending end device and obtains an uplink video damage parameter, wherein the uplink video damage parameter is used for representing the quality damage degree of the uplink video data;
the control network element sends the uplink video damage parameter to a receiving end device, and the uplink video damage parameter is used for the receiving end device to evaluate the video experience quality from the sending end device to the receiving end device;
the uplink video damage parameter comprises a video quality attenuation factor and the transmission delay of an uplink, wherein the uplink is a transmission link between the transmitting terminal equipment and the control network element;
the method for acquiring the uplink video damage parameter by the control network element includes:
the control network element obtains the video source quality sQuality_c of the uplink and the resolution of the uplink video data;
the control network element determines, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, the reference video source quality characterizing the quality of the video source data of the originating device without video coding impairment;
the control network element calculates the video quality attenuation factor F according to the following formula:
F = sQuality_c / sQuality_o;
and the control network element acquires the transmission delay of the uplink according to the uplink video data.
6. The method of claim 5, wherein the method further comprises:
if the number of the transmitting end equipment is one, the uplink video damage parameter comprises a video quality attenuation factor of the transmitting end equipment and the transmission delay of an uplink between the transmitting end equipment and the control network element; and is
The method for acquiring the uplink video damage parameter by the control network element includes:
the control network element obtains, according to the uplink video data sent by the originating device, the video source quality sQuality_c of the uplink between the originating device and the control network element, and obtains the resolution of the uplink video data;
the control network element determines, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, the reference video source quality characterizing the quality of the video source data of the originating device without video coding impairment;
the control network element calculates the video quality attenuation factor F according to the following formula:
F = sQuality_c / sQuality_o;
and the control network element acquires the transmission delay of an uplink between the originating device and the control network element according to the uplink video data.
7. The method of claim 5, wherein the method further comprises:
if the number of the originating devices is at least two, the uplink video impairment parameters include: the average value of the video quality attenuation factors of the at least two paths of uplink video data sent by the at least two sending end devices and the maximum value of the transmission time delay of the at least two paths of uplink between the at least two sending end devices and the control network element;
the method for acquiring the uplink video damage parameter by the control network element includes:
the control network element respectively acquires, according to the at least two paths of uplink video data sent by the at least two originating devices, the video source quality sQuality_c of each of the at least two uplinks between the at least two originating devices and the control network element, and respectively acquires the resolution of the at least two paths of uplink video data;
the control network element determines, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of each path of uplink video data;
the control network element calculates the video quality attenuation factor F of each path of uplink video data according to the following formula:
F = sQuality_c / sQuality_o;
obtains at least two values of F, and calculates the average value of the at least two values of F;
And the control network element respectively acquires the transmission time delays of the at least two paths of uplink links according to the uplink video data and determines the maximum value of the transmission time delays of the at least two paths of uplink links.
8. The method of claim 5 or 7, wherein the originating device comprises a first originating device and a second originating device, the upstream video data comprises a first upstream video data and a second upstream video data, the controlling network element is a subordinate controlling network element in a cascaded mode, and the subordinate controlling network element is connected to a superior controlling network element;
the control network element receives uplink video data sent by the sending end device, and the receiving comprises:
the lower control network element receives the first uplink video data sent by the first sender equipment; and
the lower control network element receives the second uplink video data sent by the second sending end device through the upper control network element;
correspondingly, the acquiring, by the control network element, the uplink video damage parameter includes:
the lower control network element acquires a first uplink video damage parameter according to first uplink video data, wherein the first uplink video damage parameter is used for representing the quality damage degree of the first uplink video data; and
the lower control network element receives an upper uplink video damage parameter sent by the upper control network element, wherein the upper uplink video damage parameter is used for representing the quality damage degree of upper uplink video data from the second transmitting terminal equipment to the upper control network element; the lower control network element calculates the second uplink video damage parameter based on the upper uplink video damage parameter and the upper and lower uplink video damage parameters, wherein the upper and lower uplink video damage parameters are used for representing the quality damage degree of the upper and lower uplink video data from the upper control network element to the lower control network element, and the second uplink video damage parameter is used for representing the quality damage degree of the second uplink video data;
and the lower control network element determines the uplink video damage parameter according to the first uplink video damage parameter and the second uplink video damage parameter.
9. An apparatus for assessing quality of experience of a video, comprising:
the receiving unit is used for receiving an uplink video damage parameter sent by a control network element, wherein the uplink video damage parameter is used for representing the quality damage degree of uplink video data sent to the control network element by a sending-end device;
a processing unit, configured to evaluate quality of experience of the video from the originating device to the apparatus based on the uplink video impairment parameter and the downlink video impairment parameter received by the receiving unit, where the downlink video impairment parameter is used to represent a quality impairment degree of downlink video data sent to the apparatus by the control network element;
the uplink video damage parameter comprises a video quality attenuation factor and the transmission delay of an uplink, wherein the uplink is a transmission link between the transmitting terminal equipment and the control network element; the downlink video impairment parameter comprises video source quality, interactive experience quality and transmission delay of a downlink, and the downlink is a transmission link between the control network element and the device;
the processing unit is specifically configured to:
determining a video source quality from the originating device to the apparatus in combination with the video source quality of the downlink and the video quality degradation factor;
determining a transmission delay from the originating device to the apparatus in conjunction with the transmission delay of the uplink and the transmission delay of the downlink, and determining a quality of viewing experience based on the determined transmission delay from the originating device to the apparatus;
determining a quality of video experience from the originating device to the apparatus based on the determined quality of video source from the originating device to the apparatus, the quality of interactive experience, and the quality of viewing experience.
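The evaluation flow of claim 9 can be sketched as follows. This is a minimal illustration, not the patented implementation: the claim does not fix the mapping from delay to viewing experience quality or the final combination function, so the linear delay penalty and the minimum-based combination below are both assumptions.

```python
def viewing_experience_quality(delay_ms, max_score=5.0, penalty_per_100ms=0.2):
    # Hypothetical mapping: longer end-to-end delay lowers the viewing score.
    # The claim only states that viewing experience quality is determined
    # from the end-to-end transmission delay.
    return max(1.0, max_score - penalty_per_100ms * delay_ms / 100.0)

def evaluate_qoe(source_quality, interactive_quality, viewing_quality):
    # Hypothetical combination of the three components named in claim 9;
    # taking the minimum (QoE bounded by its worst component) is an
    # illustrative choice, not one stated by the claim.
    return min(source_quality, interactive_quality, viewing_quality)
```

With a 500 ms end-to-end delay the illustrative viewing score is 4.0, and the overall QoE is then capped by the weakest of the three components.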
10. The apparatus of claim 9, wherein the video quality attenuation factor conforms to the following formula:
F = sQuality_c / sQuality_o
wherein F is the video quality attenuation factor, sQuality_c is the video source quality of the uplink, and sQuality_o is the reference video source quality of the video source data of the originating device when not damaged by video coding;
the processing unit is configured to determine a quality of a video source from the originating device to the apparatus, and specifically includes:
multiplying the video source quality of the downlink by the video quality attenuation factor, determining the resulting product as the video source quality from the originating device to the apparatus.
11. The apparatus according to claim 9 or 10, wherein the processing unit is configured to determine a transmission delay from the originating device to the apparatus, and specifically includes:
adding the transmission delay of the uplink to the transmission delay of the downlink, and determining the sum as the transmission delay from the originating device to the apparatus.
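Taken together, claims 10 and 11 reduce to simple arithmetic. The sketch below assumes quality scores on a common scale (e.g. MOS-like) and delays in milliseconds; the function names are illustrative.

```python
def quality_attenuation_factor(squality_c, squality_o):
    # Claim 10: F = sQuality_c / sQuality_o, the ratio of the uplink video
    # source quality to the coding-damage-free reference quality.
    return squality_c / squality_o

def end_to_end_source_quality(downlink_source_quality, f):
    # Claim 10: multiply the downlink video source quality by F.
    return downlink_source_quality * f

def end_to_end_delay_ms(uplink_delay_ms, downlink_delay_ms):
    # Claim 11: add the uplink and downlink transmission delays.
    return uplink_delay_ms + downlink_delay_ms
```

For example, an uplink quality of 4.0 against a reference of 5.0 gives F = 0.8, which scales a downlink source quality of 4.5 down to 3.6.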
12. The apparatus according to claim 9 or 10, wherein the originating device comprises a first originating device and a second originating device, the uplink video data comprises first uplink video data and second uplink video data, the control network element is a subordinate control network element in a tandem mode, and the subordinate control network element is connected to a superior control network element;
the receiving unit is specifically configured to:
receiving an uplink video damage parameter sent by the subordinate control network element, wherein the uplink video damage parameter is determined from a first uplink video damage parameter and a second uplink video damage parameter, the first uplink video damage parameter being used for representing the quality damage degree of the first uplink video data sent by the first originating device to the subordinate control network element, and the second uplink video damage parameter being used for representing the quality damage degree of the second uplink video data sent by the second originating device to the subordinate control network element via the superior control network element.
13. An apparatus for assessing quality of experience of a video, comprising:
a receiving unit, configured to receive uplink video data sent by an originating device;
a processing unit, configured to obtain an uplink video damage parameter, wherein the uplink video damage parameter is used for representing the quality damage degree of the uplink video data;
a sending unit, configured to send the uplink video damage parameter obtained by the processing unit to a receiving end device, so that the receiving end device evaluates the video experience quality from the originating device to the receiving end device;
wherein the uplink video impairment parameters comprise a video quality attenuation factor and the transmission delay of an uplink, the uplink being the transmission link from the originating device to the apparatus;
the processing unit is specifically configured to:
obtaining the video source quality sQuality_c of the uplink and the resolution of the uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, wherein the reference video source quality characterizes the quality of the video source data of the originating device without video coding damage;
the video quality attenuation factor F is calculated according to the following formula:
F = sQuality_c / sQuality_o
and acquiring the transmission delay of an uplink according to the uplink video data.
14. The apparatus of claim 13,
if there is one originating device, the uplink video damage parameter comprises the video quality attenuation factor of the originating device and the transmission delay of the uplink between the originating device and the apparatus; and
The processing unit is specifically configured to:
obtaining the video source quality sQuality_c of the uplink between the originating device and the apparatus according to the uplink video data sent by the originating device, and obtaining the resolution of the uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of the uplink video data, wherein the reference video source quality characterizes the quality of the video source data of the originating device without video coding damage;
the video quality attenuation factor F is calculated according to the following formula:
F = sQuality_c / sQuality_o
and acquiring the transmission delay of the uplink between the originating device and the apparatus according to the uplink video data.
15. The apparatus of claim 13,
if there are at least two originating devices, the uplink video impairment parameters comprise: the average value of the video quality attenuation factors of the at least two channels of uplink video data sent by the at least two originating devices, and the maximum value of the transmission delays of the at least two uplinks between the at least two originating devices and the apparatus;
the processing unit is specifically configured to:
respectively obtaining the video source quality sQuality_c of each of the at least two uplinks between the at least two originating devices and the apparatus according to the at least two channels of uplink video data sent by the at least two originating devices, and respectively obtaining the resolution of the at least two channels of uplink video data;
determining, according to the correspondence between resolution and reference video source quality, the reference video source quality sQuality_o corresponding to the resolution of each channel of uplink video data;
Calculating the video quality attenuation factor F of each path of uplink video data according to the following formula:
F = sQuality_c / sQuality_o
obtaining at least two values of F, and calculating the average value of the at least two values of F:
F_avg = (F_1 + F_2 + ... + F_n) / n
and respectively acquiring the transmission delays of the at least two uplinks according to the uplink video data, and determining the maximum value of these transmission delays.
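The aggregation rule of claim 15 (mean attenuation factor, maximum uplink delay) can be sketched as follows; the function name is illustrative.

```python
def aggregate_uplink_damage(f_values, uplink_delays_ms):
    # Claim 15: with at least two originating devices, report the average
    # of the per-path video quality attenuation factors and the maximum of
    # the per-path uplink transmission delays.
    f_avg = sum(f_values) / len(f_values)
    return f_avg, max(uplink_delays_ms)
```

For two senders with factors 0.8 and 0.6 and delays of 100 ms and 150 ms, the reported parameter pair is (0.7, 150).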
16. The apparatus according to claim 13 or 15, wherein the originating device comprises a first originating device and a second originating device, the uplink video data comprises first uplink video data and second uplink video data, the apparatus is a subordinate control network element in a tandem mode, and the subordinate control network element is connected to a superior control network element;
the receiving unit is specifically configured to: receiving the first uplink video data sent by the first sending end equipment; receiving a superior uplink video damage parameter sent by the superior control network element, wherein the superior uplink video damage parameter is used for representing the quality damage degree from the second sending end device to superior uplink video data of the superior control network element;
the sending unit is specifically configured to: receiving, by the superior control network element, the second uplink video data sent by the second originating device;
the processing unit is specifically configured to: obtain a first uplink video damage parameter according to the first uplink video data, wherein the first uplink video damage parameter is used for representing the quality damage degree of the first uplink video data; calculate a second uplink video damage parameter based on the superior uplink video damage parameter and a superior-to-subordinate uplink video damage parameter, wherein the superior-to-subordinate uplink video damage parameter is used for representing the quality damage degree of uplink video data transmitted from the superior control network element to the subordinate control network element, and the second uplink video damage parameter is used for representing the quality damage degree of the second uplink video data; and determine the uplink video damage parameter according to the first uplink video damage parameter and the second uplink video damage parameter.
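In the tandem mode of claim 16, the subordinate network element combines the superior uplink damage parameter with the damage introduced on the superior-to-subordinate link, then merges the two paths. The claims do not fix either combination rule; the sketch below assumes attenuation factors multiply along the cascade and delays add, with the two paths then aggregated in the style of claim 15 (mean factor, maximum delay). Both assumptions are illustrative.

```python
def cascade_second_damage(superior_f, superior_delay_ms, link_f, link_delay_ms):
    # Assumption: attenuation factors multiply and transmission delays add
    # along the superior-to-subordinate cascade.
    return superior_f * link_f, superior_delay_ms + link_delay_ms

def combine_uplink_damage(first, second):
    # first/second are (attenuation factor, delay_ms) pairs for the two
    # originating devices; aggregated in the claim-15 style (assumption).
    (f1, d1), (f2, d2) = first, second
    return (f1 + f2) / 2.0, max(d1, d2)
```

For example, a superior-path factor of 0.9 crossing a link with factor 0.8 yields a second-path factor of 0.72, which is then averaged with the first path's factor.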
CN201710141023.8A 2017-03-10 2017-03-10 Method and device for evaluating video experience quality Active CN108574840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710141023.8A CN108574840B (en) 2017-03-10 2017-03-10 Method and device for evaluating video experience quality


Publications (2)

Publication Number Publication Date
CN108574840A CN108574840A (en) 2018-09-25
CN108574840B true CN108574840B (en) 2020-03-10

Family

ID=63577333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710141023.8A Active CN108574840B (en) 2017-03-10 2017-03-10 Method and device for evaluating video experience quality

Country Status (1)

Country Link
CN (1) CN108574840B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971891B (en) * 2018-09-30 2024-08-20 北京奇虎科技有限公司 Video quality assessment method and device and electronic equipment
CN112543319A (en) * 2019-09-20 2021-03-23 中国移动通信有限公司研究院 Video call quality testing method, device and computer readable storage medium
CN111669636B (en) * 2020-06-19 2022-02-25 海信视像科技股份有限公司 Audio-video synchronous video recording method and display equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US9635315B2 (en) * 2006-08-07 2017-04-25 Oovoo Llc Video conferencing over IP networks
CN102075781A (en) * 2009-11-20 2011-05-25 上海倍亚得信息技术有限公司 System and method for testing user perception effect of video telephony and streaming media
WO2011144181A2 (en) * 2011-06-09 2011-11-24 华为技术有限公司 Method and network node for video quality evaluation
EP2772010B1 (en) * 2011-10-24 2018-09-05 T-Mobile USA, Inc. Optimizing video-call quality of service
CN103118238B (en) * 2011-11-17 2016-03-16 中国电信股份有限公司 The control method of video conference and video conferencing system
US20140016487A1 (en) * 2012-07-13 2014-01-16 Anritsu Company Test system to estimate the uplink or downlink quality of multiple user devices using a mean opinion score (mos)
CN104685870B (en) * 2012-07-24 2019-10-18 统一有限责任两合公司 Method, apparatus and system for test video quality
CN104753812B (en) * 2013-12-30 2019-12-10 台湾积体电路制造股份有限公司 Application quality management in a communication system
CN103888846B (en) * 2014-03-04 2017-02-08 浙江大学 Wireless video streaming service self-adaption rate control method based on QoE

Non-Patent Citations (2)

Title
Temporal Video Quality Model Accounting for Variable Frame Delay Distortions; Margaret H. Pinson, et al.; IEEE Transactions on Broadcasting; 2014-11-12; Vol. 60, No. 4; entire document *
Adaptive Adjustment Method and Implementation of Real-Time Streaming Media Transmission Based on Java; Wang Xinning, et al.; Computer Knowledge and Technology; 2014-01-15; entire document *


Similar Documents

Publication Publication Date Title
CN102571526B (en) Regulating method of conference site bandwidth, device, conference terminal and media control server
CN103733680B (en) Method and apparatus for controlling wireless uplink session
US6453336B1 (en) Video conferencing with adaptive client-controlled resource utilization
CN111147893B (en) Video self-adaption method, related equipment and storage medium
US9948889B2 (en) Priority of uplink streams in video switching
US20110205332A1 (en) Heterogeneous video conferencing
US20120290668A1 (en) Multi-participant conference setup
EP2987273B1 (en) Dynamic allocation of encoders
US9743043B2 (en) Method and system for handling content in videoconferencing
Santos et al. A novel QoE-aware multicast mechanism for video communications over IEEE 802.11 WLANs
CN108574840B (en) Method and device for evaluating video experience quality
WO2020134761A1 (en) Video conference implementation method and device, video conference system, and storage medium
US10645128B2 (en) Media session processing method, related device, and communications system
CN115209189A (en) Video stream transmission method, system, server and storage medium
CN112055165B (en) Video communication method, device, system, server, client and storage medium
WO2016101623A1 (en) Remote interaction method and device in multipoint audio and video communication
CN103152545B (en) A kind of method, video server and video conferencing system processing error correction request
CN111404908B (en) Data interaction method and device, electronic equipment and readable storage medium
EP4184885B1 (en) Method, system and computer program product for determining congestion of a communication link transmitting a media stream over the communication link
Sorokin et al. Impact of CPU load on video conferencing quality
Yokota et al. Quality and Transferred Data Based Video Bitrate Control Method for Web-Conferencing
Johanson Multimedia communication, collaboration and conferencing using Alkit Confero
HUSSEIN PERFORMANCE ANALYSIS OF VIDEO CODEC (H.263+, H.264) FOR VIDEOCONFERENCING OVER WIRELESS LOCAL AREA NETWORK (WLAN)
KR20090053175A (en) Apaptive telecollaboration service system and method using variable transmission rate
CN116436870A (en) Bandwidth adaptation method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant