WO2023045365A1 - Video quality assessment method and apparatus, electronic device, and storage medium - Google Patents

Video quality assessment method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023045365A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
videos
score
data
category
Prior art date
Application number
PCT/CN2022/093999
Other languages
English (en)
French (fr)
Inventor
陈俊江
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to EP22871426.7A priority Critical patent/EP4407984A1/en
Publication of WO2023045365A1 publication Critical patent/WO2023045365A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/004 Diagnosis, testing or measuring for television systems or their details for digital television systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/48 Matching video sequences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics

Definitions

  • the embodiments of the present application relate to the technical field of communications, and in particular, to a video quality evaluation method, device, electronic equipment, and storage medium.
  • 5G (fifth generation mobile communication technology) is a new generation of broadband mobile communication technology characterized by high data rates, low latency, and massive connectivity, and it will lead the world from the era of the mobile Internet into the era of the mobile Internet of Things.
  • With the commercialization and popularization of 5G, video services will also involve a variety of scenarios, such as high-definition video calls, autonomous driving, and telemedicine. In order to grasp the overall operating status of the video service system in a timely and accurate manner, an automated video quality assessment system is urgently needed to evaluate video quality and to propose improvement measures for weak links or technical defects, so that the operating quality of the video service system is continuously improved and the increasingly strong quality demands of users for video services are satisfied.
  • An embodiment of the present application provides a video quality assessment method, including: classifying each video in a video set; and inputting videos of different categories into different preset models, and using the preset models to obtain quality assessment results of the videos.
  • An embodiment of the present application also provides a video quality assessment apparatus, including: an acquisition module, configured to classify each video in a video set; and an evaluation module, configured to input videos of different categories into different preset models and use the preset models to obtain quality assessment results of the videos.
  • An embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above video quality assessment method.
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the above video quality assessment method.
  • FIG. 1 is a schematic flowchart of the video quality assessment method provided in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the principle of weighted sampling in the video quality assessment method provided in an embodiment of the present application;
  • FIG. 3 is another schematic flowchart of the video quality assessment method provided in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the principle of the metric mapping evaluation model in the video quality assessment method provided in an embodiment of the present application;
  • FIG. 5 is yet another schematic flowchart of the video quality assessment method provided in an embodiment of the present application;
  • FIG. 6 is an example diagram of video data collected by the video quality assessment method provided in an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the principle of the end-to-end evaluation model in the video quality assessment method provided in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of the principle of the video quality assessment method provided in an embodiment of the present application;
  • FIG. 9 is a schematic diagram of training, verification, and testing of the preset models in the video quality assessment method provided in an embodiment of the present application;
  • FIG. 10 is a schematic diagram of the module structure of the video quality assessment apparatus provided in an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of the electronic device provided in an embodiment of the present application.
  • In one embodiment, a video quality assessment method is provided: each video in a video set is classified, videos of different categories are input into different preset models, and the different preset models are used to obtain video quality assessment results. Because a preset model is used to evaluate video quality, the assessment can be automated, which is efficient and suitable for large-scale deployment; at the same time, by inputting different videos into different preset models and obtaining the quality assessment results from those models, videos of different categories in different scenarios obtain quality assessment results adapted to their categories, so that the video quality assessment is applicable to videos of various scenarios and accurate assessment results can be obtained for all of them.
  • It should be noted that the video quality assessment method provided in the embodiments of the present application may be executed by a server, where the server may be implemented by a single server or by a cluster composed of multiple servers.
  • S101: Classify each video in the video set.
  • In a specific example, classifying each video in the video set may be performed according to at least one kind of data among functional scenario, video length, number of concurrent accesses, access type, and network environment parameters, so that the category of each video is obtained. For example, by functional scenario, videos can be divided into categories such as conference, live streaming, or on-demand; by video length, they can be divided into categories such as long video and short video. The specific classification method can be chosen according to actual needs, and this embodiment of the present application does not specifically limit it; a purely illustrative sketch is given below.
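  • As an illustration only (not part of the original disclosure), the classification step could be sketched as a simple rule-based function. The field names (`scenario`, `duration_s`, `concurrent_users`, `network`) and the thresholds below are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class VideoMeta:
    """Assumed per-video metadata; field names are illustrative only."""
    scenario: str          # e.g. "meeting", "live", "on_demand"
    duration_s: float      # video length in seconds
    concurrent_users: int  # number of concurrent accesses
    network: str           # e.g. "5G", "4G", "weak"

def classify_video(meta: VideoMeta) -> dict:
    """Derive category labels from at least one kind of metadata,
    mirroring the classification dimensions named in the text."""
    length_class = "short" if meta.duration_s < 300 else "long"   # threshold is an assumption
    # Videos whose original reference cannot be obtained (e.g. weak network)
    # go to the first category; the rest go to the second category.
    model_class = "first_category" if meta.network == "weak" else "second_category"
    return {
        "scenario": meta.scenario,
        "length": length_class,
        "model_class": model_class,
    }

print(classify_video(VideoMeta("live", 95, 1200, "5G")))
```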
  • S102: Input videos of different categories into different preset models, and use the preset models to obtain video quality assessment results.
  • It should be understood that when different preset models are used, the obtained video quality assessment results are also different. In addition, when video data of the same video taken at different positions is input into the same preset model, the obtained assessment results may also differ; for example, inputting the video as transmitted on the video link may yield a different result from inputting the decoded video. There may be two or more preset models so that videos of different categories can be assessed, and the specific models may be determined by the classification method; this embodiment of the present application does not specifically limit the type and quantity of the preset models.
  • Since the number of videos in the video set may be massive, performing quality assessment on every video could exceed the computing power of the server while adding little value. In a specific example, before inputting videos of different categories into different preset models, the method further includes: adding labels and/or weights to each video in the video set, and extracting some of the videos in the video set with a weighted sampling algorithm according to the labels and/or weights; inputting videos of different categories into different preset models then means inputting the videos of different categories among the extracted videos into the different preset models. A schematic diagram of the weighted sampling principle is shown in FIG. 2.
  • Labels and/or weights can be added to the videos according to the number of concurrent accesses, the network environment, the usage scale, and so on. For example, a video in a 5G network environment is given a 5G label and a video in a 4G network environment a 4G label; likewise, a video with many concurrent accesses is given a higher weight and a video with few concurrent accesses a lower weight. The specific way of adding labels and weights can be set according to actual needs, which is not specifically limited in this embodiment of the present application.
  • After labels and/or weights are added, the weighted sampling algorithm samples the videos so that videos with high weights are sampled more and videos with low weights are sampled less; representative videos are thus extracted for assessment, which reduces the system pressure brought by massive data. A minimal sketch of such sampling follows.
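  • A minimal sketch of label-and-weight based sampling, assuming weights derived from the number of concurrent accesses; `random.choices` performs sampling with replacement, which is only one possible realization of the weighted sampling algorithm mentioned above.

```python
import random

# Illustrative (video_id, labels, weight) triples; weights here are assumed to
# grow with the number of concurrent accesses, as suggested in the text.
videos = [
    ("v1", {"5G", "meeting"},   8.0),
    ("v2", {"4G", "live"},      3.0),
    ("v3", {"5G", "on_demand"}, 5.0),
    ("v4", {"4G", "meeting"},   1.0),
]

def weighted_sample(items, k):
    """Draw k videos so that high-weight videos are sampled more often
    and low-weight videos less often (sampling with replacement)."""
    weights = [w for _, _, w in items]
    return random.choices(items, weights=weights, k=k)

subset = weighted_sample(videos, k=3)
print([vid for vid, _, _ in subset])
```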
  • The video quality assessment method provided in this embodiment of the present application obtains the category of each video in the video set, inputs videos of different categories into different preset models, and uses the different preset models to obtain video quality assessment results. Because a preset model is used to evaluate video quality, the assessment can be automated, which is efficient and suitable for large-scale deployment; at the same time, by obtaining the category of each video, inputting different videos into different preset models, and using the different models to obtain the assessment results, videos of different categories in different scenarios obtain quality assessment results adapted to their categories, so that the video quality assessment is applicable to videos of various scenarios and accurate assessment results can be obtained for all of them.
  • In a specific example, the videos in the video set include videos of a first category, and the preset models include a metric mapping evaluation model. Before videos of different categories are input into different preset models (S102), the video quality assessment method further includes: acquiring transmission feature data of the videos on the video link. Inputting videos of different categories into different preset models and using the preset models to obtain video quality assessment results (S102) then includes: inputting the transmission feature data of the first-category videos into the metric mapping evaluation model and using the metric mapping evaluation model to obtain a first score of the first-category videos, where the first score is output by the metric mapping evaluation model after evaluation according to the transmission feature data.
  • FIG. 3 is another schematic flowchart of the video quality assessment method provided in the embodiment of the present application, which specifically includes the following steps:
  • S101': Classify each video in the video set.
  • S102': Acquire transmission feature data of the videos on the video link. Transmission feature data refers to feature data related to the transmission of the video on the video link, such as packet loss rate, frame loss rate, delay, or jitter.
  • So that the trained metric mapping evaluation model can assess all videos, transmission feature data that every video possesses can be selected when training the model. In addition, the more transmission feature data is input during training, the more accurate the quality assessment results obtained by the metric mapping evaluation model will be.
  • S103': Input the transmission feature data of the first-category videos into the metric mapping evaluation model, and use the metric mapping evaluation model to obtain the first score of the first-category videos, where the first score is output by the metric mapping evaluation model after evaluation according to the transmission feature data.
  • In a specific implementation of the metric mapping evaluation model, as shown in FIG. 4, KPIs (Key Performance Indicators) can be obtained from the monitoring data of the video (for example, logs) and used as the transmission feature data; the KPIs are then mapped to KQIs (Key Quality Indicators), and the KQIs are finally mapped to a VMOS (Video Mean Opinion Score), which is output as the first score of the metric mapping evaluation model. Through this mapping, if the VMOS is low, for example below a certain score, the abnormal transmission feature data can be found by working backwards, so that the problem causing the quality degradation is located at the same time as the assessment result is obtained, which facilitates improvement of video quality. A toy illustration of such a mapping chain follows.
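  • The KPI → KQI → VMOS chain could be approximated, for illustration only, by two small hand-written mappings chained together; every coefficient and feature name below is made up for the example and is not taken from the patent.

```python
# Toy stand-in for the metric mapping evaluation model: KPIs measured on the
# video link are mapped to KQIs, and KQIs are mapped to a VMOS-style score.

def kpi_to_kqi(kpi: dict) -> dict:
    """Map link-level KPIs (packet loss, frame loss, delay, jitter) to KQIs."""
    return {
        "fluency":   max(0.0, 1.0 - 4.0 * kpi["frame_loss"] - 0.002 * kpi["delay_ms"]),
        "clarity":   max(0.0, 1.0 - 6.0 * kpi["packet_loss"]),
        "stability": max(0.0, 1.0 - 0.02 * kpi["jitter_ms"]),
    }

def kqi_to_vmos(kqi: dict) -> float:
    """Map KQIs to a 1..5 VMOS-style first score (weights are assumptions)."""
    quality = 0.4 * kqi["fluency"] + 0.4 * kqi["clarity"] + 0.2 * kqi["stability"]
    return round(1.0 + 4.0 * quality, 2)

kpis = {"packet_loss": 0.02, "frame_loss": 0.01, "delay_ms": 80, "jitter_ms": 12}
print(kqi_to_vmos(kpi_to_kqi(kpis)))   # first score of a first-category video
```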
  • It should be understood that video quality is generally evaluated by comparing the video watched by the user with the original video, and the quality is determined by the difference between the two. In some cases, however, the original video is difficult to obtain, and the quality cannot be determined in this way; for example, in a weak network environment it is difficult to obtain the original video. In that case, the transmission feature data of the video link can be acquired and an assessment score of the video quality obtained from it, thereby realizing quality assessment of the video. Correspondingly, videos in a weak network environment can be classified as first-category videos.
  • In a specific example, after the metric mapping evaluation model is used to obtain the first score of the first-category video, the method further includes: if the first score is less than a first expected score, backtracking according to the metric mapping evaluation model to locate the abnormal transmission feature data of the video on the video link, and/or outputting early-warning information on the video quality according to the first score.
  • The first expected score may be set according to actual needs, and no specific limitation is imposed here. When backtracking according to the metric mapping evaluation model, the abnormal transmission feature data can be located by working back from the above-mentioned VMOS to the KQIs, then from the KQIs to the KPIs, and finally identifying the abnormal transmission feature data from the KPIs, so that the corresponding abnormality is located.
  • By backtracking and locating the abnormal transmission feature data when the first score is less than the first expected score, specific problems can be located when video quality is poor, so that targeted improvement measures can be taken; and by outputting early-warning information according to the first score, users of the video can learn in advance that the video quality is about to deteriorate, which improves user experience and realizes a priori prediction of video quality. A sketch of this backtracking-and-warning step is given below.
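  • One way to picture the backtracking step, purely as an assumption-laden sketch: when the first score falls below the expected score, flag the KPIs that exceed tolerance thresholds and emit a warning. The thresholds and the warning format are illustrative only.

```python
# Illustrative anomaly localization: if the first score is below expectation,
# flag the KPIs that violate assumed tolerances. All thresholds are made up.
KPI_TOLERANCES = {"packet_loss": 0.01, "frame_loss": 0.005, "delay_ms": 150, "jitter_ms": 30}

def backtrack_abnormal_kpis(kpis: dict) -> list:
    """Return the names of KPIs that look abnormal under the assumed tolerances."""
    return [name for name, value in kpis.items() if value > KPI_TOLERANCES[name]]

def check_first_score(score: float, kpis: dict, expected: float = 3.5):
    """Emit an early warning and the located abnormal KPIs when the score is low."""
    if score >= expected:
        return None
    return {
        "warning": f"video quality degrading: VMOS {score} < expected {expected}",
        "abnormal_kpis": backtrack_abnormal_kpis(kpis),
    }

print(check_first_score(2.8, {"packet_loss": 0.02, "frame_loss": 0.001,
                              "delay_ms": 90, "jitter_ms": 12}))
```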
  • In a specific example, the videos in the video set also include videos of a second category, and the preset models also include an end-to-end evaluation model. Before videos of different categories are input into different preset models (S102), the method further includes: setting at least one collection point at the front end and at the back end of the video link of the video, and collecting, through the collection points, front-end video data at the front end and back-end video data at the back end. Inputting videos of different categories into different preset models and using the preset models to obtain video quality assessment results (S102) then further includes: inputting the front-end video data and the back-end video data of the second-category videos into the end-to-end evaluation model, and using the end-to-end evaluation model to obtain a second score of the second-category videos, where the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
  • FIG. 5 is another schematic flowchart of the video quality assessment method provided in the embodiment of the present application, which specifically includes the following steps:
  • S101'': Classify each video in the video set.
  • S102'': Set at least one collection point at the front end and at the back end of the video link of the video, and collect, through the collection points, the front-end video data at the front end and the back-end video data at the back end.
  • In a specific example, when the front-end video data or the back-end video data of the video is collected at a collection point, the data is collected by bypass copying, so that the normal video link flow is not affected, no additional burden is generated, and the process is imperceptible to the user.
  • FIG. 6 is an example diagram of video data collected by the video quality assessment method provided in the embodiment of the present application. As shown in FIG. 6, collection points can be set before encoding, before transmission, after transmission, and after encoding on the video link, and the front-end video data and back-end video data are obtained by bypass copying, as the sketch below illustrates.
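  • A toy sketch of the bypass-copy idea: each collection point passes the frame through unchanged and keeps a copy on the side, so the normal link flow is not affected. The in-memory queue is a stand-in for whatever side channel a real deployment would use; none of these names come from the original disclosure.

```python
from collections import deque

class CollectionPoint:
    """Bypass-copy tap placed at a point on the video link
    (e.g. before encoding, before/after transmission, after coding)."""
    def __init__(self, name: str):
        self.name = name
        self.copies = deque()      # side buffer; the main path is untouched

    def tap(self, frame):
        self.copies.append(frame)  # copy kept for evaluation
        return frame               # forwarded unchanged on the normal link

front_end = CollectionPoint("before_encoding")
back_end = CollectionPoint("after_transmission")

# Simulated link: the taps sit transparently in the pipeline.
for frame in ["f0", "f1", "f2"]:
    sent = front_end.tap(frame)     # front-end video data
    received = sent                 # encoding/transmission would happen here
    back_end.tap(received)          # back-end video data

print(len(front_end.copies), len(back_end.copies))
```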
  • S103'': Input the front-end video data and back-end video data of the second-category videos into the end-to-end evaluation model, and use the end-to-end evaluation model to obtain the second score of the second-category videos, where the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
  • When building the end-to-end evaluation model, machine learning and deep learning techniques can be used to construct a variety of full-reference algorithms, for example taking PSNR, VMAF, or DVQA as the base network to form a set of end-to-end evaluation algorithms, which are then trained to obtain the end-to-end evaluation model. The base network can be chosen according to the computing power of the server and the quality requirements: for the link before and after encoding, VMAF can be chosen as the base network to save server computing power; for the link between the video production end and the playback end, DVQA can be chosen as the base network to accurately extract the joint spatio-temporal features of the video.
  • FIG. 7 is a schematic diagram of the principle of an end-to-end evaluation model in the video quality evaluation method provided in the embodiment of the present application.
  • The distorted video in the figure is the back-end video data, and the reference video is the front-end video data. The reference video and the distorted video should be of the same kind and from the same time period; they are relative concepts and may be taken before encoding and after decoding, or before encoding and after encoding; as long as there is a difference (loss) between the two, they can be used for comparison. That is, the front-end video data and the back-end video data may respectively refer to video data before encoding and video data after decoding, or to video data before encoding and video data after encoding.
  • It should be understood that when the network environment is good enough for the original video to be obtained, quality assessment can be performed by comparing the original video before and after transmission; accordingly, in contrast to the condition used for the first category, videos in a non-weak network environment can be classified as second-category videos.
  • By setting collection points at the front end and the back end of the video link to collect video data, and then inputting the collected front-end video data and back-end video data into the end-to-end evaluation model to obtain the quality assessment result of the second-category video, quality assessment of the second category of videos can be achieved; a toy full-reference comparison is sketched below.
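  • A minimal full-reference comparison in the spirit of the end-to-end evaluation model, using frame-wise PSNR between reference (front-end) and distorted (back-end) frames. A real system would use a trained model (for example VMAF- or DVQA-based) rather than this toy metric, and the mapping from PSNR onto a 1..5 score is purely an assumption.

```python
import math

def frame_psnr(ref, dist, peak=255.0):
    """PSNR between two equally sized frames given as flat lists of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def end_to_end_score(reference_frames, distorted_frames) -> float:
    """Toy second score: average PSNR mapped onto a 1..5 scale (assumed mapping)."""
    psnrs = [frame_psnr(r, d) for r, d in zip(reference_frames, distorted_frames)]
    avg = sum(min(p, 50.0) for p in psnrs) / len(psnrs)   # cap to keep the scale bounded
    return round(1.0 + 4.0 * max(0.0, min(1.0, (avg - 20.0) / 30.0)), 2)

ref = [[10, 20, 30, 40]] * 3    # front-end video data (three tiny "frames")
dist = [[12, 19, 33, 38]] * 3   # back-end video data after the link
print(end_to_end_score(ref, dist))
```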
  • In a specific example, after the end-to-end evaluation model is used to obtain the second score of the second-category video (S103''), the method further includes: if the second score is lower than a second expected score, inputting the transmission feature data of the second-category video into the metric mapping evaluation model and using the metric mapping evaluation model to obtain a first score of the second-category video, and/or outputting early-warning information on the video quality according to the second score.
  • The second expected score may be set according to actual needs, which is not specifically limited in this embodiment of the present application. It should be understood that when the transmission feature data of the videos on the video link is acquired (S102'), the transmission feature data of all videos (or, if sampling is used, of all sampled videos) is acquired, including both first-category and second-category videos; therefore the transmission feature data of a second-category video can be input directly into the metric mapping evaluation model to obtain its first score. Referring again to FIG. 6, when the video data is collected through the collection points, the transmission feature data of the video (that is, the metric data in the figure) is collected at the same time.
  • By inputting the transmission feature data of the second-category video into the metric mapping evaluation model when the score is lower than expected, and using the metric mapping evaluation model to obtain a first score for it, the quality of the video can be assessed further from two dimensions; at the same time, because the metric mapping model obtains its score from the transmission feature data, the abnormal transmission feature data can be deduced from the score, which enables problem localization for video quality. In addition, outputting early-warning information according to the second score lets users of the video know in advance that the video quality is about to deteriorate, improving user experience and realizing a priori prediction of video quality. A sketch of this fallback logic follows.
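  • The fallback from the end-to-end score to the metric mapping score might look like the routine below; the threshold values, the callable interface, and the stand-in model are assumptions for illustration only.

```python
def reevaluate_if_needed(second_score: float, kpis: dict,
                         metric_mapping_model, expected_second: float = 3.5):
    """If the end-to-end (second) score is below expectation, fall back to the
    metric mapping evaluation model to obtain a first score from the already
    collected transmission feature data, and emit an early warning."""
    if second_score >= expected_second:
        return {"second_score": second_score}
    first_score = metric_mapping_model(kpis)   # any callable mapping KPIs -> score
    return {
        "second_score": second_score,
        "first_score": first_score,
        "warning": f"video quality degrading: second score {second_score} "
                   f"< expected {expected_second}",
    }

# Usage with a trivial stand-in for the metric mapping evaluation model:
toy_model = lambda kpis: round(5.0 - 100.0 * kpis["packet_loss"], 2)
print(reevaluate_if_needed(2.9, {"packet_loss": 0.02}, toy_model))
```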
  • FIG. 8 is a principle example diagram of the video quality assessment method provided in the embodiment of the present application.
  • Specifically, the video set of the video service is uniformly classified, some videos are extracted by weighted sampling, and link data is then collected for the extracted videos. For first-category videos, the metric mapping evaluation model is used for video quality assessment; for second-category videos, the end-to-end evaluation model is used. If the score obtained by the end-to-end evaluation model is lower than expected, the metric mapping evaluation model can be used for further assessment, which yields a score in another dimension and also allows the abnormal transmission feature data to be analyzed, thereby realizing prediction of the corresponding video quality and localization of problems. A skeleton of this overall flow is sketched below.
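  • Putting the pieces of FIG. 8 together, the overall flow might be orchestrated roughly as below; every callable is a placeholder for the corresponding stage described in the text, not an implementation from the patent.

```python
def assess_video_set(videos, classify, weighted_sample, collect_link_data,
                     metric_mapping_model, end_to_end_model, expected=3.5):
    """Skeleton of the overall flow in FIG. 8; all callables are placeholders."""
    report = {}
    for video_id in weighted_sample(videos):          # weighted sampling of the set
        category = classify(video_id)                 # unified classification
        link = collect_link_data(video_id)            # KPIs + front/back-end video data
        entry = {}
        if category == "first_category":
            entry["first_score"] = metric_mapping_model(link["kpis"])
        else:
            entry["second_score"] = end_to_end_model(link["front"], link["back"])
            if entry["second_score"] < expected:      # re-evaluate on the other dimension
                entry["first_score"] = metric_mapping_model(link["kpis"])
        report[video_id] = entry
    return report
```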
  • FIG. 9 is an example diagram of preset model training, verification and testing in the video quality assessment method provided in the embodiment of the present application.
  • Specifically, each preset model can be trained with corresponding data collected from the video set; after a certain amount of training, further data is collected to verify the accuracy of the model, and once the model reaches a certain accuracy, the preset model (that is, the quality evaluation model in the figure) is applied to testing, so that the corresponding quality assessment results are obtained (for example, the scores of 98, 32, and 76 for the videos in the figure). One way this train/verify/test cycle could look in code is sketched below.
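  • A hedged sketch of the train-then-verify loop of FIG. 9: train until a held-out accuracy target is met, then apply the model to test videos. The accuracy metric, the target value, and the `train_one_round` / `evaluate` / `predict` method names are assumptions, not part of the original disclosure.

```python
def fit_preset_model(model, train_data, val_data, target_accuracy=0.9, max_rounds=50):
    """Train the preset model on data collected from the video set, then verify
    it on held-out data until it reaches an assumed accuracy target."""
    for _ in range(max_rounds):
        model.train_one_round(train_data)     # placeholder training step
        accuracy = model.evaluate(val_data)   # placeholder verification step
        if accuracy >= target_accuracy:
            break
    return model

# Once verified, the model is applied to test videos to produce scores such as
# the 98 / 32 / 76 values shown in FIG. 9:
# scores = {vid: model.predict(features) for vid, features in test_data.items()}
```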
  • In one embodiment, a video quality assessment apparatus 200 is provided, as shown in FIG. 10, including an acquisition module 201 and an evaluation module 202. The functions of each module are described in detail as follows:
  • the acquisition module 201 is configured to classify each video in the video set;
  • the evaluation module 202 is configured to input videos of different categories into different preset models and use the preset models to obtain video quality assessment results.
  • In one example, the videos include first-category videos and the preset models include a metric mapping evaluation model. The video quality assessment apparatus 200 provided in the embodiment of the present application further includes a first acquisition module, where the first acquisition module is used to acquire transmission feature data of the videos on the video link; the evaluation module 202 is also used to: input the transmission feature data of the first-category videos into the metric mapping evaluation model and use the metric mapping evaluation model to obtain the first score of the first-category videos, where the first score is output by the metric mapping evaluation model after evaluation according to the transmission feature data.
  • In one example, the video quality assessment apparatus 200 provided in the embodiment of the present application further includes an assessment processing module, which is used to: when the first score is less than the first expected score, backtrack according to the metric mapping evaluation model to locate the abnormal transmission feature data of the video on the video link, and/or output early-warning information on the video quality according to the first score.
  • In one example, the videos further include second-category videos and the preset models further include an end-to-end evaluation model. The video quality assessment apparatus 200 provided in the embodiment of the present application further includes a second acquisition module, where the second acquisition module is used to: set at least one collection point at the front end and at the back end of the video link of the video, and collect, through the collection points, the front-end video data at the front end and the back-end video data at the back end; the evaluation module 202 is also used to: input the front-end video data and back-end video data of the second-category videos into the end-to-end evaluation model and use the end-to-end evaluation model to obtain the second score of the second-category videos, where the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
  • In one example, the video quality assessment apparatus 200 provided in the embodiment of the present application further includes a re-evaluation module, which is used to: if the second score is lower than the second expected score, input the transmission feature data of the second-category video into the metric mapping evaluation model and use the metric mapping evaluation model to obtain the first score of the second-category video, and/or output early-warning information on the video quality according to the second score.
  • In one example, the second acquisition module is also used to collect the front-end video data and the back-end video data at the collection points by means of bypass copying.
  • In one example, the acquisition module 201 is further configured to classify each video in the video set according to at least one kind of data among functional scenario, video length, number of concurrent accesses, access type, and network environment parameters.
  • In one example, the video quality assessment apparatus 200 provided in the embodiment of the present application further includes an extraction module, which is used to: add labels and/or weights to each video in the video set, and extract some of the videos in the video set with a weighted sampling algorithm according to the labels and/or weights; the evaluation module 202 is also used to: input videos of different categories among the extracted videos into different preset models. A hypothetical object-oriented sketch of this module split is given below.
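  • Viewed as code, the apparatus of FIG. 10 is essentially a thin wrapper around the method steps; the class and method names below are hypothetical and not taken from the original disclosure.

```python
class VideoQualityAssessmentDevice:
    """Hypothetical mirror of the acquisition/evaluation module split in FIG. 10."""

    def __init__(self, classifier, preset_models):
        self.classifier = classifier          # plays the role of acquisition module 201
        self.preset_models = preset_models    # category -> model, used by evaluation module 202

    def acquire(self, videos):
        """Acquisition module: classify each video in the video set."""
        return {video_id: self.classifier(video_id) for video_id in videos}

    def evaluate(self, categorized, features):
        """Evaluation module: route each category to its preset model."""
        return {video_id: self.preset_models[category](features[video_id])
                for video_id, category in categorized.items()}
```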
  • this embodiment is an apparatus embodiment corresponding to the foregoing method embodiments, and this embodiment can be implemented in cooperation with the foregoing method embodiments.
  • the relevant technical details mentioned in the foregoing method embodiments are still valid in this embodiment, and will not be repeated here in order to reduce repetition.
  • the relevant technical details mentioned in this embodiment may also be applied to the foregoing method embodiments.
  • It is worth mentioning that the modules involved in this embodiment are logical modules; in practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present application, units that are not closely related to solving the technical problem raised by the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
  • In one embodiment, an electronic device is provided, as shown in FIG. 11, including: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301, where the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can perform the above video quality assessment method.
  • the memory and the processor are connected by a bus
  • the bus may include any number of interconnected buses and bridges, and the bus connects one or more processors and various circuits of the memory together.
  • the bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore will not be further described herein.
  • the bus interface provides an interface between the bus and the transceivers.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing means for communicating with various other devices over a transmission medium.
  • the data processed by the processor is transmitted on the wireless medium through the antenna, further, the antenna also receives the data and transmits the data to the processor.
  • The processor is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions, while the memory can be used to store data that the processor uses when performing operations.
  • In one embodiment, a computer-readable storage medium storing a computer program is provided; the above method embodiments are implemented when the computer program is executed by a processor.
  • That is, those skilled in the art can understand that all or some of the steps of the methods in the above embodiments can be completed by a program instructing related hardware; the program is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The embodiments of the present application relate to the field of communication technologies and disclose a video quality assessment method, including: classifying each video in a video set; and inputting videos of different categories into different preset models, and using the preset models to obtain quality assessment results of the videos. The embodiments of the present application also disclose a video quality assessment apparatus, an electronic device, and a storage medium.

Description

视频质量评估方法、装置、电子设备及存储介质
交叉引用
本申请基于申请号为“202111115460.5”、申请日为2021年09月23日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此以引入方式并入本申请。
技术领域
本申请实施例涉及通信技术领域,特别涉及一种视频质量评估方法、装置、电子设备及存储介质。
背景技术
5G(第五代移动通信技术)为具有高速率、低时延和大连接特点的新一代宽带移动通信技术,将引领全球从移动互联网时代过渡到移动物联网时代。随着5G的商用和普及,视频业务也会涉及到多种场景,例如高清视频通话、自动驾驶和远程医疗等场景。为了及时准确地掌握视频业务系统的整体运行状态,急需一套自动化运行的视频质量评估体系对视频质量进行评估,并针对视频质量的薄弱环节或技术缺陷等提出改进措施,从而不断提高视频业务系统的运行质量,满足用户对视频业务日益强烈的质量诉求。
目前,对视频质量进行评估主要有两种方法:一种是由评估员进行主观质量评估,另一种是通过建立数学模型来进行客观质量评估。然而,前一种评估方法由于采用人工的方式进行,效率较低,难以大规模部署应用;后一种评估方法由于建立的数学模型是针对单一场景下的视频,因此其只能适应单一场景,对于多种场景的视频无法得出准确的评估结果。
发明内容
本申请实施例提供了一种视频质量评估方法,包括:对视频集中各个视频进行分类;将不同类别的视频输入至不同的预置模型中,利用所述预置模型获取所述视频的质量评估结果。
本申请实施例还提供了一种视频质量评估装置,包括:获取模块,用于对视频集中各个视频进行分类;评估模块,用于将不同类别的视频输入至不同的预置模型中,利用所述预置模型获取所述视频的质量评估结果。
本申请实施例还提供了一种电子设备,包括:至少一个处理器;以及,与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行上述的视频质量评估方法。
本申请实施例还提供了一种计算机可读存储介质,存储有计算机程序,所述计算机程序被处理器执行时实现上述的视频质量评估方法。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定。
图1是本申请实施例提供的视频质量评估方法的流程示意图;
图2是本申请实施例提供的视频质量评估方法进行加权采样的原理示意图;
图3是本申请实施例提供的视频质量评估方法的另一流程示意图;
图4是本申请实施例提供的视频质量评估方法中度量映射评估模型的原理示意图;
图5是本申请实施例提供的视频质量评估方法的又一流程示意图;
图6是本申请实施例提供的视频质量评估方法采集视频数据的示例图;
图7是本申请实施例提供的视频质量评估方法中端到端评估模型的原理示意图;
图8是本申请实施例提供的视频质量评估方法的原理示意图;
图9是本申请实施例提供的视频质量评估方法中预置模型的训练、验证和测试示意图;
图10是本申请实施例提供的视频质量评估装置的模块结构示意图;
图11是本申请实施例提供的电子设备的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请的各实施例进行详细的阐述。然而,本领域的普通技术人员可以理解,在本申请各实施例中,为了使读者更好地理解本申请而提出了许多技术细节。但是,即使没有这些技术细节和基于以下各实施例的种种变化和修改,也可以实现本申请所要求保护的技术方案。以下各个实施例的划分是为了描述方便,不应对本申请的具体实现方式构成任何限定,各个实施例在不矛盾的前提下可以相互结合相互引用。
在一个实施例中,涉及一种视频质量评估方法,通过对视频集中各个视频进行分类,将不同类别的视频输入至不同的预置模型中,利用不同的预置模型获取视频的质量评估结果。由于是利用预置模型来评估视频质量,因此可以实现视频质量的自动化评估,效率较高,适宜大规模部署应用;同时,通过将不同的视频输入至不同的预置模型中,再利用不同的预置模型来获取视频质量评估结果,可以使不同场景下不同类别的视频得到与类别相适应的质量评估结果,从而可以使视频质量评估适应多种场景的视频,对多种场景的视频质量均能得到准确的评估结果。
应当说明的是,本申请实施例提供的视频质量评估方法的执行主体可以为服务端,其中,服务端可以由单独的服务器或多个服务器组成的集群来实现。
本申请实施例提供的视频质量评估方法的具体流程如图1所示,包括以下步骤:
S101:对视频集中各个视频进行分类。
在一个具体的例子中,对视频集中各个视频进行分类,可以是按照功能场景、视频长度、接入并发数、接入类型和网络环境参数中的至少一种数据对视频集中各个视频进行分类,从而得到各个视频的类别。例如,按照功能场景可以分为:会议、直播或点播等类别;按照视频长度可以分为:长视频、短视频等类别。具体分类的方法可以根据实际需要进行,本申请实施例对此不做具体 限制。
S102:将不同类别的视频输入至不同的预置模型中,利用预置模型获取视频的质量评估结果。
可以理解的是,当使用的预置模型不同时,获得的视频的质量评估结果也不相同。另外,当用视频在不同位置的视频数据输入至相同的预置模型时,获得的视频的质量评估结果也可能不相同。例如,将视频链路上传输后的视频输入至预置模型,和将解码后的视频输入至预置模型相比,获得的视频的质量评估结果可能不相同。预置模型可以为两个以上,从而实现对不同类别的视频的质量评估,具体可以根据各个视频的分类方法决定,本申请实施例对预置模型的种类和数量不做具体限制。
由于视频集中包括的视频数量可能是海量的,若对视频集中每一视频均进行视频质量评估,则服务端的算力可能不足,同时意义不大。在一个具体的例子中,在将不同类别的视频输入至不同的预置模型中之前,还包括:对视频集中各个视频添加标签和/或权重,根据标签和/或权重采用加权采样算法抽取视频集中的部分视频;而将不同类别的视频输入至不同的预置模型中,包括:将部分视频中不同类别的视频输入至不同的预置模型中,其中,对视频进行加权采样的原理示意图可以参考图2。
在对视频集中各个视频添加标签和/或权重时,可以按照接入并发数、网络环境和使用规模等对视频添加标签和/或权重。例如,若视频为5G网络环境,则添加5G的标签,若视频为4G网络环境,则添加4G的标签;又例如,若接入并发数较多,则添加较高的权重;若接入并发数较少,则添加较少的权重。具体添加标签和权重的方式可以根据实际需要进行设置,本申请实施例对此不做具体限制。在对各个视频添加标签和/或权重后,对各个视频采集加权采样算法进行抽样,从而可以使视频集中的视频权重高的采样多、权重低采样少,抽取得到具有代表性的视频进行评估,可以降低海量数据带来的系统压力。
通过对视频添加标签和/或权重,根据标签和/或权重采用加权采样算法抽取视频集中的部分视频,对抽取的部分视频进行视频质量评估,可以抽取代表性的视频进行质量评估,能够较好地反映视频集整体的视频质量,同时对海量的视频集进行降维处理,减少服务端的负担。
本申请实施例提供的视频质量评估方法,通过获取视频集中各个视频的类别,将不同类别的视频输入至不同的预置模型中,利用不同的预置模型获取视频的质量评估结果。由于是利用预置模型来评估视频质量,因此可以实现视频质量的自动化评估,效率较高,适宜大规模部署应用;同时,通过获取视频的类别,将不同的视频输入至不同的预置模型中,再利用不同的预置模型来获取视频质量评估结果,可以使不同场景下不同类别的视频得到与类别相适应的质量评估结果,从而可以使视频质量评估适应多种场景的视频,对多种场景的视频质量均能得到准确的评估结果。
在一个具体的例子中,视频集中的视频包括第一类别视频,预置模型包括度量映射评估模型,在将不同类别的视频输入至不同的预置模型中(S102)之前,本申请实施例提供的视频质量评估方法还包括:获取视频在视频链路上的传输特征数据;而将不同类别的视频输入至不同的预置模型中,利用预置模型获取视频的质量评估结果(S102),则包括:将第一类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第一类别视频的第一得分,第一得分由度量映射评估模型根据传输特征数据评估后输出。
请参考图3,其为本申请实施例提供的视频质量评估方法的另一流程示意图,具体包括以下步骤:
S101’:对视频集中各个视频进行分类。
S102’:获取视频在视频链路上的传输特征数据。
传输特征数据是指视频在视频链路上与传输相关的特征数据,例如丢包率、丢帧率、延迟或抖动等特征数据。
为了使训练好的度量映射评估模型可以实现对所有视频均可以实现质量评估,在训练度量映射评估模型时,可以选取每一个视频都具有的传输特征数据。另外,在训练度量映射评估模型时,输入的传输特征数据越多,度量映射评估模型得到的质量评估结果越准确。
S103’:将第一类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第一类别视频的第一得分,第一得分由度量映射评估模型根据传输特征数据评估后输出。
度量映射评估模型在具体实现时,如图4所示,可以根据视频的监控数据 (例如日志)得到KPI(Key Performance Indication,关键绩效指标),将KPI作为传输特征数据,然后将KPI映射到KQI(Key Quality Indicators,关键质量指标),最后再将KQI映射至VMOS(Video Mean Opinion Score,视频平均主观意见分),将VMOS作为度量映射评估模型输出的第一得分。通过以上映射,若VMOS较低时,例如低于某一得分时,可以通过倒推的方式发送传输特征数据中的异常传输特征数据,从而在得到视频质量评估结果的同时,实现对导致视频质量下降的问题进行定位,方便视频质量的改进。
应当理解的是,对视频质量的评估,一般通过用户观看的视频和原始视频进行对比,通过比较原始视频与观看的视频之间的差别来确定视频的质量。但对于一些情况下,若原始视频难以获取,则难以通过比较原始视频与观看的视频之间的差别来确定视频的质量,例如在弱网环境下,难以获取原始视频,此时可以通过获取视频链路的传输特征数据,根据传输特征数据来获取视频质量的评估得分,实现对视频的质量评估。相应地,可以将弱网环境下的视频划分为第一类别视频。
在一个具体的例子中,在利用度量映射评估模型获取第一类别视频的第一得分之后,还包括:若第一得分小于第一预期得分,则根据度量映射评估模型倒推并定位视频在视频链路上异常的传输特征数据,和/或,根据第一得分输出视频质量的预警信息。
第一预期得分可以根据实际需要进行设置,此处不做具体限制。在根据度量映射评估模型倒推并定位视频在视频链路上异常的传输特征数据时,可以是从上述的VMOS倒推至KQI,再由KQI倒推至KPI,最后根据KPI定位出异常的传输特征数据,进行相应的异常定位。
通过在第一得分小于第一预期得分时,根据度量映射评估模型倒推并定位视频在视频链路上异常的传输特征数据,可以在视频质量较差时定位出具体的问题,从而作出针对性的措施改进;而根据第一得分输出视频质量的预警信息,可以使视频的用户提前获知视频质量即将变差的信息,提高用户体验,实现视频质量的先验预测。
在一个具体的例子中,视频集中的视频还包括第二类别视频,预置模型还包括端到端评估模型;在将不同类别的视频输入至不同的预置模型中(S102) 之前,还包括:在视频的视频链路的前端和后端分别设置一个采集点,通过采集点采集视频在前端的前端视频数据和在后端的后端视频数据;而将不同类别的视频输入至不同的预置模型中,利用预置模型获取视频的质量评估结果(S102),还包括:将第二类别视频的前端视频数据和后端视频数据输入至端到端评估模型中,利用端到端评估模型获取第二类别视频的第二得分,其中第二得分由端到端评估模型在比较后端视频数据和前端视频数据之间的差别后输出。
请参考图5,其为本申请实施例提供的视频质量评估方法的又一流程示意图,具体包括以下步骤:
S101”:对视频集中各个视频进行分类。
S102”:在视频的视频链路的前端和后端分别设置至少一个采集点,通过采集点采集视频在前端的前端视频数据和在后端的后端视频数据。
在一个具体的例子中,在通过采集点采集视频的前端视频数据或后端视频数据时,在采集点通过旁路复制的方式采集前端视频数据和后端视频数据,从而不影响到正常的视频链路流程,不会产生额外的负担,实现用户无感知。
请参考图6,其为本申请实施例方式提供的视频质量评估方法采集视频数据的示例图。如图6所示,可以分别在视频链路的编码前、传输前、传输后和编码后设置采集点,通过旁路复制的方式来获取前端视频数据和后端视频数据。
S103”:将第二类别视频的前端视频数据和后端视频数据输入至端到端评估模型中,利用端到端评估模型获取第二类别视频的第二得分,第二得分由端到端评估模型在比较后端视频数据和前端视频数据之间的差别后输出。
在构建端到端评估模型时,可以利用机器学习、深度学习等技术构建多种全参考的算法,例如,以PSNR、VMAF、DVQA等为基础网络,形成端到端评估算法集,最后经过训练得到端到端评估模型。在选择基础网络时,可以根据服务端的算力和质量的需求决定。例如,在编码前后的链路,可选择VMAF为基础网络,可以节省服务端的算力;又例如,在视频的产生端和播放端的链路,可选择DVQA为基础网络,可以对视频的时空联合特征进行精准提取。
请参考图7,其为本申请实施例提供的视频质量评估方法中端到端评估模型的原理示意图。图中的失真视频即为后端视频数据,而参考视频即为前端视频数据,参考视频和失真视频应为同一种类和同一时间段内的,它们为相对概 念,可以是编码前和解码后的,也可以是编码前和编码后,只要两者之间有差别(损失),即可以拿来作为对比。也即,前端视频数据和后端视频数据可以分别指编码前的视频数据和解码后的视频数据,也可以分别指编码前的视频数据和编码后的视频数据。
应当理解的是,在网络环境较好可以获取到原始视频的情况下,可以通过对比原始视频在传输前后的对比来进行质量评估,因此,与第一类别视频划分的条件相对,可以将非弱网环境下的视频划分为第二类别视频。
通过在视频链路的前端和后端分别设置采集点来采集视频数据,再将采集的前端视频数据和后端视频数据输入至端到端评估模型中,得到第二类别视频的质量评估结果,可以实现对第二类别视频的质量评估。
在一个具体的例子中,在利用端到端评估模型获取第二类别视频的第二得分(S103’)之后,还包括:若第二得分低于第二预期得分,则将第二类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第二类别视频的第一得分,和/或,根据第二得分输出视频质量的预警信息。
第二预期得分可以根据实际需要进行设置,本申请实施例对此不做具体限制。应当理解的是,在获取视频在视频链路上的传输特征数据(S102’)时,获取的是视频中所有视频(若为抽样则为抽样中的所有视频)的传输特征数据,包括第一类别视频和第二类别视频,因此这里可以直接将第二类别视频的传输特征数据输入至度量映射评估模型中来获取第一得分。可以继续参考图6,在通过采集点采集视频数据时,同时采集视频的传输特征数据(即图中的度量数据)。
通过在得分低于预期时,将第二类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第二类别视频的第一得分,可以从两个维度上进一步评估视频的质量;同时,由于度量映射模型是通过传输特征数据得到得分的,因此还可以进一步根据得分倒推出异常的传输特征数据,从而实现视频质量的问题定位;另外,根据第二得分输出视频质量的预警信息,可以使视频的用户提前获知视频质量即将变差的信息,提高用户体验,实现视频质量的先验预测。
请参考图8,其为本申请实施例提供的视频质量评估方法的原理示例图。 具体地,将视频业务中的视频集进行统一分类,通过加权采样后抽取部分视频,之后对抽取的部分视频进行链路采集数据,对于第一类别视频采用度量映射模型进行视频质量评估,对于第二类别视频采用端到端评估模型进行视频质量评估;若采用端到端评估模型得到的得分低于预期,则可以再采用度量映射评估模型进一步评估,在得到另外一种维度的同时还可以进一步分析出异常的传输特征数据,从而实现相应的视频质量的预测和问题的定位。
请参考图9,其为本申请实施例提供的视频质量评估方法中预置模型训练、验证和测试的示例图。具体地,预置模型可以分别从视频集中采集相应的数据进行训练,在训练到一定程度后,再采集相应的数据验证模型的精度,在模型达到一定的精度后,将预置模型(即图中的质量评估模型)应用于测试,从而得到相应的质量评估结果(如图中的98、32、76各个视频的得分)。
此外,本领域技术人员可以理解,上面各种方法的步骤划分,只是为了描述清楚,实现时可以合并为一个步骤或者对某些步骤进行拆分,分解为多个步骤,只要包括相同的逻辑关系,都在本专利的保护范围内;对算法中或者流程中添加无关紧要的修改或者引入无关紧要的设计,但不改变其算法和流程的核心设计都在该专利的保护范围内。
在一个实施例中,涉及一种视频质量评估装置200,如图10所示,包括获取模块201和评估模块202,各模块功能详细说明如下:
获取模块201,用于对视频集中各个视频进行分类;
评估模块202,用于将不同类别的视频输入至不同的预置模型中,利用预置模型获取视频的质量评估结果。
在一个例子中,视频包括第一类别视频,预置模型包括度量映射评估模型;本申请实施例提供的视频质量评估装置200还包括第一采集模块,其中,第一采集模块用于获取视频在视频链路上的传输特征数据;评估模块202还用于:将第一类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第一类别视频的第一得分,第一得分由度量映射评估模型根据传输特征数据评估后输出。
在一个例子中,本申请实施例提供的视频质量评估装置200还包括评估处 理模块,所述评估处理模块用于:在第一得分小于第一预期得分时,根据度量映射评估模型倒推并定位视频在视频链路上异常的传输特征数据,和/或,根据第一得分输出视频质量的预警信息。
在一个例子中,视频还包括第二类别视频,预置模型还包括端到端评估模型;本申请实施例提供的视频质量评估装置200还包括第二采集模块,其中,第二采集模块用于:在视频的视频链路的前端和后端分别设置至少一个采集点;通过采集点采集视频在前端的前端视频数据和在后端的后端视频数据;评估模块202还用于:将第二类别视频的前端视频数据和后端视频数据输入至端到端评估模型中,利用端到端评估模型获取第二类别视频的第二得分,第二得分由端到端评估模型在比较后端视频数据和前端视频数据之间的差别后输出。
在一个例子中,本申请实施例提供的视频质量评估装置200还包括复评模块,其中,复评模块用于:若第二得分低于第二预期得分,则将第二类别视频的传输特征数据输入至度量映射评估模型中,利用度量映射评估模型获取第二类别视频的第一得分,和/或,根据第二得分输出视频质量的预警信息。
在一个例子中,第二采集模块还用于在采集点通过旁路复制的方式采集前端视频数据和后端视频数据。
在一个例子中,获取模块201还用于依据功能场景、视频长度、接入并发数、接入类型和网络环境参数中的至少一种数据对视频集中各个视频进行分类。
在一个例子中,本申请实施例提供的视频质量评估装置200还包括抽取模块,其中,抽取模块用于:对视频集中各个视频添加标签和/或权重;根据标签和/或权重采用加权采样算法抽取视频集中的部分视频;评估模块202还用于:将部分视频中不同类别的视频输入至不同的预置模型中。
不难发现,本实施例为与前述方法的实施例相对应的装置实施例,本实施例可与前述方法的实施例互相配合实施。前述方法的实施例中提到的相关技术细节在本实施例中依然有效,为了减少重复,这里不再赘述。相应地,本实施例中提到的相关技术细节也可应用在前述方法的实施例中。
值得一提的是,本实施例中所涉及到的各模块均为逻辑模块,在实际应用中,一个逻辑单元可以是一个物理单元,也可以是一个物理单元的一部分,还可以以多个物理单元的组合实现。此外,为了突出本申请的创新部分,本实施 例中并没有将与解决本申请所提出的技术问题关系不太密切的单元引入,但这并不表明本实施例中不存在其它的单元。
在一个实施例中,涉及一种电子设备,如图11所示,包括:至少一个处理器301;以及,与至少一个处理器301通信连接的存储器302;其中,存储器302存储有可被至少一个处理器301执行的指令,指令被至少一个处理器301执行,以使至少一个处理器301能够执行上述的视频质量评估方法。
其中,存储器和处理器采用总线方式连接,总线可以包括任意数量的互联的总线和桥,总线将一个或多个处理器和存储器的各种电路连接在一起。总线还可以将诸如外围设备、稳压器和功率管理电路等之类的各种其他电路连接在一起,这些都是本领域所公知的,因此,本文不再对其进行进一步描述。总线接口在总线和收发机之间提供接口。收发机可以是一个元件,也可以是多个元件,比如多个接收器和发送器,提供用于在传输介质上与各种其他装置通信的单元。经处理器处理的数据通过天线在无线介质上进行传输,进一步,天线还接收数据并将数据传送给处理器。
处理器负责管理总线和通常的处理,还可以提供各种功能,包括定时,外围接口,电压调节、电源管理以及其他控制功能。而存储器可以被用于存储处理器在执行操作时所使用的数据。
在一个实施例中,涉及一种计算机可读存储介质,存储有计算机程序。计算机程序被处理器执行时实现上述方法实施例。
即,本领域技术人员可以理解,实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域的普通技术人员可以理解,上述各实施例是实现本申请的具体实施 例,而在实际应用中,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。

Claims (11)

  1. A video quality assessment method, comprising:
    classifying each video in a video set; and
    inputting videos of different categories into different preset models, and obtaining quality assessment results of the videos by using the preset models.
  2. The video quality assessment method according to claim 1, wherein the videos comprise videos of a first category, and the preset models comprise a metric mapping evaluation model;
    before the inputting videos of different categories into different preset models, the method further comprises:
    acquiring transmission feature data of the videos on a video link; and
    the inputting videos of different categories into different preset models, and obtaining quality assessment results of the videos by using the preset models comprises:
    inputting the transmission feature data of the videos of the first category into the metric mapping evaluation model, and obtaining a first score of the videos of the first category by using the metric mapping evaluation model, wherein the first score is output by the metric mapping evaluation model after evaluation according to the transmission feature data.
  3. The video quality assessment method according to claim 2, wherein after the obtaining a first score of the videos of the first category by using the metric mapping evaluation model, the method further comprises:
    if the first score is less than a first expected score, backtracking according to the metric mapping evaluation model to locate abnormal transmission feature data of the videos on the video link, and/or outputting early-warning information of video quality according to the first score.
  4. The video quality assessment method according to claim 2 or 3, wherein the videos further comprise videos of a second category, and the preset models further comprise an end-to-end evaluation model;
    before the inputting videos of different categories into different preset models, the method further comprises:
    setting at least one collection point at each of a front end and a back end of the video link of the videos; and
    collecting, through the collection points, front-end video data of the videos at the front end and back-end video data of the videos at the back end; and
    the inputting videos of different categories into different preset models, and obtaining quality assessment results of the videos by using the preset models further comprises:
    inputting the front-end video data and the back-end video data of the videos of the second category into the end-to-end evaluation model, and obtaining a second score of the videos of the second category by using the end-to-end evaluation model, wherein the second score is output by the end-to-end evaluation model after comparing a difference between the back-end video data and the front-end video data.
  5. The video quality assessment method according to claim 4, wherein after the obtaining a second score of the videos of the second category by using the end-to-end evaluation model, the method further comprises:
    if the second score is lower than a second expected score, inputting the transmission feature data of the videos of the second category into the metric mapping evaluation model, and obtaining a first score of the videos of the second category by using the metric mapping evaluation model, and/or outputting early-warning information of video quality according to the second score.
  6. The video quality assessment method according to claim 4 or 5, wherein the collecting, through the collection points, front-end video data of the videos at the front end and back-end video data of the videos at the back end comprises:
    collecting the front-end video data and the back-end video data at the collection points by means of bypass copying.
  7. The video quality assessment method according to any one of claims 1 to 6, wherein the classifying each video in a video set comprises:
    classifying each video in the video set according to at least one kind of data among functional scenario, video length, number of concurrent accesses, access type, and network environment parameters.
  8. The video quality assessment method according to any one of claims 1 to 7, wherein before the inputting videos of different categories into different preset models, the method further comprises:
    adding labels and/or weights to each video in the video set; and
    extracting some videos in the video set by using a weighted sampling algorithm according to the labels and/or the weights; and
    the inputting videos of different categories into different preset models comprises:
    inputting videos of different categories among the extracted videos into different preset models.
  9. A video quality assessment apparatus, comprising:
    an acquisition module, configured to classify each video in a video set; and
    an evaluation module, configured to input videos of different categories into different preset models, and obtain quality assessment results of the videos by using the preset models.
  10. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor is capable of performing the video quality assessment method according to any one of claims 1 to 8.
  11. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the video quality assessment method according to any one of claims 1 to 8.
PCT/CN2022/093999 2021-09-23 2022-05-19 Video quality assessment method and apparatus, electronic device, and storage medium WO2023045365A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22871426.7A EP4407984A1 (en) 2021-09-23 2022-05-19 Video quality evaluation method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111115460.5 2021-09-23
CN202111115460.5A CN115866235A (zh) Video quality assessment method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023045365A1 true WO2023045365A1 (zh) 2023-03-30

Family

ID=85653001

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093999 WO2023045365A1 (zh) Video quality assessment method and apparatus, electronic device, and storage medium

Country Status (3)

Country Link
EP (1) EP4407984A1 (zh)
CN (1) CN115866235A (zh)
WO (1) WO2023045365A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546191B (zh) * 2023-07-05 2023-09-29 杭州海康威视数字技术股份有限公司 视频链路质量检测方法、装置及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616315A (zh) * 2008-06-25 2009-12-30 华为技术有限公司 一种视频质量评价方法、装置和系统
US20150341667A1 (en) * 2012-12-21 2015-11-26 Thomson Licensing Video quality model, method for training a video quality model, and method for determining video quality using a video quality model
CN111212279A (zh) * 2018-11-21 2020-05-29 华为技术有限公司 一种视频质量的评估方法及装置
CN113840131A (zh) * 2020-06-08 2021-12-24 中国移动通信有限公司研究院 视频通话质量评估方法、装置、电子设备及可读存储介质


Also Published As

Publication number Publication date
EP4407984A1 (en) 2024-07-31
CN115866235A (zh) 2023-03-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871426

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18692801

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2022871426

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022871426

Country of ref document: EP

Effective date: 20240423