WO2023045365A1 - Video quality assessment method and apparatus, electronic device, and storage medium - Google Patents
Video quality assessment method and apparatus, electronic device, and storage medium
- Publication number
- WO2023045365A1 (PCT/CN2022/093999)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- videos
- score
- data
- category
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/004—Diagnosis, testing or measuring for television systems or their details for digital television systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
Definitions
- the embodiments of the present application relate to the technical field of communications, and in particular, to a video quality evaluation method, device, electronic equipment, and storage medium.
- 5G: fifth generation mobile communication technology
- 5G is a new generation of broadband mobile communication technology featuring high speed, low latency, and massive connectivity, which will lead the world from the era of the mobile Internet into the era of the mobile Internet of Things.
- video services will also involve a variety of scenarios, such as high-definition video calls, autonomous driving, and telemedicine.
- An automated video quality evaluation system is urgently needed to evaluate video quality and propose improvement measures for weak links or technical defects, so as to continuously improve the video service system and ensure that the operation quality of the system satisfies the increasingly strong quality demands of users for video services.
- An embodiment of the present application provides a video quality assessment method, including: classifying each video in a video set; and inputting videos of different categories into different preset models, and using the preset models to obtain quality assessment results of the videos.
- The embodiment of the present application also provides a video quality evaluation device, including: an acquisition module, used to classify each video in the video set; and an evaluation module, used to input videos of different categories into different preset models and to use the preset models to obtain quality evaluation results of the videos.
- The embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above video quality assessment method.
- The embodiment of the present application also provides a computer-readable storage medium storing a computer program, wherein the above video quality evaluation method is implemented when the computer program is executed by a processor.
- FIG. 1 is a schematic flow chart of a video quality assessment method provided in an embodiment of the present application
- FIG. 2 is a schematic diagram of the principle of weighted sampling by the video quality assessment method provided in the embodiment of the present application;
- FIG. 3 is another schematic flowchart of the video quality assessment method provided by the embodiment of the present application;
- FIG. 4 is a schematic diagram of the principle of the metric mapping evaluation model in the video quality evaluation method provided by the embodiment of the present application.
- FIG. 5 is another schematic flowchart of the video quality assessment method provided by the embodiment of the present application.
- FIG. 6 is an example diagram of video data collected by a video quality assessment method provided in an embodiment of the present application.
- FIG. 7 is a schematic diagram of the principle of the end-to-end evaluation model in the video quality evaluation method provided by the embodiment of the present application.
- FIG. 8 is a schematic diagram of the principle of a video quality assessment method provided in an embodiment of the present application.
- FIG. 9 is a schematic diagram of training, verification and testing of the preset model in the video quality assessment method provided by the embodiment of the present application.
- FIG. 10 is a schematic diagram of a module structure of a video quality assessment device provided in an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- The embodiment of the present application relates to a video quality assessment method.
- Videos of different categories are input into different preset models, and the different preset models are used to obtain video quality assessment results. Since a preset model is used to evaluate video quality, automatic evaluation can be realized with high efficiency, which is suitable for large-scale deployment. At the same time, by inputting different videos into different preset models and using those models to obtain the quality evaluation results, videos of different categories in different scenarios each receive a quality evaluation suited to their category, so that video quality evaluation can be adapted to videos in various scenarios and accurate evaluation results can be obtained for each of them.
- the execution body of the video quality assessment method provided in the embodiment of the present application may be a server, where the server may be implemented by a single server or a cluster composed of multiple servers.
- S101 Classify each video in the video set.
- Classifying each video in the video set may be performed according to at least one of the following data: functional scenario, video length, number of concurrent accesses, access type, and network environment parameters, thereby obtaining the category of each video.
- By functional scenario, videos can be divided into categories such as meeting, live broadcast, or on-demand;
- by video length, videos can be divided into categories such as long video and short video.
- the specific classification method can be carried out according to actual needs, and this embodiment of the application does not specifically limit it.
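The embodiment leaves the concrete classification logic open; purely as a non-authoritative sketch, a rule-based classifier over the metadata fields named above might look like the following, where all field names and thresholds are assumptions:

```python
def classify_video(meta):
    """Assign a coarse category from video metadata.

    `meta` is assumed to carry the attributes named in the embodiment
    (functional scenario, video length, etc.). The 300-second cutoff
    between short and long video is illustrative only.
    """
    if meta.get("scenario") in ("meeting", "live", "on-demand"):
        scenario = meta["scenario"]
    else:
        scenario = "other"
    length_class = "short" if meta.get("duration_s", 0) < 300 else "long"
    return f"{scenario}/{length_class}"

# Example: a 10-minute conference recording
category = classify_video({"scenario": "meeting", "duration_s": 600})
```

In a deployment, the returned category string would select which preset model the video is routed to.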
- S102 Input videos of different categories into different preset models, and use the preset models to obtain video quality evaluation results.
- Depending on which preset model a video is input into, the obtained video quality evaluation results may also differ.
- the embodiment of the present application does not specifically limit the type and quantity of the preset models.
- If every video in a massive video set were evaluated, the computing power of the server might be insufficient, and at the same time doing so would be of little significance.
- Before inputting videos of different categories into different preset models, the method may also include: adding labels and/or weights to each video in the video set, and extracting part of the videos in the video set using a weighted sampling algorithm according to the labels and/or weights. Inputting videos of different categories into different preset models then includes: inputting the videos of different categories among the extracted videos into different preset models. A schematic diagram of the principle of weighted sampling is shown in FIG. 2.
- Tags and/or weights can be added to videos according to the number of concurrent accesses, the network environment, and the usage scale. For example, if a video is in a 5G network environment, a 5G tag is added; if it is in a 4G network environment, a 4G tag is added; if the number of concurrent accesses is low, a lower weight is added.
- the specific way of adding tags and weights can be set according to actual needs, which is not specifically limited in this embodiment of the present application.
- The weighted sampling algorithm samples each video so that videos with high weights are sampled more and videos with low weights are sampled less, extracting representative videos for evaluation and reducing the system pressure brought by massive data.
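The patent does not prescribe a particular weighted sampling algorithm; a minimal sketch of one common choice, sequential weighted sampling without replacement, is:

```python
import random

def weighted_sample(videos, weights, k):
    """Draw up to k videos without replacement, favouring higher weights.

    Videos with larger weights are more likely to be selected, so
    representative videos (e.g. high-concurrency ones) dominate the
    evaluation set while the total evaluated volume stays bounded.
    """
    pool = list(zip(videos, weights))
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = random.uniform(0, total)
        acc = 0.0
        for i, (video, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(video)
                pool.pop(i)  # without replacement
                break
    return chosen
```

For large pools, an alias-table or reservoir-based sampler would be more efficient; this version favours readability.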
- The video quality assessment method obtains the category of each video in the video set, inputs videos of different categories into different preset models, and uses the different preset models to obtain video quality assessment results. Since a preset model is used to evaluate the video quality, automatic evaluation can be realized with high efficiency, which is suitable for large-scale deployment. At the same time, by obtaining the category of each video, different videos can be input into different preset models, so that videos of different categories in different scenarios receive quality evaluation results suited to their category; video quality evaluation can thus be adapted to videos in various scenarios, and accurate evaluation results can be obtained for each of them.
- the videos in the video set include videos of the first category
- the preset model includes a metric mapping evaluation model.
- The video quality assessment method also includes: acquiring transmission characteristic data of the videos on the video link. Inputting videos of different categories into different preset models and using the preset models to obtain video quality assessment results (S102) then includes: inputting the transmission characteristic data of the first category of videos into the metric mapping evaluation model, and using the metric mapping evaluation model to obtain the first score of the first category of videos, where the first score is output by the metric mapping evaluation model after evaluation based on the transmission characteristic data.
- FIG. 3 is another schematic flowchart of the video quality assessment method provided in the embodiment of the present application, which specifically includes the following steps:
- S101' Classify each video in the video set.
- S102' Acquire the transmission characteristic data of the videos on the video link. Transmission characteristic data refers to characteristic data related to video transmission on the video link, such as packet loss rate, frame loss rate, delay, or jitter.
- Transmission feature data that every video possesses can be selected.
- Generally, the more transmission feature data is input, the more accurate the quality evaluation result obtained by the metric mapping evaluation model.
- S103' Input the transmission feature data of the first category of videos into the metric mapping evaluation model, and use the metric mapping evaluation model to obtain the first score of the first category of videos; the first score is output by the metric mapping evaluation model after evaluation based on the transmission feature data.
- KPI: Key Performance Indicator
- KQI: Key Quality Indicator
- VMOS: Video Mean Opinion Score, the mean subjective opinion score of a video
- Video quality is generally evaluated by comparing the video watched by the user with the original video, and the quality is determined from the difference between the two. In some cases, however, the original video is difficult to obtain, making such a comparison impractical. For example, in a weak network environment it is difficult to obtain the original video; in that case, the transmission characteristic data of the video link can be acquired, and an evaluation score for the video quality can be obtained from the transmission characteristic data, thereby realizing quality evaluation of the video. Correspondingly, videos in a weak network environment can be classified into the first category of videos.
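The internals of the metric mapping evaluation model are not disclosed in the publication; purely as an illustration of mapping link-level transmission characteristic data to a VMOS-like score, a linear penalty model with made-up weights could look like:

```python
def metric_mapping_score(features):
    """Map transmission KPIs to a VMOS-like score clamped to [1, 5].

    All weights below are illustrative placeholders, not values from
    the patent; a deployed model would be fitted on labelled data
    (e.g. via regression or a small neural network).
    """
    base = 5.0
    penalty = (
        8.0 * features.get("packet_loss", 0.0)    # fraction in [0, 1]
        + 0.004 * features.get("delay_ms", 0.0)   # one-way delay
        + 0.01 * features.get("jitter_ms", 0.0)
        + 5.0 * features.get("frame_loss", 0.0)   # fraction in [0, 1]
    )
    return max(1.0, min(5.0, base - penalty))
```

A perfect link yields the maximum score, and heavy loss or delay drives the score toward the floor of 1.0.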
- After using the metric mapping evaluation model to obtain the first score of the first category of videos, the method further includes: if the first score is less than the first expected score, backtracking according to the metric mapping evaluation model to locate abnormal transmission characteristic data of the video on the video link, and/or outputting early warning information of the video quality according to the first score.
- the first expected score may be set according to actual needs, and no specific limitation is set here.
- Specifically, the evaluation can be traced back from the above-mentioned VMOS to the KQIs, then from the KQIs to the KPIs, and finally the abnormal transmission characteristic data can be located according to the KPIs, so as to perform the corresponding abnormality location.
- In this way, the abnormal transmission characteristic data of the video on the video link can be inferred, and specific problems can be located when the video quality is poor, so that targeted measures can be taken to improve the video quality. In addition, early warning information of the video quality can be output according to the first score, so that video users can know in advance that the video quality is about to deteriorate, improving the user experience and realizing a priori prediction of the video quality.
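The VMOS-to-KQI-to-KPI backtracking described above can be illustrated, in its simplest thresholding form, as follows; the threshold values are assumptions for illustration, not taken from the patent:

```python
def locate_abnormal_kpis(features, thresholds):
    """Trace a poor score back to the KPIs that exceed their limits.

    This is the simplest possible form of the backtracking step: flag
    every transmission characteristic that lies outside its acceptable
    range, so that a targeted fix (e.g. on packet loss) can be made.
    """
    return [name for name, limit in thresholds.items()
            if features.get(name, 0.0) > limit]

alerts = locate_abnormal_kpis(
    {"packet_loss": 0.08, "delay_ms": 120.0, "jitter_ms": 4.0},
    {"packet_loss": 0.02, "delay_ms": 150.0, "jitter_ms": 30.0},
)
# only packet_loss exceeds its limit in this example
```

A production system would more likely attribute the score drop via the model itself (e.g. per-feature contributions), but the thresholding view captures the KPI-location idea.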
- the videos in the video set also include a second category of videos
- the preset model also includes an end-to-end evaluation model
- Before inputting videos of different categories into different preset models (S102), the method also includes: setting at least one collection point at each of the front end and the back end of the video link, and collecting the front-end video data at the front end and the back-end video data at the back end through the collection points. Inputting videos of different categories into different preset models and using the preset models to obtain video quality evaluation results (S102) also includes: inputting the front-end video data and the back-end video data of the second category of videos into the end-to-end evaluation model, and using the end-to-end evaluation model to obtain a second score of the second category of videos, where the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
- FIG. 5 is another schematic flowchart of the video quality assessment method provided in the embodiment of the present application, which specifically includes the following steps:
- S102" Set at least one collection point at each of the front end and the back end of the video link, and collect the front-end video data at the front end and the back-end video data at the back end through the collection points.
- The front-end video data and the back-end video data of the video are collected through the collection points by means of bypass copying, so that the normal video link process is not affected, no additional burden is generated, and the user is not aware of it.
- FIG. 6 is an example diagram of video data collected by the video quality assessment method provided in the embodiment of the present application.
- collection points can be set before encoding, before transmission, after transmission, and after encoding of the video link, and the front-end video data and back-end video data can be obtained through bypass copying.
- S103" Input the front-end video data and back-end video data of the second category of videos into the end-to-end evaluation model, and use the end-to-end evaluation model to obtain the second score of the second category of videos; the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
- FIG. 7 is a schematic diagram of the principle of an end-to-end evaluation model in the video quality evaluation method provided in the embodiment of the present application.
- the distorted video in the figure is the back-end video data
- the reference video is the front-end video data.
- The reference video and the distorted video should be of the same type and from the same time period. They are relative concepts: the pair can be taken before encoding and after decoding, or before encoding and after encoding; as long as there is a difference (loss) between the two, they can be compared. That is, the front-end video data and the back-end video data may refer to video data before encoding and video data after decoding, respectively, or to video data before encoding and video data after encoding, respectively.
- When the original video can be obtained, quality evaluation can be performed by comparing the original video before and after transmission; correspondingly, such videos can be classified into the second category of videos.
- Video data is collected by setting collection points at the front end and the back end of the video link respectively, and the collected front-end video data and back-end video data are then input into the end-to-end evaluation model to obtain the quality evaluation result of the second category of videos, so that quality assessment of the second category of videos can be achieved.
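The patent does not fix the comparison metric used inside the end-to-end evaluation model; a common full-reference stand-in is frame-wise PSNR between the front-end (reference) and back-end (distorted) data, sketched here under the assumption that frames are equal-length flat lists of pixel values:

```python
import math

def psnr_score(reference_frames, distorted_frames, peak=255.0):
    """Mean PSNR between reference and distorted frame pairs.

    A simple full-reference quality measure: the smaller the difference
    between back-end and front-end data, the higher the score. Identical
    frames give infinite PSNR.
    """
    psnrs = []
    for ref, dist in zip(reference_frames, distorted_frames):
        mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
        if mse == 0:
            psnrs.append(float("inf"))  # no distortion at all
        else:
            psnrs.append(10 * math.log10(peak * peak / mse))
    return sum(psnrs) / len(psnrs)
```

Learned end-to-end models (as the figure suggests) would replace this hand-crafted metric with a trained comparator, but the input/output contract is the same: two video streams in, one score out.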
- After using the end-to-end evaluation model to obtain the second score of the second category of videos (S103"), the method also includes: if the second score is lower than the second expected score, inputting the transmission feature data of the second category of videos into the metric mapping evaluation model, using the metric mapping evaluation model to obtain the first score of the second category of videos, and/or outputting early warning information of the video quality according to the second score.
- The second expected score may be set according to actual needs, which is not specifically limited in this embodiment of the present application. It should be understood that when the transmission feature data of the videos on the video link is acquired (S102'), the transmission feature data of all videos in the video set (or, if sampling is used, of all sampled videos) is acquired, including both the first category and the second category of videos; therefore the transmission feature data of the second category of videos can be directly input into the metric mapping evaluation model to obtain the first score. Continuing to refer to FIG. 6, when the video data is collected through the collection points, the video transmission feature data (i.e., the measurement data in the figure) is collected at the same time.
- By inputting the transmission feature data of the second category of videos into the metric mapping evaluation model when the score is lower than expected, and using the metric mapping evaluation model to obtain the first score of the second category of videos, the video quality can be further evaluated along two dimensions. At the same time, since the metric mapping model obtains the score from the transmission characteristic data, the abnormal transmission characteristic data can be deduced from the score, realizing problem location for the video quality. In addition, outputting early warning information of the video quality according to the second score enables video users to know in advance that the video quality is about to deteriorate, improving the user experience and realizing a priori prediction of the video quality.
- FIG. 8 is a principle example diagram of the video quality assessment method provided in the embodiment of the present application.
- the video sets in the video service are uniformly classified, some videos are extracted after weighted sampling, and then link data is collected for the extracted part of the videos.
- For the first category of videos, the metric mapping evaluation model is used for video quality assessment.
- The end-to-end evaluation model is used for the video quality evaluation of the second category of videos; if the score obtained by the end-to-end evaluation model is lower than expected, the metric mapping evaluation model can be used for further evaluation from another dimension, and the abnormal transmission characteristic data can be analyzed, thereby realizing prediction of the corresponding video quality and location of the problem.
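The overall routing and re-evaluation flow of FIG. 8 can be sketched as follows; all function names, field names, and the expected-score default are illustrative assumptions, and the two models are passed in as callables so any implementation can be plugged in:

```python
def evaluate(video, category, e2e_model, metric_model, expected=4.0):
    """Route a video to the model matching its category.

    First-category videos (original unavailable) go straight to the
    metric mapping model. Second-category videos are scored end-to-end;
    if that score falls below expectation, they are re-evaluated on the
    metric-mapping dimension, enabling KPI-level problem location.
    """
    if category == "first":
        return {"metric_score": metric_model(video["features"])}
    score = e2e_model(video["frontend"], video["backend"])
    result = {"e2e_score": score}
    if score < expected:
        # second evaluation dimension for poorly scoring videos
        result["metric_score"] = metric_model(video["features"])
    return result
```

Usage with stub models: `evaluate(v, "second", e2e, metric)` returns only an end-to-end score when quality is acceptable, and both scores when re-evaluation is triggered.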
- FIG. 9 is an example diagram of preset model training, verification and testing in the video quality assessment method provided in the embodiment of the present application.
- The preset model can be trained by collecting corresponding data from the video set. After a certain amount of training, corresponding data can be collected to verify the accuracy of the model. Once the model reaches a certain accuracy, the preset model (that is, the quality evaluation model in the figure) is applied in testing, so as to obtain the corresponding quality evaluation results (such as the scores of 98, 32, and 76 for the videos in the figure).
- The embodiment of the present application relates to a video quality evaluation device 200, as shown in FIG. 10, including an acquisition module 201 and an evaluation module 202. The functions of each module are described in detail as follows:
- An acquisition module 201 configured to classify each video in the video collection
- the evaluation module 202 is configured to input videos of different categories into different preset models, and use the preset models to obtain video quality evaluation results.
- the video includes a first category video
- the preset model includes a metric mapping evaluation model
- The video quality assessment device 200 provided in the embodiment of the present application further includes a first acquisition module, wherein the first acquisition module is used to acquire the transmission feature data of the video on the video link.
- The evaluation module 202 is also used to: input the transmission feature data of the first category of videos into the metric mapping evaluation model, and use the metric mapping evaluation model to obtain the first score of the first category of videos, where the first score is output by the metric mapping evaluation model after evaluation based on the transmission characteristic data.
- The video quality assessment apparatus 200 provided in the embodiment of the present application further includes an assessment processing module, and the assessment processing module is used to: when the first score is less than the first expected score, deduce and locate the abnormal transmission characteristic data of the video on the video link, and/or output early warning information of video quality according to the first score.
- the video further includes a second category of video
- the preset model further includes an end-to-end evaluation model
- The video quality evaluation device 200 provided in the embodiment of the present application further includes a second acquisition module, wherein the second acquisition module is used to set at least one collection point at each of the front end and the back end of the video link, and to collect the front-end video data at the front end and the back-end video data at the back end through the collection points. The evaluation module 202 is also used to: input the front-end video data and the back-end video data of the second category of videos into the end-to-end evaluation model, and obtain the second score of the second category of videos by using the end-to-end evaluation model, where the second score is output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
- the video quality evaluation device 200 provided in the embodiment of the present application further includes a re-evaluation module, wherein the re-evaluation module is used to: if the second score is lower than the second expected score, input the transmission feature data of the second category video In the metric mapping evaluation model, the metric mapping evaluation model is used to obtain the first score of the second category video, and/or, output the early warning information of the video quality according to the second score.
- The second acquisition module is also used to collect the front-end video data and the back-end video data at the collection points by means of bypass copying.
- the obtaining module 201 is further configured to classify each video in the video set according to at least one data of functional scenarios, video length, concurrent access number, access type, and network environment parameters.
- the video quality assessment device 200 provided by the embodiment of the present application further includes an extraction module, wherein the extraction module is used to: add labels and/or weights to each video in the video set; use a weighted sampling algorithm according to the labels and/or weights Extract part of the videos in the video set; the evaluation module 202 is also used to: input videos of different categories in the part of videos into different preset models.
- this embodiment is an apparatus embodiment corresponding to the foregoing method embodiments, and this embodiment can be implemented in cooperation with the foregoing method embodiments.
- the relevant technical details mentioned in the foregoing method embodiments are still valid in this embodiment, and will not be repeated here in order to reduce repetition.
- the relevant technical details mentioned in this embodiment may also be applied to the foregoing method embodiments.
- modules involved in this embodiment are logical modules.
- A logical unit can be a physical unit, a part of a physical unit, or a combination of multiple physical units.
- units that are not closely related to solving the technical problems raised by the present application are not introduced in this embodiment, but this does not mean that there are no other units in this embodiment.
- The embodiment of the present application relates to an electronic device, as shown in FIG. 11, including: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301, wherein the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can execute the above video quality assessment method.
- the memory and the processor are connected by a bus
- the bus may include any number of interconnected buses and bridges, and the bus connects one or more processors and various circuits of the memory together.
- the bus may also connect together various other circuits such as peripherals, voltage regulators, and power management circuits, all of which are well known in the art and therefore will not be further described herein.
- the bus interface provides an interface between the bus and the transceivers.
- a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing means for communicating with various other devices over a transmission medium.
- the data processed by the processor is transmitted on the wireless medium through the antenna, further, the antenna also receives the data and transmits the data to the processor.
- The processor is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory, in turn, can be used to store data that the processor uses when performing operations.
- it relates to a computer readable storage medium storing a computer program.
- the above method embodiments are implemented when the computer program is executed by the processor.
- the storage medium includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
Claims (11)
- A video quality assessment method, comprising: classifying each video in a video set; and inputting videos of different categories into different preset models, and obtaining quality assessment results of the videos by using the preset models.
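As a minimal illustration of this classify-then-route flow (the category names, the `has_reference` field, and the dummy preset models below are invented for the sketch and are not specified by the application):

```python
# Hypothetical sketch of claim 1: classify each video in a set, then
# route each category to its own preset evaluation model.

def classify(video):
    # Placeholder rule; the application classifies by data such as
    # functional scenario, video length, or access type (claim 7).
    return "first" if not video.get("has_reference") else "second"

def assess(videos, preset_models):
    results = {}
    for video in videos:
        category = classify(video)
        model = preset_models[category]      # pick the preset model
        results[video["id"]] = model(video)  # quality assessment result
    return results

# Usage with two dummy "preset models":
models = {"first": lambda v: 4.2, "second": lambda v: 3.7}
videos = [{"id": "a", "has_reference": False},
          {"id": "b", "has_reference": True}]
print(assess(videos, models))  # {'a': 4.2, 'b': 3.7}
```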
- The video quality assessment method according to claim 1, wherein the videos comprise first-category videos, and the preset models comprise a metric mapping evaluation model; before the inputting of the videos of different categories into the different preset models, the method further comprises: acquiring transmission feature data of the videos on a video link; and the inputting of the videos of different categories into the different preset models and obtaining the quality assessment results of the videos by using the preset models comprises: inputting the transmission feature data of the first-category videos into the metric mapping evaluation model, and obtaining a first score of the first-category videos by using the metric mapping evaluation model, the first score being output by the metric mapping evaluation model after evaluation based on the transmission feature data.
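A metric mapping evaluation model of this kind maps link transmission feature data to a score; the sketch below uses a linear penalty model with invented feature names and weights, purely to illustrate the shape of such a mapping:

```python
# Hypothetical metric-mapping sketch: transmission feature data from the
# video link is mapped to a first score on a 1-5 scale.
FEATURE_WEIGHTS = {            # illustrative penalty per unit of impairment
    "packet_loss_pct": 0.8,
    "jitter_ms": 0.02,
    "delay_ms": 0.005,
}

def metric_mapping_score(features):
    penalty = sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
                  for name in FEATURE_WEIGHTS)
    return max(1.0, 5.0 - penalty)  # clamp at the lowest score

score = metric_mapping_score(
    {"packet_loss_pct": 2.0, "jitter_ms": 30, "delay_ms": 100})
# score ≈ 2.3 under these illustrative weights
```

In practice such a mapping would be fitted against reference quality scores rather than hand-set as here.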
- The video quality assessment method according to claim 2, wherein after the obtaining of the first score of the first-category videos by using the metric mapping evaluation model, the method further comprises: if the first score is lower than a first expected score, backtracking through the metric mapping evaluation model to locate abnormal transmission feature data of the videos on the video link, and/or outputting early-warning information on video quality according to the first score.
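The backtracking and early-warning step of claim 3 can be sketched as ranking each transmission feature by its contribution to the score penalty; the weights and the expected-score threshold are illustrative assumptions, not values from the application:

```python
# Hypothetical sketch of claim 3: when the first score falls below the
# expected score, locate the most suspect transmission feature and warn.
WEIGHTS = {"packet_loss_pct": 0.8, "jitter_ms": 0.02, "delay_ms": 0.005}

def locate_abnormal_feature(features, first_score, expected_score=4.0):
    if first_score >= expected_score:
        return None  # quality acceptable, nothing to locate
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    worst = max(contributions, key=contributions.get)
    print(f"warning: score {first_score} below expected {expected_score}; "
          f"suspect feature on the video link: {worst}")
    return worst

locate_abnormal_feature(
    {"packet_loss_pct": 2.0, "jitter_ms": 30, "delay_ms": 100}, 2.3)
# packet loss has the largest penalty contribution in this example
```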
- The video quality assessment method according to claim 2 or 3, wherein the videos further comprise second-category videos, and the preset models further comprise an end-to-end evaluation model; before the inputting of the videos of different categories into the different preset models, the method further comprises: setting at least one collection point at each of a front end and a back end of the video link of the videos; and collecting, through the collection points, front-end video data of the videos at the front end and back-end video data of the videos at the back end; and the inputting of the videos of different categories into the different preset models and obtaining the quality assessment results of the videos by using the preset models further comprises: inputting the front-end video data and the back-end video data of the second-category videos into the end-to-end evaluation model, and obtaining a second score of the second-category videos by using the end-to-end evaluation model, the second score being output by the end-to-end evaluation model after comparing the difference between the back-end video data and the front-end video data.
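An end-to-end evaluation of this kind is a full-reference comparison between what entered the link and what left it; the sketch below uses per-frame mean squared error and an invented score mapping as a stand-in for whatever comparison the application actually performs:

```python
# Hypothetical end-to-end sketch: compare frames collected at the front
# end and back end of the video link and map average distortion to a
# second score (the 1-5 scale and the 0.01 factor are assumptions).

def frame_mse(sent, received):
    # frames represented as flat lists of pixel values
    return sum((s - r) ** 2 for s, r in zip(sent, received)) / len(sent)

def end_to_end_score(front_frames, back_frames):
    avg = sum(frame_mse(f, b) for f, b in zip(front_frames, back_frames))
    avg /= len(front_frames)
    return max(1.0, 5.0 - 0.01 * avg)  # more distortion, lower score

front = [[100, 120, 130], [90, 95, 100]]   # collected at the front end
back  = [[100, 118, 131], [80, 95, 100]]   # collected at the back end
print(end_to_end_score(front, back))       # mild distortion, high score
```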
- The video quality assessment method according to claim 4, wherein after the obtaining of the second score of the second-category videos by using the end-to-end evaluation model, the method further comprises: if the second score is lower than a second expected score, inputting the transmission feature data of the second-category videos into the metric mapping evaluation model and obtaining a first score of the second-category videos by using the metric mapping evaluation model, and/or outputting early-warning information on video quality according to the second score.
- The video quality assessment method according to claim 4 or 5, wherein the collecting, through the collection points, of the front-end video data of the videos at the front end and the back-end video data of the videos at the back end comprises: collecting the front-end video data and the back-end video data at the collection points by means of bypass copying.
- The video quality assessment method according to any one of claims 1 to 6, wherein the classifying of each video in the video set comprises: classifying each video in the video set according to at least one of functional scenario, video length, number of concurrent accesses, access type, and network environment parameter data.
- The video quality assessment method according to any one of claims 1 to 7, wherein before the inputting of the videos of different categories into the different preset models, the method further comprises: adding labels and/or weights to each video in the video set; and extracting some videos in the video set by using a weighted sampling algorithm according to the labels and/or the weights; and the inputting of the videos of different categories into the different preset models comprises: inputting videos of different categories among the extracted videos into the different preset models.
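The label/weight extraction in claim 8 amounts to weighted sampling without replacement; a minimal sketch follows (the labels, weights, and subset size below are invented for illustration):

```python
import random

# Hypothetical sketch of claim 8: attach labels/weights to the videos in
# a set, then draw a subset with probability proportional to weight
# before feeding the subset to the preset models.
videos = [
    {"id": "v1", "label": "conference",   "weight": 5},
    {"id": "v2", "label": "surveillance", "weight": 1},
    {"id": "v3", "label": "live",         "weight": 3},
]

def weighted_sample(videos, k, seed=None):
    rng = random.Random(seed)  # seeded for reproducibility
    pool, chosen = list(videos), []
    for _ in range(min(k, len(pool))):
        weights = [v["weight"] for v in pool]
        pick = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(pick))  # remove so each video is drawn once
    return chosen

subset = weighted_sample(videos, k=2, seed=0)
print([v["id"] for v in subset])
```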
- A video quality assessment apparatus, comprising: an acquisition module configured to classify each video in a video set; and an evaluation module configured to input videos of different categories into different preset models and obtain quality assessment results of the videos by using the preset models.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the video quality assessment method according to any one of claims 1 to 8.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video quality assessment method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22871426.7A EP4407984A1 (en) | 2021-09-23 | 2022-05-19 | Video quality evaluation method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111115460.5 | 2021-09-23 | ||
CN202111115460.5A CN115866235A (zh) | 2021-09-23 | 2021-09-23 | Video quality evaluation method and apparatus, electronic device, and storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023045365A1 true WO2023045365A1 (zh) | 2023-03-30 |
Family
ID=85653001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/093999 WO2023045365A1 (zh) | Video quality evaluation method and apparatus, electronic device, and storage medium | 2021-09-23 | 2022-05-19 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4407984A1 (zh) |
CN (1) | CN115866235A (zh) |
WO (1) | WO2023045365A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116546191B (zh) * | 2023-07-05 | 2023-09-29 | Hangzhou Hikvision Digital Technology Co., Ltd. | Video link quality detection method, apparatus and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101616315A (zh) * | 2008-06-25 | 2009-12-30 | Huawei Technologies Co., Ltd. | Video quality evaluation method, apparatus and system |
US20150341667A1 (en) * | 2012-12-21 | 2015-11-26 | Thomson Licensing | Video quality model, method for training a video quality model, and method for determining video quality using a video quality model |
CN111212279A (zh) * | 2018-11-21 | 2020-05-29 | Huawei Technologies Co., Ltd. | Video quality evaluation method and apparatus |
CN113840131A (zh) * | 2020-06-08 | 2021-12-24 | China Mobile Communication Co., Ltd. Research Institute | Video call quality evaluation method and apparatus, electronic device, and readable storage medium |
-
2021
- 2021-09-23 CN CN202111115460.5A patent/CN115866235A/zh active Pending
-
2022
- 2022-05-19 EP EP22871426.7A patent/EP4407984A1/en active Pending
- 2022-05-19 WO PCT/CN2022/093999 patent/WO2023045365A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
EP4407984A1 (en) | 2024-07-31 |
CN115866235A (zh) | 2023-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109347668B (zh) | Training method and apparatus for a service quality evaluation model | |
CN111242171B (zh) | Model training, diagnosis and prediction method and apparatus for network faults, and electronic device | |
WO2018059402A1 (zh) | Method and apparatus for determining fault type | |
CN112784718B (zh) | Insulator state recognition method based on edge computing and deep learning | |
CN111181800B (zh) | Test data processing method and apparatus, electronic device, and storage medium | |
WO2022062968A1 (zh) | Self-training method, system and apparatus, electronic device, and storage medium | |
CN111107423A (zh) | Method and apparatus for identifying stuttering in video service playback | |
WO2009155814A1 (zh) | Video quality evaluation method, apparatus and system | |
WO2023045365A1 (zh) | Video quality evaluation method and apparatus, electronic device, and storage medium | |
WO2023051318A1 (zh) | Model training method, radio resource scheduling method and apparatus, and electronic device | |
CN103716187A (zh) | Network topology determination method and system | |
CN113225339A (zh) | Network security monitoring method and apparatus, computer device, and storage medium | |
EP3890312A1 (en) | Distributed image analysis method and system, and storage medium | |
CN112566170B (zh) | Network quality evaluation method and apparatus, server, and storage medium | |
CN112948262A (zh) | System testing method and apparatus, computer device, and storage medium | |
CN112672086A (zh) | Audio and video device data collection, analysis and early-warning system | |
CN114827951B (zh) | Vehicle network quality analysis method and system based on a vehicle terminal, and storage medium | |
CN117195785A (zh) | Bus verification method and verification IP core system | |
WO2023147731A1 (zh) | Abnormal data processing method and apparatus, and electronic device | |
CN112015726B (zh) | User activity prediction method and system, and readable storage medium | |
CN113840131B (zh) | Video call quality evaluation method and apparatus, electronic device, and readable storage medium | |
TWI510109B (zh) | Recursive abnormal network traffic detection method | |
CN110544182B (zh) | Convergence control method and system for power distribution communication networks based on machine learning | |
CN116915767B (zh) | Document transmission method and apparatus | |
CN116405587B (zh) | Intelligent monitoring method, system and medium for after-sales performance of mobile phones | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22871426 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18692801 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022871426 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022871426 Country of ref document: EP Effective date: 20240423 |