CN103841410B - Semi-reference video QoE objective evaluation method based on image feature information - Google Patents

Semi-reference video QoE objective evaluation method based on image feature information

Info

Publication number
CN103841410B
CN103841410B (application CN201410079834.6A)
Authority
CN
China
Prior art keywords
video
image
texture information
texture
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410079834.6A
Other languages
Chinese (zh)
Other versions
CN103841410A (en)
Inventor
李文璟
喻鹏
罗千
耿杨
嵇华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201410079834.6A priority Critical patent/CN103841410B/en
Publication of CN103841410A publication Critical patent/CN103841410A/en
Application granted granted Critical
Publication of CN103841410B publication Critical patent/CN103841410B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-reference video QoE objective evaluation method based on image feature information. The method includes: the operator side extracts a saliency information map and a texture information map for each frame of the original video, and compresses the saliency information map and the texture information map to obtain semi-reference data of the original video; the client side receives the semi-reference data of the original video and the impaired video from the operator side, extracts a saliency information map and a texture information map for each frame of the impaired video to obtain semi-reference data of the impaired video, computes the impairment of the impaired video from the semi-reference data of the original and impaired videos, and evaluates the subjective perceived quality (MOS) with a pre-trained neural network algorithm.

Description

Semi-reference video QoE objective evaluation method based on image feature information

Technical Field

The present invention relates to the field of communication technology, and in particular to a semi-reference video QoE objective evaluation method based on image feature information.

Background Art

With the spread of wireless networks and high-speed broadband access, real-time video services are developing rapidly. QoE (Quality of Experience) indicators reflect the service quality of real-time video services. An objective QoE quality evaluation method for real-time video services (also called a QoE objective evaluation method) evaluates the subjective score from specific objective service quality indicators. QoE objective evaluation methods can be divided into three categories according to how they use the original video data: full-reference (all original data required), semi-reference (part of the original data required) and no-reference (no original data required).

Most existing QoE objective evaluation methods are full-reference or no-reference. Full-reference methods give the most accurate evaluation results but are hard to apply; no-reference methods are easy to deploy but usually suit only specific impairment scenarios; semi-reference methods strike a good balance between the two, but mature schemes are lacking.

The problems with the prior art are: both full-reference and no-reference methods have limitations in practicability, and the prior art offers no horizontal comparison results testing evaluation accuracy.

Summary of the Invention

The technical problem to be solved by the present invention is that the prior art has limitations in practicability and lacks horizontal comparison results testing evaluation accuracy.

To this end, the present invention proposes a semi-reference video QoE objective evaluation method based on image feature information, the method comprising:

the operator side extracts a saliency information map and a texture information map for each frame of the original video, and compresses the saliency information map and the texture information map to obtain semi-reference data of the original video;

the client side receives the semi-reference data of the original video and the impaired video transmitted by the operator side, extracts a saliency information map and a texture information map for each frame of the impaired video to obtain semi-reference data of the impaired video, computes the impairment of the impaired video from the semi-reference data of the original and impaired videos, and evaluates the subjective perceived quality MOS with a pre-trained neural network algorithm, wherein the impaired video is the original video of the operator side after transmission over a lossy channel.

Wherein, the saliency information map comprises a temporal saliency information map and a spatial saliency information map.

Wherein, the saliency information map comprises intensity, color, orientation and skin tone components with different weights.

Wherein, the texture information map comprises a temporal texture information map and a spatial texture information map.

Wherein, the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation and superposition;

wherein the edge extraction comprises: extracting the edge information image of the current frame;

the morphological dilation comprises: performing morphological dilation on the edge information image of the current frame to obtain the processed edge information image;

the superposition comprises: superimposing the processed edge information image on the current frame to obtain the texture information map of the current frame.

Wherein, the texture information map of each frame comprises: the texture information map of each frame of the original video on the operator side, and the texture information map of each frame of the impaired video on the client side.

Wherein, the compression comprises:

decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;

building histograms of all high-frequency subbands;

fitting the histograms of all high-frequency subbands with the generalized Gaussian distribution GGD and computing the fitting error.

Wherein, the semi-reference data comprises: spatial saliency impairment, temporal saliency impairment, spatial texture impairment and temporal texture impairment.

Wherein, computing the impairment of the impaired video from the semi-reference data of the original and impaired videos comprises: computing the impairment of the impaired video through relative entropy from the semi-reference data of the original and impaired videos.

Compared with the prior art, the beneficial effects of the method provided by the present invention are: the real-time video QoE objective quality evaluation method provided by the present invention is insensitive to the type of video impairment, i.e., it obtains fairly accurate evaluation results for videos impaired for different reasons; the present invention is insensitive to the underlying transmission network, i.e., it can be used for objective quality evaluation of real-time video services in many practical scenarios (including local area networks, wide area networks, wireless environments, etc.); and the present invention is easy to deploy and implement: all module functions can be realized in software, and, where specific needs exist, hardware implementations can be considered to speed up processing.

Brief Description of the Drawings

To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative work.

Fig. 1 shows the flow chart of the semi-reference video QoE objective evaluation method based on image feature information;

Fig. 2 shows the results of evaluating the LVQ database in Embodiment 2.

Detailed Description

To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the protection scope of the present invention.

Embodiment 1:

This embodiment discloses a semi-reference video QoE objective evaluation method based on image feature information, the method comprising:

the operator side extracts a saliency information map and a texture information map for each frame of the original video, and compresses the saliency information map and the texture information map to obtain semi-reference data of the original video;

the client side receives the semi-reference data of the original video and the impaired video transmitted by the operator side, extracts a saliency information map and a texture information map for each frame of the impaired video to obtain semi-reference data of the impaired video, computes the impairment of the impaired video from the semi-reference data of the original and impaired videos, and evaluates the subjective perceived quality MOS with a pre-trained neural network algorithm, wherein the impaired video is the original video of the operator side after transmission over a lossy channel.

Wherein, the saliency information map comprises a temporal saliency information map and a spatial saliency information map.

Wherein, the saliency information map comprises intensity, color, orientation and skin tone components with different weights; here the weight of the skin tone component is set to 2 and the weights of the remaining components to 1, and the weights can also be adjusted in practice.

Wherein, the texture information map comprises a temporal texture information map and a spatial texture information map.

Wherein, the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation and superposition;

wherein the edge extraction comprises: extracting the edge information image of the current frame;

the morphological dilation comprises: performing morphological dilation on the edge information image of the current frame to obtain the processed edge information image;

the superposition comprises: superimposing the processed edge information image on the current frame to obtain the texture information map of the current frame.

Wherein, the texture information map of each frame comprises: the texture information map of each frame of the original video on the operator side, and the texture information map of each frame of the impaired video on the client side.

On the operator side, the compression comprises:

decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;

building histograms of all high-frequency subbands;

fitting the histograms of all high-frequency subbands with the generalized Gaussian distribution GGD and computing the fitting error.

Wherein, the semi-reference data comprises: spatial saliency impairment, temporal saliency impairment, spatial texture impairment and temporal texture impairment.

Wherein, computing the impairment of the impaired video from the semi-reference data of the original and impaired videos comprises: computing the impairment of the impaired video through relative entropy from the semi-reference data of the original and impaired videos.

Embodiment 2:

This embodiment discloses a semi-reference video QoE objective evaluation method based on image feature information. The method for semi-reference QoE objective quality evaluation of real-time video services comprises 11 steps, divided into an operator side (5 steps) and a client side (6 steps). The overall flow chart is shown in Fig. 1; each step is described below.

Operator side:

101) Extract the saliency information of each frame of the original video. Saliency describes the regions of an image that attract relatively more attention. First, saliency components are constructed from four aspects: intensity, color, orientation and skin tone; the four components are then merged into one saliency map with different weights. Before computing the skin tone saliency component, face recognition technology is applied to detect whether a portrait is actually present. The resulting saliency map assigns a saliency value to each pixel of the original image; the higher the value, the more the region containing that pixel attracts the human eye.

The specific computation is described below. First, a Gaussian pyramid with 9 scales is built for each input frame; the center scales are c ∈ {2, 3, 4} and the surround scales are s = c + δ, where δ ∈ {3, 4}. The across-scale difference ⊖ between two images of different scales is defined as interpolating the coarse-scale image to the fine scale and then subtracting pixel by pixel. Let r, g and b be the three primary color components of the original image; the intensity image is then I = (r + g + b)/3. In addition, broadly tuned color channels are defined as R = r - (g + b)/2, G = g - (r + b)/2, B = b - (r + g)/2 and Y = (r + g)/2 - |r - g|/2 - b. The above I, R, G, B and Y are all multi-scale. The intensity feature maps are defined as:

I(c, s) = |I(c) ⊖ I(s)|    (1)

The color feature maps are defined as:

RG(c, s) = |(R(c) - G(c)) ⊖ (G(s) - R(s))|    (2)

BY(c, s) = |(B(c) - Y(c)) ⊖ (Y(s) - B(s))|    (3)

Next, Gabor filters in the four orientations θ ∈ {0°, 45°, 90°, 135°} are applied to the intensity image I at each scale, giving Gabor pyramids O(σ, θ), and the orientation feature maps are defined as:

O(c, s, θ) = |O(c, θ) ⊖ O(s, θ)|    (4)

To obtain the saliency components, the above feature maps must be merged. The merge operation is defined as follows: two input images of different scales are both rescaled to scale 4 and then added pixel by pixel (across-scale addition ⊕). The intensity, color and orientation saliency components are then computed by formulas (5) to (7), where N(·) is the normalization operator:

Ī = ⊕_{c=2..4} ⊕_{s=c+3..c+4} N(I(c, s))    (5)

C̄ = ⊕_{c=2..4} ⊕_{s=c+3..c+4} [N(RG(c, s)) + N(BY(c, s))]    (6)

Ō = Σ_θ N(⊕_{c=2..4} ⊕_{s=c+3..c+4} N(O(c, s, θ)))    (7)

In addition, if a face is detected, the saliency component of skin pixels must additionally be computed; it assigns each pixel a weight according to how close the pixel in the original image is to skin tone. Finally, merging the saliency components according to the predefined weights W_i gives the single-frame saliency map:

S = Σ_i W_i · C_i    (8)

where C_i runs over the saliency components above. By computing the saliency of each frame of the video and weighting the original frame with the resulting saliency map, the weighted saliency information of the image is obtained. This step is given by formula (9), where p denotes the original video, S_p the saliency map of each frame extracted from the original video, the multiplication is performed pixel by pixel over the two images, i denotes a frame, F the total number of frames, and SWS the spatial saliency-weighted image (Saliency, Weighted, Spatial).

SWS_P = {SWS_P(i) | SWS_P(i) = S_p(i) · Original_Video(i), i ∈ F}    (9)
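
As an illustration of step 101) and the weighting of formula (9), the following is a minimal Python sketch covering only the intensity component of the saliency map; the color, orientation and skin tone components follow the same center-surround pattern, the normalization operator N(·) is omitted for brevity, and OpenCV/NumPy plus a random stand-in frame are assumptions rather than the patent's implementation.

import cv2
import numpy as np

def gaussian_pyramid(img, levels=9):
    """9-scale Gaussian pyramid (scale 0 is the original resolution)."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def center_surround(pyr, c, s):
    """Across-scale difference: interpolate the coarse scale s up to the
    center scale c and subtract pixel by pixel (the '⊖' of the text)."""
    coarse = cv2.resize(pyr[s], (pyr[c].shape[1], pyr[c].shape[0]),
                        interpolation=cv2.INTER_LINEAR)
    return np.abs(pyr[c] - coarse)

def intensity_saliency(frame_bgr):
    """Intensity conspicuity map, formula (1) accumulated as in formula (5),
    rescaled to the frame size and normalized to [0, 1]."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    pyr = gaussian_pyramid((r + g + b) / 3.0)        # I = (r + g + b) / 3
    acc = np.zeros_like(pyr[4])                      # merge at scale 4
    for c in (2, 3, 4):                              # center scales
        for delta in (3, 4):                         # surround s = c + delta
            fm = center_surround(pyr, c, c + delta)
            acc += cv2.resize(fm, (pyr[4].shape[1], pyr[4].shape[0]))
    sal = cv2.resize(acc, (frame_bgr.shape[1], frame_bgr.shape[0]))
    return sal / (sal.max() + 1e-9)

# Formula (9): weight the original frame by its saliency map, per pixel.
rng = np.random.default_rng(0)
frame = (rng.random((480, 640, 3)) * 255).astype(np.uint8)  # stand-in frame
S_p = intensity_saliency(frame)
SWS = frame.astype(np.float32) * S_p[..., None]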

102) Extract the texture information of each frame of the original video. Texture information evaluates how sensitive each region is to impairment, based on the edges in the image. This rests on the following assumption: impairment appearing in richly textured regions of an image is harder to notice than in regions of flat tone. In this scheme, a Laplacian-of-Gaussian filter is first used to extract the edge information from the original image; the edge image is then morphologically dilated and every pixel value inverted to obtain the texture map; finally the texture map is likewise used to weight the original image, giving the texture information of the image. The richly textured regions are all covered by the dilated edges, so image impairment in the smooth-toned parts can be evaluated with emphasis.

The specific computation is described below. The Laplacian-of-Gaussian filter used for edge extraction first applies Gaussian filtering to the image and then applies the Laplacian operator to obtain the edge image. The Gaussian filtering operation is given by formula (10).

L(x, y; t) = g(x, y; t) * f(x, y)    (10)

where L is the filtering result, f(x, y) is the pixel value of a video frame at (x, y), and g(x, y; t) is the Gaussian function described by formula (11), in which t is the filter scale.

g(x, y; t) = 1/(2πt) · e^(-(x² + y²)/(2t))    (11)

The Laplacian operator is given by formula (12).

∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)    (12)

The morphological dilation is given by formula (13), where A is the input image and B is a rectangular structuring element of width and height 8.

dilation(A, B) = {a + b | a ∈ A, b ∈ B}    (13)

In summary, the computation that extracts the single-frame texture map is given by formula (14).

By computing the texture of each frame of the video and weighting the original frame with the resulting texture map, the weighted texture information of the image is obtained, in which the regions most critical for impairment evaluation are highlighted and the remaining regions appear in dark tones. This step is given by formula (15), in which the texture map of each frame extracted from the original video appears as the weighting factor, and TWS denotes the spatial texture-weighted image (Texture, Weighted, Spatial).
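
A minimal sketch of the texture map of step 102) follows, assuming SciPy and NumPy; the binarization threshold, the filter scale and the random stand-in frame are illustrative choices that the text does not specify.

import numpy as np
from scipy import ndimage

def texture_map(gray, t=2.0, edge_thresh=0.02, struct_size=8):
    """Texture map of one frame: Laplacian-of-Gaussian edges (formulas
    (10)-(12)), morphological dilation with an 8x8 rectangle (formula (13)),
    then inversion so that smooth regions keep a high weight."""
    gray = gray.astype(np.float32) / 255.0
    log = ndimage.gaussian_laplace(gray, sigma=np.sqrt(t))   # LoG edge image
    edges = np.abs(log) > edge_thresh                        # binary edge map
    dilated = ndimage.binary_dilation(
        edges, structure=np.ones((struct_size, struct_size)))
    return 1.0 - dilated.astype(np.float32)  # textured areas are masked out

# Formula (15): weight the original frame by its texture map, per pixel.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)  # stand-in frame
TWS = frame.astype(np.float32) * texture_map(frame)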

103) This step takes as input the per-frame saliency and texture information computed in 101) and 102). When evaluating the QoE of real-time video services, both the spatial and the temporal impairment matter greatly to subjective perception. This scheme computes the corresponding spatial and temporal impairment values for the saliency information and the texture information separately. Specifically, the spatial evaluation uses the per-frame saliency and texture information, i.e., the direct results of 101) and 102), while the temporal evaluation uses the differences in saliency and texture information between adjacent frames. Therefore, for each video, impairment must be evaluated in a total of 4 aspects. This step is given by formulas (16) and (17); SWT denotes the temporal saliency-weighted image (Saliency, Weighted, Temporal), and TWT denotes the temporal texture-weighted image (Texture, Weighted, Temporal).

SWT_P = {SWT_P(i) | SWT_P(i) = abs(SWS_P(i) - SWS_P(i-1)), i ∈ F}    (16)

TWT_P = {TWT_P(i) | TWT_P(i) = abs(TWS_P(i) - TWS_P(i-1)), i ∈ F}    (17)
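
Formulas (16) and (17) are plain per-pixel frame differences; a short NumPy sketch with random stand-in maps:

import numpy as np

def temporal_impairment_maps(weighted_frames):
    """Formulas (16)/(17): absolute per-pixel difference between the
    weighted maps of adjacent frames, one map per frame from the second on."""
    return [np.abs(weighted_frames[i] - weighted_frames[i - 1])
            for i in range(1, len(weighted_frames))]

# Example with stand-in SWS maps (any per-frame weighted maps work the same).
sws = [np.random.rand(240, 320).astype(np.float32) for _ in range(5)]
swt = temporal_impairment_maps(sws)   # 4 temporal maps for 5 frames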

104) The spatial and temporal saliency and texture information obtained in 103) is far too voluminous to transmit directly over the network, so it must be compressed. Step 104) is the wavelet transform step. Each of the 4 kinds of information obtained for every frame of the original video is decomposed with a Steerable Pyramid of 3 scales and 2 orientations, giving a total of 6 different high-frequency subbands used to evaluate image impairment. Histograms of the high-frequency subbands are then built, giving the description P of the original video. This step is given by formula (18), where Wavelet denotes the wavelet transform just described and Hist the histogram of each high-frequency subband.

P = Hist(Wavelet(SWS_P, TWS_P, SWT_P, TWT_P))    (18)
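
The text specifies a Steerable Pyramid with 3 scales and 2 orientations; the sketch below substitutes an ordinary separable wavelet decomposition from PyWavelets purely to show the subband-histogram idea of formula (18), so the transform (which yields 3 orientation bands per level rather than 2), the wavelet name and the bin count are all assumptions.

import numpy as np
import pywt

def subband_histograms(image, wavelet="db2", levels=3, bins=64):
    """Decompose one weighted map into high-frequency subbands and build a
    normalized histogram per subband (the Hist(Wavelet(...)) of formula (18))."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    hists = []
    for detail_level in coeffs[1:]:            # skip the approximation band
        for band in detail_level:              # horizontal/vertical/diagonal
            h, _ = np.histogram(band.ravel(), bins=bins, density=True)
            hists.append(h)
    return hists

image = np.random.rand(240, 320).astype(np.float32)   # stand-in weighted map
P = subband_histograms(image)   # one histogram per high-frequency subband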

105) This step fits each high-frequency subband histogram with a generalized Gaussian distribution (GGD). The GGD determines its curve shape from just the two parameters α and β and fits high-frequency subband histograms well; its definition is given by formulas (19) and (20). In addition, the relative entropy (also known as the KL divergence) is used to compute the error ε incurred when the fitted curve approximates the high-frequency subband histogram. Specifically, the approximate histogram description P_m obtained from the GGD fit is given by formula (21), and the relative entropy ε of P_m with respect to P is then computed by formula (22). Finally, the transmission parameters for each high-frequency subband comprise α, β and ε, where α takes an 11-bit floating-point value (8-bit mantissa, 3-bit exponent), β takes 8 bits and ε takes 8 bits, 27 bits in total. For each video frame, the 4 impairment aspects each correspond to 6 high-frequency subbands, so a total of 27 × 4 × 6 = 648 bits must be transmitted per frame; if the video runs at 30 FPS, the total bandwidth occupied is 648 × 30 / 8 = 2430 B/s, i.e., about 2.43 KB/s. This is entirely acceptable for semi-reference video QoE objective quality evaluation. The semi-reference data of the original video must be transmitted over a lossless auxiliary channel.

p(x) = β / (2αΓ(1/β)) · e^(-(|x|/α)^β)    (19)

Γ(z) = ∫₀^∞ t^(z-1) e^(-t) dt    (20)

P_m = GGD_Fitting(P)    (21)

ε = D_KL(P_m || P) = Σ_i ln(P_m(i)/P(i)) · P_m(i)    (22)
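
A sketch of the GGD fit and the fitting error ε of formulas (19)-(22), assuming SciPy; maximum-likelihood fitting via Nelder-Mead is one reasonable choice, since the text does not specify the fitting procedure.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density, formula (19)."""
    return beta / (2 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def fit_ggd(samples):
    """Fit (alpha, beta) by maximum likelihood (formula (21) in spirit)."""
    def nll(p):
        a, b = p
        if a <= 0 or b <= 0:
            return np.inf
        return -np.sum(np.log(ggd_pdf(samples, a, b) + 1e-12))
    res = minimize(nll, x0=[samples.std() + 1e-6, 2.0], method="Nelder-Mead")
    return res.x

def kl_divergence(pm, p):
    """Relative entropy D_KL(P_m || P), formula (22)."""
    mask = (pm > 0) & (p > 0)
    return np.sum(pm[mask] * np.log(pm[mask] / p[mask]))

# Fit one subband's samples and measure the fitting error epsilon.
rng = np.random.default_rng(1)
samples = rng.laplace(scale=0.5, size=10000)   # Laplace = GGD with beta = 1
alpha, beta = fit_ggd(samples)
edges = np.linspace(samples.min(), samples.max(), 65)
P, _ = np.histogram(samples, bins=edges, density=True)
centers = (edges[:-1] + edges[1:]) / 2
Pm = ggd_pdf(centers, alpha, beta)
P, Pm = P / P.sum(), Pm / Pm.sum()             # normalize to distributions
epsilon = kl_divergence(Pm, P)                 # error sent as side information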

Client side:

106) The same as 101), except that the input is the impaired video, from which its saliency maps are computed. This step is given by formula (23), where Q denotes the impaired video and the saliency information of each frame is extracted from the impaired video.

107) The same as 102), except that the input is the impaired video, from which its texture maps are computed. This step is given by formula (24), in which the texture information of each frame is extracted from the impaired video.

108) The same as 103), except that the input is the data of 106) and 107). This step is given by formulas (25) and (26).

109) The same as 104), except that the input is the data of 108). This step is given by formula (27).

110) This step takes the data of 109) and 105) as input. First, the histogram of the high-frequency subband corresponding to each frame is rebuilt from the α and β values in the received semi-reference data; then the concrete impairment value is computed by formula (28), where P denotes the histograms corresponding to the original video, Q the histograms corresponding to the impaired video, P_m the approximate histograms obtained by the GGD fit, and ε = D_KL(P_m || P). Finally, for each video, 4 impairment values are computed, i.e., the Distortion of formula (28), corresponding respectively to the spatial saliency-weighted impairment, the spatial texture-weighted impairment, the temporal saliency-weighted impairment and the temporal texture-weighted impairment.

Distortion(P, Q) = D_KL(P_m || Q) - ε = Σ_i ln(P(i)/Q(i)) · P_m(i)    (28)
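
Formula (28) reduces to a single sum once the GGD histogram P_m has been rebuilt from the received α and β; a NumPy sketch with random stand-in histograms:

import numpy as np

def distortion(p, q, pm):
    """Formula (28): Distortion(P, Q) = D_KL(P_m || Q) - epsilon
                                      = sum_i ln(P(i)/Q(i)) * P_m(i),
    where epsilon = D_KL(P_m || P) arrived with the semi-reference data."""
    mask = (p > 0) & (q > 0) & (pm > 0)
    return np.sum(np.log(p[mask] / q[mask]) * pm[mask])

# Stand-in histograms for one subband (original P, impaired Q, GGD fit P_m).
rng = np.random.default_rng(2)
p = rng.random(64); p /= p.sum()
q = rng.random(64); q /= q.sum()
pm = rng.random(64); pm /= pm.sum()
d = distortion(p, q, pm)   # one of the 4 impairment values per video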

111) Finally, from the video impairment computed in 110), the subjective perceived quality (MOS) is evaluated with a pre-trained neural network algorithm. This step is given by formula (29).
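
The text does not specify the network architecture; the sketch below uses scikit-learn's MLPRegressor as one plausible stand-in, with random stand-in training data purely to show the fit/predict flow of step 111).

import numpy as np
from sklearn.neural_network import MLPRegressor

# Inputs: the 4 impairment values per video (spatial/temporal x saliency/texture).
# Targets: MOS from a subjectively rated training set. All values here are
# random stand-ins, not real data.
rng = np.random.default_rng(3)
X_train = rng.random((150, 4))          # 4 distortion features per video
y_train = 100 * rng.random(150)         # MOS on a 0-100 scale

mos_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mos_model.fit(X_train, y_train)          # pre-training, done offline

x_new = rng.random((1, 4))               # distortions of a newly received video
predicted_mos = mos_model.predict(x_new)[0]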

The beneficial effects of the embodiments of the present invention are:

Considering that the human eye, when watching video, attends to only part of the video image rather than the whole, video impairment appearing in key regions affects subjective perception particularly strongly. Accordingly, this scheme combines the human visual model and derives the saliency information of the image from 4 aspects: color, intensity, orientation and skin tone. When deciding whether skin tone needs to be considered, face recognition technology is first applied to judge whether a portrait is present in the image, and the skin tone saliency component is additionally computed only when a portrait exists; this guarantees the accuracy of the saliency computation and thereby improves the accuracy of impairment evaluation.

The edges present in an image can be used to judge how sensitive the image is to impairment. Specifically, where the edges of a region are dense, i.e., the texture is complex, impairment is hard for the human eye to notice; conversely, impairment appearing in a smooth-toned region is easily caught by the human eye. Accordingly, this scheme extracts the image edge information and then covers the pixels near the edges by morphological dilation, so that impairment appearing in smooth pixel regions can be evaluated with emphasis, improving the accuracy of impairment evaluation.

To make the evaluation method semi-reference, data compression is achieved with a wavelet transform combined with probability distribution fitting, and the concrete impairment value is computed through relative entropy. This method ensures evaluation accuracy while occupying only a very small amount of extra transmission bandwidth.

Embodiment 3:

The accuracy of the evaluation of the present invention was tested on the public video subjective quality database LIVE Video Quality Database (LVQD); the test results are shown in Fig. 2. LVQD contains 10 groups of videos; each group contains 1 original (lossless) video and 15 impaired videos constructed from H.264 compression impairment, MPEG-2 compression impairment, IP transmission impairment and wireless transmission impairment, so the whole database contains 150 impaired videos. Each impaired video was scored subjectively according to the ITU-R BT.500-11 standard. Subjective scoring used a single-stimulus procedure, with raters giving scores on a continuous scale from 0 to 100. A total of 38 raters were invited; 9 sets of scores were judged invalid according to the standard and removed. Processing the remaining 29 sets of scores according to the standard yielded the MOS and score variance for the 150 impaired videos.

This method was used to perform objective QoE quality evaluation on the impaired videos of LVQD; the results are shown in Fig. 2. Tables 1 and 2 show the comparison with other mainstream full-reference objective QoE quality evaluation methods: Table 1 lists the Pearson correlation coefficients and Table 2 the Spearman correlation coefficients. Although this patent proposes a semi-reference method, which is at a disadvantage when compared with full-reference methods that can use all the original video data, the actual comparison results show that the method remains strongly competitive in evaluation accuracy.
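
The two table metrics are the standard correlations between predicted scores and MOS; a sketch with random stand-in arrays, assuming SciPy:

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Stand-in arrays: predicted scores from the method and subjective MOS for
# the 150 LVQD videos (a real evaluation would use the database's ratings).
rng = np.random.default_rng(4)
mos = 100 * rng.random(150)
predicted = mos + rng.normal(scale=10, size=150)   # noisy predictions

plcc, _ = pearsonr(predicted, mos)     # Table 1 metric
srocc, _ = spearmanr(predicted, mos)   # Table 2 metric
print(f"Pearson: {plcc:.3f}  Spearman: {srocc:.3f}")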

Table 1. LVQD evaluation results: Pearson correlation coefficient

Table 2. LVQD evaluation results: Spearman correlation coefficient

Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (7)

1. A semi-reference video QoE objective evaluation method based on image feature information, characterized in that the method comprises:
the operator side extracts a saliency information map and a texture information map for each frame of the original video, and compresses the saliency information map and the texture information map to obtain semi-reference data of the original video;
the client side receives the semi-reference data of the original video and the impaired video transmitted by the operator side, extracts a saliency information map and a texture information map for each frame of the impaired video to obtain semi-reference data of the impaired video, computes the impairment of the impaired video from the semi-reference data of the original and impaired videos, and evaluates the subjective perceived quality MOS with a pre-trained neural network algorithm, wherein the impaired video is the original video of the operator side after transmission over a lossy channel;
the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation and superposition;
wherein the edge extraction comprises: extracting the edge information image of the current frame;
the morphological dilation comprises: performing morphological dilation on the edge information image of the current frame to obtain the processed edge information image;
the superposition comprises: superimposing the processed edge information image on the current frame to obtain the texture information map of the current frame;
the computing the impairment of the impaired video from the semi-reference data of the original and impaired videos comprises:
rebuilding the histogram of the high-frequency subband corresponding to each frame from the α and β values in the received semi-reference data;
computing the concrete impairment value by formula (28), where P denotes each histogram corresponding to the original video, Q each histogram corresponding to the impaired video, P_m each approximate histogram obtained by the GGD fit, and ε = D_KL(P_m || P) = Σ_i ln(P_m(i)/P(i)) · P_m(i);
for each video, computing 4 impairment values, i.e., the Distortion of formula (28), corresponding respectively to the spatial saliency-weighted impairment, the spatial texture-weighted impairment, the temporal saliency-weighted impairment and the temporal texture-weighted impairment;
Distortion(P, Q) = D_KL(P_m || Q) - ε = Σ_i ln(P(i)/Q(i)) · P_m(i)    (28)
2. The method according to claim 1, characterized in that the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
3. The method according to claim 1, characterized in that the saliency information map comprises intensity, color, orientation and skin tone components with different weights.
4. The method according to claim 1, characterized in that the texture information map comprises a temporal texture information map and a spatial texture information map.
5. The method according to claim 1, characterized in that the texture information map of each frame comprises: the texture information map of each frame of the original video on the operator side, and the texture information map of each frame of the impaired video on the client side.
6. The method according to claim 1 or 2, characterized in that the compression comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;
building histograms of all high-frequency subbands;
fitting the histograms of all high-frequency subbands with the generalized Gaussian distribution GGD and computing the fitting error.
7. The method according to claim 1, characterized in that the semi-reference data comprises: spatial saliency impairment, temporal saliency impairment, spatial texture impairment and temporal texture impairment.
CN201410079834.6A 2014-03-05 2014-03-05 Semi-reference video QoE objective evaluation method based on image feature information Expired - Fee Related CN103841410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410079834.6A CN103841410B (en) 2014-03-05 2014-03-05 Semi-reference video QoE objective evaluation method based on image feature information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410079834.6A CN103841410B (en) 2014-03-05 2014-03-05 Semi-reference video QoE objective evaluation method based on image feature information

Publications (2)

Publication Number Publication Date
CN103841410A CN103841410A (en) 2014-06-04
CN103841410B true CN103841410B (en) 2016-05-04

Family

ID=50804492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410079834.6A Expired - Fee Related CN103841410B (en) 2014-03-05 2014-03-05 Semi-reference video QoE objective evaluation method based on image feature information

Country Status (1)

Country Link
CN (1) CN103841410B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104113788B (en) * 2014-07-09 2017-09-19 北京邮电大学 Method and system for QoE training and evaluation of TCP video stream service
CN107657251A (en) * 2016-07-26 2018-02-02 阿里巴巴集团控股有限公司 Determine the device and method of identity document display surface, image-recognizing method
CN106651829B * 2016-09-23 2019-10-08 中国传媒大学 A no-reference image objective quality evaluation method based on energy and texture analysis
CN109801266A * 2018-12-27 2019-05-24 西南技术物理研究所 An image quality evaluation system for a wireless image data link
CN110324613B (en) * 2019-07-30 2021-06-01 华南理工大学 A deep learning image evaluation method for video transmission quality
CN110599468A (en) * 2019-08-30 2019-12-20 中国信息通信研究院 No-reference video quality evaluation method and device
CN111242936A (en) * 2020-01-17 2020-06-05 苏州瓴图智能科技有限公司 Non-contact palm herpes detection device and method based on image
WO2022155818A1 (en) * 2021-01-20 2022-07-28 京东方科技集团股份有限公司 Image encoding method and device, image decoding method and device, and codec
CN113011270A (en) * 2021-02-23 2021-06-22 中国矿业大学 Coal mining machine cutting state identification method based on vibration signals

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 A Method of Video Quality Evaluation Based on Artificial Neural Network
CN101426150A (en) * 2008-12-08 2009-05-06 青岛海信电子产业控股股份有限公司 Video image quality evaluation method and system
CN101448175A (en) * 2008-12-25 2009-06-03 华东师范大学 Method for evaluating quality of streaming video without reference
CN102496162A * 2011-12-21 2012-06-13 浙江大学 Reduced-reference image quality evaluation method based on non-tensor-product wavelet filters
CN103281555A * 2013-04-24 2013-09-04 北京邮电大学 Semi-reference-based QoE objective assessment method for video streaming services

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7668397B2 (en) * 2005-03-25 2010-02-23 Algolith Inc. Apparatus and method for objective assessment of DCT-coded video quality with or without an original video sequence
KR100731358B1 (en) * 2005-11-09 2007-06-21 삼성전자주식회사 Image quality evaluation method and system
KR20080029371A (en) * 2006-09-29 2008-04-03 광운대학교 산학협력단 Image Quality Evaluation System and Method
KR101033296B1 (en) * 2009-03-30 2011-05-09 한국전자통신연구원 Apparatus and method for extracting and comparing spatiotemporal feature information in broadcasting communication system
JP2011186715A * 2010-03-08 2011-09-22 Nk Works Kk Photographic image evaluation method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Reduced-Reference Image Quality Assessment Using A Wavelet-Domain Natural Image Statistic Model; Zhou Wang, Eero P. Simoncelli; Human Vision and Electronic Imaging X, Proc. SPIE; 2005-01-20; vol. 5666; sections 1-2 *
Research on wireless video quality of experience (QoE) methods based on multiple feature types; Yang Yan; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-01-15; pp. 1-2, 5, 7, 20-22 *
Research on visual-saliency-based objective quality assessment of images and video under network packet loss; Feng Xin; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2011-12-15; pp. 9, 19-22 *

Also Published As

Publication number Publication date
CN103841410A (en) 2014-06-04

Similar Documents

Publication Publication Date Title
CN103841410B (en) Based on half reference video QoE objective evaluation method of image feature information
Gu et al. Multiscale natural scene statistical analysis for no-reference quality evaluation of DIBR-synthesized views
Wang et al. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model
Li et al. Content-partitioned structural similarity index for image quality assessment
CN104023230B A no-reference image quality evaluation method based on gradient correlation
CN106127741B No-reference image quality evaluation method based on an improved natural scene statistical model
CN108830823B Full-reference image quality evaluation method based on combined spatial-domain and frequency-domain analysis
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
CN109978854B (en) An image quality assessment method for screen content based on edge and structural features
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN103366378B No-reference image quality evaluation method based on conditional-histogram shape consistency
CN111612741B Accurate no-reference image quality evaluation method based on distortion recognition
JP2006505853A (en) Method for generating quality-oriented importance map for evaluating image or video quality
CN103426173B (en) Objective evaluation method for stereo image quality
CN107146220B A universal no-reference image quality evaluation method
CN104851098A Objective evaluation method for three-dimensional image quality based on improved structural similarity
CN111489333B (en) No-reference night natural image quality evaluation method
CN112508800A (en) Attention mechanism-based highlight removing method for surface of metal part with single gray image
Li et al. Recent advances and challenges in video quality assessment
Sadiq et al. Blind image quality assessment using natural scene statistics of stationary wavelet transform
CN106934770A A method and apparatus for evaluating the defogging effect of haze images
CN106960433A A full-reference sonar image quality evaluation method based on image entropy and edges
Patil et al. Survey on image quality assessment techniques
CN102497576B (en) Full-reference image quality assessment method based on mutual information of Gabor features (MIGF)
CN104394405B A full-reference objective image quality evaluation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160504

CF01 Termination of patent right due to non-payment of annual fee