CN107635136A - No-reference stereo image quality assessment method based on visual perception and binocular competition - Google Patents


Info

Publication number
CN107635136A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201711003045.4A
Other languages
Chinese (zh)
Other versions
CN107635136B (en)
Inventor
刘利雄
张久发
王天舒
黄华
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Publication of CN107635136A
Application granted granted Critical
Publication of CN107635136B
Legal status: Active

Landscapes

  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a stereoscopic image quality assessment method, in particular to a no-reference stereoscopic image quality assessment method based on visual perception and binocular competition, belonging to the field of image analysis. The method first converts the input stereo image pair into grayscale information, applies a matching algorithm to the grayscale information to obtain a simulated disparity map and an uncertainty map for the stereo pair, and synthesizes a monocular image using the grayscale information, its filter responses, and disparity-based correction. Next, the resulting monocular image and uncertainty map are processed with difference-of-Gaussian filtering across different scale spaces and frequency spaces, and natural scene statistics and visual perception feature vectors are extracted. The features are then trained separately with a support vector machine and a BP neural network to obtain prediction models, which are applied to test images and their feature vectors to predict and assess quality. The method offers high consistency with subjective scores, high database independence, and high stability, performs competitively across a variety of complex distortion types, and can be embedded in application systems that process stereoscopic visual content such as stereoscopic images and video, giving it strong practical value.

Description

No-reference stereo image quality assessment method based on visual perception and binocular competition

Technical Field

The invention relates to a stereoscopic image quality assessment method, in particular to a no-reference stereoscopic image quality assessment method based on visual perception and binocular competition, and belongs to the field of image analysis.

Background Art

In recent years, with the development of science and technology, the cost of producing and disseminating stereoscopic images has fallen steadily, making stereoscopic images, as an excellent medium for conveying information, increasingly common and indispensable in daily life. However, stereoscopic images inevitably pick up distortion at every stage of scene acquisition, encoding, network transmission, decoding, post-processing, compressed storage, and display: for example, blur introduced during scene acquisition by device parameter settings or camera shake, and compression distortion introduced by compressed storage. The introduction of distortion greatly degrades the viewing experience and, in severe cases, can even harm viewers' physical and mental health. How to curb the dissemination of low-quality stereoscopic images and safeguard the viewing experience has therefore become a pressing problem.

Giving the media that produce and disseminate stereoscopic images the ability to automatically assess image quality, and thereby to improve the quality of images at the output end, is of great significance for solving this problem. Specifically, this research has the following application value:

(1) It can be embedded in practical application systems (such as video playback systems and network transmission systems) to monitor image/video quality in real time;

(2) It can be used to evaluate the merits of various stereoscopic image/video processing algorithms and tools (such as stereoscopic image compression encoders and image/video acquisition tools);

(3) It can be used for quality review of stereoscopic image/video works, preventing inferior image products from harming the physical and mental health of viewers.

In summary, research on objective no-reference stereoscopic image quality assessment models has important theoretical value and practical significance. The present invention proposes a no-reference stereoscopic image quality assessment method based on visual perception and binocular competition; the existing theories and techniques it draws on are the visual perception theory proposed by Kruger et al. and the visual perception feature extraction method proposed by Joshi et al.

(1) Visual Perception Theory

Kruger et al. proposed the visual perception theory; research on visual perception must first consider how the human retina perceives. Photoreceptor cells in the retina perform phototransduction, and the resulting signals travel along excitatory or inhibitory visual pathways. Studies have shown that low-pass filtering occurs in human retinal ganglion cells, and a prominent feature that emerges in this context is the retina's center-surround receptive field [40]. A center-surround receptive field generally takes the shape of concentric circles: the central region of the field is excited (or inhibited) by a light signal, while the surround is inhibited (or excited) by it. Such a receptive field can be modeled by a difference of Gaussians and resembles the Laplacian filter used for edge detection [41]. It therefore emphasizes spatial variation in luminance; it is also sensitive to temporal variation and thus forms a basis for motion processing. In addition, the human visual system contains separate but highly interconnected channels for processing different types of visual information (color, shape, motion, texture, stereoscopic information), which contributes to the efficiency and stability of visual representation. Under this perception mechanism the brain perceives the three-dimensional structure of a stereoscopic image through abundant depth information, of which binocular disparity is among the most important. Since multiple spatial frequencies may be present in the retina, simulating the center-surround receptive fields at these frequencies requires generating multiple standard deviation values and computing difference images with a difference-of-Gaussian operator.

(2) Visual Perception Feature Extraction

Building on their studies of visual perception and retinal perception, Joshi et al. proposed extracting the energy feature and the edge feature of an image as visual perception features.

The energy feature is computed from the image information entropy:

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l)$$

where H denotes the information entropy of the image, m denotes the number of gray levels in the image, and $p_l$ denotes the probability-related value of the occurrence of the l-th gray level.

The edge feature is computed by Canny edge detection:

$$E = EP(\mathrm{Canny}(I))$$

where Canny denotes edge detection of the image I with the Canny method, and EP expresses the qualifying edge pixels numerically.

Summary of the Invention

The purpose of the present invention is to solve the following problems in no-reference stereoscopic image quality assessment: simulation of the human visual perception system is imperfect, visual perception information in the image is under-exploited, and existing methods suffer from poor consistency with subjective scores, poor database independence, and poor algorithm stability. To this end, a no-reference stereoscopic image quality assessment method based on visual perception and binocular competition is proposed.

The method of the present invention is realized through the following technical scheme.

The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition comprises the following specific steps:

Step 1: Convert the input stereo image pair to be tested into grayscale information.

Step 2: Apply a matching algorithm to the grayscale information to obtain a simulated disparity map and an uncertainty map; at the same time, obtain the filter responses of the grayscale information with Gabor filtering.

Step 3: Synthesize a corrected monocular image using the grayscale information, its filter responses, and the simulated disparity map.

Step 4: Obtain difference-of-Gaussian images from the monocular image and the uncertainty map across different scale spaces and frequency spaces, and complete natural scene statistics and visual perception feature extraction.

The difference-of-Gaussian images are computed as follows:

$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \tag{1}$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^2} \tag{2}$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \tag{3}$$

where $I_{DoG_{ij}}$ denotes a difference-of-Gaussian image, $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ denote the images obtained by Gaussian filtering of the original image (the monocular image or the uncertainty map) with different convolution kernels, $\sigma_1^{ij}$ and $\sigma_2^{ij}$ denote the two different convolution kernels, $w_i$ and $h_i$ denote the width and height of the image to be processed at a given scale, $f_j$ denotes frequency, and i and j index a scale space and a frequency space, respectively.

The visual perception features are extracted as follows:

Extraction of the energy feature:

$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H\!\left(I_{DoG_{ij}}\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{4}$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \tag{5}$$

where H denotes the information entropy of the image, m denotes the number of gray levels in the image, and $p_l$ denotes the probability-related value of the occurrence of the l-th gray level.

Extraction of the edge feature:

$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP\!\left(\mathrm{Canny}\!\left(I_{DoG_{ij}}\right)\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{6}$$

where Canny denotes edge detection of the image with the Canny method, with the qualifying edge pixels expressed numerically by EP.

Step 5: Process every color stereo image pair in the database with the methods of Steps 1 to 4 and compute the quality feature vector corresponding to each stereo pair; then train a learning-based machine learning method on a training set and test it on a test set, mapping the quality feature vectors to the corresponding quality scores; finally, evaluate the merit of the algorithm with existing performance indices (SROCC, LCC, etc.).

Beneficial Effects

Compared with the prior art, the no-reference stereoscopic image quality assessment method based on visual perception and binocular competition proposed by the present invention offers high consistency with subjective scores, high database independence, and high algorithm stability; it can be used in coordination with application systems related to stereoscopic image/video processing and has strong practical value.

Brief Description of the Drawings

Figure 1 is a flowchart of the no-reference stereoscopic image quality assessment method based on visual perception and binocular competition of the present invention;

Figure 2 is a box plot of the present invention and other stereoscopic image quality assessment methods tested on the LIVE database.

Detailed Description of the Embodiments

The implementation of the method of the present invention is described in detail below in conjunction with the accompanying drawings and a specific embodiment.

Embodiment

The flow of the method is shown in Figure 1; the specific implementation process is as follows:

Step 1: Convert the input stereo image pair to be tested into grayscale information.

Step 2: Apply a matching algorithm to the grayscale information to obtain a simulated disparity map and an uncertainty map; at the same time, obtain the filter responses of the grayscale information with Gabor filtering.

The simulated disparity map is obtained by structural-similarity matching of the grayscale information of the left and right views, as sketched below.
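As an illustration, a minimal NumPy sketch of such SSIM-style block matching follows. The window size, the disparity search range, and the convention that a left-view pixel at x corresponds to x + d in the right view (matching equation (2) below) are our assumptions; the patent does not fix these details.

```python
import numpy as np

def ssim_patch(p, q, c1=0.01**2, c2=0.03**2):
    # SSIM between two equally sized patches (intensities scaled to [0, 1]).
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2))

def estimate_disparity(left, right, max_disp=16, win=7):
    # For each left-view window, search along the same row of the right
    # view for the shift d that maximizes SSIM (assumed search range).
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = -np.inf, 0
            for d in range(0, min(max_disp, w - 1 - r - x) + 1):
                cand = right[y - r:y + r + 1, x + d - r:x + d + r + 1]
                s = ssim_patch(ref, cand)
                if s > best:
                    best, best_d = s, d
            disp[y, x] = best_d
    return disp
```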

The uncertainty map is computed as follows:

$$\mathrm{Uncertainty}(l,r) = 1 - \frac{(2\mu_l\mu_r + C_1)(2\sigma_{lr} + C_2)}{(\mu_l^2 + \mu_r^2 + C_1)(\sigma_l^2 + \sigma_r^2 + C_2)} \tag{1}$$

where l denotes the left-view grayscale image, r denotes the right-view grayscale image after disparity compensation, μ and σ denote the mean and standard deviation of the corresponding grayscale image ($\sigma_{lr}$ their covariance), and $C_1$ and $C_2$ denote constant terms. Both the simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussian image processing and feature extraction.
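A windowed implementation of equation (1) might look as follows; the 7×7 uniform window and the SSIM-style constants are assumed values, as the patent leaves $C_1$ and $C_2$ unspecified:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def uncertainty_map(left, right_comp, c1=0.01**2, c2=0.03**2, win=7):
    # Equation (1): one minus a local SSIM-style similarity between the
    # left view and the disparity-compensated right view.
    mu_l = uniform_filter(left, win)
    mu_r = uniform_filter(right_comp, win)
    var_l = uniform_filter(left ** 2, win) - mu_l ** 2
    var_r = uniform_filter(right_comp ** 2, win) - mu_r ** 2
    cov = uniform_filter(left * right_comp, win) - mu_l * mu_r
    sim = ((2 * mu_l * mu_r + c1) * (2 * cov + c2)) / \
          ((mu_l ** 2 + mu_r ** 2 + c1) * (var_l + var_r + c2))
    return 1.0 - sim
```

Regions where binocular rivalry is likely (occlusions, asymmetric distortion) receive values near 1, while well-matched regions receive values near 0.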

Step 3: Synthesize a corrected monocular image using the grayscale information, its filter responses, and the simulated disparity map.

The monocular image is computed as follows:

$$CI(x,y) = W_l(x,y)\cdot I_l(x,y) + W_r((x+d),y)\cdot I_r((x+d),y) \tag{2}$$

$$W_l(x,y) = \frac{GE_l(x,y)}{GE_l(x,y) + GE_r((x+d),y)} \tag{3}$$

$$W_r((x+d),y) = \frac{GE_r((x+d),y)}{GE_l(x,y) + GE_r((x+d),y)} \tag{4}$$

where (x, y) is a pixel coordinate, $I_l$ and $I_r$ denote the grayscale images of the left and right views of the stereo pair, d denotes the disparity of the corresponding mapped pixels between the left and right views, CI denotes the synthesized monocular image, $W_l$ and $W_r$ denote the image information weights, and $GE_l$ and $GE_r$ denote the numerically expressed sums of the Gabor filter responses of the left and right views.
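The sketch below combines equations (2) to (4), using a small Gabor filter bank from scikit-image to stand in for the filter-response sums $GE_l$ and $GE_r$; the filter frequency, the number of orientations, and the clamping at image borders are our assumptions:

```python
import numpy as np
from skimage.filters import gabor

def gabor_energy(img, frequency=0.1, n_orient=4):
    # Sum of Gabor magnitude responses over several orientations,
    # standing in for GE in equations (3) and (4).
    energy = np.zeros_like(img, dtype=float)
    for k in range(n_orient):
        real, imag = gabor(img, frequency=frequency,
                           theta=k * np.pi / n_orient)
        energy += np.hypot(real, imag)
    return energy

def cyclopean_image(left, right, disp):
    # Equation (2): Gabor-energy-weighted sum of the left view and the
    # disparity-compensated right view; W_r = 1 - W_l by (3) and (4).
    h, w = left.shape
    ge_l, ge_r = gabor_energy(left), gabor_energy(right)
    ci = np.zeros_like(left, dtype=float)
    xs = np.arange(w)
    for y in range(h):
        xr = np.clip(xs + disp[y].astype(int), 0, w - 1)  # x + d, clamped
        w_l = ge_l[y, xs] / (ge_l[y, xs] + ge_r[y, xr] + 1e-12)
        ci[y] = w_l * left[y, xs] + (1.0 - w_l) * right[y, xr]
    return ci
```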

Step 4: Obtain difference-of-Gaussian images from the monocular image and the uncertainty map across different scale spaces and frequency spaces, and complete natural scene statistics and visual perception feature extraction.

The difference-of-Gaussian images are computed as follows:

$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \tag{5}$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^2} \tag{6}$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \tag{7}$$

where $I_{DoG_{ij}}$ denotes a difference-of-Gaussian image, $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ denote the images obtained by Gaussian filtering of the original image (the monocular image or the uncertainty map) with different convolution kernels, $\sigma_1^{ij}$ and $\sigma_2^{ij}$ denote the two different convolution kernels, $w_i$ and $h_i$ denote the width and height of the image to be processed at a given scale, $f_j$ denotes frequency, and i and j index a scale space and a frequency space, respectively.
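A direct transcription of equations (5) to (7) could look like the following; the constants α and L, the two-level half-resolution scale pyramid, and the frequency list are placeholders, since the patent does not fix their values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def dog_images(img, n_scales=2, freqs=(4, 8, 16), alpha=100.0, L=1.6):
    # Equations (5)-(7): one DoG image per scale i and frequency j.
    out = []
    cur = img.astype(float)
    for i in range(n_scales):
        h, w = cur.shape
        row = []
        for f in freqs:
            s1 = (w + h) / (alpha * f ** 2)    # sigma_1^{ij}, eq. (6)
            s2 = L * s1                        # sigma_2^{ij}, eq. (7)
            row.append(gaussian_filter(cur, s1) - gaussian_filter(cur, s2))
        out.append(row)
        cur = zoom(cur, 0.5)                   # next, coarser, scale
    return out
```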

The visual perception features are extracted as follows:

Extraction of the energy feature:

$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H\!\left(I_{DoG_{ij}}\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{8}$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \tag{9}$$

where H denotes the information entropy of the image, m denotes the number of gray levels in the image, and $p_l$ denotes the probability-related value of the occurrence of the l-th gray level.

Extraction of the edge feature:

$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP\!\left(\mathrm{Canny}\!\left(I_{DoG_{ij}}\right)\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{10}$$

where Canny denotes edge detection of the image with the Canny method, with the qualifying edge pixels expressed numerically by EP.
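Equations (8) to (10) pool both quantities over the frequency index j and sum over the scale index i with the $f_n \cdot \log(w_i \cdot h_i)$ normalization. A sketch under the reading that EP counts the Canny edge pixels:

```python
import numpy as np
from skimage.feature import canny

def entropy(img, m=256):
    # Shannon entropy over m gray levels, equation (9).
    hist, _ = np.histogram(img, bins=m)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def perception_features(dog, f_n):
    # dog: list over scales i of lists over frequencies j of DoG images,
    # as returned by dog_images() above.
    e1 = e2 = 0.0
    for row in dog:
        h_i, w_i = row[0].shape
        norm = f_n * np.log(w_i * h_i)
        e1 += sum(entropy(d) for d in row) / norm        # eq. (8)
        e2 += sum(canny(d).sum() for d in row) / norm    # eq. (10)
    return e1, e2
```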

Step 5: Process every color stereo image pair in the database with the methods of Steps 1 to 4 and compute the quality feature vector corresponding to each stereo pair; then train a learning-based machine learning method on a training set and test it on a test set, mapping the quality feature vectors to the corresponding quality scores; finally, evaluate the merit of the algorithm with existing performance indices (SROCC, LCC, etc.).
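A minimal sketch of this train/test protocol using support vector regression follows. Note that the experiments described below additionally require that training and test sets share no image content; this simplified random split does not enforce that grouping:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr, pearsonr

def evaluate(features, mos, n_trials=1000):
    # Repeated 80% train / 20% test trials; report median SROCC and LCC.
    srocc, lcc = [], []
    for seed in range(n_trials):
        x_tr, x_te, y_tr, y_te = train_test_split(
            features, mos, test_size=0.2, random_state=seed)
        pred = SVR(kernel='rbf').fit(x_tr, y_tr).predict(x_te)
        srocc.append(spearmanr(pred, y_te).correlation)
        lcc.append(pearsonr(pred, y_te)[0])
    return np.median(srocc), np.median(lcc)
```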

We ran our algorithm on three stereoscopic image quality assessment databases: LIVE Phase II, Waterloo IVC 3D Phase I, and Waterloo IVC 3D Phase II. The basic information of these databases is listed in Table 1. For comparison we selected six publicly available, well-performing quality assessment algorithms: four stereoscopic quality metrics built on 2D methods (PSNR, SSIM, MS-SSIM, and BRISQUE), one full-reference stereoscopic quality assessment method (C-FR), and one no-reference stereoscopic quality assessment method (C-NR). To eliminate the influence of the choice of training data and of randomness, we performed 1000 repeated trials of an 80% training / 20% testing split on each database; that is, 80% of the data was used for training and the remaining 20% for testing, with no content overlap between training and test data. Finally, the algorithms were evaluated with existing performance indices (the medians of SRCC, PCC, and RMSE over the 1000 trials); the experimental results are shown in Table 2.

Table 1. Basic information of the databases

Figure 2 shows that, across the test databases, the proposed algorithm not only exhibits better consistency with subjective scores and better stability than the other no-reference image quality assessment algorithms, but on the LIVE and TID2013 databases even outperforms the full-reference quality assessment methods.

Table 2. Algorithm performance comparison on the three databases

Claims (6)

1. A no-reference stereoscopic image quality assessment method based on visual perception and binocular competition, characterized by comprising the following specific steps:
Step 1: converting an input stereo image pair to be tested into grayscale information;
Step 2: further processing the grayscale information by applying a matching algorithm to obtain a simulated disparity map and an uncertainty map, and simultaneously obtaining the filter responses of the grayscale information by Gabor filtering;
Step 3: correcting and synthesizing a monocular image using the grayscale information, the filter responses of the grayscale information, and the simulated disparity map;
Step 4: obtaining difference-of-Gaussian images from the monocular image and the uncertainty map in different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction;
Step 5: processing each color stereo image pair in the database by the methods of Steps 1 to 4, and computing the quality feature vector corresponding to each group of stereo images; then training a learning-based machine learning method on a training set and testing on a test set, mapping the quality feature vectors to corresponding quality scores; and evaluating the merit of the algorithm with existing performance indices (SROCC, LCC, etc.).
2. The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition according to claim 1, wherein in Step 1 the grayscale information is obtained by transformation from the RGB color space.
3. The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition according to claim 1, wherein the simulated disparity map in Step 2 is obtained by structural-similarity matching of the grayscale information of the left and right views.
The uncertainty map in Step 2 is computed as follows:
$$\mathrm{Uncertainty}(l,r) = 1 - \frac{(2\mu_l\mu_r + C_1)(2\sigma_{lr} + C_2)}{(\mu_l^2 + \mu_r^2 + C_1)(\sigma_l^2 + \sigma_r^2 + C_2)} \tag{1}$$
where l denotes the left-view grayscale image, r denotes the right-view grayscale image after disparity compensation, μ and σ denote the mean and standard deviation of the corresponding grayscale image, and $C_1$ and $C_2$ denote constant terms; the simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussian image processing and feature extraction.
4. The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition according to claim 1, wherein the monocular image in Step 3 is computed as follows:
$$CI(x,y) = W_l(x,y)\cdot I_l(x,y) + W_r((x+d),y)\cdot I_r((x+d),y) \tag{2}$$
$$W_l(x,y) = \frac{GE_l(x,y)}{GE_l(x,y) + GE_r((x+d),y)} \tag{3}$$
$$W_r((x+d),y) = \frac{GE_r((x+d),y)}{GE_l(x,y) + GE_r((x+d),y)} \tag{4}$$
where (x, y) is a pixel coordinate, $I_l$ and $I_r$ denote the grayscale images of the left and right views of the stereo pair, d denotes the disparity of the corresponding mapped pixels between the left and right views, CI denotes the synthesized monocular image, $W_l$ and $W_r$ denote the image information weights, and $GE_l$ and $GE_r$ denote the numerically expressed sums of the filter responses of the left and right views.
5. The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition according to claim 1, wherein the difference-of-Gaussian images in Step 4 are computed as follows:
$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \tag{5}$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^2} \tag{6}$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \tag{7}$$
where $I_{DoG_{ij}}$ denotes a difference-of-Gaussian image, $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ denote the images obtained by Gaussian filtering of the original image (the monocular image or the uncertainty map) with different convolution kernels, $\sigma_1^{ij}$ and $\sigma_2^{ij}$ denote the two different convolution kernels, $w_i$ and $h_i$ denote the width and height of the image to be processed at a given scale, $f_j$ denotes frequency, and i and j index a scale space and a frequency space, respectively.
The visual perception features in Step 4 are extracted as follows:
Extraction of the energy feature:
$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H\!\left(I_{DoG_{ij}}\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{8}$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \tag{9}$$
where H denotes the information entropy of the image, m denotes the number of gray levels of the image, and $p_l$ denotes the probability-related value of the occurrence of the l-th gray level.
Extraction of the edge feature:
$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP\!\left(\mathrm{Canny}\!\left(I_{DoG_{ij}}\right)\right)}{f_n \cdot \log(w_i \cdot h_i)} \tag{10}$$
where Canny denotes edge detection of the image with the Canny method, with the qualifying edge pixels expressed numerically by EP.
6. The no-reference stereoscopic image quality assessment method based on visual perception and binocular competition according to claim 1, wherein the machine learning method in Step 5 includes methods such as support vector regression (SVR) and neural networks.
CN201711003045.4A 2017-09-27 2017-10-24 No-reference stereo image quality assessment method based on visual perception and binocular competition Active CN107635136B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710886018 2017-09-27
CN201710886018X 2017-09-27

Publications (2)

Publication Number Publication Date
CN107635136A true CN107635136A (en) 2018-01-26
CN107635136B CN107635136B (en) 2019-03-19

Family

ID=61106357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711003045.4A Active CN107635136B (en) 2017-09-27 2017-10-24 View-based access control model perception and binocular competition are without reference stereo image quality evaluation method

Country Status (1)

Country Link
CN (1) CN107635136B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MING-JUN CHEN et al.: "No-Reference Quality Assessment of Natural Stereopairs", IEEE Transactions on Image Processing *
SEUNGCHUL RYU et al.: "No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception", IEEE Transactions on Circuits and Systems for Video Technology *
WANG YING et al.: "New no-reference stereo image quality method for image communication", 2016 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257131A (en) * 2018-02-24 2018-07-06 南通大学 A 3D Image Quality Evaluation Method
CN108520510A (en) * 2018-03-19 2018-09-11 天津大学 A No-reference Stereo Image Quality Evaluation Method Based on Global and Local Analysis
CN108520510B (en) * 2018-03-19 2021-10-19 天津大学 A reference-free stereo image quality assessment method based on global and local analysis
CN108648186A (en) * 2018-05-11 2018-10-12 北京理工大学 Based on primary vision perception mechanism without with reference to stereo image quality evaluation method
CN108648186B (en) * 2018-05-11 2021-11-19 北京理工大学 No-reference stereo image quality evaluation method based on primary visual perception mechanism
CN109257593A (en) * 2018-10-12 2019-01-22 天津大学 Immersive VR quality evaluating method based on human eye visual perception process
CN109257593B (en) * 2018-10-12 2020-08-18 天津大学 Immersive virtual reality quality evaluation method based on human eye visual perception process
CN109325550A (en) * 2018-11-02 2019-02-12 武汉大学 A reference-free image quality assessment method based on image entropy
CN109325550B (en) * 2018-11-02 2020-07-10 武汉大学 A reference-free image quality assessment method based on image entropy
CN110517308A (en) * 2019-07-12 2019-11-29 重庆邮电大学 A No-reference Method for Asymmetric Distorted Stereo Image Quality Evaluation
CN110838120A (en) * 2019-11-18 2020-02-25 方玉明 A weighted quality evaluation method for asymmetric distorted 3D video based on spatiotemporal information
CN113269204A (en) * 2021-05-17 2021-08-17 山东大学 Color stability analysis method and system for color direct part marking image

Also Published As

Publication number Publication date
CN107635136B (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN107635136B (en) No-reference stereo image quality assessment method based on visual perception and binocular competition
CN105338343B (en) No-reference stereo image quality evaluation method based on binocular perception
CN107578404B (en) Objective evaluation method of full-reference stereo image quality based on visual salient feature extraction
CN101877143B (en) Three-dimensional scene reconstruction method of two-dimensional image group
CN105744256B (en) Objective quality evaluation method for stereo images based on graph visual saliency
CN106097327B (en) Objective quality evaluation method for stereo images combining manifold features and binocular characteristics
CN104036501B (en) Objective quality evaluation method for stereo images based on sparse representation
CN104658001B (en) Non-reference asymmetric distorted stereo image objective quality assessment method
CN101610425B (en) Method and device for evaluating stereo image quality
CN109523513B (en) Stereoscopic Image Quality Evaluation Method Based on Sparsely Reconstructed Color Fusion Image
CN108389192A (en) Stereo image comfort evaluation method based on convolutional neural networks
CN104394403B (en) Objective stereoscopic video quality evaluation method oriented to compression artifacts
Su et al. Color and depth priors in natural images
CN109831664B (en) A fast compressed stereoscopic video quality evaluation method based on deep learning
Xu et al. EPES: Point cloud quality modeling using elastic potential energy similarity
CN108520510B (en) A reference-free stereo image quality assessment method based on global and local analysis
CN101950422A (en) Singular value decomposition(SVD)-based image quality evaluation method
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN105654465A (en) Stereo image quality evaluation method through parallax compensation and inter-viewpoint filtering
CN106530282A (en) Spatial feature-based non-reference three-dimensional image quality objective assessment method
CN111882516B (en) An Image Quality Assessment Method Based on Visual Saliency and Deep Neural Networks
CN106651835A (en) Entropy-based double-viewpoint reference-free objective stereo-image quality evaluation method
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description
CN108648186B (en) No-reference stereo image quality evaluation method based on primary visual perception mechanism
CN108259893B (en) A virtual reality video quality evaluation method based on two-stream convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant