CN107635136B - No-reference stereo image quality evaluation method based on visual perception and binocular competition - Google Patents

No-reference stereo image quality evaluation method based on visual perception and binocular competition

Info

Publication number
CN107635136B
CN107635136B CN201711003045.4A
Authority
CN
China
Prior art keywords
image
quality evaluation
visual perception
gray
evaluation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711003045.4A
Other languages
Chinese (zh)
Other versions
CN107635136A (en
Inventor
刘利雄
张久发
王天舒
黄华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Publication of CN107635136A publication Critical patent/CN107635136A/en
Application granted granted Critical
Publication of CN107635136B publication Critical patent/CN107635136B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a stereo image quality evaluation method, in particular to a no-reference stereo image quality evaluation method based on visual perception and binocular competition, and belongs to the field of image analysis. The method first converts the input stereo image pair into grayscale information, applies a matching algorithm to the grayscale information to obtain a simulated disparity map and an uncertainty map of the stereo pair, and at the same time synthesizes a monocular image from the grayscale information, its filter responses and the simulated disparity map. Next, the monocular image and the uncertainty map are processed with a difference of Gaussians over different scale spaces and frequency spaces, and natural scene statistics and visual perception feature vectors are extracted. The features are then trained with a support vector machine and a BP neural network to obtain prediction models, and the prediction models are applied to the feature vectors of test images for quality prediction and assessment. The method offers high subjective consistency, high database independence and high stability, remains highly competitive across various complex distortion types, and can be embedded into stereoscopic-vision application systems such as stereo image/video processing, giving it strong practical value.

Description

No-reference stereo image quality evaluation method based on visual perception and binocular competition
Technical Field
The invention relates to a stereo image quality evaluation method, in particular to a no-reference stereo image quality evaluation method based on visual perception and binocular competition, and belongs to the field of image analysis.
Background
In recent years, with the development of science and technology, the cost of generating and transmitting stereoscopic images has become lower and lower, making stereoscopic images, as an excellent medium for information transmission, increasingly popular and indispensable in our daily life. However, distortion is inevitably introduced into stereo images at every stage of scene acquisition, encoding, network transmission, decoding, post-processing, compression storage and projection; examples include blur distortion caused by device parameter settings, lens shake and other factors during scene acquisition, and compression distortion caused by image compression and storage. Such distortion greatly degrades people's visual experience and can seriously affect their physical and mental health. How to restrain the spread of low-quality stereo images and safeguard people's visual experience has therefore become a problem that urgently needs to be solved.
Endowing the media that generate and transmit stereo images with the ability to automatically evaluate image quality, and thereby improve the quality at their output, is of great significance for solving this problem. In particular, such research has the following application values:
(1) it can be embedded into practical application systems (such as video projection systems and network transmission systems) to monitor image/video quality in real time;
(2) it can be used to evaluate the strengths and weaknesses of various stereo image/video processing algorithms and tools (such as stereo image compression coding and image/video acquisition tools);
(3) it can be used for quality auditing of stereo image/video works, preventing poor-quality image products from harming the physical and mental health of audiences.
In conclusion, research on objective no-reference stereo image quality evaluation models has important theoretical value and practical significance. The invention provides a no-reference stereo image quality evaluation method based on visual perception and binocular competition; the prior theory and techniques it draws on are the visual perception theory proposed by Kruger et al. and the visual perception feature extraction method proposed by Joshi et al.
(I) Theory of visual perception
Kruger et al. proposed the theory of visual perception, whose study begins with perception in the human retina. Photoreceptor cells in the retina perform phototransduction, and the resulting signals are transmitted along excitatory or inhibitory visual pathways. Studies have shown that low-pass filtering occurs in human retinal ganglion cells, and one prominent feature arising in this context is the center-surround receptive field of the retina [40]. The center-surround receptive field is generally concentric: the central region of the receptive field is excited (or inhibited) by the received light signal, while the surrounding region is inhibited (or excited) by it. This receptive field can be modeled by a difference of Gaussians and is similar to the Laplacian filter used for edge detection [41]. It therefore emphasizes spatial variations of luminance; moreover, such a receptive field is also sensitive to temporal variations and thus forms the basis of motion processing. In addition, the human visual system contains separate yet highly interconnected channels that handle different types of visual information (color, shape, motion, texture, stereo information), which contributes to the efficiency and stability of visual information representation. Under such a visual perception mechanism, the brain perceives the three-dimensional features of a stereoscopic image through a large amount of depth information, of which binocular parallax is one of the most important cues. Considering that multiple spatial frequencies may be present in the retina, simulating the center-surround receptive field at these frequencies requires generating multiple standard deviation values and computing difference images with a difference-of-Gaussians operator.
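For illustration only (this sketch is not part of the patent; the sigma values and the surround/center ratio L are assumed), the center-surround receptive field described above can be simulated in Python with a difference of Gaussians at several standard deviations:

import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_responses(gray, sigmas=(0.5, 1.0, 2.0, 4.0), L=1.6):
    # Difference-of-Gaussians responses of a gray image for several center sigmas.
    # sigmas and the surround/center ratio L are illustrative values, not from the patent.
    gray = gray.astype(np.float64)
    responses = []
    for s in sigmas:
        center = gaussian_filter(gray, sigma=s)        # narrow (center) Gaussian: excitatory
        surround = gaussian_filter(gray, sigma=L * s)  # wider (surround) Gaussian: inhibitory
        responses.append(center - surround)            # center-surround (DoG) response
    return responses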
(II) Visual perception feature extraction
Building on research into visual perception, retinal perception and related problems, Joshi et al. proposed extracting the energy features and edge features of an image as visual perception features.
The energy features are extracted with the following formula:
where H represents the information entropy of the image, m represents the number of gray levels of the image, and p_l represents the probability-related value of the occurrence of the l-th gray level.
The edge features are extracted with the following formula:
where Canny denotes that edge detection is performed on the image with the Canny method, and the edge pixels satisfying the conditions are represented numerically.
Disclosure of Invention
The invention aims to address problems in no-reference stereo image quality evaluation such as imperfect simulation of the human visual perception system, insufficient use of the visual perception information in an image, poor subjective consistency, poor database independence and poor algorithm stability, and provides a no-reference stereo image quality evaluation method based on visual perception and binocular competition.
The method is realized by the following technical scheme.
The no-reference stereo image quality evaluation method based on visual perception and binocular competition comprises the following specific steps:
Step one, converting an input stereo image pair to be tested into gray information.
Step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and simultaneously obtaining the filter responses of the gray information using Gabor filtering.
Step three, correcting and synthesizing the monocular image using the gray information, its filter responses and the simulated disparity map.
Step four, obtaining difference-of-Gaussians images of the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction.
The difference-of-Gaussians image is calculated as follows:
DoG_ij^f(w,h) = I^(σ1_ij)(w,h) - I^(σ2_ij)(w,h), with σ2_ij = L·σ1_ij (3)
where DoG_ij^f denotes the difference-of-Gaussians image, I^(σ1_ij) and I^(σ2_ij) respectively denote the images obtained by Gaussian filtering of the original image (monocular image or uncertainty map) with different convolution kernels, σ1_ij and σ2_ij respectively denote two different convolution kernels, w and h denote the width and height of the image to be processed at a given scale, f denotes frequency, and i and j index the scale space and the frequency space respectively.
The visual perception features are extracted as follows:
Extracting the energy feature:
where H represents the information entropy of the image, m represents the number of gray levels of the image, and p_l represents the probability-related value of the occurrence of the l-th gray level.
Extracting the edge feature:
where Canny denotes that edge detection is performed on the image with the Canny method, and the edge pixels satisfying the conditions are represented numerically.
Step five, processing each color stereo image pair in the database with the methods of steps one to four, and calculating the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a test set, and mapping the quality feature vectors to corresponding quality scores; and evaluating the algorithm with existing algorithm performance indexes (SROCC, LCC, etc.).
Advantageous effects
Compared with the prior art, the no-reference stereo image quality evaluation method based on visual perception and binocular competition has the characteristics of high subjective consistency, high database independence and high algorithm stability; the method can be used together with application systems related to stereo image/video processing, and has strong application value.
Drawings
FIG. 1 is a flow chart of the no-reference stereo image quality evaluation method based on visual perception and binocular competition according to the present invention;
FIG. 2 is a box plot of tests performed on the LIVE database by the present invention and other stereoscopic image quality evaluation methods.
Detailed Description
The following detailed description of embodiments of the method of the present invention will be made with reference to the accompanying drawings and specific examples.
Examples
The flow of the method is shown in figure 1, and the specific implementation process is as follows:
Step one, converting an input stereo image pair to be tested into gray information.
Step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and simultaneously obtaining the filter responses of the gray information using Gabor filtering.
The simulated disparity map is obtained by matching the structural similarity of the gray information of the left and right views.
The uncertainty map is calculated as follows:
where l denotes the left-view gray image, r denotes the right-view gray image after disparity compensation, μ and σ denote the mean and standard deviation of the corresponding gray image respectively, and C1 and C2 denote constant terms. The simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussians image processing and feature extraction.
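As an illustration only (the exact matching formula is not reproduced in this text), the following sketch estimates a simulated disparity map by maximizing a local SSIM score over horizontal shifts of the right view and takes the uncertainty as one minus that best score; the window size, disparity range, shift convention and the constants C1 and C2 are assumed values:

import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(a, b, win=7, C1=6.5025, C2=58.5225):
    # Pixel-wise SSIM between two gray images of equal size (constants assume an 8-bit range).
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = uniform_filter(a, win), uniform_filter(b, win)
    var_a = uniform_filter(a * a, win) - mu_a ** 2
    var_b = uniform_filter(b * b, win) - mu_b ** 2
    cov = uniform_filter(a * b, win) - mu_a * mu_b
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))

def disparity_and_uncertainty(left, right, max_disp=24):
    # Best-matching horizontal shift per pixel, and 1 - best SSIM as an assumed uncertainty.
    h, w = left.shape
    best_ssim = np.full((h, w), -np.inf)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)   # shift the right view by d pixels (sign convention simplified)
        s = ssim_map(left, shifted)
        better = s > best_ssim
        disparity[better] = d
        best_ssim[better] = s[better]
    return disparity, 1.0 - best_ssim         # low similarity -> high matching uncertainty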
Step three, correcting and synthesizing the monocular image using the gray information, its filter responses and the simulated disparity map.
The monocular image is calculated as follows:
CI(x,y) = W_l(x,y)·I_l(x,y) + W_r(x+d,y)·I_r(x+d,y) (2)
where (x,y) is a pixel coordinate, I_l and I_r respectively denote the gray images of the left and right views of the stereo pair, d denotes the disparity of the corresponding mapped pixel between the left and right views, CI denotes the synthesized monocular image, W_l and W_r denote the image information weights, and GE_l and GE_r denote the sums of the filter responses of the left and right views expressed in numerical form.
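A minimal sketch of equation (2) is given below; the Gabor bank (frequency, number of orientations) and the normalization of the weights W_l and W_r from the Gabor energies GE_l and GE_r are assumptions made for illustration, since the text only states that the weights are derived from the summed filter responses:

import numpy as np
from skimage.filters import gabor

def gabor_energy(gray, frequency=0.1, n_orientations=4):
    # Sum of Gabor magnitude responses over several orientations (illustrative bank).
    energy = np.zeros_like(gray, dtype=np.float64)
    for k in range(n_orientations):
        real, imag = gabor(gray, frequency=frequency, theta=k * np.pi / n_orientations)
        energy += np.hypot(real, imag)
    return energy

def synthesize_monocular(left, right, disparity):
    # CI(x,y) = W_l(x,y)*I_l(x,y) + W_r(x+d,y)*I_r(x+d,y), with assumed weight normalization.
    h, w = left.shape
    ge_l = gabor_energy(left)
    ge_r = gabor_energy(right)
    xs = np.tile(np.arange(w), (h, 1))
    ys = np.tile(np.arange(h)[:, None], (1, w))
    xs_d = np.clip(xs + disparity, 0, w - 1)      # x + d, clipped at the image border
    ge_r_c = ge_r[ys, xs_d]                       # disparity-compensated right-view energy
    w_l = ge_l / (ge_l + ge_r_c + 1e-12)          # assumed normalization: W_l + W_r = 1
    return w_l * left + (1.0 - w_l) * right[ys, xs_d]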
Step four, obtaining difference-of-Gaussians images of the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction.
The difference-of-Gaussians image is calculated as follows:
DoG_ij^f(w,h) = I^(σ1_ij)(w,h) - I^(σ2_ij)(w,h), with σ2_ij = L·σ1_ij (7)
where DoG_ij^f denotes the difference-of-Gaussians image, I^(σ1_ij) and I^(σ2_ij) respectively denote the images obtained by Gaussian filtering of the original image (monocular image or uncertainty map) with different convolution kernels, σ1_ij and σ2_ij respectively denote two different convolution kernels, w and h denote the width and height of the image to be processed at a given scale, f denotes frequency, and i and j index the scale space and the frequency space respectively.
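The sketch below illustrates equation (7) over a grid of scale spaces and frequency spaces; the number of scales, the base sigmas and the ratio L are assumed values, since they are not fixed in the text above:

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def dog_images(image, n_scales=3, base_sigmas=(0.5, 1.0, 2.0), L=1.5):
    # Returns {(scale i, frequency j): DoG image} for a monocular image or uncertainty map.
    dogs = {}
    current = image.astype(np.float64)
    for i in range(n_scales):
        for j, s1 in enumerate(base_sigmas):
            s2 = L * s1                              # sigma2_ij = L * sigma1_ij
            dogs[(i, j)] = gaussian_filter(current, s1) - gaussian_filter(current, s2)
        current = zoom(current, 0.5, order=1)        # move to the next (coarser) scale space
    return dogs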
The visual perception features are extracted as follows:
Extracting the energy feature:
where H represents the information entropy of the image, m represents the number of gray levels of the image, and p_l represents the probability-related value of the occurrence of the l-th gray level.
Extracting the edge feature:
where Canny denotes that edge detection is performed on the image with the Canny method, and the edge pixels satisfying the conditions are represented numerically.
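For illustration, a sketch of the two visual perception features follows; it assumes 256 gray levels, the standard Shannon-entropy form of the energy feature, default Canny thresholds and the fraction of edge pixels as the numerical edge summary, none of which are fixed by the text above:

import numpy as np
from skimage.feature import canny

def energy_feature(image, m=256):
    # Information entropy H over m gray levels (assumed Shannon form: -sum p_l * log2 p_l).
    hist, _ = np.histogram(image, bins=m)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def edge_feature(image):
    # Fraction of pixels marked as edges by the Canny detector (assumed numerical summary).
    edges = canny(image.astype(np.float64))
    return float(edges.mean())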
Step five, processing each color stereo image pair in the database with the methods of steps one to four, and calculating the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a test set, and mapping the quality feature vectors to corresponding quality scores; and evaluating the algorithm with existing algorithm performance indexes (SROCC, LCC, etc.).
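A minimal sketch of step five under assumed SVR hyper-parameters: the quality feature vectors are mapped to quality scores with support vector regression and evaluated with the median SROCC/PCC over repeated random 80%/20% splits (a plain random split is used here for brevity; the experiments below additionally keep the content of the training and testing data disjoint):

import numpy as np
from scipy.stats import spearmanr, pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def evaluate(features, mos, n_repeats=1000):
    # features: (n_images, n_features) quality feature vectors; mos: subjective quality scores.
    srocc, pcc = [], []
    for seed in range(n_repeats):
        x_tr, x_te, y_tr, y_te = train_test_split(
            features, mos, test_size=0.2, random_state=seed)   # 80% train / 20% test
        model = SVR(kernel='rbf', C=100, gamma='scale').fit(x_tr, y_tr)
        pred = model.predict(x_te)
        srocc.append(spearmanr(pred, y_te)[0])
        pcc.append(pearsonr(pred, y_te)[0])
    return float(np.median(srocc)), float(np.median(pcc))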
We implemented our algorithm on three stereo image quality assessment databases: LIVE Phase II, Waterloo IVC 3D Phase I and Waterloo IVC 3D Phase II. The basic information of these databases is listed in Table 1. Meanwhile, six published quality evaluation algorithms with excellent performance were selected for comparison with the method: four 2D-based image quality evaluation algorithms (PSNR, SSIM, MS-SSIM and BRISQUE), a full-reference stereo image quality evaluation method C-FR, and a no-reference stereo image quality evaluation method C-NR. To eliminate the influence of training data and randomness, we performed 1000 repetitions of an 80% training / 20% testing split on each database, i.e. 80% of the data was used for training and the remaining 20% for testing, with no overlap in content between the training data and the testing data. Finally, the algorithm was evaluated with existing performance indexes (the median SROCC, PCC and RMSE over the 1000 repeated tests); the experimental results are shown in Table 2.
Table 1: Basic information of the databases
Referring to FIG. 2, it can be seen that the algorithm provided by the present invention not only shows better subjective consistency and stability than the other no-reference image quality evaluation algorithms in the tests on the four databases, but is even better than the full-reference quality evaluation method on the LIVE and TID2013 databases.
Table 2: Comparison of algorithm performance on the three databases

Claims (6)

1. A no-reference stereo image quality evaluation method based on visual perception and binocular competition, characterized by comprising the following specific steps:
step one, converting an input stereo image pair to be tested into gray information;
step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and simultaneously obtaining the filter responses of the gray information using Gabor filtering;
step three, correcting and synthesizing a monocular image using the gray information, its filter responses and the simulated disparity map;
step four, obtaining difference-of-Gaussians images of the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction;
step five, processing each color stereo image pair in the database with the methods of step one, step two, step three and step four, and calculating the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a test set, and mapping the quality feature vectors to corresponding quality scores; and evaluating the algorithm with the existing algorithm performance indexes SROCC and LCC.
2. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the gray information in step one is obtained by transformation from the RGB color space.
3. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the simulated disparity map in step two is obtained by matching the structural similarity of the gray information of the left and right views;
the uncertainty map in step two is calculated as follows:
where l denotes the left-view gray image, r denotes the right-view gray image after disparity compensation, μ and σ denote the mean and standard deviation of the corresponding gray image respectively, and C1 and C2 denote constant terms; the simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussians image processing and feature extraction.
4. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the monocular image in step three is calculated as follows:
CI(x,y) = W_l(x,y)·I_l(x,y) + W_r(x+d,y)·I_r(x+d,y) (2)
where (x,y) is a pixel coordinate, I_l and I_r respectively denote the gray images of the left and right views of the stereo pair, d denotes the disparity of the corresponding mapped pixel between the left and right views, CI denotes the synthesized monocular image, W_l and W_r denote the image information weights, and GE_l and GE_r denote the sums of the filter responses of the left and right views expressed in numerical form.
5. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the difference-of-Gaussians image in step four is calculated as follows:
DoG_ij^f(w,h) = I^(σ1_ij)(w,h) - I^(σ2_ij)(w,h), with σ2_ij = L·σ1_ij (7)
where DoG_ij^f denotes the difference-of-Gaussians image, I^(σ1_ij) and I^(σ2_ij) respectively denote the images obtained by Gaussian filtering of the monocular image or the uncertainty map with different convolution kernels, σ1_ij and σ2_ij respectively denote two different convolution kernels, w and h denote the width and height of the image to be processed at a given scale, f denotes frequency, and i and j index the scale space and the frequency space respectively;
the method for extracting the visual perception features in the fourth step comprises the following steps:
extracting energy characteristics:
where H represents the information entropy of the image, m represents the number of gray levels of the image, plA probability correlation value representing the occurrence of the l-th gray level;
extracting edge features:
the Canny represents that the Canny method is used for carrying out edge detection on the image, and edge pixel points meeting the conditions are represented in a numerical mode.
6. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the machine learning methods in step five include support vector regression (SVR) and neural network machine learning methods.
CN201711003045.4A 2017-09-27 2017-10-24 No-reference stereo image quality evaluation method based on visual perception and binocular competition Active CN107635136B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710886018X 2017-09-27
CN201710886018 2017-09-27

Publications (2)

Publication Number Publication Date
CN107635136A CN107635136A (en) 2018-01-26
CN107635136B true CN107635136B (en) 2019-03-19

Family

ID=61106357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711003045.4A Active CN107635136B (en) 2017-09-27 2017-10-24 No-reference stereo image quality evaluation method based on visual perception and binocular competition

Country Status (1)

Country Link
CN (1) CN107635136B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257131A (en) * 2018-02-24 2018-07-06 Nantong University A 3D image quality evaluation method
CN108520510B (en) * 2018-03-19 2021-10-19 Tianjin University No-reference stereo image quality evaluation method based on overall and local analysis
CN108648186B (en) * 2018-05-11 2021-11-19 Beijing Institute of Technology No-reference stereo image quality evaluation method based on primary visual perception mechanism
CN109257593B (en) * 2018-10-12 2020-08-18 Tianjin University Immersive virtual reality quality evaluation method based on human eye visual perception process
CN109325550B (en) * 2018-11-02 2020-07-10 Wuhan University No-reference image quality evaluation method based on image entropy
CN110517308A (en) * 2019-07-12 2019-11-29 Chongqing University of Posts and Telecommunications No-reference quality evaluation method for asymmetrically distorted stereo images
CN110838120A (en) * 2019-11-18 2020-02-25 方玉明 Weighted quality evaluation method for asymmetrically distorted stereoscopic video based on spatio-temporal information
CN113269204B (en) * 2021-05-17 2022-06-17 Shandong University Color stability analysis method and system for color direct part marking image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wang Ying et al., "New no-reference stereo image quality method for image communication," 2016 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF), 2016-12-29, entire document.
Seungchul Ryu et al., "No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception," IEEE Transactions on Circuits and Systems for Video Technology, 2013-08-29, entire document.
Ming-Jun Chen et al., "No-Reference Quality Assessment of Natural Stereopairs," IEEE Transactions on Image Processing, 2013-06-10, entire document.

Also Published As

Publication number Publication date
CN107635136A (en) 2018-01-26

Similar Documents

Publication Publication Date Title
CN107635136B (en) No-reference stereo image quality evaluation method based on visual perception and binocular competition
Shao et al. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties
CN106097327B (en) In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic
CN107578403B (en) The stereo image quality evaluation method for instructing binocular view to merge based on gradient information
CN109523513B (en) Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image
CN105338343B (en) It is a kind of based on binocular perceive without refer to stereo image quality evaluation method
CN104658001B (en) Non-reference asymmetric distorted stereo image objective quality assessment method
CN101610425B (en) Method for evaluating stereo image quality and device
CN108769671B (en) Stereo image quality evaluation method based on self-adaptive fusion image
Yue et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN109429051B (en) Non-reference stereo video quality objective evaluation method based on multi-view feature learning
Yan et al. Blind stereoscopic image quality assessment by deep neural network of multi-level feature fusion
CN109510981B (en) Stereo image comfort degree prediction method based on multi-scale DCT
CN110246111A No-reference stereo image quality evaluation method based on fusion images and enhanced images
CN114648482A (en) Quality evaluation method and system for three-dimensional panoramic image
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
Ma et al. Joint binocular energy-contrast perception for quality assessment of stereoscopic images
Appina et al. A full reference stereoscopic video quality assessment metric
Liu et al. Blind stereoscopic image quality assessment accounting for human monocular visual properties and binocular interactions
CN109523508B (en) Dense light field quality evaluation method
CN108492275B (en) No-reference stereo image quality evaluation method based on deep neural network
CN108648186B (en) No-reference stereo image quality evaluation method based on primary visual perception mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant