CN107635136A - No-reference stereo image quality evaluation method based on visual perception and binocular competition - Google Patents

No-reference stereo image quality evaluation method based on visual perception and binocular competition

Info

Publication number
CN107635136A
CN107635136A (application CN201711003045.4A)
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status
Granted
Application number
CN201711003045.4A
Other languages
Chinese (zh)
Other versions
CN107635136B (en)
Inventor
刘利雄
张久发
王天舒
黄华
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Publication of CN107635136A
Application granted
Publication of CN107635136B
Legal status: Active

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a stereo image quality evaluation method, in particular to a no-reference stereo image quality evaluation method based on visual perception and binocular competition, and belongs to the field of image analysis. The method first converts the input stereo image pair into grayscale information, applies a matching algorithm to the grayscale information to obtain a simulated disparity map and an uncertainty map of the stereo pair, and at the same time synthesizes a monocular image from the grayscale information, its filter responses, and the simulated disparity map. Next, difference-of-Gaussians processing is applied to the monocular image and the uncertainty map over different scale spaces and frequency spaces, and natural scene statistics and visual perception feature vectors are extracted. The features are then trained with a support vector machine and a BP neural network to obtain a prediction model, which is applied together with the corresponding test feature vectors to predict and assess quality. The method offers high subjective consistency, high database independence, and high stability, performs competitively on various complex distortion types, and can be embedded into application systems for stereoscopic visual content such as stereo image/video processing, giving it strong application value.

Description

No-reference stereo image quality evaluation method based on visual perception and binocular competition
Technical Field
The invention relates to a stereo image quality evaluation method, in particular to a no-reference stereo image quality evaluation method based on visual perception and binocular competition, and belongs to the field of image analysis.
Background
In recent years, with the development of science and technology, the cost of generating and transmitting stereoscopic images has become lower and lower, making stereoscopic images an increasingly popular and indispensable medium for information transmission in our daily life. However, distortion is inevitably introduced at every stage of a stereoscopic image's life cycle, including scene acquisition, encoding, network transmission, decoding, post-processing, compression storage, and projection; for example, blur distortion caused by device parameter settings or lens shake during scene acquisition, and compression distortion caused by image compression storage. Such distortion greatly degrades the viewing experience and can seriously affect viewers' physical and mental well-being. How to suppress the spread of low-quality stereoscopic images and safeguard the viewing experience has therefore become an urgent problem.
Equipping the media that generate and transmit stereoscopic images with the ability to automatically evaluate image quality improves the quality of the images they output, and is therefore of great significance for solving this problem. Specifically, this research has the following application values:
(1) it can be embedded into practical application systems (such as video projection systems and network transmission systems) to monitor image/video quality in real time;
(2) it can be used to evaluate the strengths and weaknesses of various stereo image/video processing algorithms and tools (such as stereo image compression coding and image/video acquisition tools);
(3) it can be used for quality auditing of stereoscopic image/video works, preventing poor-quality image products from harming the physical and mental health of audiences.
In conclusion, research on an objective no-reference stereo image quality evaluation model has important theoretical value and practical significance. The invention provides a no-reference stereo image quality evaluation method based on visual perception and binocular competition, drawing on the visual perception theory of Kruger et al. and the visual perception feature extraction method of Joshi et al.
(I) Visual perception theory
Kruger et al. proposed the visual perception theory, whose study begins with the perception phenomena of the human retina. Photoreceptor cells in the retina perform phototransduction, and the resulting signals are transmitted along excitatory or inhibitory visual pathways. Studies have shown that low-pass filtering occurs in human retinal ganglion cells, and a prominent feature that emerges in this context is the center-surround receptive field of the retina [40]. The center-surround receptive field is generally concentric: the central region is excited (or inhibited) by the received light signal, while the surround is inhibited (or excited) by it. This receptive field can be modeled by a difference of Gaussians and resembles the Laplacian filter used for edge detection [41]. It therefore emphasizes spatial variations in luminance; moreover, such a receptive field is also sensitive to temporal variations and thus forms the basis of motion processing. In addition, the human visual system contains separate yet highly interconnected channels that handle different types of visual information (color, shape, motion, texture, stereo information), which contributes to the efficiency and stability of visual information representation. Under such a perception mechanism, the brain perceives the three-dimensional structure of a stereoscopic image through a large amount of depth information, of which binocular parallax is one of the most important cues. Considering that multiple spatial frequencies may coexist in the retina, simulating the center-surround receptive field at these frequencies requires generating multiple standard deviation values and computing difference images with a difference-of-Gaussians operator.
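The following minimal sketch (not part of the patent text) illustrates this modeling idea, assuming SciPy's Gaussian filtering; the standard-deviation values are illustrative placeholders:

```python
# A minimal sketch of the center-surround receptive field as a difference of
# Gaussians; the sigma values below are illustrative, not taken from the patent.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_response(image, sigma_center, sigma_surround):
    """Excitatory center minus inhibitory surround, modeled as a DoG."""
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

# Several standard deviations simulate the multiple spatial frequencies that
# may coexist in the retina, as described above.
image = np.random.rand(64, 64)
responses = [center_surround_response(image, s, 2.0 * s) for s in (0.5, 1.0, 2.0)]
```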
(II) Visual perception feature extraction
Building on studies of visual perception and retinal perception, Joshi et al. proposed extracting the energy features and edge features of an image as visual perception features.
The energy feature is based on the information entropy of the image:

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l)$$

where $H$ is the information entropy of the image, $m$ is the number of gray levels, and $p_l$ is the probability of occurrence of the $l$-th gray level.
The edge feature is extracted by applying Canny edge detection to the image; the edge pixels that satisfy the detection criteria are counted and expressed numerically.
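An illustrative sketch of these two features, assuming a 256-bin gray-level histogram and OpenCV's Canny detector with arbitrary thresholds; none of these parameter choices come from the patent:

```python
# Sketch of Joshi-style energy (entropy) and edge features for a grayscale image.
import numpy as np
import cv2

def energy_feature(gray):
    """Information entropy H = -sum(p_l * log2(p_l)) over the gray levels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                      # skip empty gray levels to avoid log2(0)
    return -np.sum(p * np.log2(p))

def edge_feature(gray, low=100, high=200):
    """Number of edge pixels found by Canny edge detection (assumed thresholds)."""
    edges = cv2.Canny(gray.astype(np.uint8), low, high)
    return int(np.count_nonzero(edges))
```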
Disclosure of Invention
The invention aims to solve the problems in no-reference stereo image quality evaluation of imperfect simulation of the human visual perception system, insufficient use of the visual perception information in an image, poor subjective consistency, poor database independence, and poor algorithm stability, and provides a no-reference stereo image quality evaluation method based on visual perception and binocular competition.
The method is realized by the following technical scheme.
The no-reference stereo image quality evaluation method based on visual perception and binocular competition comprises the following specific steps:
step one, converting an input stereo image pair to be tested into gray information.
Step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and at the same time obtaining the filter response of the gray information using Gabor filtering.
Step three, correcting and synthesizing the monocular image using the gray information, the filter response of the gray information, and the simulated disparity map.
Step four, computing difference-of-Gaussians images from the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction.
The difference-of-Gaussians image is calculated as follows:

$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \qquad (1)$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^{\,2}} \qquad (2)$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \qquad (3)$$

where $I_{DoG_{ij}}$ is the difference-of-Gaussians image; $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ are the images obtained by Gaussian filtering the original image (the monocular image or the uncertainty map) with two different convolution kernels; $\sigma_1^{ij}$ and $\sigma_2^{ij}$ are the standard deviations of those kernels; $w_i$ and $h_i$ are the width and height of the image at scale $i$; $f_j$ is the frequency; and $i$ and $j$ index the scale space and frequency space, respectively.
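A sketch of this pyramid under stated assumptions: the constants α and L, the number of scales, and the frequency grid are free parameters that the text above does not fix, so the values below are placeholders:

```python
# Sketch of the multi-scale, multi-frequency difference-of-Gaussians pyramid
# of equations (1)-(3); alpha, L, n_scales and freqs are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def dog_pyramid(image, n_scales=3, freqs=(1.0, 2.0, 3.0), alpha=4.0, L=1.5):
    dogs = []
    current = image.astype(np.float64)
    for i in range(n_scales):
        h, w = current.shape
        per_scale = []
        for f in freqs:
            sigma1 = (w + h) / (alpha * f ** 2)        # equation (2)
            sigma2 = L * sigma1                        # equation (3)
            per_scale.append(gaussian_filter(current, sigma1)
                             - gaussian_filter(current, sigma2))  # equation (1)
        dogs.append(per_scale)
        current = zoom(current, 0.5)                   # move to the next, coarser scale
    return dogs
```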
The visual perception features are extracted as follows.

Energy feature:

$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H(I_{DoG_{ij}})}{f_n \cdot \log(w_i \cdot h_i)} \qquad (4)$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \qquad (5)$$

where $H$ is the information entropy of the image, $m$ is the number of gray levels, and $p_l$ is the probability of occurrence of the $l$-th gray level.

Edge feature:

$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP(\mathrm{Canny}(I_{DoG_{ij}}))}{f_n \cdot \log(w_i \cdot h_i)} \qquad (6)$$

where $\mathrm{Canny}(\cdot)$ denotes Canny edge detection of the image, and $EP(\cdot)$ expresses the edge pixels that satisfy the detection criteria in numerical form.
Step five, processing each color stereo image pair in the database using the methods of steps one through four, and computing the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a testing set, and mapping the quality feature vectors to the corresponding quality scores; and evaluating the algorithm using the established performance indexes (SROCC, LCC, and the like).
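A minimal sketch of this step, assuming scikit-learn's SVR as the learning-based mapper; the feature vectors and subjective scores below are random placeholders:

```python
# Sketch of mapping quality feature vectors to quality scores with SVR,
# then scoring the prediction with SROCC and LCC.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr, pearsonr

X = np.random.rand(100, 20)   # placeholder quality feature vectors
y = np.random.rand(100)       # placeholder subjective quality scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVR(kernel='rbf', C=100.0, gamma='scale').fit(X_tr, y_tr)
pred = model.predict(X_te)

srocc = spearmanr(pred, y_te).correlation   # rank-order (subjective) consistency
lcc = pearsonr(pred, y_te)[0]               # linear correlation
```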
Advantageous effects
Compared with the prior art, the no-reference stereo image quality evaluation method based on visual perception and binocular competition has high subjective consistency, high database independence, and high algorithm stability; it can be used in cooperation with related stereo image/video processing application systems and has strong application value.
Drawings
FIG. 1 is a flow chart of the no-reference stereo image quality evaluation method based on visual perception and binocular competition according to the present invention;
FIG. 2 is a box plot of tests performed on the LIVE database by the present invention and other stereoscopic image quality evaluation methods.
Detailed Description
The following detailed description of embodiments of the method of the present invention will be made with reference to the accompanying drawings and specific examples.
Examples
The flow of the method is shown in figure 1, and the specific implementation process is as follows:
step one, converting an input stereo image pair to be tested into gray information.
Step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and at the same time obtaining the filter response of the gray information using Gabor filtering.
The simulated disparity map is obtained by matching the structural similarity of the gray information of the left and right views.
The uncertainty map is calculated as follows:

$$\mathrm{Uncertainty}(l, r) = 1 - \frac{(2\mu_l \mu_r + C_1)(2\sigma_{lr} + C_2)}{(\mu_l^2 + \mu_r^2 + C_1)(\sigma_l^2 + \sigma_r^2 + C_2)} \qquad (1)$$

where $l$ is the left-view grayscale image, $r$ is the right-view grayscale image after disparity compensation, $\mu$ and $\sigma$ are the mean and standard deviation of the corresponding grayscale image, and $C_1$ and $C_2$ are constant terms. The simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussians image processing and feature extraction.
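A simplified sketch of this matching step, assuming integer disparities, a Gaussian local window, and a small horizontal search range, none of which the patent specifies:

```python
# Sketch of SSIM-based matching: per pixel, pick the horizontal shift of the
# right view that maximizes local SSIM; the complement 1 - SSIM gives the
# uncertainty map of equation (1). Window size and search range are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def local_ssim(l, r, sigma=1.5, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Per-pixel SSIM map between two grayscale images."""
    mu_l, mu_r = gaussian_filter(l, sigma), gaussian_filter(r, sigma)
    var_l = gaussian_filter(l * l, sigma) - mu_l ** 2
    var_r = gaussian_filter(r * r, sigma) - mu_r ** 2
    cov = gaussian_filter(l * r, sigma) - mu_l * mu_r
    return ((2 * mu_l * mu_r + C1) * (2 * cov + C2)) / \
           ((mu_l ** 2 + mu_r ** 2 + C1) * (var_l + var_r + C2))

def match(l, r, max_disp=16):
    """Return the simulated disparity map and the uncertainty map."""
    l, r = l.astype(np.float64), r.astype(np.float64)
    best_ssim = np.full(l.shape, -np.inf)
    disparity = np.zeros(l.shape, dtype=np.int32)
    for d in range(max_disp + 1):
        shifted = np.roll(r, -d, axis=1)   # shifted[x] = r[x + d]
        s = local_ssim(l, shifted)
        better = s > best_ssim
        disparity[better] = d
        best_ssim[better] = s[better]
    return disparity, 1.0 - best_ssim      # uncertainty = 1 - SSIM
```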
Step three, correcting and synthesizing the monocular image using the gray information, the filter response of the gray information, and the simulated disparity map.
The monocular image is calculated as follows:

$$CI(x,y) = W_l(x,y) \cdot I_l(x,y) + W_r((x+d),y) \cdot I_r((x+d),y) \qquad (2)$$

$$W_l(x,y) = \frac{GE_l(x,y)}{GE_l(x,y) + GE_r((x+d),y)} \qquad (3)$$

$$W_r((x+d),y) = \frac{GE_r((x+d),y)}{GE_l(x,y) + GE_r((x+d),y)} \qquad (4)$$

where $(x,y)$ is a pixel coordinate; $I_l$ and $I_r$ are the grayscale images of the left and right views of the stereo pair; $d$ is the disparity of the corresponding mapped pixel between the left and right views; $CI$ is the synthesized monocular image; $W_l$ and $W_r$ are the image information weights; and $GE_l$ and $GE_r$ are the sums of the filter responses of the left and right views, expressed in numerical form.
Step four, computing difference-of-Gaussians images from the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction.
The difference-of-Gaussians image is calculated as follows:

$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \qquad (5)$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^{\,2}} \qquad (6)$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \qquad (7)$$

where $I_{DoG_{ij}}$ is the difference-of-Gaussians image; $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ are the images obtained by Gaussian filtering the original image (the monocular image or the uncertainty map) with two different convolution kernels; $\sigma_1^{ij}$ and $\sigma_2^{ij}$ are the standard deviations of those kernels; $w_i$ and $h_i$ are the width and height of the image at scale $i$; $f_j$ is the frequency; and $i$ and $j$ index the scale space and frequency space, respectively.
The visual perception features are extracted as follows.

Energy feature:

$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H(I_{DoG_{ij}})}{f_n \cdot \log(w_i \cdot h_i)} \qquad (8)$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \qquad (9)$$

where $H$ is the information entropy of the image, $m$ is the number of gray levels, and $p_l$ is the probability of occurrence of the $l$-th gray level.

Edge feature:

$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP(\mathrm{Canny}(I_{DoG_{ij}}))}{f_n \cdot \log(w_i \cdot h_i)} \qquad (10)$$

where $\mathrm{Canny}(\cdot)$ denotes Canny edge detection of the image, and $EP(\cdot)$ expresses the edge pixels that satisfy the detection criteria in numerical form.
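A sketch of how the per-subband features could be pooled into E1 and E2 according to equations (8)-(10); the entropy and edge-count callables are assumed to be the functions sketched earlier in the background section, applied after rescaling each DoG image to 8-bit gray levels:

```python
# Sketch of feature pooling over the DoG pyramid; dogs[i][j] holds the DoG
# image at scale i and frequency j, and sizes[i] = (w_i, h_i).
import numpy as np

def to_uint8(img):
    """Rescale a (possibly signed) DoG image to 8-bit gray levels (assumption)."""
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

def pool_features(dogs, sizes, f_n, entropy_fn, edge_count_fn):
    """E = sum_i [ sum_j feat(I_DoG_ij) ] / (f_n * log(w_i * h_i))."""
    E1 = E2 = 0.0
    for i, per_scale in enumerate(dogs):
        w, h = sizes[i]
        norm = f_n * np.log(w * h)
        E1 += sum(entropy_fn(to_uint8(d)) for d in per_scale) / norm
        E2 += sum(edge_count_fn(to_uint8(d)) for d in per_scale) / norm
    return E1, E2
```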
Step five, processing each color stereo image pair in the database using the methods of steps one through four, and computing the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a testing set, and mapping the quality feature vectors to the corresponding quality scores; and evaluating the algorithm using the established performance indexes (SROCC, LCC, and the like).
We implemented our algorithm on three stereo image quality assessment databases: LIVE Phase II and Waterloo IVC 3D Phase I and Phase II. The basic information of these databases is listed in Table 1. Meanwhile, six published quality evaluation algorithms with excellent performance were selected for comparison with our method: four 2D-based image quality evaluation algorithms (PSNR, SSIM, MS-SSIM, and BRISQUE), a full-reference stereo image quality evaluation method (C-FR), and a no-reference stereo image quality evaluation method (C-NR). To eliminate the effects of training data and randomness, we performed 1000 repetitions of an 80% training / 20% testing split on each database, i.e., 80% of the data was used for training and the remaining 20% for testing, with no overlap of content between the training and testing data. Finally, the algorithms were compared using the established performance indexes (the median SROCC, PCC, and RMSE over the 1000 repeated tests); the experimental results are shown in Table 2.
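A sketch of this protocol, assuming scikit-learn's SVR and content-level grouping to keep training and testing content disjoint (the grouping mechanism is an assumption about how the split was enforced):

```python
# Sketch of the 1000-repetition 80%/20% evaluation protocol with median
# SROCC, PCC and RMSE; groups identifies the source content of each sample.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import spearmanr, pearsonr

def evaluate(X, y, groups, n_runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    contents = np.unique(groups)
    sroccs, pccs, rmses = [], [], []
    for _ in range(n_runs):
        test_c = rng.choice(contents, size=max(1, len(contents) // 5), replace=False)
        te = np.isin(groups, test_c)       # test split: held-out contents only
        model = SVR().fit(X[~te], y[~te])
        pred = model.predict(X[te])
        sroccs.append(spearmanr(pred, y[te]).correlation)
        pccs.append(pearsonr(pred, y[te])[0])
        rmses.append(np.sqrt(np.mean((pred - y[te]) ** 2)))
    return np.median(sroccs), np.median(pccs), np.median(rmses)
```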
Table 1. Basic information of the databases
With reference to Fig. 2, it can be seen that the algorithm provided by the present invention not only shows superior subjective consistency and stability compared with other no-reference image quality evaluation algorithms in the tests on the four databases, but is even superior to the full-reference quality evaluation method on the LIVE and TID2013 databases.
Table 2. Comparison of algorithm performance on the three databases

Claims (6)

1. A no-reference stereo image quality evaluation method based on visual perception and binocular competition, characterized by comprising the following specific steps:
step one, converting an input stereo image pair to be tested into gray information;
step two, further processing the gray information with a matching algorithm to obtain a simulated disparity map and an uncertainty map, and at the same time obtaining the filter response of the gray information using Gabor filtering;
step three, correcting and synthesizing a monocular image using the gray information, the filter response of the gray information, and the simulated disparity map;
step four, computing difference-of-Gaussians images from the monocular image and the uncertainty map over different scale spaces and frequency spaces, and completing natural scene statistics and visual perception feature extraction;
step five, processing each color stereo image pair in the database using the methods of steps one through four, and computing the quality feature vector corresponding to each group of stereo images; then training on a training set with a learning-based machine learning method, testing on a testing set, and mapping the quality feature vectors to the corresponding quality scores; and evaluating the algorithm using the established performance indexes (SROCC, LCC, and the like).
2. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the gray information in step one is obtained by transformation from the RGB color space.
3. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the simulated disparity map in step two is obtained by matching the structural similarity of the gray information of the left and right views.
The uncertainty map in step two is calculated as follows:

$$\mathrm{Uncertainty}(l, r) = 1 - \frac{(2\mu_l \mu_r + C_1)(2\sigma_{lr} + C_2)}{(\mu_l^2 + \mu_r^2 + C_1)(\sigma_l^2 + \sigma_r^2 + C_2)} \qquad (1)$$

where $l$ is the left-view grayscale image, $r$ is the right-view grayscale image after disparity compensation, $\mu$ and $\sigma$ are the mean and standard deviation of the corresponding grayscale image, and $C_1$ and $C_2$ are constant terms. The simulated disparity map and the uncertainty map are used for the subsequent difference-of-Gaussians image processing and feature extraction.
4. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the monocular image in step three is calculated as follows:

$$CI(x,y) = W_l(x,y) \cdot I_l(x,y) + W_r((x+d),y) \cdot I_r((x+d),y) \qquad (2)$$

$$W_l(x,y) = \frac{GE_l(x,y)}{GE_l(x,y) + GE_r((x+d),y)} \qquad (3)$$

$$W_r((x+d),y) = \frac{GE_r((x+d),y)}{GE_l(x,y) + GE_r((x+d),y)} \qquad (4)$$

where $(x,y)$ is a pixel coordinate; $I_l$ and $I_r$ are the grayscale images of the left and right views of the stereo pair; $d$ is the disparity of the corresponding mapped pixel between the left and right views; $CI$ is the synthesized monocular image; $W_l$ and $W_r$ are the image information weights; and $GE_l$ and $GE_r$ are the sums of the filter responses of the left and right views, expressed in numerical form.
5. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the difference-of-Gaussians image in step four is calculated as follows:

$$I_{DoG_{ij}} = I_{\sigma_1^{ij}} - I_{\sigma_2^{ij}} \qquad (5)$$

$$\sigma_1^{ij} = \frac{w_i + h_i}{\alpha \cdot f_j^{\,2}} \qquad (6)$$

$$\sigma_2^{ij} = L \cdot \sigma_1^{ij} \qquad (7)$$

where $I_{DoG_{ij}}$ is the difference-of-Gaussians image; $I_{\sigma_1^{ij}}$ and $I_{\sigma_2^{ij}}$ are the images obtained by Gaussian filtering the original image (the monocular image or the uncertainty map) with two different convolution kernels; $\sigma_1^{ij}$ and $\sigma_2^{ij}$ are the standard deviations of those kernels; $w_i$ and $h_i$ are the width and height of the image at scale $i$; $f_j$ is the frequency; and $i$ and $j$ index the scale space and frequency space, respectively.
The visual perception features in step four are extracted as follows.

Energy feature:

$$E_1 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} H(I_{DoG_{ij}})}{f_n \cdot \log(w_i \cdot h_i)} \qquad (8)$$

$$H = -\sum_{l=0}^{m-1} p_l \log_2(p_l) \qquad (9)$$

where $H$ is the information entropy of the image, $m$ is the number of gray levels, and $p_l$ is the probability of occurrence of the $l$-th gray level.

Edge feature:

$$E_2 = \sum_{i=1}^{N} \frac{\sum_{j=1}^{f_n} EP(\mathrm{Canny}(I_{DoG_{ij}}))}{f_n \cdot \log(w_i \cdot h_i)} \qquad (10)$$

where $\mathrm{Canny}(\cdot)$ denotes Canny edge detection of the image, and $EP(\cdot)$ expresses the edge pixels that satisfy the detection criteria in numerical form.
6. The no-reference stereo image quality evaluation method based on visual perception and binocular competition according to claim 1, wherein: the machine learning method in step five includes methods such as support vector regression (SVR) and neural networks.
CN201711003045.4A 2017-09-27 2017-10-24 No-reference stereo image quality evaluation method based on visual perception and binocular competition Active CN107635136B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710886018.X
CN201710886018 2017-09-27

Publications (2)

Publication Number Publication Date
CN107635136A true CN107635136A (en) 2018-01-26
CN107635136B CN107635136B (en) 2019-03-19

Family

ID=61106357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711003045.4A Active CN107635136B (en) No-reference stereo image quality evaluation method based on visual perception and binocular competition

Country Status (1)

Country Link
CN (1) CN107635136B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257131A (en) * 2018-02-24 2018-07-06 南通大学 A kind of 3D rendering quality evaluating method
CN108520510A (en) * 2018-03-19 2018-09-11 天津大学 It is a kind of based on entirety and partial analysis without referring to stereo image quality evaluation method
CN108648186A (en) * 2018-05-11 2018-10-12 北京理工大学 Based on primary vision perception mechanism without with reference to stereo image quality evaluation method
CN109257593A (en) * 2018-10-12 2019-01-22 天津大学 Immersive VR quality evaluating method based on human eye visual perception process
CN109325550A (en) * 2018-11-02 2019-02-12 武汉大学 Non-reference picture quality appraisement method based on image entropy
CN110517308A (en) * 2019-07-12 2019-11-29 重庆邮电大学 It is a kind of without refer to asymmetric distortion stereo image quality evaluation method
CN110838120A (en) * 2019-11-18 2020-02-25 方玉明 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information
CN113269204A (en) * 2021-05-17 2021-08-17 山东大学 Color stability analysis method and system for color direct part marking image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338343A (en) * 2015-10-20 2016-02-17 北京理工大学 No-reference stereo image quality evaluation method based on binocular perception

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MING-JUN CHEN et al.: "No-Reference Quality Assessment of Natural Stereopairs", IEEE Transactions on Image Processing *
SEUNGCHUL RYU et al.: "No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception", IEEE Transactions on Circuits and Systems for Video Technology *
WANG YING et al.: "New no-reference stereo image quality method for image communication", 2016 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future (RIVF) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257131A (en) * 2018-02-24 2018-07-06 南通大学 A kind of 3D rendering quality evaluating method
CN108520510A (en) * 2018-03-19 2018-09-11 天津大学 It is a kind of based on entirety and partial analysis without referring to stereo image quality evaluation method
CN108520510B (en) * 2018-03-19 2021-10-19 天津大学 No-reference stereo image quality evaluation method based on overall and local analysis
CN108648186A (en) * 2018-05-11 2018-10-12 北京理工大学 Based on primary vision perception mechanism without with reference to stereo image quality evaluation method
CN108648186B (en) * 2018-05-11 2021-11-19 北京理工大学 No-reference stereo image quality evaluation method based on primary visual perception mechanism
CN109257593A (en) * 2018-10-12 2019-01-22 天津大学 Immersive VR quality evaluating method based on human eye visual perception process
CN109257593B (en) * 2018-10-12 2020-08-18 天津大学 Immersive virtual reality quality evaluation method based on human eye visual perception process
CN109325550A (en) * 2018-11-02 2019-02-12 武汉大学 Non-reference picture quality appraisement method based on image entropy
CN109325550B (en) * 2018-11-02 2020-07-10 武汉大学 No-reference image quality evaluation method based on image entropy
CN110517308A (en) * 2019-07-12 2019-11-29 重庆邮电大学 It is a kind of without refer to asymmetric distortion stereo image quality evaluation method
CN110838120A (en) * 2019-11-18 2020-02-25 方玉明 Weighting quality evaluation method of asymmetric distortion three-dimensional video based on space-time information
CN113269204A (en) * 2021-05-17 2021-08-17 山东大学 Color stability analysis method and system for color direct part marking image

Also Published As

Publication number Publication date
CN107635136B (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN107635136B (en) No-reference stereo image quality evaluation method based on visual perception and binocular competition
Shao et al. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties
CN109523513B (en) Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image
CN106097327B (en) Objective stereo image quality evaluation method combining manifold features and binocular characteristics
CN105338343B (en) No-reference stereo image quality evaluation method based on binocular perception
CN104658001B (en) Non-reference asymmetric distorted stereo image objective quality assessment method
CN101610425B (en) Method for evaluating stereo image quality and device
CN108769671B (en) Stereo image quality evaluation method based on self-adaptive fusion image
Su et al. Color and depth priors in natural images
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN109429051B (en) Non-reference stereo video quality objective evaluation method based on multi-view feature learning
Ma et al. Reduced-reference stereoscopic image quality assessment using natural scene statistics and structural degradation
CN109788275A (en) Naturality, structure and binocular asymmetry are without reference stereo image quality evaluation method
CN109257592B (en) Stereoscopic video quality objective evaluation method based on deep learning
CN110246111A (en) No-reference stereo image quality evaluation method based on fused and enhanced images
CN107085835A (en) Color image filtering method based on quaternion weighted nuclear norm minimization
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
Ma et al. Joint binocular energy-contrast perception for quality assessment of stereoscopic images
Appina et al. A full reference stereoscopic video quality assessment metric
Liu et al. Blind stereoscopic image quality assessment accounting for human monocular visual properties and binocular interactions
CN109978928B (en) Binocular vision stereo matching method and system based on weighted voting
CN113191962B (en) Underwater image color recovery method and device based on ambient background light and storage medium
CN108492275B (en) No-reference stereo image quality evaluation method based on deep neural network
CN108648186B (en) No-reference stereo image quality evaluation method based on primary visual perception mechanism
Huang et al. Light field image quality assessment using contourlet transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant