CN110246111B - No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image - Google Patents
- Publication number: CN110246111B
- Application number: CN201811498041.2A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/596—Depth or shape recovery from multiple images from stereo images from three or more stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
A no-reference stereoscopic image quality evaluation method based on a fusion image and an enhanced image. First, drawing on binocular fusion, binocular rivalry, and binocular suppression in the human visual system, the left and right viewpoints of a stereoscopic image are fused in the red, green, and blue channels to obtain a color fusion image. Second, a stereo matching algorithm computes the disparity map of the distorted stereo image pair, and the gradient of the disparity map provides weights for the grayscale version of the color fusion image. Third, an enhanced image is generated from the fusion image and the disparity map. Natural scene statistics features are then extracted from the fused and enhanced images in the spatial domain, and kurtosis and skewness features are extracted from the disparity map. Finally, the extracted features are fused and fed into support vector regression (SVR) to obtain the quality of the stereoscopic image under evaluation.
Description
Technical Field
The invention belongs to the field of image processing, relates to objective evaluation of stereoscopic image quality, and in particular to a no-reference objective stereoscopic image quality evaluation method based on a fusion image and an enhanced image.
Background
With the rapid development of 3D technology, stereoscopic image quality evaluation has become an indispensable research direction in the 3D field. Current evaluation methods divide into subjective and objective evaluation. Subjective evaluation matches human visual characteristics but is tedious, time-consuming, and labor-intensive to carry out; objective evaluation is simpler, faster, and more operable, so many researchers have devoted themselves to it [1-3].
Objective quality assessment falls into three categories according to how much original-image information is used: full-reference stereoscopic image quality assessment [4-6], which uses the full original image as a reference when evaluating the distorted image; reduced-reference stereoscopic image quality assessment [7-8], which uses partial information from the original image; and no-reference stereoscopic image quality assessment [9-10], which completes the evaluation using only features of the distorted image and therefore has the widest applicability.
At present, many researchers start from the left and right views of a stereoscopic image, extract features from each view separately, and derive an evaluation result from those features; such methods often fail on asymmetrically distorted stereoscopic images. Reference [3] learns a gradient dictionary of color visual characteristics for the left and right views and extracts features via sparse representation; reference [10] extracts luminance statistics from each view and then combines the disparity map with each view to extract further depth and structure statistics. In reality, however, after receiving the left- and right-viewpoint signals the brain first forms a binocular fusion image, and perception operates on that fused image. To better simulate this, some researchers have begun using binocular fusion images for stereoscopic image quality evaluation. Shen et al. [11], considering the importance of spatial frequency to the human eye, process the left and right views with Gabor filters and sum the filtered views to form a fusion image; this model matches human visual characteristics only to a limited extent. Levelt [12], based on binocular rivalry, proposed a linear fusion model that weights the left and right views before summing; references [13][14] improved this linear model by accounting for disparity compensation and contrast sensitivity, respectively. To simulate human visual properties more accurately, Ding et al. [15] proposed several binocular fusion models based on gain control and gain enhancement.
Reference to the literature
[1] Chen M J, Cormack L K, Bovik A C. No-reference quality assessment of natural stereopairs.[J]. IEEE Transactions on Image Processing, 2013, 22(9):3379-3391.
[2] Zhou W, Jiang G, Yu M, et al. Reduced-reference stereoscopic image quality assessment based on view and disparity zero-watermarks[J]. Signal Processing Image Communication, 2014, 29(1):167-176.
[3] Yang J, An P, Ma J, et al. No-reference stereo image quality assessment by learning gradient dictionary-based color visual characteristics[C]// IEEE International Symposium on Circuits and Systems. IEEE, 2018.
[4] Shao F, Lin W, Gu S, et al. Perceptual Full-Reference Quality Assessment of Stereoscopic Images by Considering Binocular Visual Characteristics[J]. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, 2013, 22(5):1940-1953.
[5] Zhang Y, Chandler D M. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception[J]. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, 2015, 24(11):3810-25.
[6] Lin Y, Yang J, Wen L, et al. Quality Index for Stereoscopic Images by Jointly Evaluating Cyclopean Amplitude and Cyclopean Phase[J]. IEEE Journal of Selected Topics in Signal Processing, 2017, PP(99):1-1.
[7] Qi F, Zhao D, Gao W. Reduced Reference Stereoscopic Image Quality Assessment Based on Binocular Perceptual Information[J]. IEEE Transactions on Multimedia, 2015, 17(12):2338-2344.
[8] Ma J, An P, Shen L, et al. Reduced-Reference Stereoscopic Image Quality Assessment Using Natural Scene Statistics and Structural Degradation[J]. IEEE Access, 2017, PP(99):1-1.
[9] Sazzad Z M P, Horita Y. No-reference stereoscopic image quality assessment[J]. Proceedings of SPIE - The International Society for Optical Engineering, 2010, 7524(2):75240T-75240T-12.
[10] Fang Y, Yan J, Wang J. No reference quality assessment for stereoscopic images by statistical features[C]// Ninth International Conference on Quality of Multimedia Experience. IEEE, 2017.
[11] Shen L, Lei J, Hou C. No-reference stereoscopic 3D image quality assessment via combined model[J]. Multimedia Tools & Applications, 2017(9):1-18.
[12] W.J.M. Levelt, On Binocular Rivalry, Mouton, The Hague, Paris, 1968.
[13] Chen M J , Su C C , Kwon D K , et al. Full-reference quality assessment of stereoscopic images by modeling binocular rivalry[C]// Signals, Systems & Computers. IEEE, 2013.
[14] Lu K , Zhu W . Stereoscopic Image Quality Assessment Based on Cyclopean Image[C]// Dependable, Autonomic & Secure Computing, Intl Conf on Pervasive Intelligence & Computing, Intl Conf on Big Data Intelligence & Computing & Cyber Science & Technology Congress. IEEE, 2016.
[15] Ding J, Klein S A, Levi D M. Binocular combination of phase and contrast explained by a gain-control and gain-enhancement model[J]. Journal of Vision, 2013, 13(2):13.
[16] Liu L, Liu B, Su C C, et al. Binocular spatial activity and reverse saliency driven no-reference stereopair quality assessment[J]. Signal Processing Image Communication, 2017.
[17] Yang J, Wang Y, Li B, et al. Quality assessment metric of stereo images considering cyclopean integration and visual saliency[J]. Information Sciences An International Journal, 2016, 373(C):251-268.
[18] Xu X, Zhao Y, Ding Y. No-reference stereoscopic image quality assessment based on saliency-guided binocular feature consolidation[J]. Electronics Letters, 2017, 53(22):1468-1470.
[19] Ma J, An P, Shen L, et al. SSIM-based binocular perceptual model for quality assessment of stereoscopic images[C]// Visual Communications and Image Processing. IEEE, 2018.
[20] Shao F, Li K, Lin W, et al. Learning Blind Quality Evaluator for Stereoscopic Images Using Joint Sparse Representation[J]. IEEE Transactions on Multimedia, 2016, 18(10):2104-2114.
[21] Ding Y, Zhao Y. No-reference quality assessment for stereoscopic images considering visual discomfort and binocular rivalry[J]. Electronics Letters, 2018, 53(25):1646-1647.
[22] Yue G, Hou C, Jiang Q, et al. Blind Stereoscopic 3D Image Quality Assessment via Analysis of Naturalness, Structure, and Binocular Asymmetry[J]. Signal Processing, 2018.
Disclosure of Invention
To address the problems in the prior art, the no-reference stereoscopic image quality evaluation method based on a fusion image and an enhanced image proposed here not only agrees well with subjective human evaluation, but also effectively evaluates both symmetrically and asymmetrically distorted stereoscopic images, promoting the development of stereoscopic imaging technology.
The no-reference stereoscopic image quality evaluation method based on the fusion image and the enhanced image comprises the following specific steps:

First, acquisition of the color fusion image:

Human-eye characteristics are first simulated by Gabor-filtering the red, green, and blue channels. Second, gain-control theory states that each eye applies gain control to the other in proportion to the contrast energy of its input signal, and also applies gain control to the gain control coming from the other eye, which is called gain enhancement; the right eye acts on the left eye in the same way. Weights for the left and right views are then generated from the total contrast energy, applied to the two views, and the weighted views are summed to obtain the color fusion image. The detailed process is as follows:
1. gabor filter mimics the receptive field:
wherein ,/> and />Respectively representing left and right viewpoints; />An intensity value representing the left or right view at each spatial location; />Is per image and with spatial frequency +.>And angle->Gabor filter of->Convolutionally derived response, wherein->With 6 scales>Has 8 directions, upper corner mark->Representing the number of the feature images, 48 feature images can be obtained in total; />Representing the magnitude of each response, +.>Representing the phase of each responseA bit;
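As an illustrative sketch, the 48-filter bank (6 scales × 8 orientations) described above can be built in NumPy. The kernel size, the frequency ladder, and the envelope width below are assumptions chosen for the example, not values given in the patent:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=None):
    """Real-valued Gabor kernel with spatial frequency `freq` (cycles/pixel)
    and orientation `theta`; a minimal sketch of the receptive-field model."""
    if sigma is None:
        sigma = 0.5 / freq                    # envelope width tied to frequency
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def gabor_bank(n_scales=6, n_orients=8, size=15):
    """48 filters: 6 spatial frequencies x 8 orientations, as in the text."""
    freqs = np.linspace(0.05, 0.4, n_scales)   # illustrative frequency ladder
    thetas = [k * np.pi / n_orients for k in range(n_orients)]
    return [gabor_kernel(size, f, t) for f in freqs for t in thetas]
```

Convolving each view with every filter in the bank would then yield the 48 feature maps per view used in the later steps.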
2. Gain enhancement and gain control: the left and right views of the stereoscopic image are passed through the Gabor filter bank to obtain the 48 feature maps of different scales and orientations, which are sorted in ascending order of mean luminance to form the set {C_v^1, …, C_v^48}; the gain-enhancement term and the gain-control term of each view are then accumulated over this set (formulas (2) and (3)).
3. Total contrast energy:

The contrast sensitivity function is applied to each feature map C_v^k to obtain the CSF-weighted energy E_v^k (formula (4)); a weight ω_k is then computed for each feature map by formula (5); finally, the gain-control total contrast energy TCE_v^gc and the gain-enhancement total contrast energy TCE_v^ge are accumulated by formula (6).
4. Left-right image fusion process:

The fusion is performed separately in the red, green, and blue channels of the color image. With W_L the left-view weight and W_R the right-view weight, the fused image of each channel is obtained from formula (9):

F_c(x, y) = W_L(x, y) I_L^c(x, y) + W_R(x, y) I_R^c(x, y),  c ∈ {R, G, B}  (9)

where c denotes the R, G, or B channel and F_c is the fused image of that channel; combining the three channels yields the color fusion image F.
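A minimal sketch of the per-channel weighted fusion of formula (9). The patent derives W_L and W_R from gain control and CSF-weighted Gabor energies; the global local-variance proxy for contrast energy used here is a simplification for illustration only:

```python
import numpy as np

def contrast_energy(view, eps=1e-6):
    """Crude stand-in for the total contrast energy of formula (6): the
    variance of the view. The patent accumulates CSF-weighted Gabor
    energies; this proxy only illustrates the weighting step."""
    return float(np.mean((view - view.mean()) ** 2)) + eps

def fuse_views(left, right):
    """Formula (9): per-channel weighted sum of left and right views, with
    weights W_L, W_R derived from each view's contrast energy."""
    fused = np.empty_like(left, dtype=float)
    for c in range(left.shape[2]):            # R, G, B channels
        e_l = contrast_energy(left[..., c])
        e_r = contrast_energy(right[..., c])
        w_l = e_l / (e_l + e_r)               # weights sum to 1 per channel
        fused[..., c] = w_l * left[..., c] + (1.0 - w_l) * right[..., c]
    return fused
```

With identical left and right views the weights become 0.5/0.5 and the fusion returns the input unchanged, which is the expected degenerate behavior.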
Second, obtaining the disparity map and the disparity-gradient weight:

A stereo matching algorithm based on structural similarity processes the distorted stereo image pair to obtain the disparity map; the kurtosis and skewness of the disparity map are then computed statistically.
The normalized fused image is weighted with weights generated from the disparity gradient to predict visual saliency; the disparity-gradient weight is generated as in formula (10), where ∇d denotes the gradient magnitude of the disparity map.
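Formula (10) itself is not recoverable from the text, so the sketch below substitutes an assumed min-max normalization of the disparity-gradient magnitude; only the overall idea (flat depth regions weighted low, depth edges weighted high) follows the description above:

```python
import numpy as np

def disparity_gradient_weight(disp):
    """Sketch of a saliency weight from the disparity-map gradient
    magnitude (formula (10)). The exact normalization in the patent is
    not recoverable; min-max scaling to [1, 2] is an assumption."""
    gy, gx = np.gradient(disp.astype(float))
    mag = np.hypot(gx, gy)                    # gradient magnitude of the map
    span = mag.max() - mag.min()
    if span == 0:
        return np.ones_like(mag)              # uniform gradient -> no reweighting
    return 1.0 + (mag - mag.min()) / span     # flat regions -> 1, edges -> 2
```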
Third, obtaining the enhanced image:

Disparity compensation is applied to the fusion image multiplicatively: the fused image and its disparity-compensated version are multiplied to form the enhanced image, which highlights picture texture. The enhanced image is computed as in formula (11):

E(x, y) = F_g(x, y) · F_g(x + d(x, y), y)  (11)

where E denotes the enhanced image, F_g the grayscale map of the fused image, d the horizontal disparity, and (x, y) the spatial coordinates of the image.
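The multiplicative disparity compensation of formula (11) can be sketched as follows; the shift direction and the clip-at-border handling are assumptions made for the example:

```python
import numpy as np

def enhance(fused_gray, disp):
    """Formula (11): multiply the fused gray image by its disparity-
    compensated version, E(x, y) = F(x, y) * F(x + d(x, y), y).
    Columns shifted past the border are clipped to the image edge."""
    h, w = fused_gray.shape
    xs = np.arange(w)[None, :] + np.round(disp).astype(int)  # x + d(x, y)
    xs = np.clip(xs, 0, w - 1)                # stay inside the image
    rows = np.arange(h)[:, None]
    compensated = fused_gray[rows, xs]        # disparity-compensated image
    return fused_gray * compensated
```

With zero disparity the enhanced image reduces to the squared fused image, which makes the texture-highlighting effect of the product easy to see.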
Fourth, normalization and feature extraction of the image:
1. normalization of images:
Mean-subtracted contrast normalization (MSCN) is applied to the fused image and to the enhanced image; this normalization removes local correlations in the image and pushes its luminance values toward a Gaussian distribution. The MSCN coefficient is computed as in formula (12), and the resulting MSCN coefficients of the fused image are further weighted as in formula (15):
Î(i, j) = (I(i, j) − μ(i, j)) / (σ(i, j) + C)  (12)

μ(i, j) = Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} I(i+k, j+l)  (13)

σ(i, j) = sqrt( Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} (I(i+k, j+l) − μ(i, j))² )  (14)

Î_w(i, j) = W_d(i, j) · Î(i, j)  (15)

where I is the grayscale map of the fused or enhanced image, i ∈ {1, …, M} and j ∈ {1, …, N} index the height and width of the image, C is a constant that prevents division by zero, μ is the local mean, σ is the local standard deviation, W_d is the disparity-gradient weight of formula (10), and w = {w_{k,l}} is a circularly symmetric Gaussian weighting function sampled out to 3 standard deviations; the window of the Gaussian filter is set to 7 × 7 (K = L = 3).
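A self-contained sketch of the MSCN computation of formulas (12)-(14), using a 7×7 Gaussian window sampled out to 3 standard deviations as stated above (a direct correlation loop, so no SciPy dependency is needed):

```python
import numpy as np

def gaussian_window(size=7, n_sigma=3.0):
    """Circularly symmetric Gaussian weights sampled out to 3 std devs."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sigma = half / n_sigma
    w = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return w / w.sum()

def mscn(img, C=1.0):
    """Formula (12): (I - mu) / (sigma + C), with local mean mu and local
    standard deviation sigma computed under the 7x7 Gaussian window."""
    img = img.astype(float)
    w = gaussian_window()
    kh, kw = w.shape
    pad = kh // 2
    padded = np.pad(img, pad, mode="reflect")
    mu = np.zeros_like(img)
    sq = np.zeros_like(img)
    for i in range(kh):                       # direct weighted correlation
        for j in range(kw):
            patch = padded[i:i + img.shape[0], j:j + img.shape[1]]
            mu += w[i, j] * patch
            sq += w[i, j] * patch**2
    sigma = np.sqrt(np.maximum(sq - mu**2, 0.0))
    return (img - mu) / (sigma + C)
```

A constant image normalizes to all zeros, as expected from the mean subtraction.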
2. Fitting Gaussian models to extract features:

Gaussian models capture how the statistics of natural scenes vary in the spatial domain and have been used to evaluate planar image quality; natural scene statistics play a very important role in simulating the human visual system. Here, two such models are applied to stereoscopic image quality evaluation with good results.
To capture the differences between distortion types, features are extracted from the weighted image and the enhanced image at two scales by fitting a generalized Gaussian distribution (GGD) and an asymmetric generalized Gaussian distribution (AGGD); the process divides into two stages:
In the first stage, a GGD model fits the MSCN coefficient distributions of the weighted image and the enhanced image; the GGD effectively captures the statistics of distorted images. The zero-mean GGD is computed as:

f(x; α, σ²) = α / (2βΓ(1/α)) · exp(−(|x|/β)^α)  (16)

β = σ · sqrt(Γ(1/α) / Γ(3/α))  (17)

where Γ(·) is the gamma function. The zero-mean GGD makes the MSCN coefficient distribution approximately symmetric; α controls the general shape of the distribution and the variance σ² controls its spread, so these two parameters (α, σ²) are taken as the first-stage features of the image.
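The first-stage GGD features (α, σ²) are typically estimated by moment matching rather than a direct likelihood fit; the sketch below matches the ratio Γ(2/α)²/(Γ(1/α)Γ(3/α)) against the empirical (E|x|)²/E[x²] over a parameter grid. The grid range and step are assumptions:

```python
import math
import numpy as np

def ggd_shape(x):
    """Moment-matching estimate of the GGD shape alpha of formula (16):
    find alpha whose theoretical ratio gamma(2/a)^2 / (gamma(1/a) *
    gamma(3/a)) matches the empirical (E|x|)^2 / E[x^2]."""
    x = np.asarray(x, dtype=float).ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    grid = np.arange(0.2, 10.0, 0.001)
    ratios = np.array([math.gamma(2 / a) ** 2 /
                       (math.gamma(1 / a) * math.gamma(3 / a)) for a in grid])
    return float(grid[np.argmin(np.abs(ratios - rho))])

def ggd_features(x):
    """The two first-stage features: shape alpha and variance sigma^2."""
    return ggd_shape(x), float(np.var(x))
```

On Gaussian data the estimate approaches α = 2, and on Laplacian data it approaches α = 1, the two classical special cases of the GGD.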
In the second stage, the AGGD model fits the pairwise products of adjacent MSCN coefficients, fitting the weighted image and the enhanced image along four orientations: horizontal H, vertical V, main diagonal D1, and secondary diagonal D2. The four orientation maps are computed as:

H(i, j) = Î(i, j) Î(i, j+1)  (18)

V(i, j) = Î(i, j) Î(i+1, j)  (19)

D1(i, j) = Î(i, j) Î(i+1, j+1)  (20)

D2(i, j) = Î(i, j) Î(i+1, j−1)  (21)
The commonly used AGGD model is:

f(x; ν, σ_l², σ_r²) = ν / ((β_l + β_r) Γ(1/ν)) · exp(−(−x/β_l)^ν),  x < 0  (22)

f(x; ν, σ_l², σ_r²) = ν / ((β_l + β_r) Γ(1/ν)) · exp(−(x/β_r)^ν),  x ≥ 0  (23)

where

β_l = σ_l · sqrt(Γ(1/ν) / Γ(3/ν)),  β_r = σ_r · sqrt(Γ(1/ν) / Γ(3/ν))  (24)

η = (β_r − β_l) · Γ(2/ν) / Γ(1/ν)  (25)

The shape parameter ν controls the shape of the distribution, η is the mean of the AGGD model, and the scale parameters σ_l² and σ_r² control the spread of the left and right sides, respectively. The four parameters (η, ν, σ_l², σ_r²) are taken as the AGGD features, giving 16 features over the four orientations.
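A sketch of the second stage: the four orientation maps of formulas (18)-(21) and a moment-matching AGGD fit returning (η, ν, σ_l², σ_r²). The estimator follows the widely used BRISQUE-style procedure, which the patent does not spell out, so treat it as an assumption:

```python
import math
import numpy as np

def paired_products(m):
    """Orientation maps H, V, D1, D2: products of adjacent MSCN coefficients."""
    return {
        "H":  m[:, :-1] * m[:, 1:],
        "V":  m[:-1, :] * m[1:, :],
        "D1": m[:-1, :-1] * m[1:, 1:],
        "D2": m[:-1, 1:] * m[1:, :-1],
    }

def aggd_features(x):
    """Moment-matching AGGD fit returning (eta, nu, sigma_l^2, sigma_r^2);
    eta is the mean term (beta_r - beta_l) * gamma(2/nu) / gamma(1/nu)."""
    x = np.asarray(x, dtype=float).ravel()
    left, right = x[x < 0], x[x >= 0]
    sl = math.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sr = math.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sl / sr
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = (r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1)
             / (gamma_hat ** 2 + 1) ** 2)
    grid = np.arange(0.2, 10.0, 0.001)
    ratios = np.array([math.gamma(2 / a) ** 2 /
                       (math.gamma(1 / a) * math.gamma(3 / a)) for a in grid])
    nu = float(grid[np.argmin(np.abs(ratios - R_hat))])
    const = math.sqrt(math.gamma(1 / nu) / math.gamma(3 / nu))
    eta = (sr - sl) * const * math.gamma(2 / nu) / math.gamma(1 / nu)
    return eta, nu, sl ** 2, sr ** 2
```

Running `aggd_features` on each of the four maps yields the 16 second-stage features per image and scale.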
3. Extracting kurtosis and skewness of the disparity map:

Different distortions modify the statistics of an image in specific ways: kurtosis describes how flat or peaked a distribution is, while skewness describes its asymmetry. Kurtosis and skewness are therefore used to capture the statistics of the disparity map under different distortions, as in formula (27):

k_d = E[(d − μ_d)^4] / (E[(d − μ_d)^2])²,  s_d = E[(d − μ_d)^3] / (E[(d − μ_d)^2])^{3/2}  (27)

where k_d and s_d denote the kurtosis and skewness of the disparity map, d denotes the disparity map, and μ_d is its mean.
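Formula (27) translates directly into code; kurtosis here is the non-excess fourth standardized moment, so a Gaussian disparity map scores about 3:

```python
import numpy as np

def kurtosis(d):
    """Formula (27): fourth standardized moment of the disparity map."""
    c = np.asarray(d, dtype=float).ravel()
    c = c - c.mean()
    return float(np.mean(c ** 4) / np.mean(c ** 2) ** 2)

def skewness(d):
    """Formula (27): third standardized moment of the disparity map."""
    c = np.asarray(d, dtype=float).ravel()
    c = c - c.mean()
    return float(np.mean(c ** 3) / np.mean(c ** 2) ** 1.5)
```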
Fifth, feature fusion and SVR:

Because images exhibit different characteristics at different scales, extracting features from the weighted fusion image and the enhanced image with the GGD and AGGD models at two scales yields 72 features; adding the kurtosis and skewness of the disparity map gives 74 features in total. The 74 features are then fused, fed into the SVR, and fitted to the subjective evaluation values; the nonlinear regression uses a logistic function, and the kernel function of the SVR is a radial basis function.
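The 74-feature bookkeeping can be sketched as follows; the grouping into four 18-feature arrays (2 images × 2 scales, each contributing 2 GGD + 16 AGGD features) mirrors the count given above, while the function name is of course hypothetical:

```python
import numpy as np

def assemble_features(ggd_aggd_blocks, disparity_stats):
    """Concatenate the 72 NSS features (2 images x 2 scales x 18 features)
    with the 2 disparity statistics (kurtosis, skewness) into the 74-D
    vector that is fed to the SVR."""
    parts = [np.asarray(b, dtype=float).ravel() for b in ggd_aggd_blocks]
    assert len(parts) == 4, "2 images x 2 scales"
    assert all(p.size == 18 for p in parts), "2 GGD + 16 AGGD per block"
    return np.concatenate(parts + [np.asarray(disparity_stats, dtype=float)])
```

The resulting vector would then be passed to an RBF-kernel SVR (e.g. LIBSVM or scikit-learn) together with the subjective scores for training.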
Drawings
FIG. 1 is a block diagram of an algorithm of the present invention;
FIG. 2 is a schematic diagram of a color fusion image fusion process;
FIG. 3 shows the MSCN coefficient distributions of the fused image and the enhanced image (ori: original image; wn: white noise; jp2k: JPEG2000 compression; jpeg: JPEG compression; blur: Gaussian blur; ff: fast fading);

FIG. 4 shows the distributions of products of horizontally adjacent MSCN coefficients of the fused image (ori: original image; wn: white noise; jp2k: JPEG2000 compression; jpeg: JPEG compression; blur: Gaussian blur; ff: fast fading).
Detailed Description
First, the color fusion image and the disparity map are formed from the left and right views, and the enhanced image is obtained from the disparity map and the fusion image. Given the importance of disparity information, the disparity map is exploited from multiple angles: not only are its statistical features extracted, but disparity-gradient weights are also computed to weight the fusion image so that it better matches human-eye characteristics. A Gaussian model then captures the statistics of the weighted fusion image and the enhanced image, and finally all features are fused and fitted to the subjective scores. Experimental results show that the proposed algorithm performs excellently, agrees well with subjective human evaluation, and predicts accurately. The experimental framework is shown in FIG. 1, and the color-image fusion process in FIG. 2.
In the technical scheme of this patent, removing in turn the disparity-gradient weight, the kurtosis and skewness of the disparity map, or both, while keeping everything else the same, yields three further stereoscopic image quality evaluation methods; the performance comparison in the first three rows of Table 1 shows that these variants still evaluate stereoscopic image quality well, but their PLCC, SROCC, and RMSE indices fall short of the full method described here, demonstrating that the invention can spawn several other viable, if suboptimal, methods from its technical scheme. Comparing the remaining rows of Table 1, the scores drop sharply when no enhanced image is used, and when the multiplied image of SINQ [16] is substituted into the framework presented here, the scores are again inferior to this method, reflecting the superiority of the enhanced image. The enhanced image, the disparity-gradient weight, and the kurtosis and skewness of the disparity map draw information from different aspects of the disparity map; together they play a significant role, raise the quality score substantially, perfect the method, and yield the best embodiment of the invention.
Table 1 comparison of the performance of the methods herein
FIGS. 3(a) and (b) show the MSCN coefficient distributions of the fused image and the enhanced image, respectively, and FIGS. 4(a) and (b) show the distributions of products of horizontally adjacent MSCN coefficients of the fused image and the enhanced image, respectively; the original and distorted versions use the same scene. Different distortion types yield distributions of different shapes: in FIGS. 3 and 4, the distorted images squeeze or spread the original distribution to different degrees. The manner and degree of deformation of the distribution thus roughly reflect the distortion type of the image.
The original-image distribution in FIG. 3 is Gaussian, and introducing different distortions squeezes or spreads it to different degrees. In FIG. 3(a), the distribution of the jp2k-distorted image is visibly squeezed toward a Laplacian shape, while the distribution of the wn-distorted image spreads but remains Gaussian. In FIG. 3(b), the peak of the wn-distorted distribution is clearly shifted.
The original-image distribution in FIG. 4 is left-right asymmetric, and introducing distortion squeezes or spreads it while changing its degree of asymmetry. FIGS. 4(a) and (b) both show this asymmetry in the original image; the wn-distorted distribution not only spreads but becomes markedly more asymmetric than the original, while the jp2k-distorted distribution is slightly squeezed, with an asymmetry more pronounced than that of the original.
The above analysis shows that the statistics of the image MSCN coefficients reflect, to some extent, the differences between distorted images, and that these differences can be quantified. Here the feature-extraction method of reference [9] is used to quantify them: the weighted image and the enhanced image are fitted with a generalized Gaussian distribution and an asymmetric generalized Gaussian distribution at two scales to obtain statistical features, in the two stages described above.
The proposed algorithm was tested on two public stereoscopic image databases (LIVE Phase I and LIVE Phase II). The LIVE Phase I database contains 365 symmetrically distorted stereo image pairs and 20 original stereo pairs; LIVE Phase II contains 360 symmetrically and asymmetrically distorted stereo pairs and 8 original stereo pairs. The feature vectors were fed into the SVR and fitted to DMOS values, and PLCC (Pearson Linear Correlation Coefficient), SROCC (Spearman Rank-Order Correlation Coefficient), and RMSE (Root Mean Squared Error) scores measure the quality of the results: the lower the RMSE and the higher the PLCC and SROCC, the better the proposed algorithm performs and the better the objective quality scores agree with the subjective ones.
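The three evaluation indices can be computed with NumPy alone; the rank-based SROCC below assumes no tied scores, which is a simplification:

```python
import numpy as np

def plcc(a, b):
    """Pearson linear correlation coefficient between two score vectors."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def srocc(a, b):
    """Spearman rank-order correlation: PLCC of the ranks (no ties assumed)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(a)), rank(np.asarray(b)))

def rmse(a, b):
    """Root mean squared error between objective and subjective scores."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Note that SROCC equals 1 for any strictly monotone relation, while PLCC rewards only linear agreement, which is why both are reported alongside RMSE.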
The invention is compared against published stereoscopic image quality evaluation results. Reference [18] performs no-reference evaluation based on multi-scale feature fusion; references [17][19] perform full-reference stereoscopic image quality evaluation using fusion images; reference [20] proposes a no-reference joint sparse representation method; references [6][16][21] perform no-reference evaluation using fusion images; reference [22] evaluates by analyzing the natural statistics, structure, and asymmetry of the distorted image. Table 2 shows the results of all algorithms on the LIVE Phase I and LIVE Phase II databases, with the best-performing algorithm shown in bold.
As can be seen from Table 2, because of the large number of asymmetrically distorted images in the LIVE Phase II database, the other methods perform significantly better on LIVE Phase I than on LIVE Phase II, while the method herein performs comparably and well on both databases. This demonstrates that the method conforms to the visual characteristics of the human eye and can accurately assess both symmetrically and asymmetrically distorted images. Compared with existing full-reference and no-reference methods, its advantages are evident: on the LIVE Phase I database PLCC is 0.9583, SROCC is 0.9507 and RMSE is 4.3811; on the LIVE Phase II database PLCC is 0.9575, SROCC is 0.9542 and RMSE is 3.0689. The method achieves good performance without using original-image information, outperforms the other no-reference methods, and has good practicability and robustness.
Table 2 Overall performance comparison of different methods
Because the method herein adopts and improves upon literature [16], Tables 3 and 4 give a detailed index-by-index comparison between the method herein and literature [16]. As can be seen from Table 3, on the LIVE Phase I database the individual distortion-type scores and the overall score of the method herein are higher than those of literature [16]; as can be seen from Table 4, the same holds on the LIVE Phase II database. This shows not only that every index improves over literature [16], but also that the method can effectively evaluate the quality of both symmetrically and asymmetrically distorted stereoscopic images, better conforms to the visual characteristics of human eyes, and is well suited to asymmetrically distorted stereoscopic images.
Table 3 Performance comparison of two different methods on the LIVE Phase I database
Table 4 Performance comparison of two different methods on the LIVE Phase II database
Claims (1)
1. A no-reference stereoscopic image quality evaluation method based on a fusion image and an enhanced image, characterized by comprising the following steps:
Firstly, acquisition of the color fusion image:
Firstly, simulating human eye characteristics, and performing Gabor filtering on three channels of red, green and blue;
secondly, according to gain-control theory, the left eye exerts gain control on the right eye in proportion to the contrast energy of the input signal, and also exerts gain control on the gain control coming from the right eye, which is called gain enhancement; the right eye likewise applies gain control and gain enhancement to the left eye; weights are then generated for the left and right views from the total contrast energy, applied to the two views, and the weighted views are summed to obtain the color fusion image; the detailed process is as follows:
1. Gabor filter mimicking the receptive field:
where v ∈ {l, r}, with l and r denoting the left and right views respectively; I_v(ζ, η) is the intensity value of the left or right view at each spatial position; each image is convolved with a Gabor filter g at spatial frequency f_s and angle θ to obtain a response, where f_s takes 6 scales and θ takes 8 orientations, and the superscript n indexes the feature maps, 48 of which are obtained in total; the magnitude and the phase of each response are recorded;
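As a rough illustration of step 1, the sketch below builds a 6-scale × 8-orientation complex Gabor bank and convolves one view with it. The kernel size, envelope width and the specific spatial frequencies are assumptions for the sketch, not values stated in the patent.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(fs, theta, size=15, sigma=2.5):
    """Complex Gabor kernel at spatial frequency fs (cycles/pixel), angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)       # coordinate along theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * fs * rot)

def gabor_responses(image, freqs, thetas):
    """6 scales x 8 orientations -> 48 complex feature maps, as in the text."""
    return [fftconvolve(image, gabor_kernel(f, t), mode='same')
            for f in freqs for t in thetas]

img = np.random.default_rng(0).random((32, 32))       # stand-in view
freqs = [1/4, 1/6, 1/8, 1/12, 1/16, 1/24]             # illustrative scales
thetas = [k * np.pi / 8 for k in range(8)]            # 8 orientations
maps = gabor_responses(img, freqs, thetas)
# magnitude of a map: np.abs(maps[n]); phase: np.angle(maps[n])
```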
2. gain control gc and gain enhancement ge:
the left and right views of the stereoscopic image are processed by the Gabor filters to obtain 48 feature maps of different scales and orientations; the 48 feature maps are arranged in ascending order of average brightness value to obtain an ordered set, and gain enhancement ge and gain control gc are obtained by equations (2) and (3):
3. total contrast energy:
a contrast sensitivity function is applied to the feature maps to obtain weighted feature maps, as in equation (4), where v ∈ {l, r} and n = 1, 2, 3, …, 48; the total contrast energy TCE_v of the gain control and the gain-enhanced total contrast energy are then obtained by equation (6):
A(f) = 2.6(0.192 + 0.114f)·exp[−(0.114f)^1.1]  (4)
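Equation (4) is straightforward to evaluate numerically; a minimal sketch:

```python
import numpy as np

def csf(f):
    """Contrast sensitivity A(f) of equation (4)."""
    f = np.asarray(f, dtype=float)
    return 2.6 * (0.192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

a0 = float(csf(0.0))   # sensitivity at zero frequency: 2.6 * 0.192
a4 = float(csf(4.0))   # a mid-band value of the band-pass curve
```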
4. Left and right image fusion process:
the image fusion is carried out in the red, green and blue channels of the color image, where G_l is the left-view weight and G_r is the right-view weight; the final fused image is derived from equation (9):
i denotes the R, G or B channel and C_i(x, y) the fused image of that channel; after the three channels are fused, the color fusion image C_r(x, y) is obtained;
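A minimal sketch of the per-channel weighted summation of equation (9), assuming the total-contrast-energy maps have already been computed; the normalization of the weights to sum to one is an illustrative choice, not the patent's exact equations (7)–(8).

```python
import numpy as np

def fuse_views(left, right, tce_l, tce_r, eps=1e-12):
    """Weighted sum of left and right RGB views, weights from TCE maps."""
    g_l = tce_l / (tce_l + tce_r + eps)   # left-view weight G_l
    g_r = tce_r / (tce_l + tce_r + eps)   # right-view weight G_r
    fused = np.empty_like(left, dtype=float)
    for i in range(3):                    # fuse R, G, B channels separately
        fused[..., i] = g_l * left[..., i] + g_r * right[..., i]
    return fused

rng = np.random.default_rng(1)
L = rng.random((16, 16, 3))
R = rng.random((16, 16, 3))
# With equal TCE maps the fusion reduces to the plain average of the views.
C = fuse_views(L, R, np.ones((16, 16)), np.ones((16, 16)))
```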
Secondly, obtaining a disparity map and a disparity gradient weight:
the distorted stereo image pair is processed with a stereo matching algorithm based on structural similarity to obtain a disparity map; the kurtosis and skewness of the disparity map are then calculated by a statistical method;
the normalized fused image is weighted with weights generated from the parallax gradient to predict visual saliency; the parallax gradient weight is generated as in equation (10), in which the gradient magnitude of the disparity map is used:
Thirdly, obtaining the enhanced image:
parallax compensation is applied to the fused image multiplicatively: the fused image and its parallax-compensated version are multiplied to form the enhanced image, which highlights the texture of the picture; the enhanced image is computed as in equation (11), where P denotes the enhanced image, C the grayscale of the fused image, d the horizontal parallax, and x, y the spatial coordinates of the image;
P(x,y)=C(x,y)·C(x+d(x,y),y) (11)
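Equation (11) can be sketched as follows. Integer disparities are assumed for simplicity, and positions shifted outside the image are left at zero; that boundary handling is an assumption, since the patent does not specify it.

```python
import numpy as np

def enhance(C, d):
    """Eq. (11): P(x, y) = C(x, y) * C(x + d(x, y), y).

    C: grayscale fused image, indexed [row y, column x].
    d: horizontal disparity map of the same shape (integer pixels here).
    """
    M, N = C.shape
    P = np.zeros_like(C)
    for y in range(M):
        for x in range(N):
            xs = x + int(d[y, x])
            if 0 <= xs < N:              # skip disparities leaving the image
                P[y, x] = C[y, x] * C[y, xs]
    return P

C = np.arange(9, dtype=float).reshape(3, 3) / 8
P = enhance(C, np.zeros((3, 3)))         # zero disparity -> P = C squared
```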
Fourthly, normalization and feature extraction of the image:
1. normalization of images:
mean-subtracted contrast normalization (MSCN) is applied to the fused image and to its enhanced image respectively; the normalization removes the local correlation of the image and drives its brightness values toward a Gaussian distribution; the mean-subtracted contrast-normalized coefficients are calculated as in equation (12), and the MSCN coefficients of the resulting fused image may further be weighted as in equation (15):
MSCN_w = W(x, y)·C_MSCN(x, y)  (15)
where C is the grayscale of the fused or enhanced image, x ∈ {1, 2, …, M}, y ∈ {1, 2, …, N}; M and N represent the height and width of the image respectively; a is a constant; μ_C represents the local mean and σ_C the local variance; ω = {ω_{k,l} | k = −K, …, K; l = −L, …, L} is a circularly symmetric Gaussian weighting function sampled out to 3 standard deviations, with K = L = 3, so the window of the Gaussian filter is 7×7;
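A sketch of the MSCN operation of equation (12), using an explicit 7×7 circularly symmetric Gaussian window sampled to 3 standard deviations as described above; the value of the constant a below is an assumption, since the patent does not give it.

```python
import numpy as np
from scipy.ndimage import correlate

def gaussian_window(K=3, sigma=7/6):
    """(2K+1)x(2K+1) circularly symmetric Gaussian weights, unit sum."""
    k = np.arange(-K, K + 1)
    g = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def mscn(C, a=1e-3):
    """Eq. (12) sketch: (C - mu_C) / (sigma_C + a) with a 7x7 window."""
    C = np.asarray(C, dtype=float)
    w = gaussian_window()
    mu = correlate(C, w, mode='reflect')            # local mean
    var = correlate(C * C, w, mode='reflect') - mu ** 2
    sigma = np.sqrt(np.maximum(var, 0.0))           # local deviation
    return (C - mu) / (sigma + a)

rng = np.random.default_rng(2)
coeffs = mscn(rng.random((32, 32)))
```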
2. Fitting Gaussian models to extract features:
Gaussian models capture the change rule of statistical features of natural scenes in the spatial domain and have been used to evaluate the quality of planar images; the statistical features of natural scenes play a very important role in simulating the human visual system, and applying the two Gaussian models to stereoscopic image quality evaluation has given good results;
to capture the differences under different distortion types, features are extracted from the weighted image and the enhanced image at two scales by fitting a generalized Gaussian distribution (GGD) and an asymmetric generalized Gaussian distribution (AGGD); the process can be divided into two stages:
in the first stage, a GGD model is used to fit the MSCN coefficient distribution of the weighted image and of the enhanced image; the GGD model effectively captures the statistical characteristics of the distorted image, and the zero-mean GGD model is calculated as follows:
where Γ(·) is the gamma function; the zero-mean GGD model makes the MSCN coefficient distribution approximately symmetric; α controls the overall shape of the distribution and σ² represents the variance, controlling the degree of shape change; the two parameters (α, σ²) are therefore used to capture image information as the features of the first stage;
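The first-stage parameters (α, σ²) are commonly estimated by moment matching; the sketch below uses the standard ratio E[x²]/(E|x|)² = Γ(1/α)Γ(3/α)/Γ(2/α)², which is an assumption, since the patent does not state which estimator it uses.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(x):
    """Moment-matching estimate of zero-mean GGD parameters (alpha, sigma^2)."""
    x = np.asarray(x, dtype=float).ravel()
    rho = np.mean(x * x) / (np.mean(np.abs(x)) ** 2)
    # Solve Gamma(1/a)Gamma(3/a)/Gamma(2/a)^2 = rho for the shape alpha.
    r = lambda a: gamma(1 / a) * gamma(3 / a) / gamma(2 / a) ** 2 - rho
    alpha = brentq(r, 0.2, 10.0)
    sigma2 = np.mean(x * x)               # second moment = variance (zero mean)
    return alpha, sigma2

# For Gaussian samples the fitted shape should be close to alpha = 2.
rng = np.random.default_rng(3)
alpha, sigma2 = fit_ggd(rng.normal(0.0, 1.0, 50000))
```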
in the second stage, an AGGD model is used to fit the MSCN coefficients multiplied pairwise with their neighbours; the weighted image and the enhanced image are each fitted along four orientations, namely horizontal H, vertical V, main diagonal D1 and secondary diagonal D2; the pairwise products for the four orientations are computed as follows:
the commonly used AGGD model is as follows:
wherein,
the shape parameter ν controls the shape of the distribution, η is the mean of the AGGD model, and the scale parameters σ_l² and σ_r² control the spread of the left and right sides respectively; the four parameters (ν, η, σ_l², σ_r²) are taken as the features extracted by the AGGD, giving 16 features over the four orientations;
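A sketch of the second stage: neighbour products in the four orientations, followed by a moment-based AGGD fit in the style of literature [9]; the lookup-table estimator below is an assumption for illustration, not the patent's stated procedure.

```python
import numpy as np
from scipy.special import gamma as G

def paired_products(m):
    """Neighbour products of the MSCN map in orientations H, V, D1, D2."""
    H = m[:, :-1] * m[:, 1:]
    V = m[:-1, :] * m[1:, :]
    D1 = m[:-1, :-1] * m[1:, 1:]
    D2 = m[1:, :-1] * m[:-1, 1:]
    return H, V, D1, D2

def aggd_features(x):
    """Moment-based AGGD fit returning (nu, eta, sigma_l^2, sigma_r^2)."""
    x = np.asarray(x, dtype=float).ravel()
    sl = np.sqrt(np.mean(x[x < 0] ** 2))      # left-side scale
    sr = np.sqrt(np.mean(x[x >= 0] ** 2))     # right-side scale
    g_hat = sl / sr
    rhat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R = rhat * (g_hat ** 3 + 1) * (g_hat + 1) / (g_hat ** 2 + 1) ** 2
    nus = np.arange(0.2, 10.0, 0.001)         # shape-parameter lookup grid
    ratio = G(2 / nus) ** 2 / (G(1 / nus) * G(3 / nus))
    nu = nus[np.argmin(np.abs(ratio - R))]
    eta = (sr - sl) * np.sqrt(G(1 / nu) / G(3 / nu)) * G(2 / nu) / G(1 / nu)
    return nu, eta, sl ** 2, sr ** 2

rng = np.random.default_rng(4)
m = rng.standard_normal((64, 64))             # stand-in MSCN map
H, V, D1, D2 = paired_products(m)
feats = aggd_features(H)                      # (nu, eta, sl^2, sr^2) for H
```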
3. extracting kurtosis and skewness of the disparity map:
different distortions modify the statistics of an image in characteristic ways; kurtosis describes the flatness or peakedness of a distribution, and skewness describes its asymmetry; kurtosis and skewness are therefore used to capture the statistical characteristics of the disparity map under different distortions, as in equation (27):
k and S represent kurtosis and skewness of the disparity map, respectively, d represents the disparity map, and E (d) is the average value of the disparity map;
Fifthly, feature fusion and SVR:
because images exhibit different characteristics at different scales, extracting features from the weighted fused image and the enhanced image at the two scales with the GGD and AGGD models yields 72 features; combined with the kurtosis and skewness features of the disparity map, this gives 74 features; the 74 features are then fed into the SVR and fitted to the subjective evaluation values; the nonlinear regression uses a logistic function, and the kernel function of the SVR is a radial basis function.
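The regression stage can be sketched with scikit-learn's SVR (an implementation assumption; the patent only specifies an RBF kernel). The 40 synthetic 74-dimensional feature vectors and stand-in DMOS targets below are toy data for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.random((40, 74))          # 40 images x 74 features (toy data)
dmos = X[:, :5].sum(axis=1)       # stand-in subjective scores

# RBF-kernel SVR as described in the text; C and gamma are assumed values.
model = SVR(kernel='rbf', C=10.0, gamma='scale')
model.fit(X, dmos)
pred = model.predict(X)           # objective scores on the training set
```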
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811498041.2A CN110246111B (en) | 2018-12-07 | 2018-12-07 | No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110246111A CN110246111A (en) | 2019-09-17 |
CN110246111B true CN110246111B (en) | 2023-05-26 |
Family
ID=67882428
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110944165B (en) * | 2019-11-13 | 2021-02-19 | 宁波大学 | Stereoscopic image visual comfort level improving method combining perceived depth quality |
CN112651922A (en) * | 2020-10-13 | 2021-04-13 | 天津大学 | Stereo image quality objective evaluation method based on feature extraction and ensemble learning |
CN112767385B (en) * | 2021-01-29 | 2022-05-17 | 天津大学 | No-reference image quality evaluation method based on significance strategy and feature fusion |
CN113014918B (en) * | 2021-03-03 | 2022-09-02 | 重庆理工大学 | Virtual viewpoint image quality evaluation method based on skewness and structural features |
CN113191424A (en) * | 2021-04-28 | 2021-07-30 | 中国石油大学(华东) | Color fusion image quality evaluation method based on multi-model fusion |
CN114998596A (en) * | 2022-05-23 | 2022-09-02 | 宁波大学 | High dynamic range stereo omnidirectional image quality evaluation method based on visual perception |
CN114782422B (en) * | 2022-06-17 | 2022-10-14 | 电子科技大学 | SVR feature fusion non-reference JPEG image quality evaluation method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413298A (en) * | 2013-07-17 | 2013-11-27 | 宁波大学 | Three-dimensional image objective evaluation method based on visual characteristics |
CN105654142A (en) * | 2016-01-06 | 2016-06-08 | 上海大学 | Natural scene statistics-based non-reference stereo image quality evaluation method |
CN105959684A (en) * | 2016-05-26 | 2016-09-21 | 天津大学 | Stereo image quality evaluation method based on binocular fusion |
CN108391121A (en) * | 2018-04-24 | 2018-08-10 | 中国科学技术大学 | It is a kind of based on deep neural network without refer to stereo image quality evaluation method |
CN108769671A (en) * | 2018-06-13 | 2018-11-06 | 天津大学 | Stereo image quality evaluation method based on adaptive blending image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102595185B (en) * | 2012-02-27 | 2014-06-25 | 宁波大学 | Stereo image quality objective evaluation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110246111B (en) | No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image | |
Fang et al. | Objective quality assessment of screen content images by uncertainty weighting | |
CN109360178B (en) | Fusion image-based non-reference stereo image quality evaluation method | |
CN108769671B (en) | Stereo image quality evaluation method based on self-adaptive fusion image | |
Md et al. | Full-reference stereo image quality assessment using natural stereo scene statistics | |
CN109919959B (en) | Tone mapping image quality evaluation method based on color, naturalness and structure | |
CN104658001B (en) | Non-reference asymmetric distorted stereo image objective quality assessment method | |
CN109255358B (en) | 3D image quality evaluation method based on visual saliency and depth map | |
CN109523513B (en) | Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image | |
CN110111304B (en) | No-reference stereoscopic image quality evaluation method based on local-global feature regression | |
CN109191428B (en) | Masking texture feature-based full-reference image quality evaluation method | |
Khan et al. | Estimating depth-salient edges and its application to stereoscopic image quality assessment | |
CN105654142B (en) | Based on natural scene statistics without reference stereo image quality evaluation method | |
CN109831664B (en) | Rapid compressed stereo video quality evaluation method based on deep learning | |
CN103780895B (en) | A kind of three-dimensional video quality evaluation method | |
TWI457853B (en) | Image processing method for providing depth information and image processing system using the same | |
CN107371016A (en) | Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods | |
CN111915589A (en) | Stereo image quality evaluation method based on hole convolution | |
CN109510981B (en) | Stereo image comfort degree prediction method based on multi-scale DCT | |
Karimi et al. | Blind stereo quality assessment based on learned features from binocular combined images | |
CN103136748A (en) | Stereo-image quality objective evaluation method based on characteristic image | |
CN109257592B (en) | Stereoscopic video quality objective evaluation method based on deep learning | |
CN103841411B (en) | A kind of stereo image quality evaluation method based on binocular information processing | |
CN102903107A (en) | Three-dimensional picture quality objective evaluation method based on feature fusion | |
CN104144339B (en) | A kind of matter based on Human Perception is fallen with reference to objective evaluation method for quality of stereo images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||