CN110246111A - No-reference stereo image quality evaluation method based on fusion image and enhanced image - Google Patents

No-reference stereo image quality evaluation method based on fusion image and enhanced image

Info

Publication number
CN110246111A
Authority
CN
China
Prior art keywords
image
fusion
disparity map
representing
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811498041.2A
Other languages
Chinese (zh)
Other versions
CN110246111B (en)
Inventor
李素梅
丁义修
常永莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University Marine Technology Research Institute
Original Assignee
Tianjin University Marine Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University Marine Technology Research Institute filed Critical Tianjin University Marine Technology Research Institute
Priority to CN201811498041.2A priority Critical patent/CN110246111B/en
Publication of CN110246111A publication Critical patent/CN110246111A/en
Application granted granted Critical
Publication of CN110246111B publication Critical patent/CN110246111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • G06T7/596Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A no-reference stereo image quality evaluation method based on a fusion image and an enhanced image. First, based on characteristics of the human visual system such as binocular fusion, binocular rivalry and binocular suppression, the left and right viewpoints of the stereo image are fused over the red, green and blue channels to obtain a color fusion image. Second, the disparity map of the distorted stereo image pair is obtained with a stereo matching algorithm, and the grayscale map of the color fusion image is weighted by the gradient weight of the disparity map. Third, an enhanced image is generated from the fusion image and the disparity map. Then, natural scene statistical features are extracted from the fusion image and the enhanced image in the spatial domain, and kurtosis and skewness features are extracted from the disparity map. Finally, the extracted features are fused and fed into support vector regression (SVR) to obtain the quality of the stereo image to be evaluated.

Description

No-reference stereo image quality evaluation method based on fusion image and enhanced image
Technical Field
The invention belongs to the field of image processing, relates to objective quality evaluation of stereo images, and particularly relates to a no-reference objective quality evaluation method for stereo images based on a fusion image and an enhanced image.
Background
With the rapid development of 3D technology, stereo image quality evaluation has become one of the indispensable research directions in the 3D field. At present, methods for evaluating stereo image quality can be divided into subjective evaluation and objective evaluation. Subjective evaluation accords with human visual characteristics, but the process is cumbersome, time-consuming and labour-intensive; objective evaluation is simpler and faster to implement and has good operability, so a large number of scholars have devoted themselves to the field of objective evaluation [1-3].
According to the degree to which original-image information is used, objective quality evaluation is divided into three categories: full-reference stereo image quality evaluation [4-6], which makes full use of the information of the original image as a reference to evaluate the distorted image; semi-reference (reduced-reference) stereo image quality evaluation [7-8], which performs quality evaluation using partial information of the original image; and no-reference stereo image quality evaluation [9-10], which completes quality evaluation using only the characteristics of the distorted image and therefore has good applicability.
At present, many scholars start from the left and right views of a stereo image, extract features from each view separately, and then obtain the evaluation result from those features; such methods cannot evaluate asymmetrically distorted stereo images well. [3] proposes a gradient dictionary learning method that models color visual characteristics on the left and right views separately, so that feature extraction is performed by sparse representation; [10] extracts brightness statistical features from the left and right views separately, and then further extracts depth and structure statistical features by combining the disparity map with each view. In practice, however, after receiving the information of the left and right viewpoints, the brain first forms a binocular fusion image, and the human visual system then perceives this fusion image. To better simulate this characteristic, some scholars began using binocular fusion images for stereo image quality evaluation. Shen [11], considering the importance of spatial frequency to the human eye, processes the left and right views with a Gabor filter and adds the processed views to form a fused image. Levelt [12] provides a linear model of the fused image based on the binocular rivalry characteristic of human eyes, weighting the left and right views separately and adding them to obtain the fused image; [13] and [14] improve this linear model by considering the importance of disparity compensation and of contrast sensitivity in human visual characteristics, respectively. To simulate human visual characteristics more accurately, Ding [15] proposes several binocular fusion models based on gain control and gain enhancement.
References
[1] Chen M J, Cormack L K, Bovik A C. No-reference quality assessment of natural stereopairs[J]. IEEE Transactions on Image Processing, 2013, 22(9): 3379-3391.
[2] Zhou W, Jiang G, Yu M, et al. Reduced-reference stereoscopic image quality assessment based on view and disparity zero-watermarks[J]. Signal Processing: Image Communication, 2014, 29(1): 167-176.
[3] Yang J, An P, Ma J, et al. No-reference stereo image quality assessment by learning gradient dictionary-based color visual characteristics[C]// IEEE International Symposium on Circuits and Systems. IEEE, 2018.
[4] Shao F, Lin W, Gu S, et al. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics[J]. IEEE Transactions on Image Processing, 2013, 22(5): 1940-1953.
[5] Zhang Y, Chandler D M. 3D-MAD: A full reference stereoscopic image quality estimator based on binocular lightness and contrast perception[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3810-3825.
[6] Lin Y, Yang J, Wen L, et al. Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase[J]. IEEE Journal of Selected Topics in Signal Processing, 2017, PP(99): 1-1.
[7] Qi F, Zhao D, Gao W. Reduced reference stereoscopic image quality assessment based on binocular perceptual information[J]. IEEE Transactions on Multimedia, 2015, 17(12): 2338-2344.
[8] Ma J, An P, Shen L, et al. Reduced-reference stereoscopic image quality assessment using natural scene statistics and structural degradation[J]. IEEE Access, 2017, PP(99): 1-1.
[9] Sazzad Z M P, Horita Y. No-reference stereoscopic image quality assessment[J]. Proceedings of SPIE - The International Society for Optical Engineering, 2010, 7524(2): 75240T-75240T-12.
[10] Fang Y, Yan J, Wang J. No reference quality assessment for stereoscopic images by statistical features[C]// Ninth International Conference on Quality of Multimedia Experience. IEEE, 2017.
[11] Shen L, Lei J, Hou C. No-reference stereoscopic 3D image quality assessment via combined model[J]. Multimedia Tools & Applications, 2017(9): 1-18.
[12] Levelt W J M. On Binocular Rivalry[M]. Mouton, The Hague, Paris, 1968.
[13] Chen M J, Su C C, Kwon D K, et al. Full-reference quality assessment of stereoscopic images by modeling binocular rivalry[C]// Signals, Systems & Computers. IEEE, 2013.
[14] Lu K, Zhu W. Stereoscopic image quality assessment based on cyclopean image[C]// Dependable, Autonomic & Secure Computing, Intl Conf on Pervasive Intelligence & Computing, Intl Conf on Big Data Intelligence & Computing & Cyber Science & Technology Congress. IEEE, 2016.
[15] Ding J, Klein S A, Levi D M. Binocular combination of phase and contrast explained by a gain-control and gain-enhancement model[J]. Journal of Vision, 2013, 13(2): 13.
[16] Liu L, Liu B, Su C C, et al. Binocular spatial activity and reverse saliency driven no-reference stereopair quality assessment[J]. Signal Processing: Image Communication, 2017.
[17] Yang J, Wang Y, Li B, et al. Quality assessment metric of stereo images considering cyclopean integration and visual saliency[J]. Information Sciences, 2016, 373(C): 251-268.
[18] Xu X, Zhao Y, Ding Y. No-reference stereoscopic image quality assessment based on saliency-guided binocular feature consolidation[J]. Electronics Letters, 2017, 53(22): 1468-1470.
[19] Ma J, An P, Shen L, et al. SSIM-based binocular perceptual model for quality assessment of stereoscopic images[C]// Visual Communications and Image Processing. IEEE, 2018.
[20] Shao F, Li K, Lin W, et al. Learning blind quality evaluator for stereoscopic images using joint sparse representation[J]. IEEE Transactions on Multimedia, 2016, 18(10): 2104-2114.
[21] Ding Y, Zhao Y. No-reference quality assessment for stereoscopic images considering visual discomfort and binocular rivalry[J]. Electronics Letters, 2018, 53(25): 1646-1647.
[22] Yue G, Hou C, Jiang Q, et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry[J]. Signal Processing, 2018.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a no-reference stereo image quality evaluation method based on a fusion image and an enhanced image; the method has good consistency with subjective evaluation by human eyes, can effectively evaluate both symmetrically and asymmetrically distorted stereo images, and thereby promotes the development of stereo imaging technology.
The no-reference stereo image quality evaluation method based on the fusion image and the enhanced image comprises the following specific contents:
acquisition of color fusion images
Firstly, Gabor filtering is performed on the red, green and blue channels to simulate the characteristics of the human eye; secondly, according to gain-control theory, the left eye applies gain control to the right eye in proportion to the contrast energy of its input signal, and also applies gain control to the gain control coming from the right eye, which is referred to as gain enhancement; the right eye likewise applies gain control and gain enhancement to the left eye; then, weights are generated for the left and right views from the total contrast energy and assigned to the left and right views respectively, and the weighted views are finally summed to obtain the color fusion image (a code sketch of the overall fusion pipeline follows step 4 below); the detailed process is as follows:
1. Gabor filter simulation of the receptive field:

$$C_{l/r}^{k}(x,y)=I_{l/r}(x,y)*G(x,y;f,\theta)=\rho_{l/r}^{k}(x,y)\,e^{\,i\phi_{l/r}^{k}(x,y)} \tag{1}$$

where $I_l$ and $I_r$ respectively represent the left and right viewpoints; $I_{l/r}(x,y)$ represents the intensity value of the left or right view at each spatial location; $C_{l/r}^{k}$ is the response resulting from the convolution of each image with a Gabor filter $G$ of spatial frequency $f$ and angle $\theta$, where $f$ has 6 scales and $\theta$ has 8 directions; the superscript $k$ represents the index of the feature map, 48 feature maps being obtained in total; $\rho$ represents the magnitude of each response and $\phi$ represents the phase of each response;
2. Gain control and gain enhancement:

The left and right views of the stereo image are processed by the Gabor filter bank to obtain 48 feature maps of different scales and directions, and the 48 feature maps are arranged in ascending order of average brightness value to obtain a set $\{C^{k}\}$; the gain enhancement $E^{ge}_{l/r}$ and the gain control $E^{gc}_{l/r}$ are then obtained by equations (2) and (3);
3. Total contrast energy:

A contrast sensitivity function is applied to each feature map $C^{k}$ to obtain $\tilde{C}^{k}$, as shown in formula (4); the weight $w_{k}$ is calculated by equation (5); then the total contrast energy of gain control is obtained by equation (6), together with the total contrast energy of gain enhancement;
4. Left and right image fusion process:

The image fusion process is performed in the three channels of the color image, red, green and blue, where $W_{l}$ is the left-view weight and $W_{r}$ is the right-view weight; the final fused image is given by equation (9):

$$F^{c}(x,y)=W_{l}(x,y)\,I_{l}^{c}(x,y)+W_{r}(x,y)\,I_{r}^{c}(x,y) \tag{9}$$

where $c\in\{R,G,B\}$ denotes the channel and $F^{c}$ represents the fused image of that channel; after fusing the three channels, the color fusion image is obtained.
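The following is a minimal Python sketch of this fusion pipeline, assuming OpenCV's cv2.getGaborKernel for the filter bank; the kernel parameter ladder and the energy-ratio weighting are illustrative simplifications, not the patent's exact equations (2)-(8).

```python
import cv2
import numpy as np

def gabor_bank_energy(channel, n_scales=6, n_orients=8):
    """Summed response magnitude of a 6-scale x 8-orientation Gabor bank
    (48 feature maps) as a crude contrast-energy estimate; the kernel
    parameters here are illustrative, not the patent's exact settings."""
    img = channel.astype(np.float64)
    energy = np.zeros_like(img)
    for s in range(n_scales):
        lam = 4.0 * (2.0 ** s)          # hypothetical wavelength ladder
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            kern = cv2.getGaborKernel((31, 31), sigma=0.5 * lam,
                                      theta=theta, lambd=lam, gamma=0.5)
            energy += np.abs(cv2.filter2D(img, -1, kern))
    return energy

def fuse_views(left, right, eps=1e-6):
    """Per-channel fusion F = W_l * I_l + W_r * I_r (equation (9)), with
    weights derived from relative contrast energy; the patent's full
    gain-control/gain-enhancement chain is abstracted away here."""
    fused = np.zeros(left.shape, dtype=np.float64)
    for c in range(3):                   # R, G, B channels
        e_l = gabor_bank_energy(left[:, :, c])
        e_r = gabor_bank_energy(right[:, :, c])
        w_l = e_l / (e_l + e_r + eps)    # left-view weight W_l
        fused[:, :, c] = w_l * left[:, :, c] + (1.0 - w_l) * right[:, :, c]
    return fused
```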
Obtaining a disparity map and a disparity gradient weight:
processing the distorted stereo image pair by using a stereo matching algorithm based on structural similarity to obtain a disparity map; then, calculating the kurtosis and skewness of the disparity map by using a statistical method;
The normalized fused image is weighted by a weight generated from the disparity gradient to predict visual saliency, as given in equation (10), where $\nabla d$ represents the gradient magnitude of the disparity map:
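As a rough illustration of this weighting step, the sketch below derives the weight from the Sobel gradient magnitude of the disparity map; the exact weighting form of equation (10) is not reproduced in the text, so the formula used here is a hypothetical one.

```python
import cv2
import numpy as np

def disparity_gradient_weight(fused_mscn, disparity):
    """Weight the normalized fused image by a weight derived from the
    gradient magnitude of the disparity map, in the spirit of equation
    (10); the weight formula below is a hypothetical stand-in."""
    d = disparity.astype(np.float64)
    gx = cv2.Sobel(d, cv2.CV_64F, 1, 0)      # horizontal gradient
    gy = cv2.Sobel(d, cv2.CV_64F, 0, 1)      # vertical gradient
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)    # disparity gradient magnitude
    weight = 1.0 + grad_mag / (grad_mag.max() + 1e-6)
    return fused_mscn * weight
```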
acquiring an enhanced image:
Disparity compensation is applied to the fused image multiplicatively: the fused image and its disparity-compensated version are multiplied to form the enhanced image, which highlights the texture of the image; the enhanced image is computed by equation (11):

$$E(x,y)=F_{g}(x,y)\cdot F_{g}(x+d(x,y),\,y) \tag{11}$$

where $E$ represents the enhanced image; $F_{g}$ represents the grayscale map of the fused image; $d$ represents the horizontal disparity; and $(x,y)$ are the spatial coordinates of the image;
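A minimal sketch of equation (11), assuming integer-rounded horizontal disparities and edge clamping at the image border:

```python
import numpy as np

def enhance(fused_gray, disparity):
    """Enhanced image of equation (11): E(x, y) = F(x, y) * F(x + d(x, y), y),
    i.e. the fused grayscale image multiplied by its horizontally
    disparity-compensated version."""
    h, w = fused_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    shifted = np.clip(xs + np.round(disparity).astype(int), 0, w - 1)
    compensated = fused_gray[ys, shifted]    # F(x + d(x, y), y)
    return fused_gray * compensated
```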
normalizing the image and extracting the characteristics:
1. normalization of the image:
The fused image and its enhanced image are each subjected to a mean-subtracted contrast-normalized (MSCN) operation; this normalization removes the local correlation of the image and makes the image brightness values tend toward a Gaussian distribution; the MSCN coefficients are calculated as shown in formulas (12)-(14), giving the MSCN coefficients of the fused image, which are further weighted as shown in formula (15):

$$\hat{F}(i,j)=\frac{F_{g}(i,j)-\mu(i,j)}{\sigma(i,j)+C} \tag{12}$$

$$\mu(i,j)=\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\,F_{g}(i+k,j+l) \tag{13}$$

$$\sigma(i,j)=\sqrt{\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\left(F_{g}(i+k,j+l)-\mu(i,j)\right)^{2}} \tag{14}$$

where $F_{g}$ is the grayscale map of the fused or enhanced image; $i\in[1,M]$, $j\in[1,N]$, with $M$ and $N$ respectively representing the height and width of the image; $C$ is a constant; $\mu(i,j)$ represents the local mean; $\sigma(i,j)$ represents the local variance; and $w=\{w_{k,l}\}$ is a circularly symmetric Gaussian weighting function sampled out to 3 standard deviations, with the Gaussian filtering window set accordingly;
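A compact sketch of the MSCN operation of equations (12)-(14), assuming SciPy's gaussian_filter; the window parameters follow conventional BRISQUE-style choices rather than values taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7.0 / 6.0, C=1.0):
    """Mean-subtracted contrast-normalized coefficients of equations
    (12)-(14); gaussian_filter with truncate=3.0 approximates the
    circularly symmetric Gaussian window sampled to 3 standard
    deviations. sigma and C are assumed, not taken from the patent."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma, truncate=3.0)                 # local mean, eq. (13)
    var = gaussian_filter(img * img, sigma, truncate=3.0) - mu * mu
    sigma_map = np.sqrt(np.abs(var))                               # local deviation, eq. (14)
    return (img - mu) / (sigma_map + C)                            # eq. (12)
```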
2. Fitting Gaussian models to extract features:
Gaussian models capture the variation of natural scene statistics in the spatial domain and have been used for evaluating the quality of planar images; natural scene statistics play an important role in simulating the human visual system; the two Gaussian models below are applied here to stereo image quality evaluation with good results;
in order to capture the differences under different distortion types, feature extraction is performed on the weighted image and the enhanced image by fitting a generalized Gaussian distribution (GGD) and an asymmetric generalized Gaussian distribution (AGGD) at two scales; the process can be divided into two stages:
In the first stage, the distributions of the MSCN coefficients of the weighted image and of the enhanced image are fitted with a GGD model, which effectively captures the statistical characteristics of the distorted image; the zero-mean GGD model is calculated as:

$$f\left(x;\alpha,\sigma^{2}\right)=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right),\qquad \beta=\sigma\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}}$$

where $\Gamma(\cdot)$ is the gamma function; the zero-mean GGD model makes the MSCN coefficient distribution approximately symmetric; $\alpha$ controls the general shape of the distribution, and $\sigma^{2}$ represents the variance and controls the degree of shape change, so these two parameters $(\alpha,\sigma^{2})$ are used to capture image information as the features of the first stage;
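A sketch of the standard moment-matching estimator for the zero-mean GGD parameters, assuming the usual ratio-function search over a grid of candidate shapes:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def fit_ggd(coeffs):
    """Moment-matching fit of the zero-mean GGD: pick the shape alpha
    whose theoretical ratio Gamma(1/a)Gamma(3/a)/Gamma(2/a)^2 matches
    the empirical ratio E[x^2] / E[|x|]^2, then read off the variance.
    Returns the two first-stage features (alpha, sigma^2)."""
    x = np.asarray(coeffs, dtype=np.float64).ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    r_alpha = gamma_fn(1.0 / alphas) * gamma_fn(3.0 / alphas) / gamma_fn(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((r_alpha - rho) ** 2)]
    return alpha, sigma_sq
```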
In the second stage, the AGGD model is used to fit the pairwise products of neighbouring MSCN coefficients; the weighted image and the enhanced image are each fitted along four orientations: horizontal H, vertical V, main diagonal D1 and secondary diagonal D2; the products along these four orientations are computed as:

$$H(i,j)=\hat{F}(i,j)\,\hat{F}(i,j+1)$$
$$V(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j)$$
$$D1(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j+1)$$
$$D2(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j-1)$$

The commonly used AGGD model is as follows:

$$f\left(x;\gamma,\sigma_{l}^{2},\sigma_{r}^{2}\right)=\begin{cases}\dfrac{\gamma}{\left(\beta_{l}+\beta_{r}\right)\Gamma(1/\gamma)}\exp\left(-\left(\dfrac{-x}{\beta_{l}}\right)^{\gamma}\right), & x<0\\[2mm]\dfrac{\gamma}{\left(\beta_{l}+\beta_{r}\right)\Gamma(1/\gamma)}\exp\left(-\left(\dfrac{x}{\beta_{r}}\right)^{\gamma}\right), & x\geq 0\end{cases}$$

where $\beta_{l}=\sigma_{l}\sqrt{\Gamma(1/\gamma)/\Gamma(3/\gamma)}$, $\beta_{r}=\sigma_{r}\sqrt{\Gamma(1/\gamma)/\Gamma(3/\gamma)}$, and the mean is $\eta=\left(\beta_{r}-\beta_{l}\right)\dfrac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}$;

the shape parameter $\gamma$ controls the shape of the distribution, $\eta$ is the mean of the AGGD model, and the scale parameters $\sigma_{l}^{2}$ and $\sigma_{r}^{2}$ control the spread of the left and right sides respectively; the four parameters $(\gamma,\eta,\sigma_{l}^{2},\sigma_{r}^{2})$ are taken as the features extracted by the AGGD, giving 16 features over the four orientations;
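The four orientation products can be computed with simple array slicing, as in the sketch below; fitting each product map with the AGGD then yields the four parameters per orientation:

```python
import numpy as np

def pairwise_products(mscn_map):
    """Products of neighbouring MSCN coefficients along the four
    orientations of the second stage: horizontal H, vertical V, main
    diagonal D1 and secondary diagonal D2. Each of the four maps is
    then fitted with the AGGD, whose four parameters per orientation
    give the 16 second-stage features."""
    m = np.asarray(mscn_map, dtype=np.float64)
    H  = m[:, :-1] * m[:, 1:]        # (i, j) x (i, j+1)
    V  = m[:-1, :] * m[1:, :]        # (i, j) x (i+1, j)
    D1 = m[:-1, :-1] * m[1:, 1:]     # (i, j) x (i+1, j+1)
    D2 = m[:-1, 1:] * m[1:, :-1]     # (i, j) x (i+1, j-1)
    return H, V, D1, D2
```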
3. extracting kurtosis and skewness of the disparity map:
Different distortions modify the statistics of an image in specific ways: kurtosis describes the peakedness or flatness of a distribution, and skewness describes its asymmetry; kurtosis and skewness are therefore used to capture the statistical characteristics of the disparity map under different distortion conditions, as shown in equation (27):

$$K=\frac{E\left[\left(d-\mu_{d}\right)^{4}\right]}{\left(E\left[\left(d-\mu_{d}\right)^{2}\right]\right)^{2}},\qquad S=\frac{E\left[\left(d-\mu_{d}\right)^{3}\right]}{\left(E\left[\left(d-\mu_{d}\right)^{2}\right]\right)^{3/2}} \tag{27}$$

where $K$ and $S$ respectively represent the kurtosis and skewness of the disparity map, $d$ represents the disparity map, and $\mu_{d}$ is the mean of the disparity map;
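Equation (27) corresponds directly to the standard statistical definitions, as in this sketch using SciPy:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def disparity_stats(disparity):
    """Kurtosis and skewness of the disparity map, equation (27);
    fisher=False returns the conventional (non-excess) kurtosis that
    matches the fourth-moment definition."""
    d = np.asarray(disparity, dtype=np.float64).ravel()
    return kurtosis(d, fisher=False), skew(d)
```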
feature fusion and SVR:
Because an image exhibits different characteristics at different scales, feature extraction is performed on the weighted fusion image and the enhanced image with the GGD and AGGD models at two scales, yielding 72 features; adding the kurtosis and skewness features of the disparity map gives 74 features; the 74 features are then fused and fed into an SVR (support vector regression) to be fitted to the subjective evaluation values, where the nonlinear regression function uses a logistic function and the kernel function of the SVR uses a radial basis function.
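A minimal regression sketch with scikit-learn's SVR; the feature matrix, DMOS targets and hyperparameter values below are placeholders, since the patent does not disclose the training configuration:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical stand-in data: rows are 74-dimensional feature vectors
# (72 GGD/AGGD features over two scales plus disparity kurtosis and
# skewness), targets are subjective DMOS values.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((300, 74)), rng.random(300) * 100.0
X_test = rng.random((10, 74))

# RBF kernel as stated in the text; C, gamma and epsilon are assumed
# hyperparameters, typically chosen by cross-validation in practice.
model = SVR(kernel="rbf", C=1024.0, gamma=0.05, epsilon=0.5)
model.fit(X_train, y_train)
predicted_quality = model.predict(X_test)    # objective quality scores
```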
A no-reference stereo image quality evaluation method based on a fusion image and an enhanced image: first, based on characteristics of the human visual system such as binocular fusion, binocular rivalry and binocular suppression, the red, green and blue channels of the left and right viewpoints of the stereo image are fused to obtain a color fusion image; second, the disparity map of the distorted stereo image pair is obtained with a stereo matching algorithm, and the grayscale map of the color fusion image is weighted by the gradient weight of the disparity map; third, an enhanced image is generated from the fusion image and the disparity map; then, natural scene statistical features are extracted from the fusion image and the enhanced image in the spatial domain, and kurtosis and skewness features are extracted from the disparity map; finally, the extracted features are fused and fed into support vector regression (SVR) to obtain the quality of the stereo image to be evaluated.
Drawings
FIG. 1 is a block diagram of the algorithm of the present invention;
FIG. 2 is a schematic diagram of a color fusion image fusion process;
FIG. 3 shows the MSCN coefficient distributions of the fused image and the enhanced image (ori: original image; wn: white noise; jp2k: JPEG2000 compression; jpeg: JPEG compression; blur: Gaussian blur; ff: fast fading);
FIG. 4 shows the distributions of the products of horizontally adjacent MSCN coefficients of the fused image (ori: original image; wn: white noise; jp2k: JPEG2000 compression; jpeg: JPEG compression; blur: Gaussian blur; ff: fast fading).
Detailed Description
First, a color fusion image and a disparity map are formed from the left and right views, and an enhanced image is obtained from the disparity map and the fused image. Considering the importance of disparity information, the disparity map is mined from multiple angles: its statistical features are extracted, and the disparity gradient weight is calculated to weight the fused image so as to better conform to the characteristics of human eyes. Gaussian models are then adopted to capture the statistical features of the weighted fusion image and the enhanced image, and finally all features are fused and fitted to the subjective scores. Experimental results show that the algorithm performs excellently, agrees well with human subjective evaluation, and yields accurate model predictions. The overall framework is shown in FIG. 1, and the color fusion image fusion process is shown in FIG. 2.
In the technical scheme of this patent, omitting the disparity gradient weight, or the kurtosis and skewness of the disparity map, or both, while keeping the other factors the same, yields three further stereo image quality evaluation methods, whose performance appears as the first three methods in Table 1; each evaluates stereo image quality fairly well, but their PLCC, SROCC and RMSE indices are inferior to those of the full method herein. This means that several other feasible, though not optimal, methods can be derived from the technical scheme. Comparison with the remaining methods in Table 1 shows that the score drops greatly when the enhanced image is not used, and that using the multiplied image of SINQ [16] within the framework proposed herein also scores lower than the method herein, which demonstrates the superiority of the proposed enhanced image. The enhanced image, the disparity gradient weight, and the kurtosis and skewness of the disparity map draw on information from different angles of the disparity map; this information plays a very significant role, improves the quality score to a great extent, perfects the method, and yields the best implementation of the method.
Table 1 Performance comparison of the methods herein
FIGS. 3(a) and (b) show the distributions of the MSCN coefficients of the fused image and the enhanced image respectively, and FIGS. 4(a) and (b) show the distributions of the products of horizontally adjacent MSCN coefficients of the fused image and the enhanced image respectively, where the original image and the distorted versions of the fused and enhanced images use the same scene. Different distortion types change the shapes of the statistical distributions differently; in FIGS. 3 and 4, the distorted images cause the original distribution to be squeezed or spread to different degrees. From the manner and degree of these deformations, the different distortion types of an image can be approximately distinguished.
The distribution of the original image in FIG. 3 is Gaussian, and introducing different distortions squeezes or spreads it to different degrees. In FIG. 3(a), the distribution of the jp2k distorted image is obviously squeezed, with a shape close to a Laplacian distribution; the distribution of the wn distorted image is spread and remains Gaussian. In FIG. 3(b), the peak of the wn distorted image distribution is significantly shifted.
The distribution of the original image in FIG. 4 is left-right asymmetric, and introducing distortion squeezes or spreads the distribution while changing the degree of asymmetry. In FIGS. 4(a) and (b), the original image distributions are asymmetric; the distribution of the wn distorted image is not only spread but also more obviously asymmetric than that of the original image; the distribution of the jp2k distorted image, while not squeezed, is more asymmetric than that of the original image.
From the above analysis, the statistical characteristics of the MSCN coefficients of an image can reflect, and quantify, the differences between different distorted images to a certain extent. The feature extraction method of [9] is used herein to quantify these differences: the weighted image and the enhanced image are each fitted with a generalized Gaussian distribution and an asymmetric generalized Gaussian distribution at two scales to obtain statistical features, and the process can be divided into two stages.
The performance of the proposed algorithm is tested on two public stereo image databases (LIVE Phase I and LIVE Phase II). The LIVE Phase I database contains 365 symmetrically distorted stereo image pairs and 20 original stereo image pairs; LIVE Phase II contains 360 symmetrically and asymmetrically distorted stereo image pairs and 8 original stereo image pairs. The feature vectors are fed into the SVR and fitted to the DMOS values, and three scores, PLCC (Pearson Linear Correlation Coefficient), SROCC (Spearman Rank-Order Correlation Coefficient) and RMSE (Root Mean Squared Error), are used to measure the results. The lower the RMSE and the higher the PLCC and SROCC, the better the performance of the algorithm and the better the consistency between the objective and subjective quality scores.
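The three scores can be computed as in the following sketch, assuming SciPy's correlation routines; the logistic nonlinear mapping usually applied before PLCC/RMSE is omitted for brevity:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluation_metrics(predicted, dmos):
    """PLCC, SROCC and RMSE between predicted quality scores and DMOS
    values, as used to measure performance on LIVE Phase I / II."""
    p = np.asarray(predicted, dtype=np.float64)
    d = np.asarray(dmos, dtype=np.float64)
    plcc = pearsonr(p, d)[0]
    srocc = spearmanr(p, d)[0]
    rmse = float(np.sqrt(np.mean((p - d) ** 2)))
    return plcc, srocc, rmse
```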
The method is compared with previously published stereo image quality evaluation results. [18] performs no-reference quality evaluation based on a multi-scale feature fusion method; [17] and [19] perform full-reference stereo image quality evaluation using fusion images; [20] proposes a no-reference joint sparse representation method for quality evaluation; [6], [16] and [21] perform no-reference quality evaluation using fusion images; [22] evaluates by analysing the natural statistical characteristics, structural characteristics and asymmetry of the distorted image. Table 2 shows the results of all algorithms on the LIVE Phase I and LIVE Phase II databases, with the best-performing algorithm shown in bold.
As can be seen from Table 2, owing to the large number of asymmetrically distorted images in the LIVE Phase II database, the performance of the other methods on LIVE Phase I is significantly higher than on LIVE Phase II, whereas the performance of the method herein is close on the two databases and excellent on both. This shows that the method conforms to the visual characteristics of human eyes and can accurately evaluate both symmetrically and asymmetrically distorted images. Compared with prior full-reference and no-reference methods, the advantages of the method herein are evident: on the LIVE Phase I database, PLCC is 0.9583, SROCC is 0.9507 and RMSE is 4.3811; on the LIVE Phase II database, PLCC is 0.9575, SROCC is 0.9542 and RMSE is 3.0689. The method achieves good performance without using original image information and outperforms other no-reference methods, showing good practicality and robustness.
Table 2 Overall performance comparison between different methods
Because the method herein adopts and improves upon [16], Tables 3 and 4 give a detailed comparison of the indicators between the method herein and [16]. As can be seen from Table 3, the scores of the method herein for each individual distortion type and the overall score on the LIVE Phase I database are higher than those of [16]; as can be seen from Table 4, the scores for the single distortion types and the overall score on the LIVE Phase II database are likewise higher than those of [16]. This not only reflects that the method herein improves the indicators of [16], but also shows that it can effectively evaluate the quality of both symmetrically and asymmetrically distorted stereo images and better conforms to the visual characteristics of human eyes, being well suited to asymmetrically distorted stereo images.
Table 3 Performance comparison of two different methods on the LIVE Phase I database
Table 4 Performance comparison of two different methods on the LIVE Phase II database

Claims (1)

1. The no-reference stereo image quality evaluation method based on the fusion image and the enhanced image is characterized by comprising the following specific contents:
acquisition of color fusion images
Firstly, Gabor filtering is performed on the red, green and blue channels to simulate the characteristics of the human eye; secondly, according to gain-control theory, the left eye applies gain control to the right eye in proportion to the contrast energy of its input signal, and also applies gain control to the gain control coming from the right eye, which is referred to as gain enhancement; the right eye likewise applies gain control and gain enhancement to the left eye; then, weights are generated for the left and right views from the total contrast energy and assigned to the left and right views respectively, and the weighted views are finally summed to obtain the color fusion image; the detailed process is as follows:
1. Gabor filter simulation of the receptive field:

$$C_{l/r}^{k}(x,y)=I_{l/r}(x,y)*G(x,y;f,\theta)=\rho_{l/r}^{k}(x,y)\,e^{\,i\phi_{l/r}^{k}(x,y)} \tag{1}$$

where $I_l$ and $I_r$ respectively represent the left and right viewpoints; $I_{l/r}(x,y)$ represents the intensity value of the left or right view at each spatial location; $C_{l/r}^{k}$ is the response resulting from the convolution of each image with a Gabor filter $G$ of spatial frequency $f$ and angle $\theta$, where $f$ has 6 scales and $\theta$ has 8 directions; the superscript $k$ represents the index of the feature map, 48 feature maps being obtained in total; $\rho$ represents the magnitude of each response and $\phi$ represents the phase of each response;
2. Gain control and gain enhancement:

The left and right views of the stereo image are processed by the Gabor filter bank to obtain 48 feature maps of different scales and directions, and the 48 feature maps are arranged in ascending order of average brightness value to obtain a set $\{C^{k}\}$; the gain enhancement $E^{ge}_{l/r}$ and the gain control $E^{gc}_{l/r}$ are then obtained by equations (2) and (3);
3. Total contrast energy:

A contrast sensitivity function is applied to each feature map $C^{k}$ to obtain $\tilde{C}^{k}$, as shown in formula (4); the weight $w_{k}$ is calculated by equation (5); then the total contrast energy of gain control is obtained by equation (6), together with the total contrast energy of gain enhancement;
4. Left and right image fusion process:

The image fusion process is performed in the three channels of the color image, red, green and blue, where $W_{l}$ is the left-view weight and $W_{r}$ is the right-view weight; the final fused image is given by equation (9):

$$F^{c}(x,y)=W_{l}(x,y)\,I_{l}^{c}(x,y)+W_{r}(x,y)\,I_{r}^{c}(x,y) \tag{9}$$

where $c\in\{R,G,B\}$ denotes the channel and $F^{c}$ represents the fused image of that channel; after fusing the three channels, the color fusion image is obtained.
Obtaining a disparity map and a disparity gradient weight:
processing the distorted stereo image pair by using a stereo matching algorithm based on structural similarity to obtain a disparity map; then, calculating the kurtosis and skewness of the disparity map by using a statistical method;
The normalized fused image is weighted by a weight generated from the disparity gradient to predict visual saliency, as given in equation (10), where $\nabla d$ represents the gradient magnitude of the disparity map:
acquiring an enhanced image:
Disparity compensation is applied to the fused image multiplicatively: the fused image and its disparity-compensated version are multiplied to form the enhanced image, which highlights the texture of the image; the enhanced image is computed by equation (11):

$$E(x,y)=F_{g}(x,y)\cdot F_{g}(x+d(x,y),\,y) \tag{11}$$

where $E$ represents the enhanced image; $F_{g}$ represents the grayscale map of the fused image; $d$ represents the horizontal disparity; and $(x,y)$ are the spatial coordinates of the image;
normalizing the image and extracting the characteristics:
1. normalization of the image:
The fused image and its enhanced image are each subjected to a mean-subtracted contrast-normalized (MSCN) operation; this normalization removes the local correlation of the image and makes the image brightness values tend toward a Gaussian distribution; the MSCN coefficients are calculated as shown in formulas (12)-(14), giving the MSCN coefficients of the fused image, which are further weighted as shown in formula (15):

$$\hat{F}(i,j)=\frac{F_{g}(i,j)-\mu(i,j)}{\sigma(i,j)+C} \tag{12}$$

$$\mu(i,j)=\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\,F_{g}(i+k,j+l) \tag{13}$$

$$\sigma(i,j)=\sqrt{\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\left(F_{g}(i+k,j+l)-\mu(i,j)\right)^{2}} \tag{14}$$

where $F_{g}$ is the grayscale map of the fused or enhanced image; $i\in[1,M]$, $j\in[1,N]$, with $M$ and $N$ respectively representing the height and width of the image; $C$ is a constant; $\mu(i,j)$ represents the local mean; $\sigma(i,j)$ represents the local variance; and $w=\{w_{k,l}\}$ is a circularly symmetric Gaussian weighting function sampled out to 3 standard deviations, with the Gaussian filtering window set accordingly;
2. Fitting Gaussian models to extract features:
Gaussian models capture the variation of natural scene statistics in the spatial domain and have been used for evaluating the quality of planar images; natural scene statistics play an important role in simulating the human visual system; the two Gaussian models below are applied here to stereo image quality evaluation with good results;
in order to capture the differences under different distortion types, feature extraction is performed on the weighted image and the enhanced image by fitting a generalized Gaussian distribution (GGD) and an asymmetric generalized Gaussian distribution (AGGD) at two scales; the process can be divided into two stages:
In the first stage, the distributions of the MSCN coefficients of the weighted image and of the enhanced image are fitted with a GGD model, which effectively captures the statistical characteristics of the distorted image; the zero-mean GGD model is calculated as:

$$f\left(x;\alpha,\sigma^{2}\right)=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right),\qquad \beta=\sigma\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}}$$

where $\Gamma(\cdot)$ is the gamma function; the zero-mean GGD model makes the MSCN coefficient distribution approximately symmetric; $\alpha$ controls the general shape of the distribution, and $\sigma^{2}$ represents the variance and controls the degree of shape change, so these two parameters $(\alpha,\sigma^{2})$ are used to capture image information as the features of the first stage;
In the second stage, the AGGD model is used to fit the pairwise products of neighbouring MSCN coefficients; the weighted image and the enhanced image are each fitted along four orientations: horizontal H, vertical V, main diagonal D1 and secondary diagonal D2; the products along these four orientations are computed as:

$$H(i,j)=\hat{F}(i,j)\,\hat{F}(i,j+1)$$
$$V(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j)$$
$$D1(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j+1)$$
$$D2(i,j)=\hat{F}(i,j)\,\hat{F}(i+1,j-1)$$

The commonly used AGGD model is as follows:

$$f\left(x;\gamma,\sigma_{l}^{2},\sigma_{r}^{2}\right)=\begin{cases}\dfrac{\gamma}{\left(\beta_{l}+\beta_{r}\right)\Gamma(1/\gamma)}\exp\left(-\left(\dfrac{-x}{\beta_{l}}\right)^{\gamma}\right), & x<0\\[2mm]\dfrac{\gamma}{\left(\beta_{l}+\beta_{r}\right)\Gamma(1/\gamma)}\exp\left(-\left(\dfrac{x}{\beta_{r}}\right)^{\gamma}\right), & x\geq 0\end{cases}$$

where $\beta_{l}=\sigma_{l}\sqrt{\Gamma(1/\gamma)/\Gamma(3/\gamma)}$, $\beta_{r}=\sigma_{r}\sqrt{\Gamma(1/\gamma)/\Gamma(3/\gamma)}$, and the mean is $\eta=\left(\beta_{r}-\beta_{l}\right)\dfrac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}$;

the shape parameter $\gamma$ controls the shape of the distribution, $\eta$ is the mean of the AGGD model, and the scale parameters $\sigma_{l}^{2}$ and $\sigma_{r}^{2}$ control the spread of the left and right sides respectively; the four parameters $(\gamma,\eta,\sigma_{l}^{2},\sigma_{r}^{2})$ are taken as the features extracted by the AGGD, giving 16 features over the four orientations;
3. extracting kurtosis and skewness of the disparity map:
Different distortions modify the statistics of an image in specific ways: kurtosis describes the peakedness or flatness of a distribution, and skewness describes its asymmetry; kurtosis and skewness are therefore used to capture the statistical characteristics of the disparity map under different distortion conditions, as shown in equation (27):

$$K=\frac{E\left[\left(d-\mu_{d}\right)^{4}\right]}{\left(E\left[\left(d-\mu_{d}\right)^{2}\right]\right)^{2}},\qquad S=\frac{E\left[\left(d-\mu_{d}\right)^{3}\right]}{\left(E\left[\left(d-\mu_{d}\right)^{2}\right]\right)^{3/2}} \tag{27}$$

where $K$ and $S$ respectively represent the kurtosis and skewness of the disparity map, $d$ represents the disparity map, and $\mu_{d}$ is the mean of the disparity map;
feature fusion and SVR:
Because an image exhibits different characteristics at different scales, feature extraction is performed on the weighted fusion image and the enhanced image with the GGD and AGGD models at two scales, yielding 72 features; adding the kurtosis and skewness features of the disparity map gives 74 features; the 74 features are then fused and fed into an SVR (support vector regression) to be fitted to the subjective evaluation values, where the nonlinear regression function uses a logistic function and the kernel function of the SVR uses a radial basis function.
CN201811498041.2A 2018-12-07 2018-12-07 No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image Active CN110246111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811498041.2A CN110246111B (en) 2018-12-07 2018-12-07 No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811498041.2A CN110246111B (en) 2018-12-07 2018-12-07 No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image

Publications (2)

Publication Number Publication Date
CN110246111A true CN110246111A (en) 2019-09-17
CN110246111B CN110246111B (en) 2023-05-26

Family

ID=67882428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811498041.2A Active CN110246111B (en) 2018-12-07 2018-12-07 No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image

Country Status (1)

Country Link
CN (1) CN110246111B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944165A (en) * 2019-11-13 2020-03-31 宁波大学 Stereoscopic image visual comfort level improving method combining perceived depth quality
CN112651922A (en) * 2020-10-13 2021-04-13 天津大学 Stereo image quality objective evaluation method based on feature extraction and ensemble learning
CN112767385A (en) * 2021-01-29 2021-05-07 天津大学 No-reference image quality evaluation method based on significance strategy and feature fusion
CN113014918A (en) * 2021-03-03 2021-06-22 重庆理工大学 Virtual viewpoint image quality evaluation method based on skewness and structural features
CN113191424A (en) * 2021-04-28 2021-07-30 中国石油大学(华东) Color fusion image quality evaluation method based on multi-model fusion
CN114782422A (en) * 2022-06-17 2022-07-22 电子科技大学 SVR feature fusion non-reference JPEG image quality evaluation method
CN114998596A (en) * 2022-05-23 2022-09-02 宁波大学 High dynamic range stereo omnidirectional image quality evaluation method based on visual perception

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413298A (en) * 2013-07-17 2013-11-27 宁波大学 Three-dimensional image objective evaluation method based on visual characteristics
US20140064604A1 (en) * 2012-02-27 2014-03-06 Ningbo University Method for objectively evaluating quality of stereo image
CN105654142A (en) * 2016-01-06 2016-06-08 上海大学 Natural scene statistics-based non-reference stereo image quality evaluation method
CN105959684A (en) * 2016-05-26 2016-09-21 天津大学 Stereo image quality evaluation method based on binocular fusion
CN108391121A (en) * 2018-04-24 2018-08-10 中国科学技术大学 It is a kind of based on deep neural network without refer to stereo image quality evaluation method
CN108769671A (en) * 2018-06-13 2018-11-06 天津大学 Stereo image quality evaluation method based on adaptive blending image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064604A1 (en) * 2012-02-27 2014-03-06 Ningbo University Method for objectively evaluating quality of stereo image
CN103413298A (en) * 2013-07-17 2013-11-27 宁波大学 Three-dimensional image objective evaluation method based on visual characteristics
CN105654142A (en) * 2016-01-06 2016-06-08 上海大学 Natural scene statistics-based non-reference stereo image quality evaluation method
CN105959684A (en) * 2016-05-26 2016-09-21 天津大学 Stereo image quality evaluation method based on binocular fusion
CN108391121A (en) * 2018-04-24 2018-08-10 中国科学技术大学 It is a kind of based on deep neural network without refer to stereo image quality evaluation method
CN108769671A (en) * 2018-06-13 2018-11-06 天津大学 Stereo image quality evaluation method based on adaptive blending image

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944165A (en) * 2019-11-13 2020-03-31 宁波大学 Stereoscopic image visual comfort level improving method combining perceived depth quality
CN110944165B (en) * 2019-11-13 2021-02-19 宁波大学 Stereoscopic image visual comfort level improving method combining perceived depth quality
CN112651922A (en) * 2020-10-13 2021-04-13 天津大学 Stereo image quality objective evaluation method based on feature extraction and ensemble learning
CN112767385A (en) * 2021-01-29 2021-05-07 天津大学 No-reference image quality evaluation method based on significance strategy and feature fusion
CN112767385B (en) * 2021-01-29 2022-05-17 天津大学 No-reference image quality evaluation method based on significance strategy and feature fusion
CN113014918A (en) * 2021-03-03 2021-06-22 重庆理工大学 Virtual viewpoint image quality evaluation method based on skewness and structural features
CN113014918B (en) * 2021-03-03 2022-09-02 重庆理工大学 Virtual viewpoint image quality evaluation method based on skewness and structural features
CN113191424A (en) * 2021-04-28 2021-07-30 中国石油大学(华东) Color fusion image quality evaluation method based on multi-model fusion
CN114998596A (en) * 2022-05-23 2022-09-02 宁波大学 High dynamic range stereo omnidirectional image quality evaluation method based on visual perception
CN114782422A (en) * 2022-06-17 2022-07-22 电子科技大学 SVR feature fusion non-reference JPEG image quality evaluation method
CN114782422B (en) * 2022-06-17 2022-10-14 电子科技大学 SVR feature fusion non-reference JPEG image quality evaluation method

Also Published As

Publication number Publication date
CN110246111B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN110246111B (en) No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image
Shao et al. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties
CN109919959B (en) Tone mapping image quality evaluation method based on color, naturalness and structure
CN104658001B (en) Non-reference asymmetric distorted stereo image objective quality assessment method
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
Khan et al. Estimating depth-salient edges and its application to stereoscopic image quality assessment
CN109345502B (en) Stereo image quality evaluation method based on disparity map stereo structure information extraction
CN105654142B (en) Based on natural scene statistics without reference stereo image quality evaluation method
CN108769671B (en) Stereo image quality evaluation method based on self-adaptive fusion image
CN107635136B (en) View-based access control model perception and binocular competition are without reference stereo image quality evaluation method
CN109523513B (en) Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image
CN102663747B (en) Stereo image objectivity quality evaluation method based on visual perception
CN108830823B (en) Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis
CN110189294B (en) RGB-D image significance detection method based on depth reliability analysis
CN109257592B (en) Stereoscopic video quality objective evaluation method based on deep learning
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN107371016A (en) Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods
CN109788275A (en) Naturality, structure and binocular asymmetry are without reference stereo image quality evaluation method
TWI457853B (en) Image processing method for providing depth information and image processing system using the same
CN109510981B (en) Stereo image comfort degree prediction method based on multi-scale DCT
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
Karimi et al. Blind stereo quality assessment based on learned features from binocular combined images
Appina et al. A full reference stereoscopic video quality assessment metric
CN103841411B (en) A kind of stereo image quality evaluation method based on binocular information processing
CN104243970A (en) 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant