CN107977967A - No-reference image quality assessment method for view synthesis - Google Patents


Info

Publication number: CN107977967A (application CN201711399720.XA; granted as CN107977967B)
Authority: CN (China)
Inventors: Zhou Yu (周玉), Li Leida (李雷达), Lu Zhaolin (卢兆林)
Current assignee: China University of Mining and Technology (CUMT)
Legal status: Active (granted)

Classifications

    • G06T7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06T2207/10028 - Image acquisition modality; range image, depth image, 3D point clouds
    • G06T2207/20024 - Special algorithmic details; filtering details
    • G06T2207/20221 - Image combination; image fusion, image merging
    • G06T2207/30168 - Subject of image; image quality inspection

Abstract

The present invention proposes a no-reference image quality assessment method for view synthesis. The method quantifies the distortion in the synthesized image through features designed from an analysis of the characteristics of the two distortion classes: it first quantifies the distortion caused by edge damage and by the loss of texture naturalness and extracts the corresponding features; it then integrates the features with a machine-learning method to train a quality assessment model that evaluates the distortion introduced by the whole synthesis process. The invention overcomes two shortcomings of existing methods: (1) existing methods can evaluate only one class of distortion in the synthesis process, whereas this method effectively evaluates both classes over the whole synthesis process; (2) most existing methods are full-reference methods, i.e. they can assess a distorted image only when the original distortion-free image is available, whereas this method is a no-reference method and therefore has a much wider range of applications.

Description

A no-reference image quality assessment method for view synthesis
Technical field
The present invention relates to objective visual quality assessment of virtual view synthesis, and in particular to a no-reference image quality assessment method for view synthesis.
Background technology
View synthesis generates a new virtual-view image from a texture image and a depth image. View synthesis technology has very wide applications in fields such as multi-view video and free-viewpoint TV [1]. Quality assessment of synthesized views makes it possible to quantitatively evaluate view synthesis techniques, and can also be used to optimize them. Moreover, the quality of the synthesized view image directly affects the success of these applications. Quality assessment for view synthesis is therefore of great significance.
The distortion in synthesized views falls broadly into two classes. The first class is the traditional distortion introduced while acquiring, processing and transmitting the texture and depth images; the second class is the rendering distortion introduced by hole filling during view rendering. Existing quality assessment methods for synthesized views are designed for only one of the two classes.
For the first class, existing algorithms include: 1. Ryu et al. [2] first apply a traditional quality metric and a synthesis-tolerance metric to the texture image and the depth image respectively, obtaining two quality scores, and then weight the two scores to obtain the quality score of the whole synthesized image. 2. Wang et al. [3] first compute a texture similarity map and a depth similarity map between the synthesized image and the original distortion-free image; they then normalize the depth similarity map with a texture information map; finally, the normalized depth similarity map and the texture similarity map are combined to produce the quality score of the synthesized image.
For the second class, existing methods are as follows. Bosc et al. [4] first built a synthesized-view image database, containing views synthesized by 7 DIBR algorithms, and proposed a modified quality metric based on SSIM: SSIM is evaluated only on the corresponding edge regions of the original-view texture image and the synthesized image, and the mean SSIM is taken as the final quality score. Conze et al. [5] first compute a distortion map between the synthesized image and the original texture image with SSIM, then compute three weighting maps from texture complexity, gradient orientation and contrast, and finally weight the distortion map with them to obtain the quality score. Zhang et al. [6] exploit the observation that distortion in synthesized views is usually more visible at edges: they analyze the pixel differences between the synthesized image and the original image and assign larger weights to edge pixels to obtain the final quality score. Stankovic et al. [7] perform a multi-level decomposition of the original and synthesized images with morphological wavelets, compute mean squared errors in several detail subbands, and from these derive a multi-scale peak signal-to-noise ratio used as the quality score. Battisti et al. [8] first partition the reference and synthesized images into blocks and match them with a motion-estimation algorithm; the matched blocks are wavelet-transformed, coefficient histograms are computed, and the Kolmogorov-Smirnov distance describes the distortion level of the synthesized image. Jung et al. [9] first detect the main distortion regions using the synthesized left and right views and the disparity map, then compute SSIM scores on the distortion regions of the two views, and finally average the left- and right-view SSIM scores as the final quality score; this method focuses on the influence of left-right view asymmetry on synthesis quality. Li et al. [10] propose a synthesized-view quality metric based on local geometric distortion and global sharpness: first, hole regions are detected, and their size and strength are combined into a local geometric distortion score; then the global sharpness score of the image is computed by a re-blurring method; finally, the two scores are combined into the quality score of the synthesized image. Gu et al. [11] propose a synthesized-image quality metric based on an autoregressive model: the method first computes the autoregressive image of the synthesized image, then extracts the geometric distortion regions from the difference between the synthesized image and its autoregressive image; with a threshold, the difference image of the two images is transformed into a binary image; finally, the similarity between this binary image and the one predicted for natural images serves as the quality score of the synthesized image.
The existing synthesized-image quality metrics above share the following defects. First, each method is designed for only one type of distortion and ignores the other distortion introduced during view synthesis; they therefore cannot effectively evaluate the whole view synthesis process. Second, among all the above methods, only Gu's is a no-reference metric, i.e. it needs no original image as reference; the other methods are full-reference, i.e. they must rely on the original image to evaluate the synthesized image. In practice the original image is often unavailable, which restricts the application of the existing full-reference methods. In summary, there is an urgent need for a no-reference quality assessment method designed to evaluate the whole synthesis process.
[1]Y.C.Fan,P.K.Huang,and D.W.Shen,“3DTV depth map reconstruction based on structured light scheme,”IEEE Int.Instrum.Meas.Technol.Conf.,pp.835-838,May 2013.
[2]S.Ryu,S.Kim,and K.Sohn,“Synthesis quality prediction model based on distortion intolerance,”IEEE Int.Conf.Image Process.,pp.585-589,Oct.2014.
[3]J.H.Wang,S.Q.Wang,K.Zeng and Z.Wang,“Quality assessment of multi-view-plus-depth images,”IEEE International Conference on Multimedia and Expo,pp.85-90,Jul.2017.
[4]E.Bosc,R.Pépion,P.L.Callet,M.Koppel,P.N.Nya,L.Morin and M.Pressigout,“Towards a new quality metric for 3-D synthesized view assessment,”IEEE J.Select.Top.Signal Process.,vol.5,no.7,pp.1332-1343,Sep.2011.
[5]P.H.Conze,P.Robert and L.Morin,“Objective view synthesis quality assessment,”Electron.Imag.Int.Society for Optics and Photonics,vol.8288, pp.8288-8256,Feb.2012.
[6]Zhang Yan,An Ping,You Zhixiang and Zhang Zhaoyang,“Virtual view image quality assessment method based on edge differences,”Journal of Electronics & Information Technology,35(8):1894-1900,2013.
[7]D.S.Stankovic,D.Kukolj and P.L.Callet,“DIBR synthesized image quality assessment based on morphological wavelets,”IEEE Int.Workshop on Quality of Multimedia Experience,pp.1-6,Jan.2015.
[8]F.Battisti,E.Bosc,M.Carli and P.L.Callet,“Objective image quality assessment of 3D synthesized views,”Sig.Process.:Image Commun.,vol.30,pp.78-88,Jan.2015.
[9]Y.J.Jung,H.G.Kim,and Y.M.Ro,“Critical binocular asymmetry measure for perceptual quality assessment of synthesized stereo 3D images in view synthesis”,IEEE Transactions on Circuits and Systems for Video Technology,26 (7):1201-1214,2016.
[10]L.D.Li,Y.Zhou,K.Gu,W.S.Lin,and S.Q.Wang,“Quality assessment of DIBR-synthesized images by measuring local geometric distortions and global sharpness,”IEEE Trans.Multimedia.
[11]K.Gu,V.Jakhetiya,J.F.Qiao,X.L.Li,W.S.Lin and D.Thalmann,“Model- based referenceless quality metric of 3D synthesized images using local image description,”IEEE Trans.Image Process.,vol.PP,pp.1-1,Jul.2017.DOI:10.1109/ TIP.2017.2733164.
Summary of the invention
Objective of the invention: the view synthesis process mainly comprises the acquisition, processing and transmission of the texture and depth images, and the rendering of the virtual view. Distortion can be introduced at each stage: acquisition through transmission of the texture and depth images introduces traditional distortion such as blur and blocking artifacts, while the imperfection of the rendering algorithm introduces rendering distortion into the newly synthesized view. Two kinds of distortion therefore coexist in the final synthesized view: traditional distortion and rendering distortion. Existing methods are designed for only one of them and cannot effectively evaluate both distortion classes over the whole synthesis process. To solve this technical problem, the present invention proposes a no-reference quality assessment method for virtual view synthesis. By analyzing the characteristics of the two distortion classes, features are designed to quantify the distortion in the synthesized image: the distortion caused by edge damage and by the loss of texture naturalness is quantified first and the corresponding features are extracted; all features are then integrated, and machine learning is used to obtain a model that evaluates the distortion introduced by the whole synthesis process.
Technical solution: to achieve the above technical effect, the technical solution proposed by the present invention is as follows.
A no-reference image quality assessment method for view synthesis, comprising the steps of:
(1) collecting a group of synthesized view images to form a synthesized view image library;
(2) quantifying the distortion in each synthesized view image in the library, the quantization comprising performing steps (2-1) to (2-3) on each synthesized view image:
(2-1) defining the synthesized view image as scale image 1; applying Gaussian low-pass filtering to the synthesized view image n−1 times, the result of the i-th filtering being scale image i+1, i ∈ [1, 2, …, n−1]; scale images 1 to n forming a scale space with n scales;
(2-2) building a DoG model; the DoG model comprises n images, denoted DoG_1, DoG_2, …, DoG_n, where DoG_i is the difference between scale image i+1 and scale image i, i ∈ [1, 2, …, n−1], and DoG_n is scale image n;
(2-3) extracting characteristic parameters from each image in the DoG model, obtaining 7 edge orientation selectivity characteristic parameters and 2 texture naturalness characteristic parameters for each image;
(3) randomly dividing the images in the synthesized view image library into a training set and a test set; modeling the characteristic parameters of the training images with the random forest method to obtain a quality assessment model; feeding the characteristic parameters of a test image into the quality assessment model to obtain its objective quality score.
Further, any scale image i is expressed as:

L_i(x, y) = G(x, y, σ_i) * I(x, y)    (1)

where * denotes convolution; σ_i is the standard deviation of the Gaussian kernel of scale image i; L_i(x, y) is the pixel value at pixel (x, y) of scale image i; I(x, y) is the pixel value at pixel (x, y) of the synthesized view image; and G(·) is the Gaussian kernel function:

G(x, y, σ_i) = (1 / (2π·σ_i²)) · exp(−(x² + y²) / (2σ_i²))    (2)
Further, any image in the DoG model is expressed as:

DoG_i(x, y) = L_{i+1}(x, y) − L_i(x, y), i ∈ [1, 2, …, n−1]
DoG_n(x, y) = L_n(x, y)    (3)

where DoG_i(x, y) is the pixel value at pixel (x, y) of image DoG_i.
Further, the step of extracting the edge orientation selectivity characteristic parameters from any image DoG_i in the DoG model comprises:
1) decomposing DoG_i with an overcomplete wavelet transform into 2 scales and 6 orientations;
2) grouping the wavelet coefficients of the same orientation across the different scales into one set, obtaining 6 wavelet coefficient sets, denoted Z_it, t = [1, 2, …, 6];
3) performing steps S3-1 to S3-3 on each wavelet coefficient set:
S3-1: compute the first absolute moment of Z_it:

J_1 = E[|z|] = ∫ |z| f(z) dz    (4)

where J_1 is the first absolute moment of Z_it; z is the random variable; f(z) is the zero-mean generalized Gaussian density

f(z) = γ_it / (2·y_2·Γ(1/γ_it)) · exp(−(|z| / y_2)^γ_it),

Γ(·) is the gamma function, γ_it is the shape parameter of Z_it, y_2 is an intermediate parameter, y_2 = σ·sqrt(Γ(1/γ_it) / Γ(3/γ_it)), and σ is the standard deviation of the wavelet coefficients. Substituting Y_it = (|z| / y_2)^γ_it into the computation of J_1 reduces the integral to a gamma integral, giving:

J_1 = σ · Γ(2/γ_it) / sqrt(Γ(1/γ_it) · Γ(3/γ_it))    (5)

S3-2: compute the second moment of Z_it:

J_2 = E[z²] = σ²    (6)

S3-3: let ρ(γ_it) = J_1² / J_2 = Γ(2/γ_it)² / (Γ(1/γ_it) · Γ(3/γ_it)); γ_it is computed from formula (7), with the moments estimated from the coefficients:

γ_it = ρ^(−1)( ((1/h)·Σ_j |z_j|)² / ((1/h)·Σ_j z_j²) )    (7)

where z_j is the j-th wavelet coefficient in set Z_it and h is the number of wavelet coefficients in Z_it;
4) merging the 6 wavelet coefficient sets Z_i1 to Z_i6 into one set, denoted Z_i7; performing step 3) on Z_i7 to obtain γ_i7; γ_i1, γ_i2, …, γ_i7 are the 7 edge orientation selectivity characteristic parameters of DoG_i.
Further, the step of extracting the texture naturalness characteristic parameters from any DoG image DoG_i comprises:
(5-1) computing the gradient image g_i of DoG image DoG_i, where g_i satisfies:

g_i(x, y) = sqrt( g_{i,x}(x, y)² + g_{i,y}(x, y)² )    (8)

where g_i(x, y) is the pixel value at pixel (x, y) of the gradient image g_i, and g_{i,x} and g_{i,y} respectively denote the horizontal-direction gradient and vertical-direction gradient of DoG_i;
(5-2) normalizing DoG image DoG_i and its gradient image g_i:

ĝ_i(x, y) = INT( (g_i(x, y) − g_{i,min}) · N_{i,g} / (g_{i,max} − g_{i,min}) )
D̂oG_i(x, y) = INT( (DoG_i(x, y) − DoG_{i,min}) · N_{i,D} / (DoG_{i,max} − DoG_{i,min}) )    (9)

where ĝ_i(p, q) is the pixel value at pixel (p, q) of the normalized gradient image; D̂oG_i(p, q) is the pixel value at pixel (p, q) of the normalized DoG image; INT denotes the floor operation; g_{i,min} and g_{i,max} are the minimum and maximum pixel values of g_i, and N_{i,g} is the maximum gray level of g_i after normalization; DoG_{i,min} and DoG_{i,max} are the minimum and maximum pixel values of DoG_i, and N_{i,D} is the maximum gray level of DoG_i after normalization;
(5-3) computing the gray-gradient co-occurrence matrix M_i of DoG image DoG_i from the result of step (5-2); the element M_i(p, q) of M_i is the number of pixels (x, y) satisfying D̂oG_i(x, y) = p and ĝ_i(x, y) = q;
(5-4) extracting the energy and the gradient mean square deviation from the gray-gradient co-occurrence matrix M_i as the 2 texture naturalness characteristic parameters of image DoG_i.
Further, in step (5-4), with M̂_i(p, q) = M_i(p, q) / Σ_p Σ_q M_i(p, q) the normalized co-occurrence matrix, the energy is computed from M_i as:

F_1 = Σ_p Σ_q M̂_i(p, q)²    (10)

and the gradient mean square deviation is computed from M_i as:

F_2 = sqrt( Σ_q (q − μ_g)² · Σ_p M̂_i(p, q) )    (11)

where μ_g is the mean of the normalized gradient image g_i.
Advantageous effects: compared with the prior art, the present invention has the following advantages:
1. It overcomes the shortcoming that existing methods can evaluate only one class of distortion in the synthesis process; the proposed method effectively evaluates both distortion classes over the whole synthesis process.
2. Its performance is clearly better than that of existing no-reference visual quality assessment methods, including both general-purpose no-reference image quality metrics and existing synthesized-view quality metrics.
3. Compared with existing methods, it has the best cross-database performance, i.e. the strongest generalization ability.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
Fig. 1 shows the flow chart of the present invention. As seen from the figure, the overall flow of the invention is divided into four modules: 1. scale-space representation; 2. DoG model building; 3. feature extraction; 4. training the quality assessment model. The four steps are described in detail below.
Module 1 (scale-space representation): for a synthesized view image, the image is filtered repeatedly with a Gaussian low-pass filter, thereby constructing the scale space of the image.
Module 2 (DoG model building): adjacent scale images are subtracted to obtain the Gaussian-filtered difference images, i.e. the DoG images.
Module 3 (feature extraction): features are extracted from the DoG images and the last scale image; from each image, 7 edge orientation selectivity features and 2 texture naturalness features are extracted.
Module 4 (building the quality assessment model): the images in the database are randomly divided into two parts. The first part is used to train the quality assessment model, the second part for testing. All training features of the first part are fed into a random forest to train the quality assessment model; the trained model then predicts the quality of the test images in the second part, yielding objective quality scores that assess synthesized-image quality.
The technical solution of the present invention is explained in detail below with reference to a specific embodiment.
Step 1: the two distortion classes introduced by the synthesis process damage edges and destroy texture naturalness, and a DoG decomposition effectively captures the edge and texture features of an image. Since the DoG decomposition operates on adjacent scale images, the scale space is first constructed with a Gaussian filter:
The synthesized view image is defined as scale image 1; Gaussian low-pass filtering is applied to the synthesized view image n−1 times, the result of the i-th filtering being scale image i+1, i ∈ [1, 2, …, n−1]; scale images 1 to n form a scale space with n scales. Any scale image i is expressed as:

L_i(x, y) = G(x, y, σ_i) * I(x, y)    (1)

where * denotes convolution; σ_i is the standard deviation of the Gaussian kernel of scale image i; L_i(x, y) is the pixel value at pixel (x, y) of scale image i; I(x, y) is the pixel value at pixel (x, y) of the synthesized view image; and G(·) is the Gaussian kernel function:

G(x, y, σ_i) = (1 / (2π·σ_i²)) · exp(−(x² + y²) / (2σ_i²))    (2)
In the present embodiment, n = 4, so a scale space containing four scale images is constructed, denoted L_1, L_2, L_3, L_4.
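As a concrete illustration, the scale-space construction of Step 1 can be sketched in Python with NumPy. The patent does not specify the Gaussian standard deviations σ_i or the kernel support, so the values below (σ = 1, 2, 4 and a 3σ kernel radius) are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """1-D Gaussian kernel, normalized to sum to 1 (3-sigma support, an assumption)."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_filter(img, sigma):
    """Separable Gaussian low-pass filtering with edge replication at the borders."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    out = np.pad(img, ((0, 0), (r, r)), mode="edge")
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, out)
    out = np.pad(out, ((r, r), (0, 0)), mode="edge")
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, out)

def build_scale_space(img, sigmas=(1.0, 2.0, 4.0)):
    """Scale image 1 is the input I itself; scale image i+1 = G(sigma_i) * I, per formula (1)."""
    img = np.asarray(img, dtype=np.float64)
    return [img] + [gaussian_filter(img, s) for s in sigmas]
```

With the default three σ values this yields the n = 4 scale images L_1, …, L_4 of the embodiment; larger σ produces a smoother (lower-variance) scale image.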
Step 2: build the DoG model. The DoG model contains 4 DoG images, denoted DoG_1, DoG_2, DoG_3, DoG_4, where DoG_i is the difference between scale image i+1 and scale image i for i ∈ [1, 2, 3], and DoG_4 is scale image 4. The DoG images are expressed as:

DoG_i(x, y) = L_{i+1}(x, y) − L_i(x, y), i ∈ [1, 2, 3]
DoG_4(x, y) = L_4(x, y)

where DoG_i(x, y) is the pixel value of DoG image i at pixel (x, y).
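Step 2's differencing of adjacent scale images, with the coarsest scale appended as the last DoG image, can be sketched as a small helper operating on any list of scale images:

```python
import numpy as np

def build_dog_model(scales):
    """DoG_i = L_{i+1} - L_i for i = 1..n-1, and DoG_n = L_n (the coarsest scale image)."""
    dogs = [scales[i + 1] - scales[i] for i in range(len(scales) - 1)]
    dogs.append(scales[-1])
    return dogs
```

Applied to the four scale images of the embodiment, this returns the four images DoG_1, …, DoG_4 on which all features are computed.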
Step 3: for a natural image, the wavelet coefficients of the same orientation across different scales follow a generalized Gaussian distribution; this regularity is called orientation selectivity statistics, and the distortion in synthesized images destroys this statistical property of natural images. Each DoG image is decomposed with an overcomplete wavelet transform into 2 scales and 6 orientations; the wavelet coefficients of the DoG images are then normalized, and the normalized coefficients of the same orientation across the different scales are fitted with a generalized Gaussian distribution function.
The step of extracting the edge orientation selectivity characteristic parameters from any image DoG_i in the DoG model comprises:

S1: decompose DoG_i with an overcomplete wavelet transform into 2 scales and 6 orientations;

S2: group the wavelet coefficients of the same orientation across the different scales into one set; since each DoG image is decomposed into 6 orientations and 2 scales, 6 wavelet coefficient sets are obtained, denoted Z_it, t = [1, 2, …, 6];

S3: perform steps S3-1 to S3-3 on each wavelet coefficient set:

S3-1: compute the first absolute moment of Z_it:

J_1 = E[|z|] = ∫ |z| f(z) dz    (4)

where J_1 is the first absolute moment of Z_it; z is the random variable; f(z) is the zero-mean generalized Gaussian density

f(z) = γ_it / (2·y_2·Γ(1/γ_it)) · exp(−(|z| / y_2)^γ_it),

Γ(·) is the gamma function, γ_it is the shape parameter of Z_it, and y_2 = σ·sqrt(Γ(1/γ_it) / Γ(3/γ_it)) is an intermediate parameter, with σ the standard deviation of the wavelet coefficients. Substituting Y_it = (|z| / y_2)^γ_it into the computation of J_1 reduces the integral to a gamma integral, giving:

J_1 = σ · Γ(2/γ_it) / sqrt(Γ(1/γ_it) · Γ(3/γ_it))    (5)

S3-2: compute the second moment of Z_it:

J_2 = E[z²] = σ²    (6)

S3-3: let ρ(γ_it) = J_1² / J_2 = Γ(2/γ_it)² / (Γ(1/γ_it) · Γ(3/γ_it)); γ_it is computed from formula (7), with the moments estimated from the coefficients:

γ_it = ρ^(−1)( ((1/h)·Σ_j |z_j|)² / ((1/h)·Σ_j z_j²) )    (7)

where z_j is the j-th wavelet coefficient in set Z_it and h is the number of wavelet coefficients in Z_it;

S4: merge the 6 wavelet coefficient sets Z_i1 to Z_i6 into one set, denoted Z_i7; perform step S3 on Z_i7 to obtain γ_i7; γ_i1, γ_i2, …, γ_i7 are the 7 edge orientation selectivity characteristic parameters of DoG_i.
In the present embodiment, the 4 DoG images yield 4 × 7 = 28 edge orientation selectivity characteristic parameters in total.
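The shape-parameter estimation of steps S3-1 to S3-3 amounts to matching the empirical moment ratio J_1²/J_2 against its closed form ρ(γ) = Γ(2/γ)²/(Γ(1/γ)·Γ(3/γ)). A common way to invert ρ, used in the sketch below, is a lookup over a dense grid of candidate γ values; the grid range and step are assumptions, since the patent does not state how the inverse is computed.

```python
import numpy as np
from math import gamma

def ggd_moment_ratio(g):
    """rho(gamma) = Gamma(2/g)^2 / (Gamma(1/g) * Gamma(3/g)) for a zero-mean GGD."""
    return gamma(2.0 / g) ** 2 / (gamma(1.0 / g) * gamma(3.0 / g))

def estimate_shape_parameter(coeffs):
    """Moment-matching estimate of the GGD shape parameter gamma_it of a coefficient set Z_it."""
    z = np.asarray(coeffs, dtype=np.float64)
    j1 = np.abs(z).mean()                  # empirical first absolute moment J_1
    j2 = (z ** 2).mean()                   # empirical second moment J_2
    target = j1 ** 2 / j2                  # empirical rho
    grid = np.arange(0.1, 10.0, 0.001)     # candidate shape parameters (assumed range)
    rho = np.array([ggd_moment_ratio(g) for g in grid])
    return float(grid[np.argmin(np.abs(rho - target))])
```

Gaussian coefficients should give γ ≈ 2 and Laplacian coefficients γ ≈ 1, the two classical special cases of the generalized Gaussian distribution.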
Step 4: since the view synthesis process also causes a loss of texture naturalness, the gray-gradient co-occurrence matrix is used to describe the texture features of the image.
The step of extracting the texture naturalness characteristic parameters from any DoG image DoG_i comprises:

(5-1) compute the gradient image g_i of DoG image DoG_i, where g_i satisfies:

g_i(x, y) = sqrt( g_{i,x}(x, y)² + g_{i,y}(x, y)² )    (8)

where g_i(x, y) is the pixel value at pixel (x, y) of the gradient image g_i, and g_{i,x} and g_{i,y} respectively denote the horizontal-direction gradient and vertical-direction gradient of DoG_i;

(5-2) normalize DoG image DoG_i and its gradient image g_i:

ĝ_i(x, y) = INT( (g_i(x, y) − g_{i,min}) · N_{i,g} / (g_{i,max} − g_{i,min}) )
D̂oG_i(x, y) = INT( (DoG_i(x, y) − DoG_{i,min}) · N_{i,D} / (DoG_{i,max} − DoG_{i,min}) )    (9)

where ĝ_i(p, q) is the pixel value at pixel (p, q) of the normalized gradient image; D̂oG_i(p, q) is the pixel value at pixel (p, q) of the normalized DoG image; INT denotes the floor operation; g_{i,min} and g_{i,max} are the minimum and maximum pixel values of g_i, and N_{i,g} is the maximum gray level of g_i after normalization; DoG_{i,min} and DoG_{i,max} are the minimum and maximum pixel values of DoG_i, and N_{i,D} is the maximum gray level of DoG_i after normalization;

(5-3) compute the gray-gradient co-occurrence matrix M_i of DoG image DoG_i from the result of step (5-2); the element M_i(p, q) of M_i is the number of pixels (x, y) satisfying D̂oG_i(x, y) = p and ĝ_i(x, y) = q;

(5-4) extract the energy and the gradient mean square deviation from the gray-gradient co-occurrence matrix M_i as the 2 texture naturalness characteristic parameters of image DoG_i. With M̂_i(p, q) = M_i(p, q) / Σ_p Σ_q M_i(p, q) the normalized co-occurrence matrix, the energy is computed from M_i as:

F_1 = Σ_p Σ_q M̂_i(p, q)²    (10)

and the gradient mean square deviation is computed from M_i as:

F_2 = sqrt( Σ_q (q − μ_g)² · Σ_p M̂_i(p, q) )    (11)

where μ_g is the mean of the normalized gradient image g_i.
Each DoG image contributes 2 texture naturalness features, so 2 × 4 = 8 texture naturalness characteristic parameters are obtained.
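Steps (5-1) to (5-4) can be sketched as follows. The patent does not specify the gradient operator or the gray-level counts N_{i,D} and N_{i,g}; simple forward differences and 32 levels on both axes are assumptions made here for illustration.

```python
import numpy as np

def quantize(img, levels):
    """Min-max scale to [0, levels) and floor (the INT operation of the normalization step)."""
    lo, hi = float(img.min()), float(img.max())
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)

def texture_naturalness_features(dog, n_gray=32, n_grad=32):
    """Energy and gradient mean square deviation of the gray-gradient co-occurrence matrix."""
    gx = np.zeros_like(dog); gx[:, :-1] = dog[:, 1:] - dog[:, :-1]   # horizontal gradient (assumed operator)
    gy = np.zeros_like(dog); gy[:-1, :] = dog[1:, :] - dog[:-1, :]   # vertical gradient (assumed operator)
    grad = np.sqrt(gx ** 2 + gy ** 2)                                # gradient magnitude image g_i
    p = quantize(dog, n_gray)                                        # normalized DoG levels
    q = quantize(grad, n_grad)                                       # normalized gradient levels
    m = np.zeros((n_gray, n_grad))
    np.add.at(m, (p.ravel(), q.ravel()), 1)                          # co-occurrence counts M_i(p, q)
    m /= m.sum()                                                     # normalized matrix
    energy = float((m ** 2).sum())                                   # energy feature
    grad_marginal = m.sum(axis=0)                                    # marginal over gradient levels
    mu = float((np.arange(n_grad) * grad_marginal).sum())
    grad_msd = float(np.sqrt(((np.arange(n_grad) - mu) ** 2 * grad_marginal).sum()))
    return energy, grad_msd
```

A perfectly flat DoG image collapses the co-occurrence matrix into a single cell (energy 1, deviation 0), while a textured one spreads mass across cells, which is the intuition behind using these two statistics as naturalness features.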
Step 5: for any input synthesized view image, 28 orientation selectivity features and 8 texture naturalness features are obtained, 36 characteristic parameters in total. The images in the synthesized view image library are randomly divided into a training set and a test set; the characteristic parameters of the training images are modeled with the random forest method to obtain the quality assessment model; the characteristic parameters of a test image are fed into the quality assessment model to obtain its objective quality score.
The technical effect of the present invention is illustrated below with specific test data.
The experiments of the present invention were conducted on 2 public synthesized-view image datasets: the MCL database and the IVC-DIBR database. The MCL database contains 684 pairs of synthesized view images, i.e. 684 left-view images and 684 right-view images; of these, 648 pairs contain traditional distortion and the remaining 36 pairs contain view rendering distortion. The IVC database provides 12 original images and 84 images containing only view rendering distortion. In each experiment, the images of the synthesized-view database under test are first randomly divided into 80% and 20%, where the 80% is used to build the model and the 20% to test it. To avoid chance results, this process is repeated 1000 times, and the median of the 4 performance indices over the runs is taken as the final performance.
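The repeated 80/20 split protocol with a median over the runs can be sketched as below. The regressor is left pluggable (the patent trains a random forest; for instance, scikit-learn's RandomForestRegressor could be wrapped as fit_predict), and only PLCC is computed per round for brevity; both simplifications are mine, not the patent's.

```python
import numpy as np

def median_split_performance(features, scores, fit_predict, n_rounds=1000, train_frac=0.8, seed=0):
    """Repeat a random train/test split n_rounds times and return the median PLCC.

    fit_predict(train_X, train_y, test_X) must return predicted scores for test_X."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    n_train = int(round(train_frac * n))
    plcc_values = []
    for _ in range(n_rounds):
        perm = rng.permutation(n)
        tr, te = perm[:n_train], perm[n_train:]
        pred = fit_predict(features[tr], scores[tr], features[te])
        plcc_values.append(np.corrcoef(pred, scores[te])[0, 1])
    return float(np.median(plcc_values))
```

Reporting the median rather than a single split guards against a lucky or unlucky partition, which is exactly the motivation for the 1000 repetitions in the protocol above.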
First, the performance of the present invention is compared with existing synthesized-view quality metrics on the two databases. Larger PLCC/SRCC/KRCC values and a smaller RMSE indicate better algorithm performance. In Table 1: the Pearson linear correlation coefficient (PLCC) is the linear correlation coefficient after nonlinear regression; the root mean squared error (RMSE) is the standard deviation after nonlinear regression; KRCC is Kendall's rank correlation coefficient; SRCC is the Spearman rank-order correlation coefficient. Since Wang's method relies on the depth image and the IVC database provides no depth images, its performance on the IVC database cannot be evaluated; it is therefore marked with a dash "-" in Table 1.
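For reference, two of the four indices can be computed as below; the nonlinear regression step that the protocol applies before PLCC/RMSE is omitted in this sketch.

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x = np.asarray(x, dtype=np.float64); y = np.asarray(y, dtype=np.float64)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def srcc(x, y):
    """Spearman rank-order correlation: the PLCC of the ranks (ties get average ranks)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v), dtype=np.float64)
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average the ranks of tied values
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    return plcc(ranks(np.asarray(x, dtype=np.float64)), ranks(np.asarray(y, dtype=np.float64)))
```

SRCC is invariant to any monotone mapping of the predictions, which is why it measures prediction monotonicity, while PLCC measures prediction accuracy after regression.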
Table 1. Performance comparison of the method provided by the present invention with existing synthesized-view quality assessment algorithms
As Table 1 shows, the PLCC/SRCC/KRCC of the proposed method on both databases are clearly higher than those of all other algorithms, and its RMSE is the lowest, demonstrating the clear superiority of the proposed method. Moreover, since the images in the MCL database contain both types of distortion, this further proves that the proposed method evaluates the distortion of the whole view synthesis process most accurately.
To further verify the performance of the proposed method, it is compared with general-purpose image quality metrics. A general-purpose metric is an algorithm that can assess image quality without needing to know the image's distortion type.
Table 2. Performance comparison of the method provided by the present invention with general-purpose image quality assessment algorithms
The data in Table 2 show that the proposed method clearly outperforms the general-purpose image quality metrics, exhibiting the best prediction accuracy and monotonicity.
For training-based quality metrics, cross-database performance, i.e. generalization ability, is an important evaluation criterion. A cross-database experiment trains the quality assessment model on the features extracted from all images of one database and then tests it on all images of another database. On this basis, generalization experiments were carried out on all training-based quality metrics.
Table 3. Cross-database performance comparison of training-based algorithms
The experimental results in Table 3 show that the proposed method achieves the best cross-database performance, i.e., the strongest generalizability.
In summary, all experimental results demonstrate the superiority of the proposed method.
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A no-reference image quality evaluation method for view synthesis, characterized by comprising the steps of:
(1) collecting a group of view-synthesized images to form a view-synthesized image library;
(2) quantifying the distortion in each view-synthesized image in the library, wherein the quantification comprises performing steps (2-1) to (2-3) on each view-synthesized image:
(2-1) defining the view-synthesized image as scale image 1; performing n-1 Gaussian low-pass filterings on the view-synthesized image, and denoting the result of the i-th filtering as scale image i+1, i ∈ [1, 2, …, n-1]; scale images 1 to n form a scale space with n scales;
(2-2) building a DoG model; the DoG model comprises n images, denoted DoG_1, DoG_2, …, DoG_n; where DoG_i is the difference between scale image i and scale image i+1, i ∈ [1, 2, …, n-1], and DoG_n is scale image n;
(2-3) performing feature parameter extraction on each image in the DoG model, obtaining 7 edge-direction selectivity feature parameters and 2 texture-naturalness feature parameters for each image;
(3) randomly dividing the images in the view-synthesized image library into two parts, training images and test images; modeling the feature parameters of the training images with a random forest method to obtain a quality assessment model; and taking the feature parameters of the test images as the input of the quality assessment model to obtain the objective quality scores of the test images.
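Step (3) can be sketched with scikit-learn's RandomForestRegressor standing in for the random forest modeling (the claim names the method but not an implementation). The feature matrix and subjective scores below are synthetic placeholders for the extracted feature parameters; the 80/20 split ratio is an illustrative choice.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features = 100, 36   # e.g. 4 DoG images x (7 + 2) features each
X = rng.normal(size=(n_images, n_features))               # stand-in feature parameters
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n_images)  # stand-in quality scores

# random split into training and test images, as in step (3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
objective_scores = model.predict(X_te)   # predicted quality of the test images
```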
2. The no-reference image quality evaluation method for view synthesis according to claim 1, characterized in that any scale image i is expressed as:
$$L_i(x,y)=\begin{cases}I(x,y), & i=1\\ I(x,y)\ast G(x,y,\sigma_i), & i\in[2,\ldots,n]\end{cases}\tag{1}$$
where * denotes convolution; σ_i is the standard deviation of the Gaussian kernel of scale image i; L_i(x,y) is the pixel value at pixel (x,y) in scale image i; I(x,y) is the pixel value at pixel (x,y) in the view-synthesized image; and G(·) is the Gaussian kernel function, with G(x,y,σ_i) given by:
$$G(x,y,\sigma_i)=\frac{1}{2\pi\sigma_i^2}\,e^{-(x^2+y^2)/2\sigma_i^2}\tag{2}$$
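The scale space of formulas (1)-(2) can be sketched with scipy's Gaussian filter as a stand-in for the Gaussian low-pass filtering; the doubling of sigma per level is an illustrative choice, since the claim does not fix the σ_i:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(img, n, sigma0=1.0):
    """Scale image 1 is the view-synthesized image itself; scale image i+1
    is the result of the i-th Gaussian low-pass filtering."""
    img = np.asarray(img, dtype=float)
    return [img] + [gaussian_filter(img, sigma0 * 2 ** i) for i in range(n - 1)]

L = scale_space(np.random.default_rng(1).random((32, 32)), n=4)
```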
3. The no-reference image quality evaluation method for view synthesis according to claim 2, characterized in that any image in the DoG model is expressed as:
$$\begin{aligned}DoG_i(x,y)&=L_i(x,y)-L_{i+1}(x,y),\quad i\in[1,\ldots,n-1]\\ DoG_n(x,y)&=L_n(x,y)\end{aligned}\tag{3}$$
where DoG_i(x,y) is the pixel value at pixel (x,y) in image DoG_i.
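Formula (3) then gives the DoG model as adjacent scale-image differences plus the coarsest scale image. A useful sanity check is that the DoG images sum back to the original image, since the differences telescope. A minimal sketch (the sigma schedule is again an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.default_rng(2).random((32, 32))
n = 4
L = [img] + [gaussian_filter(img, 2.0 ** i) for i in range(n - 1)]  # scale images 1..n

# DoG_i = L_i - L_{i+1} for i in [1, n-1]; DoG_n = L_n, as in formula (3)
dog = [L[i] - L[i + 1] for i in range(n - 1)] + [L[n - 1]]
```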
4. The no-reference image quality evaluation method for view synthesis according to claim 3, characterized in that the step of extracting the edge-direction selectivity feature parameters from any image DoG_i in the DoG model comprises:
1) decomposing DoG_i over 2 scales and 6 orientations using an overcomplete wavelet transform;
2) grouping the wavelet coefficients of the same orientation across scales into one set, yielding 6 wavelet coefficient sets, denoted Z_it, t = [1, 2, …, 6];
3) performing steps S3-1 to S3-3 on each wavelet coefficient set:
S3-1: calculating the first-order absolute moment of Z_it:
$$J_1=\int_{-\infty}^{+\infty}|z|\,\frac{\gamma_{it}\,y_2}{2\,\theta(1/\gamma_{it})}\,e^{-|zy_2|^{\gamma_{it}}}\,dz=\frac{\gamma_{it}\,y_2}{\theta(1/\gamma_{it})}\int_{0}^{+\infty}|z|\,e^{-|zy_2|^{\gamma_{it}}}\,dz\tag{4}$$
where J_1 is the first-order absolute moment of Z_it; z is a random variable; θ denotes the gamma function; γ_it denotes the shape parameter of Z_it; y_2 is an intermediate parameter, $y_2=\frac{1}{\sigma}\sqrt{\theta(3/\gamma_{it})/\theta(1/\gamma_{it})}$ (the standard scale parameter of the generalized Gaussian distribution, reconstructed here so as to be consistent with formula (5)); and σ denotes the standard deviation of the wavelet coefficients;
Letting $Y_{it}=|zy_2|^{\gamma_{it}}$, so that $z=Y_{it}^{1/\gamma_{it}}/y_2$ and $\mathrm{d}z=\frac{1}{\gamma_{it}y_2}Y_{it}^{1/\gamma_{it}-1}\,\mathrm{d}Y_{it}$, and substituting into the formula for J_1 gives:
$$J_1=\sigma\,\frac{\theta(2/\gamma_{it})}{\sqrt{\theta(1/\gamma_{it})\,\theta(3/\gamma_{it})}}\tag{5}$$
S3-2: calculating the second-order moment of Z_it:
$$J_2=\sigma^2\tag{6}$$
S3-3: forming the ratio J_1^2/J_2 and calculating γ_it from formula (7):
$$\frac{J_1^2}{J_2}=\frac{\theta^2(2/\gamma_{it})}{\theta(1/\gamma_{it})\,\theta(3/\gamma_{it})}=\frac{\left(\frac{1}{h}\sum_{j=1}^{h}|z_j|\right)^2}{\frac{1}{h}\sum_{j=1}^{h}|z_j|^2}\tag{7}$$
where z_j denotes the j-th wavelet coefficient in set Z_it and h denotes the number of wavelet coefficients in Z_it;
4) merging the 6 wavelet coefficient sets Z_i1 to Z_i6 into one set, denoted Z_i7; performing step 3) on Z_i7 to obtain γ_i7; γ_i1, γ_i2, …, γ_i7 are the 7 edge-direction selectivity feature parameters of DoG_i.
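Steps S3-1 to S3-3 amount to the standard moment-matching estimator for the shape parameter of a generalized Gaussian distribution: the empirical ratio on the right of formula (7) is matched against the theoretical ratio θ²(2/γ)/(θ(1/γ)θ(3/γ)). A sketch using a grid search over candidate γ values (the claim does not specify how formula (7) is solved, so the grid search is an assumption):

```python
import numpy as np
from scipy.special import gamma as theta  # theta denotes the gamma function

def ggd_shape(z):
    z = np.asarray(z, dtype=float)
    ratio_hat = np.mean(np.abs(z)) ** 2 / np.mean(z ** 2)   # right side of (7)
    candidates = np.arange(0.1, 6.0, 0.001)
    ratio = theta(2.0 / candidates) ** 2 / (theta(1.0 / candidates) * theta(3.0 / candidates))
    # pick the gamma whose theoretical ratio is closest to the empirical one
    return float(candidates[np.argmin(np.abs(ratio - ratio_hat))])

# for Gaussian-distributed coefficients the shape parameter should be near 2
gamma_est = ggd_shape(np.random.default_rng(3).normal(size=200_000))
```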
5. The no-reference image quality evaluation method for view synthesis according to claim 4, characterized in that the step of extracting the texture-naturalness feature parameters from any DoG image DoG_i comprises:
(5-1) calculating the gradient image g_i of DoG image DoG_i, where g_i satisfies:
$$g_i(x,y)=\sqrt{\nabla h_i(x,y)^2+\nabla v_i(x,y)^2},\quad i\in[1,2,\ldots,n]\tag{8}$$
where g_i(x,y) denotes the pixel value at pixel (x,y) in gradient image g_i, and ∇h_i and ∇v_i respectively denote the horizontal and vertical gradients of DoG_i, with:
$$\nabla h_i(x,y)=DoG_i(x+1,y)-DoG_i(x,y)\tag{9}$$
$$\nabla v_i(x,y)=DoG_i(x,y+1)-DoG_i(x,y)\tag{10}$$
(5-2) normalizing DoG image DoG_i and its gradient image g_i:
$$\hat{g}_i(p,q)=\mathrm{INT}\!\left[\frac{g_i(p,q)-g_{i,\min}}{g_{i,\max}-g_{i,\min}}\times(N_{i,g}-1)\right]\tag{11}$$
$$\hat{D}_i(p,q)=\mathrm{INT}\!\left[\frac{DoG_i(p,q)-DoG_{i,\min}}{DoG_{i,\max}-DoG_{i,\min}}\times(N_{i,D}-1)\right]\tag{12}$$
where ĝ_i(p,q) denotes the pixel value at pixel (p,q) of the normalized gradient image g_i; D̂_i(p,q) denotes the pixel value at pixel (p,q) of the normalized DoG image DoG_i; INT denotes the floor operation; g_{i,min} and g_{i,max} denote the minimum and maximum pixel values of gradient image g_i, and N_{i,g} denotes the maximum gray level of g_i after normalization; DoG_{i,min} and DoG_{i,max} denote the minimum and maximum pixel values of DoG image DoG_i, and N_{i,D} denotes the maximum gray level of DoG_i after normalization;
(5-3) calculating, from the result of step (5-2), the gray-gradient co-occurrence matrix M_i of DoG image DoG_i, whose elements are denoted M_i(p,q); the value of M_i(p,q) is the number of pixels (x,y) satisfying D̂_i(x,y) = p and ĝ_i(x,y) = q;
(5-4) extracting the energy and the gradient mean-square deviation from the gray-gradient co-occurrence matrix M_i as the 2 texture-naturalness feature parameters of image DoG_i.
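Steps (5-1) to (5-3) can be sketched as follows: forward-difference gradient magnitude (formulas (8)-(10)), quantization of both images (formulas (11)-(12), with INT as floor), and the co-occurrence count. The zero-padding of the last row/column and the mapping of x to the first array axis are assumptions not fixed by the claim, as are the 16-level quantization sizes.

```python
import numpy as np

def gray_gradient_cooccurrence(dog, n_d=16, n_g=16):
    dog = np.asarray(dog, dtype=float)
    # forward differences of formulas (9)-(10); last row/column left at 0
    h = np.zeros_like(dog)
    v = np.zeros_like(dog)
    h[:-1, :] = dog[1:, :] - dog[:-1, :]
    v[:, :-1] = dog[:, 1:] - dog[:, :-1]
    g = np.sqrt(h ** 2 + v ** 2)              # gradient image, formula (8)

    def quantize(a, n):                       # formulas (11)-(12), INT = floor
        span = a.max() - a.min()
        if span == 0:
            return np.zeros(a.shape, dtype=int)
        return np.floor((a - a.min()) / span * (n - 1)).astype(int)

    d_q, g_q = quantize(dog, n_d), quantize(g, n_g)
    m = np.zeros((n_d, n_g), dtype=int)
    np.add.at(m, (d_q, g_q), 1)               # count (p, q) co-occurrences
    return m

M = gray_gradient_cooccurrence(np.random.default_rng(4).random((32, 32)))
```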
6. The no-reference image quality evaluation method for view synthesis according to claim 5, characterized in that in step (5-4) the energy is extracted from the gray-gradient co-occurrence matrix M_i as:
$$E_i=\sum_{p=0}^{N_{i,D}-1}\sum_{q=0}^{N_{i,g}-1}\left[M_i(p,q)\right]^2\tag{13}$$
and the gradient mean-square deviation is extracted from the gray-gradient co-occurrence matrix M_i as:
$$G_i=\sum_{q=0}^{N_{i,g}-1}(q-\bar{g}_i)^2\sum_{p=0}^{N_{i,D}-1}\left[M_i(p,q)\right]^{1/2}\tag{14}$$
where ḡ_i denotes the mean of gradient image g_i.
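The two features of formulas (13)-(14) are simple reductions of the co-occurrence matrix. In this sketch the gradient mean is passed in explicitly, since whether formula (14) uses the mean of the raw or the quantized gradient image is an implementation choice; a tiny 2x2 matrix illustrates the arithmetic.

```python
import numpy as np

def texture_features(m, g_mean):
    m = np.asarray(m, dtype=float)
    energy = float(np.sum(m ** 2))            # formula (13)
    q = np.arange(m.shape[1])                 # gradient bin indices
    # formula (14): sum over q of (q - mean)^2 times column sums of sqrt(M)
    grad_msd = float(np.sum((q - g_mean) ** 2 * np.sqrt(m).sum(axis=0)))
    return energy, grad_msd

E, G = texture_features(np.array([[4, 0], [0, 1]]), g_mean=0.5)
```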
CN201711399720.XA 2017-12-22 2017-12-22 No-reference image quality evaluation method for view angle synthesis Active CN107977967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711399720.XA CN107977967B (en) 2017-12-22 2017-12-22 No-reference image quality evaluation method for view angle synthesis


Publications (2)

Publication Number Publication Date
CN107977967A true CN107977967A (en) 2018-05-01
CN107977967B CN107977967B (en) 2022-05-03

Family

ID=62007424

Country Status (1)

Country Link
CN (1) CN107977967B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712134A (en) * 2018-12-28 2019-05-03 武汉虹识技术有限公司 Iris image quality evaluation method, device and electronic equipment
CN110211090A (en) * 2019-04-24 2019-09-06 西安电子科技大学 A method of for assessment design composograph quality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102611910A (en) * 2011-01-19 2012-07-25 北京东方文骏软件科技有限责任公司 Objective evaluation method of no-reference video quality weighted based by key frame image quality
US20120257164A1 (en) * 2011-04-07 2012-10-11 The Chinese University Of Hong Kong Method and device for retinal image analysis
CN103581661A (en) * 2013-10-28 2014-02-12 宁波大学 Method for evaluating visual comfort degree of three-dimensional image
CN105744256A (en) * 2016-03-31 2016-07-06 天津大学 Three-dimensional image quality objective evaluation method based on graph-based visual saliency
CN105898278A (en) * 2016-05-26 2016-08-24 杭州电子科技大学 Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic
CN107343196A (en) * 2017-07-18 2017-11-10 天津大学 One kind mixing distortion non-reference picture quality appraisement method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LI, L., et al.: "No-reference quality assessment of deblocked images", Neurocomputing *
PEI, S. C., et al.: "Image quality assessment using human visual DOG model fused with random forest", IEEE Transactions on Image Processing *
WANG, Z., et al.: "Image quality assessment: from error visibility to structural similarity", IEEE Transactions on Image Processing *
LU, ZHAOLIN: "Research on image restoration techniques based on partial differential equation theory", China Master's Theses Full-text Database *
WU, DONG: "Research on image quality assessment methods based on perceptual features", China Master's Theses Full-text Database *
JIA, YONGHONG: "Digital Image Processing", Wuhan University Press, 31 July 2015 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information (inventors after change: Li Leida, Zhou Yu, Lu Zhaolin; before change: Zhou Yu, Li Leida, Lu Zhaolin)
GR01 Patent grant