CN105913413B - A color image quality objective evaluation method based on online manifold learning - Google Patents


Info

Publication number
CN105913413B
CN105913413B CN201610202181.5A CN201610202181A
Authority
CN
China
Prior art keywords
image block
value
pixel
denoted
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610202181.5A
Other languages
Chinese (zh)
Other versions
CN105913413A (en)
Inventor
蒋刚毅
何美伶
陈芬
宋洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201610202181.5A priority Critical patent/CN105913413B/en
Priority to US15/197,604 priority patent/US9846818B2/en
Publication of CN105913413A publication Critical patent/CN105913413A/en
Application granted granted Critical
Publication of CN105913413B publication Critical patent/CN105913413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/0002 Inspection of images, e.g. flaw detection
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10024 Color image
            • G06T2207/20 Special algorithmic details
              • G06T2207/20021 Dividing image into blocks, subimages or windows
              • G06T2207/20081 Training; Learning
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30168 Image quality inspection
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
              • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
                  • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
              • G06F18/22 Matching criteria, e.g. proximity measures
              • G06F18/23 Clustering techniques
                • G06F18/232 Non-hierarchical techniques
                  • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
                    • G06F18/23213 Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/40 Extraction of image or video features
              • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
              • G06V10/513 Sparse representations
              • G06V10/56 Extraction of image or video features relating to colour
            • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V10/762 Arrangements using clustering, e.g. of similar faces in social networks
                • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
              • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
              • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
                • G06V10/993 Evaluation of the quality of the acquired pattern
          • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a color image quality objective evaluation method based on online manifold learning. Taking into account the relationship between visual saliency and objective image quality evaluation, a visual saliency detection algorithm is used to obtain the respective saliency maps of the reference image and the distorted image, which are merged into a maximum-fusion saliency map. On the basis of each image block's maximum saliency in the fused map, the absolute difference is used to measure the saliency difference value between each reference image block and its corresponding distorted image block, and visually important reference image blocks and visually important distorted image blocks are screened out accordingly. The manifold feature vectors of these visually important blocks are then used to compute the objective quality evaluation value of the distorted image. The evaluation performance is significantly improved, and the correlation between the objective evaluation results and subjective perception is high.

Description

A color image quality objective evaluation method based on online manifold learning
Technical field
The present invention relates to image quality evaluation methods, and in particular to a color image quality objective evaluation method based on online manifold learning.
Background art
Limited by the performance of image processing systems, various types of distortion can be introduced during image acquisition, transmission, coding, and other processes. Distortion degrades the quality of an image and also hinders people from obtaining information from it. Image quality is an important indicator for comparing the performance of image processing algorithms and for tuning image processing system parameters, so constructing effective image quality evaluation methods is of great value in fields such as image transmission, multimedia network communication, and video analysis. Generally, image quality evaluation methods are divided into two broad classes: subjective evaluation and objective evaluation. Since the final receiver of an image is a human being, subjective evaluation is the most reliable, but it is time-consuming and laborious and cannot easily be embedded into image processing systems, so its practical use is restricted. In contrast, objective evaluation methods are easy to operate and convenient to apply, and they are currently a research focus in both academia and industry.
At present, the simplest and most popular objective evaluation methods are the peak signal-to-noise ratio (PSNR) and the mean square error (MSE). These methods are simple to compute and have a clear physical meaning, but because they do not consider the visual characteristics of the human eye, their results often disagree with human subjective perception. In fact, the human eye does not process an image signal point by point. In view of this, researchers have introduced human visual characteristics so that objective evaluation results fit human visual perception more closely. For example, the structural similarity (SSIM) index characterizes the structural information of an image in terms of luminance, contrast, and structure, and evaluates image quality accordingly. Subsequent work extended SSIM with a multi-scale SSIM evaluation method, a complex-wavelet SSIM evaluation method, and an information-content-weighted SSIM evaluation method, improving its performance. Besides structural-similarity methods, Sheikh et al. treated full-reference image quality assessment as an information fidelity problem and, by quantifying the amount of image information lost in the distortion process, proposed the visual information fidelity (VIF) image quality evaluation method. Starting from the threshold and supra-threshold characteristics of human visual perception and combining them with the wavelet transform, Chandler et al. proposed the wavelet-based visual signal-to-noise ratio (VSNR) image quality evaluation method, which adapts well to different viewing conditions. Although researchers have studied the human visual system in depth, it is so complex that our understanding of it remains shallow, so no objective image quality evaluation method fully consistent with human subjective perception has yet been proposed.
To better reflect the characteristics of the human visual system, objective image quality evaluation methods based on sparse representation and visual attention have attracted increasing attention. Many studies have shown that sparse representation describes well the activity of neurons in the primary visual cortex of the human brain. For example, Guha et al. disclosed an image quality evaluation method based on sparse representation that works in two stages. The first stage is dictionary learning: image blocks randomly selected from reference images serve as training samples, and the K-SVD algorithm is used to train an over-complete dictionary. The second stage is evaluation: the orthogonal matching pursuit (OMP) algorithm sparsely codes the image blocks in the reference image and the corresponding blocks in the distorted image, yielding reference and distorted sparse coefficients from which the objective quality value of the image is obtained. However, such sparse-representation methods must perform sparse coding with orthogonal matching pursuit, which incurs a large computational overhead. Moreover, the over-complete dictionary is obtained by offline learning and requires a large number of valid natural images as training samples, which limits such methods in image processing applications with real-time requirements.
High-dimensional data such as digital images contain a large amount of information redundancy and need to be processed with dimensionality reduction techniques, ideally preserving their essential structure. Manifold learning, first proposed in the famous journal Science in 2000, has become a research hotspot in information science. Assuming that the data are uniformly sampled from a low-dimensional manifold embedded in a high-dimensional Euclidean space, manifold learning recovers the low-dimensional manifold structure from the high-dimensional sampled data, i.e., it finds the low-dimensional manifold in the high-dimensional space and the corresponding embedding map, thereby achieving dimensionality reduction. Some research suggests that manifolds are the basis of perception and that the brain perceives things in a manifold manner. In recent years, manifold learning has been widely applied to image denoising, face recognition, and human body detection, with good results. Addressing the fact that the column vectors produced by the locality preserving projections (LPP) algorithm are not orthogonal, Deng et al. improved it to obtain the orthogonal locality preserving projections (OLPP) algorithm, which can find the manifold structure of the data, is linear, and achieves better locality-preserving ability and discriminating power. Manifold learning can simulate the description of an image signal in primary visual cortex cells and can therefore accurately extract the visual perception features of an image. The low-dimensional manifold features of an image describe well the nonlinear variation relationships among distorted images; distorted images can be arranged in the manifold space according to the type and intensity of the distortion. It is therefore worthwhile to study a manifold-learning-based objective image quality evaluation method whose results fit human visual perception closely.
Summary of the invention
The technical problem to be solved by the invention is to provide a color image quality objective evaluation method based on online manifold learning that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is a color image quality objective evaluation method based on online manifold learning, characterized by comprising the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated corresponding to I_R;
② Using a visual saliency detection algorithm, obtain the respective saliency maps of I_R and I_D, denoted M_R and M_D. Then compute the maximum-fusion saliency map from M_R and M_D, denoted M_F: the pixel value of the pixel at coordinate (x, y) in M_F is M_F(x, y) = max(M_R(x, y), M_D(x, y)), where 1 ≤ x ≤ W, 1 ≤ y ≤ H, max() is the maximum function, M_R(x, y) denotes the pixel value of the pixel at coordinate (x, y) in M_R, and M_D(x, y) denotes the pixel value of the pixel at coordinate (x, y) in M_D;
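The maximum fusion of step ② reduces to a pixel-wise maximum of the two saliency maps. A minimal numpy sketch, for illustration only (the function name is ours, not the patent's):

```python
import numpy as np

def max_fusion_saliency(m_r, m_d):
    """Pixel-wise maximum fusion of two saliency maps (step 2)."""
    assert m_r.shape == m_d.shape
    return np.maximum(m_r, m_d)

# toy 2x2 saliency maps standing in for M_R and M_D
m_r = np.array([[0.2, 0.8], [0.5, 0.1]])
m_d = np.array([[0.3, 0.6], [0.4, 0.9]])
m_f = max_fusion_saliency(m_r, m_d)
```

Any saliency detector producing per-pixel maps of the same size (e.g. SDSP, as named in the embodiment) can feed this fusion.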
③ Using a sliding window of size 8 × 8, divide each of I_R, I_D, M_R, M_D and M_F into ⌊W/8⌋ × ⌊H/8⌋ mutually non-overlapping image blocks of identical size.
Then vectorize the R, G, B color values of all pixels in each image block of I_R and of I_D. Denote by x_j^R the color vector formed by vectorizing the R, G, B color values of all pixels in the j-th image block of I_R, and by x_j^D the color vector formed likewise from the j-th image block of I_D, where the initial value of j is 1 and x_j^R and x_j^D have dimension 192 × 1. The values of the 1st through 64th elements of x_j^R correspond one-to-one to the R-channel color values of the pixels of the j-th image block of I_R scanned in progressive (raster) order; the 65th through 128th elements correspond to the G-channel color values scanned likewise; and the 129th through 192nd elements correspond to the B-channel color values scanned likewise. The elements of x_j^D are defined in the same way from the j-th image block of I_D;
Also vectorize the pixel values of all pixels in each image block of M_R, M_D and M_F. Denote by s_j^R the pixel-value vector formed from the j-th image block of M_R, by s_j^D that formed from the j-th image block of M_D, and by s_j^F that formed from the j-th image block of M_F, where s_j^R, s_j^D and s_j^F have dimension 64 × 1, and the values of the 1st through 64th elements of each vector correspond one-to-one to the pixel values of the pixels of the corresponding j-th image block scanned in progressive (raster) order;
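The block division and raster-order vectorization of step ③ can be sketched as follows (illustrative only; `to_blocks` and `color_vector` are our names, and leftover rows/columns are discarded, as the embodiment later permits):

```python
import numpy as np

def to_blocks(img, bs=8):
    """Split an H x W (x C) image into non-overlapping bs x bs blocks,
    discarding any leftover rows/columns at the edges."""
    h, w = img.shape[:2]
    img = img[: h - h % bs, : w - w % bs]
    rows = np.split(img, img.shape[0] // bs, axis=0)
    return [b for r in rows for b in np.split(r, img.shape[1] // bs, axis=1)]

def color_vector(block):
    """192-dim vector: R channel raster-scanned, then G, then B (step 3)."""
    return np.concatenate([block[:, :, c].ravel() for c in range(3)])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16, 3))
blocks = to_blocks(img)       # 4 non-overlapping 8 x 8 x 3 blocks
x = color_vector(blocks[0])   # 192-dimensional colour vector
```

The 64-dimensional saliency-block vectors are obtained the same way, with `block.ravel()` on a single-channel map.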
④ Compute the saliency of each image block in M_F; denote the saliency of the j-th image block in M_F by d_j, d_j = Σ_{i=1}^{64} s_j^F(i), where 1 ≤ i ≤ 64 and s_j^F(i) denotes the value of the i-th element of s_j^F;
Then arrange the saliencies of all image blocks in M_F in descending order, and after sorting determine the serial numbers of the image blocks corresponding to the first t_1 saliencies, where t_1 is obtained as the proportion λ_1 of the total number of image blocks, λ_1 denotes the block-selection proportionality coefficient, and λ_1 ∈ (0, 1];
Then find the t_1 image blocks in I_R corresponding to the determined serial numbers and define them as reference image blocks; find the corresponding t_1 image blocks in I_D and define them as distorted image blocks; find the corresponding t_1 image blocks in M_R and define them as reference saliency image blocks; and find the corresponding t_1 image blocks in M_D and define them as distorted saliency image blocks;
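The ranking and selection of step ④ might look as follows. Two details are assumptions on our part, since the original formulas are given as images: the per-block saliency is taken as the sum of the 64 saliency values, and t_1 as the rounded proportion λ_1 of the block count:

```python
import numpy as np

def select_salient_blocks(saliency_blocks, lam1=0.7):
    """Step 4 sketch: rank blocks by saliency, keep the top share lam1.
    Block saliency = sum of the 64 saliency values (assumption)."""
    d = np.array([b.sum() for b in saliency_blocks])
    t1 = max(1, int(round(lam1 * len(saliency_blocks))))
    return np.argsort(-d)[:t1]   # serial numbers of the t1 most salient blocks

# four toy 8x8 saliency blocks with distinct mean saliencies
sal_blocks = [np.full((8, 8), v) for v in (0.1, 0.9, 0.5, 0.3)]
idx = select_salient_blocks(sal_blocks, lam1=0.5)
```

The returned serial numbers are then used to pick the corresponding blocks out of I_R, I_D, M_R and M_D.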
⑤ Measure, using the absolute difference, the saliency difference value between each reference image block in I_R and the corresponding distorted image block in I_D. Denote the saliency difference value between the t′-th reference image block and the t′-th distorted image block by e_{t′}, e_{t′} = Σ_{i=1}^{64} |s_{t′}^R(i) − s_{t′}^D(i)|, where the initial value of t′ is 1, 1 ≤ t′ ≤ t_1, the symbol "| |" takes the absolute value, s_{t′}^R(i) denotes the value of the i-th element of the pixel-value vector of the t′-th reference saliency image block in M_R, and s_{t′}^D(i) denotes the value of the i-th element of the pixel-value vector of the t′-th distorted saliency image block in M_D;
Then arrange the t_1 measured saliency difference values in descending order, and after sorting determine the reference and distorted image blocks corresponding to the first t_2 saliency difference values. Define the determined t_2 reference image blocks as visually important reference image blocks, and take the matrix whose columns are the color vectors of all visually important reference image blocks as the visually important reference image block matrix, denoted Y_R. Define the determined t_2 distorted image blocks as visually important distorted image blocks, and take the matrix whose columns are the color vectors of all visually important distorted image blocks as the visually important distorted image block matrix, denoted Y_D. Here t_2 = λ_2 × t_1, where λ_2 denotes the proportionality coefficient for selecting reference and distorted image blocks, λ_2 ∈ (0, 1]; Y_R and Y_D have dimension 192 × t_2; the t″-th column vector of Y_R is the color vector of the determined t″-th reference image block, the t″-th column vector of Y_D is the color vector of the determined t″-th distorted image block, the initial value of t″ is 1, and 1 ≤ t″ ≤ t_2.
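The selection in step ⑤ can be sketched in the same style. The per-block difference is taken here as the sum of |s_R(i) − s_D(i)| over the 64 pixels and t_2 as the rounded proportion λ_2 of t_1; both are assumptions, since the original formulas appear only as images:

```python
import numpy as np

def select_important_blocks(ref_sal, dist_sal, lam2=0.6):
    """Step 5 sketch: rank the t1 candidate blocks by the absolute
    saliency difference between reference and distorted saliency blocks,
    keep the top share lam2 (difference formula is an assumption)."""
    e = np.array([np.abs(r - d).sum() for r, d in zip(ref_sal, dist_sal)])
    t2 = max(1, int(round(lam2 * len(ref_sal))))
    return np.argsort(-e)[:t2]

ref = [np.full((8, 8), v) for v in (0.2, 0.8, 0.5)]
dst = [np.full((8, 8), v) for v in (0.2, 0.1, 0.4)]
idx = select_important_blocks(ref, dst, lam2=0.34)
```

The color vectors of the selected blocks are then stacked column-wise to form Y_R and Y_D.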
⑥ Center Y_R by subtracting, from the value of each element of each column vector, the mean of the values of all elements in that column vector; denote the matrix obtained after centering by Y, where the dimension of Y is 192 × t_2.
Then perform dimensionality reduction and whitening on Y using principal component analysis, and denote the matrix obtained after dimensionality reduction and whitening by Y_w, Y_w = W × Y, where the dimension of Y_w is M × t_2, W denotes the whitening matrix, the dimension of W is M × 192, 1 < M ≪ 192, and the symbol "≪" means "much smaller than";
⑦ Perform online training on Y_w using the orthogonal locality preserving projections algorithm to obtain the feature basis matrix of Y_w, denoted D, where the dimension of D is M × 192;
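Step ⑦ relies on orthogonal locality preserving projections. As an illustration only, the sketch below implements plain LPP (heat-kernel k-NN affinity, then the generalised eigenproblem X L Xᵀa = λ X D Xᵀa solved for the smallest eigenvalues); the patent's OLPP additionally orthogonalises the projection vectors, and all names and parameters here are our assumptions:

```python
import numpy as np

def lpp_basis(X, k=5, t=1.0, reg=1e-6):
    """Plain LPP on the columns of X (d x n).  NOT the patent's full
    orthogonal variant; a standard-LPP sketch under stated assumptions."""
    d, n = X.shape
    dist2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    S = np.exp(-dist2 / t)                       # heat-kernel affinities
    keep = np.argsort(dist2, axis=1)[:, : k + 1] # self + k nearest neighbours
    mask = np.zeros_like(S, dtype=bool)
    mask[np.repeat(np.arange(n), k + 1), keep.ravel()] = True
    S = np.where(mask | mask.T, S, 0.0)          # symmetrised k-NN graph
    Dg = np.diag(S.sum(axis=1))
    L = Dg - S                                   # graph Laplacian
    A = X @ L @ X.T
    B = X @ Dg @ X.T + reg * np.eye(d)           # ridge keeps B positive definite
    w_b, V_b = np.linalg.eigh(B)                 # B^{-1/2} for the generalised problem
    Bi = V_b @ np.diag(1.0 / np.sqrt(w_b)) @ V_b.T
    w, V = np.linalg.eigh(Bi @ A @ Bi)
    return Bi @ V                                # projection vectors, smallest eigenvalue first

rng = np.random.default_rng(2)
Xw = rng.standard_normal((8, 40))   # stand-in for whitened samples Y_w
P = lpp_basis(Xw)                   # 8 x 8 projection basis
```

Under this sketch an M × 192 feature basis could be formed by composing the learned projection with the whitening matrix, e.g. D = P.T @ W; the patent does not spell out that composition in this excerpt, so treat it as an assumption.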
⑧ According to Y_R and D, compute the manifold feature vector of each visually important reference image block; denote the manifold feature vector of the t″-th visually important reference image block by u_{t″}, u_{t″} = D × y_{t″}^R, where u_{t″} has dimension M × 1 and y_{t″}^R is the t″-th column vector of Y_R. Likewise, according to Y_D and D, compute the manifold feature vector of each visually important distorted image block; denote the manifold feature vector of the t″-th visually important distorted image block by v_{t″}, v_{t″} = D × y_{t″}^D, where v_{t″} has dimension M × 1 and y_{t″}^D is the t″-th column vector of Y_D;
⑨ According to the manifold feature vectors of all visually important reference image blocks and the manifold feature vectors of all visually important distorted image blocks, compute the objective quality evaluation value of I_D, denoted Score, where 1 ≤ m ≤ M, u_{t″}(m) denotes the value of the m-th element of u_{t″}, v_{t″}(m) denotes the value of the m-th element of v_{t″}, and C is a small constant used to guarantee the stability of the result.
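Steps ⑧ and ⑨ then reduce to two matrix products followed by a pooled similarity. The exact Score formula is given as an image in the original text and is not reproduced here; the SSIM-style similarity below, (2uv + C)/(u² + v² + C) averaged over components and blocks with the stated stabilising constant C = 0.04, is purely an assumption for illustration:

```python
import numpy as np

def quality_score(D, YR, YD, C=0.04):
    """Steps 8/9 sketch: project selected blocks' colour vectors onto the
    feature basis D, then pool a similarity between reference and
    distorted manifold features.  The pooled similarity is an ASSUMPTION;
    only the constant C = 0.04 is stated in the text."""
    U = D @ YR                               # M x t2 reference manifold features
    V = D @ YD                               # M x t2 distorted manifold features
    sim = (2 * U * V + C) / (U ** 2 + V ** 2 + C)
    return sim.mean()

rng = np.random.default_rng(3)
D = rng.standard_normal((8, 192))            # stand-in feature basis
YR = rng.standard_normal((192, 20))
score_same = quality_score(D, YR, YR)        # undistorted case
score_diff = quality_score(D, YR, rng.standard_normal((192, 20)))
```

With this pooling, identical reference and distorted blocks give a score of 1, and the score decreases as the manifold features diverge.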
In step ⑥, Y_w is obtained as follows. ⑥_1: let C denote the covariance matrix of Y, C = (Y × Y^T)/t_2, where the dimension of C is 192 × 192 and Y^T is the transpose of Y. ⑥_2: perform eigenvalue decomposition on C to obtain all eigenvalues and the corresponding eigenvectors, where each eigenvector has dimension 192 × 1. ⑥_3: take the M largest eigenvalues and the corresponding M eigenvectors. ⑥_4: compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2) × E^T, where the dimension of Ψ is M × M, Ψ = diag(ψ_1, …, ψ_M), the dimension of E is 192 × M, E = [e_1, …, e_M], diag() denotes a diagonal matrix, ψ_1, …, ψ_M denote the 1st, …, M-th largest eigenvalues taken, and e_1, …, e_M denote the 1st, …, M-th eigenvectors taken. ⑥_5: whiten Y with W to obtain the matrix Y_w after dimensionality reduction and whitening, Y_w = W × Y.
In step ④, λ_1 = 0.7 is taken.
In step ⑤, λ_2 = 0.6 is taken.
In step ⑨, C = 0.04 is taken.
Compared with the prior art, the advantages of the present invention are as follows:
1) The method of the invention takes into account the relationship between saliency and objective image quality evaluation. Using a visual saliency detection algorithm, it merges the respective saliency maps of the reference image and the distorted image into a maximum-fusion saliency map, measures the saliency difference value between each reference image block and the corresponding distorted image block with the absolute difference on the basis of each block's maximum saliency in the fused map, and thereby screens out the visually important reference image blocks and the visually important distorted image blocks. The manifold feature vectors of these blocks are then used to compute the objective quality evaluation value of the distorted image. The evaluation performance is significantly improved, and the correlation between the objective evaluation results and subjective perception is high.
2) The method of the invention finds the intrinsic geometric structure of the data from the image data by means of manifold learning and trains a feature basis matrix. Using the feature basis matrix, it reduces the dimensionality of the visually important reference and distorted image blocks to obtain manifold feature vectors. The manifold feature vectors after dimensionality reduction still maintain the geometric properties of the high-dimensional image while removing much redundant information, making the computation of the objective quality evaluation value of the distorted image simpler and more accurate.
3) Existing objective image quality evaluation methods based on sparse representation obtain the over-complete dictionary by offline learning, which requires a large number of valid training samples and limits image processing with real-time requirements. In contrast, the method of the invention trains on the extracted visually important reference image blocks with the orthogonal locality preserving projections algorithm by online learning, so the feature basis matrix can be obtained in real time; the robustness is therefore higher and the evaluation performance more stable.
Brief description of the drawings
Fig. 1 is the overall implementation block diagram of the method of the present invention;
Fig. 2a is the scatter plot with fitted curve of the method of the present invention on the LIVE image database;
Fig. 2b is the scatter plot with fitted curve of the method of the present invention on the CSIQ image database;
Fig. 2c is the scatter plot with fitted curve of the method of the present invention on the TID2008 image database.
Specific embodiment
The present invention will be described in further detail below with reference to the drawings and embodiments.
The overall implementation block diagram of the color image quality objective evaluation method based on online manifold learning proposed by the present invention is shown in Fig. 1. The method comprises the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated corresponding to I_R.
② Using the existing visual saliency detection algorithm SDSP (Saliency Detection based on Simple Priors), obtain the respective saliency maps of I_R and I_D, denoted M_R and M_D. Then compute the maximum-fusion saliency map from M_R and M_D, denoted M_F: M_F(x, y) = max(M_R(x, y), M_D(x, y)), where 1 ≤ x ≤ W, 1 ≤ y ≤ H, max() is the maximum function, M_R(x, y) denotes the pixel value of the pixel at coordinate (x, y) in M_R, and M_D(x, y) denotes the pixel value of the pixel at coordinate (x, y) in M_D.
③ Using a sliding window of size 8 × 8, divide each of I_R, I_D, M_R, M_D and M_F into ⌊W/8⌋ × ⌊H/8⌋ mutually non-overlapping image blocks of identical size; if the size of the image cannot be divided exactly by 8 × 8, the extra pixels are simply discarded.
Then vectorize the R, G, B color values of all pixels in each image block of I_R and of I_D, obtaining for the j-th block of I_R the color vector x_j^R and for the j-th block of I_D the color vector x_j^D, each of dimension 192 × 1, with the initial value of j being 1. The 1st through 64th elements of x_j^R correspond one-to-one to the R-channel color values of the pixels of the j-th image block of I_R scanned in progressive (raster) order: the 1st element is the R-channel color value of the pixel in row 1, column 1; the 2nd element is that of the pixel in row 1, column 2; and so on. The 65th through 128th elements correspond likewise to the G-channel color values (the 65th element being the G-channel color value of the pixel in row 1, column 1; the 66th that of the pixel in row 1, column 2; and so on), and the 129th through 192nd elements to the B-channel color values (the 129th element being the B-channel color value of the pixel in row 1, column 1; the 130th that of the pixel in row 1, column 2; and so on). The elements of x_j^D are defined in exactly the same way from the j-th image block of I_D.
Likewise, vectorize the pixel values of all pixels in each block of M_R, M_D and M_F; the resulting pixel-value vectors for the j-th block of M_R, M_D and M_F are denoted m_j^R, m_j^D and m_j^F respectively, each of dimension 64×1. In each of these vectors, elements 1 to 64 are the block's pixel values read in raster order, i.e. the 1st element is the pixel at row 1, column 1, the 2nd element is the pixel at row 1, column 2, and so on.
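The block partition and vectorization of step ③ can be sketched in Python as follows (numpy is assumed to be available; the helper name `block_vectors` is illustrative and not from the patent):

```python
import numpy as np

def block_vectors(img, block=8):
    """Split an image into non-overlapping block x block patches and
    vectorize each patch in raster (row-by-row) order, as in step 3.

    For an H x W x 3 color image this returns a 192 x N matrix whose
    columns are [R(64); G(64); B(64)] stacked channel vectors; for an
    H x W saliency map it returns a 64 x N matrix.  Rows/columns that
    do not fill a complete block are discarded.
    """
    H, W = img.shape[:2]
    nh, nw = H // block, W // block
    img = img[:nh * block, :nw * block]           # drop extra pixels
    cols = []
    for by in range(nh):                          # raster order over blocks
        for bx in range(nw):
            patch = img[by*block:(by+1)*block, bx*block:(bx+1)*block]
            if patch.ndim == 3:                   # color block: R, then G, then B
                v = np.concatenate([patch[:, :, c].reshape(-1) for c in range(3)])
            else:                                 # saliency block
                v = patch.reshape(-1)
            cols.append(v)
    return np.stack(cols, axis=1)
```

For a 16×24 color image this yields a 192×6 matrix (6 blocks), and for a 16×24 saliency map a 64×6 matrix, matching the dimensions stated in the text.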
④ Compute the saliency of each block of M_F; the saliency of the j-th block of M_F is denoted d_j and is obtained by summing the elements of m_j^F: d_j = Σ_{i=1}^{64} m_j^F(i), where 1 ≤ i ≤ 64 and m_j^F(i) is the i-th element of m_j^F, i.e. the value of the i-th pixel in the j-th block of M_F.
Then sort the saliencies of all blocks of M_F in descending order and record the serial numbers of the blocks corresponding to the first t1 saliencies after sorting (i.e. the t1 largest saliencies), where t1 is determined by the block-selection ratio λ1 applied to the total number of blocks, λ1 ∈ (0,1]; λ1 = 0.7 in this embodiment.
Next, find the t1 blocks of I_R with the recorded serial numbers and define them as reference image blocks; find the corresponding t1 blocks of I_D and define them as distorted image blocks; find the corresponding t1 blocks of M_R and define them as reference saliency blocks; find the corresponding t1 blocks of M_D and define them as distorted saliency blocks.
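Step ④ amounts to a ranking over the fused saliency blocks. A minimal sketch, assuming the 64×N matrix of vectorized M_F blocks from step ③ and assuming t1 = ⌊λ1·N⌋ (the patent only states that t1 is derived from the selection ratio λ1):

```python
import numpy as np

def select_salient_blocks(MF_vecs, lam1=0.7):
    """Step 4: rank the blocks of the fused saliency map M_F.

    MF_vecs is the 64 x N matrix of vectorized M_F blocks.  The block
    saliency d_j is the sum of the block's 64 pixel values; the serial
    numbers (column indices) of the t1 most salient blocks are returned.
    """
    d = MF_vecs.sum(axis=0)                  # d_j = sum_i m_j^F(i)
    t1 = max(1, int(lam1 * MF_vecs.shape[1]))
    order = np.argsort(d)[::-1]              # descending saliency
    return order[:t1]
```

The returned indices are then used to pick the reference/distorted image blocks from I_R and I_D and the saliency blocks from M_R and M_D.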
⑤ Measure, with the absolute difference, the saliency difference between each reference image block of I_R and the corresponding distorted image block of I_D; the saliency difference between the t'-th reference image block and the t'-th distorted image block is denoted e_t', e_t' = Σ_{i=1}^{64} |m_{t'}^R(i) − m_{t'}^D(i)|, where t' starts at 1, 1 ≤ t' ≤ t1, "| |" is the absolute-value operator, m_{t'}^R(i) is the i-th element of the pixel-value vector m_{t'}^R of the t'-th reference saliency block in M_R (i.e. the value of its i-th pixel), and m_{t'}^D(i) is the i-th element of the pixel-value vector m_{t'}^D of the t'-th distorted saliency block in M_D.
Then sort the t1 measured saliency difference values in descending order and keep the first t2 after sorting (i.e. the t2 largest differences) together with their reference and distorted image blocks. The selected t2 reference image blocks are defined as vision-important reference blocks, and the matrix whose columns are their color vectors is the vision-important reference block matrix, denoted Y_R; the selected t2 distorted image blocks are defined as vision-important distorted blocks, and the matrix whose columns are their color vectors is the vision-important distorted block matrix, denoted Y_D. Here t2 = λ2 × t1, where λ2 is the selection ratio for reference and distorted blocks, λ2 ∈ (0,1]; λ2 = 0.6 in this embodiment. Y_R and Y_D both have dimension 192 × t2; the t''-th column of Y_R is the color vector of the t''-th selected reference block, the t''-th column of Y_D is the color vector of the t''-th selected distorted block, t'' starts at 1, and 1 ≤ t'' ≤ t2.
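Step ⑤ can be sketched on top of the vectorized matrices, assuming t2 = ⌊λ2·t1⌋ rounding (the patent states only t2 = λ2 × t1) and function names that are illustrative rather than from the patent:

```python
import numpy as np

def select_vision_important(XR, XD, MR_vecs, MD_vecs, idx, lam2=0.6):
    """Step 5: among the t1 salient blocks (serial numbers idx), keep
    the t2 pairs whose saliency blocks differ most.

    XR/XD are the 192 x N color-vector matrices of I_R/I_D, and
    MR_vecs/MD_vecs are the 64 x N saliency-vector matrices of M_R/M_D.
    Returns the matrices Y_R and Y_D, each 192 x t2.
    """
    # e_t' = sum_i |m_t'^R(i) - m_t'^D(i)| over the selected blocks
    e = np.abs(MR_vecs[:, idx] - MD_vecs[:, idx]).sum(axis=0)
    t2 = max(1, int(lam2 * len(idx)))
    keep = np.asarray(idx)[np.argsort(e)[::-1][:t2]]   # t2 largest differences
    return XR[:, keep], XD[:, keep]
```

The columns of the returned Y_R and Y_D are the color vectors of the vision-important reference and distorted blocks, in matching order.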
⑥ Center Y_R by subtracting, from every element of each column vector, the mean of all elements of that column; the matrix obtained after centering is denoted Y, where Y has dimension 192 × t2.
Then apply existing principal component analysis (Principal Components Analysis, PCA) to the centered matrix Y to perform dimensionality reduction and whitening; the matrix obtained after dimensionality reduction and whitening is denoted Y_w, Y_w = W × Y, where Y_w has dimension M × t2, W is the whitening matrix of dimension M × 192, and 1 < M << 192 (the symbol "<<" means much smaller than).
In this embodiment principal component analysis is realized by eigendecomposition of the sample covariance matrix; i.e., Y_w in step ⑥ is obtained as follows. ⑥-1: let C denote the covariance matrix of Y, C = (1/t2) × Y × Y^T, where C has dimension 192 × 192 and Y^T is the transpose of Y. ⑥-2: perform eigendecomposition of C to obtain its eigenvalues and corresponding eigenvectors, where each eigenvector has dimension 192 × 1. ⑥-3: take the M largest eigenvalues and the corresponding M eigenvectors, thereby reducing the dimension of Y; M = 8 in this embodiment, i.e. only the first 8 principal components are kept for training, so the dimension drops from 192 to M = 8. ⑥-4: compute the whitening matrix W from the M largest eigenvalues and their eigenvectors, W = Ψ^(−1/2) × E^T, where Ψ = diag(ψ1, …, ψM) has dimension M × M, E = [e1, …, eM] has dimension 192 × M, diag(·) denotes a diagonal matrix, ψ1, …, ψM are the 1st, …, M-th largest eigenvalues, and e1, …, eM are the corresponding 1st, …, M-th eigenvectors. ⑥-5: whiten Y with W to obtain the matrix Y_w after dimensionality reduction and whitening, Y_w = W × Y.
⑦ Train Y_w online with the existing orthogonal locality preserving projections (OLPP) algorithm to obtain the feature basis matrix of Y_w, denoted D, where D has dimension M × 192.
⑧ From Y_R and D, compute the manifold feature vector of each vision-important reference block; the manifold feature vector of the t''-th vision-important reference block is u_t'' = D × y_{t''}^R, where u_t'' has dimension M × 1 and y_{t''}^R is the t''-th column of Y_R. Likewise, from Y_D and D, compute the manifold feature vector of each vision-important distorted block; the manifold feature vector of the t''-th vision-important distorted block is v_t'' = D × y_{t''}^D, where v_t'' has dimension M × 1 and y_{t''}^D is the t''-th column of Y_D.
⑨ From the manifold feature vectors of all vision-important reference blocks and all vision-important distorted blocks, compute the objective quality score of I_D, denoted Score: Score = (1/t2) × Σ_{t''=1}^{t2} [ (Σ_{m=1}^{M} 2 × u_t''(m) × v_t''(m) + C) / (Σ_{m=1}^{M} (u_t''(m)² + v_t''(m)²) + C) ], where 1 ≤ m ≤ M, u_t''(m) is the m-th element of u_t'', v_t''(m) is the m-th element of v_t'', and C is a very small constant that guarantees the stability of the result; C = 0.04 in this embodiment.
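Steps ⑧-⑨ reduce to a matrix product and an SSIM-like per-block similarity. A sketch, assuming the columns of U and V are the manifold feature vectors u_t'' = D·y_{t''}^R and v_t'' = D·y_{t''}^D; the exact placement of the stabilising constant C is an assumption, since the score equation is rendered as an image in the original and is not reproduced in this text:

```python
import numpy as np

def manifold_score(U, V, C=0.04):
    """Steps 8-9: average similarity between reference and distorted
    manifold features.  U and V are M x t2 matrices whose t''-th
    columns are u_t'' and v_t''.
    """
    num = 2.0 * (U * V).sum(axis=0) + C          # sum_m 2 u(m) v(m) + C
    den = (U ** 2 + V ** 2).sum(axis=0) + C      # sum_m (u(m)^2 + v(m)^2) + C
    return float(np.mean(num / den))             # average over the t2 blocks
```

By construction the per-block ratio equals 1 when u_t'' = v_t'' (no distortion of that block) and decreases as the feature vectors diverge, so a higher Score indicates better quality.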
To further demonstrate the feasibility and effectiveness of the method of the present invention, experiments were carried out.
In this embodiment, three publicly available authoritative image databases are selected for testing: the LIVE, CSIQ and TID2008 image databases. Table 1 details the indices of each database, including the number of reference images, the number of distorted images and the number of distortion types. Each database provides the mean subjective score difference of every distorted image.
Table 1. Indices of the authoritative image databases

Image database | Reference images | Distorted images | Distortion types
LIVE           | 29               | 779              | 5
CSIQ           | 30               | 866              | 6
TID2008        | 25               | 1700             | 17
Next, the correlation between the objective quality score obtained with the method of the present invention for every distorted image and its mean subjective score difference is analysed. Three objective criteria commonly used to assess image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC), which reflects prediction accuracy; the Spearman rank-order correlation coefficient (SROCC), which reflects prediction monotonicity; and the root mean squared error (RMSE), which reflects prediction consistency. PLCC and SROCC take values in [0,1]; the closer the value is to 1, the better the objective image quality evaluation method, and vice versa. The smaller the RMSE, the more accurate the prediction of the objective method and the better its performance, and vice versa.
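The three indices can be computed as below. This sketch assumes scipy is available; note that in the patent's protocol the objective scores are first passed through a five-parameter logistic fit before PLCC and RMSE are computed, a fitting step omitted here for brevity:

```python
import numpy as np
from scipy import stats

def evaluation_indices(objective, dmos):
    """PLCC (accuracy), SROCC (monotonicity) and RMSE (consistency)
    between objective quality scores and mean subjective score
    differences."""
    objective = np.asarray(objective, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    plcc = stats.pearsonr(objective, dmos)[0]    # linear correlation
    srocc = stats.spearmanr(objective, dmos)[0]  # rank correlation
    rmse = float(np.sqrt(np.mean((objective - dmos) ** 2)))
    return plcc, srocc, rmse
```

For perfectly monotone, linearly related scores both correlation coefficients equal 1, while RMSE reflects the remaining absolute prediction error.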
For all distorted images in the above LIVE, CSIQ and TID2008 image databases, the objective quality score of every distorted image is computed in the same way, following steps ① to ⑨ of the method of the present invention. The correlation between the objective quality scores obtained in the experiment and the mean subjective score differences is then analysed: the objective quality scores are obtained first, then fitted with a five-parameter logistic function, and finally the performance indices between the objective results and the mean subjective score differences are computed. To verify the effectiveness of the present invention, the method of the present invention is compared with 6 relatively advanced existing full-reference image quality evaluation methods on the three image databases listed in Table 1; the PLCC, SROCC and RMSE coefficients expressing the evaluation performance on the three databases are listed in Table 2. The 6 compared methods are: the classical PSNR method; the structural-similarity-based evaluation method (SSIM) proposed by Z. Wang; the degradation-model-based method (IFC) proposed by N. Damera-Venkata; the information-fidelity-criterion-based method (VIF) proposed by H.R. Sheikh; the wavelet-domain visual signal-to-noise-ratio method (VSNR) proposed by D.M. Chandler; and the sparse-representation-based image quality evaluation method (SPARQ) proposed by T. Guha. As the data listed in Table 2 show, the performance of the method of the present invention on the LIVE image database is second only to the VIF method, while it performs best on both the CSIQ and TID2008 image databases; therefore, on all three databases there is good correlation between the objective quality scores of the distorted images computed by the method of the present invention and the mean subjective score differences. In addition, the PLCC and SROCC values on the LIVE and CSIQ image databases both exceed 0.94, and on the TID2008 image database, whose distortion types are more complex, the PLCC and SROCC values also reach 0.82; after weighted averaging, the performance of the method of the present invention improves to varying degrees over the 6 existing methods. This shows that the objective evaluation results of the method of the present invention agree well with human subjective perception and that the evaluation effect is stable, which fully demonstrates the effectiveness of the method of the present invention.
Table 2. Performance comparison of the method of the present invention with existing objective image quality evaluation methods
Fig. 2a shows the scatter-plot fitting curve of the method of the present invention on the LIVE image database, Fig. 2b the scatter-plot fitting curve on the CSIQ image database, and Fig. 2c the scatter-plot fitting curve on the TID2008 image database. As can be clearly seen from Figs. 2a, 2b and 2c, the scatter points are distributed near the fitted line and exhibit good monotonicity and continuity.

Claims (4)

1. An objective color image quality evaluation method based on online manifold learning, characterised by comprising the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated corresponding to I_R;
② Using a visual saliency detection algorithm, obtain the respective saliency maps of I_R and I_D, denoted correspondingly M_R and M_D; then from M_R and M_D compute the maximum-fused saliency map, denoted M_F, whose pixel value at coordinate (x, y) is denoted M_F(x, y), M_F(x, y) = max(M_R(x, y), M_D(x, y)), where 1 ≤ x ≤ W, 1 ≤ y ≤ H, max(·) is the maximum function, M_R(x, y) is the pixel value at coordinate (x, y) in M_R, and M_D(x, y) is the pixel value at coordinate (x, y) in M_D;
③ Divide each of I_R, I_D, M_R, M_D and M_F, using an 8×8 sliding window, into ⌊W/8⌋×⌊H/8⌋ non-overlapping image blocks of identical size;
Then vectorize the R, G and B color values of all pixels in each block of I_R and I_D; the color vector formed from the j-th block of I_R is denoted x_j^R and that from the j-th block of I_D is denoted x_j^D, where j starts at 1 and both x_j^R and x_j^D have dimension 192×1; in x_j^R, elements 1 to 64 are the R-channel color values of the block's pixels read in raster (row-by-row) order, elements 65 to 128 are the G-channel values in raster order and elements 129 to 192 are the B-channel values in raster order, and x_j^D is organised identically from the j-th block of I_D;
And vectorize the pixel values of all pixels in each block of M_R, M_D and M_F; the pixel-value vectors formed from the j-th block of M_R, M_D and M_F are denoted m_j^R, m_j^D and m_j^F respectively, each of dimension 64×1, with elements 1 to 64 being the block's pixel values read in raster order;
④ Compute the saliency of each block of M_F; the saliency of the j-th block of M_F is denoted d_j, d_j = Σ_{i=1}^{64} m_j^F(i), where 1 ≤ i ≤ 64 and m_j^F(i) is the i-th element of m_j^F;
Then sort the saliencies of all blocks of M_F in descending order and record the serial numbers of the blocks corresponding to the first t1 saliencies after sorting, where t1 is determined by the block-selection ratio λ1 applied to the total number of blocks, λ1 ∈ (0,1];
Next, find the t1 blocks of I_R with the recorded serial numbers and define them as reference image blocks; find the corresponding t1 blocks of I_D and define them as distorted image blocks; find the corresponding t1 blocks of M_R and define them as reference saliency blocks; find the corresponding t1 blocks of M_D and define them as distorted saliency blocks;
⑤ Measure, with the absolute difference, the saliency difference between each reference image block of I_R and the corresponding distorted image block of I_D; the saliency difference between the t'-th reference image block and the t'-th distorted image block is denoted e_t', e_t' = Σ_{i=1}^{64} |m_{t'}^R(i) − m_{t'}^D(i)|, where t' starts at 1, 1 ≤ t' ≤ t1, "| |" is the absolute-value operator, m_{t'}^R(i) is the i-th element of the pixel-value vector m_{t'}^R of the t'-th reference saliency block, and m_{t'}^D(i) is the i-th element of the pixel-value vector m_{t'}^D of the t'-th distorted saliency block;
Then sort the t1 measured saliency difference values in descending order and keep the first t2 after sorting; define the corresponding t2 reference image blocks as vision-important reference blocks, whose color vectors form the vision-important reference block matrix, denoted Y_R, and the corresponding t2 distorted image blocks as vision-important distorted blocks, whose color vectors form the vision-important distorted block matrix, denoted Y_D, where t2 = λ2 × t1, λ2 is the selection ratio for reference and distorted blocks, λ2 ∈ (0,1], Y_R and Y_D both have dimension 192 × t2, the t''-th column of Y_R is the color vector of the t''-th selected reference block, the t''-th column of Y_D is the color vector of the t''-th selected distorted block, t'' starts at 1, and 1 ≤ t'' ≤ t2;
⑥ Center Y_R by subtracting, from every element of each column vector, the mean of all elements of that column; the matrix obtained after centering is denoted Y, where Y has dimension 192 × t2;
Then apply principal component analysis to Y to perform dimensionality reduction and whitening; the matrix obtained after dimensionality reduction and whitening is denoted Y_w, Y_w = W × Y, where Y_w has dimension M × t2, W is the whitening matrix of dimension M × 192, and 1 < M << 192 (the symbol "<<" means much smaller than);
⑦ Train Y_w online with the orthogonal locality preserving projections algorithm to obtain the feature basis matrix of Y_w, denoted D, where D has dimension M × 192;
⑧ From Y_R and D, compute the manifold feature vector of each vision-important reference block; the manifold feature vector of the t''-th vision-important reference block is u_t'' = D × y_{t''}^R, where u_t'' has dimension M × 1 and y_{t''}^R is the t''-th column of Y_R; and from Y_D and D, compute the manifold feature vector of each vision-important distorted block; the manifold feature vector of the t''-th vision-important distorted block is v_t'' = D × y_{t''}^D, where v_t'' has dimension M × 1 and y_{t''}^D is the t''-th column of Y_D;
⑨ From the manifold feature vectors of all vision-important reference blocks and all vision-important distorted blocks, compute the objective quality score of I_D, denoted Score: Score = (1/t2) × Σ_{t''=1}^{t2} [ (Σ_{m=1}^{M} 2 × u_t''(m) × v_t''(m) + C) / (Σ_{m=1}^{M} (u_t''(m)² + v_t''(m)²) + C) ], where 1 ≤ m ≤ M, u_t''(m) is the m-th element of u_t'', v_t''(m) is the m-th element of v_t'', and C is a very small constant that guarantees the stability of the result;
Y_w in step ⑥ is obtained as follows. ⑥-1: let C denote the covariance matrix of Y, C = (1/t2) × Y × Y^T, where C has dimension 192 × 192 and Y^T is the transpose of Y; ⑥-2: perform eigendecomposition of C to obtain its eigenvalues and corresponding eigenvectors, where each eigenvector has dimension 192 × 1; ⑥-3: take the M largest eigenvalues and the corresponding M eigenvectors; ⑥-4: compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2) × E^T, where Ψ = diag(ψ1, …, ψM) has dimension M × M, E = [e1, …, eM] has dimension 192 × M, diag(·) denotes a diagonal matrix, ψ1, …, ψM are the 1st, …, M-th largest eigenvalues, and e1, …, eM are the corresponding 1st, …, M-th eigenvectors; ⑥-5: whiten Y with W to obtain the matrix Y_w after dimensionality reduction and whitening, Y_w = W × Y.
2. The objective color image quality evaluation method based on online manifold learning according to claim 1, characterised in that λ1 = 0.7 in step ④.
3. The objective color image quality evaluation method based on online manifold learning according to claim 1, characterised in that λ2 = 0.6 in step ⑤.
4. The objective color image quality evaluation method based on online manifold learning according to claim 1, characterised in that C = 0.04 in step ⑨.
CN201610202181.5A 2016-03-31 2016-03-31 A kind of color image quality method for objectively evaluating based on online manifold learning Active CN105913413B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610202181.5A CN105913413B (en) 2016-03-31 2016-03-31 A kind of color image quality method for objectively evaluating based on online manifold learning
US15/197,604 US9846818B2 (en) 2016-03-31 2016-06-29 Objective assessment method for color image quality based on online manifold learning


Publications (2)

Publication Number Publication Date
CN105913413A CN105913413A (en) 2016-08-31
CN105913413B true CN105913413B (en) 2019-02-22

Family

ID=56745319


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220962B (en) * 2017-04-07 2020-04-21 北京工业大学 Image detection method and device for tunnel cracks
CN108921824A (en) * 2018-06-11 2018-11-30 中国科学院国家空间科学中心 A kind of color image quality evaluation method based on rarefaction feature extraction
CN109003256B (en) * 2018-06-13 2022-03-04 天津师范大学 Multi-focus image fusion quality evaluation method based on joint sparse representation
CN109003265B (en) * 2018-07-09 2022-02-11 嘉兴学院 No-reference image quality objective evaluation method based on Bayesian compressed sensing
CN109636397A (en) * 2018-11-13 2019-04-16 平安科技(深圳)有限公司 Transit trip control method, device, computer equipment and storage medium
CN109523542B (en) * 2018-11-23 2022-12-30 嘉兴学院 No-reference color image quality evaluation method based on color vector included angle LBP operator
CN109754391B (en) * 2018-12-18 2021-10-22 北京爱奇艺科技有限公司 Image quality evaluation method and device and electronic equipment
CN109978834A (en) * 2019-03-05 2019-07-05 方玉明 A kind of screen picture quality evaluating method based on color and textural characteristics
CN110189243B (en) * 2019-05-13 2023-03-24 杭州电子科技大学上虞科学与工程研究院有限公司 Color image robust watermarking method based on tensor singular value decomposition
CN110147792B (en) * 2019-05-22 2021-05-28 齐鲁工业大学 Medicine package character high-speed detection system and method based on memory optimization
CN111127387B (en) * 2019-07-11 2024-02-09 宁夏大学 Quality evaluation method for reference-free image
CN110399887B (en) * 2019-07-19 2022-11-04 合肥工业大学 Representative color extraction method based on visual saliency and histogram statistical technology
CN111354048B (en) * 2020-02-24 2023-06-20 清华大学深圳国际研究生院 Quality evaluation method and device for obtaining pictures by facing camera
CN111881758B (en) * 2020-06-29 2021-03-19 普瑞达建设有限公司 Parking management method and system
CN112233065B (en) * 2020-09-15 2023-02-24 西北大学 Total-blind image quality evaluation method based on multi-dimensional visual feature cooperation under saliency modulation
US20240054607A1 (en) * 2021-09-20 2024-02-15 Meta Platforms, Inc. Reducing the complexity of video quality metric calculations
CN117456208A (en) * 2023-11-07 2024-01-26 广东新裕信息科技有限公司 Double-flow sketch quality evaluation method based on significance detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036501A (en) * 2014-06-03 2014-09-10 宁波大学 Three-dimensional image quality objective evaluation method based on sparse representation
CN104408716A (en) * 2014-11-24 2015-03-11 宁波大学 Three-dimensional image quality objective evaluation method based on visual fidelity
CN105447884A (en) * 2015-12-21 2016-03-30 宁波大学 Objective image quality evaluation method based on manifold feature similarity

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008150840A1 (en) * 2007-05-29 2008-12-11 University Of Iowa Research Foundation Methods and systems for determining optimal features for classifying patterns or objects in images
US8848970B2 (en) * 2011-04-26 2014-09-30 Digimarc Corporation Salient point-based arrangements
US9454712B2 (en) * 2014-10-08 2016-09-27 Adobe Systems Incorporated Saliency map computation


Also Published As

Publication number Publication date
CN105913413A (en) 2016-08-31
US9846818B2 (en) 2017-12-19
US20170286798A1 (en) 2017-10-05


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant