CN105913413B - Objective color image quality evaluation method based on online manifold learning
- Publication number: CN105913413B
- Application number: CN201610202181.5A
- Authority: CN (China)
- Prior art keywords: image block, value, pixel, image, color
- Prior art date: 2016-03-31
- Legal status: Active
Classifications
- G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
- G06F 18/22: Matching criteria, e.g. proximity measures
- G06F 18/23213: Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
- G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms; by summing image-intensity values; projection analysis
- G06V 10/56: Extraction of image or video features relating to colour
- G06V 10/763: Recognition using clustering; non-hierarchical techniques, e.g. based on statistics or modelling of distributions
- G06V 10/7715: Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
- G06V 10/993: Evaluation of the quality of the acquired pattern
- G06V 40/172: Human faces; classification, e.g. identification
- G06V 10/513: Sparse representations
- G06T 2207/10024: Image acquisition modality: color image
- G06T 2207/20021: Dividing image into blocks, subimages or windows
- G06T 2207/20081: Training; learning
- G06T 2207/30168: Image quality inspection
Abstract
The invention discloses an objective color image quality evaluation method based on online manifold learning. Taking into account the relationship between saliency and objective image quality evaluation, the method uses a visual saliency detection algorithm to obtain the respective saliency maps of the reference image and the distorted image and fuses them into a maximum-fusion saliency map. On the basis of the saliency of each image block in the maximum-fusion saliency map, the saliency difference between each reference image block and the corresponding distorted image block is measured with an absolute difference, and the visually important reference image blocks and visually important distorted image blocks are thereby screened out. The manifold feature vectors of these visually important reference and distorted image blocks are then used to compute the objective quality evaluation value of the distorted image. The evaluation performance is significantly improved, and the correlation between the objective evaluation results and subjective perception is high.
Description
Technical Field
The invention relates to an image quality evaluation method, in particular to an objective color image quality evaluation method based on online manifold learning.
Background
Owing to the performance limits of image processing systems, various types of distortion can be introduced during image acquisition, transmission, encoding and other processes; such distortion degrades image quality and hinders people from extracting information from the image. Image quality is an important index for comparing the performance of image processing algorithms and the parameter settings of image processing systems, so constructing effective image quality evaluation methods is of great value in fields such as image transmission, multimedia network communication and video analysis. Image quality evaluation methods are generally divided into two categories, subjective evaluation and objective evaluation. Since the final receiver of an image is a human observer, subjective evaluation is the most reliable method, but it is time-consuming and labor-intensive and cannot easily be embedded in an image processing system, which limits its practical application. In contrast, objective evaluation methods are simple to operate, convenient and practical, and are the current focus of research in both academia and industry.
Currently, the simplest and most widely used objective evaluation methods are the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE). These methods are simple to compute and have clear physical meaning, but because they ignore the visual characteristics of the human eye, their results often disagree with subjective perception. In fact, the human eye does not process an image signal point by point, so researchers have improved the agreement between objective evaluation results and human visual perception by incorporating visual characteristics into the evaluation. For example, the Structural Similarity (SSIM) index characterizes the structural information of an image from three aspects, luminance, contrast and structure, and evaluates image quality on that basis. Subsequent work built on SSIM produced a multi-scale SSIM method, a complex-wavelet SSIM method and an information-content-weighted SSIM method, improving SSIM's performance. Beyond structural-similarity methods, Sheikh et al. treated full-reference image quality evaluation as an information fidelity problem and proposed a method based on Visual Information Fidelity (VIF), derived from the amount of image information lost in the quantization and distortion process. Chandler et al. combined the near-threshold and supra-threshold characteristics of visual perception with the wavelet transform and proposed a wavelet-based visual signal-to-noise ratio (VSNR) method that adapts well to different viewing conditions. Although researchers have studied the human visual system intensively, its complexity means that our understanding of it is still shallow, and no objective image quality evaluation method fully consistent with subjective human perception has yet been produced.
To better embody the characteristics of the human visual system, objective image quality evaluation methods based on sparse representation and visual attention are receiving increasing attention. Many studies have shown that sparse representation is a good description of neuronal activity in the primary visual cortex of the human brain. For example, Guha et al. disclose an image quality evaluation method based on sparse representation that is divided into two stages. The first stage is dictionary learning: image blocks are randomly selected from the reference image as training samples, and an over-complete dictionary is trained with the K-SVD algorithm. The second stage is evaluation: the image blocks of the reference image and the corresponding blocks of the distorted image are sparsely coded with the Orthogonal Matching Pursuit (OMP) algorithm to obtain the sparse coefficients of the reference and distorted images, from which an objective evaluation value is computed. However, such sparse-representation methods need orthogonal matching pursuit for sparse coding, which carries a large computational overhead, and the over-complete dictionary is obtained offline, requiring a large number of valid natural images as training samples; this limits their use in image processing with real-time requirements.
High-dimensional data such as digital images contain substantial information redundancy and need to be processed with dimensionality reduction techniques, ideally preserving the essential structure of the data while reducing its dimension. Manifold learning has been a research hotspot in information science since it was first introduced in the journal Science in 2000. Assuming the data are uniformly sampled from a low-dimensional manifold embedded in a high-dimensional Euclidean space, manifold learning recovers the low-dimensional manifold structure from the high-dimensional samples, i.e. it finds the low-dimensional manifold in the high-dimensional space and the corresponding embedding mapping, thereby achieving dimensionality reduction. Studies suggest that manifolds are the basis of perception, and that the brain perceives things in a manifold fashion. In recent years manifold learning has been widely applied to image denoising, face recognition, human behavior detection and the like, with good results. Deng et al. addressed the non-orthogonality of the basis vectors produced by the Locality Preserving Projections (LPP) algorithm and obtained the Orthogonal Locality Preserving Projections (OLPP) algorithm, which can uncover the manifold structure of data, has linear characteristics, and achieves better locality preservation and discrimination. Manifold learning can simulate how image signals are described by cells in the primary visual cortex, so it can accurately extract the visual perceptual characteristics of images. Low-dimensional manifold features describe the nonlinear relationships among distorted images well: in the manifold space, distorted images are arranged according to distortion type and strength. It is therefore worthwhile to study an objective image quality evaluation method based on manifold learning whose objective results match human visual perception closely.
Disclosure of Invention
The object of the invention is to provide an objective color image quality evaluation method based on online manifold learning that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical solution adopted by the invention to solve the above technical problem is an objective color image quality evaluation method based on online manifold learning, comprising the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated that corresponds to I_R;
② Obtain the respective saliency maps of I_R and I_D with a visual saliency detection algorithm, denoted correspondingly M_R and M_D; then compute the maximum-fusion saliency map from M_R and M_D, denoted M_F, where the pixel value at coordinate position (x, y) in M_F is denoted M_F(x, y) and M_F(x, y) = max(M_R(x, y), M_D(x, y)), with 1 ≤ x ≤ W, 1 ≤ y ≤ H, max() the maximum function, M_R(x, y) the pixel value at coordinate position (x, y) in M_R, and M_D(x, y) the pixel value at coordinate position (x, y) in M_D;
③ Divide each of I_R, I_D, M_R, M_D and M_F with a sliding window of size 8×8 into N = ⌊W/8⌋ × ⌊H/8⌋ non-overlapping image blocks of identical size;

Then vectorize the color values of the R, G and B channels of all pixels in every image block of I_R and I_D. Denote the color vector formed from the j-th image block of I_R as x_j^R and the color vector formed from the j-th image block of I_D as x_j^D, where j has initial value 1 and x_j^R and x_j^D both have dimension 192×1; elements 1 to 64 of x_j^R take, in progressive scan order, the R-channel color values of the pixels in the j-th image block of I_R, elements 65 to 128 take the G-channel color values, and elements 129 to 192 take the B-channel color values; elements 1 to 64, 65 to 128 and 129 to 192 of x_j^D likewise take, in progressive scan order, the R-, G- and B-channel color values of the pixels in the j-th image block of I_D;

Also vectorize the pixel values of all pixels in every image block of M_R, M_D and M_F. Denote the pixel value vector formed from the j-th image block of M_R as m_j^R, that from the j-th image block of M_D as m_j^D, and that from the j-th image block of M_F as m_j^F, where m_j^R, m_j^D and m_j^F all have dimension 64×1 and their elements 1 to 64 take, in progressive scan order, the pixel values of the pixels in the j-th image block of M_R, M_D and M_F respectively;
④ Compute the saliency of every image block of M_F; the saliency of the j-th image block of M_F is denoted d_j and d_j = Σ_{i=1}^{64} m_j^F(i), where 1 ≤ i ≤ 64 and m_j^F(i) denotes the value of the i-th element of m_j^F;

Then sort the saliencies of all image blocks of M_F in descending order and determine the sequence numbers of the image blocks corresponding to the first t_1 saliencies, where t_1 = λ_1 × N, λ_1 denotes the image block selection scale factor and λ_1 ∈ (0,1];

Then find the t_1 image blocks of I_R whose sequence numbers match those determined and define them as reference image blocks; find the t_1 matching image blocks of I_D and define them as distorted image blocks; find the t_1 matching image blocks of M_R and define them as reference salient image blocks; find the t_1 matching image blocks of M_D and define them as distorted salient image blocks;
⑤ Measure with an absolute difference the saliency difference value between every reference image block of I_R and the corresponding distorted image block of I_D; the saliency difference value between the t'-th reference image block of I_R and the t'-th distorted image block of I_D is denoted e_{t'} and e_{t'} = Σ_{i=1}^{64} |m_{t'}^R(i) − m_{t'}^D(i)|, where t' has initial value 1, 1 ≤ t' ≤ t_1, the symbol "| |" is the absolute value operator, m_{t'}^R(i) denotes the value of the i-th element of the pixel value vector m_{t'}^R corresponding to the t'-th reference salient image block of M_R, and m_{t'}^D(i) denotes the value of the i-th element of the pixel value vector m_{t'}^D corresponding to the t'-th distorted salient image block of M_D;

Then sort the t_1 measured saliency difference values in descending order and determine the reference image blocks and distorted image blocks corresponding to the first t_2 saliency difference values. Define the t_2 determined reference image blocks as visually important reference image blocks, and take the matrix formed by the color vectors of all visually important reference image blocks as the visually important reference image block matrix, denoted Y_R; define the t_2 determined distorted image blocks as visually important distorted image blocks, and take the matrix formed by the color vectors of all visually important distorted image blocks as the visually important distorted image block matrix, denoted Y_D. Here t_2 = λ_2 × t_1, λ_2 denotes the selection scale factor of the reference and distorted image blocks, λ_2 ∈ (0,1], Y_R and Y_D both have dimension 192 × t_2, the t''-th column vector of Y_R is the color vector of the t''-th determined reference image block, the t''-th column vector of Y_D is the color vector of the t''-th determined distorted image block, and t'' has initial value 1, 1 ≤ t'' ≤ t_2;
⑥ Center Y_R by subtracting from the value of each element in every column vector the mean of the values of all elements in that column vector; denote the centered matrix as Y, of dimension 192 × t_2;

Then apply principal component analysis to Y for dimensionality reduction and whitening, and denote the resulting matrix as Y_w, Y_w = W × Y, where Y_w has dimension M × t_2, W denotes the whitening matrix of dimension M × 192, 1 < M ≪ 192, and the symbol "≪" means "much smaller than";
⑦ Apply the orthogonal locality preserving projections algorithm to Y_w for online training to obtain the feature basis matrix of Y_w, denoted D, of dimension M × 192;
⑧ From Y_R and D, compute the manifold feature vector of each visually important reference image block; the manifold feature vector of the t''-th visually important reference image block is denoted u_{t''} and u_{t''} = D × y_{t''}^R, where u_{t''} has dimension M × 1 and y_{t''}^R is the t''-th column vector of Y_R. Likewise, from Y_D and D, compute the manifold feature vector of each visually important distorted image block; the manifold feature vector of the t''-th visually important distorted image block is denoted v_{t''} and v_{t''} = D × y_{t''}^D, where v_{t''} has dimension M × 1 and y_{t''}^D is the t''-th column vector of Y_D;
⑨ From the manifold feature vectors of all visually important reference image blocks and of all visually important distorted image blocks, compute the objective quality evaluation value of I_D, denoted Score, Score = (1/t_2) × Σ_{t''=1}^{t_2} [ Σ_{m=1}^{M} (2 × u_{t''}(m) × v_{t''}(m) + C) / Σ_{m=1}^{M} (u_{t''}(m)² + v_{t''}(m)² + C) ], where 1 ≤ m ≤ M, u_{t''}(m) denotes the value of the m-th element of u_{t''}, v_{t''}(m) denotes the value of the m-th element of v_{t''}, and C is a small constant that ensures the stability of the result.
Y_w in step ⑥ is obtained as follows: ⑥_1, let C denote the covariance matrix of Y, C = (1/t_2) × Y × Y^T, where C has dimension 192 × 192 and Y^T is the transpose of Y; ⑥_2, perform eigenvalue decomposition of C to obtain its eigenvalues and the corresponding eigenvectors, each eigenvector having dimension 192 × 1; ⑥_3, take the M largest eigenvalues and the corresponding M eigenvectors; ⑥_4, compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2) × E^T, where Ψ has dimension M × M, Ψ = diag(ψ_1, …, ψ_M), E has dimension 192 × M, E = [e_1, …, e_M], diag() denotes a diagonal matrix, ψ_1, …, ψ_M are the 1st, …, M-th largest eigenvalues and e_1, …, e_M are the corresponding 1st, …, M-th eigenvectors; ⑥_5, whiten Y according to W to obtain the dimension-reduced and whitened matrix Y_w, Y_w = W × Y.
In step ④, take λ_1 = 0.7.
In step ⑤, take λ_2 = 0.6.
In step ⑨, take C = 0.04.
Compared with the prior art, the invention has the advantages that:
1) The method of the invention considers the relationship between saliency and objective image quality evaluation. Using a visual saliency detection algorithm, it obtains the respective saliency maps of the reference image and the distorted image and fuses them into a maximum-fusion saliency map; on the basis of the saliency of the image blocks in the maximum-fusion saliency map, it measures the saliency difference between each reference image block and the corresponding distorted image block with an absolute difference, thereby screening out the visually important reference image blocks and the visually important distorted image blocks; it then uses the manifold feature vectors of these blocks to compute the objective quality evaluation value of the distorted image. The evaluation performance is markedly improved, and the correlation between the objective evaluation results and subjective perception is high.
2) Starting from the image data, the method searches for the intrinsic geometric structure of the data through manifold learning and trains a feature basis matrix, which is used to reduce the dimension of the visually important reference and distorted image blocks and obtain their manifold feature vectors. The dimension-reduced manifold feature vectors still retain the geometric characteristics of the high-dimensional image data while shedding much redundant information, making the computation of the objective quality evaluation value of the distorted image simpler and more accurate.
3) Existing sparse-representation-based objective image quality evaluation methods require a large number of valid training samples for the offline learning of an over-complete dictionary, which limits their use on images with real-time requirements. In contrast, the method of the invention obtains the feature basis matrix by online learning, training on the extracted visually important reference image blocks with the orthogonal locality preserving projections algorithm; the feature basis matrix can thus be obtained in real time, so the method is more robust and its evaluation performance more stable.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is the scatter-fit plot of the method of the present invention on the LIVE image database;
FIG. 2b is the scatter-fit plot of the method of the present invention on the CSIQ image database;
FIG. 2c is the scatter-fit plot of the method of the present invention on the TID2008 image database.
Detailed Description
The invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The invention provides an objective color image quality evaluation method based on online manifold learning, whose overall implementation block diagram is shown in FIG. 1; the method comprises the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated that corresponds to I_R.
② Adopt the existing visual saliency detection algorithm Saliency Detection based on Simple Priors (SDSP) to obtain the respective saliency maps of I_R and I_D, denoted correspondingly M_R and M_D; then compute the maximum-fusion saliency map from M_R and M_D, denoted M_F, where the pixel value at coordinate position (x, y) in M_F is denoted M_F(x, y) and M_F(x, y) = max(M_R(x, y), M_D(x, y)), with 1 ≤ x ≤ W, 1 ≤ y ≤ H, max() the maximum function, M_R(x, y) the pixel value at coordinate position (x, y) in M_R, and M_D(x, y) the pixel value at coordinate position (x, y) in M_D.
③ Divide each of I_R, I_D, M_R, M_D and M_F with a sliding window of size 8×8 into N = ⌊W/8⌋ × ⌊H/8⌋ non-overlapping image blocks of identical size; if the image size is not divisible by 8×8, the leftover pixels are simply not processed.
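As an illustrative sketch (not part of the claimed method), the maximum fusion of step ② and the block partition of step ③ can be expressed with NumPy as follows; the saliency maps are assumed to be already available as 2-D arrays, and all function names are hypothetical:

```python
import numpy as np

def max_fusion_saliency(M_R, M_D):
    # M_F(x, y) = max(M_R(x, y), M_D(x, y)), element-wise over the two saliency maps
    return np.maximum(M_R, M_D)

def partition_into_blocks(img, size=8):
    # Split an H x W (or H x W x 3) array into non-overlapping size x size blocks
    # in raster order; leftover rows/columns that cannot fill a block are dropped.
    H, W = img.shape[:2]
    rows, cols = H // size, W // size
    return [img[r * size:(r + 1) * size, c * size:(c + 1) * size]
            for r in range(rows) for c in range(cols)]  # floor(W/8)*floor(H/8) blocks
```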
Then vectorize the color values of the R, G and B channels of all pixels in every image block of I_R and I_D. Denote the color vector formed from the j-th image block of I_R as x_j^R and that formed from the j-th image block of I_D as x_j^D, where j has initial value 1 and x_j^R and x_j^D both have dimension 192×1. Elements 1 to 64 of x_j^R take, in progressive scan order, the R-channel color values of the pixels in the j-th image block of I_R; that is, element 1 of x_j^R is the R-channel value of the pixel at row 1, column 1 of the block, element 2 is the R-channel value of the pixel at row 1, column 2, and so on. Elements 65 to 128 of x_j^R take, in the same order, the G-channel color values of the block, and elements 129 to 192 take the B-channel color values. x_j^D is formed from the j-th image block of I_D in exactly the same way: elements 1 to 64 hold the R-channel values, elements 65 to 128 the G-channel values, and elements 129 to 192 the B-channel values, each in progressive scan order.
Also vectorize the pixel values of all pixels in every image block of M_R, M_D and M_F. Denote the pixel value vector formed from the j-th image block of M_R as m_j^R, that from the j-th image block of M_D as m_j^D, and that from the j-th image block of M_F as m_j^F, each of dimension 64×1. Elements 1 to 64 of m_j^R take, in progressive scan order, the pixel values of the j-th image block of M_R; that is, element 1 is the pixel value of the pixel at row 1, column 1 of the block, element 2 the pixel value at row 1, column 2, and so on; m_j^D and m_j^F are formed from the j-th image blocks of M_D and M_F in the same way.
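A minimal sketch of this vectorization, continuing the code above; the row-by-row reshape matches the progressive scan order described in the text:

```python
def color_vector(block_rgb):
    # 8 x 8 x 3 block -> 192 x 1 column vector: elements 1-64 hold the R channel
    # in row-by-row (progressive) scan order, 65-128 the G channel, 129-192 the B channel
    r = block_rgb[:, :, 0].reshape(-1)
    g = block_rgb[:, :, 1].reshape(-1)
    b = block_rgb[:, :, 2].reshape(-1)
    return np.concatenate([r, g, b]).reshape(192, 1)

def pixel_vector(block):
    # 8 x 8 saliency block -> 64 x 1 column vector, same scan order
    return block.reshape(64, 1)
```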
④ Compute the saliency of every image block of M_F; the saliency of the j-th image block of M_F is denoted d_j and d_j = Σ_{i=1}^{64} m_j^F(i), where 1 ≤ i ≤ 64 and m_j^F(i) denotes the value of the i-th element of m_j^F, i.e. the pixel value of the i-th pixel of the j-th image block of M_F.

Then sort the saliencies of all image blocks of M_F in descending order and determine the sequence numbers of the image blocks corresponding to the first t_1 (i.e. the t_1 largest) saliencies, where t_1 = λ_1 × N, λ_1 denotes the image block selection scale factor, λ_1 ∈ (0,1]; in this embodiment, take λ_1 = 0.7.

Then find the t_1 image blocks of I_R whose sequence numbers match those determined and define them as reference image blocks; find the t_1 matching image blocks of I_D and define them as distorted image blocks; find the t_1 matching image blocks of M_R and define them as reference salient image blocks; find the t_1 matching image blocks of M_D and define them as distorted salient image blocks.
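A sketch of the block saliency and top-t_1 selection of step ④, assuming m_F_vectors is the list of 64×1 vectors produced above; rounding t_1 to an integer is an assumption of this sketch:

```python
def select_top_salient(m_F_vectors, lam1=0.7):
    # d_j is the sum of the 64 saliency values of the j-th block of M_F
    d = np.array([v.sum() for v in m_F_vectors])
    t1 = max(1, int(round(lam1 * len(d))))
    # sequence numbers of the t1 most salient blocks, in descending saliency order
    return np.argsort(-d)[:t1]
```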
⑤ Measure with an absolute difference the saliency difference value between every reference image block of I_R and the corresponding distorted image block of I_D; the saliency difference value between the t'-th reference image block of I_R and the t'-th distorted image block of I_D is denoted e_{t'} and e_{t'} = Σ_{i=1}^{64} |m_{t'}^R(i) − m_{t'}^D(i)|, where t' has initial value 1, 1 ≤ t' ≤ t_1, the symbol "| |" is the absolute value operator, m_{t'}^R(i) denotes the value of the i-th element of the pixel value vector m_{t'}^R of the t'-th reference salient image block of M_R, i.e. the pixel value of its i-th pixel, and m_{t'}^D(i) denotes the value of the i-th element of the pixel value vector m_{t'}^D of the t'-th distorted salient image block of M_D, i.e. the pixel value of its i-th pixel.

Then sort the t_1 measured saliency difference values in descending order and determine the reference image blocks and distorted image blocks corresponding to the first t_2 (i.e. the t_2 largest) saliency difference values. Define the t_2 determined reference image blocks as visually important reference image blocks, and take the matrix formed by the color vectors of all visually important reference image blocks as the visually important reference image block matrix, denoted Y_R; define the t_2 determined distorted image blocks as visually important distorted image blocks, and take the matrix formed by the color vectors of all visually important distorted image blocks as the visually important distorted image block matrix, denoted Y_D. Here t_2 = λ_2 × t_1, λ_2 denotes the selection scale factor of the reference and distorted image blocks, λ_2 ∈ (0,1], and in this embodiment λ_2 = 0.6; Y_R and Y_D both have dimension 192 × t_2, the t''-th column vector of Y_R is the color vector of the t''-th determined reference image block, the t''-th column vector of Y_D is the color vector of the t''-th determined distorted image block, and t'' has initial value 1, 1 ≤ t'' ≤ t_2.
⑥ Center Y_R by subtracting from the value of each element in every column vector the mean of the values of all elements in that column vector; denote the centered matrix as Y, of dimension 192 × t_2.

Then apply the existing Principal Component Analysis (PCA) to the centered Y for dimensionality reduction and whitening, and denote the resulting matrix as Y_w, Y_w = W × Y, where Y_w has dimension M × t_2, W denotes the whitening matrix of dimension M × 192, 1 < M ≪ 192, and the symbol "≪" means "much smaller than".

In this embodiment, the principal component analysis is implemented by eigenvalue decomposition of the covariance matrix of the sample data; that is, Y_w in step ⑥ is obtained as follows: ⑥_1, let C denote the covariance matrix of Y, C = (1/t_2) × Y × Y^T, where C has dimension 192 × 192 and Y^T is the transpose of Y; ⑥_2, perform eigenvalue decomposition of C to obtain its eigenvalues and the corresponding eigenvectors, each eigenvector having dimension 192 × 1; ⑥_3, take the M largest eigenvalues and the corresponding M eigenvectors to realize the dimensionality reduction of Y (in this embodiment M = 8, i.e. only the first 8 principal components are kept for training, reducing the dimension from 192 to M = 8); ⑥_4, compute the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2) × E^T, where Ψ has dimension M × M, Ψ = diag(ψ_1, …, ψ_M), E has dimension 192 × M, E = [e_1, …, e_M], diag() denotes a diagonal matrix, ψ_1, …, ψ_M are the 1st, …, M-th largest eigenvalues and e_1, …, e_M are the corresponding 1st, …, M-th eigenvectors; ⑥_5, whiten Y according to W to obtain the dimension-reduced and whitened matrix Y_w, Y_w = W × Y.
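A compact sketch of the centering and PCA whitening of step ⑥ (sub-steps ⑥_1 through ⑥_5), under the assumption that the covariance is formed exactly as in ⑥_1:

```python
def pca_whiten(Y_R, M=8):
    # (6): centre each column of Y_R on the mean of its own elements
    Y = Y_R - Y_R.mean(axis=0, keepdims=True)
    # (6)_1: covariance matrix C = (1/t2) * Y * Y^T, 192 x 192
    C = (Y @ Y.T) / Y.shape[1]
    # (6)_2 / (6)_3: eigendecomposition; keep the M largest eigenpairs
    vals, vecs = np.linalg.eigh(C)                    # ascending order
    vals, vecs = vals[::-1][:M], vecs[:, ::-1][:, :M]
    # (6)_4: whitening matrix W = Psi^(-1/2) * E^T, M x 192
    W = np.diag(vals ** -0.5) @ vecs.T
    # (6)_5: dimension-reduced, whitened matrix Y_w = W * Y, M x t2
    return W @ Y, W
```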
⑦ Apply the existing orthogonal locality preserving projections (OLPP) algorithm to Y_w for online training to obtain the feature basis matrix of Y_w, denoted D, of dimension M × 192.

⑧ From Y_R and D, compute the manifold feature vector of each visually important reference image block; the manifold feature vector of the t''-th visually important reference image block is denoted u_{t''} and u_{t''} = D × y_{t''}^R, where u_{t''} has dimension M × 1 and y_{t''}^R is the t''-th column vector of Y_R. Likewise, from Y_D and D, compute the manifold feature vector of each visually important distorted image block; the manifold feature vector of the t''-th visually important distorted image block is denoted v_{t''} and v_{t''} = D × y_{t''}^D, where v_{t''} has dimension M × 1 and y_{t''}^D is the t''-th column vector of Y_D.
⑨ From the manifold feature vectors of all visually important reference image blocks and of all visually important distorted image blocks, compute the objective quality evaluation value of I_D, denoted Score, Score = (1/t_2) × Σ_{t''=1}^{t_2} [ Σ_{m=1}^{M} (2 × u_{t''}(m) × v_{t''}(m) + C) / Σ_{m=1}^{M} (u_{t''}(m)² + v_{t''}(m)² + C) ], where 1 ≤ m ≤ M, u_{t''}(m) denotes the value of the m-th element of u_{t''}, v_{t''}(m) denotes the value of the m-th element of v_{t''}, and C is a small constant for ensuring the stability of the result; in this embodiment, take C = 0.04.
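Once the feature basis matrix D has been trained, steps ⑧ and ⑨ reduce to two matrix products and an SSIM-style pooling. The sketch below assumes D is available from an OLPP implementation and uses the similarity form reconstructed in step ⑨ above:

```python
def quality_score(Y_R, Y_D, D, C=0.04):
    # Step (8): manifold feature vectors as columns, u = D * y^R, v = D * y^D
    U = D @ Y_R                                 # M x t2
    V = D @ Y_D                                 # M x t2
    # Step (9): per-block similarity, pooled by averaging over the t2 block pairs
    num = (2.0 * U * V + C).sum(axis=0)
    den = (U ** 2 + V ** 2 + C).sum(axis=0)
    return float(np.mean(num / den))
```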
To further illustrate the feasibility and effectiveness of the method of the present invention, experiments were conducted.
In this embodiment, three public authoritative image databases, namely the LIVE image database, the CSIQ image database and the TID2008 image database, are selected for the experiments. The indices of each image database, including the number of reference images, the number of distorted images and the number of distortion types, are detailed in Table 1. Each database provides a mean subjective score difference for every distorted image.
Table 1. Indices of the authoritative image databases

| Image database | Number of reference images | Number of distorted images | Number of distortion types |
|---|---|---|---|
| LIVE | 29 | 779 | 5 |
| CSIQ | 30 | 866 | 6 |
| TID2008 | 25 | 1700 | 17 |
Next, the correlation between the objective quality evaluation value obtained by the method of the invention and the mean subjective score difference of each distorted image is analyzed. Three objective parameters commonly used to assess image quality evaluation methods serve as evaluation indices: the Pearson linear correlation coefficient (PLCC), which reflects prediction accuracy; the Spearman rank-order correlation coefficient (SROCC), which reflects prediction monotonicity; and the root mean squared error (RMSE), which reflects prediction consistency. PLCC and SROCC take values in [0,1]; the closer the value is to 1, the better the objective evaluation method, and conversely the worse. The smaller the RMSE, the more accurate the prediction of the objective evaluation method and the better its performance, and conversely the worse.
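The sketch below computes these indices with SciPy; the five-parameter logistic is written in a commonly used form, since the exact form used in the original experiments is not specified here:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    # A commonly used five-parameter logistic mapping objective scores
    # onto the subjective scale before computing PLCC and RMSE
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def performance_indices(scores, dmos):
    p0 = [np.max(dmos), 1.0, np.mean(scores), 1.0, 1.0]      # rough initial guess
    beta, _ = curve_fit(logistic5, scores, dmos, p0=p0, maxfev=10000)
    pred = logistic5(scores, *beta)
    plcc = stats.pearsonr(pred, dmos)[0]                     # prediction accuracy
    srocc = stats.spearmanr(scores, dmos)[0]                 # prediction monotonicity
    rmse = float(np.sqrt(np.mean((pred - dmos) ** 2)))       # prediction consistency
    return plcc, srocc, rmse
```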
The objective quality evaluation value is obtained according to steps ① to ⑨ of the method of the invention and then fitted non-linearly with a five-parameter logistic function, finally yielding the performance index values between the objective evaluation results and the mean subjective score differences. For comparative analysis, six full-reference objective image quality evaluation methods with leading performance are evaluated on the three image databases listed in Table 1, among them the PSNR method, the structural-similarity-based method (SSIM) proposed by Z. Wang, the Visual Information Fidelity (VIF) method proposed by Sheikh et al., the wavelet-based visual signal-to-noise ratio (VSNR) method proposed by Chandler et al., and SSIM variants based on information-content weighting. The resulting performance index values are listed in Table 2. As can be seen from Table 2, the PLCC and SROCC values of the method of the invention are high and its RMSE values are low on all three databases, indicating that the objective evaluation results agree well with human subjective perception and demonstrating the effectiveness of the method of the invention.
Table 2. Performance comparison of the method of the present invention with existing objective image quality evaluation methods
FIG. 2a shows the scatter-fit plot of the method of the present invention on the LIVE image database, FIG. 2b the scatter-fit plot on the CSIQ image database, and FIG. 2c the scatter-fit plot on the TID2008 image database. As can be clearly seen from FIGS. 2a, 2b and 2c, the scatter points are evenly distributed in the vicinity of the fitted curve and exhibit good monotonicity and continuity.
Claims (4)
1. An objective color image quality evaluation method based on online manifold learning, characterized by comprising the following steps:
① Let I_R denote an undistorted reference image of width W and height H, and let I_D denote the distorted image to be evaluated that corresponds to I_R;

② Obtain the respective saliency maps of I_R and I_D with a visual saliency detection algorithm, denoted correspondingly M_R and M_D; then compute the maximum-fusion saliency map from M_R and M_D, denoted M_F, where the pixel value at coordinate position (x, y) in M_F is denoted M_F(x, y) and M_F(x, y) = max(M_R(x, y), M_D(x, y)), with 1 ≤ x ≤ W, 1 ≤ y ≤ H, max() the maximum function, M_R(x, y) the pixel value at coordinate position (x, y) in M_R, and M_D(x, y) the pixel value at coordinate position (x, y) in M_D;
③ Divide each of I_R, I_D, M_R, M_D and M_F with a sliding window of size 8×8 into N = ⌊W/8⌋ × ⌊H/8⌋ non-overlapping image blocks of identical size;

Then vectorize the color values of the R, G and B channels of all pixels in every image block of I_R and I_D. Denote the color vector formed from the j-th image block of I_R as x_j^R and the color vector formed from the j-th image block of I_D as x_j^D, where j has initial value 1 and x_j^R and x_j^D both have dimension 192×1; elements 1 to 64 of x_j^R take, in progressive scan order, the R-channel color values of the pixels in the j-th image block of I_R, elements 65 to 128 take the G-channel color values, and elements 129 to 192 take the B-channel color values; elements 1 to 64, 65 to 128 and 129 to 192 of x_j^D likewise take, in progressive scan order, the R-, G- and B-channel color values of the pixels in the j-th image block of I_D;

Also vectorize the pixel values of all pixels in every image block of M_R, M_D and M_F. Denote the pixel value vector formed from the j-th image block of M_R as m_j^R, that from the j-th image block of M_D as m_j^D, and that from the j-th image block of M_F as m_j^F, where m_j^R, m_j^D and m_j^F all have dimension 64×1 and their elements 1 to 64 take, in progressive scan order, the pixel values of the pixels in the j-th image block of M_R, M_D and M_F respectively;
④ Compute the saliency of every image block of M_F; the saliency of the j-th image block of M_F is denoted d_j and d_j = Σ_{i=1}^{64} m_j^F(i), where 1 ≤ i ≤ 64 and m_j^F(i) denotes the value of the i-th element of m_j^F;

Then sort the saliencies of all image blocks of M_F in descending order and determine the sequence numbers of the image blocks corresponding to the first t_1 saliencies, where t_1 = λ_1 × N, λ_1 denotes the image block selection scale factor and λ_1 ∈ (0,1];

Then find the t_1 image blocks of I_R whose sequence numbers match those determined and define them as reference image blocks; find the t_1 matching image blocks of I_D and define them as distorted image blocks; find the t_1 matching image blocks of M_R and define them as reference salient image blocks; find the t_1 matching image blocks of M_D and define them as distorted salient image blocks;
⑤ Measure with an absolute difference the saliency difference value between every reference image block of I_R and the corresponding distorted image block of I_D; the saliency difference value between the t'-th reference image block of I_R and the t'-th distorted image block of I_D is denoted e_{t'} and e_{t'} = Σ_{i=1}^{64} |m_{t'}^R(i) − m_{t'}^D(i)|, where t' has initial value 1, 1 ≤ t' ≤ t_1, the symbol "| |" is the absolute value operator, m_{t'}^R(i) denotes the value of the i-th element of the pixel value vector m_{t'}^R corresponding to the t'-th reference salient image block of M_R, and m_{t'}^D(i) denotes the value of the i-th element of the pixel value vector m_{t'}^D corresponding to the t'-th distorted salient image block of M_D;

Then sort the t_1 measured saliency difference values in descending order and determine the reference image blocks and distorted image blocks corresponding to the first t_2 saliency difference values. Define the t_2 determined reference image blocks as visually important reference image blocks, and take the matrix formed by the color vectors of all visually important reference image blocks as the visually important reference image block matrix, denoted Y_R; define the t_2 determined distorted image blocks as visually important distorted image blocks, and take the matrix formed by the color vectors of all visually important distorted image blocks as the visually important distorted image block matrix, denoted Y_D. Here t_2 = λ_2 × t_1, λ_2 denotes the selection scale factor of the reference and distorted image blocks, λ_2 ∈ (0,1], Y_R and Y_D both have dimension 192 × t_2, the t''-th column vector of Y_R is the color vector of the t''-th determined reference image block, the t''-th column vector of Y_D is the color vector of the t''-th determined distorted image block, and t'' has initial value 1, 1 ≤ t'' ≤ t_2;
⑥ center Y_R by subtracting from the value of each element in each column vector the mean of the values of all the elements in that column vector, and record the matrix obtained after centering as Y, wherein the dimension of Y is 192 × t_2;
then perform dimensionality reduction and whitening on Y using principal component analysis, and record the matrix obtained after the dimensionality reduction and whitening operation as Y_w, Y_w = W × Y, wherein the dimension of Y_w is M × t_2, W denotes the whitening matrix, the dimension of W is M × 192, 1 < M ≪ 192, and the symbol '≪' is the much-less-than symbol;
⑦ perform online training on Y_w using the orthogonal locality preserving projection algorithm to obtain the feature basis matrix of Y_w, denoted D, wherein the dimension of D is M × 192;
⑧ according to Y_R and D, calculate the manifold feature vector of each reference visually important image block, recording the manifold feature vector of the t''-th reference visually important image block as u_t'', u_t'' = D × y_t''^R, wherein u_t'' has dimension M × 1 and y_t''^R is the t''-th column vector of Y_R; and according to Y_D and D, calculate the manifold feature vector of each distorted visually important image block, recording the manifold feature vector of the t''-th distorted visually important image block as v_t'', v_t'' = D × y_t''^D, wherein v_t'' has dimension M × 1 and y_t''^D is the t''-th column vector of Y_D;
⑨ calculate the objective quality evaluation value of I_D from the manifold feature vectors of all the reference visually important image blocks and the manifold feature vectors of all the distorted visually important image blocks, denoting it Score, Score = (1 / (t_2 × M)) × Σ_{t''=1}^{t_2} Σ_{m=1}^{M} (2 × u_t''(m) × v_t''(m) + C) / ((u_t''(m))^2 + (v_t''(m))^2 + C), wherein 1 ≤ m ≤ M, u_t''(m) represents the value of the m-th element of u_t'', v_t''(m) represents the value of the m-th element of v_t'', and C is a very small constant for ensuring the stability of the result;
wherein Y_w in said step ⑥ is obtained by the following procedure: ⑥_1, let C denote the covariance matrix of Y, C = (Y × Y^T) / t_2, wherein the dimension of C is 192 × 192 and Y^T is the transpose of Y; ⑥_2, perform eigenvalue decomposition on C to obtain all of its eigenvalues and the corresponding eigenvectors, wherein the dimension of each eigenvector is 192 × 1; ⑥_3, take the M largest eigenvalues and the corresponding M eigenvectors; ⑥_4, calculate the whitening matrix W from the M largest eigenvalues and the corresponding M eigenvectors, W = Ψ^(−1/2) × E^T, wherein the dimension of Ψ is M × M, Ψ = diag(ψ_1, ..., ψ_M), Ψ^(−1/2) = diag(ψ_1^(−1/2), ..., ψ_M^(−1/2)), the dimension of E is 192 × M, E = [e_1, ..., e_M], diag() is the principal diagonal matrix representation, ψ_1, ..., ψ_M correspond to the 1st, ..., M-th largest eigenvalues, and e_1, ..., e_M correspond to the 1st, ..., M-th eigenvectors; ⑥_5, whiten Y according to W to obtain the dimension-reduced and whitened matrix Y_w, Y_w = W × Y.
2. The objective evaluation method for color image quality based on online manifold learning as claimed in claim 1, wherein in said step ④, λ_1 = 0.7.
3. The objective evaluation method for color image quality based on online manifold learning as claimed in claim 1, wherein in said step ⑤, λ_2 = 0.6.
4. The objective evaluation method for color image quality based on online manifold learning as claimed in claim 1, wherein in said step ⑨, C = 0.04.
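To make the block handling of steps ③ and ④ concrete, here is a minimal Python sketch that splits a saliency map into non-overlapping 8 × 8 blocks, raster-scans each block into a 64-dimensional vector, scores each block by the sum of its 64 saliency values as d_j, and keeps the indices of the top t_1 = λ_1 × N blocks. All names (block_vectors, select_salient_blocks, lam1) are illustrative rather than taken from the patent, and the plain-sum block saliency is an assumption consistent with the claim wording.

```python
# Illustrative sketch of steps (3)-(4); names and the plain-sum saliency are assumptions.
import numpy as np

def block_vectors(sal_map, block=8):
    """Split a map into non-overlapping block x block tiles and raster-scan
    each tile into one row vector (progressive scanning within the block)."""
    H, W = sal_map.shape
    H, W = H - H % block, W - W % block              # drop any partial border blocks
    tiles = (sal_map[:H, :W]
             .reshape(H // block, block, W // block, block)
             .swapaxes(1, 2)                         # (block rows, block cols, 8, 8)
             .reshape(-1, block * block))            # row-major flatten = raster scan
    return tiles                                     # shape (n_blocks, 64)

def select_salient_blocks(sal_fused, lam1=0.7, block=8):
    """Step 4: block saliency d_j = sum of the 64 values of block j of M_F;
    return the serial numbers of the top t1 = lam1 * N blocks."""
    xF = block_vectors(sal_fused, block)
    d = xF.sum(axis=1)                               # d_j for every block
    t1 = max(1, int(lam1 * d.size))
    return np.argsort(d)[::-1][:t1]                  # indices of the t1 most salient blocks
```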
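Step ⑤ then reduces to an absolute-difference screen over the t_1 salient blocks followed by assembling the 192 × t_2 matrices Y_R and Y_D. The sketch below assumes the (n_blocks, 192) color-vector arrays were built beforehand by raster-scanning the R, G and B planes of each 8 × 8 block; the helper name and argument layout are hypothetical.

```python
# Illustrative sketch of step (5); helper name and argument layout are assumptions.
import numpy as np

def visually_important_blocks(xR, xD, color_R, color_D, idx_t1, lam2=0.6):
    """xR, xD: (n_blocks, 64) saliency-block vectors of M_R and M_D;
    color_R, color_D: (n_blocks, 192) color vectors of I_R and I_D;
    idx_t1: serial numbers of the t1 most salient blocks from step (4)."""
    # e_t' = sum over the 64 elements of |x^R(i) - x^D(i)| for each salient block
    e = np.abs(xR[idx_t1] - xD[idx_t1]).sum(axis=1)
    t2 = max(1, int(lam2 * idx_t1.size))
    keep = idx_t1[np.argsort(e)[::-1][:t2]]          # top t2 saliency difference values
    return color_R[keep].T, color_D[keep].T          # Y_R and Y_D, each 192 x t2
```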
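Steps ⑥ and ⑥_1 to ⑥_5 map directly onto a few lines of linear algebra: column-wise centering of Y_R, the covariance C = (Y × Y^T) / t_2, an eigendecomposition, and the whitening matrix W = Ψ^(−1/2) × E^T built from the M largest eigenvalues and eigenvectors. A minimal sketch, assuming M = 8; the epsilon guard against non-positive eigenvalues is an implementation detail, not part of the claims.

```python
# Illustrative sketch of steps (6) and (6_1)-(6_5); M = 8 is an assumed setting.
import numpy as np

def pca_whiten(YR, M=8):
    """Return Y_w = W x Y (M x t2) and the whitening matrix W (M x 192)."""
    Y = YR - YR.mean(axis=0, keepdims=True)          # step 6: center each column vector
    t2 = Y.shape[1]
    C = (Y @ Y.T) / t2                               # step 6_1: 192 x 192 covariance of Y
    evals, evecs = np.linalg.eigh(C)                 # step 6_2: eigenvalues, ascending
    order = np.argsort(evals)[::-1][:M]              # step 6_3: M largest eigenvalues
    psi = np.maximum(evals[order], 1e-12)            # epsilon guard (implementation detail)
    W = np.diag(psi ** -0.5) @ evecs[:, order].T     # step 6_4: W = Psi^(-1/2) x E^T
    return W @ Y, W                                  # step 6_5: Y_w = W x Y
```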
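Steps ⑧ and ⑨ are two matrix products followed by an SSIM-style pooling. The sketch assumes the feature basis D (M × 192) has already been trained by the orthogonal locality preserving projection of step ⑦, which is not reproduced here; objective_score is a hypothetical name, and C = 0.04 follows claim 4.

```python
# Illustrative sketch of steps (8)-(9); D is assumed to come from step (7) OLPP training.
import numpy as np

def objective_score(D, YR, YD, C=0.04):
    """Manifold feature vectors u = D y^R, v = D y^D, pooled into Score."""
    U = D @ YR                                       # M x t2, column t'' is u_t''
    V = D @ YD                                       # M x t2, column t'' is v_t''
    sim = (2 * U * V + C) / (U ** 2 + V ** 2 + C)    # element-wise similarity term
    return sim.mean()                                # average over all t2 x M terms
```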
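Finally, a hypothetical end-to-end driver showing how the sketches above chain together, with random stand-ins for the saliency maps, the 192-dimensional color vectors, and the OLPP basis D; the max-fusion rule for M_F is likewise only an assumption to fix shapes, not the fusion prescribed by the claims.

```python
# Hypothetical driver; assumes the functions from the sketches above are in scope.
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
M_R, M_D = rng.random((H, W)), rng.random((H, W))    # stand-in saliency maps
M_F = np.maximum(M_R, M_D)                           # stand-in fusion rule (assumption)

idx = select_salient_blocks(M_F, lam1=0.7)           # step 4
xR, xD = block_vectors(M_R), block_vectors(M_D)      # step 3 vectors of M_R and M_D
n = xR.shape[0]
color_R = rng.random((n, 192))                       # placeholder 192-d color vectors
color_D = rng.random((n, 192))
YR, YD = visually_important_blocks(xR, xD, color_R, color_D, idx, lam2=0.6)  # step 5
Yw, W_mat = pca_whiten(YR, M=8)                      # step 6
D = rng.random((8, 192))                             # placeholder for the step 7 OLPP basis
print(objective_score(D, YR, YD, C=0.04))            # steps 8-9
```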
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610202181.5A CN105913413B (en) | 2016-03-31 | 2016-03-31 | A kind of color image quality method for objectively evaluating based on online manifold learning |
US15/197,604 US9846818B2 (en) | 2016-03-31 | 2016-06-29 | Objective assessment method for color image quality based on online manifold learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610202181.5A CN105913413B (en) | 2016-03-31 | 2016-03-31 | A kind of color image quality method for objectively evaluating based on online manifold learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105913413A CN105913413A (en) | 2016-08-31 |
CN105913413B (en) | 2019-02-22
Family
ID=56745319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610202181.5A Active CN105913413B (en) | 2016-03-31 | 2016-03-31 | A kind of color image quality method for objectively evaluating based on online manifold learning |
Country Status (2)
Country | Link |
---|---|
US (1) | US9846818B2 (en) |
CN (1) | CN105913413B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220962B (en) * | 2017-04-07 | 2020-04-21 | 北京工业大学 | Image detection method and device for tunnel cracks |
CN108921824A (en) * | 2018-06-11 | 2018-11-30 | 中国科学院国家空间科学中心 | A kind of color image quality evaluation method based on rarefaction feature extraction |
CN109003256B (en) * | 2018-06-13 | 2022-03-04 | 天津师范大学 | Multi-focus image fusion quality evaluation method based on joint sparse representation |
CN109003265B (en) * | 2018-07-09 | 2022-02-11 | 嘉兴学院 | No-reference image quality objective evaluation method based on Bayesian compressed sensing |
CN109636397A (en) * | 2018-11-13 | 2019-04-16 | 平安科技(深圳)有限公司 | Transit trip control method, device, computer equipment and storage medium |
CN109523542B (en) * | 2018-11-23 | 2022-12-30 | 嘉兴学院 | No-reference color image quality evaluation method based on color vector included angle LBP operator |
CN109754391B (en) * | 2018-12-18 | 2021-10-22 | 北京爱奇艺科技有限公司 | Image quality evaluation method and device and electronic equipment |
CN109978834A (en) * | 2019-03-05 | 2019-07-05 | 方玉明 | A kind of screen picture quality evaluating method based on color and textural characteristics |
CN110189243B (en) * | 2019-05-13 | 2023-03-24 | 杭州电子科技大学上虞科学与工程研究院有限公司 | Color image robust watermarking method based on tensor singular value decomposition |
CN110147792B (en) * | 2019-05-22 | 2021-05-28 | 齐鲁工业大学 | Medicine package character high-speed detection system and method based on memory optimization |
CN111127387B (en) * | 2019-07-11 | 2024-02-09 | 宁夏大学 | Quality evaluation method for reference-free image |
CN110399887B (en) * | 2019-07-19 | 2022-11-04 | 合肥工业大学 | Representative color extraction method based on visual saliency and histogram statistical technology |
US12079976B2 (en) * | 2020-02-05 | 2024-09-03 | Eigen Innovations Inc. | Methods and systems for reducing dimensionality in a reduction and prediction framework |
CN111354048B (en) * | 2020-02-24 | 2023-06-20 | 清华大学深圳国际研究生院 | Quality evaluation method and device for obtaining pictures by facing camera |
CN111881758B (en) * | 2020-06-29 | 2021-03-19 | 普瑞达建设有限公司 | Parking management method and system |
CN112233065B (en) * | 2020-09-15 | 2023-02-24 | 西北大学 | Total-blind image quality evaluation method based on multi-dimensional visual feature cooperation under saliency modulation |
US20240054607A1 (en) * | 2021-09-20 | 2024-02-15 | Meta Platforms, Inc. | Reducing the complexity of video quality metric calculations |
CN114170205B (en) * | 2021-12-14 | 2024-10-18 | 天津科技大学 | Contrast distortion image quality evaluation method fusing image entropy and structural similarity characteristics |
CN114418972B (en) * | 2022-01-06 | 2024-09-10 | 腾讯科技(深圳)有限公司 | Picture quality detection method, device, equipment and storage medium |
CN117456208B (en) * | 2023-11-07 | 2024-06-25 | 广东新裕信息科技有限公司 | Double-flow sketch quality evaluation method based on significance detection |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104036501A (en) * | 2014-06-03 | 2014-09-10 | 宁波大学 | Three-dimensional image quality objective evaluation method based on sparse representation |
CN104408716A (en) * | 2014-11-24 | 2015-03-11 | 宁波大学 | Three-dimensional image quality objective evaluation method based on visual fidelity |
CN105447884A (en) * | 2015-12-21 | 2016-03-30 | 宁波大学 | Objective image quality evaluation method based on manifold feature similarity |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008150840A1 (en) * | 2007-05-29 | 2008-12-11 | University Of Iowa Research Foundation | Methods and systems for determining optimal features for classifying patterns or objects in images |
US8848970B2 (en) * | 2011-04-26 | 2014-09-30 | Digimarc Corporation | Salient point-based arrangements |
US9454712B2 (en) * | 2014-10-08 | 2016-09-27 | Adobe Systems Incorporated | Saliency map computation |
- 2016-03-31: CN application CN201610202181.5A granted as CN105913413B (legal status: Active)
- 2016-06-29: US application US15/197,604 granted as US9846818B2 (legal status: Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
US20170286798A1 (en) | 2017-10-05 |
US9846818B2 (en) | 2017-12-19 |
CN105913413A (en) | 2016-08-31 |
Similar Documents
Publication | Title
---|---
CN105913413B (en) | A kind of color image quality method for objectively evaluating based on online manifold learning
Wu et al. | Blind image quality assessment based on multichannel feature fusion and label transfer
CN107784293B (en) | A kind of human behavior recognition method based on global features and sparse representation classification
CN107622229A (en) | A kind of video vehicle re-identification method and system based on fused features
CN110991389B (en) | Matching method for judging appearance of target pedestrian in non-overlapping camera view angles
Zhang et al. | Training quality-aware filters for no-reference image quality assessment
CN105740833A (en) | Human body behavior identification method based on depth sequence
CN108389189B (en) | Three-dimensional image quality evaluation method based on dictionary learning
Lu et al. | No reference quality assessment for multiply-distorted images based on an improved bag-of-words model
CN112836671B (en) | Data dimension reduction method based on maximized ratio and linear discriminant analysis
CN112308873B (en) | Edge detection method for multi-scale Gabor wavelet PCA fusion image
CN106096517A (en) | A kind of face identification method based on low-rank matrix and eigenface
CN111539331A (en) | Visual image reconstruction system based on brain-computer interface
Jin et al. | Perceptual Gradient Similarity Deviation for Full Reference Image Quality Assessment
CN111695455A (en) | Low-resolution face recognition method based on coupling discrimination manifold alignment
Ma et al. | Blind image quality assessment in multiple bandpass and redundancy domains
CN109919056B (en) | Face recognition method based on discriminant principal component analysis
CN109800771B (en) | Spontaneous micro-expression positioning method of local binary pattern of mixed space-time plane
CN104077608A (en) | Behavior recognition method based on sparsely coded slow characteristic functions
CN106022226A (en) | Pedestrian re-identification method based on multi-directional multi-channel bar-shaped structure
CN106650629A (en) | Kernel sparse representation-based fast remote sensing target detection and recognition method
CN113537240B (en) | Deformation zone intelligent extraction method and system based on radar sequence image
CN116311345A (en) | Transformer-based pedestrian shielding re-recognition method
Knoche et al. | Susceptibility to image resolution in face recognition and trainings strategies
CN110147824B (en) | Automatic image classification method and device
Legal Events
Code | Title
---|---
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant