CN108765366A - No-reference color image quality evaluation method based on autonomous learning - Google Patents

No-reference color image quality evaluation method based on autonomous learning

Info

Publication number
CN108765366A
Authority
CN
China
Prior art keywords
image
dictionary
image block
atoms
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810289172.3A
Other languages
Chinese (zh)
Other versions
CN108765366B (en)
Inventor
陈勇
吴明明
刘焕淋
朱凯欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanzhida Technology Transfer Center Co ltd
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201810289172.3A
Publication of CN108765366A
Application granted
Publication of CN108765366B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a no-reference color image quality evaluation method based on autonomous learning, belonging to the field of image processing. The method comprises: first, representing each color image in the training set as a quaternion matrix by means of quaternion theory; second, partitioning the training images into blocks and extracting the local features of the image blocks according to human visual perception; then, constructing an image dictionary with an autonomous learning strategy, taking the most representative image blocks as the dictionary atoms; finally, after applying the same preprocessing to the color image to be evaluated, computing the maximum similarity between the image to be evaluated and the image dictionary, obtaining the final quality evaluation score by support vector regression, and updating the image dictionary in real time. The invention fully considers the representativeness and autonomous learning capability of the dictionary atoms, can evaluate images with different distortion types simultaneously, and its evaluation results agree closely with subjective evaluation.

Description

No-reference color image quality evaluation method based on autonomous learning
Technical Field
The invention belongs to the technical field of image processing, in particular to the field of image quality evaluation, and relates to a no-reference image quality evaluation method based on autonomous learning.
Background
Image quality evaluation has always been a key technology in the field of image processing: it can be used to assess the effect of an image processing method, or to select a suitable processing method according to the image quality. It therefore plays a very important role in image processing.
Image quality evaluation algorithms can be divided into three categories according to whether a reference image is needed: full-reference, partial-reference (reduced-reference) and no-reference quality evaluation. Full-reference evaluation relies on complete reference-image information, partial-reference evaluation needs partial reference information, and no-reference evaluation needs no reference image at all. The no-reference approach has the greatest practical value and has therefore attracted the widest attention and research.
Existing no-reference evaluation methods have basically been studied on grayscale images, but besides the contrast distortion seen in grayscale images, color distortions such as hue shift and saturation reduction can also occur, so no-reference color image quality evaluation is more practical. Existing no-reference color image quality evaluation methods typically either convert the color image into a grayscale image, or measure the quality of each color component separately and combine the measurements with different weights. The former loses information during grayscale conversion and ignores the color information of the image; for the latter, it is difficult to determine the weights and find the best color model. The present invention therefore starts directly from the three primary colors of the color image, so as to achieve effective no-reference evaluation of color images.
Disclosure of Invention
In view of the above, the present invention provides a no-reference color image quality evaluation method based on autonomous learning, in which the dictionary atoms are highly representative and images with different distortion types can be evaluated effectively; moreover, the image dictionary learns and updates automatically as the number of test samples increases, giving the method wide applicability.
In order to achieve the purpose, the invention provides the following technical scheme:
a no-reference color image quality evaluation method based on autonomous learning: first, samples with strong representativeness are selected with an autonomous learning strategy to construct an image dictionary; then a quality evaluation score is obtained from the mapping between the constructed image dictionary and the image to be evaluated; finally, the image dictionary is updated in real time with the autonomous learning strategy;
the method specifically comprises the following steps:
s1: aiming at a color image, expressing pixels in three color channels of red (R), green (G) and blue (B) by a hypercomplex number through a quaternion theory to obtain a quaternion matrix of the color image;
s2: partition the image into blocks, extract the local features of the image blocks using the visual characteristics of the human eye, and eliminate the correlation between image blocks;
s3: using an autonomous learning strategy, autonomously select the image block with the minimum similarity among the image blocks and judge its difference from all atoms in the dictionary; if the difference is large enough, put the image block into the dictionary; repeat in turn until the dictionary dimension is reached, then output the dictionary;
s4: obtaining a final quality evaluation score through a mapping relation between the image to be evaluated and the dictionary and a Support Vector Regression (SVR) method;
s5: and updating the dictionary in real time by using an autonomous learning strategy according to the image to be evaluated and the obtained quality evaluation score.
Further, the step S1 specifically includes:
a group of color images with known subjective evaluation score DMOS values are used as training samples, pixels in three color channels of red (R), green (G) and blue (B) in each color image are represented by 3 imaginary parts of quaternions, and a real part is 0, so that each pixel of the color image is represented by a pure quaternion:
f(x,y)=fR(x,y)·i+fG(x,y)·j+fB(x,y)·k
wherein x and y denote the coordinates of a pixel in the image, fR(x,y), fG(x,y) and fB(x,y) are the pixel values at coordinates (x,y) in the corresponding color channels, and i, j, k are the three imaginary units of the quaternion.
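The pure-quaternion representation above can be sketched numerically by storing each pixel as a 4-vector (real, i, j, k) with a zero real part. This is an illustrative sketch, not the patent's implementation; the function name is an assumption:

```python
import numpy as np

def to_pure_quaternion(rgb):
    """Store an H x W x 3 RGB image as an H x W x 4 pure-quaternion matrix.

    Each pixel f(x, y) = fR*i + fG*j + fB*k becomes the 4-vector
    (real, i, j, k) = (0, R, G, B): the real part is 0 and the three
    imaginary parts carry the color channels.
    """
    rgb = np.asarray(rgb, dtype=float)
    q = np.zeros(rgb.shape[:2] + (4,))
    q[..., 1:] = rgb  # imaginary parts carry R, G, B
    return q

# tiny 1 x 2 image: one red pixel, one blue pixel
img = np.array([[[255, 0, 0], [0, 0, 255]]])
q = to_pure_quaternion(img)  # q[0, 0] is (0, 255, 0, 0)
```

Keeping the three channels together in one algebraic object is what lets later steps treat the color image as a whole rather than channel by channel.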
Further, the step S2 specifically includes the following steps:
s21: decompose each image into non-overlapping image blocks of size d×d. Let xc be the center pixel of an image block and x1, x2, …, xn the other pixels in the block; subtracting xc from each of the other pixels gives the pixel difference vector y' of the block, whose mathematical expression is:
y' = (x1 - xc, x2 - xc, …, xn - xc)
s22: since the response of human eyes to images has a logarithmic nonlinearity characteristic, the pixel difference value of an image block can be represented by a local feature vector through the nonlinear perception characteristic of human eyes, and the mathematical expression is as follows:
z=sign(y')·log(|y'|+1)
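Steps S21–S22 can be sketched as follows for a single block; the function name and the 3×3 example values are illustrative, not from the patent:

```python
import numpy as np

def block_feature(block):
    """Local feature of a d x d image block, per steps S21-S22.

    Subtract the center pixel x_c from every other pixel to get y',
    then compress with the signed log: z = sign(y') * log(|y'| + 1).
    """
    block = np.asarray(block, dtype=float)
    flat = block.ravel()
    center = flat.size // 2                       # index of x_c for odd d
    y = np.delete(flat, center) - flat[center]    # pixel differences y'
    return np.sign(y) * np.log(np.abs(y) + 1)     # log-nonlinear perception

z = block_feature([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])   # center x_c = 5, so 8 difference values
```

The signed log keeps the sign of each difference while compressing large magnitudes, mimicking the logarithmic nonlinearity of the human visual response.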
s23: eliminate similar image blocks using the difference between image blocks, where the difference is obtained from the included angle between blocks, that is
D(zi) = min_{j≠i} arccos( zi·zj / (||zi|| ||zj||) )
where D(zi) denotes the difference between image block zi in the training set U and the other image blocks, zi·zj denotes the inner product between image blocks, and || · || denotes the modulus of a vector; if D(zi) = 0, the two image blocks are identical, and the latter block can be deleted to eliminate the similarity between image blocks;
s24: whiten the image blocks with a principal component analysis (PCA) method to eliminate the redundant information of the image blocks, with the mathematical expression:
xPCAwhite,i = xi / √(λi + C), i = 1, 2, …, m
where xi is the original image feature, xPCAwhite,i is the whitened image feature, λi is the eigenvalue of the PCA transform, C is a small constant that prevents the denominator from being 0, and m is the total number of image blocks.
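A minimal sketch of PCA whitening as in S24. Since the source formula was not reproduced, the exact normalization (rotate into the eigenbasis, then divide each component by √(eigenvalue + C)) is an assumption based on standard practice:

```python
import numpy as np

def pca_whiten(X, C=1e-5):
    """PCA whitening of m feature vectors (the rows of X), a sketch of S24.

    Eigendecompose the covariance, rotate the data into the PCA basis,
    and rescale each component by 1/sqrt(eigenvalue + C); the small
    constant C keeps the denominator away from 0.
    """
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)
    lam, U = np.linalg.eigh(X.T @ X / len(X))  # eigenvalues / eigenvectors
    return (X @ U) / np.sqrt(lam + C)          # decorrelate and rescale

rng = np.random.default_rng(0)
Xw = pca_whiten(rng.normal(size=(500, 4)))
# covariance of Xw is approximately the 4 x 4 identity
```

After whitening, the feature components are decorrelated and have roughly unit variance, so no single component dominates the similarity computations that follow.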
Further, the step S3 specifically includes the following steps:
s31: initialization: let the training set be U, the number of atoms to be constructed be K, and the dictionary S = Φ, where Φ is the empty set;
s32: estimate the similarity between the image blocks in the training set U, namely calculate the Euclidean distance and the included angle between the image blocks:
d(zi, zj) = ||zi - zj||,  θ(zi, zj) = arccos( zi·zj / (||zi|| ||zj||) )
where R(zi) denotes the minimum similarity between image block zi in the training set U and the other image blocks, d(zi, zj) is the Euclidean distance between zi and zj, and θ(zi, zj) is the included angle between them.
S33: sequencing image blocks in a training set U from small to large in similarity, and sequentially placing the first K image blocks into a dictionary S to construct an initial dictionary;
s34: compute the minimum difference value d between the (K+1)-th image block zK+1 and all atoms in the dictionary S, where the difference is the included angle between zK+1 and a dictionary atom, with the mathematical expression:
d = min_j arccos( zK+1·sj / (||zK+1|| ||sj||) )
where sj denotes an atom in the dictionary S, zK+1·sj denotes the inner product between the image block and atom sj, and || · || denotes the modulus of a vector.
Similarly, compute the minimum difference D(si) between the atoms in the dictionary S by the same method, where si is the atom with the smallest difference from the other atoms.
S35: if d > D(si), update the dictionary, i.e. replace atom si in the dictionary S with image block zK+1, then set K = K + 1 and return to S34; otherwise go to S36;
S36: output the dictionary.
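A toy sketch of the dictionary construction in S31–S36, under the simplifying assumption that both the similarity in S32 and the difference in S34 are measured by the included angle alone (the source's exact combined distance-and-angle formula was not reproduced); the data and function names are hypothetical:

```python
import numpy as np

def angle(a, b):
    """Included angle between two feature vectors (the 'difference')."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def build_dictionary(blocks, K):
    """Sketch of S31-S36: seed with the K most distinctive blocks, then
    swap in any later block that is more distinctive than the most
    redundant atom pair already in the dictionary."""
    n = len(blocks)
    # each block's minimum angle to the others (small = has a near-twin)
    score = [min(angle(blocks[i], blocks[j]) for j in range(n) if j != i)
             for i in range(n)]
    order = np.argsort(score)[::-1]        # most distinctive first
    atoms = [blocks[i] for i in order[:K]]
    for i in order[K:]:
        d = min(angle(blocks[i], a) for a in atoms)      # block-to-atom
        t, idx = min((angle(atoms[p], atoms[q]), p)      # closest atom pair
                     for p in range(K) for q in range(K) if p != q)
        if d > t:                          # block is more distinctive than
            atoms[idx] = blocks[i]         # the most redundant atom
    return atoms

rng = np.random.default_rng(1)
D = build_dictionary(list(rng.normal(size=(10, 8))), K=4)
```

The loop realizes the S35 rule: a candidate enters only if its minimum angle to the atoms exceeds the smallest inter-atom angle, so the dictionary never grows past K while its atoms stay mutually distinct.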
Further, the step S4 specifically includes:
s41: preprocess the color image to be evaluated according to steps S1 and S2 to obtain the set of local feature vectors of its image blocks, Ẑ = {ẑ1, ẑ2, …, ẑt, …}, where ẑt denotes the local feature vector of an image block in the image to be evaluated;
s42: using the Euclidean-distance and included-angle formulas between image blocks, compute the maximum similarity between each image block ẑt and the atoms in the dictionary S, where R̂(ẑt) denotes the maximum similarity between image block ẑt of the image to be evaluated and all atoms in the dictionary S, d(ẑt, sj) is the Euclidean distance between ẑt and atom sj, and θ(ẑt, sj) is the included angle between them.
S43: assemble the maximum similarities between all image blocks of the image to be evaluated and the atoms in the dictionary S into a vector matrix, feed it into a support vector regression (SVR) model, and predict the image quality score in combination with the DMOS values of the corresponding atoms.
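The regression step of S43 can be sketched with scikit-learn's SVR. The feature layout (one maximum-similarity vector per training image) and all data here are hypothetical stand-ins, not the patent's data:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical data: one feature vector per training image holding the
# maximum similarity of its blocks to each of 6 dictionary atoms, plus
# a known DMOS score for that image.
rng = np.random.default_rng(2)
features = rng.uniform(size=(40, 6))               # 40 images x 6 atoms
dmos = features @ np.arange(1.0, 7.0) + rng.normal(scale=0.1, size=40)

model = SVR(kernel="rbf", C=10.0)                  # regression per S43
model.fit(features, dmos)                          # learn similarity->DMOS
score = model.predict(rng.uniform(size=(1, 6)))    # quality of a new image
```

Once fitted, scoring a new image only requires its similarity vector against the dictionary, which is what makes the method reference-free at evaluation time.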
further, the step S5 specifically includes:
s51: among the image blocks of an image to be evaluated whose image quality score is known, compute the block with the minimum similarity to the others as the most representative image block; compute the difference between this block and all atoms in the dictionary, and determine the minimum difference value d;
S52: calculating the difference between all atoms in the dictionary, and determining the atom with the minimum difference and the corresponding difference value t;
s53: judge whether the difference value d is greater than the difference value t; if so, replace the atom with the minimum difference in the dictionary with the image block and return to S51 to continue; otherwise, do not update the dictionary;
s54: and outputting the updated dictionary.
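The update loop in S51–S54 can be sketched with the same angle-based difference used above (an assumption, since the source formulas were images); the 2-D vectors are toy features:

```python
import numpy as np

def angle(a, b):
    """Included angle between two feature vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def update_dictionary(atoms, candidate):
    """Sketch of S51-S54: maybe swap in a newly scored block.

    d = candidate's minimum angle to the atoms;
    t = smallest inter-atom angle (its atom is the most redundant);
    if d > t, the candidate replaces that atom, else nothing changes.
    """
    d = min(angle(candidate, a) for a in atoms)
    t, idx = min((angle(atoms[p], atoms[q]), p)
                 for p in range(len(atoms))
                 for q in range(len(atoms)) if p != q)
    if d > t:
        atoms = list(atoms)                # copy; leave the input intact
        atoms[idx] = candidate             # replace most redundant atom
    return atoms

atoms = [np.array([1.0, 0.0]), np.array([0.99, 0.14]), np.array([0.0, 1.0])]
new = update_dictionary(atoms, np.array([-1.0, 1.0]))
```

Here the first two atoms point in nearly the same direction, so the distinctive candidate replaces one of them; this keeps the dictionary size fixed while its coverage of feature space improves with every evaluated image.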
The beneficial effects of the invention are as follows: by designing an image dictionary capable of autonomous learning, the atoms in the dictionary are highly representative and the dictionary can be updated in real time, so the method achieves good evaluation performance on color images under different conditions.
Drawings
In order to make the object, technical scheme and beneficial effect of the invention more clear, the invention provides the following drawings for explanation:
FIG. 1 is a flow chart of a color image quality evaluation method according to the present invention;
fig. 2 is a flow chart of the image dictionary self-learning according to the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In the method, the atoms in the dictionary are selected autonomously according to the similarity between the image blocks of the partitioned color images in the training set and the difference between the image blocks and the dictionary atoms. Each atom is highly representative, so the evaluation effect is pronounced even when the dictionary dimension is low; meanwhile, the image dictionary learns and updates autonomously as the number of training and test samples increases, giving wide applicability.
As shown in fig. 1, the method for evaluating quality of a color image without reference based on autonomous learning specifically includes the following steps:
1. image pre-processing
1.1. A group of color images with known subjective evaluation scores (DMOS values) is used as the training data set. The pixels in the three color channels red (R), green (G) and blue (B) of each color image are represented by the three imaginary parts of a quaternion, with the real part set to 0, so that the color image is represented by a pure-quaternion matrix. Compared with the traditional approach of processing each channel separately or converting to a grayscale image, the quaternion representation better preserves the integrity of the color image.
The commonly used LIVE, CSIQ, TID and other databases can be used as the training database; an image database matching the device under test can also be selected as required, with subjective evaluation organized accordingly, so that the data used is consistent with subjective perception.
1.2. Decompose each image into non-overlapping image blocks of size d×d. Because image distortion is well described by the correlation between pixels, the gray-level difference features between each pixel and its neighboring pixels are computed in turn.
Since the human visual response is logarithmically nonlinear, a log-transformed image better matches human visual perception; therefore the local feature vector of each image block is obtained based on this nonlinear perception characteristic, and the features of each image are then expressed as a set.
1.3. Eliminate similar image blocks using the difference between blocks: when the difference value is 0, the two image blocks are identical and the latter block can be deleted. Because image blocks often contain repeated structures, this operation ensures the independence of each training sample while reducing the amount of computation and improving timeliness.
1.4. To remove the correlation between image features and reduce the redundant information of the image blocks, the invention whitens the image blocks with principal component analysis (PCA):
xPCAwhite,i = xi / √(λi + C), i = 1, 2, …, m
where xi is the original image feature, xPCAwhite,i is the whitened image feature, λi is the eigenvalue of the PCA transform, C is a small constant that prevents the denominator from being 0, and m is the total number of image blocks.
2. Image dictionary construction
2.1. Initialization: let the training set be U, the number of atoms to be constructed be K, and the dictionary S = Φ, where Φ is the empty set;
2.2. Estimate the similarity between the image blocks in the training set U, namely calculate the Euclidean distance and the included angle between the image blocks:
d(zi, zj) = ||zi - zj||,  θ(zi, zj) = arccos( zi·zj / (||zi|| ||zj||) )
where R(zi) denotes the minimum similarity between image block zi in the training set U and the other image blocks, d(zi, zj) is the Euclidean distance between zi and zj, and θ(zi, zj) is the included angle between them.
2.3. Sort the image blocks in the training set U by similarity in ascending order, and place the first K blocks into the dictionary S to construct the initial dictionary;
2.4. Compute the minimum difference value d between the (K+1)-th image block zK+1 and all atoms in the dictionary S, where the difference is the included angle between zK+1 and a dictionary atom, with the mathematical expression:
d = min_j arccos( zK+1·sj / (||zK+1|| ||sj||) )
where sj denotes an atom in the dictionary S, zK+1·sj denotes the inner product between the image block and atom sj, and || · || denotes the modulus of a vector.
Similarly, compute the minimum difference D(si) between the atoms in the dictionary S by the same method, where si is the atom with the smallest difference from the other atoms.
2.5. If d > D(si), update the dictionary, i.e. replace atom si in the dictionary S with image block zK+1, then set K = K + 1 and return to 2.4; otherwise go to 2.6.
2.6. Output the dictionary.
3. Image quality evaluation
3.1. Preprocess the color image to be evaluated as in step 1 to obtain the set of local feature vectors of its image blocks, Ẑ = {ẑ1, ẑ2, …, ẑt, …}, where ẑt denotes the local feature vector of an image block in the image to be evaluated.
3.2. Using the Euclidean-distance and included-angle formulas between image blocks, compute the maximum similarity between each image block ẑt and the atoms in the dictionary S, where R̂(ẑt) denotes the maximum similarity between image block ẑt of the image to be evaluated and all atoms in the dictionary S, d(ẑt, sj) is the Euclidean distance between ẑt and atom sj, and θ(ẑt, sj) is the included angle between them.
3.3. Assemble the maximum similarities between all image blocks of the image to be evaluated and the atoms in the dictionary S into a vector matrix, feed it into a support vector regression (SVR) model, and predict the image quality score in combination with the DMOS values of the corresponding atoms.
4. image dictionary update, as shown in FIG. 2:
4.1. Among the image blocks of the image to be evaluated whose image quality score is known, compute the block with the minimum similarity to the others as the most representative image block; compute the difference between this block and all atoms in the dictionary, and determine the minimum difference value d;
4.2, calculating the difference among all atoms in the dictionary, and determining the atom with the minimum difference and the corresponding difference value t;
4.3, judging whether the difference value d is larger than the difference value t, if so, replacing the atom with the minimum difference value in the dictionary with the image block, returning to 4.1 to continue execution, otherwise, not updating the dictionary;
and 4.4, outputting the updated dictionary.
Finally, it is noted that the above-mentioned preferred embodiments illustrate rather than limit the invention, and that, although the invention has been described in detail with reference to the above-mentioned preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims (6)

1. A no-reference color image quality evaluation method based on autonomous learning, characterized in that: first, samples with strong representativeness are selected with an autonomous learning strategy to construct an image dictionary; then a quality evaluation score is obtained from the mapping between the constructed image dictionary and the image to be evaluated; finally, the image dictionary is updated in real time with the autonomous learning strategy;
the method specifically comprises the following steps:
s1: aiming at a color image, expressing pixels in three color channels of red R, green G and blue B by a hypercomplex number through a quaternion theory to obtain a quaternion matrix of the color image;
s2: partitioning the image into blocks, extracting the local features of the image blocks using the visual characteristics of the human eye, and eliminating the correlation between image blocks;
s3: using an autonomous learning strategy, autonomously selecting the image block with the minimum similarity among the image blocks and judging its difference from all atoms in the dictionary; if the difference is large enough, putting the image block into the dictionary; repeating in turn until the dictionary dimension is reached, then outputting the dictionary;
s4: obtaining a final quality evaluation score through a mapping relation between the image to be evaluated and the dictionary and a Support Vector Regression (SVR) method;
s5: and updating the dictionary in real time by using an autonomous learning strategy according to the image to be evaluated and the obtained quality evaluation score.
2. The method for evaluating the quality of a color image without reference based on autonomous learning according to claim 1, wherein the step S1 specifically comprises:
a group of color images with known subjective evaluation score DMOS values are used as training samples, pixels in three color channels of red R, green G and blue B in each color image are represented by 3 imaginary parts of quaternions, the real part is 0, and therefore each pixel of the color images is represented by a pure quaternion:
f(x,y)=fR(x,y)·i+fG(x,y)·j+fB(x,y)·k
wherein x and y denote the coordinates of a pixel in the image, fR(x,y), fG(x,y) and fB(x,y) are the pixel values at coordinates (x,y) in the corresponding color channels, and i, j, k are the three imaginary units of the quaternion.
3. The method for evaluating the quality of a color image without reference based on autonomous learning according to claim 1, wherein the step S2 specifically comprises the following steps:
s21: decomposing each image into non-overlapping image blocks of size d×d; letting xc be the center pixel of an image block and x1, x2, …, xn the other pixels in the block, subtracting xc from each of the other pixels gives the pixel difference vector y' of the block, wherein the mathematical expression is:
y' = (x1 - xc, x2 - xc, …, xn - xc)
s22: since the response of human eyes to images has a logarithmic nonlinearity characteristic, the pixel difference value of an image block can be represented by a local feature vector through the nonlinear perception characteristic of human eyes, and the mathematical expression is as follows:
z=sign(y')·log(|y'|+1)
wherein z represents a local feature of the image block;
s23: eliminating similar image blocks by using the difference between image blocks, wherein the difference is obtained from the included angle between blocks, i.e.
D(zi) = min_{j≠i} arccos( zi·zj / (||zi|| ||zj||) )
wherein D(zi) denotes the difference between image block zi in the training set U and the other image blocks, zi·zj denotes the inner product between image blocks, and || · || denotes the modulus of a vector; if D(zi) = 0, the two image blocks are identical, and the latter block can be deleted to eliminate the similarity between image blocks;
s24: whitening the image blocks by a principal component analysis (PCA) method to eliminate the redundant information of the image blocks, wherein the mathematical expression is:
xPCAwhite,i = xi / √(λi + C), i = 1, 2, …, m
wherein xi is the original image feature, xPCAwhite,i is the whitened image feature, λi is the eigenvalue of the PCA transform, C is a small constant that prevents the denominator from being 0, and m is the total number of image blocks.
4. The method for evaluating the quality of a color image without reference based on autonomous learning according to claim 1, wherein the step S3 specifically comprises the following steps:
s31: initialization: setting a training set as U, setting the atom number to be constructed as K, setting the dictionary S as phi, and setting phi as an empty set;
s32: estimating the similarity between the image blocks in the training set U, namely calculating the Euclidean distance and the included angle between the image blocks:
d(zi, zj) = ||zi - zj||,  θ(zi, zj) = arccos( zi·zj / (||zi|| ||zj||) )
wherein R(zi) denotes the minimum similarity between image block zi in the training set U and the other image blocks, d(zi, zj) is the Euclidean distance between zi and zj, and θ(zi, zj) is the included angle between them;
s33: sorting the image blocks in the training set U by similarity in ascending order, and placing the first K blocks into the dictionary S to construct the initial dictionary;
s34: computing the minimum difference value d between the (K+1)-th image block zK+1 and all atoms in the dictionary S, wherein the difference is the included angle between zK+1 and a dictionary atom, with the mathematical expression:
d = min_j arccos( zK+1·sj / (||zK+1|| ||sj||) )
wherein sj denotes an atom in the dictionary S, zK+1·sj denotes the inner product between the image block and atom sj, and || · || denotes the modulus of a vector;
similarly, computing the minimum difference D(si) between the atoms in the dictionary S by the same method, wherein si is the atom with the smallest difference from the other atoms;
s35: if d > D(si), updating the dictionary, namely replacing atom si in the dictionary S with image block zK+1, then setting K = K + 1 and returning to S34; otherwise going to S36;
s36: and outputting the dictionary.
5. The method according to claim 3, wherein the step S4 specifically comprises:
s41: preprocessing the color image to be evaluated according to steps S1 and S2 to obtain the set of local feature vectors of the image blocks of the image to be evaluated, Ẑ = {ẑ1, ẑ2, …, ẑt, …}, wherein ẑt denotes the local feature vector of an image block in the image to be evaluated;
s42: calculating the maximum similarity between each image block ẑt and the atoms in the dictionary S by using the Euclidean-distance and included-angle formulas between image blocks, wherein R̂(ẑt) denotes the maximum similarity between image block ẑt of the image to be evaluated and all atoms in the dictionary S, d(ẑt, sj) is the Euclidean distance between ẑt and atom sj, and θ(ẑt, sj) is the included angle between them;
s43: assembling the maximum similarities between all image blocks of the image to be evaluated and the atoms in the dictionary S into a vector matrix, putting it into a support vector regression (SVR) method, and predicting in combination with the DMOS values of the corresponding atoms to obtain the image quality score.
6. the method for evaluating the quality of a color image without reference based on autonomous learning according to claim 1, wherein the step S5 specifically comprises:
s51: calculating the similarity between the image blocks of an image to be evaluated whose image quality score is known, taking the block with the minimum similarity as the most representative image block, calculating the difference between this block and all atoms in the dictionary, and determining the minimum difference value d;
S52: calculating the difference between all atoms in the dictionary, and determining the atom with the minimum difference and the corresponding difference value t;
s53: judging whether the difference value d is greater than the difference value t; if so, replacing the atom with the minimum difference in the dictionary with the image block and returning to S51 to continue; otherwise, not updating the dictionary;
s54: and outputting the updated dictionary.
CN201810289172.3A 2018-03-30 2018-03-30 No-reference color image quality evaluation method based on autonomous learning Active CN108765366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810289172.3A CN108765366B (en) 2018-03-30 2018-03-30 No-reference color image quality evaluation method based on autonomous learning


Publications (2)

Publication Number Publication Date
CN108765366A true CN108765366A (en) 2018-11-06
CN108765366B CN108765366B (en) 2021-11-02

Family

ID=63980832


Country Status (1)

Country Link
CN (1) CN108765366B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650833A (en) * 2009-09-10 2010-02-17 重庆医科大学 Color image quality evaluation method
CN102945552A (en) * 2012-10-22 2013-02-27 西安电子科技大学 No-reference image quality evaluation method based on sparse representation in natural scene statistics
CN104361574A (en) * 2014-10-14 2015-02-18 南京信息工程大学 No-reference color image quality assessment method on basis of sparse representation
CN105139428A (en) * 2015-08-11 2015-12-09 鲁东大学 Quaternion based speeded up robust features (SURF) description method and system for color image


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DONG WU ET AL.: "Image Sharpness Assessment by Sparse Representation", IEEE Transactions on Multimedia *
LEIDA LI ET AL.: "No-reference Image Quality Assessment With A Gradient-induced Dictionary", KSII Transactions on Internet and Information Systems *
WEI ZHANG: "Research on Objective Image Quality Assessment Algorithms and Their Applications", Wanfang Database *

Also Published As

Publication number Publication date
CN108765366B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN110046673B (en) No-reference tone mapping image quality evaluation method based on multi-feature fusion
CN108428227B (en) No-reference image quality evaluation method based on full convolution neural network
CN109389591B (en) Color descriptor-based color image quality evaluation method
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN104572538B (en) A kind of Chinese medicine tongue image color correction method based on K PLS regression models
CN109218716B (en) No-reference tone mapping image quality evaluation method based on color statistics and information entropy
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
Chen et al. Reference-free quality assessment of sonar images via contour degradation measurement
CN107105223B (en) A kind of tone mapping method for objectively evaluating image quality based on global characteristics
CN104361574B (en) No-reference color image quality assessment method on basis of sparse representation
CN108289222A (en) A kind of non-reference picture quality appraisement method mapping dictionary learning based on structural similarity
AU2020103251A4 Method and system for identifying metallic minerals under microscope based on BP neural network
CN106127234B (en) Non-reference picture quality appraisement method based on characteristics dictionary
CN110717892B (en) Tone mapping image quality evaluation method
CN107743225A (en) It is a kind of that the method for carrying out non-reference picture prediction of quality is characterized using multilayer depth
CN108427970A (en) Picture mask method and device
CN106651829A (en) Non-reference image objective quality evaluation method based on energy and texture analysis
CN109816646A (en) A kind of non-reference picture quality appraisement method based on degeneration decision logic
CN110555843A (en) High-precision non-reference fusion remote sensing image quality analysis method and system
CN113313682A (en) No-reference video quality evaluation method based on space-time multi-scale analysis
CN112270370B (en) Vehicle apparent damage assessment method
CN106023238A (en) Color data calibration method for camera module
CN108765366B (en) No-reference color image quality evaluation method based on autonomous learning
Gaata et al. No-reference quality metric for watermarked images based on combining of objective metrics using neural network
Ayunts et al. No-Reference Quality Metrics for Image Decolorization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240506

Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Patentee after: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd.

Country or region after: China

Address before: 400065 Chongqing Nan'an District huangjuezhen pass Chongwen Road No. 2

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China