CN111598826A - Image objective quality evaluation method and system based on joint multi-scale image characteristics - Google Patents


Info

Publication number
CN111598826A
Authority
CN
China
Prior art keywords
picture
similarity
edge
feature
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910122634.7A
Other languages
Chinese (zh)
Other versions
CN111598826B (en)
Inventor
柳宁
杨琦
徐异凌
孙军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201910122634.7A
Publication of CN111598826A
Application granted
Publication of CN111598826B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464 - Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for evaluating the objective quality of a picture based on joint multi-scale picture features, the method comprising the following steps: a picture processing step, in which the original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and the edge structure features are extracted from y_1^(n); and an edge saliency feature extraction step, in which, using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n). The invention evaluates the quality of desktop pictures with higher accuracy; verification on existing databases shows that its overall performance surpasses the prior art, and it performs outstandingly on desktop-picture distortion types such as Gaussian blur, motion blur, and JPEG2000 compression distortion.

Description

Image objective quality evaluation method and system based on joint multi-scale image characteristics
Technical Field
The invention relates to the field of image processing, in particular to a method and a system for evaluating objective quality of a picture based on joint multi-scale picture characteristics.
Background
With the wide application of intelligent terminals such as smartphones, tablets, and notebook computers, desktop content pictures have replaced natural pictures as the most common and most heavily consumed pictures in daily life. Desktop content pictures are computer-generated pictures that combine graphics, text, and natural pictures, and are widely used in applications such as desktop games, desktop collaboration, and remote education. For these applications, picture quality is particularly important. However, because desktop content pictures and natural pictures have different characteristics, traditional quality evaluation methods designed for natural pictures cannot reflect the distortion of desktop content pictures well: compared with the rich colors and smooth edges of natural pictures, desktop content pictures are often limited in color, sharp-edged, and full of repeated patterns; and while the distortion of natural pictures is generally caused by the limited capability of physical sensors, the distortion of desktop content pictures is generally introduced by the computer itself. Therefore, an accurate and efficient objective quality evaluation method for desktop content pictures is urgently needed.
Patent document CN108335289A (application number: 201810049789.8) discloses an objective quality evaluation method for full-reference fused images, which includes: selecting a picture database as the input for model training and grouping the pictures by distortion type, where the pictures with different degrees of distortion under each type yield a file name and a label for each group; feature extraction, in which several full-reference metric algorithms are selected to score the pictures of each distortion type, each group of pictures yields one feature vector per full-reference metric, and the resulting feature vectors form a feature matrix; data preprocessing, in which the distorted-image labels and the feature-vector scores for each distortion type are normalized to (1, 100) and (0, 1) respectively and transposed to meet the training requirements of the SVM; and feature training, which yields the quality evaluation model.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a method and a system for evaluating the objective quality of a picture based on joint multi-scale picture characteristics.
The invention provides a picture objective quality evaluation method based on joint multi-scale picture characteristics, which comprises the following steps:
picture processing: the original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and the edge structure features are extracted from y_1^(n);
edge saliency feature extraction: using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
feature similarity calculation: the edge structure similarity and the edge saliency similarity are calculated from the extracted edge structure features and edge saliency features;
feature combination: the final local quality map is calculated from the edge structure similarity and the edge saliency similarity;
feature pooling: the final objective evaluation score is calculated from the final local quality map.
Preferably, the picture processing step:
The original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, and the groups are denoted y_0^(n) and y_1^(n) respectively.
Preferably, the edge saliency feature extraction step:
Using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n); the calculation formulas are as follows:
[Three equation images in the original define the luminance mask C_LM^(n), the contrast mask C_LCM^(n), and the y function.]
wherein:
C_LM^(n) denotes the luminance mask result, an image feature computed from the Gaussian-pyramid and Laplacian-pyramid picture groups;
y(·) denotes the y function;
y_1^(n) denotes the picture group after Laplacian pyramid processing;
y_0^(n) denotes the picture group after Gaussian pyramid processing;
n denotes the pyramid level;
the picture features of layer y_1^(1) are the edge structure features;
y(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance contrast threshold;
|·| denotes the absolute value operation;
a_1 is a constant that ensures numerical stability;
C_LCM^(n) denotes the contrast mask result, an image feature computed from the C_LM^(n)-processed picture group;
C_LCM^(1), i.e. the feature at n = 1, is the edge saliency feature;
a_2 is a constant that ensures numerical stability;
y(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of two.
Preferably, the feature similarity calculation step:
According to the extracted edge structure features and edge saliency features, the edge structure similarity is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the edge structure similarity S_1(x, y).]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d indicate that a feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2;
T_1 is a non-zero constant that ensures numerical stability.
According to the extracted edge structure features and edge saliency features, the edge saliency similarity is calculated; the calculation formulas are as follows:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[Two equation images in the original define MS_1(x, y) and MS_2(x, y).]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y);
MS_1(x, y) denotes the edge structure similarity computed by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity computed by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) for n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) for n = 1;
T_2 is a non-zero constant that ensures numerical stability;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) for n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 2;
T_3 is a non-zero constant that ensures numerical stability.
Preferably, the feature combining step:
According to the obtained edge structure similarity and edge saliency similarity, the local quality similarity is calculated; the calculation formula is as follows:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y).
Preferably, the feature pooling step:
According to the obtained local quality map similarity, the final objective evaluation score is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the final objective evaluation score s.]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weighting parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 2;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2.
The invention also provides a desktop content picture objective quality evaluation system based on joint multi-scale picture features, which comprises the following modules:
the picture processing module: processes the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and extracts the edge structure features from y_1^(n);
the edge saliency feature extraction module: using a luminance mask and a contrast mask, extracts the edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
the feature similarity calculation module: calculates the edge structure similarity and the edge saliency similarity from the extracted edge structure features and edge saliency features;
the feature combination module: calculates the final local quality map from the edge structure similarity and the edge saliency similarity;
the feature pooling module: calculates the final objective evaluation score from the final local quality map.
Preferably, the picture processing module:
Processes the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid; the groups are denoted y_0^(n) and y_1^(n) respectively.
The edge saliency feature extraction module:
Using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n); the calculation formulas are as follows:
[Three equation images in the original define the luminance mask C_LM^(n), the contrast mask C_LCM^(n), and the y function.]
wherein:
C_LM^(n) denotes the luminance mask result, an image feature computed from the Gaussian-pyramid and Laplacian-pyramid picture groups;
y(·) denotes the y function;
y_1^(n) denotes the picture group after Laplacian pyramid processing;
y_0^(n) denotes the picture group after Gaussian pyramid processing;
n denotes the pyramid level;
the picture features of layer y_1^(1) are the edge structure features;
y(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance contrast threshold;
|·| denotes the absolute value operation;
a_1 is a constant that ensures numerical stability;
C_LCM^(n) denotes the contrast mask result, an image feature computed from the C_LM^(n)-processed picture group;
C_LCM^(1), i.e. the feature at n = 1, is the edge saliency feature;
a_2 is a constant that ensures numerical stability;
y(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of two;
the feature similarity calculation module:
According to the extracted edge structure features and edge saliency features, the edge structure similarity is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the edge structure similarity S_1(x, y).]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d indicate that a feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2;
T_1 is a non-zero constant that ensures numerical stability.
According to the extracted edge structure features and edge saliency features, the edge saliency similarity is calculated; the calculation formulas are as follows:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[Two equation images in the original define MS_1(x, y) and MS_2(x, y).]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y);
MS_1(x, y) denotes the edge structure similarity computed by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity computed by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) for n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) for n = 1;
T_2 is a non-zero constant that ensures numerical stability;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) for n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 2;
T_3 is a non-zero constant that ensures numerical stability.
Preferably, the feature combination module:
According to the obtained edge structure similarity and edge saliency similarity, the local quality similarity is calculated; the calculation formula is as follows:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y).
The feature pooling module:
According to the obtained local quality map similarity, the final objective evaluation score is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the final objective evaluation score s.]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weighting parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 2;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2.
According to the present invention, there is provided a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of any of the above-mentioned methods for objective quality assessment of pictures based on joint multi-scale picture features.
Compared with the prior art, the invention has the following beneficial effects:
the invention has higher accuracy for evaluating the quality of the desktop picture, and the comprehensive performance is more superior to the prior art through the verification of the prior database, and the invention has the following advantages for the distortion type of the desktop picture: the Gaussian blur, the motion blur and the JPEG2000 compression distortion have outstanding excellent performance.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic view of a process flow provided by the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications will be apparent to those skilled in the art without departing from the spirit of the invention; all such changes and modifications fall within the scope of the present invention.
The invention provides a picture objective quality evaluation method based on joint multi-scale picture characteristics, which comprises the following steps:
picture processing: the original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and the edge structure features are extracted from y_1^(n);
edge saliency feature extraction: using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
feature similarity calculation: the edge structure similarity and the edge saliency similarity are calculated from the extracted edge structure features and edge saliency features;
feature combination: the final local quality map is calculated from the edge structure similarity and the edge saliency similarity;
feature pooling: the final objective evaluation score is calculated from the final local quality map.
Specifically, the picture processing step:
The original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, and the groups are denoted y_0^(n) and y_1^(n) respectively.
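As a purely illustrative sketch of this step (numpy only; the smoothing kernel, the number of levels, and the exact downsampling scheme are assumptions, not the patent's specification), the two pyramids can be built as follows:

```python
import numpy as np

def _blur(a):
    """Separable 1-2-1 binomial smoothing, an assumed stand-in for the
    patent's Gaussian kernel g(x, y; sigma)."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    a = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, a)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, a)

def build_pyramids(img, levels=3):
    """Build the Gaussian pyramid y0 and the Laplacian pyramid y1.

    y0[n-1] corresponds to y_0^(n); y1[0] (i.e. y_1^(1)) is the finest
    band-pass layer, which carries the edge structure features.
    """
    y0 = [np.asarray(img, dtype=np.float64)]
    for _ in range(levels - 1):
        y0.append(_blur(y0[-1])[::2, ::2])        # smooth, then downsample by 2
    y1 = []
    for n in range(levels - 1):
        up = np.kron(y0[n + 1], np.ones((2, 2)))  # crude x2 upsampling
        up = up[: y0[n].shape[0], : y0[n].shape[1]]
        y1.append(y0[n] - _blur(up))              # band-pass detail layer
    y1.append(y0[-1])                             # coarsest residual
    return y0, y1
```

Each Gaussian level halves the resolution, and each Laplacian level stores the detail lost between two adjacent Gaussian levels.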
Specifically, the edge saliency feature extraction step:
Using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n); the calculation formulas are as follows:
[Three equation images in the original define the luminance mask C_LM^(n), the contrast mask C_LCM^(n), and the y function.]
wherein:
C_LM^(n) denotes the luminance mask result, an image feature computed from the Gaussian-pyramid and Laplacian-pyramid picture groups;
y(·) denotes the y function;
y_1^(n) denotes the picture group after Laplacian pyramid processing;
y_0^(n) denotes the picture group after Gaussian pyramid processing;
n denotes the pyramid level;
the picture features of layer y_1^(1) are the edge structure features;
y(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance contrast threshold;
|·| denotes the absolute value operation;
a_1 is a constant that ensures numerical stability;
C_LCM^(n) denotes the contrast mask result, an image feature computed from the C_LM^(n)-processed picture group;
C_LCM^(1), i.e. the feature at n = 1, is the edge saliency feature;
a_2 is a constant that ensures numerical stability;
y(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of two.
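The mask equations themselves appear only as images in the original. Purely as a hypothetical sketch of the kind of computation the symbol list describes (the shape of the y function, the constants γ_1 and a_1, and the normalisation are all assumptions introduced here for illustration), a luminance mask could look like:

```python
import numpy as np

def y_threshold(v, gamma):
    """Assumed shape of the patent's y function: responses whose magnitude
    falls below the detectability threshold gamma are suppressed."""
    return np.where(np.abs(v) > gamma, v, 0.0)

def luminance_mask(y1_n, y0_next, gamma1=0.05, a1=1e-3):
    """Hypothetical C_LM^(n): Laplacian detail |y_1^(n)| normalised by the
    thresholded background luminance y(y_0^(n+1)) upsampled x2.
    The constants gamma1 and a1 are illustrative placeholders."""
    up = np.kron(y0_next, np.ones((2, 2)))            # x2 upsampling
    up = up[: y1_n.shape[0], : y1_n.shape[1]]
    return np.abs(y1_n) / (np.abs(y_threshold(up, gamma1)) + a1)
```

The stability constant a_1 plays the same role here as in the patent's symbol list: it keeps the ratio finite where the masked background is zero.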
Specifically, the feature similarity calculation step:
According to the extracted edge structure features and edge saliency features, the edge structure similarity is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the edge structure similarity S_1(x, y).]
wherein:
S_1(x, y) denotes the edge structure similarity at point (x, y);
subscripts r and d indicate that a feature is taken from the reference picture or the distorted picture, respectively;
y_1r^(1)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 1;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2;
T_1 is a non-zero constant that ensures numerical stability.
According to the extracted edge structure features and edge saliency features, the edge saliency similarity is calculated; the calculation formulas are as follows:
S_2(x, y) = MS_1(x, y)^α · MS_2(x, y)
[Two equation images in the original define MS_1(x, y) and MS_2(x, y).]
wherein:
S_2(x, y) denotes the edge saliency similarity at point (x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y);
MS_1(x, y) denotes the edge structure similarity computed by the similarity calculation function;
MS_2(x, y) denotes the edge saliency similarity computed by the similarity calculation function;
w_1(x, y) denotes a weighting factor;
C_LMr^(1)(x, y) denotes the LM mask feature of the reference picture at point (x, y) for n = 1;
C_LMd^(1)(x, y) denotes the LM mask feature of the distorted picture at point (x, y) for n = 1;
T_2 is a non-zero constant that ensures numerical stability;
Σ_(x,y) w_1(x, y) denotes the sum of w_1(x, y) over all points of the picture;
C_LCMr^(1)(x, y) denotes the LCM mask feature of the reference picture at point (x, y) for n = 1;
C_LCMd^(1)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 1;
C_LCMd^(2)(x, y) denotes the LCM mask feature of the distorted picture at point (x, y) for n = 2;
T_3 is a non-zero constant that ensures numerical stability.
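The MS_1 and MS_2 formulas are shown only as equation images in the original, but the symbol list (paired reference/distorted feature maps plus a non-zero stability constant) matches the conventional full-reference similarity measure. As a hedged stand-in, not the patent's exact definition, that measure reads:

```python
import numpy as np

def similarity_map(f_ref, f_dis, T):
    """Conventional pointwise similarity between a reference feature map and
    a distorted feature map, with non-zero stability constant T. This is a
    stand-in: the patent's exact MS_1/MS_2 definitions are equation images."""
    f_ref = np.asarray(f_ref, dtype=np.float64)
    f_dis = np.asarray(f_dis, dtype=np.float64)
    return (2.0 * f_ref * f_dis + T) / (f_ref ** 2 + f_dis ** 2 + T)
```

Identical feature maps yield a similarity of exactly 1 at every point, and the constant T keeps the ratio stable where both features are near zero.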
Specifically, the feature combining step:
According to the obtained edge structure similarity and edge saliency similarity, the local quality similarity is calculated; the calculation formula is as follows:
S_QM(x, y) = (S_1(x, y))^ξ · (S_2(x, y))^ψ = (S_1(x, y))^ξ · MS_1(x, y)^μ · MS_2(x, y)^ψ
μ = ψ · α
wherein:
S_QM(x, y) denotes the local quality similarity at point (x, y);
ξ denotes the weight of S_1(x, y) in the local quality S_QM(x, y);
μ denotes the weight of MS_1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight of MS_2(x, y) in the local quality S_QM(x, y);
α denotes the weight of MS_1(x, y) within S_2(x, y).
Specifically, the feature pooling step:
According to the obtained local quality map similarity, the final objective evaluation score is calculated; the calculation formula is as follows:
[Equation image in the original: definition of the final objective evaluation score s.]
w_2(x, y) = max(y_1r^(2)(x, y), y_1d^(2)(x, y))
wherein:
s denotes the final objective evaluation score;
w_2(x, y) denotes a weighting parameter;
y_1r^(2)(x, y) denotes the edge structure feature of the reference picture at point (x, y) for n = 2;
y_1d^(2)(x, y) denotes the edge structure feature of the distorted picture at point (x, y) for n = 2.
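The combination and pooling steps can be sketched together in code. The exponents ξ and ψ are tuned weights the text does not fix numerically, and the weighted-mean pooling is an assumed reading inferred from the role of w_2 as a weighting parameter, since the score equation itself is an image in the original:

```python
import numpy as np

def local_quality(S1, S2, xi=1.0, psi=1.0):
    """S_QM(x, y) = S_1(x, y)^xi * S_2(x, y)^psi; the default exponents are
    placeholders for the patent's tuned weights."""
    return (S1 ** xi) * (S2 ** psi)

def pooled_score(S_QM, w2):
    """w_2-weighted average of the local quality map; an assumed reading of
    the final-score equation, which is shown only as an image."""
    S_QM = np.asarray(S_QM, dtype=np.float64)
    w2 = np.asarray(w2, dtype=np.float64)
    return float(np.sum(S_QM * w2) / np.sum(w2))
```

With a constant local quality map, the pooled score equals that constant regardless of the weights, which is a quick sanity check on any pooling implementation.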
The desktop content picture objective quality evaluation system based on joint multi-scale picture features can be realized through the steps of the picture objective quality evaluation method based on joint multi-scale picture features. Those skilled in the art can understand the picture objective quality evaluation method as a preferred example of the desktop content picture objective quality evaluation system.
The invention also provides a desktop content picture objective quality evaluation system based on joint multi-scale picture features, which comprises the following modules:
the picture processing module: processes the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, denoted y_0^(n) and y_1^(n) respectively, and extracts the edge structure features from y_1^(n);
the edge saliency feature extraction module: using a luminance mask and a contrast mask, extracts the edge saliency features from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n);
the feature similarity calculation module: calculates the edge structure similarity and the edge saliency similarity from the extracted edge structure features and edge saliency features;
the feature combination module: calculates the final local quality map from the edge structure similarity and the edge saliency similarity;
the feature pooling module: calculates the final objective evaluation score from the final local quality map.
Specifically, the picture processing module:
Processes the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid; the groups are denoted y_0^(n) and y_1^(n) respectively.
The edge saliency feature extraction module:
Using a luminance mask and a contrast mask, the edge saliency features are extracted from the Gaussian-pyramid picture group y_0^(n) and the Laplacian-pyramid picture group y_1^(n); the calculation formulas are as follows:
[Three equation images in the original define the luminance mask C_LM^(n), the contrast mask C_LCM^(n), and the y function.]
wherein:
C_LM^(n) denotes the luminance mask result, an image feature computed from the Gaussian-pyramid and Laplacian-pyramid picture groups;
y(·) denotes the y function;
y_1^(n) denotes the picture group after Laplacian pyramid processing;
y_0^(n) denotes the picture group after Gaussian pyramid processing;
n denotes the pyramid level;
the picture features of layer y_1^(1) are the edge structure features;
y(y_0^(n+1)) denotes the result of substituting y_0^(n+1) into the y function;
γ_1 denotes the luminance contrast threshold;
|·| denotes the absolute value operation;
a_1 is a constant that ensures numerical stability;
C_LCM^(n) denotes the contrast mask result, an image feature computed from the C_LM^(n)-processed picture group;
C_LCM^(1), i.e. the feature at n = 1, is the edge saliency feature;
a_2 is a constant that ensures numerical stability;
y(C_LM^(n+1)) denotes the result of substituting C_LM^(n+1) into the y function;
γ_2 denotes the contrast detectability threshold;
g(x, y; σ) denotes a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling by a factor of two;
the feature similarity calculation module:
the edge structure similarity is calculated from the obtained edge structure feature and the edge saliency feature, with the following formula:
Figure BDA0001972478370000141
wherein,
S1(x, y) represents the edge structure similarity at point (x, y);
the subscripts r and d indicate that the feature is taken from the reference picture or the distorted picture, respectively;
y1r^(1)(x, y) represents the edge structure feature (n = 1) at point (x, y) of the reference picture;
y1d^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the distorted picture;
T1 represents a non-zero constant that ensures the stability of the equation;
and the edge saliency similarity is calculated from the obtained edge saliency features, with the following formulas:
S2(x, y) = MS1(x, y)^α · MS2(x, y)
Figure BDA0001972478370000142
Figure BDA0001972478370000143
wherein,
S2(x, y) represents the edge saliency similarity at point (x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y);
MS1(x, y) represents the edge structure similarity obtained from the similarity calculation function;
MS2(x, y) represents the edge saliency similarity obtained from the similarity calculation function;
w1(x, y) represents a weighting factor;
C_LMr^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the reference picture;
C_LMd^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the distorted picture;
T2 represents a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w1(x, y) denotes the accumulation of w1(x, y) over all points on the picture;
C_LCMr^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the reference picture;
C_LCMd^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the distorted picture;
C_LCMd^(2)(x, y) represents the LCM mask feature (n = 2) at point (x, y) of the distorted picture;
T3 represents a non-zero constant that ensures the stability of the equation.
Specifically, the feature combination module:
the local quality similarity is calculated from the obtained edge structure similarity and edge saliency similarity, with the following formulas:
S_QM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ
= (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ
μ = ψ · α
wherein,
S_QM(x, y) represents the local quality similarity at point (x, y);
ξ denotes the weight taken by S1(x, y) in the local quality S_QM(x, y);
μ denotes the weight taken by MS1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight taken by MS2(x, y) in the local quality S_QM(x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y).
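The relation μ = ψ·α above is just the exponent law (S2)^ψ = (MS1^α · MS2)^ψ = MS1^(ψ·α) · MS2^ψ, so the two written forms of the local quality map are identical. A minimal numeric sketch (the similarity values below are illustrative only; the exponent values follow the preferred example, assuming the order ξ, ψ, α):

```python
import numpy as np

xi, psi, alpha = 1.8, 0.02, 0.9   # illustrative exponents from the preferred example
mu = psi * alpha                  # μ = ψ·α

s1, ms1, ms2 = 0.95, 0.90, 0.85   # sample per-pixel similarity values (hypothetical)
s2 = ms1**alpha * ms2             # S2 = MS1^α · MS2

# The two forms of the local quality map agree exactly:
form_a = s1**xi * s2**psi
form_b = s1**xi * ms1**mu * ms2**psi
assert np.isclose(form_a, form_b)
```

This identity is why the patent can tune μ implicitly through ψ and α rather than as a fourth free parameter.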
The feature pooling module:
the final objective evaluation score is calculated from the obtained local quality map, with the following formula:
Figure BDA0001972478370000151
w2(x, y) = max(y1r^(2)(x, y), y1d^(2)(x, y))
wherein,
s represents the final objective evaluation score;
w2(x, y) represents a weight parameter;
y1r^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the reference picture.
According to the present invention, there is provided a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of any of the above-mentioned methods for objective quality assessment of pictures based on joint multi-scale picture features.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
the invention provides a joint multi-scale objective quality evaluation method for desktop content pictures, which evaluates the degree of distortion by extracting the edge structure features and edge saliency features of a picture. Specifically, a picture feature extraction scheme is designed around the characteristics of the human visual system; the extracted picture features comprise two kinds:
1. the characteristics of the side structure are as follows,
2. edge saliency characteristics.
In order to achieve the purpose, the invention adopts the following technical scheme:
1. The original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, recorded as y0^(n) and y1^(n) respectively; with n = 1, the edge structure feature is extracted from y1^(n);
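Step 1 can be sketched in NumPy as follows. The exact pyramid kernels used by the patent are not reproduced in this copy, so a standard separable Gaussian kernel and the usual "blur + subsample / zero-insert + blur" pyramid construction are assumed:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur with reflect padding (pure NumPy)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # blur rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def build_pyramids(img, levels=3):
    """Return Gaussian levels y0^(n) and Laplacian (band-pass) levels y1^(n)."""
    y0 = [img.astype(np.float64)]
    for _ in range(levels - 1):
        y0.append(gaussian_blur(y0[-1])[::2, ::2])   # blur, then downsample by 2
    y1 = []
    for n in range(levels - 1):
        up = np.zeros_like(y0[n])
        up[::2, ::2] = y0[n + 1]                      # upsample by zero insertion
        up = 4 * gaussian_blur(up)                    # interpolate the inserted zeros
        y1.append(y0[n] - up)                         # band-pass (edge) detail
    y1.append(y0[-1])                                 # coarsest residual
    return y0, y1
```

With n = 1 (Python index 0), `y1[0]` is the finest band-pass layer, corresponding to the edge structure feature described above.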
2. edge saliency features are extracted from the two pyramids by using a Luminance Mask (LM) and a contrast mask (LCM), and the specific calculation method is as follows:
Figure BDA0001972478370000161
Figure BDA0001972478370000162
wherein
Figure BDA0001972478370000163
G(x, y; σ) represents a Gaussian kernel, * represents convolution, and ↑2 represents upsampling.
The luminance mask result C_LM^(n) detects luminance changes recognizable by the human eye; according to the Buchsbaum curve, γ1 is set to 1. The contrast mask result C_LCM^(n) detects contrast changes recognizable by the human eye; according to the contrast-detectable threshold, γ2 is set to 0.62.
With n = 1, the edge saliency feature is extracted from C_LCM^(n).
3. Feature similarity calculation
Edge structure similarity:
Figure BDA0001972478370000164
Edge saliency similarity:
S2(x, y) = MS1(x, y)^α · MS2(x, y),
Figure BDA0001972478370000165
Figure BDA0001972478370000166
where the subscripts r and d denote that the feature is taken from the reference picture or the distorted picture, w1(x, y) = y1r^(2)(x, y), and T1, T2, T3 are set to 0.07, 1 × 10⁻⁵ and 0.01, respectively.
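The similarity formulas above survive only as figure images in this copy, but the variable lists (a reference feature, a distorted feature, and a stabilizing constant T) match the standard SSIM-style ratio used throughout this family of metrics. A sketch under that assumption:

```python
import numpy as np

def similarity_map(f_ref, f_dist, T):
    """SSIM-style similarity: equals 1.0 where the two feature maps
    match exactly, and decreases toward 0 as they diverge. T is a
    small non-zero constant that keeps the ratio numerically stable."""
    return (2 * f_ref * f_dist + T) / (f_ref**2 + f_dist**2 + T)
```

For example, `similarity_map(y1_ref, y1_dist, 0.07)` would give a per-pixel edge structure similarity map; the exact feature maps and constants plugged in here are the ones listed in the text, but the ratio form itself is an assumption.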
4. Feature combination
The final local mass map is:
S_QM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ
= (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ,
wherein μ = ψ · α, and
Figure BDA0001972478370000176
are set to 1.8, 0.02 and 0.9, respectively.
5. Feature pooling
Final objective evaluation score:
Figure BDA0001972478370000171
wherein w2(x, y) = max(y1r^(2)(x, y), y1d^(2)(x, y)).
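The pooling formula itself appears only as an image in this copy; assuming the usual weighted average over the local quality map (the standard choice in FSIM-style metrics), step 5 can be sketched as:

```python
import numpy as np

def pool_score(s_qm, y1r_2, y1d_2):
    """Weighted-average pooling of the local quality map (assumed form).
    The weight w2 = max of the scale-2 edge features emphasizes regions
    that have strong edges in either the reference or distorted picture."""
    w2 = np.maximum(y1r_2, y1d_2)
    return (s_qm * w2).sum() / w2.sum()
```

The score is then a single scalar in the same range as the local similarities, with edge-dense regions contributing more than flat regions.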
To illustrate the effectiveness of the above model, a test was performed on SIQAD, an authoritative database of desktop content pictures. The SIQAD database includes 20 reference pictures and 980 distorted pictures: each reference picture corresponds to 7 distortion types, each at 7 distortion levels. The 7 distortion types are Gaussian Noise (GN), Gaussian Blur (GB), Motion Blur (MB), Contrast Change (CC), JPEG compression (JPEG), JPEG2000 compression (J2K) and layer efficient coding (LSC).
Three indicators, proposed by the VQEG expert group specifically to measure the consistency between subjective scores and objective evaluation scores, are used to judge the merit of the model: the Pearson Linear Correlation Coefficient (PLCC), the Root Mean Square Error (RMSE) and the Spearman Rank-Order Correlation Coefficient (SROCC). They are calculated as follows:

PLCC = Σ_i (m_i − m̄)(q_i − q̄) / √( Σ_i (m_i − m̄)² · Σ_i (q_i − q̄)² )

RMSE = √( (1/N) · Σ_i (m_i − q_i)² )

SROCC = 1 − 6 Σ_i d_i² / ( N(N² − 1) )

where m_i and q_i denote the subjective and objective scores of the i-th picture, m̄ and q̄ their respective means, N the number of pictures, and d_i the difference between the rank of the i-th picture in the subjective ordering and in the objective ordering. PLCC and SROCC lie between 0 and 1 in magnitude; the closer to 1, the better the consistency between subjective and objective scores. The smaller the RMSE, the smaller the difference between subjective and objective scores and the better the model performance.
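The three consistency indicators can be computed directly in NumPy. Note that the SROCC form below uses the rank-difference formula from the text, which assumes no tied scores:

```python
import numpy as np

def plcc(m, q):
    """Pearson linear correlation coefficient between score vectors."""
    return np.corrcoef(m, q)[0, 1]

def rmse(m, q):
    """Root mean square error between subjective and objective scores."""
    return np.sqrt(np.mean((np.asarray(m) - np.asarray(q))**2))

def srocc(m, q):
    """Spearman rank-order correlation via the d_i formula (no ties assumed)."""
    def ranks(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        return r
    d = ranks(m) - ranks(q)
    n = len(m)
    return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
```

In practice, scores with ties would instead be handled by ranking with tie-averaging (as `scipy.stats.spearmanr` does), but the closed-form version above matches the formula quoted in the text.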
Table 1 shows the test results on the SIQAD database. PSNR, SSIM, MSSIM, IWSSIM, VIF, IFC, FSIM and SCQ are quality evaluation methods designed for natural pictures; SIQM, SQI, ESIM, MDOGS and GFM are objective quality evaluation methods designed in recent years for desktop content pictures. Comparing the data of each method shows that:
for overall performance, the proposed method ranks first in the PLCC and RMSE indexes and second in SROCC;
for single distortion types, the proposed method obtains 9 first places and 1 third place, clearly outperforming the other methods, with particularly notable superiority on the distortion types GB, MB and J2K.
Table 1. SIQAD database test results:
Figure BDA0001972478370000181
those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A picture objective quality evaluation method based on joint multi-scale picture features is characterized by comprising the following steps:
picture processing: the original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, recorded as y0^(n) and y1^(n) respectively; the edge structure feature is extracted from y1^(n);
edge saliency feature extraction: using the luminance mask and the contrast mask, the edge saliency feature is extracted from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n);
calculating the feature similarity: calculating to obtain edge structure similarity and edge significance similarity according to the obtained edge structure feature and edge significance feature;
a characteristic combination step: calculating to obtain a final local quality map according to the obtained edge structure similarity and the edge significance similarity;
a characteristic pooling step: and calculating to obtain a final objective evaluation score according to the obtained final local quality map.
2. The picture objective quality evaluation method based on the joint multi-scale picture features according to claim 1, wherein the picture processing step comprises:
processing the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, recorded as y0^(n) and y1^(n) respectively.
3. The image objective quality evaluation method based on the joint multi-scale image features according to claim 2, wherein the edge salient feature extraction step comprises:
using the luminance mask and the contrast mask, the edge saliency feature is extracted from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n); the calculation formulas are as follows:
Figure FDA0001972478360000011
Figure FDA0001972478360000012
Figure FDA0001972478360000013
wherein,
C_LM^(n) represents the luminance mask calculation result, an image feature based on the picture groups processed by the Gaussian pyramid and the Laplacian pyramid;
Figure FDA0001972478360000014
represents the y function;
y1^(n) represents the picture group after Laplacian pyramid processing;
y0^(n) represents the picture group after Gaussian pyramid processing;
n represents the layer number; the picture feature shown by layer y1^(1) is the edge structure feature;
Figure FDA0001972478360000021
denotes the result of substituting y0^(n+1) into the y function;
γ1 represents the luminance contrast threshold;
|·| represents the absolute-value operation;
a1 represents a constant that ensures the stability of the equation;
C_LCM^(n) represents the contrast mask calculation result, an image feature of the picture group processed on the basis of C_LM^(n);
C_LCM^(1), i.e. the picture feature when n = 1, is the edge saliency feature;
a2 represents a constant that ensures the stability of the equation;
Figure FDA0001972478360000022
denotes the result of substituting C_LM^(n+1) into the y function;
γ2 represents the contrast-detectable threshold;
G(x, y; σ) represents a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling.
4. The picture objective quality evaluation method based on the joint multi-scale picture features according to claim 3, wherein the feature similarity calculation step comprises:
the edge structure similarity is calculated from the obtained edge structure feature and the edge saliency feature, with the following formula:
Figure FDA0001972478360000023
wherein,
S1(x, y) represents the edge structure similarity at point (x, y);
the subscripts r and d indicate that the feature is taken from the reference picture or the distorted picture, respectively;
y1r^(1)(x, y) represents the edge structure feature (n = 1) at point (x, y) of the reference picture;
y1d^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the distorted picture;
T1 represents a non-zero constant that ensures the stability of the equation;
and the edge saliency similarity is calculated from the obtained edge saliency features, with the following formulas:
S2(x, y) = MS1(x, y)^α · MS2(x, y)
Figure FDA0001972478360000031
Figure FDA0001972478360000032
wherein,
S2(x, y) represents the edge saliency similarity at point (x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y);
MS1(x, y) represents the edge structure similarity obtained from the similarity calculation function;
MS2(x, y) represents the edge saliency similarity obtained from the similarity calculation function;
w1(x, y) represents a weighting factor;
C_LMr^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the reference picture;
C_LMd^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the distorted picture;
T2 represents a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w1(x, y) denotes the accumulation of w1(x, y) over all points on the picture;
C_LCMr^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the reference picture;
C_LCMd^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the distorted picture;
C_LCMd^(2)(x, y) represents the LCM mask feature (n = 2) at point (x, y) of the distorted picture;
T3 represents a non-zero constant that ensures the stability of the equation.
5. The picture objective quality evaluation method based on the joint multi-scale picture features according to claim 4, wherein the feature combination step comprises:
the local quality similarity is calculated from the obtained edge structure similarity and edge saliency similarity, with the following formulas:
S_QM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ
= (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ
μ = ψ · α
wherein,
S_QM(x, y) represents the local quality similarity at point (x, y);
ξ denotes the weight taken by S1(x, y) in the local quality S_QM(x, y);
μ denotes the weight taken by MS1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight taken by MS2(x, y) in the local quality S_QM(x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y).
6. The picture objective quality evaluation method based on the joint multi-scale picture features according to claim 5, wherein the feature pooling step comprises:
the final objective evaluation score is calculated from the obtained local quality map, with the following formula:
Figure FDA0001972478360000041
w2(x, y) = max(y1r^(2)(x, y), y1d^(2)(x, y))
wherein,
s represents the final objective evaluation score;
w2(x, y) represents a weight parameter;
y1r^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the reference picture.
7. A desktop content picture objective quality evaluation system based on joint multi-scale picture features, characterized by comprising the following modules:
a picture processing module: the original picture is processed into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, recorded as y0^(n) and y1^(n) respectively; the edge structure feature is extracted from y1^(n);
an edge saliency feature extraction module: using the luminance mask and the contrast mask, the edge saliency feature is extracted from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n);
the feature similarity calculation module: calculating to obtain edge structure similarity and edge significance similarity according to the obtained edge structure feature and edge significance feature;
a characteristic combination module: calculating to obtain a final local quality map according to the obtained edge structure similarity and the edge significance similarity;
a characteristic pooling module: and calculating to obtain a final objective evaluation score according to the obtained final local quality map.
8. The system of claim 7, wherein the image processing module is configured to:
processing the original picture into picture groups of different scales using the Gaussian pyramid and the Laplacian pyramid, recorded as y0^(n) and y1^(n) respectively.
The edge salient feature extraction module:
using the luminance mask and the contrast mask, the edge saliency feature is extracted from the Gaussian-pyramid picture group y0^(n) and the Laplacian-pyramid picture group y1^(n); the calculation formulas are as follows:
Figure FDA0001972478360000051
Figure FDA0001972478360000052
Figure FDA0001972478360000053
wherein,
C_LM^(n) represents the luminance mask calculation result, an image feature based on the picture groups processed by the Gaussian pyramid and the Laplacian pyramid;
Figure FDA0001972478360000054
represents the y function;
y1^(n) represents the picture group after Laplacian pyramid processing;
y0^(n) represents the picture group after Gaussian pyramid processing;
n represents the layer number; the picture feature shown by layer y1^(1) is the edge structure feature;
Figure FDA0001972478360000055
denotes the result of substituting y0^(n+1) into the y function;
γ1 represents the luminance contrast threshold;
|·| represents the absolute-value operation;
a1 represents a constant that ensures the stability of the equation;
C_LCM^(n) represents the contrast mask calculation result, an image feature of the picture group processed on the basis of C_LM^(n);
C_LCM^(1), i.e. the picture feature when n = 1, is the edge saliency feature;
a2 represents a constant that ensures the stability of the equation;
Figure FDA0001972478360000056
denotes the result of substituting C_LM^(n+1) into the y function;
γ2 represents the contrast-detectable threshold;
G(x, y; σ) represents a Gaussian kernel function;
* denotes convolution;
↑2 denotes upsampling;
the feature similarity calculation module:
the edge structure similarity is calculated from the obtained edge structure feature and the edge saliency feature, with the following formula:
Figure FDA0001972478360000061
wherein,
S1(x, y) represents the edge structure similarity at point (x, y);
the subscripts r and d indicate that the feature is taken from the reference picture or the distorted picture, respectively;
y1r^(1)(x, y) represents the edge structure feature (n = 1) at point (x, y) of the reference picture;
y1d^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the distorted picture;
T1 represents a non-zero constant that ensures the stability of the equation;
and the edge saliency similarity is calculated from the obtained edge saliency features, with the following formulas:
S2(x, y) = MS1(x, y)^α · MS2(x, y)
Figure FDA0001972478360000062
Figure FDA0001972478360000063
wherein,
S2(x, y) represents the edge saliency similarity at point (x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y);
MS1(x, y) represents the edge structure similarity obtained from the similarity calculation function;
MS2(x, y) represents the edge saliency similarity obtained from the similarity calculation function;
w1(x, y) represents a weighting factor;
C_LMr^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the reference picture;
C_LMd^(1)(x, y) represents the LM mask feature (n = 1) at point (x, y) of the distorted picture;
T2 represents a non-zero constant that ensures the stability of the equation;
Σ_(x,y) w1(x, y) denotes the accumulation of w1(x, y) over all points on the picture;
C_LCMr^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the reference picture;
C_LCMd^(1)(x, y) represents the LCM mask feature (n = 1) at point (x, y) of the distorted picture;
C_LCMd^(2)(x, y) represents the LCM mask feature (n = 2) at point (x, y) of the distorted picture;
T3 represents a non-zero constant that ensures the stability of the equation.
9. The system according to claim 8, wherein the feature combination module:
the local quality similarity is calculated from the obtained edge structure similarity and edge saliency similarity, with the following formulas:
S_QM(x, y) = (S1(x, y))^ξ · (S2(x, y))^ψ
= (S1(x, y))^ξ · MS1(x, y)^μ · MS2(x, y)^ψ
μ = ψ · α
wherein,
S_QM(x, y) represents the local quality similarity at point (x, y);
ξ denotes the weight taken by S1(x, y) in the local quality S_QM(x, y);
μ denotes the weight taken by MS1(x, y) in the local quality S_QM(x, y);
ψ denotes the weight taken by MS2(x, y) in the local quality S_QM(x, y);
α denotes the weight taken by MS1(x, y) in S2(x, y).
The feature pooling module:
the final objective evaluation score is calculated from the obtained local quality map, with the following formula:
Figure FDA0001972478360000071
w2(x, y) = max(y1r^(2)(x, y), y1d^(2)(x, y))
wherein,
s represents the final objective evaluation score;
w2(x, y) represents a weight parameter;
y1r^(2)(x, y) represents the edge structure feature (n = 2) at point (x, y) of the reference picture.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method for objective quality assessment of pictures based on joint multi-scale picture features according to any one of claims 1 to 6.
CN201910122634.7A 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics Active CN111598826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910122634.7A CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910122634.7A CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Publications (2)

Publication Number Publication Date
CN111598826A true CN111598826A (en) 2020-08-28
CN111598826B CN111598826B (en) 2023-05-02

Family

ID=72183150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910122634.7A Active CN111598826B (en) 2019-02-19 2019-02-19 Picture objective quality evaluation method and system based on combined multi-scale picture characteristics

Country Status (1)

Country Link
CN (1) CN111598826B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288699A (en) * 2020-10-23 2021-01-29 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN117952968A (en) * 2024-03-26 2024-04-30 沐曦集成电路(上海)有限公司 Image quality evaluation method based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646269A (en) * 2012-02-29 2012-08-22 中山大学 Image processing method and device based on Laplace pyramid
CN104143188A (en) * 2014-07-04 2014-11-12 上海交通大学 Image quality evaluation method based on multi-scale edge expression
CN107578404A (en) * 2017-08-22 2018-01-12 浙江大学 The complete of view-based access control model notable feature extraction refers to objective evaluation method for quality of stereo images
US20180276492A1 (en) * 2017-03-27 2018-09-27 Samsung Electronics Co., Ltd Image processing method and apparatus for object detection
CN109255358A (en) * 2018-08-06 2019-01-22 浙江大学 A kind of 3D rendering quality evaluating method of view-based access control model conspicuousness and depth map

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646269A (en) * 2012-02-29 2012-08-22 中山大学 Image processing method and device based on Laplace pyramid
CN104143188A (en) * 2014-07-04 2014-11-12 上海交通大学 Image quality evaluation method based on multi-scale edge expression
US20180276492A1 (en) * 2017-03-27 2018-09-27 Samsung Electronics Co., Ltd Image processing method and apparatus for object detection
CN107578404A (en) * 2017-08-22 2018-01-12 浙江大学 The complete of view-based access control model notable feature extraction refers to objective evaluation method for quality of stereo images
CN109255358A (en) * 2018-08-06 2019-01-22 浙江大学 A kind of 3D rendering quality evaluating method of view-based access control model conspicuousness and depth map

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288699A (en) * 2020-10-23 2021-01-29 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN112288699B (en) * 2020-10-23 2024-02-09 北京百度网讯科技有限公司 Method, device, equipment and medium for evaluating relative definition of image
CN117952968A (en) * 2024-03-26 2024-04-30 沐曦集成电路(上海)有限公司 Image quality evaluation method based on deep learning

Also Published As

Publication number Publication date
CN111598826B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
TWI665639B (en) Method and device for detecting tampering of images
CN109840560B (en) Image classification method based on clustering in capsule network
Xue et al. Learning without human scores for blind image quality assessment
CN108108751B (en) Scene recognition method based on convolution multi-feature and deep random forest
CN102413328B (en) Double compression detection method and system of joint photographic experts group (JPEG) image
Nguyen et al. Food image classification using local appearance and global structural information
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN111325687B (en) Smooth filtering evidence obtaining method based on end-to-end deep network
CN111639558A (en) Finger vein identity verification method based on ArcFace Loss and improved residual error network
CN111415323B (en) Image detection method and device and neural network training method and device
CN111104941B (en) Image direction correction method and device and electronic equipment
CN111598826B (en) Picture objective quality evaluation method and system based on combined multi-scale picture characteristics
CN108171689A (en) A kind of identification method, device and the storage medium of the reproduction of indicator screen image
CN105740790A (en) Multicore dictionary learning-based color face recognition method
CN106203448A (en) A kind of scene classification method based on Nonlinear Scale Space Theory
CN112990213B (en) Digital multimeter character recognition system and method based on deep learning
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
CN117746015A (en) Small target detection model training method, small target detection method and related equipment
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network
CN111274985A (en) Video text recognition network model, video text recognition device and electronic equipment
CN106846366B (en) TLD video moving object tracking method using GPU hardware
CN115423809A (en) Image quality evaluation method and device, readable storage medium and electronic equipment
Viacheslav et al. Low-level features for inpainting quality assessment
CN108921226B (en) Zero sample image classification method based on low-rank representation and manifold regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant