CN111354048A - Quality evaluation method and device for camera-oriented acquired pictures - Google Patents



Publication number
CN111354048A
Authority
CN
China
Prior art keywords
image
picture
quality
extracting
variance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010112925.0A
Other languages
Chinese (zh)
Other versions
CN111354048B (en)
Inventor
刘玉涛
李秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202010112925.0A priority Critical patent/CN111354048B/en
Publication of CN111354048A publication Critical patent/CN111354048A/en
Application granted granted Critical
Publication of CN111354048B publication Critical patent/CN111354048B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a quality evaluation method and device for pictures captured by a camera. The method comprises the following steps: S1, extracting luminance and chrominance features, and estimating the luminance and chrominance of the input picture; S2, extracting picture noise features, and estimating the noise level of the input picture; S3, extracting structural features, and estimating the blur level of the input picture; S4, extracting contrast features, and estimating the contrast of the input picture; S5, extracting statistical features of the picture's normalization coefficients, and estimating the naturalness of the picture; S6, extracting visual perception features of the picture, and estimating changes in the picture's visual perception; S7, learning, on a training set and using a support vector regression method, a model that maps the image features extracted in steps S1-S6 to image quality, for predicting image quality. The method requires no reference to the original image, achieves high prediction performance, and has wide application value.

Description

Quality evaluation method and device for camera-oriented acquired pictures
Technical Field
The invention relates to the technical field of image processing, and in particular to a quality evaluation method and device for pictures captured by a camera.
Background
With the popularization of mobile terminal devices, capturing images with a camera has become more and more convenient. During image acquisition, however, various factors can degrade the quality of the captured picture and affect its subsequent use. Digital images have become an important pillar of the information industry, and image quality evaluation technology therefore has wide application value. For example, in image acquisition, quality evaluation can be applied to monitor the quality of captured images in real time, give a prompt for low-quality images, and reject them; in image compression, it can measure the effectiveness of a compression algorithm and guide the algorithm to minimize the quality loss after compression; it can also be used to judge the merits of image processing algorithms.
Image quality evaluation can be broadly divided into subjective and objective evaluation: subjective evaluation means that observers score the quality of an image according to their own experience, while objective evaluation assesses image quality by means of a designed objective algorithm. A good objective quality assessment method should be consistent with the results of subjective assessment. Since the final recipient of an image is a human, subjective evaluation is the most reliable and accurate method; however, it consumes large amounts of manpower, material resources and time to score image quality, and becomes more difficult, or even infeasible, as the number of images grows. Objective quality evaluation methods are therefore particularly important.
In terms of objective quality assessment, mean squared error (MSE) and peak signal-to-noise ratio (PSNR) are currently the most popular criteria due to their simplicity, although they do not always agree well with testers' subjective scores. The structural similarity method (SSIM), proposed by Wang, Z. et al. in "Image quality assessment: from error visibility to structural similarity" (IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612), assesses the quality of a distorted image by comparing its structural similarity to the original image. Sheikh, H.R. et al. proposed the Visual Information Fidelity (VIF) method in "Image information and visual quality" (IEEE Trans. Image Process., vol. 15, no. 2, pp. 430-444), which evaluates image quality by quantifying information loss in the image. Zhang, L. et al., in "FSIM: A Feature Similarity Index for Image Quality Assessment" (IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378-2386), use gradient features and phase consistency features to assess image quality. Xue, W. et al. perform quality assessment using gradient information in "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index" (IEEE Trans. Image Process., vol. 23, no. 2, pp. 684-695). Gao, X. et al., in "Image Quality Assessment Based on Multiscale Geometric Analysis" (IEEE Trans. Image Process., vol. 18, no. 7, pp. 1409-1423), decompose an image at multiple scales, weight the decomposition coefficients with a human contrast sensitivity function, process the coefficients with a just-noticeable-difference (JND) model, and extract histogram features to predict image quality.
The Blind Image Quality Indexes (BIQI) model proposed by Liu, H. et al. (IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 7, pp. 971-982) first classifies the image distortion and then performs quality evaluation.
Disclosure of Invention
The main purpose of the present invention is to overcome at least one of the above technical drawbacks and to provide a quality evaluation method and apparatus for pictures captured by a camera.
In order to achieve the above object, the present invention provides a quality evaluation method for a camera-oriented acquired picture, the method comprising the steps of:
s1, extracting luminance and chrominance features, and estimating the luminance and chrominance of the input picture;
s2, extracting picture noise features, and estimating the noise level of the input picture;
s3, extracting structural features, and estimating the blur level of the input picture;
s4, extracting contrast features, and estimating the contrast of the input picture;
s5, extracting statistical features of the picture's normalization coefficients, and estimating the naturalness of the picture;
s6, extracting visual perception features of the picture, and estimating changes in the picture's visual perception;
s7, learning, on a training set and using a support vector regression method, a model that maps the image features extracted in steps S1-S6 to image quality, for predicting image quality.
Preferably, in step S1, the image is converted from RGB space to HSI space, and the mean value of each channel of the image is extracted as the luminance feature and the chrominance feature of the image.
Preferably, in step S2, the relationship between the kurtosis of the clean image and the corresponding noise image is modeled by using the principle of scale invariance of the natural image, and the variance of the image noise is solved.
Preferably, in the step S3, the structural features of the image, including the gradient strength and phase consistency of the image, are extracted, and then the two features are fused to describe the local structural features of the image, so as to quantify the blurring degree of the image.
Preferably, the gradient strength is computed as follows: the image is convolved with the Sobel operator to obtain gradient maps in the horizontal and vertical directions, and the arithmetic square root of the sum of their squares is taken as the gradient strength map.
Preferably, the extracting the phase consistency feature of the image refers to: and calculating the phase consistency characteristic of the image by adopting a Kovesi calculation method.
Preferably, in step S4, the difference between the central pixel and its surrounding upper, lower, left and right pixels is calculated to measure the contrast of the image, and the larger the difference, the higher the contrast.
Preferably, in step S5, the image is locally normalized by using the local mean and variance of the image, and then the obtained normalization coefficient is fitted by using a 0-mean generalized gaussian distribution, and fitting parameters are extracted to estimate the naturalness of the image.
Preferably, in step S6, the image is sparsely represented block by block, the residual between the image and its sparse representation is computed, and the mean, variance, kurtosis, skewness and information entropy of the representation residual are obtained.
Preferably, in step S7, features such as luminance, chrominance, and noise are extracted for each image using a set of distorted images, and then subjective scores corresponding to the extracted features and the images are input into a support vector regression model, a model for mapping image features to image quality is learned, and the model is used to predict the image quality.
A quality evaluation device for a camera-oriented picture comprises a computer-readable storage medium and a processor, wherein the computer-readable storage medium stores an executable program, and the executable program is executed by the processor to realize the quality evaluation method for the camera-oriented picture.
A computer-readable storage medium, storing an executable program, which when executed by a processor, implements the method for evaluating quality of a picture taken with a camera.
The invention has the beneficial effects that:
the invention provides a quality evaluation method and a quality evaluation device for a camera-oriented acquired picture, which are used for extracting features sensitive to image quality change, representing the change of image quality, and utilizing a support vector regression model to learn the mapping from image features to image quality so as to judge the quality of an image. The method does not need to refer to the original image, can obtain higher prediction performance, and has wide application value.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic diagram of an embodiment of a quality evaluation method for a camera-oriented image according to the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Images captured by a camera can be degraded for many reasons. The embodiment of the invention focuses on the main factors affecting image quality, including brightness, chrominance, contrast, noisiness and blur, models each of them separately, extracts corresponding features to describe them, and learns the mapping from image features to image quality using a support vector regression model.
Fig. 1 is a schematic diagram of an embodiment of the quality evaluation method for camera-captured pictures according to the present invention. As shown in fig. 1, an embodiment of the present invention provides a quality evaluation method for camera-captured pictures, the method comprising: S1, extracting luminance and chrominance features, and estimating the luminance and chrominance of the input picture; S2, extracting picture noise features, and estimating the noise level of the input picture; S3, extracting structural features, and estimating the blur level of the input picture; S4, extracting contrast features, and estimating the contrast of the input picture; S5, extracting statistical features of the picture's normalization coefficients, and estimating the naturalness of the picture; S6, extracting visual perception features of the picture, and estimating changes in the picture's visual perception; S7, learning, on a training set and using a support vector regression method, a model that maps the image features extracted in steps S1-S6 to image quality, for predicting image quality. The method characterizes changes in image quality by extracting features sensitive to such changes, including luminance, chrominance, noisiness, blur, contrast, naturalness and visual-perception statistical features, and learns the mapping from image features to image quality with a support vector regression model so as to judge the quality of an image. The method requires no reference to the original image and achieves high prediction performance.
In an embodiment, the specific implementation process and details of the quality evaluation method for camera-captured pictures are as follows:
the input picture is first converted from RGB space to HSI space:
Î = T(I)
where I is the input image, Î is the transformed image, and T(·) is the colour-space transform. Then, the luminance and chrominance features are calculated separately:
F_1 = (1/N) Σ_(x,y) Î_I(x,y)
F_2 = (1/N) Σ_(x,y) Î_H(x,y)
F_3 = (1/N) Σ_(x,y) Î_S(x,y)
where F_1 represents the luminance feature, F_2 and F_3 represent the chrominance features, Î_I is the luminance channel of the image, Î_H and Î_S are its two chrominance channels, and N represents the number of image pixels.
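A minimal sketch of step S1 in Python follows. The standard library offers HLS rather than HSI, so HLS is used here as a stand-in for the HSI space (an assumption), with the lightness channel playing the role of luminance; the input is assumed to be an (H, W, 3) float array in [0, 1]:

```python
import colorsys
import numpy as np

def luminance_chrominance_features(rgb):
    """Per-channel means of an HLS conversion as luminance/chrominance
    features; HLS stands in for HSI here (an assumption)."""
    hls = np.array([colorsys.rgb_to_hls(*px) for px in rgb.reshape(-1, 3)])
    F1 = hls[:, 1].mean()   # mean lightness  -> luminance feature
    F2 = hls[:, 0].mean()   # mean hue        -> chrominance feature
    F3 = hls[:, 2].mean()   # mean saturation -> chrominance feature
    return F1, F2, F3
```

For a uniform grey picture the saturation feature is 0 and the luminance feature equals the grey level, which matches the intent of the features above.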
Then, the variance of the noise in the image is estimated so as to estimate its noise level. Let the clean, noise-free image be x and the corresponding noisy image be y; the kurtosis of y can be expressed in terms of the kurtosis of x, the variance of x and the variance of the noise as:
κ_y = κ_x · (σ_x² / (σ_x² + σ_n²))²
where κ_y denotes the kurtosis of y, κ_x the kurtosis of x (determined by the shape parameter α of the distribution of x), σ_x² is the variance of x, and σ_n² is the variance of the noise. The variance of the noise and the estimate of the kurtosis of x can then be solved by minimizing:
(κ̂_x, σ̂_n²) = argmin Σ_i ( κ̂_x · ((σ_{y_i}² − σ̂_n²) / σ_{y_i}²)² − κ_{y_i} )²
where κ̂_x is the estimate of the kurtosis of x, σ̂_n² is the estimate of the noise variance, and σ_{y_i}² and κ_{y_i} are the variance and kurtosis of y_i, the image obtained by filtering y with the i-th DCT filter. σ̂_n² is used to indicate the level of image noise.
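The kurtosis-based estimate above can be sketched as follows. The DCT filter-bank size, the grid search over the noise variance, and the closed-form update for κ̂_x at each grid point are illustrative choices, not taken from the patent:

```python
import itertools
import numpy as np
from scipy import ndimage

def estimate_noise_variance(y, size=7, grid=200):
    """Sketch of the scale-invariance noise estimate: filter the image with
    the AC filters of a 2-D DCT bank, then grid-search the noise variance
    that best fits kappa_yi = kappa_x * ((var_i - s2) / var_i)^2 across
    bands. Filter size and grid resolution are illustrative assumptions."""
    n = np.arange(size)
    # 1-D DCT-II basis vectors; column k holds the k-th basis function
    phi = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / size)
    variances, kurtoses = [], []
    for u, v in itertools.product(range(size), repeat=2):
        if u == 0 and v == 0:
            continue                                  # skip the DC filter
        band = ndimage.correlate(y, np.outer(phi[:, u], phi[:, v]),
                                 mode="reflect").ravel()
        var = band.var()
        variances.append(var)
        kurtoses.append(((band - band.mean()) ** 4).mean() / (var ** 2 + 1e-24))
    variances, kurtoses = np.array(variances), np.array(kurtoses)
    best_err, best_s2 = np.inf, 0.0
    for s2 in np.linspace(0.0, variances.min() * 0.999, grid):
        a = ((variances - s2) / variances) ** 2
        kx = (a @ kurtoses) / (a @ a)   # optimal kappa_x for this s2
        err = ((kx * a - kurtoses) ** 2).sum()
        if err < best_err:
            best_err, best_s2 = err, s2
    return best_s2
```

This treats the per-band noise variance as equal across bands, which holds for unit-norm filters.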
Next, the structural features of the image are extracted and used to estimate its blur level; the gradient strength and the phase consistency of the image are computed separately to capture its internal structure. The gradient strength is calculated as follows: the image is convolved with the Sobel operator to obtain gradient maps in the horizontal and vertical directions:
G_x = I * S_x,  G_y = I * S_y
where G_x denotes the gradient in the horizontal direction and G_y the gradient in the vertical direction (S_x and S_y being the horizontal and vertical Sobel kernels). The gradient strength map is then calculated:
GM = sqrt(G_x² + G_y²)
where GM denotes the gradient strength map.
The phase consistency of the image is extracted by Kovesi's method. Given a one-dimensional signal s, define M_n^e and M_n^o as the even-symmetric and odd-symmetric filters at scale n, which form a quadrature pair of filters (approximated here by log-Gabor filters). Filtering the signal with this pair gives the response at position j:
[e_n(j), o_n(j)] = [s(j) * M_n^e, s(j) * M_n^o]
The amplitude is defined as
A_n(j) = sqrt(e_n(j)² + o_n(j)²)
Let F(j) = Σ_n e_n(j) and H(j) = Σ_n o_n(j); then the phase consistency PC is calculated as:
PC(j) = E(j) / (ε + Σ_n A_n(j))
where E(j) = sqrt(F(j)² + H(j)²), and ε is a small positive number that prevents the denominator from being 0. The one-dimensional PC computation is generalized to two dimensions, defined as:
PC(x, y) = Σ_o E_o(x, y) / (ε + Σ_o Σ_n A_{n,o}(x, y))
where o represents an index for each direction.
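A rough sketch of the one-dimensional phase-consistency computation just described, using FFT-domain log-Gabor filters as the quadrature pair; the number of scales, wavelengths and bandwidth below are illustrative defaults, not the patent's values:

```python
import numpy as np

def phase_congruency_1d(s, n_scales=4, min_wavelength=6, mult=2.0,
                        sigma_onf=0.55, eps=1e-4):
    """Sketch of Kovesi-style 1-D phase congruency. The even/odd quadrature
    responses are obtained as the real/imaginary parts of an analytic,
    log-Gabor-filtered signal."""
    N = len(s)
    S = np.fft.fft(s)
    freqs = np.fft.fftfreq(N)
    abs_f = np.abs(freqs)
    abs_f[0] = 1.0                       # avoid log(0); DC is zeroed below

    sum_e = np.zeros(N)                  # F(j): sum of even responses
    sum_o = np.zeros(N)                  # H(j): sum of odd responses
    sum_amp = np.zeros(N)                # sum of amplitudes A_n(j)

    wavelength = min_wavelength
    for _ in range(n_scales):
        f0 = 1.0 / wavelength            # filter centre frequency
        log_gabor = np.exp(-(np.log(abs_f / f0) ** 2)
                           / (2 * np.log(sigma_onf) ** 2))
        log_gabor[0] = 0.0               # zero response at DC
        # analytic filtering: real part = even response, imag = odd response
        resp = np.fft.ifft(S * log_gabor * (1 + np.sign(freqs)))
        e, o = resp.real, resp.imag
        sum_e += e
        sum_o += o
        sum_amp += np.sqrt(e ** 2 + o ** 2)
        wavelength *= mult
    energy = np.sqrt(sum_e ** 2 + sum_o ** 2)   # E(j)
    return energy / (eps + sum_amp)             # PC(j)
```

Because E(j) can never exceed the sum of amplitudes, the returned PC values always lie in [0, 1], with values near 1 at positions where the filter phases align (edges and lines).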
The gradient strength map and the phase consistency map are fused to obtain a local structure map of the image:
LS(i, j) = max{GM(i, j), PC(i, j)}
where LS denotes the local structure map, (i, j) denotes a pixel position, and max denotes the maximum operation. The local structure map is pooled to obtain an estimate of the blur level of the image:
s = (1/M) Σ_{(i,j)∈Ω} LS(i, j)
where s represents the blur level of the image, Ω is the set of the top 20% largest values in LS, and M is the number of elements in Ω.
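The fusion and pooling step can be sketched as below. The phase-consistency map is assumed to be precomputed and normalized to [0, 1], and normalizing GM to [0, 1] before fusing is an added assumption (the patent does not specify how the two maps are brought to a common scale):

```python
import numpy as np
from scipy import ndimage

def blur_feature(image, pc_map, top_fraction=0.2):
    """Sketch of the local-structure blur feature: Sobel gradient strength
    fused (pixel-wise max) with a phase-consistency map, then averaged over
    the top 20% of values."""
    gx = ndimage.sobel(image, axis=1)          # horizontal gradient
    gy = ndimage.sobel(image, axis=0)          # vertical gradient
    gm = np.sqrt(gx ** 2 + gy ** 2)
    gm = gm / (gm.max() + 1e-12)               # assumed normalization of GM
    ls = np.maximum(gm, pc_map)                # LS(i,j) = max{GM, PC}
    flat = np.sort(ls.ravel())[::-1]
    m = max(1, int(top_fraction * flat.size))  # top 20% largest values
    return flat[:m].mean()
```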
The contrast of the image is estimated by calculating, pixel by pixel, the difference between the central pixel and its adjacent pixels; the larger the difference, the higher the contrast of the image. Assume the current pixel value is a, the pixel value above it is a_1, the pixel value below it is a_2, the pixel value on the left is a_3, and the pixel value on the right is a_4. The difference between the current pixel and the surrounding pixel values is defined as:
d = |a − a_1| + |a − a_2| + |a − a_3| + |a − a_4|
where d represents the difference between the pixel values.
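A minimal sketch of the contrast measure; the exact way the four neighbour differences are combined (here, their absolute sum, averaged over the interior of the image) is an assumption:

```python
import numpy as np

def contrast_feature(image):
    """Sum of absolute differences between each interior pixel and its four
    neighbours, pooled by averaging; the combination rule is an assumption."""
    a  = image[1:-1, 1:-1]
    a1 = image[:-2, 1:-1]   # pixel above
    a2 = image[2:, 1:-1]    # pixel below
    a3 = image[1:-1, :-2]   # pixel to the left
    a4 = image[1:-1, 2:]    # pixel to the right
    d = (np.abs(a - a1) + np.abs(a - a2)
         + np.abs(a - a3) + np.abs(a - a4))
    return d.mean()         # pooled contrast feature
```

A flat image scores 0 and a checkerboard, whose every pixel differs maximally from all four neighbours, scores the maximum, matching the stated intuition that larger differences mean higher contrast.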
The naturalness of the image is estimated. The image is locally normalized using its local mean and variance to obtain a normalized image; that is, the normalization coefficients of the image are calculated:
Î(x, y) = (I(x, y) − μ(x, y)) / (σ(x, y) + C)
where I is the input image, (x, y) denotes the pixel position, Î represents the normalized-coefficient image, μ(x, y) and σ(x, y) are the local mean and standard deviation at (x, y), and C is a small constant that prevents division by zero. The locally normalized image is then fitted with a zero-mean generalized Gaussian distribution, whose probability density is defined as:
f(x; α, β) = α / (2β·Γ(1/α)) · exp(−(|x|/β)^α)
where Γ(·) is the gamma function, defined as:
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt,  z > 0
Here α is a shape parameter describing the shape of the distribution and β is a scale parameter related to the standard deviation; the two parameters describe the distribution and are extracted as the naturalness features of the image.
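A BRISQUE-style sketch of the naturalness features; the Gaussian local window, the stabilizing constants and the moment-matching fit of the generalized Gaussian are assumptions consistent with the description above, not values taken from the patent:

```python
import numpy as np
from scipy import ndimage
from scipy.special import gamma

def naturalness_features(image, sigma=7/6, C=1e-3):
    """Locally normalize the image with a Gaussian-weighted local mean/std,
    then fit a zero-mean generalized Gaussian by moment matching and return
    its shape (alpha) and scale (beta) parameters."""
    mu = ndimage.gaussian_filter(image, sigma)
    var = ndimage.gaussian_filter(image ** 2, sigma) - mu ** 2
    sd = np.sqrt(np.maximum(var, 0))
    mscn = (image - mu) / (sd + C)            # normalization coefficients

    # Moment matching: r = E[|x|]^2 / E[x^2] identifies the GGD shape alpha,
    # since for a GGD r(alpha) = Gamma(2/a)^2 / (Gamma(1/a) * Gamma(3/a)).
    x = mscn.ravel()
    r = np.mean(np.abs(x)) ** 2 / (np.mean(x ** 2) + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    rho = gamma(2 / alphas) ** 2 / (gamma(1 / alphas) * gamma(3 / alphas))
    alpha = alphas[np.argmin(np.abs(rho - r))]
    # Scale from the second moment: E[x^2] = beta^2 * Gamma(3/a) / Gamma(1/a)
    beta = np.sqrt(np.mean(x ** 2) * gamma(1 / alpha) / gamma(3 / alpha))
    return alpha, beta
```

For a natural, undistorted image the normalization coefficients are close to Gaussian (alpha near 2); distortions push alpha and beta away from those values, which is what makes the pair usable as a naturalness feature.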
The visual-perception statistical features of the image are extracted. For an image I, image blocks are first extracted and sparsely represented. Let x_k ∈ R^n denote the k-th image block, a vectorized patch of √n × √n pixels. This extraction can be expressed as:
x_k = R_k(I)
where R_k(·) is the image-block extraction operator that extracts the block at position k, with k = 1, 2, 3, ...
For an image block x_k, its sparse representation over a dictionary D ∈ R^(n×K) consists in finding a sparse vector α_k (most of whose elements are 0 or close to 0) that satisfies:
α̂_k = argmin_(α_k) ||x_k − D·α_k||₂² + λ·||α_k||_p
The first term is a fidelity term and the second a sparsity constraint; λ is a constant that balances the two terms, and p is 0 or 1. If p is 0, the sparsity term counts the non-zero coefficients, which matches the desired notion of sparsity; however, the 0-norm optimization problem is non-convex and difficult to solve, and the alternative is to set p to 1, turning the above into a convex optimization problem. Solving the above formula with the orthogonal matching pursuit (OMP) algorithm yields the sparse representation coefficients α̂_k of image block x_k, which can then be sparsely approximated as D·α̂_k. The sparse representation of the entire image I can be written as:
I′ = Σ_k R_kᵀ(D·α̂_k) / Σ_k R_kᵀ(1)
where the division is element-wise, so pixels covered by several blocks are averaged.
where I′ represents the sparse representation of image I. Image distortion changes the way the brain understands, i.e. sparsely represents, an image, so the difference between the image and its sparse representation is used to describe the change in image quality. The residual between the input image and its sparse representation is first calculated:
PR(x,y)=I(x,y)-I′(x,y)
where PR is the representation residual, I is the input image (or image block), and I′ is the sparse representation of the input image. Statistical features of the representation residual are extracted to pool it: the mean, variance, kurtosis, skewness and information entropy are calculated. Let ε(·) denote the mean operation; the mean, variance, kurtosis and skewness of the representation residual can be computed as:
mPR=ε(PR)
v_PR = ε((PR − m_PR)²)
κ_PR = ε((PR − m_PR)⁴) / v_PR²
s_PR = ε((PR − m_PR)³) / v_PR^(3/2)
the information entropy is calculated as:
E_PR = −Σ_i p_i · log₂ p_i
where p_i is the probability of the i-th grey level.
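Given a representation residual PR (however obtained), the pooling statistics above can be sketched as follows; the 64-bin histogram used for the entropy is an assumption:

```python
import numpy as np

def residual_statistics(residual, n_bins=64):
    """Mean, variance, skewness, kurtosis and histogram entropy of the
    representation residual PR (entropy bin count is an assumption)."""
    r = residual.ravel().astype(float)
    m = r.mean()
    v = r.var()
    c = r - m
    skew = (c ** 3).mean() / (v ** 1.5 + 1e-12)
    kurt = (c ** 4).mean() / (v ** 2 + 1e-12)
    hist, _ = np.histogram(r, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    entropy = -(p * np.log2(p)).sum()
    return m, v, skew, kurt, entropy
```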
Finally, the quality-prediction model is trained: the above features are extracted from each image in a set of distorted images, the extracted features and the corresponding subjective scores are input into a support vector regression model to train a quality model, and the trained model is used to predict the quality of other images.
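Step S7 can be sketched with scikit-learn's SVR. The synthetic features and scores below are stand-ins for the per-image features of steps S1-S6 and their subjective (mean opinion) scores, and the RBF kernel and feature scaling are common choices rather than values mandated by the patent:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins: 200 training images, 12 features each, with scores
# that depend on the features plus a little noise.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))
mos = features @ rng.normal(size=12) + rng.normal(scale=0.1, size=200)

# Learn the mapping from image features to image quality.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(features, mos)

# Predict the quality of (here, previously seen) pictures.
predicted_quality = model.predict(features[:5])
```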
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (10)

1. A quality evaluation method for a camera-oriented acquired picture is characterized by comprising the following steps:
s1, extracting luminance and chrominance features, and estimating the luminance and chrominance of the input picture;
s2, extracting picture noise features, and estimating the noise level of the input picture;
s3, extracting structural features, and estimating the blur level of the input picture;
s4, extracting contrast features, and estimating the contrast of the input picture;
s5, extracting statistical features of the picture's normalization coefficients, and estimating the naturalness of the picture;
s6, extracting visual perception features of the picture, and estimating changes in the picture's visual perception;
s7, learning, on a training set and using a support vector regression method, a model that maps the image features extracted in steps S1-S6 to image quality, for predicting image quality.
2. The method for evaluating the quality of pictures acquired by a camera according to claim 1, wherein in step S1, the input picture is converted from RGB space to HSI space, and the mean value of each channel of the image is extracted as the luminance characteristic and the chrominance characteristic of the image;
preferably, the step S1 specifically includes: the input picture is first converted from RGB space to HSI space:
Î = T(I)
wherein I is the input image, Î is the transformed image, and T(·) is the colour-space transform; then, the luminance and chrominance features are calculated separately:
F_1 = (1/N) Σ_(x,y) Î_I(x,y)
F_2 = (1/N) Σ_(x,y) Î_H(x,y)
F_3 = (1/N) Σ_(x,y) Î_S(x,y)
wherein F_1 represents the luminance feature, F_2 and F_3 represent the chrominance features, Î_I is the luminance channel of the image, Î_H and Î_S are its two chrominance channels, and N represents the number of image pixels.
3. The method for evaluating the quality of the pictures acquired by the camera according to the claim 1, wherein in the step S2, the relationship between the kurtosis of the clean image and the corresponding noise image is modeled by using the principle of scale invariance of the natural image, and the variance of the image noise is solved;
preferably, the step S2 specifically includes: estimating the variance of noise in the image to estimate the noise level of the image, wherein if the clean noise-free image is x and the corresponding noise image is y, the relationship between the kurtosis of y and the kurtosis of x, the variance of x and the variance of noise is expressed as follows:
κ_y = κ_x · (σ_x² / (σ_x² + σ_n²))²
wherein κ_y denotes the kurtosis of y, κ_x the kurtosis of x (determined by the shape parameter α of the distribution of x), σ_x² is the variance of x, and σ_n² is the variance of the noise; the variance of the noise and the estimate of the kurtosis of x are solved by minimizing:
(κ̂_x, σ̂_n²) = argmin Σ_i ( κ̂_x · ((σ_{y_i}² − σ̂_n²) / σ_{y_i}²)² − κ_{y_i} )²
wherein κ̂_x is the estimate of the kurtosis of x, σ̂_n² is the estimate of the noise variance, and σ_{y_i}² and κ_{y_i} are the variance and kurtosis of y_i, the image obtained by filtering y with the i-th DCT filter; σ̂_n² is used to indicate the level of image noise.
4. The method for evaluating the quality of the image obtained by the camera according to claim 1, wherein in the step S3, the structural features of the image, including the gradient strength and phase consistency of the image, are extracted, and then the two features are fused to describe the local structural features of the image, so as to quantify the blurring degree of the image; preferably, the calculation of the gradient strength uses a Sobel operator to perform convolution on the image to obtain gradient maps in the horizontal direction and the vertical direction, and then takes the arithmetic square root of the square sum of the gradient maps in the horizontal direction and the vertical direction as a gradient strength map; preferably, a phase consistency characteristic of the image is calculated by adopting a calculation method of Kovesi;
preferably, step S3 specifically includes: extracting the structural features of the image by calculating its gradient strength and phase consistency to capture the internal structure, and estimating the degree of blurring from these structural features; the gradient strength is calculated as follows: the image is convolved with the Sobel operator to obtain the horizontal and vertical gradient maps:

$G_x = S_x * I, \quad G_y = S_y * I$

wherein $G_x$ denotes the gradient in the horizontal direction, $G_y$ the gradient in the vertical direction, and $S_x$, $S_y$ are the horizontal and vertical Sobel kernels; the gradient intensity map is then calculated as:

$GM = \sqrt{G_x^2 + G_y^2}$

wherein $GM$ represents the gradient intensity map;
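A minimal sketch of the Sobel gradient-intensity computation described above; the border-handling mode is a standard choice, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
SOBEL_Y = SOBEL_X.T                             # vertical gradient kernel

def gradient_magnitude(img):
    """GM = sqrt(Gx^2 + Gy^2), the gradient intensity map."""
    gx = convolve(img.astype(float), SOBEL_X, mode="nearest")
    gy = convolve(img.astype(float), SOBEL_Y, mode="nearest")
    return np.sqrt(gx ** 2 + gy ** 2)
```

On a vertical step edge the map peaks along the edge columns and is zero in flat regions.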
extracting the phase consistency of the image by Kovesi's method: given a one-dimensional signal $s$, let $M_n^e$ and $M_n^o$ denote the even-symmetric and odd-symmetric filters at scale $n$; they form a quadrature pair, preferably approximated by log-Gabor filters; filtering the signal with this pair gives the response at position $j$:

$[e_n(j),\, o_n(j)] = [(s * M_n^e)(j),\ (s * M_n^o)(j)]$

whose amplitude is defined as

$A_n(j) = \sqrt{e_n(j)^2 + o_n(j)^2}$

let $F(j) = \sum_n e_n(j)$ and $H(j) = \sum_n o_n(j)$; then the phase consistency $PC$ is calculated as:

$PC(j) = \dfrac{E(j)}{\varepsilon + \sum_n A_n(j)}$

wherein

$E(j) = \sqrt{F(j)^2 + H(j)^2}$

and $\varepsilon$ is a small positive number that prevents the denominator from becoming 0; the one-dimensional $PC$ generalizes to two-dimensional signals as:

$PC(j) = \dfrac{\sum_o E_o(j)}{\varepsilon + \sum_o \sum_n A_{n,o}(j)}$

wherein $o$ is an index over orientations;
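A one-dimensional phase-consistency sketch in the spirit of the formulas above; it substitutes simple Gabor cosine/sine kernels for the log-Gabor pair the claim prefers, so the scales, kernel width, and function names are illustrative assumptions:

```python
import numpy as np

def quadrature_pair(scale, width=31):
    """Even (cosine) and odd (sine) Gabor kernels at one scale."""
    t = np.arange(width) - width // 2
    env = np.exp(-t ** 2 / (2.0 * (scale / 2.0) ** 2))
    even = env * np.cos(2 * np.pi * t / scale)
    odd = env * np.sin(2 * np.pi * t / scale)
    even -= even.mean()          # zero-DC so flat regions give no response
    return even, odd

def phase_consistency_1d(s, scales=(4, 8, 16), eps=1e-4):
    """PC(j) = sqrt(F^2 + H^2) / (eps + sum_n A_n) over a few scales."""
    F = np.zeros(len(s))
    H = np.zeros(len(s))
    A = np.zeros(len(s))
    for sc in scales:
        me, mo = quadrature_pair(sc)
        e = np.convolve(s, me, mode="same")
        o = np.convolve(s, mo, mode="same")
        F += e
        H += o
        A += np.sqrt(e ** 2 + o ** 2)
    return np.sqrt(F ** 2 + H ** 2) / (eps + A)
```

At a step edge the odd responses align across scales, so PC approaches 1 there and stays near 0 in flat regions.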
fusing the gradient intensity map and the phase consistency map to obtain a local structure map of the image:
LS(i,j)=max{GM(i,j),PC(i,j)}
wherein $LS$ represents the local structure map, $(i,j)$ the pixel position, and $\max$ the maximum operation; the local structure map is pooled to obtain an estimate of the degree of blurring of the image:

$s = \dfrac{1}{M} \sum_{(i,j)\in\Omega} LS(i,j)$

where $s$ represents the degree of blurring of the image, $\Omega$ is the set of the top 20% largest values in $LS$, and $M$ is the number of elements in $\Omega$.
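The max-fusion and top-20% pooling steps can be sketched directly; `blur_score` and `top_fraction` are illustrative names:

```python
import numpy as np

def blur_score(gm, pc, top_fraction=0.2):
    """Fuse the gradient-strength and phase-consistency maps with a
    pixel-wise max, then average the top 20% largest local-structure
    values into a single blur estimate."""
    ls = np.maximum(gm, pc)              # LS(i,j) = max{GM(i,j), PC(i,j)}
    flat = np.sort(ls, axis=None)[::-1]  # descending order
    m = max(1, int(np.ceil(top_fraction * flat.size)))
    return flat[:m].mean()
```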
5. The method for evaluating the quality of pictures acquired by a camera according to claim 1, wherein in step S4, the difference between the central pixel and its upper, lower, left and right neighbours is calculated to measure the local variation of the image; the larger the difference, the higher the contrast;
preferably, step S4 specifically includes: calculating pixel by pixel the difference between the central pixel and its adjacent pixels, the larger the difference, the higher the contrast of the image; assuming the current pixel value is $a$, the upper pixel value is $a_1$, the lower pixel value is $a_2$, the left pixel value is $a_3$ and the right pixel value is $a_4$, the difference between the current pixel and its surrounding pixels is defined as:

$d = \dfrac{1}{4}\left(|a-a_1| + |a-a_2| + |a-a_3| + |a-a_4|\right)$

where $d$ represents the difference between pixel values.
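A sketch of the neighbour-difference contrast measure; averaging the four absolute differences and replicating the border are assumptions about the exact pooling:

```python
import numpy as np

def contrast_map(img):
    """Per-pixel mean absolute difference to the 4 axial neighbours
    (edges handled by replicating the border)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    c = p[1:-1, 1:-1]
    return (np.abs(c - up) + np.abs(c - down) +
            np.abs(c - left) + np.abs(c - right)) / 4.0

def contrast_score(img):
    """Average the per-pixel differences into one contrast value."""
    return contrast_map(img).mean()
```

A constant image scores 0; a checkerboard scores high, matching the intuition that larger neighbour differences mean higher contrast.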
6. The method for evaluating the quality of a picture obtained by a camera according to claim 1, wherein in step S5, the image is locally normalized using its local mean and variance, the resulting normalization coefficients are fitted with a zero-mean generalized Gaussian distribution, and the fitting parameters are extracted to estimate the naturalness of the image;
preferably, step S5 specifically includes: locally normalizing the image using the local mean and variance to obtain a normalized image, i.e. calculating the normalization coefficients of the image:

$\hat{I}(x,y) = \dfrac{I(x,y) - \mu(x,y)}{\sigma(x,y) + C}$

where $I$ is the input image, $(x,y)$ denotes the pixel position, $\hat{I}$ is the normalized coefficient image, $\mu(x,y)$ and $\sigma(x,y)$ are the local mean and standard deviation at $(x,y)$, and $C$ is a small constant that keeps the denominator from vanishing; the locally normalized image is then fitted with a zero-mean generalized Gaussian distribution, whose probability density is defined as:

$f(x;\alpha,\beta) = \dfrac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\left(-\left(\dfrac{|x|}{\beta}\right)^{\alpha}\right)$

where $\Gamma(\cdot)$ is the gamma function, defined as:

$\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt,\quad x > 0$

wherein $\alpha$ is a shape parameter describing the shape of the distribution and $\beta$ is a scale parameter tied to the standard deviation; together the two parameters describe the distribution and are extracted as the naturalness features of the image.
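The local normalization and generalized-Gaussian fit can be prototyped with `scipy.stats.gennorm`, whose shape parameter plays the role of α above; the Gaussian window size and the stabilizing constant are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm

def mscn(img, sigma=7/6, c=1.0):
    """Local (divisive) normalization: (I - mu) / (sigma_local + c)."""
    img = img.astype(float)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img ** 2, sigma) - mu ** 2
    return (img - mu) / (np.sqrt(np.clip(var, 0.0, None)) + c)

def naturalness_features(img):
    """Fit a zero-mean generalized Gaussian to the normalized coefficients
    and return (shape alpha, scale beta) as naturalness features."""
    coeffs = mscn(img).ravel()
    alpha, _, beta = gennorm.fit(coeffs, floc=0.0)  # loc fixed at 0 (zero-mean)
    return alpha, beta
```

For a natural, undistorted image the fitted shape parameter tends toward the Gaussian regime; distortions push it away, which is what makes the pair usable as a naturalness feature.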
7. The method for evaluating the quality of pictures acquired by a camera according to claim 1, wherein in step S6, the image is sparsely represented block by block, the difference between the image and its sparse representation is then calculated, and the mean, variance, kurtosis, skewness and information entropy of the representation residual are computed;
preferably, step S6 specifically includes: for an image $I$, first extracting an image block for sparse representation; assume the block extracted at position $k$ is $x_k \in \mathbb{R}^n$ (a vectorized block of $n$ pixels); this process can be expressed as:

$x_k = R_k(I)$

wherein $R_k(\cdot)$ is the image-block extraction operator, which extracts the image block at position $k$, $k = 1, 2, 3, \ldots, K$;

for the image block $x_k$, its sparse representation over a dictionary $D \in \mathbb{R}^{n \times m}$ consists in finding a sparse vector $\alpha_k \in \mathbb{R}^m$ (most of whose elements are 0 or close to 0) satisfying:

$\hat{\alpha}_k = \arg\min_{\alpha} \|x_k - D\alpha\|_2^2 + \lambda \|\alpha\|_p$

wherein the first term is the fidelity term, the second term is the sparsity constraint, $\lambda$ is a constant balancing the weights of the two terms, and $p$ is 0 or 1; preferably, $p$ is set to 1, which turns the above into a convex optimization problem; solving the above with the Orthogonal Matching Pursuit (OMP) algorithm yields the sparse representation coefficients $\hat{\alpha}_k$ of the image block $x_k$; $x_k$ can then be sparsely represented as $D\hat{\alpha}_k$, and the sparse representation of the entire image $I$ is:

$I' = \Big(\sum_k R_k^{T} R_k\Big)^{-1} \sum_k R_k^{T}\big(D\hat{\alpha}_k\big)$

wherein $I'$ represents the sparse representation of image $I$; the difference between the image and its sparse representation describes the change in image quality; the residual between the input image and its sparse representation is first calculated:
PR(x,y)=I(x,y)-I′(x,y)
wherein $PR$ is the representation residual, $I$ is the input image (or image block), and $I'$ is its sparse representation; statistical features of the representation residual are then extracted to pool it: the mean, variance, kurtosis, skewness and information entropy of the representation residual are calculated; assuming $\varepsilon(\cdot)$ denotes the mean operation, the mean, variance, kurtosis and skewness of the representation residual are calculated as:

$m_{PR} = \varepsilon(PR)$

$v_{PR} = \varepsilon\big((PR - m_{PR})^2\big)$

$k_{PR} = \dfrac{\varepsilon\big((PR - m_{PR})^4\big)}{v_{PR}^2}$

$s_{PR} = \dfrac{\varepsilon\big((PR - m_{PR})^3\big)}{v_{PR}^{3/2}}$

the information entropy is calculated as:

$E_{PR} = -\sum_i p_i \log_2 p_i$

wherein $p_i$ is the probability of the $i$th gray level.
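The residual-pooling statistics can be sketched in NumPy; the histogram bin count used for the entropy is an assumption:

```python
import numpy as np

def residual_statistics(pr, bins=16):
    """Mean, variance, kurtosis, skewness and entropy pooling of the
    representation residual PR."""
    pr = np.asarray(pr, dtype=float).ravel()
    m = pr.mean()
    v = ((pr - m) ** 2).mean()
    k = ((pr - m) ** 4).mean() / v ** 2        # (non-excess) kurtosis
    s = ((pr - m) ** 3).mean() / v ** 1.5      # skewness
    hist, _ = np.histogram(pr, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                               # drop empty bins: 0*log(0) := 0
    e = -(p * np.log2(p)).sum()                # entropy in bits
    return m, v, k, s, e
```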
8. The method for evaluating the quality of pictures acquired by a camera according to claim 1, wherein step S7 specifically comprises: using a group of distorted images, executing steps S1-S6 for each image to extract the corresponding features, inputting the extracted features and the corresponding subjective scores into a support vector regression model to train a mapping model from image features to image quality, and then using the trained model to predict image quality.
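A sketch of the training step with `sklearn.svm.SVR`, using synthetic stand-ins for the extracted feature vectors and the subjective scores; the kernel and hyperparameters are illustrative choices:

```python
import numpy as np
from sklearn.svm import SVR

# synthetic stand-ins for the per-image feature vectors and subjective scores
rng = np.random.default_rng(0)
features = rng.normal(size=(120, 10))                 # one feature row per image
scores = features[:, 0] * 2.0 + rng.normal(scale=0.1, size=120)  # toy MOS

model = SVR(kernel="rbf", C=10.0, gamma="scale")
model.fit(features[:100], scores[:100])   # learn the feature -> quality mapping
pred = model.predict(features[100:])      # predict quality of unseen images
```

In practice the feature rows would come from steps S1-S6 and the targets from a subjective study.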
9. A camera-oriented picture quality evaluation device comprising a computer-readable storage medium and a processor, wherein the computer-readable storage medium stores an executable program, and wherein the executable program, when executed by the processor, implements the camera-oriented picture quality evaluation method according to any one of claims 1 to 8.
10. A computer-readable storage medium, in which an executable program is stored, which, when executed by a processor, implements the method for quality assessment of pictures acquired by a camera according to any one of claims 1 to 8.
CN202010112925.0A 2020-02-24 2020-02-24 Quality evaluation method and device for camera-acquired pictures Active CN111354048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010112925.0A CN111354048B (en) Quality evaluation method and device for camera-acquired pictures


Publications (2)

Publication Number Publication Date
CN111354048A true CN111354048A (en) 2020-06-30
CN111354048B CN111354048B (en) 2023-06-20

Family

ID=71197156


Country Status (1)

Country Link
CN (1) CN111354048B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419300A (en) * 2020-12-04 2021-02-26 Shenzhen International Graduate School of Tsinghua University Underwater image quality evaluation method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102750695A (en) * 2012-06-04 2012-10-24 Tsinghua University Machine learning-based stereoscopic image quality objective assessment method
US20170286798A1 (en) * 2016-03-31 2017-10-05 Ningbo University Objective assessment method for color image quality based on online manifold learning
CN107371015A (en) * 2017-07-21 2017-11-21 Huaqiao University No-reference contrast-distorted image quality evaluation method
CN109255358A (en) * 2018-08-06 2019-01-22 Zhejiang University 3D image quality evaluation method based on visual saliency and depth map


Non-Patent Citations (1)

Title
Liu Yutao, "Research on Image Quality Evaluation Methods Based on Visual Perception and Statistics" *


Also Published As

Publication number Publication date
CN111354048B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
Wang et al. Structural similarity based image quality assessment
Shao et al. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties
CN108932697B (en) Distortion removing method and device for distorted image and electronic equipment
CN111932532B (en) Method for evaluating capsule endoscope without reference image, electronic device, and medium
Tian et al. Light field image quality assessment via the light field coherence
CN106127741B (en) Non-reference picture quality appraisement method based on improvement natural scene statistical model
CN104023230B (en) A kind of non-reference picture quality appraisement method based on gradient relevance
CN108830823B (en) Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis
CN110232670B (en) Method for enhancing visual effect of image based on high-low frequency separation
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
CN106127234B (en) Non-reference picture quality appraisement method based on characteristics dictionary
CN108074241B (en) Quality scoring method and device for target image, terminal and storage medium
KR20200140713A (en) Method and apparatus for training neural network model for enhancing image detail
CN105007488A (en) Universal no-reference image quality evaluation method based on transformation domain and spatial domain
CN110910347B (en) Tone mapping image non-reference quality evaluation method based on image segmentation
CN110111347B (en) Image sign extraction method, device and storage medium
CN111105357A (en) Distortion removing method and device for distorted image and electronic equipment
Shi et al. SISRSet: Single image super-resolution subjective evaluation test and objective quality assessment
CN105894507A (en) Image quality evaluation method based on image information content natural scenario statistical characteristics
Wang et al. New insights into multi-focus image fusion: A fusion method based on multi-dictionary linear sparse representation and region fusion model
CN107590804A (en) Screen picture quality evaluating method based on channel characteristics and convolutional neural networks
Morzelona Human visual system quality assessment in the images using the IQA model integrated with automated machine learning model
CN108257117B (en) Image exposure evaluation method and device
Lévêque et al. CUID: A new study of perceived image quality and its subjective assessment
CN111354048B (en) 2023-06-20 Quality evaluation method and device for camera-acquired pictures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant