CN111354048B - Quality evaluation method and device for obtaining pictures by facing camera - Google Patents
- Publication number: CN111354048B
- Application number: CN202010112925.0A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T 7/90: Determination of colour characteristics
- G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06T 7/0002: Inspection of images, e.g. flaw detection
- G06V 10/40: Extraction of image or video features
- G06T 2207/20081: Training; Learning
- G06T 2207/30168: Image quality inspection
Abstract
The invention discloses a quality evaluation method and device for pictures captured by a camera. The method comprises the following steps: S1, extracting luminance and chrominance features to estimate the luminance and chrominance of an input picture; S2, extracting image-noise features to estimate the noise level of the input picture; S3, extracting structural features to estimate the blurriness of the input picture; S4, extracting contrast features to estimate the contrast of the input picture; S5, extracting statistical features of the picture's normalization coefficients to estimate the naturalness of the picture; S6, extracting visual-perception features to estimate visual-perception changes in the picture; and S7, using support vector regression to learn, on a training set, a model mapping the features extracted in steps S1-S6 to image quality, and using it to predict the quality of a picture. The method needs no reference to the original image, achieves high prediction performance, and has wide application value.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a quality evaluation method and device for pictures captured by a camera.
Background
With the popularity of mobile terminal devices, taking pictures with cameras has become increasingly convenient. However, during image acquisition, various factors can degrade the quality of the captured image and hinder its subsequent use. Digital images are an important pillar of the information industry, and image quality evaluation techniques have a wide range of applications within it. For example, during image acquisition, quality evaluation can monitor the quality of captured images in real time, flag low-quality images, and reject them. In image compression, quality evaluation serves two roles: measuring the effectiveness of a compression algorithm, and guiding the algorithm so that compression loses as little quality as possible. Quality evaluation can also be used to judge the merits of image processing algorithms.
Image quality evaluation can be broadly divided into subjective and objective evaluation: subjective evaluation scores image quality according to observers' judgments, while objective evaluation assesses quality with a designed algorithm. A good objective quality assessment method should agree with the results of subjective evaluation. Since the final receiver of an image is a person, subjective evaluation is the most reliable and accurate method; however, it consumes a great deal of manpower, material resources and time, and becomes harder, or even impossible, as the number of images grows. Objective quality evaluation methods are therefore particularly important.
In terms of objective quality assessment, mean squared error (MSE) and peak signal-to-noise ratio (PSNR), although not always consistent with testers' subjective scores, remain the most common quality criteria because of their simplicity. The structural similarity method (SSIM), proposed by Wang, Z. et al. in "Image quality assessment: from error visibility to structural similarity" (IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612), evaluates the quality of a distorted image by comparing its structural similarity with the original image. Sheikh, H.R. et al., in "Image information and visual quality" (IEEE Trans. Image Process., vol. 15, no. 2, pp. 430-444), propose the visual information fidelity (VIF) method, which evaluates image quality by quantifying the loss of information in the image. Zhang, L. et al., in "FSIM: A Feature Similarity Index for Image Quality Assessment" (IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378-2386), use gradient features and phase consistency features to evaluate image quality. Xue, W. et al. use gradient information for quality assessment in "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index" (IEEE Trans. Image Process., vol. 23, pp. 684-695). Gao, X. et al., in "Image Quality Assessment Based on Multiscale Geometric Analysis" (IEEE Trans. Image Process., vol. 18, no. 7, pp. 1409-1423), decompose images at multiple scales, weight the decomposed coefficients with human-eye contrast sensitivity functions, and then process the coefficients with a just-noticeable-difference (JND) model to extract histogram features that predict image quality.
Liu, H. et al., in IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 7, pp. 971-982, describe the Blind Image Quality Indices (BIQI) model, which classifies the type of image distortion before evaluating quality.
Disclosure of Invention
The invention aims to overcome at least one of the above technical defects, and provides a quality evaluation method and device for pictures captured by a camera.
In order to achieve the above object, the present invention provides a quality evaluation method for pictures captured by a camera, the method comprising the steps of:
S1, extracting luminance and chrominance features, and estimating the luminance and chrominance of an input picture;
S2, extracting image-noise features, and estimating the noise level of the input picture;
S3, extracting structural features, and estimating the blurriness of the input picture;
S4, extracting contrast features, and estimating the contrast of the input picture;
S5, extracting statistical features of the picture's normalization coefficients, and estimating the naturalness of the picture;
S6, extracting visual-perception features of the picture, and estimating visual-perception changes in the picture;
and S7, learning, on a training set and with support vector regression, a model mapping the image features extracted in steps S1-S6 to image quality, for use in predicting picture quality.
Preferably, in the step S1, the image is converted from RGB space to HSI space, and the average value of each channel of the image is extracted as the luminance feature and the chrominance feature of the image.
Preferably, in the step S2, a relationship between kurtosis of the clean image and the corresponding noise image is modeled by using a natural image scale invariance principle, and a variance of the image noise is solved.
Preferably, in the step S3, structural features of the image, including gradient strength and phase consistency of the image, are extracted, and then local structural features of the image are described by merging the two features, so as to quantify the blurring degree of the image.
Preferably, the gradient intensity is calculated by convolving the image with the Sobel operator to obtain gradient maps in the horizontal and vertical directions, and taking the arithmetic square root of the sum of the squares of these two gradient maps as the gradient intensity map.
Preferably, the phase consistency feature of the extracted image refers to: and calculating the phase consistency characteristic of the image by adopting a Kovesi calculation method.
Preferably, in the step S4, the difference between the center pixel and the pixels around the center pixel is calculated to measure the contrast of the image, and the larger the difference is, the higher the contrast is.
Preferably, in the step S5, the local mean value and variance of the image are used to normalize the image locally, then the generalized gaussian distribution of 0 mean value is used to fit the obtained normalized coefficient, and the fitting parameters are extracted to estimate the naturalness of the image.
Preferably, in the step S6, sparse representation is performed on the image in units of blocks, and then a difference between the image and the sparse representation is calculated, so as to obtain a mean, a variance, kurtosis, skewness and information entropy of the representation residual.
Preferably, in the step S7, a set of distorted images is used to extract features such as brightness, chromaticity, noisiness, etc. from each image, then subjective scores of the extracted features and the images are input into a support vector regression model, a mapping model of image features to image quality is learned, and the model is used to predict the quality of the images.
The quality evaluation device for pictures captured by a camera comprises a computer-readable storage medium and a processor; the computer-readable storage medium stores an executable program, and the above quality evaluation method for pictures captured by a camera is implemented when the executable program is executed by the processor.
A computer-readable storage medium stores an executable program which, when executed by a processor, implements the above quality evaluation method for pictures captured by a camera.
The invention has the following beneficial effects:
The invention provides a quality evaluation method and device for pictures captured by a camera, which extract features sensitive to image quality change to characterize that change, and use a support vector regression model to learn the mapping from image features to image quality so as to judge the quality of an image. The method needs no reference to the original image, achieves high prediction performance, and has wide application value.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
fig. 1 is a schematic diagram of an embodiment of a quality evaluation method for obtaining pictures by a camera according to the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
Images captured by a camera degrade for many reasons. The invention identifies the main factors that influence image quality, including luminance, chrominance, contrast, noisiness and blurriness, models each factor separately, extracts corresponding features to describe it, and uses a support vector regression model to learn the mapping from image features to image quality.
Fig. 1 is a schematic diagram of an embodiment of the quality evaluation method for pictures captured by a camera according to the present invention. As shown in fig. 1, an embodiment of the present invention provides a quality evaluation method comprising: S1, extracting luminance and chrominance features to estimate the luminance and chrominance of an input picture; S2, extracting image-noise features to estimate the noise level of the input picture; S3, extracting structural features to estimate the blurriness of the input picture; S4, extracting contrast features to estimate the contrast of the input picture; S5, extracting statistical features of the picture's normalization coefficients to estimate the naturalness of the picture; S6, extracting visual-perception features to estimate visual-perception changes in the picture; and S7, using support vector regression to learn, on a training set, a model mapping the features extracted in steps S1-S6 to image quality, for predicting picture quality. Features sensitive to quality change, covering luminance, chrominance, noisiness, blurriness, contrast, naturalness and visual-perception statistics, characterize the change in image quality, and a support vector regression model learns the mapping from these features to image quality so as to judge it. The method needs no reference to the original image and achieves high prediction performance.
In an embodiment, the specific implementation process and the detailed details of the quality evaluation method for obtaining pictures by using the camera are as follows:
Firstly, the input picture is converted from RGB space to HSI space:

    Ĩ = T(I)

where I is the input image, Ĩ is the converted image, and T(·) is the color-space conversion function. Then the luminance and chrominance features are calculated separately:

    F1 = (1/N) Σ Ĩ_L(x,y),    F2 = (1/N) Σ Ĩ_H(x,y),    F3 = (1/N) Σ Ĩ_S(x,y)

where F1 denotes the luminance feature, F2 and F3 denote the chrominance features, Ĩ_L is the luminance (intensity) channel of the image, Ĩ_H and Ĩ_S are its two chrominance channels, and N is the number of pixels in the image.
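As an illustrative sketch of step S1 (Python with NumPy; the function names and the standard RGB-to-HSI formulas are assumptions of this sketch, since the patent does not fix an implementation):

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to H, S, I channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-12
    i = (r + g + b) / 3.0                                  # intensity (luminance channel)
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)  # hue scaled to [0, 1]
    return h, s, i

def luminance_chroma_features(img):
    """F1 (luminance) and F2, F3 (chrominance): per-channel means over all N pixels."""
    h, s, i = rgb_to_hsi(img)
    return i.mean(), h.mean(), s.mean()
```

For a uniform mid-grey image, for instance, the luminance feature is 0.5 and the saturation-based chrominance feature is 0.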
Next, the variance of the noise in the image is estimated in order to estimate its noise level. Let the clean, noise-free image be x and the corresponding noisy image be y. Under the scale-invariance model, the kurtosis of y is related to the kurtosis of x and the noise variance by

    κ_y = 3 + (κ_x − 3) · (σ_x² / (σ_x² + σ_n²))²

where κ_y denotes the kurtosis of y, κ_x denotes the kurtosis of x (for a generalized Gaussian, determined by the shape parameter α of the distribution of x), σ_x² is the variance of x, and σ_n² is the variance of the noise. The noise variance and the kurtosis of x can then be solved by minimizing

    (κ̂_x, σ̂_n²) = argmin Σ_i [ 3 + (κ̂_x − 3) · ((σ_i² − σ̂_n²) / σ_i²)² − κ_i ]²

where κ̂_x is the estimate of the kurtosis of x, σ̂_n² is the estimate of the noise variance, and σ_i² and κ_i are the variance and kurtosis of y_i, the image obtained by filtering y with the i-th DCT filter. The estimate σ̂_n² indicates the level of image noise.
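The band-statistics fitting above can be sketched as follows (Python with NumPy; the grid search over candidate noise variances, the 3-point DCT filter bank and all names are illustrative assumptions, not the patent's exact procedure):

```python
import numpy as np

def dct_basis(n=3):
    """Orthonormal n-point DCT-II basis; outer products of rows give 2-D DCT filters."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    B[0] /= np.sqrt(n)
    B[1:] *= np.sqrt(2.0 / n)
    return B

def estimate_noise_from_band_stats(variances, kurtoses, candidates):
    """Choose the noise variance minimizing the kurtosis scale-invariance residual.

    variances, kurtoses: per-band variance sigma_i^2 and kurtosis kappa_i of the
    DCT-filtered noisy image; candidates: trial values of sigma_n^2.
    """
    best, best_err = None, np.inf
    for s2 in candidates:
        w = ((variances - s2) / variances) ** 2
        if np.dot(w, w) < 1e-12:
            continue
        # least-squares fit of kappa_i ~ 3 + (kappa_x - 3) * w_i for this candidate
        kx3 = np.dot(w, kurtoses - 3.0) / np.dot(w, w)
        err = np.sum((3.0 + kx3 * w - kurtoses) ** 2)
        if err < best_err:
            best, best_err = s2, err
    return best
```

When the per-band statistics follow the model exactly, the search recovers the true noise variance.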
Structural features of the image are then extracted and used to estimate its blurriness; the gradient intensity and phase consistency of the image are computed separately to capture its internal structure. The gradient intensity is calculated by convolving the image with the Sobel operator to obtain gradient maps in the horizontal and vertical directions:

    G_x = S_x * I,    G_y = S_y * I

where G_x denotes the gradient in the horizontal direction, G_y denotes the gradient in the vertical direction, and S_x, S_y are the horizontal and vertical Sobel kernels. The gradient intensity map is then calculated as

    GM = sqrt(G_x² + G_y²)

where GM denotes the gradient intensity map.
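A minimal NumPy sketch of the gradient-intensity map (the zero-padded 'same' cross-correlation is an implementation choice; the sign flip relative to true convolution does not affect the magnitude):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2_same(img, k):
    """Zero-padded 'same' 3x3 cross-correlation."""
    H, W = img.shape
    p = np.pad(img, 1)
    out = np.zeros((H, W))
    for a in range(3):
        for b in range(3):
            out += k[a, b] * p[a:a + H, b:b + W]
    return out

def gradient_magnitude(img):
    """GM = sqrt(Gx^2 + Gy^2) with Sobel gradients."""
    gx = filter2_same(img, SOBEL_X)
    gy = filter2_same(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

On a vertical step edge the map peaks along the edge and is zero in flat regions.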
The phase consistency of the image is extracted with Kovesi's method. Given a one-dimensional signal s, let M_n^e and M_n^o denote the even-symmetric and odd-symmetric filters at scale n; they form a quadrature pair, approximated here by log-Gabor filters. Filtering the signal with this pair yields the response at position j:

    [e_n(j), o_n(j)] = [s * M_n^e (j), s * M_n^o (j)]

whose amplitude is defined as

    A_n(j) = sqrt(e_n(j)² + o_n(j)²).

Let F(j) = Σ_n e_n(j) and H(j) = Σ_n o_n(j). The phase consistency PC is calculated as

    PC(j) = sqrt(F(j)² + H(j)²) / (ε + Σ_n A_n(j))

where ε is a small positive number that prevents the denominator from being 0. The one-dimensional computation is generalized to two dimensions, defined as

    PC(j) = Σ_o sqrt(F_o(j)² + H_o(j)²) / (ε + Σ_o Σ_n A_{n,o}(j))

where o represents the index of each orientation.
Fusing the gradient intensity map and the phase consistency map to obtain a local structure diagram of the image:
LS(i,j)=max{GM(i,j),PC(i,j)}
where LS denotes the local structure map, (i, j) denotes a pixel position, and max denotes the maximum-value operation. The local structure map is pooled to obtain an estimate of the blurriness of the image:

    s = (1/M) Σ_{(i,j)∈Ω} LS(i, j)

where s denotes the blurriness of the image, Ω is the set of the largest 20% of the values in LS, and M is the number of elements in Ω.
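The fusion and pooling steps can be sketched as below (function names are illustrative):

```python
import numpy as np

def local_structure(gm, pc):
    """LS(i, j) = max{GM(i, j), PC(i, j)}."""
    return np.maximum(gm, pc)

def blur_score(ls, top_frac=0.2):
    """Mean of the largest 20% of local-structure values (the set Omega)."""
    v = np.sort(ls.ravel())[::-1]          # descending
    m = max(1, int(np.ceil(top_frac * v.size)))
    return v[:m].mean()
```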
The contrast of the image is estimated by computing, pixel by pixel, the difference between the central pixel and its neighboring pixels; the larger the difference, the higher the contrast. Suppose the current pixel value is a, the value of the pixel directly above it is a1, below it a2, to its left a3 and to its right a4. The difference between the current pixel and its neighboring pixel values can be defined, for example, as the mean absolute difference

    d = (1/4) · (|a − a1| + |a − a2| + |a − a3| + |a − a4|)

where d denotes the difference between the pixel values.
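A vectorized sketch of this step (the mean-absolute-difference form and edge-replicating padding are assumptions of this sketch, as the exact formula is not reproduced in the source):

```python
import numpy as np

def contrast_map(img):
    """Per-pixel mean absolute difference to the four neighbours (a hypothetical
    concrete form of d)."""
    H, W = img.shape
    p = np.pad(img, 1, mode='edge')        # replicate borders
    up, down = p[0:H, 1:W + 1], p[2:H + 2, 1:W + 1]
    left, right = p[1:H + 1, 0:W], p[1:H + 1, 2:W + 2]
    return (np.abs(img - up) + np.abs(img - down)
            + np.abs(img - left) + np.abs(img - right)) / 4.0
```

A checkerboard yields the maximum interior response, a flat image yields zero.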
The naturalness of the image is estimated by locally normalizing the image with its local mean and standard deviation. The normalized coefficients of the image are computed as

    Î(x, y) = (I(x, y) − μ(x, y)) / (σ(x, y) + C)

where I is the input image, (x, y) denotes a position, Î denotes the normalized coefficient image, μ(x, y) and σ(x, y) are the mean and standard deviation of the local neighborhood centered at (x, y), and C is a small constant that avoids division by zero. The locally normalized image is then fitted with a zero-mean generalized Gaussian distribution, whose probability density is defined as

    f(x; α, β) = (α / (2β · Γ(1/α))) · exp(−(|x| / β)^α)

where Γ(·) is the gamma function, defined as

    Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt.

Here α is a shape parameter describing the shape of the distribution and β is a scale parameter related to the standard deviation; together these two parameters describe the distribution, and they are extracted to describe the naturalness of the image.
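This step can be sketched with a uniform local window and a moment-matching GGD fit (the 7x7 box window, the search grid and the moment-matching estimator are assumptions of this sketch; BRISQUE-style methods typically use a Gaussian window instead):

```python
import numpy as np
from math import gamma

def box_filter(img, win=7):
    """Uniform local average with reflective padding."""
    H, W = img.shape
    r = win // 2
    p = np.pad(img, r, mode='reflect')
    out = np.zeros((H, W))
    for a in range(win):
        for b in range(win):
            out += p[a:a + H, b:b + W]
    return out / win ** 2

def normalized_coefficients(img, win=7, C=1e-3):
    """I_hat = (I - mu) / (sigma + C) with local mean mu and local std sigma."""
    mu = box_filter(img, win)
    sigma = np.sqrt(np.maximum(box_filter(img ** 2, win) - mu ** 2, 0.0))
    return (img - mu) / (sigma + C)

def fit_ggd(coeffs):
    """Moment-matching fit of a zero-mean generalized Gaussian; returns (alpha, beta)."""
    x = coeffs.ravel()
    sigma2 = np.mean(x ** 2)
    rho = np.mean(np.abs(x)) ** 2 / sigma2          # (E|x|)^2 / E[x^2]
    alphas = np.arange(0.2, 6.0, 0.001)
    r = np.array([gamma(2 / a) ** 2 / (gamma(1 / a) * gamma(3 / a)) for a in alphas])
    alpha = alphas[np.argmin(np.abs(r - rho))]
    beta = np.sqrt(sigma2 * gamma(1 / alpha) / gamma(3 / alpha))
    return alpha, beta
```

For Gaussian-distributed coefficients the fitted shape parameter is close to α = 2, the Gaussian special case of the GGD.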
Visual-perception statistical features of the image are extracted as follows. For the image I, image blocks are first extracted and sparsely represented. Extracting the block at position k can be expressed as

    x_k = R_k(I)

where R_k(·) is the block-extraction operator and k = 1, 2, 3, ... indexes the block positions. For an image block x_k, its sparse representation over a dictionary D means solving for a sparse vector α_k (most of whose elements are 0 or close to 0) that satisfies

    min_{α_k} ||x_k − D·α_k||₂² + λ||α_k||_p

where the first term is a fidelity term, the second term is a sparsity constraint, and λ is a constant that balances the weight of the two terms. The value of p is 0 or 1: with p = 0, the constraint counts the number of non-zero coefficients, which matches the sparsity we require; however, the 0-norm optimization problem is non-convex and hard to solve, so the alternative is to set p = 1, which turns the above into a convex optimization problem. Solving it with the orthogonal matching pursuit (OMP) algorithm yields the sparse representation coefficients α̂_k of the image block x_k, so that x_k can be sparsely represented as x̂_k = D·α̂_k. The sparse representation I′ of the entire image is then obtained by placing the reconstructed blocks x̂_k back at their positions and averaging overlapping pixels.
wherein I' represents a sparse representation of image I. Image distortion changes the way the brain understands the image, or the way the image is sparsely represented, so the difference between the image and its sparse representation is used to describe the change in image quality. First, the residual error between the input image and the sparse representation is calculated:
PR(x,y)=I(x,y)-I′(x,y)
where PR is the representation residual, I is the input image (or image block), and I′ is its sparse representation. Statistical features of the residual are extracted to pool it: the mean, variance, kurtosis, skewness and information entropy. Let ε(·) denote the mean-taking operation; then the mean, variance, kurtosis and skewness of the residual can be calculated as

    mPR = ε(PR)
    vPR = ε((PR − mPR)²)
    kPR = ε((PR − mPR)⁴) / vPR²
    sPR = ε((PR − mPR)³) / vPR^(3/2)

The information entropy is calculated as

    E = −Σ_i p_i · log₂ p_i

where p_i is the probability of the i-th gray level.
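The residual pooling statistics can be sketched as below (the 32-bin histogram used for the entropy is an assumption of this sketch; the patent does not fix a bin count):

```python
import numpy as np

def residual_stats(pr, bins=32):
    """Mean, variance, kurtosis, skewness and entropy of the representation residual PR."""
    mu = pr.mean()
    var = pr.var()
    sd = np.sqrt(var) + 1e-12             # guard against a zero residual
    skew = np.mean((pr - mu) ** 3) / sd ** 3
    kurt = np.mean((pr - mu) ** 4) / sd ** 4
    hist, _ = np.histogram(pr, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # 0 * log(0) taken as 0
    entropy = -np.sum(p * np.log2(p))
    return mu, var, kurt, skew, entropy
```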
Finally, a quality-prediction model is trained: given a set of distorted images, the above features are extracted from each image, the extracted features and the corresponding subjective scores are input into a support vector regression model to train a quality model, and the trained model is used to predict the quality of other images.
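The training step can be sketched with scikit-learn's SVR on synthetic stand-in data (the kernel, C and epsilon values are illustrative assumptions, as the patent does not specify them; real use would replace X and y with the extracted per-image features and subjective scores):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))                   # stand-in feature vectors (5 features/image)
w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])       # hypothetical feature-to-score weights
y = X @ w                                       # stand-in subjective quality scores
model = SVR(kernel='linear', C=10.0, epsilon=0.1).fit(X, y)
pred = model.predict(X)                         # predicted quality scores
```

On this linearly generated data the trained model tracks the scores closely; in practice an RBF kernel with cross-validated parameters is a common choice.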
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.
Claims (9)
1. The quality evaluation method for obtaining the picture by facing the camera is characterized by comprising the following steps:
s1, extracting brightness and chromaticity characteristics, and estimating brightness and chromaticity of an input picture;
s2, extracting characteristics of image noise, and estimating noise degree of an input image; modeling the relation between the kurtosis of the clean image and the corresponding noise image by utilizing the principle of natural image scale invariance, and solving the variance of the image noise;
s3, extracting structural features of the image, and estimating the blurring degree of the input picture; the extracted structural features of the image comprise gradient strength and phase consistency of the image, and then the local structural features of the image are described by fusing the gradient strength and the phase consistency, so that the blurring degree of the image is quantized;
S4, extracting contrast features, and estimating the contrast of the input picture; the difference between the center pixel and its surrounding pixels is calculated to measure the contrast of the image, and the larger the difference, the higher the contrast;
s5, extracting statistical characteristics of the picture normalization coefficient, and estimating naturalness of the picture; carrying out local normalization on the image by using local mean and variance of the image, then fitting the obtained normalized coefficient by using generalized Gaussian distribution of 0 mean, and extracting fitting parameters to estimate the naturalness of the image;
s6, extracting visual perception characteristics of the picture, and estimating visual perception changes of the picture; performing sparse representation on an image by taking a block as a unit, and then calculating the difference between the image and the sparse representation, and solving the mean value, variance, kurtosis, skewness and information entropy of a representation residual error;
and S7, learning a mapping model of the image features extracted in the steps S1-S6 to the image quality on the training set by using a support vector regression method, and predicting the quality of the image.
2. The method for evaluating the quality of a camera-oriented acquired picture according to claim 1, wherein in the step S1, an input picture is converted from an RGB space to an HSI space, and the average value of each channel of the image is extracted as the luminance feature and the chrominance feature of the image;
the step S1 specifically comprises: firstly, converting the input picture from RGB space to HSI space:

    Ĩ = T(I)

wherein I is the input image, Ĩ is the converted image, and T(·) is the color-space conversion function; then calculating the luminance and chrominance features separately:

    F1 = (1/N) Σ Ĩ_L(x,y),    F2 = (1/N) Σ Ĩ_H(x,y),    F3 = (1/N) Σ Ĩ_S(x,y)

wherein F1 is the luminance feature, F2 and F3 are the chrominance features, Ĩ_L is the luminance channel of the image, Ĩ_H and Ĩ_S are its two chrominance channels, and N is the number of pixels.
3. The method for evaluating the quality of a picture acquired by a camera according to claim 1, wherein the step S2 specifically comprises: estimating the variance of noise in the image, thereby estimating the noise level of the image; assuming that the clean, noise-free image is x and the corresponding noisy image is y, the relation between the kurtosis of y, the kurtosis of x and the noise variance is expressed as:

    κ_y = 3 + (κ_x − 3) · (σ_x² / (σ_x² + σ_n²))²

wherein κ_y denotes the kurtosis of y, κ_x denotes the kurtosis of x, α is the shape parameter of the distribution of x (which determines κ_x), σ_x² is the variance of x, and σ_n² is the variance of the noise; the noise variance and the estimate of the kurtosis of x are solved by minimizing:

    (κ̂_x, σ̂_n²) = argmin Σ_i [ 3 + (κ̂_x − 3) · ((σ_i² − σ̂_n²) / σ_i²)² − κ_i ]²

wherein σ_i² and κ_i are the variance and kurtosis of y_i, the image obtained by filtering y with the i-th DCT filter.
4. The method for evaluating the quality of a picture acquired by a camera according to claim 1, wherein,
the step S3 specifically comprises: extracting structural features of the image, calculating the gradient intensity and phase consistency of the image separately to extract its internal structure, and estimating the blurriness of the image from the structural features; the gradient intensity is calculated by convolving the image with the Sobel operator to obtain gradient maps in the horizontal and vertical directions:

    G_x = S_x * I,    G_y = S_y * I

wherein I is the input image, G_x denotes the gradient in the horizontal direction, G_y denotes the gradient in the vertical direction, and S_x, S_y are the horizontal and vertical Sobel kernels; the gradient intensity map is then calculated as:

    GM = sqrt(G_x² + G_y²)

wherein GM denotes the gradient intensity map;

extracting the phase consistency of the image with Kovesi's method: given a one-dimensional signal s, define M_n^e and M_n^o as the even-symmetric and odd-symmetric filters at scale n, forming a quadrature pair approximated by log-Gabor filters; filtering the signal with this pair yields the response at position j, [e_n(j), o_n(j)], with amplitude A_n(j) = sqrt(e_n(j)² + o_n(j)²); let F(j) = Σ_n e_n(j) and H(j) = Σ_n o_n(j); the phase consistency PC is calculated as:

    PC(j) = sqrt(F(j)² + H(j)²) / (ε + Σ_n A_n(j))

wherein ε is a small positive number preventing the denominator from being 0; the one-dimensional computation is generalized to two dimensions, defined as:

    PC(j) = Σ_o sqrt(F_o(j)² + H_o(j)²) / (ε + Σ_o Σ_n A_{n,o}(j))
wherein o represents the index of each direction;
fusing the gradient strength map and the phase consistency map to obtain a local structure map of the image:

LS(i, j) = max{GM(i, j), PC(i, j)}

wherein LS denotes the local structure map, (i, j) denotes a pixel position, and max denotes the maximum-value operation; the local structure map is pooled to obtain the estimate of the blurring degree of the image:

s = (1 / M) Σ_{(i,j)∈Ω} LS(i, j)

wherein s denotes the blurring degree of the image, Ω is the set of the largest 20% of values in LS, and M is the number of elements in Ω.
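The fusion and pooling steps of claim 4 reduce to a few lines; here GM and PC are taken as precomputed maps, since computing phase consistency itself requires a log-Gabor filter bank and is omitted from this sketch:

```python
import numpy as np

def blur_degree(gm, pc, top_frac=0.20):
    """LS = pixelwise max of the gradient-strength and phase-consistency
    maps; the blur estimate s is the mean of the largest top_frac of LS."""
    ls = np.maximum(gm, pc)                     # local structure map LS
    flat = np.sort(ls.ravel())[::-1]            # values in descending order
    m = max(1, int(round(top_frac * flat.size)))  # |Omega|
    return flat[:m].mean()                      # mean of the top 20%
```

Pooling only the largest responses keeps the estimate driven by the sharpest structures, which is what degrades first under blur.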
5. The method for evaluating the quality of a picture acquired by a camera according to claim 1, wherein the step S4 specifically includes: calculating the difference between the central pixel and its neighboring pixels, a larger difference indicating a higher image contrast; assuming the current pixel value is a, the adjacent pixel value above it is a_1, the pixel value below is a_2, the pixel value to the left is a_3, and the pixel value to the right is a_4, the difference between the current pixel and the surrounding pixel values is defined as:

wherein d denotes the difference between the pixel values.
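The definition of d is not reproduced above; one plausible reading, summing the absolute differences to the four neighbors, looks like this (the exact combination in the claim may differ):

```python
import numpy as np

def contrast_map(img):
    """Per-pixel difference d between each pixel a and its four neighbors
    a_1 (above), a_2 (below), a_3 (left), a_4 (right), combined here as a
    sum of absolute differences; borders use edge replication."""
    a = img.astype(float)
    h, w = a.shape
    p = np.pad(a, 1, mode="edge")
    up, down = p[0:h, 1:w + 1], p[2:h + 2, 1:w + 1]
    left, right = p[1:h + 1, 0:w], p[1:h + 1, 2:w + 2]
    return sum(np.abs(a - nb) for nb in (up, down, left, right))
```

Flat regions map to 0 and the value grows with local intensity variation, matching the claim's "larger difference, higher contrast" reading.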
6. The method for evaluating the quality of a picture acquired by a camera according to claim 1, wherein the step S5 specifically includes: locally normalizing the image using the local mean and variance to obtain a normalized image, that is, calculating the normalized coefficients of the image:

Î(x, y) = (I(x, y) − μ(x, y)) / (σ(x, y) + C)

wherein I is the input image, (x, y) denotes the position information, Î denotes the normalized coefficient image, μ(x, y) and σ(x, y) are the local mean and standard deviation centered at (x, y), and C is a small constant that keeps the denominator away from 0; the locally normalized image is then fitted with a zero-mean generalized Gaussian distribution, whose probability density is defined as:

f(x; α, β) = α / (2β Γ(1/α)) · exp(−(|x| / β)^α)

wherein Γ(·) is the gamma function, defined as:

Γ(a) = ∫_0^∞ t^(a−1) e^(−t) dt, a > 0

wherein α is a shape parameter describing the shape of the distribution and β is a scale parameter tied to the standard deviation; together these two parameters describe the distribution and are extracted to characterize the image.
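A minimal sketch of the local normalization step, using a uniform window for the local statistics (a Gaussian-weighted window, as in BRISQUE-style models, is the more common choice) and a stabilizing constant C = 1:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mscn(img, ksize=7, C=1.0):
    """Normalized coefficients (I - mu) / (sigma + C), where mu and sigma
    are the local mean and standard deviation over a ksize x ksize window
    centered at each pixel; borders use edge replication."""
    a = img.astype(float)
    p = np.pad(a, ksize // 2, mode="edge")
    win = sliding_window_view(p, (ksize, ksize))  # one window per pixel
    mu = win.mean(axis=(-1, -2))
    sigma = win.std(axis=(-1, -2))
    return (a - mu) / (sigma + C)
```

The GGD shape and scale parameters (α, β) would then be fitted to the histogram of these coefficients, e.g. by moment matching.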
7. The method for evaluating the quality of a picture acquired by a camera according to claim 1, wherein the step S7 specifically includes: executing steps S1-S6 on each image of a set of distorted images to extract the corresponding features, inputting the extracted features and the corresponding subjective scores into a support vector regression model, training a mapping model from image features to image quality, and predicting image quality with the trained model.
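A sketch of the training step using scikit-learn's SVR; the library, kernel, and hyperparameter values are assumptions, since the claim only requires a support vector regression model:

```python
import numpy as np
from sklearn.svm import SVR

def train_quality_model(features, mos):
    """features: (n_images, n_features) matrix built from steps S1-S6;
    mos: one subjective quality score per image. Returns the fitted
    feature-to-quality mapping model."""
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    model.fit(features, mos)
    return model

def predict_quality(model, features):
    """Predict quality scores for new images from their features."""
    return model.predict(features)
```

In practice the features would be standardized before fitting and the SVR hyperparameters chosen by cross-validation on the subjective-score database.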
8. A quality evaluation device for pictures acquired by a camera, comprising a computer-readable storage medium and a processor, the computer-readable storage medium storing an executable program, wherein the executable program, when executed by the processor, implements the quality evaluation method for pictures acquired by a camera according to any one of claims 1 to 7.
9. A computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the quality evaluation method for pictures acquired by a camera according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010112925.0A CN111354048B (en) | 2020-02-24 | 2020-02-24 | Quality evaluation method and device for obtaining pictures by facing camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354048A CN111354048A (en) | 2020-06-30 |
CN111354048B true CN111354048B (en) | 2023-06-20 |
Family
ID=71197156
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112419300A (en) * | 2020-12-04 | 2021-02-26 | 清华大学深圳国际研究生院 | Underwater image quality evaluation method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750695B (en) * | 2012-06-04 | 2015-04-15 | 清华大学 | Machine learning-based stereoscopic image quality objective assessment method |
CN105913413B (en) * | 2016-03-31 | 2019-02-22 | 宁波大学 | A kind of color image quality method for objectively evaluating based on online manifold learning |
CN107371015A (en) * | 2017-07-21 | 2017-11-21 | 华侨大学 | One kind is without with reference to contrast modified-image quality evaluating method |
CN109255358B (en) * | 2018-08-06 | 2021-03-26 | 浙江大学 | 3D image quality evaluation method based on visual saliency and depth map |
Non-Patent Citations (1)
Title |
---|
刘玉涛 (Liu Yutao), "Research on Image Quality Evaluation Methods Based on Visual Perception and Statistics", 2020, full text. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Structural similarity based image quality assessment | |
Gu et al. | Learning a no-reference quality assessment model of enhanced images with big data | |
Shen et al. | Hybrid no-reference natural image quality assessment of noisy, blurry, JPEG2000, and JPEG images | |
Xue et al. | Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features | |
Mohammadi et al. | Subjective and objective quality assessment of image: A survey | |
Li et al. | Content-partitioned structural similarity index for image quality assessment | |
Ciancio et al. | No-reference blur assessment of digital pictures based on multifeature classifiers | |
Ma et al. | Reduced-reference image quality assessment in reorganized DCT domain | |
CN111932532B (en) | Method for evaluating capsule endoscope without reference image, electronic device, and medium | |
Fan et al. | No reference image quality assessment based on multi-expert convolutional neural networks | |
CN109978854B (en) | Screen content image quality evaluation method based on edge and structural features | |
CN108830823B (en) | Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis | |
Attar et al. | Image quality assessment using edge based features | |
Morzelona | Human visual system quality assessment in the images using the IQA model integrated with automated machine learning model | |
CN110910347B (en) | Tone mapping image non-reference quality evaluation method based on image segmentation | |
CN111105357A (en) | Distortion removing method and device for distorted image and electronic equipment | |
Feng et al. | Low-light image enhancement algorithm based on an atmospheric physical model | |
Shi et al. | SISRSet: Single image super-resolution subjective evaluation test and objective quality assessment | |
CN111354048B (en) | Quality evaluation method and device for obtaining pictures by facing camera | |
CN108257117B (en) | Image exposure evaluation method and device | |
Gao et al. | A content-based image quality metric | |
Jaiswal et al. | A no-reference contrast assessment index based on foreground and background | |
Sun et al. | No-reference image quality assessment through sift intensity | |
CN113409248A (en) | No-reference quality evaluation method for night image | |
Zhang et al. | Local binary pattern statistics feature for reduced reference image quality assessment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||