CN113705587A - Image quality scoring method, device, storage medium and electronic equipment

Info

Publication number
CN113705587A
Authority
CN
China
Prior art keywords
feature
target
dimension
image
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111168826.5A
Other languages
Chinese (zh)
Inventor
贺沁雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202111168826.5A
Publication of CN113705587A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image quality scoring method and device, a storage medium, and electronic equipment. The method comprises the following steps: acquiring a target image to be scored; extracting a deep learning feature of the target image by using a first feature extraction model; acquiring a target feature of the target image, wherein the target feature comprises at least one of a color feature, a texture feature and a gradient feature of the target image; splicing the deep learning feature and the target feature into a combined feature; and inputting the combined feature into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model with training samples, and the training samples comprise a plurality of training images and the quality score of each training image. The invention solves the technical problem of inaccurate image quality scoring.

Description

Image quality scoring method, device, storage medium and electronic equipment
Technical Field
The invention relates to the field of computers, in particular to an image quality scoring method, an image quality scoring device, a storage medium and electronic equipment.
Background
In the prior art, when the image quality of an image is scored, a scoring model is generally used to extract a deep learning feature of the image, and then the score of the image is determined according to the deep learning feature.
However, deep learning features are generally learned for tasks such as image recognition and classification, and cannot express well the information that affects image quality, such as color and exposure. A quality score determined from deep learning features alone is therefore not accurate.
Disclosure of Invention
The embodiment of the invention provides an image quality scoring method, an image quality scoring device, a storage medium and electronic equipment, which are used for at least solving the technical problem of inaccurate quality scoring of images.
According to an aspect of an embodiment of the present invention, there is provided an image quality scoring method including: acquiring a target image to be scored; extracting the deep learning characteristics of the target image by using a first characteristic extraction model; acquiring a target feature of the target image, wherein the target feature comprises at least one of a color feature, a texture feature and a gradient feature of the target image; splicing the deep learning feature and the target feature into a combined feature; and inputting the combined features into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
According to another aspect of the embodiments of the present invention, there is provided an image quality scoring apparatus including: the first acquisition module is used for acquiring a target image to be scored; the extraction module is used for extracting the deep learning characteristics of the target image by using a first characteristic extraction model; a second obtaining module, configured to obtain a target feature of the target image, where the target feature includes at least one of a color feature, a texture feature, and a gradient feature of the target image; the splicing module is used for splicing the deep learning characteristic and the target characteristic into a combined characteristic; and the input module is used for inputting the combined features into a target scoring model and scoring the quality of the target image by the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
As an optional example, the second obtaining module includes: a first acquiring unit configured to acquire a color histogram of the target image when the target feature is the color feature.
As an optional example, the first obtaining unit includes: a dividing subunit, configured to divide a minimum color value to a maximum color value of a pixel of the target image into different color intervals, where the minimum color value is a minimum value of color values of the pixel in the target image, and the maximum color value is a maximum value of color values of the pixel in the target image; and the counting subunit is used for counting the number of the pixel points in each color interval to obtain the color histogram.
As an optional example, the second obtaining module includes: the second obtaining unit is used for obtaining a gray level co-occurrence matrix or Tamura texture features or autoregressive texture features of the target image; a first determining unit, configured to use the gray level co-occurrence matrix or the Tamura texture feature or the autoregressive texture feature as the target feature of the target image.
As an optional example, the second obtaining module includes: a third obtaining unit, configured to obtain a Brenner gradient feature or a laplacian operator feature of the target image; a second determining unit, configured to use the Brenner gradient feature or the laplacian operator feature as the target feature of the target image.
As an optional example, the apparatus further includes: and the processing module is used for executing dimensionality reduction operation on the target feature before the deep learning feature and the target feature are spliced into a combined feature.
As an optional example, the processing module includes: a first dimension reduction unit configured to, in a case where the target feature includes the color feature, reduce the target feature to a first dimension of the color feature or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and a third dimension of the gradient feature; or a second dimension reduction unit configured to, in a case where the target feature includes the texture feature, reduce the target feature to a second dimension of the texture feature or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature; or a third dimension reduction unit configured to, in a case where the target feature includes the gradient feature, reduce the target feature to a third dimension of the gradient feature or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature.
According to still another aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is executed by a processor to perform the above-mentioned image quality scoring method.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the image quality scoring method by the computer program.
In the embodiment of the invention, a target image to be scored is acquired; the deep learning feature of the target image is extracted by using a first feature extraction model; a target feature of the target image is acquired, wherein the target feature comprises at least one of a color feature, a texture feature and a gradient feature of the target image; the deep learning feature and the target feature are spliced into a combined feature; and the combined feature is input into a target scoring model, which scores the quality of the target image, wherein the target scoring model is obtained by training an original scoring model with training samples, and the training samples comprise a plurality of training images and the quality score of each training image. In this method, when the quality score of the target image is determined, not only the deep learning feature of the target image is extracted, but also at least one of the color feature, texture feature and gradient feature of the target image; the deep learning feature and the extracted feature(s) are spliced to obtain the combined feature, and the quality score of the target image is finally determined according to the combined feature. This improves the accuracy of scoring image quality and thereby solves the technical problem of inaccurate image quality scoring.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of an alternative image quality scoring method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of model training for an alternative image quality scoring method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of combined feature acquisition for an alternative image quality scoring method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an alternative image quality scoring apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to a first aspect of the embodiments of the present invention, there is provided an image quality scoring method. Optionally, as shown in Fig. 1, the method includes:
s102, acquiring a target image to be scored;
s104, extracting the deep learning feature of the target image by using the first feature extraction model;
s106, obtaining target characteristics of the target image, wherein the target characteristics comprise at least one of color characteristics, texture characteristics and gradient characteristics of the target image;
s108, splicing the deep learning features and the target features into combined features;
and S110, inputting the combined features into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
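Put together, steps S102 to S110 amount to a short feature-extraction-and-concatenation pipeline. The following Python sketch is illustrative only and is not the patented implementation; the callables extract_deep_features, extract_target_features and scoring_model are hypothetical names for the first feature extraction model, the hand-crafted feature extractors and the trained target scoring model:

```python
import numpy as np

def score_image(image: np.ndarray,
                extract_deep_features,    # first feature extraction model (S104)
                extract_target_features,  # color/texture/gradient extractor (S106)
                scoring_model) -> float:
    """Hypothetical sketch of steps S102-S110; not the patented implementation."""
    deep_feature = extract_deep_features(image)         # S104: deep learning feature
    target_feature = extract_target_features(image)     # S106: target feature
    combined = np.concatenate([deep_feature, target_feature])  # S108: splice end to end
    return float(scoring_model(combined))               # S110: quality score
```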
Alternatively, the above image quality scoring method may be applied to, but is not limited to, any process of evaluating the quality of an image. For example, an image sharing website may evaluate image quality when a user uploads an image and then classify images by their quality scores. As another example, multiple images may be scored and then sorted by quality score to identify high-quality images. These application scenarios are only examples; the embodiment may be applied to scoring the quality of any image.
Optionally, in this embodiment, if the images to be scored include a plurality of images, each image may be taken as a target image to be scored and scored by the method of this embodiment. The first feature extraction model may be a convolutional neural network model that includes convolutional layers with convolution kernels, and the deep learning feature may be obtained by convolving the target image with these convolution kernels.
Optionally, in this embodiment, when the deep learning feature and the target feature are spliced into a combined feature, the two features may be concatenated end to end to obtain a new feature, which is used as the combined feature.
Optionally, the target scoring model in this embodiment may be a model obtained by training an original scoring model in advance using training samples. The training samples may include training images and the quality score of each training image. As shown in Fig. 2, after the deep learning feature and the target feature of a training image are extracted, they are spliced to obtain a combined feature, and the combined feature is input into the original scoring model. The fully connected layer of the original scoring model processes the combined feature and predicts a quality score for the training image. The predicted quality score is compared with the labelled quality score of the training image; if the difference is greater than a first threshold, the scoring result of the original scoring model on this training sample is considered inaccurate, and the model parameters of the original scoring model need to be adjusted. Through multiple iterations over multiple training samples, the model parameters of the original scoring model tend to stabilize, yielding a target scoring model with high accuracy. Once the combined feature is input, the target scoring model can output an accurate quality score.
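As a concrete illustration of this training loop, the sketch below uses PyTorch. The patent does not specify a loss function, optimizer, or feature dimension; the MSE loss, Adam optimizer and 512-dimensional combined feature here are assumptions, and the two-layer fully connected network is only a stand-in for the original scoring model of Fig. 2:

```python
import torch
import torch.nn as nn

# Stand-in for the original scoring model: fully connected layers mapping a
# combined feature (assumed 512-dim) to a single quality score.
scoring_model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(scoring_model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # assumed; the patent only compares predicted vs. labelled scores

def train_step(combined_features: torch.Tensor, quality_scores: torch.Tensor) -> float:
    """One iteration: predict, compare with the labelled score, adjust parameters.

    combined_features: (batch, 512); quality_scores: (batch, 1) labelled scores.
    """
    optimizer.zero_grad()
    predicted = scoring_model(combined_features)   # (batch, 1) predicted scores
    loss = loss_fn(predicted, quality_scores)      # large gap -> larger adjustment
    loss.backward()
    optimizer.step()
    return loss.item()
```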
According to the method, when the quality score of the target image is determined, not only the deep learning feature of the target image is extracted, but also at least one of the color feature, the texture feature and the gradient feature of the target image is extracted, the deep learning feature and at least one of the color feature, the texture feature and the gradient feature are spliced to obtain the combined feature, and finally the quality score of the target image is determined according to the combined feature, so that the purpose of improving the accuracy of the quality score of the image is achieved.
As an alternative example, in the case where the target feature is a color feature, the acquiring the target feature includes:
a color histogram of the target image is obtained.
Optionally, in this embodiment, the target feature may include a color feature. If the target feature includes a color feature, a color histogram of the target image may be obtained when the target feature is acquired. Different color space models may be used when acquiring the color histogram, such as the Hue-Saturation-Value (HSV) color space model or the Hue-Saturation-Intensity (HSI) color space model, and the color histogram of the target image is acquired in the chosen color space.
As an alternative example, the acquiring the color histogram of the target image includes:
dividing the minimum color value to the maximum color value of a pixel of a target image into different color intervals, wherein the minimum color value is the minimum value of the color values of the pixel points in the target image, and the maximum color value is the maximum value of the color values of the pixel points in the target image;
and counting the number of the pixel points in each color interval to obtain a color histogram.
Optionally, in this embodiment, the minimum color value may be the minimum color value of the pixels in the target image, and the maximum color value the maximum color value of those pixels. For example, if the target image has 100 pixels (for example only), the minimum color value is 16, and the maximum color value is 95, the interval [16, 95] may be divided into different color intervals, such as [16, 55] and [56, 95]. The number of pixels falling into each color interval is then counted to obtain the color histogram; for example, the histogram may record that the interval [16, 55] contains 60 pixels and the interval [56, 95] contains 40 pixels.
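The interval counting in this example is essentially a histogram over the occupied color range. A minimal NumPy sketch, assuming a single color channel and the two-bin split used above:

```python
import numpy as np

def color_histogram(channel: np.ndarray, n_bins: int = 2) -> np.ndarray:
    """Count pixels per color interval between the channel's min and max values."""
    lo, hi = int(channel.min()), int(channel.max())   # minimum and maximum color value
    edges = np.linspace(lo, hi + 1, n_bins + 1)       # divide [lo, hi] into intervals
    counts, _ = np.histogram(channel, bins=edges)     # pixels falling in each interval
    return counts

# For the 100-pixel example above, a channel with values in [16, 95] split into
# [16, 55] and [56, 95] could yield counts such as [60, 40].
```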
As an alternative example, in the case that the target feature is a texture feature, the obtaining the target feature includes:
acquiring a gray level co-occurrence matrix or Tamura texture features or autoregressive texture features of a target image;
and taking the gray level co-occurrence matrix or Tamura texture features or autoregressive texture features as target features of the target image.
Optionally, in this embodiment, if the target feature includes a texture feature, when the target feature is obtained, a gray level co-occurrence matrix, a Tamura texture feature, or an autoregressive texture feature of the target image may be obtained.
Optionally, the Tamura texture features described above include six components: coarseness, contrast, directionality, line-likeness, regularity, and roughness.
The gray level co-occurrence matrix (GLCM) described above counts the co-occurring gray-level pairs of pixels in the target image. For example, a value of 1 at GLCM(1, 1) indicates that exactly one pair of horizontally adjacent pixels both have gray level 1, and a value of 2 at GLCM(1, 2) indicates that there are two horizontally adjacent pixel pairs with gray levels 1 and 2. Counting all such pixel pairs generates the gray level co-occurrence matrix.
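A gray level co-occurrence matrix of this kind can be computed with scikit-image, for instance. The sketch below is illustrative; it assumes scikit-image 0.19 or later, where the function is named graycomatrix (older releases spell it greycomatrix):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

# Tiny example image with 4 gray levels (0-3).
image = np.array([[1, 1, 2, 2],
                  [1, 2, 1, 2],
                  [3, 3, 3, 3],
                  [0, 0, 1, 1]], dtype=np.uint8)

# Horizontally adjacent pixel pairs (distance 1, angle 0).
glcm = graycomatrix(image, distances=[1], angles=[0], levels=4)
# glcm[i, j, 0, 0] counts pairs where gray level i sits directly left of level j.

# Scalar statistics of the matrix can then serve as texture features:
contrast = graycoprops(glcm, "contrast")
```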
The autoregressive texture features may be features obtained by identifying the target image using an autoregressive texture model.
As an alternative example, in the case where the target feature is a gradient feature, acquiring the target feature includes:
acquiring Brenner gradient characteristics or laplacian operator characteristics of a target image;
and taking the Brenner gradient characteristic or the laplacian operator characteristic as the target characteristic of the target image.
Alternatively, the Brenner gradient feature described above may be a feature determined using the Brenner gradient function, which sums the squared gray-level differences between pixels two positions apart.
Optionally, the Laplacian operator feature may represent the divergence of the gradient of the target image.
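Both gradient features reduce to a few lines of NumPy/OpenCV. A sketch under the usual definitions (reducing the Laplacian response to its variance is a common convention, not something the patent fixes):

```python
import numpy as np
import cv2  # OpenCV

def brenner_gradient(gray: np.ndarray) -> float:
    """Brenner focus measure: sum of squared differences of pixels two rows apart."""
    g = gray.astype(np.float64)
    return float(np.sum((g[2:, :] - g[:-2, :]) ** 2))

def laplacian_feature(gray: np.ndarray) -> float:
    """Variance of the Laplacian response, a common scalar sharpness feature."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```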
As an optional example, before the splicing the deep learning feature and the target feature into the combined feature, the method further includes:
and performing dimension reduction operation on the target feature.
Optionally, in this embodiment, after the deep learning feature and the target feature are obtained and before they are spliced into the combined feature, the dimensionality of the target feature may be reduced, for example to the same dimension as the deep learning feature.
As an alternative example, the performing the dimension reduction operation on the target feature includes:
in the case that the target feature comprises a color feature, reducing the target feature to a first dimension of the color feature, or reducing the target feature to a target dimension, wherein the target dimension is a dimension different from the first dimension of the color feature, a second dimension of the texture feature, and a third dimension of the gradient feature; or
In the case that the target feature comprises a texture feature, reducing the dimension of the target feature to a second dimension of the texture feature, or reducing the dimension of the target feature to a target dimension, wherein the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and a third dimension of the gradient feature; or
In the case where the target feature comprises a gradient feature, the target feature is reduced to a third dimension of the gradient feature, or the target feature is reduced to a target dimension, wherein the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature.
Optionally, in this embodiment, there may be a plurality of methods for performing dimension reduction on the target feature. The first method can reduce the dimension of the target feature to the same dimension as the deep learning feature, and the second method can adjust the target feature to the target dimension, wherein the target dimension is different from the first dimension of the color feature, the second dimension of the texture feature and the third dimension of the gradient feature.
Alternatively, the present embodiment relates to Image Quality Assessment (IQA), an image processing technique that evaluates the quality of an image or its degree of distortion. Image quality evaluation methods can be classified into Full-Reference (FR), Reduced-Reference (RR), and No-Reference (NR) evaluation. Full-reference evaluation compares the image to be evaluated with a reference image and analyses the distortion of the image to be evaluated. Reduced-reference evaluation uses partial feature information of the reference image for comparison and analysis. No-reference evaluation requires no reference image and evaluates the image independently.
In this embodiment, the quality score may be used to indicate the quality of the image. The higher the score, the higher the quality of the image may be.
In the embodiment, for a target image to be evaluated for quality, the deep learning feature may be used in combination with information on image color, exposure and the like to grade the quality of the target image. In this embodiment, the deep learning features of the image and the features related to image quality such as color, texture, noise, exposure, etc. of the image may be extracted to combine into a new feature, and an image quality score may be obtained according to the new feature by a deep learning method.
For example, the present embodiment may extract features of the image, including color features, texture features, gradient features, and the like, according to the pixel data of the target image.
As shown in Fig. 3, the color feature may be a color histogram computed in different color space models (such as HSV or HSI). The texture feature may be chosen from Tamura texture features, the gray level co-occurrence matrix, an autoregressive texture model, and the like. The gradient feature may be the Brenner gradient, the Laplacian operator, and the like. The color, texture and gradient features can be reduced in dimension through Principal Component Analysis (PCA), a fully connected layer, or adaptive average pooling, which facilitates subsequent analysis and processing of the features.
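For illustration, the three reduction options just mentioned can be sketched as follows; the sizes (a 256-dimensional color feature reduced to 64 dimensions over 200 training images) are assumptions, not values from the patent:

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

color_feats = torch.randn(200, 256)  # hypothetical color features, one per image

# Option 1: PCA fitted on the training-set features.
reduced_pca = PCA(n_components=64).fit_transform(color_feats.numpy())          # (200, 64)

# Option 2: a learned fully connected projection.
reduced_fc = nn.Linear(256, 64)(color_feats)                                   # (200, 64)

# Option 3: adaptive average pooling to a fixed output length.
reduced_pool = nn.AdaptiveAvgPool1d(64)(color_feats.unsqueeze(1)).squeeze(1)   # (200, 64)
```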
This embodiment can extract the deep learning feature of the image using a pre-trained network such as ResNet or VGG. The deep learning feature is combined with the color, texture and gradient features to obtain the combined feature, and the mapping between this new feature and the image quality score is learned during training.
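A minimal sketch of that extraction-and-splicing step with torchvision follows; the ResNet-18 backbone, the weights API (torchvision 0.13+) and the input size are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # drop the classifier head; keep the 512-dim feature
backbone.eval()

@torch.no_grad()
def combined_feature(image_tensor: torch.Tensor, target_feature: torch.Tensor) -> torch.Tensor:
    """image_tensor: (1, 3, 224, 224), normalized as the backbone expects;
    target_feature: (1, d) reduced color/texture/gradient feature."""
    deep_feature = backbone(image_tensor)                     # (1, 512)
    return torch.cat([deep_feature, target_feature], dim=1)   # spliced end to end
```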
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present application, there is also provided an image quality scoring apparatus, as shown in fig. 4, including:
a first obtaining module 402, configured to obtain a target image to be scored;
an extraction module 404, configured to extract a deep learning feature of the target image using a first feature extraction model;
a second obtaining module 406, configured to obtain a target feature of the target image, where the target feature includes at least one of a color feature, a texture feature, and a gradient feature of the target image;
a stitching module 408 for stitching the deep learning feature and the target feature into a combined feature;
the input module 410 is configured to input the combined features into a target scoring model, and perform quality scoring on the target image by using the target scoring model, where the target scoring model is a model obtained by training an original scoring model using a training sample, and the training sample includes a plurality of training images and a quality score of each training image.
Alternatively, the above image quality scoring method may be applied to, but is not limited to, any process of evaluating the quality of an image. For example, an image sharing website may evaluate image quality when a user uploads an image and then classify images by their quality scores. As another example, multiple images may be scored and then sorted by quality score to identify high-quality images. These application scenarios are only examples; the embodiment may be applied to scoring the quality of any image.
Optionally, in this embodiment, if the images to be scored include a plurality of images, each image may be taken as a target image to be scored and scored by the method of this embodiment. The first feature extraction model may be a convolutional neural network model that includes convolutional layers with convolution kernels, and the deep learning feature may be obtained by convolving the target image with these convolution kernels.
Optionally, in this embodiment, when the deep learning feature and the target feature are spliced into a combined feature, the two features may be concatenated end to end to obtain a new feature, which is used as the combined feature.
Optionally, the target scoring model in this embodiment may be a model obtained by training an original scoring model in advance using training samples. The training samples may include training images and the quality score of each training image. As shown in Fig. 2, after the deep learning feature and the target feature of a training image are extracted, they are spliced to obtain a combined feature, and the combined feature is input into the original scoring model. The fully connected layer of the original scoring model processes the combined feature and predicts a quality score for the training image. The predicted quality score is compared with the labelled quality score of the training image; if the difference is greater than a first threshold, the scoring result of the original scoring model on this training sample is considered inaccurate, and the model parameters of the original scoring model need to be adjusted. Through multiple iterations over multiple training samples, the model parameters of the original scoring model tend to stabilize, yielding a target scoring model with high accuracy. Once the combined feature is input, the target scoring model can output an accurate quality score.
According to the method, when the quality score of the target image is determined, not only the deep learning feature of the target image is extracted, but also at least one of the color feature, the texture feature and the gradient feature of the target image is extracted, the deep learning feature and at least one of the color feature, the texture feature and the gradient feature are spliced to obtain the combined feature, and finally the quality score of the target image is determined according to the combined feature, so that the purpose of improving the accuracy of the quality score of the image is achieved.
As an optional example, the second obtaining module includes:
the first acquisition unit is used for acquiring a color histogram of the target image when the target feature is a color feature.
Optionally, in this embodiment, the target feature may include a color feature. If the target feature includes a color feature, a color histogram of the target image may be obtained when the target feature is acquired. Different color space models may be used when acquiring the color histogram, such as the Hue-Saturation-Value (HSV) color space model or the Hue-Saturation-Intensity (HSI) color space model, and the color histogram of the target image is acquired in the chosen color space.
As an optional example, the first obtaining unit includes:
the dividing subunit is configured to divide a minimum color value to a maximum color value of a pixel of the target image into different color intervals, where the minimum color value is a minimum value of color values of the pixel in the target image, and the maximum color value is a maximum value of color values of the pixel in the target image;
and the counting subunit is used for counting the number of the pixel points in each color interval to obtain a color histogram.
Optionally, in this embodiment, the minimum color value may be the minimum color value of the pixels in the target image, and the maximum color value the maximum color value of those pixels. For example, if the target image has 100 pixels (for example only), the minimum color value is 16, and the maximum color value is 95, the interval [16, 95] may be divided into different color intervals, such as [16, 55] and [56, 95]. The number of pixels falling into each color interval is then counted to obtain the color histogram; for example, the histogram may record that the interval [16, 55] contains 60 pixels and the interval [56, 95] contains 40 pixels.
As an optional example, the second obtaining module includes:
the second acquisition unit is used for acquiring a gray level co-occurrence matrix or Tamura texture features or autoregressive texture features of the target image;
the first determining unit is used for taking the gray level co-occurrence matrix or Tamura texture features or autoregressive texture features as target features of the target image.
Optionally, in this embodiment, if the target feature includes a texture feature, when the target feature is obtained, a gray level co-occurrence matrix, a Tamura texture feature, or an autoregressive texture feature of the target image may be obtained.
Optionally, the Tamura texture features described above include six components: coarseness, contrast, directionality, line-likeness, regularity, and roughness.
The gray level co-occurrence matrix (GLCM) described above counts the co-occurring gray-level pairs of pixels in the target image. For example, a value of 1 at GLCM(1, 1) indicates that exactly one pair of horizontally adjacent pixels both have gray level 1, and a value of 2 at GLCM(1, 2) indicates that there are two horizontally adjacent pixel pairs with gray levels 1 and 2. Counting all such pixel pairs generates the gray level co-occurrence matrix.
The autoregressive texture features may be features obtained by identifying the target image using an autoregressive texture model.
As an optional example, the second obtaining module includes:
the third acquisition unit is used for acquiring the Brenner gradient characteristic or the laplacian operator characteristic of the target image;
and the second determination unit is used for taking the Brenner gradient characteristic or the laplacian operator characteristic as the target characteristic of the target image.
Alternatively, the Brenner gradient feature described above may be a feature determined using the Brenner gradient function, which sums the squared gray-level differences between pixels two positions apart.
Optionally, the Laplacian operator feature may represent the divergence of the gradient of the target image.
As an optional example, the apparatus further includes:
and the processing module is used for executing dimensionality reduction operation on the target features before splicing the deep learning features and the target features into combined features.
Optionally, in this embodiment, after the deep learning feature and the target feature are obtained and before they are spliced into the combined feature, the dimensionality of the target feature may be reduced, for example to the same dimension as the deep learning feature.
As an optional example, the processing module includes:
a first dimension reduction unit, configured to, in a case that the target feature includes a color feature, reduce the target feature to a first dimension of the color feature, or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, a second dimension of the texture feature, and a third dimension of the gradient feature; or
A second dimension reduction unit, configured to, in a case that the target feature includes a texture feature, reduce the target feature to a second dimension of the texture feature, or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and a third dimension of the gradient feature; or
And a third dimension reduction unit, configured to, in a case that the target feature includes a gradient feature, reduce the target feature to a third dimension of the gradient feature, or reduce the target feature to a target dimension, where the target dimension is a dimension different from the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature.
Optionally, in this embodiment, there may be a plurality of methods for performing dimension reduction on the target feature. The first method can reduce the dimension of the target feature to the same dimension as the deep learning feature, and the second method can adjust the target feature to the target dimension, wherein the target dimension is different from the first dimension of the color feature, the second dimension of the texture feature and the third dimension of the gradient feature.
For other examples of this embodiment, please refer to the above examples, which are not described herein.
Fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application. As shown in Fig. 5, the electronic device includes a processor 502, a communication interface 504, a memory 506, and a communication bus 508, where the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508, and where:
a memory 506 for storing a computer program;
the processor 502, when executing the computer program stored in the memory 506, implements the following steps:
acquiring a target image to be scored;
extracting a deep learning feature of the target image by using the first feature extraction model;
acquiring target features of a target image, wherein the target features comprise at least one of color features, texture features and gradient features of the target image;
splicing the deep learning features and the target features into combined features;
and inputting the combined features into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
Alternatively, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus. The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include RAM and may also include non-volatile memory, such as at least one disk memory. Alternatively, the memory may be at least one storage device located remotely from the processor.
As an example, the memory 506 may include, but is not limited to, the first obtaining module 402, the extraction module 404, the second obtaining module 406, the splicing module 408, and the input module 410 of the above image quality scoring apparatus. In addition, the memory may include, but is not limited to, other module units of the above apparatus, which are not described again in this example.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in Fig. 5 is only an illustration, and the device implementing the above image quality scoring method may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 5 does not limit the structure of the electronic device. For example, the electronic device may include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in Fig. 5, or have a different configuration.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
According to still another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, performs the steps of the above-mentioned image quality scoring method.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. An image quality scoring method, comprising:
acquiring a target image to be scored;
extracting a deep learning feature of the target image by using a first feature extraction model;
acquiring a target feature of the target image, wherein the target feature comprises at least one of a color feature, a texture feature and a gradient feature of the target image;
splicing the deep learning feature and the target feature into a combined feature;
inputting the combined features into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
2. The method of claim 1, wherein, in the case that the target feature is the color feature, acquiring the target feature comprises:
and acquiring a color histogram of the target image.
3. The method of claim 2, wherein the obtaining the color histogram of the target image comprises:
dividing the minimum color value to the maximum color value of the pixel of the target image into different color intervals, wherein the minimum color value is the minimum value of the color values of the pixel points in the target image, and the maximum color value is the maximum value of the color values of the pixel points in the target image;
and counting the number of the pixel points in each color interval to obtain the color histogram.
4. The method of claim 1, wherein, in the case that the target feature is the texture feature, obtaining the target feature comprises:
acquiring a gray level co-occurrence matrix of the target image;
and taking the gray level co-occurrence matrix as the target feature of the target image.
5. The method of claim 1, wherein, in the case that the target feature is the gradient feature, acquiring the target feature comprises:
acquiring a directional gradient histogram of the target image;
and taking the directional gradient histogram as the target feature of the target image.
6. The method of any of claims 1 to 5, wherein prior to stitching the deep-learned feature and the target feature into a combined feature, the method further comprises:
and performing dimension reduction operation on the target feature.
7. The method of claim 6, wherein the performing the dimension reduction operation on the target feature comprises:
in the case that the target feature comprises the color feature, reducing the target feature to a first dimension of the color feature or reducing the target feature to a target dimension, wherein the target dimension is a different dimension than the first dimension of the color feature, the second dimension of the texture feature, and a third dimension of the gradient feature; or
In the case that the target feature comprises the texture feature, reducing the target feature to a second dimension of the texture feature or reducing the target feature to a target dimension, wherein the target dimension is a different dimension than the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature; or
In the case where the target feature includes the gradient feature, reducing the target feature to a third dimension of the gradient feature, or reducing the target feature to a target dimension, wherein the target dimension is a different dimension than the first dimension of the color feature, the second dimension of the texture feature, and the third dimension of the gradient feature.
8. An image quality scoring apparatus, comprising:
the first acquisition module is used for acquiring a target image to be scored;
the extraction module is used for extracting the deep learning characteristics of the target image by using a first characteristic extraction model;
a second obtaining module, configured to obtain a target feature of the target image, where the target feature includes at least one of a color feature, a texture feature, and a gradient feature of the target image;
the splicing module is used for splicing the deep learning feature and the target feature into a combined feature;
and the input module is used for inputting the combined features into a target scoring model, and scoring the quality of the target image by using the target scoring model, wherein the target scoring model is obtained by training an original scoring model by using a training sample, and the training sample comprises a plurality of training images and the quality score of each training image.
9. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
CN202111168826.5A 2021-09-30 2021-09-30 Image quality scoring method, device, storage medium and electronic equipment Pending CN113705587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111168826.5A 2021-09-30 2021-09-30 Image quality scoring method, device, storage medium and electronic equipment (published as CN113705587A)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111168826.5A 2021-09-30 2021-09-30 Image quality scoring method, device, storage medium and electronic equipment (published as CN113705587A)

Publications (1)

Publication Number Publication Date
CN113705587A true CN113705587A (en) 2021-11-26

Family

ID=78662492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111168826.5A (CN113705587A, Pending) Image quality scoring method, device, storage medium and electronic equipment 2021-09-30 2021-09-30

Country Status (1)

Country Link
CN (1) CN113705587A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428412A (en) * 2019-07-31 2019-11-08 北京奇艺世纪科技有限公司 The evaluation of picture quality and model generating method, device, equipment and storage medium
WO2021068178A1 (en) * 2019-10-11 2021-04-15 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for image quality detection
CN113205495A (en) * 2021-04-28 2021-08-03 北京百度网讯科技有限公司 Image quality evaluation and model training method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙锴: "基于系统图谱的复杂机电系统状态分析方法", vol. 2016, 31 August 2016, 西北工业大学出版社, pages: 31 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096734A (en) * 2024-04-23 2024-05-28 武汉名实生物医药科技有限责任公司 Product quality monitoring method and system based on big data
CN118096734B (en) * 2024-04-23 2024-07-12 武汉名实生物医药科技有限责任公司 Product quality monitoring method and system based on big data

Similar Documents

Publication Publication Date Title
CN107679466B (en) Information output method and device
CN109117773B (en) Image feature point detection method, terminal device and storage medium
CN111582359B (en) Image identification method and device, electronic equipment and medium
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
CN110969170B (en) Image theme color extraction method and device and electronic equipment
CN112348765A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
CN110287862B (en) Anti-candid detection method based on deep learning
CN111160284A (en) Method, system, equipment and storage medium for evaluating quality of face photo
CN110189341B (en) Image segmentation model training method, image segmentation method and device
CN112580668B (en) Background fraud detection method and device and electronic equipment
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
CN111339884B (en) Image recognition method, related device and apparatus
CN105718931A (en) System And Method For Determining Clutter In An Acquired Image
CN112233077A (en) Image analysis method, device, equipment and storage medium
CN113706472A (en) Method, device and equipment for detecting road surface diseases and storage medium
CN111882559A (en) ECG signal acquisition method and device, storage medium and electronic device
CN111144425A (en) Method and device for detecting screen shot picture, electronic equipment and storage medium
CN114266740A (en) Quality inspection method, device, equipment and storage medium for traditional Chinese medicine decoction pieces
CN113537248A (en) Image recognition method and device, electronic equipment and storage medium
CN108268868B (en) Method and device for acquiring inclination value of identity card image, terminal and storage medium
CN111179245B (en) Image quality detection method, device, electronic equipment and storage medium
CN116433643A (en) Fish meat quality identification method and device based on computer vision
CN113705587A (en) Image quality scoring method, device, storage medium and electronic equipment
CN112581001B (en) Evaluation method and device of equipment, electronic equipment and readable storage medium
CN111523605B (en) Image identification method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination