CN111028198B - Image quality evaluation method, device, terminal and readable storage medium


Info

Publication number
CN111028198B
Authority
CN
China
Prior art keywords
dimension data
face
score
determining
target image
Prior art date
Legal status
Active
Application number
CN201911083970.1A
Other languages
Chinese (zh)
Other versions
CN111028198A (en)
Inventor
李马丁
郑云飞
章佳杰
宁小东
刘建辉
于冰
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Publication of CN111028198A
Application granted
Publication of CN111028198B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image quality evaluation method, an image quality evaluation device, a terminal, and a readable storage medium, and relates to the field of computer technology. The disclosure determines target dimension data of a target image, the target dimension data comprising one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data, and determines a quality score of the target image according to the target dimension data, where the quality score is used to evaluate the quality of the target image. Because the quality score is determined by comprehensively considering the target dimension data in one or more dimensions, the evaluation of the target image is more objective; the poor accuracy and low speed of manual evaluation are overcome, the accuracy and speed of image quality evaluation can be improved, and the method is suitable for evaluating large numbers of images.

Description

Image quality evaluation method, device, terminal and readable storage medium
The present disclosure claims priority to the Chinese patent application filed with the Intellectual Property Office of the People's Republic of China on July 12, 2019, with application number 201910631351.5 and entitled "image quality assessment method, apparatus, terminal, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The disclosure relates to the field of computer technology, and in particular, to an image quality evaluation method, an image quality evaluation device, a terminal and a readable storage medium.
Background
With the continuous development of computer technology and the popularization of intelligent hardware devices, video has developed rapidly, and people have placed higher requirements on images and video frames. Evaluating the quality of an image or video frame has therefore become very important, and such evaluation has wide application: for example, it can be used to judge the overall quality of a video, select the best cover, or select the best segment.
However, in the related art, the quality of an image or video frame is usually evaluated manually. Manual evaluation standards are difficult to unify, so the accuracy of image quality evaluation is poor; manual evaluation is also slow, which is not conducive to evaluating a large number of images or video frames.
Disclosure of Invention
The embodiments of the present application provide an image quality evaluation method, an image quality evaluation device, a terminal, and a readable storage medium, with the aim of improving the accuracy of image quality evaluation.
According to a first aspect of embodiments of the present disclosure, there is provided an image quality evaluation method, the method including:
Determining target dimension data of a target image; the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data;
determining a quality score of the target image according to the target dimension data; the quality score is used to evaluate the quality of the target image.
Optionally, the target dimension data includes image sharpness dimension data; the step of determining target dimension data of the target image includes:
performing edge detection on the target image to obtain an edge detection result of each pixel point in the target image;
calculating the variance of the edge detection result of each pixel point in the target image to obtain a first sharpness score of the target image;
and/or, performing blurring processing on the target image to obtain a blurred image;
calculating YUV difference values of each pixel point in the target image and the corresponding pixel point in the blurred image;
determining a second sharpness score of the target image according to the YUV difference value of each pixel point;
and determining image sharpness dimension data of the target image according to the first sharpness score and/or the second sharpness score.
Optionally, the target dimension data includes color richness dimension data; the step of determining target dimension data of the target image includes:
calculating respective variances and means of at least two components in the YUV color space components of the target image;
and determining the color richness dimension data of the target image according to the respective variances and the mean values of the at least two components.
Optionally, the target dimension data includes value degree dimension data; the step of determining target dimension data of the target image includes:
calculating the variance and the mean value of the intra-frame distortion metric value of the target image;
generating a feature vector according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data;
and inputting the feature vector into a preset value degree prediction model to obtain value degree dimension data of the target image.
Optionally, when the target image includes a face image, before the step of determining a quality score of the target image according to the target dimension data, the method further includes:
determining face dimension data of the face area where the face image is located; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data;
The step of determining the quality score of the target image according to the target dimension data comprises the following steps:
determining a first quality score of the target image according to the target dimension data;
determining a second quality score of the target image according to the face dimension data;
and determining a quality score of the target image according to the first quality score and the second quality score.
Optionally, the face dimension data includes eye opening degree dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
calculating the ratio of the first distance between the upper eyelid and the lower eyelid of the left eye in the face area to the second distance between the inner and outer corners of the left eye to obtain the left eye opening score;
calculating the ratio of the third distance between the upper eyelid and the lower eyelid of the right eye in the face area to the fourth distance between the inner and outer corners of the right eye to obtain the right eye opening score;
when the left eye opening score and the right eye opening score are both less than or equal to a first threshold, determining that the eye opening degree dimension data of the face area is a score less than zero;
when at least one of the left eye opening score and the right eye opening score is greater than the first threshold and the absolute value of the difference between the left eye opening score and the right eye opening score is less than or equal to a second threshold, taking the sum of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face region;
And when at least one of the left eye opening score and the right eye opening score is greater than the first threshold and the absolute value of the difference between the left eye opening score and the right eye opening score is greater than the second threshold, taking twice the maximum of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face area.
Optionally, the face dimension data includes mouth opening degree dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
determining a first included angle and a second included angle of a triangle formed by the left and right mouth corners and the midpoint of the lower lip; the first included angle is the included angle of the area where the left mouth corner is located, and the second included angle is the included angle of the area where the right mouth corner is located;
and determining the mouth opening degree dimension data of the face region according to the first included angle and the second included angle.
Optionally, the face dimension data includes composition dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
determining the number of faces included in the face area;
When the face area comprises a face, determining composition dimension data of the face area according to the distance between the center point of the face and the composition center of gravity;
when the face area comprises a plurality of faces, determining composition dimension data of the face area according to the distance between the gravity center of a polygon formed by the center points of the faces and the composition gravity center.
Optionally, the face dimension data includes face direction dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
determining a face direction in the face region;
and determining the face direction dimension data of the face area according to the deviation angle of the face direction from the reference direction.
According to a second aspect of embodiments of the present disclosure, there is provided an image quality evaluation apparatus, the apparatus including:
the target dimension data determining module is used for determining target dimension data of the target image; the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data;
the quality score determining module is used for determining the quality score of the target image according to the target dimension data; the quality score is used to evaluate the quality of the target image.
Optionally, the target dimension data includes image sharpness dimension data; the target dimension data determining module includes:
the edge detection sub-module is used for carrying out edge detection on the target image to obtain an edge detection result of each pixel point in the target image;
the first sharpness score determining submodule is used for calculating the variance of the edge detection result of each pixel point in the target image to obtain a first sharpness score of the target image;
and/or a blurring processing sub-module, which is used for performing blurring processing on the target image to obtain a blurred image;
a YUV difference value calculation sub-module, configured to calculate a YUV difference value between each pixel point in the target image and the corresponding pixel point in the blurred image;
a second sharpness score determining sub-module, configured to determine a second sharpness score of the target image according to the YUV difference value of each pixel point;
and the image sharpness dimension data determining submodule is used for determining the image sharpness dimension data of the target image according to the first sharpness score and/or the second sharpness score.
Optionally, the target dimension data includes color richness dimension data; the target dimension data determining module includes:
The component variance and mean value determining sub-module is used for calculating respective variances and mean values of at least two components in YUV color space components of the target image;
and the color richness dimension data determining submodule is used for determining the color richness dimension data of the target image according to the respective variances and the mean values of the at least two components.
Optionally, the target dimension data includes value degree dimension data; the target dimension data determining module includes:
the measurement value variance and mean value determination submodule is used for calculating variances and mean values of intra-frame distortion measurement values of the target image;
the characteristic vector generation sub-module is used for generating a characteristic vector according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data;
and the value degree dimension data determining submodule is used for inputting the feature vector into a preset value degree prediction model to obtain the value degree dimension data of the target image.
Optionally, when the target image includes a face image, the image quality evaluation apparatus further includes:
the face dimension data determining module is used for determining face dimension data of a face area where the face image is located; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data;
The quality score determination module comprises:
a first quality score determining sub-module, configured to determine a first quality score of the target image according to the target dimension data;
a second quality score determining sub-module, configured to determine a second quality score of the target image according to the face dimension data;
and the quality score determining submodule is used for determining the quality score of the target image according to the first quality score and the second quality score.
Optionally, the face dimension data includes eye opening degree dimension data; the face dimension data determining module comprises:
the left eye opening score determining submodule is used for calculating the ratio between the first distance between the upper eyelid and the lower eyelid of the left eye in the face area and the second distance between the inner and outer corners of the left eye to obtain the left eye opening score;
the right eye opening score determining submodule is used for calculating the ratio of the third distance between the upper eyelid and the lower eyelid of the right eye in the face area to the fourth distance between the inner and outer corners of the right eye to obtain the right eye opening score;
the first eye opening degree dimension data determining submodule is used for determining that the eye opening degree dimension data of the face area is a score less than zero when the left eye opening score and the right eye opening score are both less than or equal to a first threshold;
A second eye-opening degree dimension data determining sub-module, configured to, when at least one of the left eye-opening score and the right eye-opening score is greater than the first threshold, and an absolute value of a difference between the left eye-opening score and the right eye-opening score is less than or equal to a second threshold, use a sum of the left eye-opening score and the right eye-opening score as eye-opening degree dimension data of the face region;
and the third eye opening degree dimension data determining submodule is used for taking twice the maximum eye opening score of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face area when at least one of the left eye opening score and the right eye opening score is larger than the first threshold value and the absolute value of the difference value of the left eye opening score and the right eye opening score is larger than the second threshold value.
Optionally, the face dimension data includes mouth opening degree dimension data; the face dimension data determining module comprises:
the included angle determining submodule is used for determining a first included angle and a second included angle of a triangle formed by the left and right mouth angles and the midpoint of the lower lip; the first included angle is an included angle of an area where the left mouth angle is located, and the second included angle is an included angle of an area where the right mouth angle is located;
And the mouth opening degree dimension data determining submodule is used for determining the mouth opening degree dimension data of the face area according to the first included angle and the second included angle.
Optionally, the face dimension data includes composition dimension data; the face dimension data determining module comprises:
a face number determination submodule, configured to determine the number of faces included in the face area;
the first composition dimension data determining submodule is used for determining composition dimension data of the face area according to the distance between the center point of the face and the composition center of gravity when the face area comprises a face;
and the second composition dimension data determining submodule is used for determining composition dimension data of the face area according to the distance between the gravity center of the polygon formed by the center points of the plurality of faces and the composition gravity center when the face area comprises the plurality of faces.
Optionally, the face dimension data includes face direction dimension data; the face dimension data determining module comprises:
a face direction determining sub-module for determining a face direction in the face region;
and the face direction dimension data determining submodule is used for determining face direction dimension data of the face area according to the deviation angle of the face direction from the reference direction.
According to a third aspect of embodiments of the present disclosure, there is provided a terminal comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the operations of the image quality evaluation method provided by the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a terminal, enable the terminal to perform the image quality evaluation method provided by the present disclosure.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
determining target dimension data of a target image, where the target dimension data comprises one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data; and determining a quality score of the target image according to the target dimension data, where the quality score is used to evaluate the quality of the target image. Because the quality score of the target image is determined by comprehensively considering the target dimension data in one or more dimensions, the evaluation of the target image is more objective; the poor accuracy and low speed of manual evaluation are overcome, the accuracy and speed of image quality evaluation can be improved, and the method is suitable for evaluating large numbers of images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image quality evaluation method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for determining image sharpness dimension data of a target image according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for determining color richness dimension data of a target image according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for determining value degree dimension data of a target image according to an embodiment of the present application;
FIG. 5 is a flowchart of another image quality assessment method according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for determining face definition dimension data of a face region according to an embodiment of the present application;
fig. 7 is a flowchart of a method for determining eye-open degree dimension data of a face region according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for determining dimension data of mouth opening degree of a face region according to an embodiment of the present application;
FIG. 9 is a flowchart of a method for determining composition dimension data of a face region according to an embodiment of the present application;
FIG. 10 is a flowchart of a method for determining face direction dimension data of a face region according to an embodiment of the present application;
fig. 11 is a schematic diagram of an image quality evaluation apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In the related art, the quality of an image or video frame is usually evaluated manually. Manual evaluation standards are difficult to unify, so the accuracy of image quality evaluation is poor; manual evaluation is also slow, which is not conducive to evaluating a large number of images or video frames.
In view of this, the following embodiments provide an image quality evaluation method, apparatus, device, and storage medium, with the aim of improving image quality evaluation accuracy.
Fig. 1 is a flowchart of an image quality evaluation method according to an embodiment of the present application, as shown in fig. 1, including the following steps:
in step S11, target dimension data of a target image is determined; the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data.
In the embodiment of the present disclosure, the target image refers to any image that needs quality evaluation, for example, the target image may be any single picture, or may be a video frame extracted from a video.
A target image has one or more dimensions, and is analyzed for each dimension to determine target dimension data for the target image, the target dimension data reflecting characteristics of the target image in the one or more dimensions. Wherein the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data.
The image sharpness dimension data is used to represent the degree of blur of the target image; the value degree dimension data is used to represent how meaningful the target image is. In general, if a target image is too simple and its content can easily be predicted by coding, the probability that it is meaningless is relatively high; for example, a shot of a white wall or a floor is a meaningless target image. If a target image is too complex, the probability that it is meaningless is also relatively high; for example, a shot of grass or a lawn is likewise a meaningless target image.
In step S12, determining a quality score of the target image according to the target dimension data; the quality score is used to evaluate the quality of the target image.
In an embodiment of the present disclosure, a quality score of a target image is determined from target dimension data of the target image, the quality score being used to evaluate the quality of the target image.
When the target dimension data is larger, the quality score of the corresponding target image is higher; when the target dimension data is smaller, the quality score of the corresponding target image is lower.
When the target dimension data includes only one of the image sharpness dimension data, the color richness dimension data, and the value degree dimension data, the quality score of the target image is equal to that dimension data. When the target dimension data includes a plurality of the image sharpness dimension data, the color richness dimension data, and the value degree dimension data, a corresponding weight is preset for each dimension data; the weight of each dimension data represents the importance of the corresponding dimension in determining the quality score of the target image. The plurality of dimension data in the target dimension data are first normalized, and the normalized dimension data are then weighted and summed according to their respective weights to obtain the quality score of the target image.
For example, suppose the target dimension data includes image sharpness dimension data, color richness dimension data, and value degree dimension data, whose normalized values are M₁, M₂, and M₃ respectively; the weight of the image sharpness dimension data is a first weight w₁, the weight of the color richness dimension data is a second weight w₂, and the weight of the value degree dimension data is a third weight w₃. Then the quality score of the target image is S = w₁×M₁ + w₂×M₂ + w₃×M₃.
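For illustration only (the disclosure does not specify any implementation language), the weighted summation above might be sketched in Python as follows; the weight values and normalized scores below are made-up numbers, not values from the disclosure:

```python
# Minimal sketch of the weighted-sum quality score S = w1*M1 + w2*M2 + w3*M3.
# All dimension data are assumed to be already normalized to [0, 1].
def quality_score(dim_data, weights):
    # Weighted sum of normalized dimension data.
    return sum(w * m for w, m in zip(weights, dim_data))

M = [0.8, 0.6, 0.7]      # normalized sharpness, color richness, value degree (made-up)
w = [0.5, 0.3, 0.2]      # assumed preset weights
S = quality_score(M, w)  # 0.72
```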
In the embodiment of the disclosure, the quality score of the target image is determined by comprehensively considering the target dimension data in one or more dimensions, so that the quality of the target image is evaluated more objectively; the poor accuracy and low speed of manual evaluation are overcome, the accuracy and speed of image quality evaluation can be improved, and the method is applicable to evaluating large numbers of images.
In combination with the above embodiments, in another embodiment of the present application, a method for analyzing a target image from the image sharpness dimension and determining image sharpness dimension data of the target image is provided.
Fig. 2 is a flowchart of a method for determining image sharpness dimension data of a target image according to an embodiment of the present application, and as shown in fig. 2, step S11 may specifically include the following sub-steps:
in sub-step S1101, edge detection is performed on the target image, and an edge detection result of each pixel point in the target image is obtained.
In the embodiment of the disclosure, the target image includes a plurality of pixel points, and an edge detection operator is adopted to perform edge detection on each pixel point in the target image, so as to obtain an edge detection result of each pixel point in the target image.
It should be noted that edge detection on the target image may be performed in any suitable manner, including, but not limited to, the Laplace operator; for example, a Sobel operator may also be used to perform edge detection on the target image.
In sub-step S1102, a variance of an edge detection result of each pixel point in the target image is calculated, resulting in a first sharpness score of the target image.
In the embodiment of the disclosure, after obtaining the edge detection result of each pixel point in the target image, the variance corresponding to the edge detection result of all the pixel points in the target image is calculated, so as to obtain the first sharpness score of the target image.
It should be noted that, after calculating the variances corresponding to the edge detection results of all the pixel points in the target image, the variance results may be converted into scores according to a predetermined rule, so as to obtain a first sharpness score of the target image, for example, taking the logarithm of the variance results.
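As an illustrative sketch of sub-steps S1101 and S1102 (the Laplace operator and the logarithm conversion are only the examples named in the text, and OpenCV is an assumed tooling choice, not part of the disclosure):

```python
import cv2
import numpy as np

def first_sharpness_score(image_bgr):
    """Variance of the per-pixel Laplacian edge response, log-scaled."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Laplacian(gray, cv2.CV_64F)  # edge detection result per pixel
    variance = edges.var()                   # variance over all pixels
    return np.log(variance + 1e-6)           # example conversion of the variance to a score
```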
In sub-step S1103, the target image is subjected to blurring processing, resulting in a blurred image.
In the embodiment of the disclosure, the target image may be subjected to blurring processing to obtain a blurred image.
It should be noted that the blurring of the target image may be performed in any suitable manner, including, but not limited to, Gaussian blurring; for example, median blurring may also be used to blur the target image.
In sub-step S1104, YUV differences are calculated for each pixel in the target image and the corresponding pixel in the blurred image.
In the embodiment of the disclosure, after obtaining a blurred image, a YUV color space component of each pixel in the target image and a YUV color space component of each pixel in the blurred image are obtained, and the YUV color space component of the corresponding pixel in the blurred image is subtracted from the YUV color space component of each pixel in the target image to obtain a YUV difference value between each pixel in the target image and the corresponding pixel in the blurred image.
In sub-step S1105, a second sharpness score of the target image is determined according to the YUV difference value of each pixel point.
In an embodiment of the disclosure, a second sharpness score of the target image is determined based on the YUV difference for each pixel.
Specifically, the mean of the YUV difference values over all pixel points is calculated to obtain the second sharpness score of the target image. For example, first calculate a first mean of the Y-component differences of all pixel points, then calculate a second mean of the U-component differences of all pixel points, then calculate a third mean of the V-component differences of all pixel points, and finally average the first mean, the second mean, and the third mean to obtain the second sharpness score of the target image.
It should be noted that, when determining the second sharpness score of the target image according to the YUV difference value of each pixel, the method is not limited to calculating only the average value corresponding to the YUV difference values of all the pixels as the second sharpness score.
A larger second sharpness score means a larger difference between the target image and the blurred image, i.e., a sharper target image; a smaller second sharpness score means a smaller difference between the target image and the blurred image, i.e., a more blurred target image.
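A sketch of sub-steps S1103 to S1105 under stated assumptions (Gaussian blur with an arbitrary kernel size, and absolute differences before averaging); neither choice is mandated by the disclosure:

```python
import cv2
import numpy as np

def second_sharpness_score(image_bgr, ksize=(9, 9)):
    """Mean YUV difference between the target image and a blurred copy."""
    yuv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    blurred = cv2.GaussianBlur(image_bgr, ksize, 0)  # blurring processing (assumed Gaussian)
    yuv_blur = cv2.cvtColor(blurred, cv2.COLOR_BGR2YUV).astype(np.float64)
    diff = np.abs(yuv - yuv_blur)           # per-pixel YUV difference (absolute value assumed)
    channel_means = diff.mean(axis=(0, 1))  # first, second, third means (Y, U, V)
    return channel_means.mean()             # mean of the three means
```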
In sub-step S1106, image sharpness dimension data of the target image is determined based on the first sharpness score and/or the second sharpness score.
In the embodiment of the disclosure, the image sharpness dimension data of the target image is determined according to the calculated first sharpness score and/or second sharpness score.
The image sharpness dimension data may be determined only from the first sharpness score obtained in sub-steps S1101 and S1102, in which case the image sharpness dimension data equals the first sharpness score. It may instead be determined only from the second sharpness score obtained in sub-steps S1103 to S1105, in which case it equals the second sharpness score. It may also be determined from the first sharpness score and the second sharpness score together, for example by a weighted sum of the two scores.
In combination with the above embodiments, in another embodiment of the present application, a method for analyzing a target image from a color richness dimension and determining color richness dimension data of the target image is provided.
Fig. 3 is a flowchart of a method for determining color richness dimension data of a target image according to an embodiment of the present application, and as shown in fig. 3, step S11 may specifically include the following sub-steps:
in sub-step S1107, the variance and mean of each of at least two of the YUV color space components of the target image is calculated.
In the embodiment of the disclosure, the YUV color space components of each pixel point in the target image are obtained; the components comprise a Y component, a U component, and a V component. At least two components are selected from among them, and the variance and mean of each selected component over all pixel points are calculated, obtaining the respective variances and means of at least two of the YUV color space components of the target image.
For example, select the U component and the V component, and calculate the variance and mean of each over all pixel points: the variance of the U component over all pixel points is U_var, the mean of the U component is U_mean, the variance of the V component is V_var, and the mean of the V component is V_mean.
In sub-step S1108, color richness dimension data of the target image is determined from the respective variances and means of the at least two components.
In an embodiment of the disclosure, the color richness dimension data of the target image is determined according to respective variances and means of at least two components in the YUV color space components.
Specifically, first calculate the square root of the sum of the variances of the at least two components to obtain first color data; then calculate the square root of the sum of the squares of the means of the at least two components to obtain second color data; finally, perform a weighted summation of the first color data and the second color data, i.e., add the product of the first color data and a fourth weight to the product of the second color data and a fifth weight, to obtain the color richness dimension data of the target image.
For example, selecting the U component and the V component of the YUV color space as above, the color richness dimension data S₂ of the target image can be calculated by the following formula:

S₂ = w₄ × √(U_var + V_var) + w₅ × √(U_mean² + V_mean²)

where w₄ is the fourth weight and w₅ is the fifth weight.
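A sketch of sub-steps S1107 and S1108 for the U and V components; the weight values and the centering of the 8-bit chroma channels at zero are assumptions for illustration:

```python
import cv2
import numpy as np

def color_richness(image_bgr, w4=1.0, w5=0.3):
    """S2 = w4*sqrt(U_var + V_var) + w5*sqrt(U_mean^2 + V_mean^2)."""
    yuv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    u = yuv[..., 1] - 128.0            # centering the chroma channels is an assumption
    v = yuv[..., 2] - 128.0
    first_color_data = np.sqrt(u.var() + v.var())
    second_color_data = np.sqrt(u.mean() ** 2 + v.mean() ** 2)
    return w4 * first_color_data + w5 * second_color_data
```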
In combination with the above embodiments, in another embodiment of the present application, a method of analyzing a target image from a value degree dimension to determine value degree dimension data of the target image is provided.
Fig. 4 is a flowchart of a method for determining value degree dimension data of a target image according to an embodiment of the present application, and as shown in fig. 4, step S11 may specifically include the following sub-steps:
in sub-step S1109, the variance and mean of the intra distortion metric values of the target image are calculated.
In the embodiment of the disclosure, the target image is divided into a plurality of region blocks. First, the YUV value of each pixel point in a region block is predicted from the YUV values of the pixels around the block, for example from several pixels to the left of and above the block. Then, the difference between the predicted YUV value and the actual YUV value of each pixel point in the block is determined as the intra-frame distortion value of that pixel point. Finally, the intra-frame distortion metric value of the region block is determined from the intra-frame distortion values of all pixel points in the block, for example by calculating the mean of those intra-frame distortion values.
After obtaining the intra-frame distortion metric value of each region block in the target image, respectively calculating the variances and the mean values corresponding to the intra-frame distortion metric values of all region blocks in the target image.
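The sketch below is one simplified reading of sub-step S1109: each block is predicted by the mean (DC-style) of the pixels directly above and to the left of it. This predictor is an assumption for illustration, not the disclosure's prescribed prediction scheme:

```python
import numpy as np

def intra_distortion_stats(yuv, block=16):
    """Variance and mean of the per-block intra-frame distortion metric values."""
    h, w, _ = yuv.shape
    metrics = []
    for y in range(block, h - block + 1, block):
        for x in range(block, w - block + 1, block):
            top = yuv[y - 1, x:x + block]        # row of pixels above the block
            left = yuv[y:y + block, x - 1]       # column of pixels left of the block
            pred = np.concatenate([top, left]).mean(axis=0)  # DC-style predicted YUV
            actual = yuv[y:y + block, x:x + block].astype(np.float64)
            metrics.append(np.abs(actual - pred).mean())     # block's distortion metric
    m = np.asarray(metrics)
    return m.var(), m.mean()
```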
In sub-step S1110, a feature vector is generated from the variance of the intra-frame distortion metric values, the mean of the intra-frame distortion metric values, and the color richness dimension data.
In the embodiment of the present disclosure, after the variance and the mean value of the intra-frame distortion metric value of the target image are calculated, the color richness dimension data of the target image may be obtained by performing the sub-steps S1107 and S1108, and a three-dimensional feature vector may be generated according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data.
In the substep S1111, the feature vector is input into a preset value degree prediction model, so as to obtain value degree dimension data of the target image.
In the embodiment of the disclosure, a value degree prediction model is trained in advance; the model is obtained by training on sample feature vectors of sample images and on the actual value results that users calibrate for those sample images.
Specifically, a plurality of sample images are first acquired. For each sample image, the variance of its intra-frame distortion metric values, the mean of its intra-frame distortion metric values, and its color richness dimension data are calculated, and a sample feature vector is generated from these three values. The sample feature vector is input into an initial value model, which outputs a value degree result; this result is compared with the actual value result calibrated by the user, and the parameters of the initial value model are corrected according to the comparison. After training on the plurality of sample images, the training process ends once the accuracy of the resulting value degree prediction model reaches a preset standard.
After the feature vector is generated, the feature vector is input into a preset value degree prediction model, and value degree dimension data of the target image are obtained.
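Continuing the sketch for sub-steps S1110 and S1111: the feature vector is three-dimensional, and the model is assumed here to be any pre-trained regressor exposing a scikit-learn-style predict() method, since the disclosure does not name a model family:

```python
import numpy as np

def value_degree(distortion_var, distortion_mean, richness, model):
    """Assemble the 3-D feature vector and query the value degree prediction model."""
    features = np.array([[distortion_var, distortion_mean, richness]])
    return float(model.predict(features)[0])
```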
In the actual application process, a face image may exist in the target image, and at this time, when the target image is evaluated, the influence of the face image on the quality score of the target image needs to be considered.
Fig. 5 is a flowchart of another image quality evaluation method according to an embodiment of the present application, as shown in fig. 5, including the steps of:
in step S51, target dimension data of a target image is determined; the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data.
This step is similar to the principle of step S11 described above, and will not be described again here.
In step S52, face dimension data of a face area where the face image is located is determined; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data.
In the embodiment of the present disclosure, it is required to detect whether the target image includes a face image. When the target image does not include a face image, the steps S11 and S12 described above are executed. When the target image includes a face image, the face region where the face image is located must also be analyzed to determine the face dimension data of that region. The face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data, and face direction dimension data.
It should be noted that, any suitable manner may be used to detect whether the target image includes a face image, including, but not limited to, detecting the face image through a face feature point; the face region where the face image is located may be a frame-shaped region for face detection, a face contour region framed according to face feature points, or a face internal region formed by eyes and chin (or mouth).
In step S53, a first quality score of the target image is determined from the target dimension data.
In an embodiment of the present disclosure, after determining target dimension data of a target image, a first quality score of the target image is determined from the target dimension data.
When the target dimension data includes only one of the image sharpness dimension data, the color richness dimension data, and the value degree dimension data, the first quality score of the target image is equal to that dimension data. When the target dimension data includes a plurality of these dimension data, the plurality of dimension data are normalized and then weighted and summed according to the weight of each dimension data to obtain the first quality score of the target image.
In step S54, a second quality score of the target image is determined according to the face dimension data.
In the embodiment of the disclosure, after face dimension data of a face region where the face image is located is determined, a second quality score of the target image is determined according to the face dimension data.
When the face dimension data includes only one of the face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data, and face direction dimension data, the second quality score of the target image is equal to that dimension data. When the face dimension data includes a plurality of these dimension data, the plurality of dimension data are normalized and then weighted and summed according to the weight of each dimension data in the face dimension data to obtain the second quality score of the target image.
In step S55, a quality score of the target image is determined based on the first quality score and the second quality score.
In the embodiment of the disclosure, after the first quality score and the second quality score of the target image are obtained, the first quality score and the second quality score are weighted and summed to obtain the quality score of the target image.
Of course, if the target image does not include the face image, the quality score of the target image is equal to the first quality score.
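A sketch of steps S53 to S55 under the assumption of fixed example weights (the disclosure leaves the weight values unspecified):

```python
def overall_quality(first_score, second_score=None, w_image=0.6, w_face=0.4):
    """Combine the image-level and face-level quality scores by weighted sum."""
    if second_score is None:      # the target image contains no face image
        return first_score
    return w_image * first_score + w_face * second_score
```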
In the embodiment of the disclosure, some target images contain face images, and a face image is significant for the quality evaluation of the target image. Therefore, when evaluating the target image, not only is the target dimension data in one or more dimensions considered to determine the first quality score, but the face dimension data in one or more dimensions is also considered to determine the second quality score; the quality score of the target image is finally determined from the first quality score and the second quality score. Comprehensively considering data in multiple dimensions makes the evaluation of the target image more objective and accurate, and can improve the quality of evaluating target images that contain face images.
In combination with the above embodiment, in another embodiment of the present application, a method for analyzing a face region in a target image from a face definition dimension and determining face definition dimension data of the face region is provided.
Fig. 6 is a flowchart of a method for determining face definition dimension data of a face region according to an embodiment of the present application, and as shown in fig. 6, step S52 may specifically include the following sub-steps:
in sub-step S5201, edge detection is performed on the face region, and an edge detection result of each pixel point in the face region is obtained.
In sub-step S5202, the variance of the edge detection result of each pixel in the face region is calculated, resulting in a third sharpness score for the face region.
In sub-step S5203, the face region is subjected to blurring processing to obtain a blurred region.
In sub-step S5204, YUV difference values of each pixel point in the face region and the corresponding pixel point in the blur region are calculated.
In sub-step S5205, a fourth sharpness score for the face region is determined from the YUV difference for each pixel point in the face region.
In sub-step S5206, face definition dimension data of the face region is determined according to the third definition score and/or the fourth definition score.
It should be noted that sub-steps S5201 to S5206 can be implemented with reference to sub-steps S1101 to S1106; that is, the methods for determining the image sharpness dimension data and the face definition dimension data are similar, except that the image sharpness dimension data applies to the whole target image while the face definition dimension data applies to the face region in the target image.
In combination with the above embodiment, in another embodiment of the present application, a method for analyzing a face area in a target image from an eye-open degree dimension and determining eye-open degree dimension data of the face area is provided.
Fig. 7 is a flowchart of a method for determining eye-opening degree dimension data of a face area according to an embodiment of the present application, and as shown in fig. 7, step S52 may specifically include the following sub-steps:
in sub-step S5207, a ratio between a first distance between upper and lower eyelids of the left eye and a second distance between inner and outer eye corners of the left eye in the face region is calculated, to obtain a left eye open score.
In the embodiment of the disclosure, a first distance between the upper and lower eyelids of the left eye in the face region and a second distance between the inner and outer corners of the left eye are measured; both distances can be calculated from the detected facial feature points using a distance formula. The ratio of the first distance to the second distance is determined as the left eye opening score.
For example, if the measured first distance is d₁ and the measured second distance is d₂, the left eye opening score is R₁ = d₁/d₂.
In sub-step S5208, a ratio between a third distance between upper and lower eyelids of the right eye and a fourth distance between inner and outer eye corners of the right eye in the face region is calculated, to obtain a right eye open score.
In the embodiment of the disclosure, a third distance between the upper and lower eyelids of the right eye in the face region and a fourth distance between the inner and outer corners of the right eye are measured; both distances can be calculated from the detected facial feature points using a distance formula. The ratio of the third distance to the fourth distance is determined as the right eye opening score.
For example, if the measured third distance is d₃ and the measured fourth distance is d₄, the right eye opening score is R₂ = d₃/d₄.
In sub-step S5209, when the left eye open eye score and the right eye open eye score are both less than or equal to a first threshold, determining that the eye open extent dimension data of the face region is a score less than zero.
In the embodiment of the disclosure, after the left eye opening score and the right eye opening score are calculated, each is compared with the first threshold. When both are less than or equal to the first threshold, both eyes can be considered closed; since both eyes being closed in the face image affects the quality of the target image, the eye opening degree dimension data of the face area is determined to be a score less than zero. The first threshold may be set based on empirical values.
For example, if the first threshold is F₁, then when the left eye opening score R₁ and the right eye opening score R₂ are both less than or equal to F₁, the eye opening degree dimension data of the face area is determined to be a score less than zero.
In sub-step S5210, when at least one of the left-eye open-eye score and the right-eye open-eye score is greater than the first threshold and an absolute value of a difference between the left-eye open-eye score and the right-eye open-eye score is less than or equal to a second threshold, a sum of the left-eye open-eye score and the right-eye open-eye score is taken as open-eye degree dimension data of the face region.
In the embodiment of the disclosure, after the left eye opening score and the right eye opening score are obtained by calculation, the left eye opening score and the right eye opening score are compared with a first threshold, when at least one of the left eye opening score and the right eye opening score is greater than the first threshold, the absolute value of the difference between the left eye opening score and the right eye opening score is calculated, and when the absolute value of the difference between the left eye opening score and the right eye opening score is less than or equal to a second threshold, the left eye and the right eye can be considered to be open, and the eye opening degree of both eyes is consistent, and at the moment, the sum of the left eye opening score and the right eye opening score is taken as the eye degree dimension data of the face area. The second threshold may be set based on empirical values.
For example, let the first threshold be F1 and the second threshold be F2. When the left eye opening score R1 and the right eye opening score R2 are both greater than F1, and the absolute value of their difference is at most F2, that is, R1 > F1, R2 > F1, and |R1 - R2| ≤ F2, the sum R1 + R2 is taken as the eye opening degree dimension data of the face region.
In sub-step S5211, when at least one of the left eye opening score and the right eye opening score is greater than the first threshold, and the absolute value of the difference between the two scores is greater than the second threshold, twice the larger of the two scores is taken as the eye opening degree dimension data of the face region.
In the embodiment of the disclosure, after the left eye opening score and the right eye opening score are calculated, each is compared with the first threshold. When at least one score is greater than the first threshold, the absolute value of the difference between the two scores is computed; when that absolute value is greater than the second threshold, one eye in the face image is open and the other is closed, and twice the larger of the two opening scores is taken as the eye opening degree dimension data of the face region.
For example, let the first threshold be F1 and the second threshold be F2. When the left eye opening score R1 is greater than F1, the right eye opening score R2 is less than or equal to F1, and |R1 - R2| > F2, the larger of R1 and R2 is R1, so the eye opening degree dimension data of the face region is 2 × R1.
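For illustration, the eye-opening logic of sub-steps S5206 to S5211 can be sketched in Python as follows. This is a minimal sketch, assuming a landmark layout with one upper-eyelid, one lower-eyelid, one inner-corner and one outer-corner point per eye; the threshold values F1 and F2 and the returned closed-eye score are illustrative, since the embodiment only requires empirically chosen thresholds and "a score less than zero".

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_open_dimension(left_eye, right_eye, f1=0.15, f2=0.1):
    """Eye opening degree dimension data per sub-steps S5206-S5211.

    left_eye / right_eye: dicts with "upper", "lower", "inner", "outer"
    landmark points. f1, f2: the first and second thresholds (values
    here are illustrative, not taken from the embodiment).
    """
    # R1 = d1 / d2: eyelid gap over eye width for the left eye
    r1 = dist(left_eye["upper"], left_eye["lower"]) / dist(left_eye["inner"], left_eye["outer"])
    # R2 = d3 / d4: the same ratio for the right eye
    r2 = dist(right_eye["upper"], right_eye["lower"]) / dist(right_eye["inner"], right_eye["outer"])

    if r1 <= f1 and r2 <= f1:
        return -1.0            # both eyes closed: any score below zero (S5209)
    if abs(r1 - r2) <= f2:
        return r1 + r2         # both eyes open to a similar degree (S5210)
    return 2.0 * max(r1, r2)   # one eye open, the other closed (S5211)
```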
In combination with the above embodiments, another embodiment of the present application provides a method for analyzing the face region in the target image in the mouth opening degree dimension to determine the mouth opening degree dimension data of the face region.
Fig. 8 is a flowchart of a method for determining mouth opening degree dimension data of a face region according to an embodiment of the present application. As shown in Fig. 8, step S52 may specifically include the following sub-steps:
in sub-step S5212, a first included angle and a second included angle of the triangle formed by the left mouth corner, the right mouth corner and the midpoint of the lower lip are determined; the first included angle is the angle at the left mouth corner, and the second included angle is the angle at the right mouth corner.
In the embodiment of the disclosure, the left mouth corner, the right mouth corner and the midpoint of the lower lip are identified by detecting face feature points. Let the left mouth corner be point A, the right mouth corner be point B, and the lower-lip midpoint be point C; connecting points A, B and C yields the triangle formed by the two mouth corners and the lower-lip midpoint. The first included angle BAC and the second included angle ABC of this triangle are then determined, where angle BAC is the angle at the left mouth corner and angle ABC is the angle at the right mouth corner.
In sub-step S5213, the mouth opening degree dimension data of the face region is determined according to the first included angle and the second included angle.
In the embodiment of the disclosure, after the first and second included angles are obtained, the mouth opening degree dimension data of the face region is determined according to a preset rule.
The average of the angle values of the first and second included angles may be used directly as the mouth opening degree dimension data of the face region. Alternatively, a mouth opening degree lookup table may be preset, containing a number of included-angle ranges, each corresponding to one value of mouth opening degree dimension data; after the first and second included angles are obtained, the value corresponding to the range containing the average of the two angles is looked up in the table.
The larger the first and second included angles, the larger the corresponding mouth opening degree dimension data; the smaller the angles, the smaller the data.
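As a sketch of sub-steps S5212 and S5213, the two included angles can be computed from the three mouth landmarks and averaged; the averaging rule is the first of the two options described above, and the point layout is an assumption of this sketch.

```python
import math

def angle_at(vertex, p, q):
    """Interior angle (in degrees) at `vertex` of the triangle vertex-p-q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def mouth_open_dimension(left_corner, right_corner, lower_lip_mid):
    """Mouth opening degree dimension data per sub-steps S5212-S5213.

    A = left mouth corner, B = right mouth corner, C = lower-lip midpoint.
    Returns the mean of angle BAC and angle ABC; a lookup table keyed on
    this mean is the alternative rule described above.
    """
    bac = angle_at(left_corner, right_corner, lower_lip_mid)   # first included angle
    abc = angle_at(right_corner, left_corner, lower_lip_mid)   # second included angle
    return (bac + abc) / 2.0
```

A closed mouth places point C almost on the segment AB, so both angles are near zero; as the mouth opens, C moves away from AB and both angles grow, which matches the monotonicity stated above.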
In combination with the above embodiments, another embodiment of the present application provides a method for analyzing the face region in the target image in the composition dimension to determine the composition dimension data of the face region.
Fig. 9 is a flowchart of a method for determining composition dimension data of a face region according to an embodiment of the present application. As shown in Fig. 9, step S52 may specifically include the following sub-steps:
in sub-step S5214, the number of faces included in the face region is determined.
In the embodiment of the disclosure, the number of faces included in the face area is determined through face feature point detection.
In sub-step S5215, when the face region includes one face, composition dimension data of the face region is determined according to the distance between the center point of the face and the composition center of gravity.
In the embodiment of the disclosure, when the face region includes one face, the center point of the face is determined, the distance between that center point and the composition center of gravity is calculated, and the composition dimension data of the face region is determined from this distance.
When the target image is a portrait (vertical) image, the composition center of gravity may be a position slightly above the image center; when the target image is a landscape image, it may be a position to the left or the right of the center.
The closer the center point of the face is to the composition center of gravity, the larger the composition dimension data; the farther away it is, the smaller the composition dimension data.
For example, the inverse of the distance between the center point of the face and the composition center of gravity may be determined as composition dimension data of the face region.
In sub-step S5216, when the face region includes a plurality of faces, composition dimension data of the face region is determined according to a distance between a center of gravity of a polygon formed by center points of the plurality of faces and the composition center of gravity.
In the embodiment of the disclosure, when the face region includes a plurality of faces, the center point of each face is determined, the center points are connected to form a polygon, and the center of gravity of that polygon is determined. The distance between the polygon's center of gravity and the composition center of gravity is then calculated, and the composition dimension data of the face region is determined from this distance.
The closer the polygon's center of gravity is to the composition center of gravity, the larger the composition dimension data; the farther away it is, the smaller the composition dimension data.
For example, the inverse of the distance between the centroid of the polygon and the composition centroid may be determined as composition dimension data of the face region.
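The single-face and multi-face cases of sub-steps S5214 to S5216 can be sketched together; the inverse-distance rule is the example given above, and approximating the polygon's center of gravity by the mean of its vertices is an assumption of this sketch.

```python
import math

def composition_dimension(face_centers, composition_centroid):
    """Composition dimension data per sub-steps S5214-S5216.

    face_centers: list of (x, y) face center points.
    composition_centroid: the composition center of gravity, e.g. a point
    above the image center for a portrait image.
    """
    n = len(face_centers)
    if n == 1:
        ref = face_centers[0]  # single face: use its center point
    else:
        # multiple faces: center of gravity of the polygon formed by the
        # face centers, approximated here by the mean of the vertices
        ref = (sum(p[0] for p in face_centers) / n,
               sum(p[1] for p in face_centers) / n)
    d = math.hypot(ref[0] - composition_centroid[0],
                   ref[1] - composition_centroid[1])
    return 1.0 / d if d > 0 else float("inf")  # inverse-distance rule
```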
In combination with the above embodiments, another embodiment of the present application provides a method for analyzing the face region in the target image in the face direction dimension to determine the face direction dimension data of the face region.
Fig. 10 is a flowchart of a method for determining face direction dimension data of a face region according to an embodiment of the present application. As shown in Fig. 10, step S52 may specifically include the following sub-steps:
in sub-step S5217, a face direction in the face region is determined.
In the embodiment of the disclosure, the face direction in the face region is determined from the face feature points; the face direction may be, for example, head raised, head lowered, head turned left, head turned right, head tilted left, or head tilted right.
In sub-step S5218, face direction dimension data of the face region is determined from the deviation angle of the face direction from the reference direction.
In the embodiment of the disclosure, the deviation angle between the face direction and the reference direction is calculated, and the face direction dimension data of the face region is determined from this deviation angle. The reference direction is the direction in which the face points directly forward.
A face direction lookup table may be preset, containing a number of angle ranges, each corresponding to one value of face direction dimension data; after the deviation angle between the face direction and the reference direction is obtained, the value corresponding to the range containing that angle is looked up in the table.
When the deviation angle falls within certain angle ranges, the corresponding face direction dimension data is a score greater than zero; when it falls within other ranges, it is a score less than zero.
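The lookup-table rule of sub-steps S5217 and S5218 might look as follows; the angle ranges and the scores attached to them stand in for the preset face direction lookup table and are purely illustrative.

```python
def face_direction_dimension(deviation_deg):
    """Face direction dimension data per sub-steps S5217-S5218.

    deviation_deg: deviation angle between the face direction and the
    reference (fully frontal) direction, in degrees. The ranges and
    scores below are illustrative stand-ins for the preset table.
    """
    table = [
        ((0.0, 15.0), 1.0),      # near-frontal: positive score
        ((15.0, 45.0), 0.5),
        ((45.0, 90.0), -0.5),    # strongly turned: negative score
        ((90.0, 180.0), -1.0),
    ]
    dev = abs(deviation_deg)
    for (lo, hi), score in table:
        if lo <= dev < hi:
            return score
    return -1.0  # fully turned away
```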
Based on the same inventive concept, embodiments of the present application provide an image quality evaluation apparatus.
Fig. 11 is a schematic diagram of an image quality evaluation apparatus according to an embodiment of the present application. As shown in Fig. 11, the apparatus 110 includes:
a target dimension data determining module 111, configured to determine target dimension data of a target image; the target dimension data includes one or more of image sharpness dimension data, color richness dimension data, and value degree dimension data;
a quality score determining module 112, configured to determine a quality score of the target image according to the target dimension data; the quality score is used to evaluate the quality of the target image.
Optionally, the target dimension data includes image sharpness dimension data; the target dimension data determining module includes:
the edge detection sub-module is used for carrying out edge detection on the target image to obtain an edge detection result of each pixel point in the target image;
The first definition score determining submodule is used for calculating the variance of the edge detection result of each pixel point in the target image to obtain a first definition score of the target image;
and/or a blurring processing sub-module, which is used for blurring processing the target image to obtain a blurred image;
a YUV difference value calculation sub-module, configured to calculate a YUV difference value between each pixel point in the target image and a corresponding pixel point in the blurred image;
a second definition score determining sub-module, configured to determine a second definition score of the target image according to the YUV difference value of each pixel point;
and the image definition dimension data determining submodule is used for determining the image definition dimension data of the target image according to the first definition score and/or the second definition score.
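As a sketch of the two sharpness sub-modules, a Laplacian operator can serve as the edge detector and a Gaussian blur as the blurring step; both operator choices, and taking the mean absolute YUV difference as the second score, are assumptions of this sketch rather than requirements of the apparatus.

```python
import cv2
import numpy as np

def sharpness_scores(bgr):
    """First and second definition scores for the sharpness sub-modules.

    bgr: image as an 8-bit BGR array (OpenCV convention).
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # First score: variance of the per-pixel edge detection result
    first = float(cv2.Laplacian(gray, cv2.CV_64F).var())

    # Second score: per-pixel YUV difference between the image and a
    # blurred copy; sharp images change more under blurring
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    blurred = cv2.cvtColor(cv2.GaussianBlur(bgr, (9, 9), 0),
                           cv2.COLOR_BGR2YUV).astype(np.float64)
    second = float(np.abs(yuv - blurred).mean())
    return first, second
```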
Optionally, the target dimension data includes color richness dimension data; the target dimension data determining module includes:
the component variance and mean value determining sub-module is used for calculating respective variances and mean values of at least two components in YUV color space components of the target image;
and the color richness dimension data determining submodule is used for determining the color richness dimension data of the target image according to the respective variances and the mean values of the at least two components.
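A sketch of the color richness sub-modules follows, assuming the two chroma components U and V are the "at least two components" and combining their variances and means as below; the combining formula is illustrative only.

```python
import cv2
import numpy as np

def color_richness_dimension(bgr):
    """Color richness dimension data from YUV component statistics.

    Uses the variances and means of the U and V components; gray pixels
    have U = V = 128 in 8-bit YUV, so distance from 128 measures chroma.
    """
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float64)
    u, v = yuv[..., 1], yuv[..., 2]
    spread = np.sqrt(u.var() + v.var())                    # chroma spread
    offset = np.hypot(u.mean() - 128.0, v.mean() - 128.0)  # chroma bias
    return float(spread + 0.3 * offset)                    # illustrative mix
```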
Optionally, the target dimension data includes value degree dimension data; the target dimension data determining module includes:
the measurement value variance and mean value determination submodule is used for calculating variances and mean values of intra-frame distortion measurement values of the target image;
the characteristic vector generation sub-module is used for generating a characteristic vector according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data;
and the value degree dimension data determining submodule is used for inputting the feature vector into a preset value degree prediction model to obtain the value degree dimension data of the target image.
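The value degree sub-modules reduce to building a three-element feature vector and querying the preset prediction model; in this sketch, `model` is assumed to expose a scikit-learn-style predict(), which the apparatus does not prescribe.

```python
import numpy as np

def value_degree_dimension(block_distortions, color_richness, model):
    """Value degree dimension data from the feature-vector sub-modules.

    block_distortions: intra-frame distortion metric values, one per
    region block of the target image.
    model: the preset value degree prediction model (assumed interface).
    """
    d = np.asarray(block_distortions, dtype=np.float64)
    features = np.array([[d.var(), d.mean(), color_richness]])
    return float(model.predict(features)[0])
```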
Optionally, when the target image includes a face image, the image quality evaluation apparatus further includes:
the face dimension data determining module is used for determining face dimension data of a face area where the face image is located; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data;
the quality score determination module comprises:
a first quality score determining sub-module, configured to determine a first quality score of the target image according to the target dimension data;
A second quality score determining sub-module, configured to determine a second quality score of the target image according to the face dimension data;
and the quality score determining submodule is used for determining the quality score of the target image according to the first quality score and the second quality score.
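The apparatus leaves open how the first and second quality scores are combined; a weighted sum is one plausible rule, sketched below with a purely illustrative weight.

```python
def overall_quality(first_score, second_score, w_face=0.5):
    """Combine the target-dimension and face-dimension quality scores.

    w_face is an assumed weight; the embodiment does not fix the rule.
    """
    return (1.0 - w_face) * first_score + w_face * second_score
```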
Optionally, the face dimension data includes eye opening degree dimension data; the face dimension data determining module comprises:
the left eye opening score determining submodule is used for calculating the ratio between the first distance between the upper eyelid and the lower eyelid of the left eye in the face area and the second distance between the inner and outer corners of the left eye to obtain the left eye opening score;
the right eye opening score determining submodule is used for calculating the ratio of the third distance between the upper eyelid and the lower eyelid of the right eye in the face area to the fourth distance between the inner and outer corners of the right eye to obtain the right eye opening score;
the first eye opening degree dimension data determining submodule is used for determining that the eye opening degree dimension data of the face area is a score smaller than zero when the eye opening score of the left eye and the eye opening score of the right eye are smaller than or equal to a first threshold value;
a second eye-opening degree dimension data determining sub-module, configured to, when at least one of the left eye-opening score and the right eye-opening score is greater than the first threshold, and an absolute value of a difference between the left eye-opening score and the right eye-opening score is less than or equal to a second threshold, use a sum of the left eye-opening score and the right eye-opening score as eye-opening degree dimension data of the face region;
And the third eye opening degree dimension data determining submodule is used for taking twice the maximum eye opening score of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face area when at least one of the left eye opening score and the right eye opening score is larger than the first threshold value and the absolute value of the difference value of the left eye opening score and the right eye opening score is larger than the second threshold value.
Optionally, the face dimension data includes mouth opening degree dimension data; the face dimension data determining module comprises:
the included angle determining submodule is used for determining a first included angle and a second included angle of a triangle formed by the left and right mouth angles and the midpoint of the lower lip; the first included angle is an included angle of an area where the left mouth angle is located, and the second included angle is an included angle of an area where the right mouth angle is located;
and the mouth opening degree dimension data determining submodule is used for determining the mouth opening degree dimension data of the face area according to the first included angle and the second included angle.
Optionally, the face dimension data includes composition dimension data; the face dimension data determining module comprises:
a face number determination submodule, configured to determine the number of faces included in the face area;
The first composition dimension data determining submodule is used for determining composition dimension data of the face area according to the distance between the center point of the face and the composition center of gravity when the face area comprises a face;
and the second composition dimension data determining submodule is used for determining composition dimension data of the face area according to the distance between the gravity center of the polygon formed by the center points of the plurality of faces and the composition gravity center when the face area comprises the plurality of faces.
Optionally, the face dimension data includes face direction dimension data; the face dimension data determining module comprises:
a face direction determining sub-module for determining a face direction in the face region;
and the face direction dimension data determining submodule is used for determining face direction dimension data of the face area according to the deviation angle of the face direction from the reference direction.
In the embodiment of the disclosure, the quality score of the target image is determined by comprehensively considering target dimension data in one or more dimensions. This makes the evaluation of the target image more objective, overcomes the poor accuracy and low speed of manual evaluation, improves the accuracy and speed of image quality evaluation, and is therefore suitable for evaluating large numbers of images.
Based on the same inventive concept, another embodiment of the present application provides a terminal, including: a processor, and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the operations of the image quality evaluation method according to any of the embodiments described herein.
Based on the same inventive concept, another embodiment of the present application provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a terminal, enable the terminal to perform the image quality evaluation method according to any of the above embodiments of the present application.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing has described in detail the image quality evaluation method, apparatus, terminal and readable storage medium provided by the present application. Specific examples have been used to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in accordance with the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (18)

1. An image quality assessment method, the method comprising:
determining target dimension data of a target image; the target dimension data comprises color richness dimension data and value degree dimension data, or the target dimension data comprises image definition dimension data, color richness dimension data and value degree dimension data; wherein the value degree dimension data is used for representing the meaningful degree of the target image;
determining a quality score of the target image according to the target dimension data; the quality score is used for evaluating the quality of the target image;
The step of determining the target dimension data of the target image comprises the following steps:
dividing the target image into a plurality of region blocks, obtaining an intra-frame distortion metric value of each region block in the target image, and calculating variances and mean values of the intra-frame distortion metric values of all region blocks in the target image;
generating a feature vector according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data;
and inputting the feature vector into a preset value degree prediction model to obtain value degree dimension data of the target image.
2. The method of claim 1, wherein the target dimension data comprises image sharpness dimension data; the step of determining target dimension data of the target image includes:
performing edge detection on the target image to obtain an edge detection result of each pixel point in the target image;
calculating the variance of the edge detection result of each pixel point in the target image to obtain a first definition score of the target image;
and/or, performing blur processing on the target image to obtain a blurred image;
Calculating YUV difference values of each pixel point in the target image and the corresponding pixel point in the blurred image;
determining a second definition score of the target image according to the YUV difference value of each pixel point;
and determining image definition dimension data of the target image according to the first definition score and/or the second definition score.
3. The method of claim 1, wherein the target dimension data comprises color richness dimension data; the step of determining target dimension data of the target image includes:
calculating respective variances and means of at least two components in the YUV color space components of the target image;
and determining the color richness dimension data of the target image according to the respective variances and the mean values of the at least two components.
4. The method of claim 1, wherein when the target image comprises a face image, before the step of determining a quality score of the target image according to the target dimension data, the method further comprises: determining face dimension data of a face region where the face image is located; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data;
The step of determining the quality score of the target image according to the target dimension data comprises the following steps:
determining a first quality score of the target image according to the target dimension data;
determining a second quality score of the target image according to the face dimension data; and determining a quality score of the target image according to the first quality score and the second quality score.
5. The method of claim 4, wherein the face dimension data comprises eye-open degree dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
calculating the ratio of the first distance between the upper eyelid and the lower eyelid of the left eye in the face area to the second distance between the inner and outer corners of the left eye to obtain the left eye opening score;
calculating the ratio of the third distance between the upper eyelid and the lower eyelid of the right eye in the face area to the fourth distance between the inner and outer corners of the right eye to obtain the right eye opening score;
when the left eye opening score and the right eye opening score are both less than or equal to a first threshold, determining that the eye opening degree dimension data of the face region is a score less than zero;
when at least one of the left eye opening score and the right eye opening score is greater than the first threshold and the absolute value of the difference between the left eye opening score and the right eye opening score is less than or equal to a second threshold, taking the sum of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face region;
and when at least one of the left eye opening score and the right eye opening score is greater than the first threshold and the absolute value of the difference between the left eye opening score and the right eye opening score is greater than the second threshold, taking twice the larger of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face region.
6. The method of claim 4, wherein the face dimension data comprises mouth opening degree dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps: determining a first included angle and a second included angle of a triangle formed by the left and right mouth corners and the midpoint of the lower lip; the first included angle is an included angle of an area where the left mouth angle is located, and the second included angle is an included angle of an area where the right mouth angle is located;
and determining the mouth opening degree dimension data of the face region according to the first included angle and the second included angle.
7. The method of claim 4, wherein the face dimension data comprises composition dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
determining the number of faces included in the face area;
when the face area comprises a face, determining composition dimension data of the face area according to the distance between the center point of the face and the composition center of gravity;
when the face area comprises a plurality of faces, determining composition dimension data of the face area according to the distance between the gravity center of a polygon formed by the center points of the faces and the composition gravity center.
8. The method of claim 4, wherein the face dimension data comprises face direction dimension data; the step of determining the face dimension data of the face region where the face image is located comprises the following steps:
determining a face direction in the face region;
and determining the face direction dimension data of the face area according to the deviation angle of the face direction from the reference direction.
9. An image quality evaluation apparatus, characterized in that the apparatus comprises:
the target dimension data determining module is used for determining target dimension data of the target image; the target dimension data comprises color richness dimension data and value degree dimension data, or the target dimension data comprises image definition dimension data, color richness dimension data and value degree dimension data; wherein the value degree dimension data is used for representing the meaningful degree of the target image;
the quality score determining module is used for determining the quality score of the target image according to the target dimension data; the quality score is used for evaluating the quality of the target image;
the target dimension data determining module includes:
the measurement value variance and mean value determining sub-module is used for dividing the target image into a plurality of region blocks, obtaining an intra-frame distortion measurement value of each region block in the target image, and calculating variances and mean values of the intra-frame distortion measurement values of all region blocks in the target image;
the characteristic vector generation sub-module is used for generating a characteristic vector according to the variance of the intra-frame distortion metric value, the mean value of the intra-frame distortion metric value and the color richness dimension data;
And the value degree dimension data determining submodule is used for inputting the feature vector into a preset value degree prediction model to obtain the value degree dimension data of the target image.
10. The apparatus of claim 9, wherein the target dimension data comprises image sharpness dimension data; the target dimension data determining module includes:
the edge detection sub-module is used for carrying out edge detection on the target image to obtain an edge detection result of each pixel point in the target image;
the first definition score determining submodule is used for calculating the variance of the edge detection result of each pixel point in the target image to obtain a first definition score of the target image;
and/or a blurring processing sub-module, which is used for blurring processing the target image to obtain a blurred image;
a YUV difference value calculation sub-module, configured to calculate a YUV difference value between each pixel point in the target image and a corresponding pixel point in the blurred image;
a second definition score determining sub-module, configured to determine a second definition score of the target image according to the YUV difference value of each pixel point;
And the image definition dimension data determining submodule is used for determining the image definition dimension data of the target image according to the first definition score and/or the second definition score.
11. The apparatus of claim 9, wherein the target dimension data comprises color richness dimension data; the target dimension data determining module includes:
the component variance and mean value determining sub-module is used for calculating respective variances and mean values of at least two components in YUV color space components of the target image;
and the color richness dimension data determining submodule is used for determining the color richness dimension data of the target image according to the respective variances and the mean values of the at least two components.
12. The apparatus according to claim 9, wherein when the target image contains a face image, the image quality evaluation apparatus further comprises:
the face dimension data determining module is used for determining face dimension data of a face area where the face image is located; the face dimension data comprises one or more of face definition dimension data, eye opening degree dimension data, mouth opening degree dimension data, composition dimension data and face direction dimension data;
The quality score determination module comprises:
a first quality score determining sub-module, configured to determine a first quality score of the target image according to the target dimension data;
a second quality score determining sub-module, configured to determine a second quality score of the target image according to the face dimension data;
and the quality score determining submodule is used for determining the quality score of the target image according to the first quality score and the second quality score.
13. The apparatus of claim 12, wherein the face dimension data comprises eye-open degree dimension data; the face dimension data determining module comprises:
the left eye opening score determining submodule is used for calculating the ratio between the first distance between the upper eyelid and the lower eyelid of the left eye in the face area and the second distance between the inner and outer corners of the left eye to obtain the left eye opening score;
the right eye opening score determining submodule is used for calculating the ratio of the third distance between the upper eyelid and the lower eyelid of the right eye in the face area to the fourth distance between the inner and outer corners of the right eye to obtain the right eye opening score;
the first eye opening degree dimension data determining submodule is used for determining that the eye opening degree dimension data of the face area is a score smaller than zero when the eye opening score of the left eye and the eye opening score of the right eye are smaller than or equal to a first threshold value;
A second eye-opening degree dimension data determining sub-module, configured to, when at least one of the left eye-opening score and the right eye-opening score is greater than the first threshold, and an absolute value of a difference between the left eye-opening score and the right eye-opening score is less than or equal to a second threshold, use a sum of the left eye-opening score and the right eye-opening score as eye-opening degree dimension data of the face region;
and the third eye opening degree dimension data determining submodule is used for taking twice the maximum eye opening score of the left eye opening score and the right eye opening score as the eye opening degree dimension data of the face area when at least one of the left eye opening score and the right eye opening score is larger than the first threshold value and the absolute value of the difference value of the left eye opening score and the right eye opening score is larger than the second threshold value.
14. The apparatus of claim 12, wherein the face dimension data comprises mouth opening degree dimension data; the face dimension data determining module comprises:
the included angle determining submodule is used for determining a first included angle and a second included angle of a triangle formed by the left and right mouth angles and the midpoint of the lower lip; the first included angle is an included angle of an area where the left mouth angle is located, and the second included angle is an included angle of an area where the right mouth angle is located;
And the mouth opening degree dimension data determining submodule is used for determining the mouth opening degree dimension data of the face area according to the first included angle and the second included angle.
15. The apparatus of claim 12, wherein the face dimension data comprises composition dimension data; the face dimension data determining module comprises:
a face number determination submodule, configured to determine the number of faces included in the face area;
the first composition dimension data determining submodule is used for determining composition dimension data of the face area according to the distance between the center point of the face and the composition center of gravity when the face area comprises a face;
a second composition dimension data determining sub-module, configured to, when the face region includes a plurality of faces, determine composition dimension data of the face region according to the distance between the center of gravity of a polygon formed by the center points of the plurality of faces and the composition center of gravity.
16. The apparatus of claim 12, wherein the face dimension data comprises face direction dimension data; the face dimension data determining module comprises:
a face direction determining sub-module for determining a face direction in the face region;
And the face direction dimension data determining submodule is used for determining face direction dimension data of the face area according to the deviation angle of the face direction from the reference direction.
17. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the operations of the image quality assessment method of any one of claims 1 to 8.
18. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a terminal, cause the terminal to perform the image quality assessment method according to any one of claims 1 to 8.
CN201911083970.1A 2019-07-12 2019-11-07 Image quality evaluation method, device, terminal and readable storage medium Active CN111028198B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019106313515 2019-07-12
CN201910631351 2019-07-12

Publications (2)

Publication Number Publication Date
CN111028198A CN111028198A (en) 2020-04-17
CN111028198B (en) 2024-02-23

Family

ID=70201172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911083970.1A Active CN111028198B (en) 2019-07-12 2019-11-07 Image quality evaluation method, device, terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN111028198B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066068B (en) * 2021-03-31 2024-03-26 北京达佳互联信息技术有限公司 Image evaluation method and device
CN113705650B (en) * 2021-08-20 2023-07-11 网易(杭州)网络有限公司 Face picture set processing method, device, medium and computing equipment
WO2024119322A1 (en) * 2022-12-05 2024-06-13 深圳华大生命科学研究院 Method and apparatus for evaluating quality of grayscale image, and electronic device and storage medium


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10334245B2 (en) * 2013-05-31 2019-06-25 Intel Corporation Adjustment of intra-frame encoding distortion metrics for video encoding

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
KR20080031548A (en) * 2006-10-04 2008-04-10 광운대학교 산학협력단 Method of real-time image quality evaluation and apparatus thereof
CN101262561A (en) * 2007-03-05 2008-09-10 富士胶片株式会社 Imaging apparatus and control method thereof
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN102150189A (en) * 2008-09-12 2011-08-10 爱信精机株式会社 Open-eye or closed-eye determination apparatus, degree of eye openness estimation apparatus and program
CN105224921A (en) * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 A kind of facial image preferentially system and disposal route
CN108701353A (en) * 2018-04-13 2018-10-23 深圳市锐明技术股份有限公司 It is a kind of to inhibit the pseudo- color method and device of image
CN108960087A (en) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria
CN109767449A (en) * 2018-12-03 2019-05-17 浙江工业大学 A kind of Measurement for Digital Image Definition based on strong edge detection

Non-Patent Citations (3)

Title
Research and Implementation of Objective Image Quality Evaluation Methods; Ren Xue; China Master's Theses Full-text Database, Information Science and Technology Series (2011, No. S2); pp. 33 and 40 *
Research on a Video Image Bit-Rate Control Method Based on a CPLD Chip; Chen Jianguo et al.; Computer Measurement & Control; full text *
Image Sharpness Determination Based on the Dual-Tree Complex Wavelet Transform; Guo Jingming et al.; Journal of Shanghai Jiao Tong University; Vol. 42, No. 4; p. 4 *

Also Published As

Publication number Publication date
CN111028198A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111028198B (en) Image quality evaluation method, device, terminal and readable storage medium
US8571253B2 (en) Image quality evaluation device and method
US20190164257A1 (en) Image processing method, apparatus and device
US8525847B2 (en) Enhancing images using known characteristics of image subjects
EP3944603A1 (en) Video denoising method and apparatus, and computer-readable storage medium
US20130002814A1 (en) Method for automatically improving stereo images
US8908989B2 (en) Recursive conditional means image denoising
US20160117832A1 (en) Method and apparatus for separating foreground image, and computer-readable recording medium
US9613403B2 (en) Image processing apparatus and method
JP2013545200A (en) Depth estimation based on global motion
RU2718423C2 (en) Method of determining depth map for image and device for implementation thereof
US20180276796A1 (en) Method and device for deblurring out-of-focus blurred images
US20190102886A1 (en) Method for foreground and background determination in an image
JP2015162718A (en) Image processing method, image processing device and electronic equipment
US9813698B2 (en) Image processing device, image processing method, and electronic apparatus
CN110458790A (en) A kind of image detecting method, device and computer storage medium
CN114820334A (en) Image restoration method and device, terminal equipment and readable storage medium
CN112150368A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113438386B (en) Dynamic and static judgment method and device applied to video processing
CN111126300A (en) Human body image detection method and device, electronic equipment and readable storage medium
WO2017088391A1 (en) Method and apparatus for video denoising and detail enhancement
CN116091405B (en) Image processing method and device, computer equipment and storage medium
KR101881795B1 (en) Method for Detecting Edges on Color Image Based on Fuzzy Theory
Tai et al. Underwater image enhancement through depth estimation based on random forest
CN114742774A (en) No-reference image quality evaluation method and system fusing local and global features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant