CN111401324A - Image quality evaluation method, device, storage medium and electronic equipment - Google Patents

Image quality evaluation method, device, storage medium and electronic equipment Download PDF

Info

Publication number
CN111401324A
CN111401324A (application CN202010314028.8A)
Authority
CN
China
Prior art keywords
image
quality score
target image
quality
target
Prior art date
Legal status
Pending
Application number
CN202010314028.8A
Other languages
Chinese (zh)
Inventor
李翰
周玄
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010314028.8A priority Critical patent/CN111401324A/en
Publication of CN111401324A publication Critical patent/CN111401324A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The embodiment of the application discloses an image quality evaluation method, an image quality evaluation device, a storage medium and electronic equipment. A target image to be evaluated is obtained; a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image are calculated, and a first quality score of the target image is calculated according to the plurality of MSCN coefficients; expression categories in the target image are recognized according to a preset face recognition algorithm and an image classification model, and a second quality score of the target image is calculated according to the expression categories; and a comprehensive quality score of the target image is obtained according to the first quality score and the second quality score. By combining the image's own statistical characteristics with the expression categories it contains, the method and device achieve no-reference image quality evaluation without any reference image.

Description

Image quality evaluation method, device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image quality evaluation method and apparatus, a storage medium, and an electronic device.
Background
Image Quality Assessment (IQA) methods are divided into subjective assessment and objective assessment. Subjective IQA evaluates the quality of an image through human perception: an original reference image and a distorted image are given, and annotators score the distorted image. Objective IQA uses a mathematical model that simulates the perception mechanism of the human visual system to produce a quantitative measure of image quality.
Conventional IQA typically requires comparison against all or part of the features of a reference image to evaluate image quality. When no reference image is available, image quality is difficult to evaluate.
Disclosure of Invention
The embodiment of the application provides an image quality evaluation method and device, a storage medium and electronic equipment, which can realize non-reference image quality evaluation.
In a first aspect, an embodiment of the present application provides an image quality assessment method, including:
acquiring a target image to be evaluated;
calculating a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image, and calculating a first quality score of the target image according to the plurality of MSCN coefficients;
recognizing expression categories in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression categories;
and obtaining a comprehensive quality score of the target image according to the first quality score and the second quality score.
In a second aspect, an embodiment of the present application further provides an image quality evaluation apparatus, including:
the image acquisition module is used for acquiring a target image to be evaluated;
the first scoring module is used for calculating a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image and calculating a first quality score of the target image according to the MSCN coefficients;
the second grading module is used for identifying the expression category in the target image according to a preset face identification algorithm and an image classification model, and calculating a second quality grade of the target image according to the expression category;
and the comprehensive scoring module is used for obtaining the comprehensive quality score of the target image according to the first quality score and the second quality score.
In a third aspect, embodiments of the present application further provide a storage medium having a computer program stored thereon, where the computer program is executed on a computer, so that the computer executes an image quality evaluation method according to any of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the image quality assessment method according to any embodiment of the present application by calling the computer program.
According to the technical scheme provided by the embodiment of the application, the target image to be evaluated is obtained, a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image are calculated, and a first quality score of the target image is calculated according to the plurality of MSCN coefficients; expression categories in the target image are recognized according to a preset face recognition algorithm, and a second quality score of the target image is calculated according to the expression categories; and the comprehensive quality score of the target image is obtained according to the first quality score and the second quality score. In this scheme, no other reference image is needed when evaluating the quality of the target image. The image is first scored according to its MSCN coefficients; the face in the image is then recognized, and the image is scored a second time according to the expression categories it contains; finally, the two scores are combined to calculate the comprehensive quality score of the target image, thereby realizing no-reference image quality evaluation.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a first image quality evaluation method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a second image quality evaluation method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an image quality evaluation flow of the image quality evaluation method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image quality evaluation apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
An execution subject of the image quality evaluation method may be the image quality evaluation device provided in the embodiment of the present application, or an electronic device integrated with the image quality evaluation device, where the image quality evaluation device may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image quality evaluation method according to an embodiment of the present disclosure. The specific process of the image quality evaluation method provided by the embodiment of the application can be as follows:
101. and acquiring a target image to be evaluated.
The embodiment of the application can be applied to evaluating the quality of the image. The evaluation target can be an image shot by a camera of the electronic equipment, an image acquired by the electronic equipment from a network, or an image sent by other terminals.
For example, in some embodiments, the electronic device may continuously capture multiple frames of images within a very short time in a continuous-capture mode; provided the electronic device and the captured object do not move substantially, the frames are essentially identical in content.
These frames are taken as target images. For each target image frame, the image quality assessment method provided by this application can be used to score its quality; the quality score of each frame is obtained, and the image with the best quality is then determined from the burst of images.
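The burst-selection step above can be sketched as follows; `score_frame` is a hypothetical placeholder for the full scoring pipeline described in the rest of this embodiment, not a function defined by the patent.

```python
def select_best_frame(frames, score_frame):
    """Score every frame of a burst and return the best one with its score.

    `score_frame` is a stand-in for the composite scoring described in this
    document (first score plus second score); any callable mapping a frame
    to a number works here.
    """
    scores = [score_frame(frame) for frame in frames]
    best_index = max(range(len(frames)), key=scores.__getitem__)
    return frames[best_index], scores[best_index]
```
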
102. Calculating a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image, and calculating a first quality score of the target image according to the plurality of MSCN coefficients.
For images, when the pixel intensities are normalized and their distribution is calculated, the distribution differs with image quality. After normalization, the pixel intensities of a natural image follow a Gaussian distribution, while the pixel intensities of an unnatural or distorted image do not. Therefore, computing a feature vector of an image from its Mean Subtracted Contrast Normalized (MSCN) coefficients can accurately characterize image quality.
Wherein, the MSCN coefficient includes the following:
a. Luminance:

Î(i,j) = (I(i,j) − μ(i,j)) / (σ(i,j) + C)

where i and j are pixel coordinates, i ∈ {1, 2, …, M}, j ∈ {1, 2, …, N}, M and N are the height and width of the image, respectively, and Î(i,j) is the MSCN coefficient. I(i,j) is the image intensity at pixel (i,j), μ(i,j) and σ(i,j) are the local mean and local deviation, and C is a small constant that avoids division by zero. The local mean is simply the Gaussian blur of the original image, while the local variance is the Gaussian blur of the square of the difference between the original image and the local mean:

μ(i,j) = Σ_k Σ_l w(k,l) I(i+k, j+l)
σ(i,j)² = Σ_k Σ_l w(k,l) [I(i+k, j+l) − μ(i,j)]²

where w is a Gaussian weighting window. In the traditional MSCN calculation, features are extracted in gray-scale space only; that is, the image intensity at pixel (i,j) is typically a gray value, which reflects only gray-scale spatial information. This implementation introduces color-space indices to improve the accuracy of the algorithm, redefining the image intensity at pixel (i,j) as follows:

[Equation: redefined intensity I(i,j) as a function of the gray value and the color ratios a* and b*; the original equation image is not reproduced here]

where a* = R/G and b* = B/G.
b. To capture the relationship between neighboring pixels, the products of adjacent MSCN coefficients in four directions are used as the other four coefficients: horizontal (H), vertical (V), left diagonal (D1), and right diagonal (D2), calculated as follows:

H(i,j) = Î(i,j) · Î(i,j+1)
V(i,j) = Î(i,j) · Î(i+1,j)
D1(i,j) = Î(i,j) · Î(i+1,j+1)
D2(i,j) = Î(i,j) · Î(i+1,j−1)
After the above calculation, 5 MSCN coefficients are obtained, including 1 MSCN parameter and 4 adjacent-element product parameters (horizontal, vertical, left diagonal, right diagonal).
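A minimal numpy/scipy sketch of the five coefficients above (the MSCN map plus the four directional products). The Gaussian window width and the stabilizing constant are conventional choices rather than values given in this text, and `color_ratio_channels` only computes the a* and b* terms, since the redefined-intensity equation is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Return the MSCN map and the four directional neighbor products."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                 # local mean (Gaussian blur)
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.abs(var))                   # local deviation
    mscn = (image - mu) / (sigma_map + c)              # luminance coefficient
    h = mscn[:, :-1] * mscn[:, 1:]                     # horizontal (H)
    v = mscn[:-1, :] * mscn[1:, :]                     # vertical (V)
    d1 = mscn[:-1, :-1] * mscn[1:, 1:]                 # left diagonal (D1)
    d2 = mscn[:-1, 1:] * mscn[1:, :-1]                 # right diagonal (D2)
    return mscn, h, v, d1, d2

def color_ratio_channels(rgb, eps=1e-6):
    """The color-ratio terms a* = R/G and b* = B/G from the text.

    eps is an assumed guard against zero green values; how these terms
    enter the redefined intensity is not specified in the source.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    g = np.where(g == 0.0, eps, g)
    return r / g, b / g
```
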
Next, a feature vector of the target image is calculated from the five MSCN coefficients.
For example, in some embodiments, these five coefficients form a 5 × 1 feature vector.
For example, a first quality score of the target image is calculated based on the feature vector and a pre-trained support vector machine model. The training process of the support vector machine model may include: obtaining sample images and the evaluation scores given to them, and training a pre-built support vector machine model on the sample images and scores to determine the model parameters.
Alternatively, in other embodiments, the regression model used for scoring may be a pre-trained linear regression model, a random forest model, or a LASSO regression model.
In some embodiments, after obtaining the feature vector, the feature vector may be normalized, elements in the feature vector may be converted into numerical values between-1 and 1, and then the first quality score may be calculated according to the feature vector after the normalization.
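The normalization-then-regression step above might be sketched with scikit-learn as follows. This is a hedged illustration: the training data here is synthetic placeholder data, and the RBF kernel and 0-100 score range are assumptions not stated in the text.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

# Synthetic placeholder data standing in for scored sample images.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(200, 5))        # 5-element MSCN feature vectors
y_train = rng.uniform(0, 100, size=200)    # annotator quality scores (assumed range)

# Convert feature values into [-1, 1], then fit the support vector machine model.
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)
model = SVR(kernel="rbf").fit(scaler.transform(X_train), y_train)

def first_quality_score(feature_vector):
    """Predict the first quality score for one normalized feature vector."""
    x = scaler.transform(np.asarray(feature_vector, dtype=np.float64).reshape(1, -1))
    return float(model.predict(x)[0])
```
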
103. And recognizing the expression category in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression category.
And then, carrying out face detection on the target image, identifying the detected facial expression, and scoring according to the expression. For example, in some embodiments, identifying an expression category in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression category includes: detecting a face area in the target image according to a preset face recognition algorithm, and determining a target expression category of the face area according to the face area and an image classification model; and calculating a second quality score of the target image according to the confidence degree corresponding to the target expression category.
For example, the image classification model may be constructed based on a convolutional neural network. The training process of the model is as follows: a large number of face images are prepared as training samples, and each training sample is given a class label representing its expression category. The labeled face images are then used to train the convolutional-neural-network image classification model and determine its parameters.
The face region is input into the pre-trained image classification model, the model performs its computation, and a confidence corresponding to each expression category is output, where the confidence takes values in [0, 1]. The expression category with the highest confidence is taken as the target expression category of the face region.
In some embodiments, the confidence level corresponding to the target expression category may be directly used as the second quality score of the target image.
In some other embodiments, calculating a second quality score of the target image according to the confidence corresponding to the target expression category may further include: acquiring a scoring coefficient corresponding to the target expression category; and calculating a second quality score of the target image according to the score coefficient and the confidence degree corresponding to the target expression category.
For example, the image classification model defines seven expression categories: anger, disgust, fear, happiness, sadness, surprise, and neutral. The scoring coefficient corresponding to anger and disgust is -1, the scoring coefficient corresponding to surprise and fear is 0, and the scoring coefficient corresponding to happiness is 1. When calculating the second quality score from the confidence, if the target expression category is anger or disgust, the confidence is multiplied by -1 to obtain the second quality score; if it is surprise or fear, the confidence is multiplied by 0; and if it is happiness, the confidence is multiplied by 1.
If a plurality of face regions are detected in the target image, calculating a score for each face region, and adding the scores to obtain a second quality score.
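The per-face scoring and summation described above can be sketched as follows. The coefficients for anger, disgust, surprise, fear, and happiness follow the example in the text; the values for sadness and neutral are assumptions added for completeness.

```python
# Scoring coefficient per expression category (sadness/neutral assumed).
SCORE_COEFF = {
    "anger": -1.0,
    "disgust": -1.0,
    "surprise": 0.0,
    "fear": 0.0,
    "happiness": 1.0,
    "sadness": -1.0,   # assumed
    "neutral": 0.0,    # assumed
}

def second_quality_score(faces):
    """Sum each face's confidence times its category's scoring coefficient.

    `faces` is a list of (target_expression_category, confidence) pairs,
    one per detected face region.
    """
    return sum(SCORE_COEFF[category] * confidence for category, confidence in faces)
```
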
104. And obtaining a comprehensive quality score of the target image according to the first quality score and the second quality score.
For example, the first quality score and the second quality score are added and summed to obtain a composite quality score of the target image. The picture with the highest score is rated as the best picture.
Or, in another embodiment, calculating the ratio between the area of the face region and the total area of the target image; determining a first weight corresponding to the first quality score and a second weight corresponding to the second quality score according to the ratio, wherein the ratio is proportional to the second weight and inversely proportional to the first weight; and according to the first weight and the second weight, carrying out weighted summation on the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
In this embodiment, a weight is set for a score obtained by using two scoring methods, where the first quality score corresponds to the first weight, and the second quality score corresponds to the second weight. And calculating the ratio of the area of the face region to the total area of the target image, wherein different ratios correspond to different first weights and second weights.
After the ratio between the area of the face region and the total area of the target image is obtained, the first weight and the second weight are selected according to the ratio, and the larger the ratio of the face region in the whole image is, the larger the value of the second weight is, and the smaller the value of the first weight is. And finally, carrying out weighted summation on the two scores according to the obtained weights to obtain the comprehensive quality score of the target image.
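The area-ratio weighting just described might be sketched as below. The text only states that the second weight grows, and the first shrinks, as the face-area ratio grows; taking the ratio itself as the second weight is an assumption for illustration.

```python
def composite_score(first_score, second_score, face_area, image_area):
    """Weight the two scores by the fraction of the frame the face occupies.

    Assumed mapping: second weight = face-area ratio, first weight = its
    complement. The patent only fixes the direction of the relationship.
    """
    ratio = face_area / image_area
    w2 = ratio          # assumed: grows with the face's share of the frame
    w1 = 1.0 - ratio    # assumed: shrinks correspondingly
    return w1 * first_score + w2 * second_score
```
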
Or, in another embodiment, when the first quality score is located in different score intervals, weights corresponding to the score intervals are allocated to the first quality score and the second quality score, and the comprehensive quality score of the image is calculated. For example, the overall quality score F of the image is calculated according to the following formula:
[Equation: the comprehensive quality score F as a piecewise weighted sum of E and B, with weights chosen according to the score interval in which E falls; the original equation image is not reproduced here]

where E is the first quality score, B is the second quality score, and β is the area of the face region in the image, which can be calculated from the width (w_f) and height (h_f) of the rectangular frame containing the face in the target image: β = w_f · h_f.
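The exact equation for F appears only as an image in the original document, so the following is merely a hedged sketch of the interval-based weighting idea; the thresholds and weight pairs are invented for illustration.

```python
def composite_score_piecewise(e, b):
    """Weight the first score E and second score B by the interval E falls in.

    The intervals (>= 80, >= 50) and the weight pairs below are illustrative
    assumptions; the actual values are defined by the patent's equation.
    """
    if e >= 80:
        w1, w2 = 0.7, 0.3
    elif e >= 50:
        w1, w2 = 0.5, 0.5
    else:
        w1, w2 = 0.3, 0.7
    return w1 * e + w2 * b
```
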
Conventional image quality evaluation methods are mostly based on parameters such as the brightness and pixel distribution of a photo and are poorly suited to judging photos containing people, so the scheme of this embodiment has wider applicability. In addition, in this embodiment the image is scored based on the color-space characteristics of each pixel, giving the first quality score, and scored based on its content, namely the expression information of the faces it contains, giving the second quality score. The scores from these two different dimensions are combined into a comprehensive quality score that represents the overall shooting effect of the image.
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, the image quality evaluation method provided in the embodiment of the present application obtains a target image to be evaluated, calculates a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image, and calculates a first quality score of the target image according to the plurality of MSCN coefficients; recognizes expression categories in the target image according to a preset face recognition algorithm, and calculates a second quality score of the target image according to the expression categories; and obtains the comprehensive quality score of the target image according to the first quality score and the second quality score. In this scheme, no other reference image is needed when evaluating the quality of the target image: the image is first scored according to its MSCN coefficients, the face in the image is recognized and the image is scored a second time according to the expression categories it contains, and the two scores are combined to calculate the comprehensive quality score, thereby realizing no-reference image quality evaluation.
In some embodiments, computing a feature vector for the target image from the plurality of MSCN coefficients comprises: for each MSCN coefficient, extracting Gaussian distribution characteristics from the MSCN coefficient based on a generalized Gaussian distribution algorithm; and generating the feature vector according to the Gaussian distribution feature.
In this example, a feature vector of size 36 × 1 is calculated from the 5 computed MSCN coefficients. Fitting the MSCN map to a Generalized Gaussian Distribution (GGD) yields the first two elements of the 36 × 1 feature vector: one element describes the shape and one describes the variance.
The four directional products of neighboring MSCN coefficients are each fitted to an Asymmetric Generalized Gaussian Distribution (AGGD), which is described by its shape, mean, left variance, and right variance. This yields 16 further elements.
Through the above calculation, a total of 18 feature vector elements are obtained, as shown in the following table:

TABLE 1. Feature vector elements

    Feature range   Description                                   Procedure
    1-2             Shape, variance                               GGD fit to the MSCN map
    3-6             Shape, mean, left variance, right variance    AGGD fit to horizontal products
    7-10            Shape, mean, left variance, right variance    AGGD fit to vertical products
    11-14           Shape, mean, left variance, right variance    AGGD fit to left-diagonal products
    15-18           Shape, mean, left variance, right variance    AGGD fit to right-diagonal products
Optionally, the target image may be reduced to half its original size and the same process repeated to obtain 18 new elements, for a total of 36 elements forming a 36 × 1 feature vector; or the image may additionally be reduced to a quarter of its original size, giving 18 elements at each of three scales, 54 elements in total, which form a 54 × 1 feature vector.
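As a rough illustration of the GGD/AGGD fitting described above, the sketch below builds one 18-element block with scipy. The AGGD statistics here are simple moment-based stand-ins (with a symmetric `gennorm` fit as the shape proxy) rather than a full maximum-likelihood AGGD fit.

```python
import numpy as np
from scipy.stats import gennorm

def aggd_features(x):
    """Simplified AGGD summary: shape proxy, mean, left variance, right variance."""
    left = x[x < 0]
    right = x[x >= 0]
    left_var = float(np.mean(left ** 2)) if left.size else 0.0
    right_var = float(np.mean(right ** 2)) if right.size else 0.0
    # gennorm.fit returns (beta, loc, scale); beta is the shape parameter.
    shape = float(gennorm.fit(x, floc=0)[0])
    return [shape, float(np.mean(x)), left_var, right_var]

def brisque_style_features(mscn, h, v, d1, d2):
    """18-element block at one scale: 2 GGD elements for the MSCN map plus
    4 AGGD elements per directional product map. Repeating at half (and
    optionally quarter) scale yields the 36- or 54-element vectors."""
    beta, _, scale = gennorm.fit(mscn.ravel(), floc=0)
    feats = [float(beta), float(scale ** 2)]   # shape, variance
    for product_map in (h, v, d1, d2):
        feats.extend(aggd_features(product_map.ravel()))
    return feats
```
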
The method according to the preceding embodiment is illustrated in further detail below by way of example.
Referring to fig. 2, fig. 2 is a second flowchart of the image quality evaluation method according to an embodiment of the present application. The method comprises the following steps:
201. acquiring a plurality of frames of images with the shooting time interval within a preset time length, and calculating the similarity between every two frames of images.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an image quality evaluation flow of an image quality evaluation method according to an embodiment of the present disclosure.
For example, in some embodiments, the electronic device may continuously capture multiple frames of images within a very short time in a continuous-capture mode; provided the electronic device and the captured object do not move substantially, the frames are essentially identical in content. For example, consecutive multi-frame images shot within a 10 s window are acquired from the album according to the images' time attributes.
According to the similarity between every two frames, interference images whose similarity to the other images is low are determined from the multiple frames.
The similarity between every two images is computed using a perceptual hash algorithm, and interference images with low similarity are eliminated. The perceptual hash algorithm generates a "fingerprint" string for each image and then compares the fingerprints of different images: the closer the fingerprints, the more similar the images.
Alternatively, in other embodiments, other algorithms may be used to calculate image similarity. For example, a convolutional neural network may extract picture features, with a final fully connected layer outputting the "matching degree" of two pictures. As another example, the similarity between two frames may be calculated with the Scale-Invariant Feature Transform (SIFT) local feature detection algorithm.
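A self-contained average-hash "fingerprint" in the spirit of the perceptual hash described above; production code would more likely use a library such as imagehash, and the block-average downscale here assumes the image is at least `hash_size` pixels in each direction.

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Shrink the image, threshold against its mean, and return the bit pattern."""
    img = np.asarray(image, dtype=np.float64)
    # Crude downscale by block averaging (truncates to a multiple of hash_size).
    bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
    small = (img[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw)
             .mean(axis=(1, 3)))
    return (small > small.mean()).ravel()

def similarity(img_a, img_b):
    """1 minus the normalized Hamming distance between the two fingerprints."""
    ha, hb = average_hash(img_a), average_hash(img_b)
    return 1.0 - np.count_nonzero(ha != hb) / ha.size
```
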
203. And taking other images except the interference image in the multi-frame images as target images.
The images other than the interference images are taken as the target images. For each target image frame, the image quality evaluation method provided by this application can be used to score its quality; the quality score of each frame is obtained, and the image with the best quality is then determined from the burst of images.
204. Calculating a plurality of mean subtracted contrast normalized (MSCN) coefficients of the target image.
For images, when the pixel intensities are normalized and their distribution is calculated, the distribution differs with image quality. After normalization, the pixel intensities of a natural image follow a Gaussian distribution, while those of an unnatural or distorted image do not. Therefore, computing a feature vector of an image from its Mean Subtracted Contrast Normalized (MSCN) coefficients can accurately characterize image quality. For each target image frame, 5 MSCN coefficients are calculated, including 1 MSCN parameter and 4 adjacent-element product parameters. For the specific calculation process, please refer to the above embodiments, which are not repeated here.
205. And extracting Gaussian distribution characteristics from the MSCN coefficient based on a generalized Gaussian distribution algorithm, and generating the characteristic vector according to the Gaussian distribution characteristics.
Next, a feature vector of size 36 × 1 is calculated from the 5 computed MSCN coefficients. The first two elements of the 36 × 1 feature vector are calculated by fitting the MSCN map to the generalized Gaussian distribution, where one element describes the shape and one describes the variance. The products of the 4 neighboring MSCN coefficients are fitted to the asymmetric generalized Gaussian distribution in 4 directions, describing the shape, mean, left variance, and right variance; 16 further elements are thus obtained.
206. And calculating a first quality score of the target image according to the feature vector and a pre-trained support vector machine model.
And inputting the calculated feature vector into a pre-trained support vector machine model, and calculating a first quality score of the target image.
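The embodiment only states that a pre-trained support vector machine model maps the feature vector to the first quality score. A minimal sketch using scikit-learn's `SVR` on hypothetical training data (the kernel choice, score range and training set are all assumptions) might look like:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data: 36-dimensional feature vectors with known quality scores.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 36))
y_train = rng.uniform(0, 100, size=200)

model = SVR(kernel="rbf")                 # stands in for the pre-trained SVM model
model.fit(X_train, y_train)

feature_vector = rng.normal(size=(1, 36))
first_quality_score = float(model.predict(feature_vector)[0])
```

In practice the model would be trained once on subjectively rated images and shipped with the device, so only the `predict` call runs at evaluation time.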
207. And detecting a face region in the target image according to a preset face recognition algorithm.
The face recognition algorithm may be a FacenessNet model or the like. A rectangular frame in which a face in the target image is located can be obtained from the model, and the original target image can be cropped according to the rectangular frame to obtain an image of the face region.
If a face region is detected from the target image, executing 209; if no face region is detected from the target image, 208 is performed.
208. And taking the first quality score as a comprehensive quality score of the target image. It is to be understood that the target image does not necessarily include a human face. When the human face cannot be detected from the target image, the second quality score is not calculated, and the first quality score can be used as the comprehensive quality score of the target image.
209. And determining the target expression category of the face area according to the face area and the image classification model.
And if the face area is detected from the target image, performing expression classification on the image cut out from the rectangular frame.
An image classification model can be constructed based on a convolutional neural network. The training process of the model is as follows: a large number of face images are prepared as training samples, and each training sample is given a class label representing its expression category. The face images carrying the class labels are then used to train the convolutional-neural-network-based image classification model and determine its model parameters.
The face region is input into the pre-trained image classification model, which performs its calculation and outputs a confidence corresponding to each expression category, where the value range of the confidence is [0, 1]. The expression category with the highest confidence is taken as the target expression category of the face region.
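The selection of the target expression category from the per-category confidences can be sketched as follows; the softmax step and the category ordering are assumptions about the classifier head, not details given in the present application:

```python
import numpy as np

CATEGORIES = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def target_expression(logits):
    """Normalize the classifier outputs to [0, 1] confidences and pick the top category."""
    e = np.exp(logits - np.max(logits))   # numerically stable softmax
    conf = e / e.sum()
    idx = int(np.argmax(conf))
    return CATEGORIES[idx], float(conf[idx])
```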
210. And calculating a second quality score of the target image according to the confidence degree corresponding to the target expression category.
The image classification model in the embodiment of the present application defines the following seven expression categories: anger, disgust, fear, happiness, sadness, surprise and neutral. When the target expression category is anger, disgust or fear, a negative sign is added to the confidence to obtain the second quality score; when the target expression category is surprise, the confidence is multiplied by 0 to obtain the second quality score; and when the target expression category is happiness, the confidence is directly taken as the second quality score. If multiple face regions are detected in the target image, a score is calculated for each face region, and the scores are added to obtain the second quality score.
It is understood that the above expression categories are only examples, and in other embodiments, other expression categories may be set as needed.
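The scoring rules above can be sketched as a small function. Which of the remaining categories have their confidence multiplied by 0 is not fully specified in the text, so treating sadness, surprise and neutral as zero-scored is an assumption:

```python
NEGATIVE = {"anger", "disgust", "fear"}
ZERO_SCORED = {"sadness", "surprise", "neutral"}   # assumption: remaining categories score 0

def second_quality_score(faces):
    """Sum per-face scores from (category, confidence) pairs."""
    total = 0.0
    for category, confidence in faces:
        if category in NEGATIVE:
            total -= confidence                    # negative sign added to the confidence
        elif category in ZERO_SCORED:
            total += 0.0 * confidence              # confidence multiplied by 0
        elif category == "happiness":
            total += confidence                    # confidence taken directly as the score
    return total
```

With one happy face (0.9), one angry face (0.5) and one surprised face (0.8), the second quality score is 0.9 − 0.5 + 0 = 0.4.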
211. And adding and summing the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
After the comprehensive quality score of each frame of target image is obtained, the image with the best quality can be determined. When the scheme of the embodiment of the present application is applied to a mobile phone photo album, the best photo can be selected from repeatedly shot or similar photos, so that photo recommendation can be provided or redundant photos can be deleted.
Therefore, the image quality evaluation method provided by the embodiment of the present application uses a machine-learning-based no-reference image spatial quality evaluator algorithm, combined with a facial expression recognition algorithm, to comprehensively evaluate photo quality by integrating the image shooting quality and the facial emotional expression. For multiple frames of images shot within a fixed time, a mean hash algorithm is used to filter out similar or repeatedly shot pictures. For a selected target image, a feature vector of the target image is first calculated using the MSCN coefficients, and the image is given a first quality score according to the feature vector and a preset regression model; a face region of the target image is then detected using a deep learning algorithm, and expression classification is performed on the face. A second quality score is given according to the facial expression category. The first quality score and the second quality score are combined to comprehensively obtain the picture quality. On this basis, facial expression recognition technology is introduced to perceive the emotional atmosphere of the photo. Subjective factors and objective factors are comprehensively considered, and the photo content quality is comprehensively evaluated in multiple dimensions by combining the photo shooting quality and the expressions of the people in the photo.
An image quality evaluation apparatus is also provided in an embodiment. Referring to fig. 4, fig. 4 is a schematic structural diagram of an image quality evaluation apparatus 300 according to an embodiment of the present disclosure. The image quality evaluation apparatus 300 is applied to an electronic device, and the image quality evaluation apparatus 300 includes an image acquisition module 301, a first scoring module 302, a second scoring module 303, and a comprehensive scoring module 304, as follows:
an image acquisition module 301, configured to acquire a target image to be evaluated;
a first scoring module 302, configured to calculate multiple mean contrast normalized MSCN coefficients of the target image, and calculate a first quality score of the target image according to the multiple MSCN coefficients;
The second scoring module 303 is configured to identify an expression category in the target image according to a preset face recognition algorithm and an image classification model, and calculate a second quality score of the target image according to the expression category;
and a comprehensive scoring module 304, configured to obtain a comprehensive quality score of the target image according to the first quality score and the second quality score.
In some embodiments, the second scoring module 303 is further configured to:
detecting a face area in the target image according to a preset face recognition algorithm, and determining a target expression category of the face area according to the face area and an image classification model;
and calculating a second quality score of the target image according to the confidence degree corresponding to the target expression category.
In some embodiments, the second scoring module 303 is further configured to: acquiring a scoring coefficient corresponding to the target expression category; and calculating a second quality score of the target image according to the score coefficient and the confidence degree corresponding to the target expression category.
In some embodiments, the second scoring module 303 is further configured to: if a face area is detected from the target image, determining a target expression category of the face area according to the face area and an image classification model;
the composite scoring module 304 is further configured to: and if the human face region cannot be detected from the target image, taking the first quality score as a comprehensive quality score of the target image.
In some embodiments, the second scoring module 303 is further configured to:
calculating the ratio of the area of the face region to the total area of the target image;
determining a first weight corresponding to the first quality score and a second weight corresponding to the second quality score according to the ratio, wherein the ratio is proportional to the second weight and inversely proportional to the first weight;
and according to the first weight and the second weight, carrying out weighted summation on the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
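A minimal sketch of this weighted combination, assuming a linear mapping from the face-area ratio to the weights (the embodiment only requires the ratio to be proportional to the second weight and inversely proportional to the first):

```python
def composite_score(first_score, second_score, face_area, image_area):
    """Weight the two quality scores by the fraction of the image occupied by faces."""
    ratio = face_area / image_area      # in [0, 1]
    second_weight = ratio               # larger face -> expression matters more
    first_weight = 1.0 - ratio          # and shooting quality matters less
    return first_weight * first_score + second_weight * second_score
```

An image with no detected face falls back entirely on the first (shooting-quality) score, while a face filling the frame is judged entirely by its expression score.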
In some embodiments, the composite scoring module 304 is further configured to: and adding and summing the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
In some embodiments, the image acquisition module 301 is further configured to:
acquiring a plurality of frames of images with a shooting time interval within a preset time length, and calculating the similarity between every two frames of images;
according to the similarity between every two frames of images, determining an interference image with the similarity smaller than that of other images from the multi-frame images;
and taking other images except the interference image in the multi-frame images as target images.
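The mean (average) hash similarity used to filter out the interference image can be sketched as follows; the 8 × 8 hash size and the nearest-neighbour downscaling are assumptions (a real implementation would typically resize with `cv2.resize`):

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Mean hash: downscale, threshold against the mean, flatten to bits."""
    h, w = image.shape
    ys = np.arange(hash_size) * h // hash_size     # nearest-neighbour row picks
    xs = np.arange(hash_size) * w // hash_size     # nearest-neighbour column picks
    small = image[np.ix_(ys, xs)].astype(np.float64)
    return (small > small.mean()).ravel()          # hash_size**2 boolean bits

def similarity(img_a, img_b):
    """Fraction of matching hash bits between two frames, in [0, 1]."""
    return np.mean(average_hash(img_a) == average_hash(img_b))
```

The frame whose average similarity to the other frames is lowest would then be marked as the interference image and excluded from the target images.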
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that the image quality evaluation device provided in the embodiment of the present application and the image quality evaluation method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image quality evaluation method may be run on the image quality evaluation device, and a specific implementation process thereof is described in detail in the embodiment of the image quality evaluation method, and is not described herein again.
As can be seen from the above, the image quality assessment apparatus provided in the embodiment of the present application includes an image obtaining module 301, a first scoring module 302, a second scoring module 303, and a comprehensive scoring module 304, where when image quality needs to be assessed, the image obtaining module 301 obtains a target image to be assessed, the first scoring module 302 calculates a plurality of mean contrast normalized MSCN coefficients of the target image, and calculates a first quality score of the target image according to the plurality of MSCN coefficients; the second scoring module 303 identifies an expression category in the target image according to a preset face recognition algorithm, and calculates a second quality score of the target image according to the expression category; the comprehensive scoring module 304 obtains a comprehensive quality score of the target image according to the first quality score and the second quality score. According to the scheme, when the quality of the target image is evaluated, other reference images are not needed, the image is evaluated for the first time according to the MSCN coefficient of the image, the face of the image is identified, the image is evaluated for the second time according to the expression type in the image, the scores obtained by the two evaluations are integrated, the integrated quality score of the target image is calculated, and the non-reference image quality evaluation is realized.
The embodiment of the application further provides an electronic device, and the electronic device can be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 may include a camera module 401, a memory 402, a processor 403, a touch display 404, a speaker 405, a microphone 406, and the like.
The camera module 401 may include an image quality evaluation circuit, which may be implemented using hardware and/or software components, and may include various processing units that define an Image Signal Processing (ISP) pipeline. The image quality evaluation circuit may include at least: a camera, an image signal processor (ISP processor), control logic, an image memory, and a display. The camera may include at least one or more lenses and an image sensor. The image sensor may include an array of color filters (e.g., Bayer filters). The image sensor may acquire the light intensity and wavelength information captured by each imaging pixel of the image sensor and provide a set of raw image data that can be processed by the image signal processor.
The image signal processor may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image quality assessment operations on the raw image data, gathering statistical information about the image data. Wherein the image quality assessment operations may be performed with the same or different bit depth precision. The raw image data can be stored in an image memory after being processed by an image signal processor. The image signal processor may also receive image data from an image memory.
The image Memory may be part of a Memory device, a storage device, or a separate dedicated Memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When image data is received from the image memory, the image signal processor may perform one or more image quality assessment operations, such as temporal filtering. The processed image data may be sent to an image memory for additional processing before being displayed. The image signal processor may also receive processed data from the image memory and perform image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the image signal processor may also be sent to an image memory, and the display may read image data from the image memory. In one embodiment, the image memory may be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor may be sent to the control logic. For example, the statistical data may include statistical information of the image sensor such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens shading correction, and the like.
The control logic may include a processor and/or microcontroller that executes one or more routines (e.g., firmware). One or more routines may determine camera control parameters and ISP control parameters based on the received statistics. For example, the control parameters of the camera may include camera flash control parameters, control parameters of the lens (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), etc.
Referring to fig. 6, fig. 6 is a schematic structural diagram of the image quality evaluation circuit in the present embodiment. For ease of illustration, only aspects of image quality assessment techniques relating to embodiments of the present invention are shown.
For example, the image quality evaluation circuit may include: camera, image signal processor, control logic ware, image memory, display. The camera may include one or more lenses and an image sensor, among others. In some embodiments, the camera may be either a tele camera or a wide camera.
And the image collected by the camera is transmitted to an image signal processor for processing. After the image signal processor processes the image, statistical data of the image (such as brightness of the image, contrast value of the image, color of the image, etc.) may be sent to the control logic. The control logic device can determine the control parameters of the camera according to the statistical data, so that the camera can carry out operations such as automatic focusing and automatic exposure according to the control parameters. The image can be stored in the image memory after being processed by the image signal processor. The image signal processor may also read the image stored in the image memory for processing. In addition, the image can be directly sent to a display for displaying after being processed by the image signal processor. The display may also read the image in the image memory for display.
In addition, not shown in the figure, the electronic device may further include a CPU and a power supply module. The CPU is connected with the logic controller, the image signal processor, the image memory and the display, and is used for realizing global control. The power supply module is used for supplying power to each module.
The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
The touch display screen 404 may be used to receive user touch control operations for the electronic device. Speaker 405 may play audio signals. The microphone 406 may be used to pick up sound signals.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a target image to be evaluated;
calculating a plurality of mean contrast normalized MSCN coefficients of the target image, and calculating a first quality score of the target image according to the plurality of MSCN coefficients;
recognizing expression categories in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression categories;
and obtaining a comprehensive quality score of the target image according to the first quality score and the second quality score.
As can be seen from the above, an embodiment of the present application provides an electronic device, where the electronic device obtains a target image to be evaluated, calculates multiple mean contrast normalized MSCN coefficients of the target image, and calculates a first quality score of the target image according to the multiple MSCN coefficients; recognizing expression categories in the target image according to a preset face recognition algorithm, and calculating a second quality score of the target image according to the expression categories; and obtaining the comprehensive quality score of the target image according to the first quality score and the second quality score. According to the scheme, when the quality of the target image is evaluated, other reference images are not needed, the image is evaluated for the first time according to the MSCN coefficient of the image, the face of the image is identified, the image is evaluated for the second time according to the expression type in the image, the scores obtained by the two evaluations are integrated, the integrated quality score of the target image is calculated, and the non-reference image quality evaluation is realized.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the image quality assessment method according to any of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Furthermore, the terms "first", "second", and "third", etc. in this application are used to distinguish different objects, and are not used to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
The image quality evaluation method, the image quality evaluation device, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image quality evaluation method characterized by comprising:
acquiring a target image to be evaluated;
calculating a plurality of mean contrast normalized MSCN coefficients of the target image, and calculating a first quality score of the target image according to the plurality of MSCN coefficients;
recognizing expression categories in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression categories;
and obtaining a comprehensive quality score of the target image according to the first quality score and the second quality score.
2. The image quality assessment method according to claim 1, wherein the identifying an expression category in the target image according to a preset face recognition algorithm and an image classification model, and calculating a second quality score of the target image according to the expression category comprises:
detecting a face area in the target image according to a preset face recognition algorithm, and determining a target expression category of the face area according to the face area and an image classification model;
and calculating a second quality score of the target image according to the confidence degree corresponding to the target expression category.
3. The image quality assessment method according to claim 2, wherein the calculating a second quality score of the target image according to the confidence degree corresponding to the target expression category comprises:
acquiring a scoring coefficient corresponding to the target expression category;
and calculating a second quality score of the target image according to the score coefficient and the confidence degree corresponding to the target expression category.
4. The image quality assessment method according to claim 2, wherein after detecting the face region in the target image according to a preset face recognition algorithm, the method further comprises:
if a face area is detected from the target image, determining a target expression category of the face area according to the face area and an image classification model;
and if the human face region cannot be detected from the target image, taking the first quality score as a comprehensive quality score of the target image.
5. The image quality assessment method according to claim 2, wherein said deriving a composite quality score of the target image based on the first quality score and the second quality score comprises:
calculating the ratio of the area of the face region to the total area of the target image;
determining a first weight corresponding to the first quality score and a second weight corresponding to the second quality score according to the ratio, wherein the ratio is proportional to the second weight and inversely proportional to the first weight;
and according to the first weight and the second weight, carrying out weighted summation on the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
6. The image quality assessment method according to claim 2, wherein said deriving a composite quality score of the target image based on the first quality score and the second quality score comprises:
and adding and summing the first quality score and the second quality score to obtain a comprehensive quality score of the target image.
7. The image quality evaluation method according to claim 1, wherein the acquiring of the target image to be evaluated includes:
acquiring a plurality of frames of images with a shooting time interval within a preset time length, and calculating the similarity between every two frames of images;
according to the similarity between every two frames of images, determining an interference image with the similarity smaller than that of other images from the multi-frame images;
and taking other images except the interference image in the multi-frame images as target images.
8. An image quality evaluation apparatus characterized by comprising:
the image acquisition module is used for acquiring a target image to be evaluated;
the first scoring module is used for calculating a plurality of mean contrast normalized MSCN coefficients of the target image and calculating a first quality score of the target image according to the MSCN coefficients;
the second grading module is used for identifying the expression category in the target image according to a preset face identification algorithm and an image classification model, and calculating a second quality grade of the target image according to the expression category;
and the comprehensive scoring module is used for obtaining the comprehensive quality score of the target image according to the first quality score and the second quality score.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, the computer is caused to execute the image quality evaluation method according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the image quality assessment method according to any one of claims 1 to 7 by calling the computer program.
CN202010314028.8A 2020-04-20 2020-04-20 Image quality evaluation method, device, storage medium and electronic equipment Pending CN111401324A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010314028.8A CN111401324A (en) 2020-04-20 2020-04-20 Image quality evaluation method, device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010314028.8A CN111401324A (en) 2020-04-20 2020-04-20 Image quality evaluation method, device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111401324A true CN111401324A (en) 2020-07-10

Family

ID=71429692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010314028.8A Pending CN111401324A (en) 2020-04-20 2020-04-20 Image quality evaluation method, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111401324A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932532A (en) * 2020-09-21 2020-11-13 安翰科技(武汉)股份有限公司 Method for evaluating capsule endoscope without reference image, electronic device, and medium
CN112052350A (en) * 2020-08-25 2020-12-08 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112396507A (en) * 2020-09-01 2021-02-23 重庆邮电大学 Shadow division-based integrated SVM personal credit evaluation method
CN112418009A (en) * 2020-11-06 2021-02-26 中保车服科技服务股份有限公司 Image quality detection method, terminal device and storage medium
CN112785572A (en) * 2021-01-21 2021-05-11 上海云从汇临人工智能科技有限公司 Image quality evaluation method, device and computer readable storage medium
CN112991313A (en) * 2021-03-29 2021-06-18 清华大学 Image quality evaluation method and device, electronic device and storage medium
CN113378695A (en) * 2021-06-08 2021-09-10 杭州萤石软件有限公司 Image quality determination method and device and electronic equipment
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium
CN113473227A (en) * 2021-08-16 2021-10-01 维沃移动通信(杭州)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113938671A (en) * 2020-07-14 2022-01-14 北京灵汐科技有限公司 Image content analysis method and device, electronic equipment and storage medium
CN114219803A (en) * 2022-02-21 2022-03-22 浙江大学 Detection method and system for three-stage image quality evaluation
CN114299037A (en) * 2021-12-30 2022-04-08 广州极飞科技股份有限公司 Method and device for evaluating quality of object detection result, electronic equipment and computer readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070230823A1 (en) * 2006-02-22 2007-10-04 Altek Corporation Image evaluation system and method
US20150049910A1 (en) * 2013-08-19 2015-02-19 Kodak Alaris Inc. Imaging workflow using facial and non-facial features
CN106295468A (en) * 2015-05-19 2017-01-04 小米科技有限责任公司 Face identification method and device
KR101907792B1 (en) * 2017-06-13 2018-10-12 연세대학교 산학협력단 Apparatus and Method for Image Quality Evaluation, and Recording Medium Thereof
KR20190056499A (en) * 2017-11-17 2019-05-27 연세대학교 산학협력단 Magnetic Resonance Imaging Apparatus and Method for Controlling Rescan of Magnetic Resonance Imaging Apparatus
CN109978884A (en) * 2019-04-30 2019-07-05 恒睿(重庆)人工智能技术研究院有限公司 More people's image methods of marking, system, equipment and medium based on human face analysis
CN110046587A (en) * 2019-04-22 2019-07-23 安徽理工大学 Human face expression feature extracting method based on Gabor difference weight
CN110119673A (en) * 2019-03-27 2019-08-13 广州杰赛科技股份有限公司 Noninductive face Work attendance method, device, equipment and storage medium
CN110866236A (en) * 2019-11-20 2020-03-06 Oppo广东移动通信有限公司 Private picture display method, device, terminal and storage medium
CN111028216A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image scoring method and device, storage medium and electronic equipment

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938671A (en) * 2020-07-14 2022-01-14 北京灵汐科技有限公司 Image content analysis method and device, electronic equipment and storage medium
CN113938671B (en) * 2020-07-14 2023-05-23 北京灵汐科技有限公司 Image content analysis method, image content analysis device, electronic equipment and storage medium
CN112052350A (en) * 2020-08-25 2020-12-08 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112052350B (en) * 2020-08-25 2024-03-01 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112396507A (en) * 2020-09-01 2021-02-23 重庆邮电大学 Shadow division-based integrated SVM personal credit evaluation method
CN111932532B (en) * 2020-09-21 2021-01-08 安翰科技(武汉)股份有限公司 Method for evaluating capsule endoscope without reference image, electronic device, and medium
CN111932532A (en) * 2020-09-21 2020-11-13 安翰科技(武汉)股份有限公司 Method for evaluating capsule endoscope without reference image, electronic device, and medium
WO2022057897A1 (en) * 2020-09-21 2022-03-24 安翰科技(武汉)股份有限公司 Referenceless image evaluation method for capsule endoscope, electronic device, and medium
CN112418009A (en) * 2020-11-06 2021-02-26 中保车服科技服务股份有限公司 Image quality detection method, terminal device and storage medium
CN112418009B (en) * 2020-11-06 2024-03-22 中保车服科技服务股份有限公司 Image quality detection method, terminal equipment and storage medium
CN112785572A (en) * 2021-01-21 2021-05-11 上海云从汇临人工智能科技有限公司 Image quality evaluation method, device and computer readable storage medium
CN112785572B (en) * 2021-01-21 2023-10-24 上海云从汇临人工智能科技有限公司 Image quality evaluation method, apparatus and computer readable storage medium
CN112991313B (en) * 2021-03-29 2021-09-14 清华大学 Image quality evaluation method and device, electronic device and storage medium
CN112991313A (en) * 2021-03-29 2021-06-18 清华大学 Image quality evaluation method and device, electronic device and storage medium
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium
CN113378695A (en) * 2021-06-08 2021-09-10 杭州萤石软件有限公司 Image quality determination method and device and electronic equipment
CN113473227A (en) * 2021-08-16 2021-10-01 维沃移动通信(杭州)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114299037A (en) * 2021-12-30 2022-04-08 广州极飞科技股份有限公司 Method and device for evaluating quality of object detection result, electronic equipment and computer readable storage medium
CN114299037B (en) * 2021-12-30 2023-09-01 广州极飞科技股份有限公司 Quality evaluation method and device for object detection result, electronic equipment and computer readable storage medium
CN114219803A (en) * 2022-02-21 2022-03-22 浙江大学 Detection method and system for three-stage image quality evaluation

Similar Documents

Publication Publication Date Title
CN111401324A (en) Image quality evaluation method, device, storage medium and electronic equipment
EP3937481A1 (en) Image display method and device
US20100246939A1 (en) Image Processing Apparatus and Method, Learning Apparatus and Method, and Program
CN112308095A (en) Picture preprocessing and model training method and device, server and storage medium
CN109948566B (en) Dual-stream face anti-spoofing detection method based on weight fusion and feature selection
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
JP2010045613A (en) Image identifying method and imaging device
JPH09102043A (en) Position detection of element at inside of picture
CN109271930B (en) Micro-expression recognition method, device and storage medium
CN109711268B (en) Face image screening method and device
Várkonyi-Kóczy et al. Gradient-based synthesized multiple exposure time color HDR image
CN107911625A (en) Light metering method, device, readable storage medium and computer equipment
CN111369523B (en) Method, system, equipment and medium for detecting cell stack in microscopic image
CN110348358B (en) Skin color detection system, method, medium and computing device
WO2022116104A1 (en) Image processing method and apparatus, and device and storage medium
CN112633221A (en) Face direction detection method and related device
CN112528939A (en) Quality evaluation method and device for face image
CN112084927A (en) Lip-reading recognition method fusing multiple kinds of visual information
CN111047618B (en) Multi-scale-based non-reference screen content image quality evaluation method
CN112396016B (en) Face recognition system based on big data technology
CN113743378A (en) Fire monitoring method and device based on video
CN112949453A (en) Training method of smoke and fire detection model, smoke and fire detection method and smoke and fire detection equipment
WO2021134485A1 (en) Method and device for scoring video, storage medium and electronic device
KR20130126386A (en) Adaptive color detection method, face detection method and apparatus
CN112559791A (en) Cloth classification retrieval method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination