CN113139462A - Unsupervised face image quality evaluation method, electronic device and storage medium - Google Patents


Info

Publication number
CN113139462A
Authority
CN
China
Prior art keywords
class
face
image
face image
inter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110439127.3A
Other languages
Chinese (zh)
Inventor
陈白洁
王月平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd filed Critical Hangzhou Moredian Technology Co ltd
Priority to CN202110439127.3A priority Critical patent/CN113139462A/en
Publication of CN113139462A publication Critical patent/CN113139462A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an unsupervised face image quality evaluation method, an electronic device and a storage medium, belonging to the field of artificial intelligence. The method comprises the following steps: extracting a first feature from the face image through a face recognition network, wherein there are a plurality of face recognition networks; calculating the intra-class similarity distribution according to the first feature and a second feature extracted from the intra-class images of the face image through the face recognition network; calculating the inter-class similarity distribution according to the first feature and a third feature extracted from the inter-class images of the face image through the face recognition network; calculating a quality score according to the intra-class similarity distribution and the inter-class similarity distribution; and performing a weighted average over the quality scores obtained from the plurality of face recognition networks to obtain a quality evaluation result. By measuring the intra-class and inter-class similarity distributions and integrating the results of a plurality of face recognition networks to evaluate face image quality, the accuracy and reliability of the quality evaluation result are ensured.

Description

Unsupervised face image quality evaluation method, electronic device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an unsupervised face image quality assessment method, an electronic device, and a storage medium.
Background
In recent years, face image quality evaluation has become an indispensable part of face recognition systems, ensuring stable and reliable recognition performance in unconstrained scenes. A high-quality face image is not one that merely looks clear to the naked eye, but one that can be correctly recognized by a face recognition network model. However, owing to various factors, trained face recognition network models vary in quality (which can be measured by recognition accuracy), so the evaluation results they output can be unreliable, and the related art does not solve this reliability problem well.
Disclosure of Invention
The embodiments of the present application provide an unsupervised face image quality evaluation method, an electronic device and a storage medium, so as to at least solve the problem in the related art of improving the reliability of evaluation results.
In a first aspect, an embodiment of the present application provides an unsupervised face image quality evaluation method, including: extracting a first feature from a face image through a face recognition network, wherein there are a plurality of face recognition networks; calculating an intra-class similarity distribution according to the first feature and a second feature extracted from the intra-class images of the face image through the face recognition network; calculating an inter-class similarity distribution according to the first feature and a third feature extracted from the inter-class images of the face image through the face recognition network; calculating a quality score according to the intra-class similarity distribution and the inter-class similarity distribution; and performing a weighted average over the quality scores obtained from the plurality of face recognition networks to obtain a quality evaluation result.
In some embodiments, the calculating an intra-class similarity distribution according to the first feature and a second feature extracted from the intra-class images of the face image by the face recognition network includes: extracting the second feature from the intra-class images of the face image through the face recognition network; and calculating the cosine similarity between the first feature and the second feature to obtain the intra-class similarity distribution.
In some embodiments, the calculating an inter-class similarity distribution according to the first feature and a third feature extracted from the inter-class images of the face image by the face recognition network includes: extracting the third feature from the inter-class images of the face image through the face recognition network; and calculating the cosine similarity between the first feature and the third feature to obtain the inter-class similarity distribution.
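The two computations above differ only in which reference set is used; a minimal sketch of building such a cosine-similarity distribution (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_distribution(first_feature: np.ndarray,
                            reference_features: np.ndarray) -> np.ndarray:
    """Cosine similarity of one embedding against a stack of embeddings.

    Passing the intra-class embeddings yields the intra-class similarity
    distribution; passing the inter-class embeddings yields the
    inter-class one.
    """
    return np.array([cosine_similarity(first_feature, f)
                     for f in reference_features])
```

The same helper therefore serves both the intra-class and inter-class branches of the method.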
In some embodiments, the calculating a quality score according to the intra-class similarity distribution and the inter-class similarity distribution includes: calculating the bulldozer distance (Wasserstein distance) between the intra-class similarity distribution and the inter-class similarity distribution; and calculating the quality score according to the bulldozer distance.
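For two equally sized, uniformly weighted 1-D samples, the bulldozer (Wasserstein-1) distance reduces to the mean absolute difference of the sorted samples; a sketch under that assumption (`scipy.stats.wasserstein_distance` handles the general case):

```python
import numpy as np

def bulldozer_distance_1d(u: np.ndarray, v: np.ndarray) -> float:
    """1-D Wasserstein-1 ("bulldozer") distance between two empirical
    distributions with equal sample counts and uniform weights."""
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

# Intra-class similarities cluster near 1 and inter-class ones near 0
# for an image the recognition network handles well, so the distance
# between the two distributions is large for a high-quality image.
intra = np.array([0.85, 0.88, 0.90, 0.92])
inter = np.array([0.02, 0.05, 0.10, 0.15])
wd = bulldozer_distance_1d(intra, inter)
```

A larger `wd` thus indicates better separation between the image and other identities, which is what the quality score rewards.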
In some embodiments, the quality score is calculated by the following formula:

S(x_i) = δ(WD(P_{x_i}, Q_{x_i}))

where δ is a normalization function that maps the bulldozer distance into the score range, for example a min-max normalization over the set of distances:

δ(z) = 100 · (z - min L) / (max L - min L)

where WD represents the bulldozer distance, P_{x_i} is the intra-class similarity distribution, Q_{x_i} is the inter-class similarity distribution, x_i denotes the ith face image, and L represents the set of bulldozer distances of the joint distributions.
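Assuming δ is a min-max normalization over the set L of bulldozer distances (an illustrative choice; the filing does not spell out δ in text), the mapping to a 0-100 score can be sketched as:

```python
import numpy as np

def quality_scores(distances: np.ndarray) -> np.ndarray:
    """Map each image's bulldozer distance to a score in [0, 100] by
    min-max normalization over the whole set L of distances.  This
    normalization is an assumed, illustrative stand-in for delta."""
    lo, hi = float(distances.min()), float(distances.max())
    return 100.0 * (distances - lo) / (hi - lo)
```

The image with the smallest distance in L scores 0 and the one with the largest scores 100, so scores are comparable only within the set they were normalized over.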
In some embodiments, before the extracting the first feature from the face image through the face recognition network, the method further includes: inputting any face image into the face recognition network, comparing the features of the face image with the features of all images in a face database, and calculating the similarities; and determining, according to the similarities, which person in the face database the face image corresponds to, taking that person's face images in the face database as the intra-class images of the face image, and taking the face images of other people in the face database as the inter-class images of the face image.
In some of these embodiments, the face recognition network includes at least one of InsightFace and FaceNet.
In some of these embodiments, the loss function of the face recognition network includes at least one of ArcFace and CosFace.
In a second aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform any one of the methods described above.
In a third aspect, an embodiment of the present application provides a storage medium storing a computer program, where the computer program is configured to perform any one of the above methods when run.
According to the above, the unsupervised face image quality evaluation method of the embodiments of the application comprises the following steps: extracting a first feature from the face image through a face recognition network, wherein there are a plurality of face recognition networks; calculating the intra-class similarity distribution according to the first feature and a second feature extracted from the intra-class images of the face image through the face recognition network; calculating the inter-class similarity distribution according to the first feature and a third feature extracted from the inter-class images of the face image through the face recognition network; calculating a quality score according to the intra-class similarity distribution and the inter-class similarity distribution; and performing a weighted average over the quality scores obtained from the plurality of face recognition networks to obtain a quality evaluation result. By measuring the intra-class and inter-class similarity distributions and integrating the results of a plurality of face recognition networks, the method and the device improve the accuracy and reliability of the face image quality evaluation result.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of an unsupervised facial image quality assessment method according to an embodiment of the present application;
FIG. 2 is a flow chart of an unsupervised facial image quality assessment method including three face recognition networks according to an embodiment of the present application;
FIG. 3 is a representation of a similarity distribution according to an embodiment of the present application;
FIG. 4 is an internal structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The inventors of the present application find that the quality of currently trained face recognition networks (measurable by recognition accuracy) is uneven, so evaluating face image quality through a single face recognition network yields results of low reliability. To solve this problem, an embodiment of the present application provides an unsupervised face image quality evaluation method. Fig. 1 is a flow chart of the unsupervised face image quality evaluation method according to an embodiment of the present application; as shown in Fig. 1, the method includes the following steps:
s100: extracting a first characteristic from the face image through a face recognition network, wherein the number of the face recognition networks is multiple;
s200: calculating the intra-class similarity distribution according to the first characteristic and a second characteristic extracted from the intra-class image of the face image through a face recognition network;
s300: calculating inter-class similarity distribution according to the first characteristic and a third characteristic extracted from the inter-class image of the face image through a face recognition network;
s400: calculating a mass fraction according to the intra-class similarity distribution and the inter-class similarity distribution;
s500: and carrying out weighted average on a plurality of quality scores obtained based on a plurality of face recognition networks to obtain a quality evaluation result.
As set out above, measuring the intra-class and inter-class similarity distributions improves the accuracy of the evaluation result, and integrating a plurality of face recognition networks greatly improves its reliability.
Preferably, in step S100, the face recognition network adopts InsightFace, FaceNet or the like, and the network structure adopts ResNet or Inception-ResNet, which mainly comprises convolution layers, residual blocks, attention modules and the like; other modules may be added as needed. The loss function adopts ArcFace, CosFace or the like.
Preferably, in step S200, the cosine similarity or the Euclidean distance is calculated according to the first feature and the second feature to obtain the intra-class similarity distribution.
Preferably, in step S300, the cosine similarity or the Euclidean distance is calculated according to the first feature and the third feature to obtain the inter-class similarity distribution.
Preferably, in step S400, the bulldozer distance between the intra-class similarity distribution and the inter-class similarity distribution is calculated to obtain the quality score.
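The per-network computation of steps S200-S400 can be sketched end to end; equal intra-/inter-class sample counts are assumed so the simple sorted-sample form of the bulldozer distance applies, and all names are illustrative:

```python
import numpy as np

def cosine_sims(feat: np.ndarray, others: np.ndarray) -> np.ndarray:
    """Cosine similarity of one embedding against each row of `others`."""
    others = np.asarray(others, dtype=float)
    return others @ feat / (np.linalg.norm(others, axis=1)
                            * np.linalg.norm(feat))

def score_one_network(first_feature, intra_features, inter_features):
    """S200-S400 for one recognition network: build the intra- and
    inter-class similarity distributions, then return their bulldozer
    distance as the raw (unnormalized) quality score."""
    p = cosine_sims(first_feature, intra_features)   # S200: intra-class
    q = cosine_sims(first_feature, inter_features)   # S300: inter-class
    return float(np.mean(np.abs(np.sort(p) - np.sort(q))))  # S400
```

Step S500 then simply repeats this per network and averages the resulting scores.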
In order to more clearly illustrate the present invention, the following examples are set forth in detail.
More than two face recognition networks may be selected in the embodiments of the present application. It should be noted that the more face recognition networks are selected, the more accurate the final evaluation result, but the more complicated the calculation process. To balance the two, this embodiment preferably selects three face recognition networks: face recognition network A, face recognition network B, and face recognition network C.
Fig. 2 is a flowchart of an unsupervised facial image quality evaluation method including three face recognition networks according to an embodiment of the present application, as shown in fig. 2, including the following steps:
step 0: and respectively inputting all the face image data into different pre-trained face recognition networks with the final classification layer removed, and extracting features.
Step 1: for a certain face image, the feature of the image extracted by the face recognition network A is embedding A, the feature of the image extracted by the face recognition network B is embedding B, the feature of the image extracted by the face recognition network C is embedding C, and the embedding A, the embedding B and the embedding C are all first features.
Step 2: extracting a second feature a from the intra-class images of the face image through the face recognition network A, and calculating the cosine similarity or the Euclidean distance according to embedding A and the second feature a to obtain the intra-class similarity distribution P1. The intra-class images refer to all the other pictures in the face image data of the same class as the face image.
Step 3: extracting a third feature a from the inter-class images of the face image through the face recognition network A, and calculating the cosine similarity or the Euclidean distance according to embedding A and the third feature a to obtain the inter-class similarity distribution Q1. The inter-class images are the pictures in the face image data whose class differs from that of the face image. Moreover, the inter-class similarity distribution differs from the intra-class similarity distribution: the intra-class similarity refers to the similarity between the face image and face images of the same class (i.e., the same person), while the inter-class similarity refers to the similarity between the face image and face images of other classes (not the same person).
Step 4: calculating the quality score S1 of the face image under the face recognition network A according to the bulldozer distance between the intra-class similarity distribution P1 and the inter-class similarity distribution Q1.
Step 5: extracting a second feature b from the intra-class images of the face image through the face recognition network B, and calculating the cosine similarity or the Euclidean distance according to embedding B and the second feature b to obtain the intra-class similarity distribution P2.
Step 6: extracting a third feature b from the inter-class images of the face image through the face recognition network B, and calculating the cosine similarity or the Euclidean distance according to embedding B and the third feature b to obtain the inter-class similarity distribution Q2.
Step 7: calculating the quality score S2 of the face image under the face recognition network B according to the bulldozer distance between the intra-class similarity distribution P2 and the inter-class similarity distribution Q2.
Step 8: extracting a second feature c from the intra-class images of the face image through the face recognition network C, and calculating the cosine similarity or the Euclidean distance according to embedding C and the second feature c to obtain the intra-class similarity distribution P3.
Step 9: extracting a third feature c from the inter-class images of the face image through the face recognition network C, and calculating the cosine similarity or the Euclidean distance according to embedding C and the third feature c to obtain the inter-class similarity distribution Q3.
Step 10: calculating the quality score S3 of the face image under the face recognition network C according to the bulldozer distance between the intra-class similarity distribution P3 and the inter-class similarity distribution Q3.
Step 11: performing a weighted average over the quality scores S1, S2 and S3 obtained from the face recognition network A, the face recognition network B and the face recognition network C, respectively, to obtain the final quality evaluation result S.
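Step 11 is a plain weighted average; a minimal sketch (equal weights are assumed when no per-network weights are given, since the patent does not fix the weighting):

```python
import numpy as np

def fuse_scores(scores, weights=None) -> float:
    """Weighted average of per-network quality scores S1..Sn into the
    final quality evaluation result S."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores)   # equal weights by default
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * scores) / np.sum(weights))
```

Weights could, for instance, reflect each network's recognition accuracy, giving more trusted networks more influence on S.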
Step 12: after the final quality evaluation result is obtained for each face image, manual checking is carried out to remove pictures that score high despite obviously poor quality, such as occluded or over-exposed ones.
As an example, regarding the cosine similarity: the magnitude of the difference between two individuals is measured by the cosine of the angle between two vectors in the vector space. As the angle between the two vectors tends to 0, the vectors become closer and the difference smaller; a cosine value of 1 indicates the most similar faces. Fig. 3 is a schematic representation of a similarity distribution according to an embodiment of the present application. As shown in Fig. 3, a frequency distribution histogram is formed from the frequencies of the cosine similarity values; it shows the distribution of a group of data, that is, how often different values occur (for example, how many times the cosine similarity takes a certain value). The horizontal axis represents the bins (cosine similarity values), the vertical axis represents frequency divided by bin width, and the area of each bar is the frequency.
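The frequency-distribution histogram described above can be sketched with `numpy.histogram`; the bar heights are frequency divided by bin width, so each bar's area recovers its frequency:

```python
import numpy as np

# Cosine similarities of one face image against a reference set
# (illustrative values only).
sims = np.array([0.10, 0.15, 0.20, 0.80, 0.85, 0.90])

counts, edges = np.histogram(sims, bins=5, range=(0.0, 1.0))
freq = counts / counts.sum()        # frequency of each bin
heights = freq / np.diff(edges)     # vertical axis: frequency / bin width
areas = heights * np.diff(edges)    # bar areas recover the frequencies
```

Plotting `heights` against the bin edges gives exactly the kind of histogram shown in Fig. 3, with total bar area 1.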
As an example, the Wasserstein distance is a formula for measuring the distance between two distributions. It can naturally measure the distance between a discrete distribution and a continuous distribution, and it corresponds to continuously transforming one distribution into the other at minimal cost while maintaining the geometric characteristics of the distributions themselves.
The calculation formula of the Wasserstein distance is as follows:

WD(P_{x_i}, Q_{x_i}) = inf_{γ ∈ Π(P_{x_i}, Q_{x_i})} E_{(p, q) ~ γ} [ ||p - q|| ]

where P_{x_i} is the intra-class similarity distribution, Q_{x_i} is the inter-class similarity distribution, x_i represents the ith face image, and Π(P_{x_i}, Q_{x_i}) is the set of all possible joint distributions combining P_{x_i} and Q_{x_i}. For each possible joint distribution γ, a pair of samples (p, q) ~ γ can be drawn, yielding a sample p and a sample q; the distance ||p - q|| between the pair of samples is calculated, and then the expected value E of the sample-pair distance under this joint distribution γ can be computed. The Wasserstein distance is the infimum of this expected value over all joint distributions.
Thus, for example, where the score of the quality evaluation result lies between 0 and 100, the expression for further obtaining the quality evaluation result is as follows:

S(x_i) = δ(WD(P_{x_i}, Q_{x_i}))

where δ is a normalization function that maps the bulldozer distance into the [0, 100] score range, for example a min-max normalization over the set of distances:

δ(z) = 100 · (z - min L) / (max L - min L)

where WD denotes the bulldozer distance and L denotes the set of bulldozer distances of the distributions of each combination.
As an example, the intra-class images and the inter-class images may be obtained as follows. Any face image M and all the images in the face database are input to the face recognition network, and features are extracted through its penultimate layer (the last layer is a classification layer and is not needed here). Then, the features of the face image M are compared with the features of all the images in the face database and the similarities are calculated. Next, it is determined according to the similarity (e.g., cosine similarity) which person in the face database the face image M is most similar to, for example person A; the face images of person A in the face database are then the intra-class images of the face image M, and the face images of the other people in the face database serve as its inter-class images.
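This intra-/inter-class split can be sketched as follows: the query is assigned to the most similar database identity, and the database indices are partitioned accordingly (all names are illustrative):

```python
import numpy as np

def split_intra_inter(query_feat, db_feats, db_labels):
    """Assign the query face to its most similar database identity,
    then split database indices into intra-class (same identity) and
    inter-class (other identities) sets."""
    db_feats = np.asarray(db_feats, dtype=float)
    query_feat = np.asarray(query_feat, dtype=float)
    sims = db_feats @ query_feat / (np.linalg.norm(db_feats, axis=1)
                                    * np.linalg.norm(query_feat))
    identity = db_labels[int(np.argmax(sims))]
    intra = [i for i, lab in enumerate(db_labels) if lab == identity]
    inter = [i for i, lab in enumerate(db_labels) if lab != identity]
    return identity, intra, inter
```

The returned index lists select the embeddings used to form the intra-class and inter-class similarity distributions for the query image.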
As an example, with the face recognition networks described above, the workflow includes an algorithm preparation phase, an algorithm training phase, and an algorithm application phase.
Algorithm preparation phase: a suitable face recognition algorithm model (i.e., a face recognition network) is pre-selected according to business requirements, for example a deep learning algorithm model. Preferably, since the algorithm model is deployed on the device side, the smaller the model the better, provided recognition accuracy is maintained. After a suitable algorithm model is selected, training data is prepared, data preprocessing (including data cleaning) is carried out, and the network structure is built. Moreover, the image data used for training the model consists of aligned face images: after a face image is input, the face is detected by a detection algorithm, the positions of the key points are obtained by a key point detection model, the aligned face is then obtained by a transformation function, and this aligned face serves as the input of the face recognition algorithm model.
Algorithm training phase: reasonable hyper-parameters and loss functions are set, and the loss and the false recognition rate are monitored as they change during training, so that a model with a low false recognition rate can be obtained. To this end, the depth and width of the model and its related parameters are continuously adjusted during training until the trained model (i.e., a model with a low false recognition rate) is obtained.
Algorithm application phase: the trained face recognition algorithm models are used to extract features from the face image data; the similarities between each image and its intra-class and inter-class images are calculated to obtain the intra-class and inter-class similarity distributions; the distance between the intra-class similarity distribution and the inter-class similarity distribution is then calculated and the quality score obtained from this distance; finally, the quality scores obtained from the different models are weighted-averaged to obtain the final quality evaluation result of the image.
In conclusion, the embodiments of the present application integrate the results of a plurality of face recognition networks to evaluate face image quality, which ensures the accuracy and reliability of the quality evaluation result. Manually checking the evaluated quality scores further safeguards that accuracy and reliability. In addition, the final quality evaluation result is obtained from several trained face recognition networks, the number of which is selectable, and the evaluation process is easy to operate.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the unsupervised face image quality evaluation method in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. A computer program is stored on the storage medium; when executed by a processor, the computer program implements any of the unsupervised face image quality evaluation methods of the above embodiments.
An embodiment of the present application also provides an electronic device, which may be a terminal. The electronic device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the electronic device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement an unsupervised face image quality assessment method. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 4 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present application. As shown in fig. 4, an electronic device is provided, which may be a server. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus; the non-volatile memory stores an operating system, a computer program and a database. The processor provides computing and control capabilities, the network interface communicates with external terminals over a network, the internal memory provides an environment for running the operating system and the computer program, and the database stores data. When executed by the processor, the computer program implements an unsupervised face image quality evaluation method.
Those skilled in the art will appreciate that the configuration shown in fig. 4 is a block diagram of only a portion of the configuration associated with the present application and does not limit the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that the technical features of the above embodiments can be combined in any manner. For brevity, not all possible combinations are described; however, any combination of these features that is not contradictory should be considered within the scope of the present disclosure.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. Those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An unsupervised face image quality evaluation method, characterized by comprising the following steps:
extracting a first feature from a face image through a face recognition network, wherein there are a plurality of face recognition networks;
calculating an intra-class similarity distribution according to the first feature and second features extracted by the face recognition network from intra-class images of the face image;
calculating an inter-class similarity distribution according to the first feature and third features extracted by the face recognition network from inter-class images of the face image;
calculating a quality score according to the intra-class similarity distribution and the inter-class similarity distribution; and
performing a weighted average on the plurality of quality scores obtained from the plurality of face recognition networks to obtain a quality evaluation result.
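For illustration only, the claimed per-network scoring followed by averaging might be sketched as below; the helper names, the pluggable `score_fn`, and the use of plain callables as stand-ins for trained recognition networks are assumptions, not part of the claim:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def quality_score(face, intra_faces, inter_faces, networks, score_fn):
    """Evaluate one face image: one score per recognition network, then average.

    networks -- list of feature extractors (stand-ins for trained nets)
    score_fn -- maps (intra similarities, inter similarities) to a score,
                e.g. an earth mover's distance as in claim 4
    """
    per_net = []
    for embed in networks:
        f1 = embed(face)                                      # first feature
        intra = [cos_sim(f1, embed(x)) for x in intra_faces]  # intra-class sims
        inter = [cos_sim(f1, embed(x)) for x in inter_faces]  # inter-class sims
        per_net.append(score_fn(intra, inter))
    return sum(per_net) / len(per_net)  # uniform weighted average
```

With pre-extracted embeddings the inner loops reduce to vectorized dot products; the sketch favors readability over speed.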
2. The method of claim 1, wherein calculating the intra-class similarity distribution according to the first feature and the second features extracted by the face recognition network from the intra-class images of the face image comprises:
extracting the second features from the intra-class images of the face image through the face recognition network; and
calculating cosine similarities between the first feature and the second features to obtain the intra-class similarity distribution.
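For illustration, the cosine-similarity step of claims 2 and 3 might look as follows (the function name and NumPy usage are assumptions; applying the same routine to inter-class features yields the inter-class distribution of claim 3):

```python
import numpy as np

def similarity_distribution(first_feature, other_features):
    """Cosine similarity between the query feature and each comparison feature.

    The returned values form the empirical similarity distribution:
    intra-class when `other_features` are same-person features,
    inter-class when they belong to other persons.
    """
    f = np.asarray(first_feature, dtype=float)
    f = f / np.linalg.norm(f)
    sims = []
    for g in other_features:
        g = np.asarray(g, dtype=float)
        sims.append(float(np.dot(f, g / np.linalg.norm(g))))
    return sims
```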
3. The method of claim 1, wherein calculating the inter-class similarity distribution according to the first feature and the third features extracted by the face recognition network from the inter-class images of the face image comprises:
extracting the third features from the inter-class images of the face image through the face recognition network; and
calculating cosine similarities between the first feature and the third features to obtain the inter-class similarity distribution.
4. The method of claim 1, wherein calculating the quality score according to the intra-class similarity distribution and the inter-class similarity distribution comprises:
calculating an earth mover's distance (Wasserstein distance) between the intra-class similarity distribution and the inter-class similarity distribution; and
calculating the quality score according to the earth mover's distance.
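The earth mover's distance of claim 4 is the 1-D Wasserstein-1 distance between the two empirical similarity distributions: the further apart they are, the more recognizable, hence higher quality, the image. A minimal sketch, assuming equal sample counts (the general unequal-count case could use `scipy.stats.wasserstein_distance` instead):

```python
import numpy as np

def emd_1d(u, v):
    """Earth mover's (1-D Wasserstein-1) distance between two equal-weight
    sample sets. With equal sample counts, W1 reduces to the mean absolute
    difference between sorted samples (order statistics)."""
    u = np.sort(np.asarray(u, dtype=float))
    v = np.sort(np.asarray(v, dtype=float))
    assert len(u) == len(v), "sketch assumes equal sample counts"
    return float(np.mean(np.abs(u - v)))
```

Well-separated distributions (e.g. intra-class similarities near 1, inter-class near 0) give a large distance and thus a high quality score.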
5. The method of claim 4, wherein the quality score is calculated by the following formula:
[formula image not reproduced in source]
where δ is expressed as the following function:
[formula image not reproduced in source]
and the expression of WD is as follows:
[formula image not reproduced in source]
where WD represents the earth mover's distance between the intra-class similarity distribution and the inter-class similarity distribution, x_i represents the i-th face image, and L represents the set of joint distributions over which the earth mover's distance is computed.
6. The method of claim 1, wherein before extracting the first feature from the face image through the face recognition network, the method further comprises:
inputting any face image into a face recognition network, comparing the feature of the face image with the features of all images in a face database, and calculating similarities; and
determining, according to the similarities, the person in the face database to whom the face image corresponds; taking that person's face images in the face database as the intra-class images of the face image, and the face images of other persons in the face database as the inter-class images of the face image.
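The gallery partition of claim 6 might be sketched as follows; the `(person_id, feature)` gallery layout and the nearest-identity rule by maximum cosine similarity are assumptions for illustration:

```python
import numpy as np

def split_gallery(query_feat, gallery):
    """Partition a face database into intra-class and inter-class features for
    a query, by nearest-identity match on cosine similarity.

    gallery -- list of (person_id, feature) pairs
    """
    def cos(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = [cos(query_feat, f) for _, f in gallery]
    matched_id = gallery[int(np.argmax(sims))][0]       # person judged to match
    intra = [f for pid, f in gallery if pid == matched_id]
    inter = [f for pid, f in gallery if pid != matched_id]
    return matched_id, intra, inter
```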
7. The method of claim 1, wherein the face recognition network comprises at least one of InsightFace and FaceNet.
8. The method of claim 1, wherein the loss function of the face recognition network comprises at least one of ArcFace and CosFace.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any one of claims 1 to 8 when executed.
CN202110439127.3A 2021-04-23 2021-04-23 Unsupervised face image quality evaluation method, electronic device and storage medium Pending CN113139462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110439127.3A CN113139462A (en) 2021-04-23 2021-04-23 Unsupervised face image quality evaluation method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN113139462A true CN113139462A (en) 2021-07-20

Family

ID=76813398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110439127.3A Pending CN113139462A (en) 2021-04-23 2021-04-23 Unsupervised face image quality evaluation method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113139462A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545570A (en) * 2017-08-31 2018-01-05 中国地质大学(武汉) A kind of reconstructed image quality evaluation method of half reference chart
CN109189961A (en) * 2018-07-23 2019-01-11 上海斐讯数据通信技术有限公司 A kind of calculation method and system of recognition of face confidence level
CN109271891A (en) * 2018-08-30 2019-01-25 成都考拉悠然科技有限公司 A kind of dynamic face supervision method and system
CN109948564A (en) * 2019-03-25 2019-06-28 四川川大智胜软件股份有限公司 It is a kind of based on have supervision deep learning quality of human face image classification and appraisal procedure
CN110490177A (en) * 2017-06-02 2019-11-22 腾讯科技(深圳)有限公司 A kind of human-face detector training method and device
CN111259815A (en) * 2020-01-17 2020-06-09 厦门中控智慧信息技术有限公司 Method, system, equipment and medium for evaluating quality of face image
CN111696090A (en) * 2020-06-08 2020-09-22 电子科技大学 Method for evaluating quality of face image in unconstrained environment
CN111967381A (en) * 2020-08-16 2020-11-20 云知声智能科技股份有限公司 Face image quality grading and labeling method and device
CN112381782A (en) * 2020-11-11 2021-02-19 腾讯科技(深圳)有限公司 Human face image quality evaluation method and device, computer equipment and storage medium
CN112561813A (en) * 2020-12-10 2021-03-26 深圳云天励飞技术股份有限公司 Face image enhancement method and device, electronic equipment and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505720A (en) * 2021-07-22 2021-10-15 浙江大华技术股份有限公司 Image processing method and device, storage medium and electronic device
CN113706502A (en) * 2021-08-26 2021-11-26 重庆紫光华山智安科技有限公司 Method and device for evaluating quality of face image
CN113706502B (en) * 2021-08-26 2023-09-05 重庆紫光华山智安科技有限公司 Face image quality assessment method and device
CN115953819A (en) * 2022-12-28 2023-04-11 中国科学院自动化研究所 Training method, device and equipment of face recognition model and storage medium
CN115953819B (en) * 2022-12-28 2023-08-15 中国科学院自动化研究所 Training method, device, equipment and storage medium of face recognition model
CN117078669A (en) * 2023-10-13 2023-11-17 脉得智能科技(无锡)有限公司 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus

Similar Documents

Publication Publication Date Title
CN110188641B (en) Image recognition and neural network model training method, device and system
CN113139462A (en) Unsupervised face image quality evaluation method, electronic device and storage medium
CN110599451B (en) Medical image focus detection and positioning method, device, equipment and storage medium
CN108846355B (en) Image processing method, face recognition device and computer equipment
CN108647583B (en) Face recognition algorithm training method based on multi-target learning
JP2022502751A (en) Face keypoint detection method, device, computer equipment and computer program
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN108537264A (en) Heterologous image matching method based on deep learning
CN113269149B (en) Method and device for detecting living body face image, computer equipment and storage medium
CN111783748A (en) Face recognition method and device, electronic equipment and storage medium
CN107292299B (en) Side face recognition methods based on kernel specification correlation analysis
CN111028218B (en) Fundus image quality judgment model training method, fundus image quality judgment model training device and computer equipment
CN113179421B (en) Video cover selection method and device, computer equipment and storage medium
CN113221645B (en) Target model training method, face image generating method and related device
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN111178187A (en) Face recognition method and device based on convolutional neural network
CN112733700A (en) Face key point detection method and device, computer equipment and storage medium
CN109961103B (en) Training method of feature extraction model, and image feature extraction method and device
CN108010015A (en) One kind refers to vein video quality evaluation method and its system
Chen et al. Learning to rank retargeted images
CN111340748A (en) Battery defect identification method and device, computer equipment and storage medium
CN112613445A (en) Face image generation method and device, computer equipment and storage medium
CN114049544A (en) Face quality evaluation method, device, equipment and medium based on feature comparison
Liu et al. Ranking-preserving cross-source learning for image retargeting quality assessment
CN116386117A (en) Face recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720