CN111814620A - Face image quality evaluation model establishing method, optimization method, medium and device - Google Patents

Info

Publication number: CN111814620A (granted as CN111814620B)
Application number: CN202010601590.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: Li Yapeng (李亚鹏), Wang Ningbo (王宁波)
Original and current assignee: Zhejiang Dahua Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: face image, face, evaluation model, sequence, quality evaluation
Application filed by Zhejiang Dahua Technology Co Ltd; priority to CN202010601590.9A; published as CN111814620A and, upon grant, as CN111814620B

Classifications

    • G06V40/168 — Image or video recognition; human faces: feature extraction, face representation
    • G06N3/045 — Neural networks: architecture, combinations of networks
    • G06N3/08 — Neural networks: learning methods
    • G06V40/161 — Image or video recognition; human faces: detection, localisation, normalisation
    • G06V40/172 — Image or video recognition; human faces: classification, e.g. identification
    • Y02T10/40 — Climate change mitigation in transport: engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application belongs to the technical field of image recognition and relates to a method for establishing a face image quality evaluation model, together with an optimization method, a medium and a device. The model establishing method comprises the following steps: acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person; extracting the depth feature of each face image with a face recognition model; comparing the depth feature of each face image with the standard face feature of the same person and calculating the similarity; obtaining each person's face image quality sequence according to the similarity of each face image; inputting the face image sequence set into the quality evaluation model and outputting each person's predicted score sequence; and comparing the face image quality sequence with the predicted score sequence to train the quality evaluation model until the two sequences are consistent. The resulting scores better reflect face image quality and ease subsequent face recognition.

Description

Face image quality evaluation model establishing method, optimization method, medium and device
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a face image quality evaluation model establishing method, a face image optimization method, and a corresponding medium and device.
Background
With the rapid development of science and technology and the arrival of the big data era, information security has become increasingly important. As a safe, contactless, convenient, friendly and efficient mode of identity authentication, face recognition is widely applied in all aspects of social life. In a real scene, the number of people appearing in a surveillance video, and the number of face pictures of each person, are very large. If the best one or several faces can be selected from a face image sequence for subsequent recognition, computing and storage resources are saved and recognition efficiency and performance improve, so face image optimization becomes more and more important.
Disclosure of Invention
The technical problem mainly addressed by the application is how to select the best-quality face image for subsequent recognition; to this end, a face image quality evaluation model establishing method, an optimization method, a medium and a device are provided.
To solve this technical problem, the application adopts a technical scheme: a method for establishing a face image quality evaluation model is provided, comprising the following steps:
acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person;
extracting the depth feature of each face image based on the face recognition model;
comparing the depth features corresponding to each face image with the standard face features of the same person respectively, and calculating the similarity;
obtaining the quality sequence of the face images of each person according to the corresponding similarity of each face image;
inputting the facial image sequence set into a quality evaluation model, and outputting the predicted score sequence of each person;
and comparing the facial image quality sequence with the prediction score sequence to train a quality evaluation model, so that the prediction score sequence is consistent with the facial image quality sequence.
The application also includes a second technical scheme, a face image optimization method, comprising the following steps: performing score ordering on a face image sequence by using a quality evaluation model established by the above method;
and selecting the face image ranked first in the score ordering as the face image with the best quality.
The application further includes a third technical scheme, a storage medium in which a computer program is stored, the computer program being executed to implement the above face image optimization method.
The present application also includes a fourth technical solution, and a computing apparatus includes at least one processing unit and at least one storage unit, where the storage unit stores a computer program, and when the program is executed by the processing unit, the processing unit executes the steps of the above-mentioned face image optimization method.
The beneficial effects of this application are: unlike the prior art, the face image quality evaluation model establishing method of the embodiments directly extracts, through a face recognition model, deep abstract features with strong representation capability, which better reflect face quality and avoid complex hand-crafted feature extraction. Training is performed directly on the face image quality sequence and the predicted score sequence, so that the scores of the trained quality evaluation model are consistent with face image quality. The score reflects how easily an image in the face image sequence can be recognized: the higher the score, the better the face image quality and the easier the recognition.
Drawings
FIG. 1 is a schematic step diagram of an embodiment of a face image quality evaluation model building method according to the present application;
FIG. 2 is a schematic structural diagram of an embodiment of a deep convolutional neural network model of the present application;
FIG. 3 is a schematic step diagram of another embodiment of a face image quality evaluation model building method according to the present application;
FIG. 4 is a schematic step diagram of a further embodiment of a face image quality evaluation model building method according to the present application;
FIG. 5 is a schematic step diagram of an embodiment of a face image optimization method according to the present application;
FIG. 6 is a schematic step diagram of another embodiment of a face image optimization method according to the present application;
FIG. 7 is a block diagram of an embodiment of a computer storage medium according to the present application;
FIG. 8 is a block diagram of an embodiment of a computing device according to the present application.
Detailed Description
In order to make the purpose, technical solutions and effects of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments.
In an actual scene, the number of people appearing in a monitoring video and the number of face pictures of each person are very large, and if the best one or more faces can be selected from a face image sequence for subsequent recognition, calculation and storage resources are saved, and the recognition efficiency and the recognition performance can be improved.
In the prior art, there are generally three ways to obtain face quality labels. The first is manual scoring. Its advantage is that it directly reflects human judgment of picture quality, so the resulting scoring model matches human subjective perception; its drawbacks are that for today's million-scale data sets manual scoring is laborious, time-consuming and error-prone, and human perception does not always agree with how well an algorithm can recognize a face picture. The second is face quality classification, which simply sorts faces into several categories by quality to obtain category labels. The third is programmatic generation, which adds different types of noise and different degrees of blur to an original face picture, or rotates the face with a 3D method, to generate face images of varying quality, and derives the face label from the degree of transformation.
In order to solve the above technical problem, an embodiment of the present application provides a technical solution that: as shown in fig. 1, a method for establishing a facial image quality evaluation model is provided, and the evaluation model establishment method includes:
step 110: and acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person.
In the embodiment of the application, a video face sequence set is obtained from a real scene. In a video face sequence set of M persons, the sequence of the i-th person has n_i face pictures (x_1, x_2, ..., x_{n_i}); that is, a face image sequence set (x_1, x_2, ..., x_{n_i}) containing the pictures of the i-th person is obtained.
Step 120: and extracting the depth characteristic of each face image based on the face recognition model.
In the embodiment of the application, a face recognition model is adopted to extract the depth feature of each face image; the depth feature is a deep abstract feature with strong representation capability and best reflects face image quality. The face recognition model in the embodiment of the application may be any one of the DeepID, VGGFace and FaceNet models, all three of which have good face recognition performance. In other embodiments, other types of face recognition models may be used; the embodiment of the application does not limit the type of face recognition model.
Step 130: and comparing the depth features corresponding to each face image with the standard face features of the same person respectively, and calculating the similarity.
The standard face feature of a person in the embodiment of the application can be obtained from a pre-stored standard face image of that person; for example, the standard face image can be a certificate photo or another high-definition photo. One or more standard face images may be pre-stored, for example a certificate photo and a clear everyday photo. In the embodiment of the application, the depth feature of the pre-stored standard face image is extracted directly through the face recognition model and used as the standard face feature. Preferably, the face recognition model used on each face image is the same one used on the standard face image. In other embodiments, the standard face feature of a person may also be pre-stored directly.
In the embodiment of the application, the depth characteristics corresponding to each face image are respectively compared with the standard face characteristics of the same person, and the similarity is calculated, so that the similarity between each face image and the standard face image can be judged.
In the embodiment of the application, the similarity is preferably computed as cosine similarity, which well reflects how close the depth feature of each face image is to the standard face feature.
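As an illustration only (not part of the patent text), the cosine similarity between a depth feature and the standard face feature described above can be sketched as follows; the function name and the use of NumPy are assumptions of the sketch.

```python
import numpy as np

def cosine_similarity(feat, standard_feat):
    """Cosine similarity between a face depth feature and the
    standard (reference) face feature of the same person."""
    feat = np.asarray(feat, dtype=float)
    standard_feat = np.asarray(standard_feat, dtype=float)
    return float(np.dot(feat, standard_feat) /
                 (np.linalg.norm(feat) * np.linalg.norm(standard_feat)))
```

For identical feature vectors the similarity is 1; for orthogonal vectors it is 0.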
Step 140: and obtaining the quality sequence of the face images of each person according to the corresponding similarity of each face image.
According to the similarity of each face image, the quality labels (y_1, y_2, ..., y_{n_i}) of the face images of the same person can be obtained; by the same principle, quality labels for every person in the whole data set follow. In the embodiment of the application, the similarities of the face images are sorted: the higher the similarity, the closer the face image is to the standard face image and the higher its quality. From the similarity ordering, the face image quality sequence of the same person is obtained; the similarity ordering is positively correlated with the face image quality sequence. Similarly, the face image quality sequence of each person can be obtained.
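The derivation of a quality ordering from the per-image similarities described above can be sketched as follows (an illustrative sketch, not the patent's implementation; the function name is hypothetical):

```python
import numpy as np

def quality_ranking(similarities):
    """Rank the face images of one person by their similarity to the
    standard face: higher similarity means higher quality.
    Returns image indices ordered from best to worst quality."""
    return list(np.argsort(similarities)[::-1])
```

For example, similarities [0.2, 0.9, 0.5] would rank image 1 best, then image 2, then image 0.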
Step 150: and inputting the facial image sequence set into a quality evaluation model, and outputting the predicted score sequence of each person.
As shown in fig. 2, the quality evaluation model in the embodiment of the present application is a simple deep convolutional neural network comprising four convolutional layers 11, 12, 13 and 14 and a fully connected layer 20. The model is smaller than 400 kB and can run in real time on most current embedded devices. Being small and fast, the quality evaluation model can process face images in real time, score the quality of each face image in the face image sequence set, and output the predicted score sequence of each person.
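The stated size bound (under 400 kB) can be sanity-checked with a back-of-the-envelope parameter count for a four-conv-plus-FC network. The channel widths, 64x64 input and stride-2 convolutions below are assumptions for illustration, not taken from the patent:

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of one k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def model_size_bytes():
    """Rough float32 size of a four-conv + one-FC quality network.
    Channel widths (3->8->16->32->64) and a 64x64 input downsampled
    by four stride-2 convolutions are assumptions, not patent facts."""
    params = 0
    channels = [3, 8, 16, 32, 64]
    for c_in, c_out in zip(channels, channels[1:]):
        params += conv_params(c_in, c_out)   # four conv layers
    params += 4 * 4 * 64 * 1 + 1             # FC: 4x4x64 feature map -> 1 score
    return params * 4                        # 4 bytes per float32 weight
```

Under these assumptions the network holds about 25.5k parameters (~100 kB), comfortably within the 400 kB budget mentioned in the text.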
Continuing as shown in FIG. 1, step 160: and comparing the facial image quality sequence with the prediction score sequence to train a quality evaluation model, so that the prediction score sequence is consistent with the facial image quality sequence.
In the embodiment of the application, the deep convolutional neural network model is trained by comparing the quality sequence of the face image with the prediction score sequence, so that when the prediction score sequence and the quality sequence of the face image tend to be consistent, the deep convolutional neural network model can better evaluate the face image, and the accuracy of the prediction performance of the prediction score of the quality evaluation model can be improved.
According to the method for establishing the face image quality evaluation model, deep abstract features with strong representation capability are extracted directly through a face recognition model; they better reflect face quality and avoid complex hand-crafted feature extraction. Training is performed directly on the face image quality sequence and the predicted score sequence, so that the scores of the trained quality evaluation model are consistent with face image quality. The score reflects how easily an image in the face image sequence can be recognized: the higher the score, the better the face image quality and the easier the recognition.
In a preferred embodiment of the present application, to reduce the influence of any single face recognition model on the extracted depth features, step 120 (extracting the depth feature of each face image based on a face recognition model) and step 130 (comparing the depth features with the standard face features of the same person and calculating the similarity) are refined as shown in fig. 3:
step 120': and respectively extracting the depth characteristics of each face image by adopting at least two face recognition models.
In the embodiment of the application, three high-performance face recognition models, DeepID, VGGFace and FaceNet, are adopted to extract the depth features of each face image respectively. In other embodiments, any two of DeepID, VGGFace and FaceNet may be used. Of course, in further embodiments, two or more face recognition models of other types may be used to extract the depth features of each image.
Step 130': respectively comparing the depth features of each face image extracted by each face recognition model with the standard face features of the same person, and calculating the similarity; each face image corresponds to at least two kinds of similarity, and the average value of the at least two kinds of similarity is used as the corresponding similarity of the face images.
In the embodiment of the application, the three high-performance face recognition models DeepID, VGGFace and FaceNet each extract the depth feature of each face image, and each also extracts a feature from the standard face image of the same person, yielding three standard face features. The depth feature extracted by DeepID is compared with the standard face feature extracted by DeepID to obtain a first similarity; the depth feature extracted by VGGFace is compared with the standard face feature extracted by VGGFace to obtain a second similarity; and the depth feature extracted by FaceNet is compared with the standard face feature extracted by FaceNet to obtain a third similarity. The first, second and third similarities are averaged to obtain the similarity of the face image. By extracting depth features with three high-performance face recognition models, obtaining three similarities, and taking their average as the similarity of each face image, the influence of any single model's inherent characteristics on depth feature extraction is reduced, so the quality evaluation model is trained on more objective labels and becomes more accurate.
In other embodiments, two, or more than three, similarities may be obtained through two, or more than three, face recognition models, and their average taken as the similarity of the face image.
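The multi-model similarity fusion described above, where each recognition model is compared only against its own standard feature and the results are averaged, can be sketched as follows (function names and the use of NumPy are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_similarity(face_feats, standard_feats):
    """face_feats[m] and standard_feats[m] are the depth features that
    the m-th recognition model (e.g. DeepID, VGGFace, FaceNet) extracts
    from the face image and from the standard image, respectively.
    Each model is compared only against its own standard feature;
    the per-model similarities are then averaged."""
    sims = [cosine(f, s) for f, s in zip(face_feats, standard_feats)]
    return sum(sims) / len(sims)
```

Averaging across models damps the bias of any single model's feature space, which is the motivation the text gives for the fusion.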
Step 160, comparing the face image quality sequence with the predicted score sequence to train the quality evaluation model so that the two tend to be consistent, comprises:
and constructing a cosine loss function representing the difference degree of the quality sequence of the face image and the prediction score sequence, and training a quality evaluation model through the cosine loss function.
The cosine loss function is calculated as follows:

    L_cosine(i) = 1 − (Σ_{j=1}^{n_i} y_j s_j) / ( sqrt(Σ_{j=1}^{n_i} y_j²) · sqrt(Σ_{j=1}^{n_i} s_j²) )

where L_cosine(i) denotes the cosine loss of the i-th person in the face image sequence set; the face image sequence of the i-th person has n_i face pictures (x_1, x_2, ..., x_{n_i}), the corresponding quality labels are (y_1, y_2, ..., y_{n_i}), and the predicted score sequence is (s_1, s_2, ..., s_{n_i}).
The second term of the formula is the vector cosine of the label sequence and the score sequence. The cosine loss is insensitive to the absolute values of the scores: when the distribution of the predicted score sequence is consistent with that of the quality label sequence, the second term equals 1 and the loss is 0; when the predicted score distribution is exactly opposite to the quality label distribution, the second term equals −1 and the loss reaches its maximum of 2. The more consistent the predicted score distribution is with the quality label distribution, the smaller the loss. Therefore, as training of the deep convolutional neural network converges and the loss decreases, the model's predicted score distribution is driven toward the quality label distribution, improving its prediction performance. When the two distributions are consistent, the model's output score effectively represents the recognition difficulty of a face image, i.e. its quality.
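The cosine loss described in this section can be sketched numerically as follows (an illustrative sketch; the function name is hypothetical). It is 0 when the predicted scores are proportional to the quality labels and reaches its maximum of 2 when they point in exactly opposite directions:

```python
import numpy as np

def cosine_loss(quality_labels, predicted_scores):
    """L_cosine(i) = 1 - cos(y, s): 0 when the predicted score sequence
    is distributed like the quality labels, 2 when exactly opposite."""
    y = np.asarray(quality_labels, dtype=float)
    s = np.asarray(predicted_scores, dtype=float)
    return float(1.0 - (y @ s) / (np.linalg.norm(y) * np.linalg.norm(s)))
```

Because the loss depends only on the cosine of the two sequences, rescaling all predicted scores by a positive constant leaves it unchanged, matching the text's remark that it is insensitive to absolute score values.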
As a specific embodiment of the present application, as shown in fig. 4, a method for establishing a facial image quality evaluation model includes the following steps:
step 610: and acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person. Wherein the face image sequence set may be a video face sequence set.
Step 620: three face recognition models are adopted to extract the depth features of each face image respectively. The three models adopted in the embodiment of the application are DeepID, VGGFace and FaceNet.
Step 710: and acquiring a standard face image. The standard face image is pre-stored in a standard face image library, and can be extracted from the standard face image library.
Step 720: the same three face recognition models are adopted to extract the depth features of the standard face image respectively. In the embodiment of the application, DeepID, VGGFace and FaceNet each extract a depth feature from the standard face image.
Step 630: the depth features of each face image extracted by the three face recognition models are compared with the corresponding standard face depth features of the same person, and three similarities are calculated. Specifically, the depth feature extracted by DeepID is compared with the standard face depth feature extracted by DeepID to obtain one similarity; the depth feature extracted by VGGFace is compared with the standard face depth feature extracted by VGGFace to obtain another; and the depth feature extracted by FaceNet is compared with the standard face depth feature extracted by FaceNet to obtain a third, giving three similarities in total.
Step 631: the three similarities are averaged; the resulting mean is the similarity of the face image.
Step 640: and obtaining the quality sequence of the face images of each person according to the corresponding similarity of each face image.
Step 650: and inputting the facial image sequence set into a quality evaluation model, and outputting the predicted score sequence of each person.
Step 660: and constructing a cosine loss function representing the difference degree of the quality sequence of the face image and the prediction score sequence, and training a quality evaluation model through the cosine loss function.
The embodiment of the present application further includes a second technical solution, and a method for optimizing a face image, as shown in fig. 5, includes:
step 410: and performing score sequencing on the human face image sequence by using the quality evaluation model established by the human face image quality evaluation model establishing method.
In actual face image optimization, a face image sequence is input into the quality evaluation model established by the above model establishing method, each face image in the sequence is scored to obtain the score sequence of the face image sequence, and the score sequence is sorted to obtain the face image sequence ordered by score.
Step 420: and selecting the face image which is ranked at the front in the score ranking as the face image with the best quality.
The face image ranked first in the score ordering, i.e. the one with the highest score, is the best face image, and its quality is the best. In general, the higher the score, the higher the rank and the better the quality of the corresponding face image.
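Selecting the top-ranked face image(s) by score, as described above, can be sketched as follows (illustrative only; the function name and the top_k parameter are assumptions):

```python
import numpy as np

def select_best_faces(scores, top_k=1):
    """Return the indices of the top_k highest-scoring face images
    in a sequence, i.e. the best-quality faces for recognition."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    return list(order[:top_k])
```

With top_k greater than 1 this realizes the "best one or several faces" selection mentioned in the background section.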
According to the face image optimization method, the constructed face image quality evaluation model learns deep abstract features of the face image with strong representation capability, avoiding the extraction of complex hand-designed features and realizing end-to-end scoring: an image can be input into the quality evaluation model and its score obtained directly. The scores distinguish gradual changes of face quality within a face image sequence, so the image with the best face quality can be selected by score, which is convenient, fast and accurate.
In the embodiment of the present application, as shown in fig. 6, before step 410 the method further includes step 400: acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person; the face image sequence set may be a video face sequence set.
The embodiment of the present application further includes a third technical solution, as shown in fig. 7, a computer storage medium 500, where a computer program 510 is stored in the computer storage medium 500, and the computer program is used to be executed to implement the above-mentioned face image optimization method.
Based on such understanding, all or part of the flow of the methods in the embodiments described above can also be implemented by a computer program 510 instructing related hardware. The computer program 510 can be stored in a computer readable storage medium, and when executed by a processor implements the steps of the above method embodiments. The computer program 510 comprises computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer readable medium may be suitably increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The present application further provides a fourth technical solution. As shown in fig. 8, a computing apparatus 600 includes at least one processing unit 610 and at least one storage unit 620. The storage unit 620 stores a computer program which, when executed by the processing unit 610, causes the processing unit 610 to execute the steps of the above-mentioned face image optimization method.
The processing unit 610 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processing unit 610 may be any conventional processor. The processing unit 610 is the control center of the computing apparatus and connects the various parts of the entire apparatus through various interfaces and lines.
The storage unit 620 may be used to store computer programs and/or modules, and the processing unit 610 implements the various functions of the computing apparatus by running or executing the computer programs and/or modules stored in the storage unit 620 and calling the data stored in the storage unit 620. The storage unit 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the apparatus. In addition, the storage unit 620 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The computing apparatus 600 may also include a power component configured to perform power management of the apparatus, a wired or wireless network interface configured to connect the apparatus to a network, and an input/output (I/O) interface. The apparatus may operate based on an operating system stored in the storage unit, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A face image quality evaluation model establishing method is characterized by comprising the following steps:
acquiring a face image sequence set, wherein the face image sequence set comprises a plurality of face images of the same person;
extracting the depth feature of each face image based on a face recognition model;
comparing the depth features corresponding to each face image with the standard face features of the same person respectively, and calculating the similarity;
obtaining the quality ranking of the face images of each person according to the similarity corresponding to each face image;
inputting the face image sequence set into the quality evaluation model, and outputting a prediction score sequence for each person;
and comparing the face image quality ranking with the prediction score ranking to train the quality evaluation model, so that the prediction score ranking tends to be consistent with the face image quality ranking.
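The label-construction steps of claim 1 can be sketched as follows; deep features are assumed to be given as vectors, and the cosine form of the similarity is the one named in claim 4 (function names are illustrative, not from the patent):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def quality_ranking(features, standard_feature):
    """Compare each face image's deep feature with the person's standard
    face feature, then rank the faces from most to least similar.

    Returns the per-image similarities and the index order (best first),
    which serves as the quality ranking used to supervise training.
    """
    sims = [cosine_similarity(f, standard_feature) for f in features]
    order = sorted(range(len(sims)), key=lambda j: sims[j], reverse=True)
    return sims, order
```

The resulting `order` is the face image quality ranking that the model's prediction score ranking is trained to match.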
2. The method for establishing the facial image quality evaluation model according to claim 1, wherein the extracting the depth feature of each facial image based on the facial recognition model comprises:
respectively extracting the depth characteristics of each face image by adopting at least two face recognition models;
comparing the depth features corresponding to each face image with the standard face features of the same person respectively, and calculating the similarity, wherein the similarity comprises the following steps:
respectively comparing the depth features of each face image extracted by each face recognition model with the standard face features of the same person, and calculating the similarity; each face image thus corresponds to at least two similarities, and the average of these similarities is used as the similarity corresponding to that face image.
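The averaging step of claim 2 might look like the following sketch; `feature_sets` holds, per recognition model, the feature vectors extracted for the same N face images, and `standards` the matching standard features for that person (the names are illustrative assumptions, not from the patent):

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def averaged_similarities(feature_sets, standards):
    """For each face image, average the similarities computed against
    the standard feature of each of the (at least two) recognition models.

    feature_sets[m][j] is the feature of image j from model m;
    standards[m] is the standard face feature extracted by model m.
    """
    n_images = len(feature_sets[0])
    averages = []
    for j in range(n_images):
        sims = [cosine_similarity(feats[j], std)
                for feats, std in zip(feature_sets, standards)]
        averages.append(sum(sims) / len(sims))
    return averages
```

Averaging over several recognition models makes the similarity label less dependent on the biases of any single feature extractor.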
3. The method for building a face image quality evaluation model according to claim 1, wherein the standard face features are obtained by extracting, based on the face recognition model, a pre-stored standard face image of the same person.
4. The method for establishing a facial image quality evaluation model according to claim 1 or 2, wherein the similarity is a cosine similarity.
5. The method for establishing the face image quality evaluation model according to claim 2, wherein the at least two face recognition models comprise at least two of DeepID, VGGFace and FaceNet.
6. The method for building a facial image quality evaluation model according to claim 1, wherein the comparing the facial image quality ranking and the prediction score ranking to train the quality evaluation model so that the prediction score ranking and the facial image quality ranking tend to be consistent comprises:
constructing a cosine loss function representing the difference degree of the quality sequence of the face image and the prediction score sequence, and training the quality evaluation model through the cosine loss function;
the cosine loss function is calculated as follows:
$$L_{cosine}(i) = 1 - \frac{\sum_{j=1}^{n_i} y_j s_j}{\sqrt{\sum_{j=1}^{n_i} y_j^2}\,\sqrt{\sum_{j=1}^{n_i} s_j^2}}$$

where $L_{cosine}(i)$ represents the cosine loss of the $i$-th person in the face image sequence set; the face image sequence of the $i$-th person has $n_i$ face images $(x_1, x_2, \ldots, x_{n_i})$, the corresponding quality labels are $(y_1, y_2, \ldots, y_{n_i})$, and the prediction score sequence is $(s_1, s_2, \ldots, s_{n_i})$.
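Assuming the cosine loss is one minus the cosine similarity between the quality-label vector and the prediction-score vector (the formula itself is only given as an image in the source, so this form is inferred from the surrounding description), a runnable sketch is:

```python
import numpy as np

def cosine_loss(quality_labels, predicted_scores):
    """1 - cos(y, s): zero when the prediction scores are a positive
    multiple of the quality labels, growing as the two vectors
    disagree in direction."""
    y = np.asarray(quality_labels, dtype=float)
    s = np.asarray(predicted_scores, dtype=float)
    return float(1.0 - np.dot(y, s) / (np.linalg.norm(y) * np.linalg.norm(s)))
```

Because cosine similarity is scale-invariant, this loss only pushes the predicted scores toward the same relative ordering as the quality labels, which matches the ranking-consistency objective of claim 1.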
7. The method for building a face image quality evaluation model according to claim 1, wherein the quality evaluation model comprises four convolutional layers and a fully connected layer.
8. A face image optimization method is characterized by comprising the following steps:
performing score ranking on a face image sequence by using a quality evaluation model established by the method of any one of claims 1-7;
and selecting the top-ranked face image in the score ranking as the face image with the best quality.
9. A storage medium, characterized in that the storage medium stores a computer program for being executed to implement the face image optimization method of claim 8.
10. A computing device, comprising at least one processing unit and at least one storage unit, the storage unit storing a computer program which, when executed by the processing unit, causes the processing unit to perform the steps of the face image optimization method of claim 8.
CN202010601590.9A 2020-06-28 2020-06-28 Face image quality evaluation model establishment method, optimization method, medium and device Active CN111814620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601590.9A CN111814620B (en) 2020-06-28 2020-06-28 Face image quality evaluation model establishment method, optimization method, medium and device


Publications (2)

Publication Number Publication Date
CN111814620A true CN111814620A (en) 2020-10-23
CN111814620B CN111814620B (en) 2023-08-15

Family

ID=72855207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601590.9A Active CN111814620B (en) 2020-06-28 2020-06-28 Face image quality evaluation model establishment method, optimization method, medium and device

Country Status (1)

Country Link
CN (1) CN111814620B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200176A (en) * 2020-12-10 2021-01-08 长沙小钴科技有限公司 Method and system for detecting quality of face image and computer equipment
CN112329679A (en) * 2020-11-12 2021-02-05 济南博观智能科技有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
CN112668637A (en) * 2020-12-25 2021-04-16 苏州科达科技股份有限公司 Network model training method, network model identification device and electronic equipment
CN112948612A (en) * 2021-03-16 2021-06-11 杭州海康威视数字技术股份有限公司 Human body cover generation method and device, electronic equipment and storage medium
CN112949709A (en) * 2021-02-26 2021-06-11 北京达佳互联信息技术有限公司 Image data annotation method and device, electronic equipment and storage medium
CN113075208A (en) * 2021-03-24 2021-07-06 贵州省草业研究所 Intelligent cattle and sheep fermented feed quality evaluation method and device based on picture collection
CN113192028A (en) * 2021-04-29 2021-07-30 北京的卢深视科技有限公司 Quality evaluation method and device for face image, electronic equipment and storage medium
CN113553971A (en) * 2021-07-29 2021-10-26 青岛以萨数据技术有限公司 Method and device for extracting optimal frame of face sequence and storage medium
CN113657178A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 Face recognition method, electronic device and computer-readable storage medium
CN117372405A (en) * 2023-10-31 2024-01-09 神州通立电梯有限公司 Face image quality evaluation method, device, storage medium and equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9659248B1 (en) * 2016-01-19 2017-05-23 International Business Machines Corporation Machine learning and training a computer-implemented neural network to retrieve semantically equivalent questions using hybrid in-memory representations
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
CN108921107A (en) * 2018-07-06 2018-11-30 北京市新技术应用研究所 Pedestrian's recognition methods again based on sequence loss and Siamese network
CN108960087A (en) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109344855A (en) * 2018-08-10 2019-02-15 华南理工大学 A kind of face beauty assessment method of the depth model returned based on sequence guidance
WO2019109526A1 (en) * 2017-12-06 2019-06-13 平安科技(深圳)有限公司 Method and device for age recognition of face image, storage medium
US20190258902A1 (en) * 2018-02-16 2019-08-22 Spirent Communications, Inc. Training A Non-Reference Video Scoring System With Full Reference Video Scores
CN110263697A (en) * 2019-06-17 2019-09-20 哈尔滨工业大学(深圳) Pedestrian based on unsupervised learning recognition methods, device and medium again
WO2020123788A1 (en) * 2018-12-14 2020-06-18 The Board Of Trustees Of The Leland Stanford Junior University Qualitative and quantitative mri using deep learning
CN111340213A (en) * 2020-02-19 2020-06-26 浙江大华技术股份有限公司 Neural network training method, electronic device, and storage medium



Also Published As

Publication number Publication date
CN111814620B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN111814620B (en) Face image quality evaluation model establishment method, optimization method, medium and device
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US10354362B2 (en) Methods and software for detecting objects in images using a multiscale fast region-based convolutional neural network
CN109815826B (en) Method and device for generating face attribute model
WO2018188453A1 (en) Method for determining human face area, storage medium, and computer device
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN108197618B (en) Method and device for generating human face detection model
CN111738357B (en) Junk picture identification method, device and equipment
US11854247B2 (en) Data processing method and device for generating face image and medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN109271930B (en) Micro-expression recognition method, device and storage medium
CN111126347B (en) Human eye state identification method, device, terminal and readable storage medium
WO2021184754A1 (en) Video comparison method and apparatus, computer device and storage medium
Parde et al. Face and image representation in deep CNN features
CN113392866A (en) Image processing method and device based on artificial intelligence and storage medium
TWI803243B (en) Method for expanding images, computer device and storage medium
Parde et al. Deep convolutional neural network features and the original image
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN112651333B (en) Silence living body detection method, silence living body detection device, terminal equipment and storage medium
CN116701706B (en) Data processing method, device, equipment and medium based on artificial intelligence
CN117689884A (en) Method for generating medical image segmentation model and medical image segmentation method
US20230021551A1 (en) Using training images and scaled training images to train an image segmentation model
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN117218398A (en) Data processing method and related device
CN117011449A (en) Reconstruction method and device of three-dimensional face model, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant