CN110399839B - Face recognition method, device, equipment and storage medium - Google Patents

Face recognition method, device, equipment and storage medium

Info

Publication number
CN110399839B
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910682405.0A
Other languages
Chinese (zh)
Other versions
CN110399839A (en)
Inventor
杨帆
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910682405.0A
Publication of CN110399839A
Application granted
Publication of CN110399839B
Legal status: Active

Classifications

    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Pattern recognition; classification techniques
    • G06V40/168 — Human faces; feature extraction; face representation
    • G06V40/172 — Human faces; classification, e.g. identification


Abstract

The present disclosure relates to a face recognition method, apparatus, device, and storage medium. The method includes: extracting image features of a plurality of image samples based on a first face recognition model; training an image quality classification model based on the image quality features of the image samples and a preset first quality class; adjusting the image quality features of the image samples to a second quality class different from the first quality class; iteratively training the first face recognition model, after each round re-extracting features of the image samples with the trained model and feeding the resulting image quality features into the image quality classification model, and repeating this training and feature-extraction process until the loss value of the image quality classification model meets a target condition, at which point the first face recognition model from the current round is output as a second face recognition model; and performing face recognition on an image based on the second face recognition model.

Description

Face recognition method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a device, and a storage medium for face recognition.
Background
With the development of artificial intelligence, face recognition technology is applied ever more widely, and it is usually implemented with a face recognition model. Training such a model typically relies on a large number of image samples. To improve the accuracy of the model, the face features acquired during feature extraction should ideally be biological features, which are stable over time and differ between individuals. In practice, however, feature extraction captures not only the biological features of the face but also image quality features of the sample, such as sharpness and color. If the image samples used for training differ little in biological features but greatly in image quality features, the discrimination criterion the model learns may in fact be based on image quality, so that the model wrongly recognizes faces with similar biological features but very different image quality as different persons.
To prevent image quality features from influencing the recognition result of the face recognition model, the related art acquires image samples of identical image quality, so that the extracted image quality features are the same for every sample and such recognition errors are avoided. However, obtaining a large number of image samples of identical quality is difficult and time-consuming, which hurts the training efficiency of the face recognition model.
Disclosure of Invention
The present disclosure provides a face recognition method, apparatus, device, and storage medium to at least solve the problems in the related art. The technical scheme of the disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a face recognition method, the method including: acquiring image features of a plurality of image samples based on a first face recognition model, the image features of an image sample including image biological features and image quality features; training an image quality classification model based on the image quality features of the plurality of image samples and a preset first quality class; adjusting the image quality features of the plurality of image samples to a second quality class different from the first quality class; iteratively training the first face recognition model, after each round of training re-extracting features of the plurality of image samples based on the trained first face recognition model and inputting the obtained image quality features into the image quality classification model, repeating the training and feature-extraction process until the loss value of the image quality classification model meets a target condition, and outputting the first face recognition model obtained in the current round as a second face recognition model; and performing face recognition on an image based on the second face recognition model.
Optionally, the first face recognition model includes a plurality of image feature extraction network layers, and acquiring the image features of the plurality of image samples based on the first face recognition model includes: taking the output of the penultimate image feature extraction network layer of the first face recognition model as the image features of the image sample.
Optionally, the target condition includes that the loss value remains unchanged for a target duration.
Optionally, before performing iterative training on the first face recognition model, performing feature extraction again on the plurality of image samples based on the first face recognition model after each training, and inputting the obtained image quality features into the image quality classification model, the method further includes: obtaining a plurality of test image samples, and verifying the identification accuracy of the image quality classification model; and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
Optionally, the first face recognition model and the image quality classification model adopt a ResNet neural network model.
According to a second aspect of the embodiments of the present disclosure, there is provided a face recognition apparatus, the apparatus including: an obtaining module configured to perform obtaining image features of a plurality of image samples based on a first face recognition model, the image features of the image samples including image biological features and image quality features; the training module is configured to perform training to obtain an image quality classification model based on the image quality characteristics of the plurality of image samples and a preset first quality category; an adjustment module configured to perform an adjustment of image quality features of the plurality of image samples to a second quality class different from the first quality class; the processing module is configured to perform iterative training on the first face recognition model, perform feature extraction on the plurality of image samples again based on the first face recognition model after each training, input the acquired image quality features into the image quality classification model, repeat the training and feature extraction processes until the loss value of the image quality classification model meets a target condition, and output the first face recognition model obtained by the training as a second face recognition model; a recognition module configured to perform face recognition on the image based on the second face recognition model.
Optionally, the first face recognition model includes a plurality of image feature extraction network layers, and the obtaining module is configured to take the output of the penultimate image feature extraction network layer of the first face recognition model as the image features of the image sample.
Optionally, the target condition includes that the loss value remains unchanged for a target duration.
Optionally, the processing module is further configured to perform obtaining a plurality of test image samples, and verifying the identification accuracy of the image quality classification model; and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
Optionally, the first face recognition model and the image quality classification model adopt a ResNet neural network model.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method as in the first aspect or any one of the possible implementations of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium comprising: the instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method as in the first aspect or any one of the possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program (product) comprising: computer program code which, when run by a computer, causes the computer to perform the method of the above aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
The image quality classification model is trained on the image quality features extracted by the first face recognition model together with a preset first quality class; the quality-class labels of those features are then adjusted, and the relabeled features are fed into the image quality classification model. When the loss value of the image quality classification model meets the target condition, the discrimination criterion learned by the face recognition model is based on the biological features of the images. Because no image samples of identical quality are required at any point, both the training efficiency and the recognition accuracy of the face recognition model are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating an application scenario of a face recognition method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of face recognition according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating the structure of a convolutional neural network, according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating the structure of a convolutional neural network, according to an exemplary embodiment;
FIG. 5 is a diagram illustrating a structure of a residual block in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating a face recognition apparatus according to an example embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before explaining the technical solution provided by the embodiments of the present application, a usage scenario is introduced. As shown in fig. 1, the face recognition model includes an image feature extraction network layer and an image classification network layer. The image feature extraction network layer feeds the features it extracts from the input image to the image classification network layer, which produces and outputs a classification result from those features. The features extracted by the image feature extraction network layer are therefore crucial to the accuracy of the face recognition model's recognition result.
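The two-stage structure just described can be sketched minimally. In the NumPy stand-in below, the linear maps, the sizes (an 8-value "image", 4 features, 3 identities), and the ReLU/softmax choices are illustrative assumptions, not the patent's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an 8-value "image", a 4-dim feature space, 3 identities.
W_feat = rng.normal(size=(4, 8))   # image feature extraction network layer
W_cls = rng.normal(size=(3, 4))    # image classification network layer

def extract_features(image):
    # Stand-in for the feature-extraction stack of fig. 1:
    # one linear map followed by a ReLU.
    return np.maximum(W_feat @ image, 0.0)

def classify(features):
    # The classification layer turns features into identity probabilities.
    logits = W_cls @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

image = rng.normal(size=8)
probs = classify(extract_features(image))
```

Whatever information `extract_features` encodes, biological or quality-related, is all the classifier ever sees, which is why the quality of these features matters so much.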
To improve the accuracy of the face recognition model, the face features acquired during feature extraction should ideally be biological features, which are stable over time and differ between individuals. In practice, however, feature extraction captures not only the biological features of the face but also image quality features of the sample, such as sharpness and color. If the image samples used for training differ little in biological features but greatly in image quality features, the discrimination criterion the model learns may in fact be based on image quality, so that the model wrongly recognizes faces with similar biological features but very different image quality as different persons.
To prevent image quality features from influencing the recognition result of the face recognition model, the related art acquires image samples of identical image quality, so that the extracted image quality features are the same for every sample and such recognition errors are avoided. However, obtaining a large number of image samples of identical quality is difficult and time-consuming, which hurts the training efficiency of the face recognition model. To avoid these problems, an embodiment of the present application provides a face recognition method, described in detail in the following embodiments.
Fig. 2 is a flowchart illustrating a face recognition method according to an exemplary embodiment. The method is used in an electronic device, which may be any terminal or server capable of training a machine learning model; the embodiments of the present application take a terminal as an example. The method includes the following steps:
in step S201, based on the first face recognition model, image features of a plurality of image samples are obtained, the image features of the image samples including image biological features and image quality features.
For example, the first face recognition model may adopt a ResNet neural network model, or another neural network model such as Inception-ResNet-V2, NASNet, or MobileNet. The embodiments of the present application do not limit the network structure of the face recognition model; those skilled in the art can choose it according to actual needs. The image quality features extracted by the first face recognition model may include a feature characterizing image quality, such as image sharpness or image color, and a plurality of image quality features may also be extracted. The embodiments of the present application do not limit the types of image quality features extracted. For ease of description, the embodiments of the present application take image sharpness as the extracted image quality feature. The image quality characteristics of the plurality of image samples may be the same or different.
As an optional embodiment of the present application, the first face recognition model includes a plurality of image feature extraction network layers, and step S201 includes: taking the output of the penultimate image feature extraction network layer of the first face recognition model as the image features of the image sample.
For example, to improve the robustness of the first face recognition model, its image feature extraction network may comprise several layers, the number of which those skilled in the art can set according to actual needs. Fig. 3 and fig. 4 are schematic diagrams of a convolutional neural network corresponding to one possible image feature extraction network, in which each layer of neurons may use three-dimensional convolutions, and the second to sixth layers may include several max-pooling layers and residual blocks. Fig. 5 is a schematic diagram of a residual block, whose activation function may be PReLU (Parametric Rectified Linear Unit); the network is connected to the image classification network layer through a final fully connected layer. In the embodiments of the present application the feature extraction network has six layers, and the output of the last of these layers (i.e., the penultimate layer of the whole model, just before the classification layer) can be used as the image features of the image sample. The embodiments of the present application do not limit which layer's output serves as the image features; those skilled in the art can take the output of the appropriate layer according to actual needs.
In step S202, an image quality classification model is trained based on image quality features of a plurality of image samples and a preset first quality class.
Illustratively, take image sharpness as the image quality feature extracted from the image samples. Whether or not the image samples actually have the same sharpness, the sharpness labels of all samples may be preset to one and the same label, for example "first sharpness". Alternatively, samples with the same sharpness may be given the same label while samples of different sharpness are labeled differently; for example, the image samples may have been shot by different devices, or may include both unretouched and retouched images. Taking the latter case as an example, the sharpness label of an unretouched image sample may be preset to "first sharpness" and that of a retouched image sample to "second sharpness". The image quality classification model is then trained from the extracted image sharpness together with the preset sharpness labels. The embodiments of the present application limit neither how image samples with different quality parameters are acquired nor how the quality labels of the image samples are set; those skilled in the art can decide both according to actual training needs. The image quality classification model may adopt a ResNet neural network model.
The embodiments of the present application do not limit the network model adopted by the image quality classification model; those skilled in the art can choose it according to actual needs.
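Training a quality classifier on extracted features can be illustrated with a logistic-regression stand-in (the patent itself suggests a ResNet model; the toy data, sizes, learning rate, and step count below are all illustrative assumptions):

```python
import numpy as np

def train_quality_classifier(features, labels, lr=0.5, steps=200):
    # Logistic-regression stand-in for the image quality classification
    # model: predicts a sharpness label (0 = "first sharpness",
    # 1 = "second sharpness") from features the recognition model extracted.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # predicted P(label=1)
        w -= lr * features.T @ (p - labels) / len(labels)
        b -= lr * np.mean(p - labels)
    return w, b

# Toy features in which one dimension leaks sharpness information.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(float)        # sharpness label tied to feature 0
w, b = train_quality_classifier(X, y)
pred = (X @ w + b > 0).astype(float)
accuracy = (pred == y).mean()
```

If the classifier reaches high accuracy, as here, the extracted features demonstrably still carry quality information, which is exactly what the subsequent steps set out to remove.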
In step S203, the image quality characteristics of the plurality of image samples are adjusted to a second quality class different from the first quality class.
Illustratively, adjusting the image quality features of the image samples to a second quality class different from the first quality class means changing the quality labels attached to those features. For example, if the sharpness labels of all image samples used to train the image quality classification model were set to "first sharpness", they may all be uniformly adjusted to "second sharpness". If the labels were set to "first sharpness" and "second sharpness" according to the sharpness of each image sample, the labels of both groups may be uniformly adjusted to "third sharpness"; or the "first sharpness" labels may be adjusted to "third sharpness" and the "second sharpness" labels to "fourth sharpness". The embodiments of the present application do not limit the label values before and after adjustment, as long as every label after adjustment differs from the label before.
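The label adjustment itself is a simple remapping; a minimal Python sketch, with the concrete label strings taken from the examples above:

```python
# The only constraint the embodiment states is that every label after
# adjustment differs from the label before; these mappings are examples.
relabel = {
    "first sharpness": "third sharpness",
    "second sharpness": "fourth sharpness",
}

original_labels = ["first sharpness", "second sharpness", "first sharpness"]
adjusted_labels = [relabel[lab] for lab in original_labels]
```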
In step S204, the first face recognition model is iteratively trained, feature extraction is performed on a plurality of image samples based on the first face recognition model after each training, the obtained image quality features are input into the image quality classification model, the training and feature extraction processes are repeated until the loss value of the image quality classification model meets the target condition, and the first face recognition model obtained by the current training is output as the second face recognition model.
Illustratively, the relabeled image samples are input into the first face recognition model, and the image quality features it extracts, together with the adjusted labels, are input into the image quality classification model. As those skilled in the art know, the loss value of a machine learning model's loss function measures how far the model's predictions deviate from the true values. When the relabeled image quality features are classified by the image quality classification model, which was trained on the labels before adjustment, a loss value arises between the model's recognition result and the current labels of the image samples. The first face recognition model is then trained again, the image quality features of the image samples are re-extracted with the newly trained model, and the re-extracted features are input into the image quality classification model to obtain its loss value. This training and feature-extraction process is repeated until the loss value of the image quality classification model meets the target condition, which indicates that the discrimination criterion learned by the face recognition model is based on the biological features of the images. The target condition may be that the loss value does not change for a target duration, or any other criterion suitable for reducing or preventing the first face recognition model learning from image quality features.
The embodiments of the present application limit neither the target duration nor the loss function adopted by the image quality classification model; both can be determined by those skilled in the art according to actual needs.
The target condition is now explained in connection with the label adjustment described in step S203, for the scenario in which the sharpness labels of all image samples used to train the image quality classification model were set to "first sharpness" and then uniformly adjusted to "second sharpness". The image quality classification model was trained on samples labeled "first sharpness", so as long as the sharpness features extracted by the first face recognition model after each round of training still carry sharpness information, the classifier's recognition result disagrees with the adjusted label: however the first face recognition model is iteratively trained, the image quality classification model cannot be made to recognize genuinely extracted sharpness features as "second sharpness". The loss value therefore changes from one training round of the first face recognition model to the next. Only when the image features extracted by the first face recognition model no longer contain image quality features does the image quality classification model have no quality features to recognize, and its loss value stops changing. That is, once the loss value of the image quality classification model remains unchanged, the first face recognition model obtained in the current round can be output as the second face recognition model.
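The stopping logic this paragraph describes (retrain, re-extract, re-classify, and stop once the loss no longer changes) can be sketched as a skeleton; the decaying loss curve below is a hypothetical stand-in for a real retrain-and-classify round, and the tolerance is illustrative:

```python
def adversarial_rounds(loss_of_round, tol=1e-6, max_rounds=50):
    """Skeleton of the iterative phase: each round stands for retraining the
    recognition model, re-extracting features, and reading the (frozen)
    quality classifier's loss; stop when the loss stops changing."""
    prev = None
    for r in range(max_rounds):
        loss = loss_of_round(r)   # stand-in for: retrain, re-extract, classify
        if prev is not None and abs(loss - prev) < tol:
            return r, loss        # loss unchanged: output the current model
        prev = loss
    return max_rounds, loss

# Hypothetical loss curve that decays and flattens, as expected when the
# extracted features gradually lose all quality information.
rounds, final = adversarial_rounds(lambda r: 1.0 + 0.5 ** r)
```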
As an optional embodiment of the present application, to avoid misjudging a momentary fluctuation of the loss value of the image quality classification model, step S204 includes: determining that the loss value of the image quality classification model meets the target condition; and outputting the first face recognition model obtained in the current round as the second face recognition model when the number of times the target condition is met reaches a target number.
Illustratively, counting the number of times the loss value of the image quality classification model satisfies the target condition may start from the first time two adjacent loss values are equal. The target number can be determined according to actual usage needs and is not limited by the embodiments of the present application.
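Counting consecutive unchanged losses, as this optional embodiment suggests, might look like the following sketch; the tolerance and the loss sequences are illustrative:

```python
def plateau_reached(losses, target_count, tol=1e-6):
    # Optional target condition: the loss must stay unchanged for
    # `target_count` consecutive adjacent comparisons before training stops,
    # so a single coincidental repeat does not end training early.
    run = 0
    for prev, cur in zip(losses, losses[1:]):
        run = run + 1 if abs(cur - prev) < tol else 0
        if run >= target_count:
            return True
    return False

# A single repeated value is not enough when the target count is two.
one_repeat = plateau_reached([1.0, 0.9, 0.9, 0.8], target_count=2)
long_plateau = plateau_reached([1.0, 0.9, 0.9, 0.9], target_count=2)
```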
In step S205, face recognition is performed on the image based on the second face recognition model.
According to the face recognition method provided by the embodiments of the present application, the image quality classification model is trained from the image quality features extracted by the first face recognition model and the preset first quality category; the quality-category labels of those image quality features are then adjusted, and the relabelled features are input into the image quality classification model. When the loss value of the image quality classification model meets the target condition, the discrimination standard learned by the face recognition model is based only on the image biological features. The whole process requires no image samples of identical quality, which improves both the training efficiency and the recognition accuracy of the face recognition model.
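The alternating scheme summarized above can be sketched as a loop that trains the face recognition model one step at a time and, after each step, evaluates the quality classifier's loss on freshly extracted quality features, stopping at a plateau. The callables `train_face_model_step` and `quality_loss`, and the toy decaying loss in the usage example, are illustrative assumptions, not APIs from this application:

```python
def train_until_quality_invariant(train_face_model_step, quality_loss,
                                  target_steps=3, eps=1e-6, max_iters=1000):
    """Iteratively train the face recognition model; after each step,
    evaluate the (label-flipped) quality classifier's loss on freshly
    extracted quality features. Stop when the loss has stopped changing
    for `target_steps` steps, i.e. the extracted features no longer
    carry quality information, and return the model state at that point."""
    prev, stable = None, 0
    for i in range(max_iters):
        state = train_face_model_step(i)   # one training pass over the samples
        loss = quality_loss(state)         # classifier loss on re-extracted features
        if prev is not None and abs(loss - prev) <= eps:
            stable += 1                    # loss unchanged: extend the plateau
        else:
            stable = 0                     # loss still varying: keep training
        prev = loss
        if stable >= target_steps:
            return state, i                # the "second face recognition model"
    return state, max_iters

# Toy usage: a stand-in "training step" and a classifier loss that decays
# until it bottoms out at chance level, after which it stays flat.
def step(i):
    return i

def qloss(state):
    return max(0.693, 1.5 - 0.1 * state)

state, stop_iter = train_until_quality_invariant(step, qloss, target_steps=3)
```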
As an optional embodiment of the present application, before step S204, the method further includes: obtaining a plurality of test image samples, and verifying the identification accuracy of the image quality classification model; and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
Illustratively, to ensure the recognition accuracy of the trained image quality classification model, test image samples are obtained to verify its recognition accuracy. The plurality of test image samples may comprise images having the same quality parameters as the training image samples. The test image samples are recognized by the image quality classification model, and its recognition accuracy is determined. The embodiments of the present application do not limit the test image samples, as long as the accuracy of the image quality classification model can be tested with them. When the recognition accuracy does not meet the requirement, the model parameters of the image quality classification model are adjusted until the requirement is met. The recognition accuracy requirement may be, for example, a recognition accuracy greater than 90%; the requirement is not limited in the embodiments of the present application, and a person skilled in the art can determine it according to actual needs.
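The verification step can be sketched as a plain accuracy check against labelled test images. The `classify` callable and the `(features, label)` sample format are illustrative assumptions; the 90% threshold follows the example given above:

```python
def verify_classifier_accuracy(classify, test_samples, required=0.90):
    """Score the quality classifier on labelled test samples and report
    whether it meets the required recognition accuracy (strictly greater
    than `required`, per the example requirement above)."""
    correct = sum(1 for features, label in test_samples
                  if classify(features) == label)
    accuracy = correct / len(test_samples)
    return accuracy, accuracy > required

# Toy usage with a hypothetical threshold classifier: 4 of 5 samples are
# classified correctly, so the 90% requirement is not yet met and the
# model parameters would be adjusted before continuing.
samples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.7, 0)]
accuracy, meets_requirement = verify_classifier_accuracy(
    lambda f: 1 if f > 0.5 else 0, samples)
```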
FIG. 6 is a block diagram illustrating a face recognition apparatus according to an example embodiment. Referring to fig. 6, the apparatus includes an obtaining module 601, a training module 602, an adjusting module 603, a processing module 604, and a recognition module 605.
An obtaining module 601 configured to perform obtaining image features of a plurality of image samples based on a first face recognition model, wherein the image features of the image samples comprise image biological features and image quality features;
a training module 602 configured to perform training to obtain an image quality classification model based on image quality features of a plurality of image samples and a preset first quality category;
an adjustment module 603 configured to perform an adjustment of the image quality features of the plurality of image samples to a second quality class different from the first quality class;
the processing module 604 is configured to perform iterative training on the first face recognition model, perform feature extraction again on a plurality of image samples based on the first face recognition model after each training, input the obtained image quality features into the image quality classification model, repeat the above training and feature extraction processes until the loss value of the image quality classification model meets the target condition, and output the first face recognition model obtained by the current training as a second face recognition model;
a recognition module 605 configured to perform face recognition on the image based on the second face recognition model.
The face recognition device provided by the embodiments of the present application trains the image quality classification model from the image quality features extracted by the first face recognition model and the preset first quality category, adjusts the quality-category labels of those image quality features, and inputs the relabelled features into the image quality classification model. When the loss value of the image quality classification model meets the target condition, the discrimination standard learned by the face recognition model is based only on the image biological features. The whole process requires no image samples of identical quality, which improves both the training efficiency and the recognition accuracy of the face recognition model.
As an optional embodiment of the present application, the first face recognition model includes a plurality of image feature extraction network layers, and the obtaining module 601 is configured to perform output of a penultimate image feature extraction network layer in the first face recognition model as an image feature of the image sample.
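Taking the penultimate feature-extraction layer's output as the image features, as this embodiment describes, can be sketched as follows. Representing the network as a plain list of callables is an illustrative assumption; in practice the layers would belong to a trained neural network:

```python
def extract_image_features(layers, image):
    """Run the image sample through the feature-extraction layers in
    order and return the output of the penultimate layer as the image
    features, per the embodiment above."""
    x = image
    outputs = []
    for layer in layers:
        x = layer(x)
        outputs.append(x)
    return outputs[-2]  # output of the second-to-last layer

# Toy usage with three stand-in "layers" acting on a scalar.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
features = extract_image_features(layers, 1)
```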
As an optional embodiment of the present application, the target condition includes that the corresponding loss value is not changed within a target time period.
As an optional embodiment of the present application, the processing module 604 is further configured to perform obtaining a plurality of test image samples, and verifying the identification accuracy of the image quality classification model; and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
As an optional embodiment of the present application, the first face recognition model and the image quality classification model each use a ResNet neural network model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same concept, an embodiment of the present application further provides an electronic device, as shown in fig. 7, the electronic device includes:
a processor 701;
a memory 702 for storing instructions executable by the processor 701;
wherein the processor is configured to execute the instructions to implement the face recognition method according to the above embodiments. The processor 701 and the memory 702 are connected by a communication bus 703.
It should be understood that the processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may also be a processor supporting the advanced RISC machine (ARM) architecture.
Further, in an alternative embodiment, the memory may include both read-only memory and random access memory, and provide instructions and data to the processor. The memory may also include non-volatile random access memory. For example, the memory may also store device type information.
The memory may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The present application further provides a computer program which, when executed by a computer, causes the processor or the computer to perform the respective steps and/or procedures of the above-described method embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk), among others.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

1. A face recognition method, comprising:
acquiring image characteristics of a plurality of image samples based on a first face recognition model, wherein the image characteristics of the image samples comprise image biological characteristics and image quality characteristics;
training to obtain an image quality classification model based on the image quality characteristics of the plurality of image samples and a preset first quality class;
adjusting image quality characteristics of the plurality of image samples to a second quality class different from the first quality class;
performing iterative training on the first face recognition model, performing feature extraction again on the plurality of image samples based on the first face recognition model after each training, inputting the obtained image quality features into the image quality classification model, repeating the training and feature extraction processes until the loss value of the image quality classification model meets a target condition, and outputting the first face recognition model obtained by the training as a second face recognition model, wherein the target condition comprises that the corresponding loss value is not changed within a target duration;
and carrying out face recognition on the image based on the second face recognition model.
2. The method of claim 1, wherein the first face recognition model comprises a plurality of image feature extraction network layers, and wherein obtaining image features of a plurality of image samples based on the first face recognition model comprises:
and taking the output of the last but one image feature extraction network layer in the first face recognition model as the image feature of the image sample.
3. The method of claim 1, wherein the iteratively training the first face recognition model, re-extracting features of the plurality of image samples based on the trained first face recognition model each time, and before inputting the obtained image quality features into the image quality classification model, the method further comprises:
obtaining a plurality of test image samples, and verifying the identification accuracy of the image quality classification model;
and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
4. The method of claim 1, wherein the first face recognition model and the image quality classification model employ a resnet neural network model.
5. An apparatus for face recognition, the apparatus comprising:
an obtaining module configured to perform obtaining image features of a plurality of image samples based on a first face recognition model, the image features of the image samples including image biological features and image quality features;
the training module is configured to perform training to obtain an image quality classification model based on the image quality characteristics of the plurality of image samples and a preset first quality category;
an adjustment module configured to perform an adjustment of image quality features of the plurality of image samples to a second quality class different from the first quality class;
the processing module is configured to perform iterative training on the first face recognition model, perform feature extraction on the multiple image samples again based on the first face recognition model after each training, input the acquired image quality features into the image quality classification model, repeat the training and feature extraction processes until the loss value of the image quality classification model meets a target condition, and output the first face recognition model obtained by the training as a second face recognition model, wherein the target condition includes that the corresponding loss value does not change within a target duration;
a recognition module configured to perform face recognition on the image based on the second face recognition model.
6. The apparatus of claim 5, wherein the first face recognition model comprises a plurality of image feature extraction network layers, and wherein the obtaining module is configured to perform the outputting of a penultimate image feature extraction network layer in the first face recognition model as the image features of the image sample.
7. The apparatus of claim 5, wherein the processing module is further configured to: determine the duration for which the loss value of the image quality classification model satisfies the target condition; and when the duration reaches a target duration, output the first face recognition model obtained by the current training as the second face recognition model.
8. The apparatus according to any of claims 5-7, wherein the processing module is further configured to perform obtaining a plurality of test image samples, verifying the recognition accuracy of the image quality classification model; and according to the verification result, adjusting the model parameters of the image quality classification model to obtain the image quality classification model meeting the identification accuracy requirement.
9. The apparatus of claim 5, wherein the first face recognition model and the image quality classification model employ a resnet neural network model.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the face recognition method of any one of claims 1 to 4.
11. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the face recognition method of any one of claims 1 to 4.
CN201910682405.0A 2019-07-26 2019-07-26 Face recognition method, device, equipment and storage medium Active CN110399839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910682405.0A CN110399839B (en) 2019-07-26 2019-07-26 Face recognition method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110399839A CN110399839A (en) 2019-11-01
CN110399839B true CN110399839B (en) 2021-07-16

Family

ID=68326175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910682405.0A Active CN110399839B (en) 2019-07-26 2019-07-26 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110399839B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582365B (en) * 2020-05-06 2022-07-22 吉林大学 Junk mail classification method based on sample difficulty
CN111738083B (en) * 2020-05-20 2022-12-27 云知声智能科技股份有限公司 Training method and device for face recognition model
CN112766164A (en) * 2021-01-20 2021-05-07 深圳力维智联技术有限公司 Face recognition model training method, device and equipment and readable storage medium
CN113255576B (en) * 2021-06-18 2021-10-29 第六镜科技(北京)有限公司 Face recognition method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN109190449A (en) * 2018-07-09 2019-01-11 北京达佳互联信息技术有限公司 Age recognition methods, device, electronic equipment and storage medium
CN110046652A (en) * 2019-03-18 2019-07-23 深圳神目信息技术有限公司 Face method for evaluating quality, device, terminal and readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711158B2 (en) * 2004-12-04 2010-05-04 Electronics And Telecommunications Research Institute Method and apparatus for classifying fingerprint image quality, and fingerprint image recognition system using the same
US7869631B2 (en) * 2006-12-11 2011-01-11 Arcsoft, Inc. Automatic skin color model face detection and mean-shift face tracking
RU2007102021A (en) * 2007-01-19 2008-07-27 Корпораци "Самсунг Электроникс Ко., Лтд." (KR) METHOD AND SYSTEM OF IDENTITY RECOGNITION
CN105913025B (en) * 2016-04-12 2019-02-26 湖北工业大学 A kind of deep learning face identification method based on multi-feature fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Quantitative Model of the Relationship between Image Quality and Face Recognition; Wu Weijie; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (No. 2); pp. I138-3701 *


Similar Documents

Publication Publication Date Title
CN110399839B (en) Face recognition method, device, equipment and storage medium
CN109376615B (en) Method, device and storage medium for improving prediction performance of deep learning network
CN111144937B (en) Advertisement material determining method, device, equipment and storage medium
CN108491805B (en) Identity authentication method and device
CN110222791B (en) Sample labeling information auditing method and device
WO2019227616A1 (en) Method and apparatus for identifying animal identity, computer device, and storage medium
CN107679466B (en) Information output method and device
CN109816200B (en) Task pushing method, device, computer equipment and storage medium
CN111832581B (en) Lung feature recognition method and device, computer equipment and storage medium
CN112464809A (en) Face key point detection method and device, electronic equipment and storage medium
CN111476216A (en) Face recognition method and device, computer equipment and readable storage medium
CN110163151B (en) Training method and device of face model, computer equipment and storage medium
CN111144285A (en) Fat and thin degree identification method, device, equipment and medium
US11881052B2 (en) Face search method and apparatus
CN110427978B (en) Variational self-encoder network model and device for small sample learning
CN109101984B (en) Image identification method and device based on convolutional neural network
CN113516251A (en) Machine learning system and model training method
CN107077617B (en) Fingerprint extraction method and device
CN111510566B (en) Method and device for determining call label, computer equipment and storage medium
CN110610117A (en) Face recognition method, face recognition device and storage medium
CN110147850B (en) Image recognition method, device, equipment and storage medium
CN112541446B (en) Biological feature library updating method and device and electronic equipment
CN116110033A (en) License plate generation method and device, nonvolatile storage medium and computer equipment
CN114116456A (en) Test case generation method, system and computer readable storage medium
CN110177006B (en) Node testing method and device based on interface prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant