CN114550241A - Face recognition method and device, computer equipment and storage medium


Info

Publication number
CN114550241A
Authority
CN
China
Prior art keywords
face recognition
initial
training
face
vector
Prior art date
Legal status
Granted
Application number
CN202210108544.4A
Other languages
Chinese (zh)
Other versions
CN114550241B
Inventor
刘伟华
夏政
王栋
Current Assignee
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date
Filing date
Publication date
Application filed by Athena Eyes Co Ltd
Priority to CN202210108544.4A
Publication of CN114550241A
Application granted
Publication of CN114550241B
Active legal status
Anticipated expiration

Classifications

    • G06F18/214 Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F18/2415 Pattern recognition; Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/08 Computing arrangements based on biological models; Neural networks; Learning methods


Abstract

The application discloses a face recognition method and apparatus, a computer device, and a storage medium, applied in the technical field of face recognition to improve face recognition efficiency. The method comprises: acquiring a face image to be recognized and inputting it into an initial face recognition model for feature extraction, obtaining a face feature vector of the face image to be recognized; calculating the similarity between the face feature vector and the initial feature vectors to obtain a similarity result, and determining a migration classification vector based on the similarity result; and taking the feature extraction parameters of the initial face recognition model and the migration classification vector as model parameters, training the initial face recognition model to obtain a target face recognition model, and inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.

Description

Face recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of face recognition technology, and in particular to a face recognition method and apparatus, a computer device, and a storage medium.
Background
Face recognition based on deep learning is widely applied in industry and performs strongly in specific scenarios.
In the prior art, the face recognition task is realized by a face recognition model built on a deep convolutional neural network. However, training a dedicated face recognition model demands substantial computing resources and the construction of a large-scale face data set, and the huge volume of data usually means a long training time before a model is obtained.
In a practical face recognition scenario, to reduce the training time of the face recognition model, a pre-trained model is usually used, and its parameters are applied to the actual scenario through transfer learning to build a face recognition model adapted to that scene.
Generally, the architecture of the pre-trained model comprises a feature extraction module composed of convolutional layers and a classifier composed of fully connected layers. During transfer learning, only the parameters of the feature extraction module are usually migrated, while the classification parameters of the classifier are discarded. In the application scenario, the face recognition model must then be retrained on a newly added data set, which further increases training time and is not conducive to improving face recognition efficiency.
Disclosure of Invention
The application provides a face recognition method, a face recognition device, computer equipment and a storage medium, so as to improve the recognition efficiency of face recognition.
A face recognition method, comprising:
acquiring a face image to be recognized, and inputting the face image to be recognized into an initial face recognition model for feature extraction to obtain a face feature vector of the face image to be recognized;
acquiring feature extraction parameters and classification weight parameters of the initial face recognition model, and constructing an initial feature vector library based on the classification weight parameters and a preset vector library construction mode, wherein the initial feature vector library is obtained through training images input into the initial face recognition model and comprises at least one initial feature vector;
calculating the similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determining a migration classification vector based on the similarity result;
taking the feature extraction parameters and the migration classification vectors as model parameters of an initial face recognition model, and training the initial face recognition model based on the model parameters and a target face image to obtain a target face recognition model;
and inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.
A face recognition apparatus comprising:
the characteristic extraction module is used for acquiring a face image to be recognized and inputting the face image to be recognized into an initial face recognition model for characteristic extraction to obtain a face characteristic vector of the face image to be recognized;
the vector library construction module is used for obtaining feature extraction parameters and classification weight parameters of the initial face recognition model and constructing an initial feature vector library based on the classification weight parameters and a preset vector library construction mode, wherein the initial feature vector library is obtained through a training image input into the initial face recognition model and comprises at least one initial feature vector;
the migration vector determination module is used for calculating the similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determining a migration classification vector based on the similarity result;
the target model generation module is used for taking the feature extraction parameters and the migration classification vectors as model parameters of an initial face recognition model, and training the initial face recognition model based on the model parameters and a target face image to obtain a target face recognition model;
and the face recognition module is used for inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above face recognition method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned face recognition method.
In the face recognition method and apparatus, computer device, and storage medium provided by the application, an initial face recognition model extracts image features of the image to be recognized as a face feature vector, and an initial feature vector library is built from the initial feature vectors of the training images used to train the initial face recognition model. The similarity between the face feature vector and the initial feature vectors is calculated, and a corresponding migration classification vector is matched to the face feature vector according to the similarity result. The feature extraction parameters of the initial face recognition model and the migration classification vector are then used as model parameters, and the initial face recognition model is trained with these parameters and a target image to obtain a target face recognition model, which performs face recognition on the face image to be recognized to produce a face recognition result. Because the model parameters come from the already-trained initial face recognition model, and the classification weight vector matched by the similarity between the feature vector of the face image to be recognized and the initial feature vectors serves as the migration classification vector, the target face recognition model is obtained through transfer learning, the time to train it is shortened, and the efficiency of face recognition with the target model is improved accordingly.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application environment of a face recognition method in an embodiment of the present application;
FIG. 2 is a flow chart of a face recognition method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The face recognition method provided by the embodiment of the application can be applied to the application environment shown in fig. 1, wherein the terminal device communicates with the server through a network. The terminal device may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
The system framework 100 may include terminal devices, networks, and servers. The network serves as a medium for providing a communication link between the terminal device and the server. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use a terminal device to interact with a server over a network to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the face recognition method provided by the embodiment of the present application is executed by a server, and accordingly, the face recognition apparatus is disposed in the server.
It should be understood that the number of the terminal devices, the networks, and the servers in fig. 1 is only illustrative, and any number of the terminal devices, the networks, and the servers may be provided according to implementation requirements, and the terminal devices in the embodiment of the present application may specifically correspond to an application system in actual production.
In an embodiment, as shown in fig. 2, a face recognition method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
and S10, acquiring the face image to be recognized, inputting the face image to be recognized into the initial face recognition model for feature extraction, and obtaining the face feature vector of the face image to be recognized.
Specifically, the face image to be recognized refers to an image that includes a face and needs to recognize the identity of the face, for example, the face image to be recognized included in the face recognition task.
The face image to be recognized is input into the initial face recognition model for feature extraction, and the extracted image features are taken as the face feature vector of the face image to be recognized.
The initial face recognition model is obtained by training a pre-trained model on training images. In this embodiment, the initial face recognition model is constructed based on deep learning and is composed of a feature extractor and a classifier. The feature extractor consists of a preset number of convolutional layers: shallow convolutional layers extract shallow image features, such as corners, textures, and brightness, from the face image to be recognized, while deep convolutional layers extract deep image features such as the eyes, nose, etc. The classifier is built from fully connected layers; the fully connected weight matrices combine the image features extracted by the convolutional layers so that discriminative information between faces can be distinguished, achieving the face recognition effect.
S20, obtaining feature extraction parameters and classification weight parameters of the initial face recognition model, and constructing an initial feature vector library based on the classification weight parameters and a preset vector library construction mode, wherein the initial feature vector library is obtained from the training images input into the initial face recognition model and comprises at least one initial feature vector.
Specifically, the initial face recognition model comprises a feature extractor and a classifier, from which the feature extraction parameters and classification weight parameters are obtained. The feature extraction parameters are the convolution parameters of the convolutional layers in the feature extractor; the classification weight parameters are the classification weight vectors in the classifier, and the multi-dimensional classification weight vectors together form the classification weight matrix.
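The relationship between the classification weight matrix and the per-identity weight vectors can be sketched as follows. This is an illustrative sketch under assumed names and dimensions, not the patent's implementation.

```python
# Sketch: the classifier's weight matrix holds one classification weight
# vector (row) per identity; the logit for each identity is the dot
# product of the face feature vector with that row.
# All names and dimensions are illustrative assumptions.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classifier_logits(feature_vec, weight_matrix):
    """Return one logit per identity (one per row of the weight matrix)."""
    return [dot(feature_vec, w) for w in weight_matrix]

# toy example: 3 identities, 4-dimensional features
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0]]
feat = [0.9, 0.1, 0.0, 0.0]
logits = classifier_logits(feat, W)
```

Each row of `W` plays the role of one identity's classification weight vector, which is why the classification weight parameters can later be indexed and migrated per identity.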
The preset vector library construction mode comprises the following steps:
acquiring training images, training the initial face recognition model according to the training images, and finally obtaining the feature extraction parameters and classification weight parameters of the initial face recognition model through the parameter iteration process;
inputting the training images into the initial face recognition model to obtain the initial feature vector and classification weight vector of each training image;
and using the training personnel identifier in each training image as an index, constructing the initial feature vector library from the initial feature vectors and classification weight vectors.
The training image refers to a face image used for training an initial face recognition model.
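The construction steps above can be sketched as a simple index keyed by training personnel identifier. The dict layout and all names are illustrative assumptions, not the patent's data structures.

```python
# Illustrative sketch of the initial feature vector library: index each
# training person's initial feature vector and classification weight
# vector by the training personnel identifier, so both can be looked up
# later during similarity matching.

def build_vector_library(samples):
    """samples: iterable of (person_id, initial_feature_vec, class_weight_vec)."""
    library = {}
    for person_id, feature_vec, weight_vec in samples:
        library[person_id] = {"feature": feature_vec, "weight": weight_vec}
    return library

library = build_vector_library([
    ("person_a", [0.2, 0.8], [0.1, 0.9]),
    ("person_b", [0.7, 0.3], [0.6, 0.4]),
])
```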
S30, calculating the similarity between the face feature vector and the initial feature vectors to obtain a similarity result, and determining the migration classification vector based on the similarity result.
The similarity between the face feature vector of the face image to be recognized and each initial feature vector is calculated by the cosine distance formula to obtain a similarity result.
The similarity results are scored, with a higher score indicating greater similarity; the classification weight vector associated with the initial feature vector most similar to the face feature vector is taken as the migration classification vector of the face image to be recognized.
The similarity results between all face feature vectors and the initial feature vectors are calculated in turn, determining the migration classification vector of the face image to be recognized corresponding to each face feature vector.
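A minimal sketch of this matching step, assuming a library keyed by training personnel identifier with `"feature"` and `"weight"` entries (the layout and names are illustrative assumptions, not the patent's code):

```python
import math

# Sketch of S30: cosine similarity between the face feature vector and
# each initial feature vector; the classification weight vector of the
# most similar identity becomes the migration classification vector.

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def pick_migration_vector(face_vec, library):
    """Return (best person_id, its classification weight vector)."""
    best_pid = max(
        library,
        key=lambda pid: cosine_similarity(face_vec, library[pid]["feature"]),
    )
    return best_pid, library[best_pid]["weight"]
```

Cosine similarity is scale-invariant, which suits comparing feature vectors whose magnitudes may differ between the pre-training and target domains.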
S40, taking the feature extraction parameters and the migration classification vectors as model parameters of the initial face recognition model, and training the initial face recognition model based on the model parameters and the target face image to obtain the target face recognition model.
Specifically, the feature extraction parameters and the migration classification vectors are taken as model parameters of the initial face recognition model. The migration classification vectors are migrated into the initial face recognition model through transfer learning; the feature extraction parameters, being parameters of the feature extractor of the initial face recognition model, are migrated directly. Both then serve as the model parameters for retraining the initial face recognition model to obtain the target face recognition model.
The target face recognition model is obtained after the initial face recognition model is trained through new model parameters, namely feature extraction parameters and migration classification vectors.
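The parameter assembly for retraining can be sketched as below. The dict layout is an illustrative assumption, and the actual training loop is omitted.

```python
# Illustrative sketch of assembling the retraining initialization: the
# feature-extractor parameters are reused unchanged, and the classifier
# is initialized from the matched migration classification vectors, one
# row per recognition object. Not the patent's data structures.

def build_transfer_params(feature_params, migration_vectors):
    """feature_params: extractor parameters migrated as-is;
    migration_vectors: {object_id: classification weight vector}."""
    return {
        "feature_extractor": feature_params,  # migrated directly
        "classifier": [migration_vectors[oid] for oid in sorted(migration_vectors)],
    }

params = build_transfer_params(
    feature_params={"conv1": [[0.1, 0.2]]},
    migration_vectors={"obj_1": [0.6, 0.4], "obj_2": [0.2, 0.8]},
)
```

Starting the classifier from matched weight vectors, rather than random values, is what lets the retraining converge faster than training from scratch.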
S50, inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.
Specifically, the target face recognition model is used for recognizing a face identifier of a face image to be recognized, so as to perform face recognition according to the face image to be recognized.
In the face recognition method provided by this embodiment, an initial face recognition model extracts image features of the image to be recognized as a face feature vector, and an initial feature vector library is built from the initial feature vectors of the training images used to train the initial face recognition model. The similarity between the face feature vector and the initial feature vectors is calculated, and a corresponding migration classification vector is matched to the face feature vector according to the similarity result. The feature extraction parameters of the initial face recognition model and the migration classification vector are used as model parameters, and the initial face recognition model is trained with these parameters and a target face image to obtain a target face recognition model, which performs face recognition on the face image to be recognized to produce a face recognition result. Because the model parameters come from the trained initial face recognition model, and the classification weight vector matched by the similarity between the feature vector of the face image to be recognized and the initial feature vectors serves as the migration classification vector, the target face recognition model is obtained through transfer learning, the time to train it is shortened, and face recognition efficiency with the target model is improved accordingly.
Further, as an optional implementation manner, before step S10, the method includes:
s11, constructing a pre-training model through a preset number of convolutional layers and full-link layers, wherein the convolutional layers form a feature extraction layer, and the full-link layers form a classification layer.
And S12, acquiring a training image set, and inputting the images in the training image set into a pre-training model for training to obtain a training result.
And S13, updating the pre-training model according to the training result to obtain an initial face recognition model.
Specifically, the pre-trained model is constructed through deep learning; that is, in this embodiment, it is built from a feature extractor and a classifier, where the feature extractor is a feature extraction layer composed of a preset number of convolutional layers, and the classifier is a classification layer composed of fully connected layers.
And acquiring a training image set, and training a pre-training model by using images in the training image set to obtain a trained initial face recognition model.
In this way, a pre-trained model is first constructed with a generic structure, exploiting the universality of face recognition models, and then trained on the images in the training image set to obtain the initial face recognition model, which makes the model parameters convenient to obtain later and improves the efficiency of training the target face recognition model.
Further, as an optional implementation, the initial face recognition model includes a feature extraction layer. In step S10, acquiring the face image to be recognized and inputting it into the initial face recognition model for feature extraction to obtain the face feature vector of the face image to be recognized comprises:
s101, inputting the face image to be recognized into a feature extraction layer of the initial face recognition model for feature extraction, and obtaining the image features of the face image to be recognized.
And S102, taking the image characteristics as a face characteristic vector of the face image to be recognized.
Specifically, a face image to be recognized is input into a feature extraction layer in an initial face recognition model for feature extraction, and a face feature vector is generated by the extracted features in a vector form.
In this embodiment, the face feature vector of the face image to be recognized is generated by the initial face recognition model, and is used for matching the corresponding classification weight vector for the corresponding face image to be recognized according to the vector similarity.
Further, as an optional implementation manner, in step S20, obtaining a feature extraction parameter and a classification weight parameter of the initial face recognition model, and constructing an initial feature vector library based on the classification weight parameter and a preset vector library construction manner includes:
s201, acquiring an initial feature vector and a classification weight vector which are obtained after the images of the training image set are input into the initial face recognition model.
S202, determining training personnel identifications included in the images in the training image set, taking the training personnel identifications as indexes, and obtaining corresponding initial feature vectors and classification weight vectors.
S203, based on the training personnel identification, an initial feature vector library is constructed through training feature vectors and classification weight vectors.
Specifically, the initial feature vectors and classification weight vectors produced after the images in the training image set are input into the initial face recognition model are obtained, and the initial feature vector and classification weight vector corresponding to each training personnel identifier are stored in the initial feature vector library according to the training personnel identifier in the training image.
As an optional implementation manner, the initial feature vector library includes initial feature vectors and classification weight vectors of N training persons, and the training persons are distinguished by training person identifiers.
The trainee identities are used to distinguish the identities of the same trainee.
If one training personnel identifier corresponds to multiple images in the training image set, the average of the initial feature vectors of those images and the average of their classification weight vectors are calculated; the averaged initial feature vector is taken as the initial feature vector of that training personnel identifier, and the averaged classification weight vector as its classification weight vector.
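The averaging rule can be sketched as an element-wise mean over the per-image vectors; the function name is an illustrative assumption.

```python
# Illustrative sketch: when several training images share one training
# personnel identifier, that identifier's vector is the element-wise
# mean of the per-image vectors (applied to both the initial feature
# vectors and the classification weight vectors).

def mean_vector(vectors):
    """Element-wise mean of equal-length vectors."""
    n = len(vectors)
    return [sum(components) / n for components in zip(*vectors)]

avg_feature = mean_vector([[1.0, 2.0], [3.0, 4.0]])
```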
In this embodiment, the initial feature vector library is constructed from the training image set. The similarity of image features is judged through the initial feature vectors, and the matched classification weight vectors are used as migration classification vectors and applied to the new face recognition model, so that it can be trained quickly, reducing the training time and computing-resource consumption of the face recognition model and improving face recognition efficiency.
Further, as an alternative implementation manner, in step S30, calculating a similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determining the migration classification vector based on the similarity result includes:
s301, training personnel identification in the training images is determined, the training images belonging to the same training personnel identification are determined, and initial feature vectors and classification weight vectors corresponding to the training personnel identification are determined.
S302, at least one identification object identifier in the face image to be identified is obtained, and the vector similarity of the face feature vector corresponding to each identification object identifier and the initial feature vectors corresponding to all the training personnel identifiers is calculated through a cosine distance formula, so that a similarity result is obtained.
And S303, obtaining a classification weight vector corresponding to the training personnel identifier with the maximum similarity result, and taking the classification weight vector as a migration classification vector corresponding to the identification object identifier.
As an optional implementation, the face images to be recognized that belong to the same identification object identifier are grouped by that identifier, and the face feature vector corresponding to each identification object identifier is calculated.
Specifically, the similarity between the face feature vector and the initial feature vectors is calculated by the cosine distance formula to obtain similarity results, and the classification weight vector matched according to the similarity results serves as the migration classification vector of the face image to be recognized.
The similarity between the face feature vector of the current identification object identifier and the initial feature vectors of all training personnel identifiers is calculated, yielding multiple similarity results. The classification weight vector of the training personnel identifier with the highest similarity result is taken as the migration classification vector of the current identification object identifier.
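Steps S301 to S303 can be sketched end to end for several identification object identifiers. The names and data layout are illustrative assumptions, not the patent's code.

```python
import math

# Illustrative sketch of S301-S303: for each identification object
# identifier, compare its face feature vector with every training
# person's initial feature vector and adopt the classification weight
# vector of the most similar training person.

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def match_migration_vectors(object_features, library):
    """object_features: {object_id: face feature vector};
    library: {person_id: {"feature": vec, "weight": vec}}.
    Returns {object_id: migration classification vector}."""
    migration = {}
    for obj_id, face_vec in object_features.items():
        best_pid = max(
            library,
            key=lambda pid: cosine_similarity(face_vec, library[pid]["feature"]),
        )
        migration[obj_id] = library[best_pid]["weight"]
    return migration
```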
In this embodiment, the similarity between the face feature vector and the initial feature vectors is calculated by the cosine distance formula, and the classification weight vector is matched according to the similarity result. The classification parameters of the initial face recognition model are thus fully utilized, which accelerates the convergence of training the target face recognition model, greatly reduces training time and computing costs, and improves face recognition efficiency.
Further, as an optional implementation manner, the initial face recognition model includes a feature extraction module and a classification module, in step S40, the feature extraction parameters and the migration classification vectors are used as model parameters of the initial face recognition model, and the initial face recognition model is trained based on the model parameters and the target face image, so as to obtain the target face recognition model, including:
s401, taking the feature extraction parameters as parameters of a feature extraction module, taking the migration classification vectors as parameters of a classification module, and updating model parameters of the initial face recognition model based on the feature extraction module and the classification module.
S402, training the initial face recognition model through the target face image to obtain the trained initial face recognition model, which serves as the target face recognition model.
Specifically, feature extraction parameters and migration classification vectors of an initial face recognition model are used as model parameters, the initial face recognition model is trained according to the model parameters and a target face image, new model parameters are obtained, and then the target face recognition model is obtained.
The target face image refers to a face image that includes a recognition object identifier.
In this embodiment, through transfer learning, the model parameters of the initial face recognition model and the migration classification vectors are used as the migrated model parameters to obtain the target face recognition model, so that the model parameters of the initial face recognition model are fully utilized, the training time of the target face recognition model is reduced, and the face recognition efficiency is improved.
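One way to picture this transfer step is the sketch below: the feature extraction parameters are reused as-is, the migration classification vectors seed the classification layer, and fine-tuning on target face images then updates the parameters. The toy linear-plus-ReLU extractor and all names are illustrative assumptions, not the patent's actual architecture:

```python
import numpy as np

class TargetFaceModel:
    """Sketch: the target model reuses the initial model's feature
    extraction parameters and seeds its classifier with the migration
    classification vectors; for simplicity only the classifier is
    fine-tuned here."""

    def __init__(self, feat_weights, migration_vectors):
        self.feat_weights = np.asarray(feat_weights, dtype=float)        # (d_in, d_feat), reused
        self.class_weights = np.asarray(migration_vectors, dtype=float)  # (n_ids, d_feat), migrated

    def forward(self, x):
        feats = np.maximum(x @ self.feat_weights, 0.0)  # stand-in extractor (linear + ReLU)
        return feats @ self.class_weights.T             # logits over identity classes

    def fine_tune_step(self, x, labels, lr=0.1):
        """One softmax cross-entropy gradient step on the classifier only."""
        feats = np.maximum(x @ self.feat_weights, 0.0)
        logits = feats @ self.class_weights.T
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(len(labels)), labels] -= 1.0    # dL/dlogits for cross-entropy
        self.class_weights -= lr * probs.T @ feats / len(labels)

# Toy example: identity extractor, two identity classes
model = TargetFaceModel(np.eye(4), [[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
x = np.array([[1.0, 0.0, 0.0, 0.0]])
before = model.forward(x)[0, 0]
model.fine_tune_step(x, labels=np.array([0]))
after = model.forward(x)[0, 0]
```

After one fine-tuning step on a target image, the logit of its correct identity increases, which is the intended effect of seeding the classifier with migration vectors and then training on target face images.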
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In an embodiment, a face recognition apparatus is provided, and the face recognition apparatus corresponds one-to-one with the face recognition method in the above embodiment. As shown in fig. 3, the face recognition apparatus includes:
and the feature extraction module 31 is configured to acquire a face image to be recognized, and input the face image to be recognized into the initial face recognition model for feature extraction, so as to obtain a face feature vector of the face image to be recognized.
The vector library construction module 32 is configured to obtain feature extraction parameters and classification weight parameters of the initial face recognition model, and construct an initial feature vector library based on the classification weight parameters and a preset vector library construction manner, where the initial feature vector library is obtained by a training image input into the initial face recognition model, and the initial feature vector library includes at least one initial feature vector.
And the migration vector determination module 33 is configured to calculate a similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determine a migration classification vector based on the similarity result.
And the target model generation module 34 is configured to use the feature extraction parameters and the migration classification vectors as model parameters of the initial face recognition model, and train the initial face recognition model based on the model parameters and the target face image to obtain the target face recognition model.
And the face recognition module 35 is configured to input the face image to be recognized into the target face recognition model for face recognition, so as to obtain a face recognition result.
Further, the face recognition device further comprises the following modules:
and the pre-training model building module is used for building a pre-training model through a preset number of convolutional layers and fully-connected layers, wherein the convolutional layers form a feature extraction layer, and the fully-connected layers form a classification layer.
And the training module is used for acquiring a training image set, and inputting the images in the training image set into a pre-training model for training to obtain a training result.
And the initial model generation module is used for updating the pre-training model according to the training result to obtain an initial face recognition model.
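The pre-training model these modules build, convolutional layers forming the feature extraction layer and a fully connected layer forming the classification layer, can be sketched as follows. The single convolution kernel, layer sizes, and names are illustrative assumptions; a real model would stack several convolutional layers:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

class PreTrainModel:
    """Sketch of the pre-training model: a convolutional layer forms the
    feature extraction layer, and a fully connected layer forms the
    classification layer."""

    def __init__(self, img_size=8, kernel_size=3, num_classes=5, seed=0):
        rng = np.random.default_rng(seed)
        self.kernel = rng.standard_normal((kernel_size, kernel_size)) * 0.1
        feat_dim = (img_size - kernel_size + 1) ** 2
        self.fc = rng.standard_normal((feat_dim, num_classes)) * 0.1

    def extract_features(self, img):
        return np.maximum(conv2d(img, self.kernel), 0.0).ravel()  # conv + ReLU

    def classify(self, img):
        return self.extract_features(img) @ self.fc  # class logits

model = PreTrainModel()
img = np.ones((8, 8))
feats = model.extract_features(img)
logits = model.classify(img)
```

Training such a model on the training image set would yield both the feature extraction parameters (`kernel`) and the classification weight vectors (columns of `fc`, one per trainee identifier) that the later steps reuse.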
Further, the feature extraction module 31 includes:
and the image feature extraction unit is used for inputting the face image to be recognized into the feature extraction layer of the initial face recognition model for feature extraction, so as to obtain the image features of the face image to be recognized.
And the face vector generating unit is used for taking the image characteristics as the face characteristic vector of the face image to be recognized.
Further, the vector library building module 32 includes:
and the vector acquisition unit is used for acquiring an initial feature vector and a classification weight vector which are obtained after the images of the training image set are input into the initial face recognition model.
And the indexing unit is used for determining training personnel identifications included in the images in the training image set, taking the training personnel identifications as indexes, and acquiring corresponding initial feature vectors and classification weight vectors.
And the vector library construction unit is used for constructing an initial feature vector library through the training feature vectors and the classification weight vectors based on the training personnel identification.
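A minimal sketch of the initial feature vector library keyed by training-person identifier; the dictionary layout and all names are assumptions for illustration, not the patent's data structure:

```python
import numpy as np

def build_vector_library(trainee_ids, initial_vectors, class_weight_vectors):
    """Build the initial feature vector library: for each training-person
    identifier, store that person's initial feature vector and the
    corresponding classification weight vector, indexed by the ID."""
    library = {}
    for tid, feat, weight in zip(trainee_ids, initial_vectors, class_weight_vectors):
        library[tid] = {
            "feature": np.asarray(feat, dtype=float),
            "class_weight": np.asarray(weight, dtype=float),
        }
    return library

# Toy example with two trainee identifiers and 2-dimensional vectors
lib = build_vector_library(
    ["person_001", "person_002"],
    [[0.1, 0.9], [0.8, 0.2]],
    [[0.5, 0.5], [0.3, 0.7]],
)
```

Using the trainee identifier as the index lets the later similarity-matching step look up both vectors for any identifier in constant time.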
Further, the migration vector determination module 33 includes:
and the personnel identification determining module is used for determining training personnel identifications in the training images, determining the training images belonging to the same training personnel identification, and determining initial characteristic vectors and classification weight vectors corresponding to the training personnel identifications.
And the similarity calculation module is used for acquiring at least one identification object identifier in the face image to be identified, and calculating the vector similarity between the face feature vector corresponding to each identification object identifier and the initial feature vectors corresponding to all the training personnel identifiers through a cosine distance formula to obtain a similarity result.
And the migration vector determination module is used for acquiring a classification weight vector corresponding to the training personnel identifier with the maximum similarity result and taking the classification weight vector as a migration classification vector corresponding to the identification object identifier.
Further, the object model generation module 34 includes:
and the parameter updating unit is used for taking the feature extraction parameters as parameters of the feature extraction module, taking the migration classification vectors as parameters of the classification module, and updating model parameters of the initial face recognition model based on the feature extraction module and the classification module.
And the target model generating unit is used for training the initial face recognition model through the target face image and using the trained initial face recognition model as the target face recognition model.
Wherein the terms "first" and "second" in the above modules/units are used only to distinguish different modules/units, and do not define priority or carry any other limiting meaning. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to the steps or modules explicitly listed, but may include other steps or modules not explicitly listed or inherent to such a process, method, article, or apparatus. The division of modules presented in this application is merely a logical division and may be implemented in other manners in practical applications.
For specific limitations of the face recognition device, reference may be made to the above limitations of the face recognition method, which are not repeated here. All or part of the modules in the face recognition device can be implemented by software, by hardware, or by a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the face recognition method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a face recognition method.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor. When executing the computer program, the processor implements the steps of the face recognition method in the above embodiments, such as steps S10 to S50 shown in fig. 2 and other extensions and related steps of the method. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the face recognition apparatus in the above embodiments, such as the functions of modules 31 to 35 shown in fig. 3. To avoid repetition, details are not described here again.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the computer device and connects the various parts of the overall computer device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data (such as audio data or video data) created according to the use of the device.
The memory may be integrated in the processor or may be provided separately from the processor.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the steps of the face recognition method in the above embodiments, such as steps S10 to S50 shown in fig. 2 and other extensions and related steps of the method. Alternatively, when executed by the processor, the computer program implements the functions of the modules/units of the face recognition apparatus in the above embodiments, such as the functions of modules 31 to 35 shown in fig. 3. To avoid repetition, details are not described here again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring a face image to be recognized, and inputting the face image to be recognized into an initial face recognition model for feature extraction to obtain a face feature vector of the face image to be recognized;
acquiring feature extraction parameters and classification weight parameters of the initial face recognition model, and constructing an initial feature vector library based on the classification weight parameters and a preset vector library construction mode, wherein the initial feature vector library is obtained through training images input into the initial face recognition model and comprises at least one initial feature vector;
calculating the similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determining a migration classification vector based on the similarity result;
taking the feature extraction parameters and the migration classification vectors as model parameters of an initial face recognition model, and training the initial face recognition model based on the model parameters and a target face image to obtain a target face recognition model;
and inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.
2. The method of claim 1, wherein before the obtaining of the face image to be recognized and the inputting of the face image to be recognized into the initial face recognition model for feature extraction to obtain the face feature vector of the face image to be recognized, the method further comprises:
constructing a pre-training model through a preset number of convolutional layers and full-connection layers, wherein the convolutional layers form a feature extraction layer, and the full-connection layers form a classification layer;
acquiring a training image set, and inputting images in the training image set into the pre-training model for training to obtain a training result;
and updating the pre-training model according to the training result to obtain the initial face recognition model.
3. The face recognition method according to claim 1, wherein the initial face recognition model comprises a feature extraction layer, and the obtaining of the face image to be recognized and the inputting of the face image to be recognized into the initial face recognition model for feature extraction to obtain the face feature vector of the face image to be recognized comprises:
inputting the facial image to be recognized into the feature extraction layer of the initial facial recognition model for feature extraction to obtain the image features of the facial image to be recognized;
and taking the image features as face feature vectors of the face image to be recognized.
4. The face recognition method according to claim 2, wherein the obtaining of the feature extraction parameters and the classification weight parameters of the initial face recognition model, and the constructing of the initial feature vector library based on the classification weight parameters and a preset vector library construction method comprises:
acquiring an initial feature vector and a classification weight vector which are obtained after the images of a training image set are input into the initial face recognition model;
determining training personnel identifications included in the images in the training image set, taking the training personnel identifications as indexes, and acquiring the corresponding initial feature vectors and classification weight vectors;
and constructing the initial feature vector library through the training feature vectors and the classification weight vectors based on the training personnel identifications.
5. The method of claim 1, wherein the calculating the similarity between the face feature vector and the initial feature vector to obtain a similarity result, and the determining the migration classification vector based on the similarity result comprises:
determining training personnel identifications in the training images, determining the training images belonging to the same training personnel identification, and determining the initial feature vectors and the classification weight vectors corresponding to the training personnel identifications;
acquiring at least one identification object identifier in the face image to be identified, and calculating the vector similarity of the face feature vector corresponding to each identification object identifier and the initial feature vectors corresponding to all training personnel identifiers through a cosine distance formula to obtain a similarity result;
and obtaining a classification weight vector corresponding to the training personnel identifier with the maximum similarity result, and taking the classification weight vector as a migration classification vector corresponding to the identification object identifier.
6. The face recognition method of claim 1, wherein the initial face recognition model comprises a feature extraction module and a classification module, and the training of the initial face recognition model based on the model parameters and the target face image by using the feature extraction parameters and the migration classification vectors as model parameters of the initial face recognition model to obtain the target face recognition model comprises:
taking the feature extraction parameters as parameters of the feature extraction module, taking the migration classification vectors as parameters of the classification module, and updating model parameters of the initial face recognition model based on the feature extraction module and the classification module;
and training the initial face recognition model through the target face image to obtain the trained initial face recognition model as the target face recognition model.
7. A face recognition apparatus, comprising:
the characteristic extraction module is used for acquiring a face image to be recognized and inputting the face image to be recognized into an initial face recognition model for characteristic extraction to obtain a face characteristic vector of the face image to be recognized;
the vector library construction module is used for obtaining feature extraction parameters and classification weight parameters of the initial face recognition model and constructing an initial feature vector library based on the classification weight parameters and a preset vector library construction mode, wherein the initial feature vector library is obtained through a training image input into the initial face recognition model and comprises at least one initial feature vector;
the migration vector determination module is used for calculating the similarity between the face feature vector and the initial feature vector to obtain a similarity result, and determining a migration classification vector based on the similarity result;
the target model generation module is used for taking the feature extraction parameters and the migration classification vectors as model parameters of an initial face recognition model, and training the initial face recognition model based on the model parameters and a target face image to obtain a target face recognition model;
and the face recognition module is used for inputting the face image to be recognized into the target face recognition model for face recognition to obtain a face recognition result.
8. The face recognition apparatus of claim 7, wherein the migration vector determination module comprises:
the personnel identification determining module is used for determining training personnel identifications in the training images, determining the training images belonging to the same training personnel identification, and determining the initial feature vectors and the classification weight vectors corresponding to the training personnel identifications;
the similarity calculation module is used for acquiring at least one identification object identifier in the face image to be identified, and calculating the vector similarity between the face feature vector corresponding to each identification object identifier and the initial feature vectors corresponding to all the training personnel identifiers through a cosine distance formula to obtain a similarity result;
and the migration vector determination module is used for acquiring a classification weight vector corresponding to the training personnel identifier with the maximum similarity result and taking the classification weight vector as a migration classification vector corresponding to the identification object identifier.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and running on the processor, characterized in that the processor implements the steps of the face recognition method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the face recognition method according to any one of claims 1 to 6.
CN202210108544.4A 2022-01-28 2022-01-28 Face recognition method and device, computer equipment and storage medium Active CN114550241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210108544.4A CN114550241B (en) 2022-01-28 2022-01-28 Face recognition method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114550241A true CN114550241A (en) 2022-05-27
CN114550241B CN114550241B (en) 2023-01-31

Family

ID=81673107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210108544.4A Active CN114550241B (en) 2022-01-28 2022-01-28 Face recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114550241B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200034749A1 (en) * 2018-07-26 2020-01-30 International Business Machines Corporation Training corpus refinement and incremental updating
CN112906554A (en) * 2021-02-08 2021-06-04 智慧眼科技股份有限公司 Model training optimization method and device based on visual image and related equipment
CN113177533A (en) * 2021-05-28 2021-07-27 济南博观智能科技有限公司 Face recognition method and device and electronic equipment
CN113807122A (en) * 2020-06-11 2021-12-17 阿里巴巴集团控股有限公司 Model training method, object recognition method and device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BERNHARD LUTZ ET AL.: "Predicting sentence-level polarity labels of financial news using abnormal stock returns", Expert Systems with Applications *
HE Junping: "Research on Multi-Model Sentiment Analysis of User-Generated Content Text", CNKI Master's Theses Electronic Journal *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115761833A (en) * 2022-10-10 2023-03-07 荣耀终端有限公司 Face recognition method, electronic device, program product, and medium
CN115761833B (en) * 2022-10-10 2023-10-24 荣耀终端有限公司 Face recognition method, electronic equipment and medium
CN115761850A (en) * 2022-11-16 2023-03-07 智慧眼科技股份有限公司 Face recognition model training method, face recognition device and storage medium
CN115761850B (en) * 2022-11-16 2024-03-22 智慧眼科技股份有限公司 Face recognition model training method, face recognition method, device and storage medium
CN116885503A (en) * 2023-09-08 2023-10-13 深圳市特发信息光网科技股份有限公司 Detachable photoelectric composite cable connector structure
CN116885503B (en) * 2023-09-08 2023-12-01 深圳市特发信息光网科技股份有限公司 Detachable photoelectric composite cable connector structure

Also Published As

Publication number Publication date
CN114550241B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN114550241B (en) Face recognition method and device, computer equipment and storage medium
CN110020620B (en) Face recognition method, device and equipment under large posture
CN112926654B (en) Pre-labeling model training and certificate pre-labeling method, device, equipment and medium
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
CN109271917B (en) Face recognition method and device, computer equipment and readable storage medium
EP4187492A1 (en) Image generation method and apparatus, and computer device and computer-readable storage medium
CN112395390B (en) Training corpus generation method of intention recognition model and related equipment thereof
CN114266946A (en) Feature identification method and device under shielding condition, computer equipment and medium
CN113887527B (en) Face image processing method and device, computer equipment and storage medium
CN112364799A (en) Gesture recognition method and device
CN113435608A (en) Method and device for generating federated learning model, computer equipment and storage medium
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN114399396A (en) Insurance product recommendation method and device, computer equipment and storage medium
CN113128526B (en) Image recognition method and device, electronic equipment and computer-readable storage medium
CN113434648A (en) Meta learning method, device and equipment of text classification model and storage medium
CN113220828A (en) Intention recognition model processing method and device, computer equipment and storage medium
CN112381236A (en) Data processing method, device, equipment and storage medium for federal transfer learning
CN114266324B (en) Model visualization modeling method and device, computer equipment and storage medium
CN115700845B (en) Face recognition model training method, face recognition device and related equipment
CN114331388A (en) Salary calculation method, device, equipment and storage medium based on federal learning
CN111582143A (en) Student classroom attendance method and device based on image recognition and storage medium
CN113011132A (en) Method and device for identifying vertically arranged characters, computer equipment and storage medium
CN112036501A (en) Image similarity detection method based on convolutional neural network and related equipment thereof
CN112418442A (en) Data processing method, device, equipment and storage medium for federal transfer learning
CN109933969B (en) Verification code identification method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Wisdom Eye Technology Co.,Ltd.

Country or region after: China

Address before: 410205, Changsha high tech Zone, Hunan Province, China

Patentee before: Wisdom Eye Technology Co.,Ltd.

Country or region before: China