CN108491812B - Method and device for generating a face recognition model

Info

Publication number
CN108491812B
Authority
CN
China
Prior art keywords
image
face
training sample
feature map
recognition model
Legal status
Active
Application number
CN201810268892.1A
Other languages
Chinese (zh)
Other versions
CN108491812A (en)
Inventor
Zhang Gang (张刚)
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN201810268892.1A
Publication of CN108491812A
Application granted
Publication of CN108491812B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose a method and device for generating a face recognition model. One embodiment of the method comprises: acquiring a training sample set, and inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, obtaining a trained face recognition model, where the face recognition model is used to recognize whether the objects corresponding to a face image pair input into it satisfy a predetermined blood relationship. The feature maps available for training are expanded without increasing the number of face images of the target object and the relationship objects, which reduces the labor, material, and time cost of obtaining training face images and improves the efficiency of training the face recognition model.

Description

Method and device for generating face recognition model
Technical Field
The embodiments of the present application relate to the field of computer technology, in particular to Internet technology, and specifically to a method and device for generating a face recognition model.
Background
Face recognition is a computer technology for identifying identity by analyzing and comparing visual feature information of human faces. Face recognition products are widely applied in fields such as finance, security inspection, medical treatment, and public security.
In the process of face recognition, the face features of the face to be recognized can be matched with the face feature template, and the identity information of the face to be recognized is predicted according to the similarity.
Generally, a face recognition model can be trained by using a training face image, so that the face recognition model can perform processes such as face image recognition.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating a face recognition model.
In a first aspect, an embodiment of the present application provides a method for generating a face recognition model, where the method includes: acquiring a training sample set; inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model to obtain a trained face recognition model, wherein the face recognition model is used for recognizing whether objects corresponding to the face image pair input into the face recognition model meet a preset blood relationship; wherein the training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated based on the steps of: acquiring a face feature map of a target object as a training sample image in a training sample image pair; acquiring face images of at least two relation objects having a preset blood relationship with a target object; generating a target image set, wherein the target image set comprises a feature map generated by a face image of a relational object and a feature map generated by a combined image of the relational object, and the combined image of the relational object is an image generated by cutting out a preset feature region of the face image of one of the relational objects and replacing a feature region corresponding to the face image of the other relational object by the cut-out preset feature region; and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
In some embodiments, the training sample set further comprises at least one training sample image pair comprising a face feature map of the target object and a face feature map of a person object that does not have the predetermined blood relationship with the target object; and inputting each training sample in the training sample set into the initial face recognition model to train the initial face recognition model, obtaining the trained face recognition model, comprises: inputting each training sample in the training sample set into the initial face recognition model to train the initial face recognition model, obtaining the trained face recognition model such that, if an image pair to be detected that is input into the face recognition model comprises a face feature map of a target object to be detected and a face feature map having the predetermined blood relationship with the target object to be detected, the value output by the face recognition model is greater than a first preset threshold, and if the image pair to be detected comprises the face feature map of the target object and a face feature map not having the predetermined blood relationship with the target object, the value output by the face recognition model is less than a second preset threshold, the second preset threshold being smaller than the first preset threshold.
In some embodiments, obtaining the set of training samples comprises: acquiring a face image of a target object, face images of at least two relation objects having a predetermined blood relationship with the target object and a combined image obtained from the face images of the at least two relation objects; inputting the face image of the target object, the face image of the relational object and the combined image into a pre-trained face feature recognition model to respectively obtain a face feature map of the target object, a face feature map of the relational object and a feature map of the combined image; and taking the face feature map of the target object as a training sample image, and randomly selecting one feature map from the face feature map of the relational object and the feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
In some embodiments, before inputting the face image of the target object, the face images of the relationship objects, and the combined images into the pre-trained face feature recognition model to respectively obtain the face feature map of the target object, the face feature maps of the relationship objects, and the feature maps of the combined images, the method further includes: performing affine transformation on the face image of the target object, the face images of the relationship objects, and the combined images to obtain a transformed face image of the target object, transformed face images of the relationship objects, and transformed combined images. The inputting step further includes: respectively inputting the transformed face image of the target object, the transformed face images of the relationship objects, and the transformed combined images into the pre-trained face feature recognition model to obtain a transformed face feature map of the target object, transformed face feature maps of the relationship objects, and feature maps of the transformed combined images. The step of taking the face feature map of the target object as one training sample image and arbitrarily selecting one feature map from the face feature maps of the relationship objects and the feature maps of the combined images as another training sample image to obtain at least one training sample image pair in the training sample set further includes: taking the face feature map of the target object or the transformed face feature map of the target object as one training sample image, and arbitrarily selecting one feature map from the transformed face feature map of the target object, the transformed face feature maps of the relationship objects, and the feature maps of the transformed combined images as another training sample image to obtain at least one training sample image pair in the training sample set.
In some embodiments, the face recognition model is a convolutional neural network model.
In a second aspect, an embodiment of the present application provides an apparatus for generating a face recognition model, where the apparatus includes: an acquisition unit configured to acquire a training sample set; the face recognition model generation unit is configured to input each training sample in the training sample set into an initial face recognition model to train the initial face recognition model to obtain a trained face recognition model, and the face recognition model is used for recognizing whether objects corresponding to the input face image pair meet a predetermined blood relationship; wherein the training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated based on the steps of: acquiring a face feature map of a target object as a training sample image in a training sample image pair; acquiring face images of at least two relation objects having a preset blood relationship with a target object; generating a target image set, wherein the target image set comprises a feature map generated by a face image of a relational object and a feature map generated by a combined image of the relational object, and the combined image of the relational object is an image generated by cutting out a preset feature region of the face image of one of the relational objects and replacing a feature region corresponding to the face image of the other relational object by the cut-out preset feature region; and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
In some embodiments, the training sample set further comprises at least one training sample image pair comprising a face feature map of the target object and a face feature map of a person object that does not have the predetermined blood relationship with the target object; and the face recognition model generation unit is further configured to: input each training sample in the training sample set into the initial face recognition model to train the initial face recognition model, obtaining the trained face recognition model such that, if an image pair to be detected that is input into the face recognition model comprises a face feature map of a target object to be detected and a face feature map having the predetermined blood relationship with the target object to be detected, the value output by the face recognition model is greater than a first preset threshold, and if the image pair to be detected comprises the face feature map of the target object and a face feature map not having the predetermined blood relationship with the target object, the value output by the face recognition model is less than a second preset threshold, the second preset threshold being smaller than the first preset threshold.
In some embodiments, the obtaining unit is further configured to: acquiring a face image of a target object, face images of at least two relation objects having a predetermined blood relationship with the target object and a combined image obtained from the face images of the at least two relation objects; inputting the face image of the target object, the face image of the relational object and the combined image into a pre-trained face feature recognition model to respectively obtain a face feature map of the target object, a face feature map of the relational object and a feature map of the combined image; and taking the face feature map of the target object as a training sample image, and randomly selecting one feature map from the face feature map of the relational object and the feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
In some embodiments, the obtaining unit is further configured to: before inputting the face image of the target object, the face image of the relational object and the combined image into a pre-trained face feature recognition model and respectively obtaining the face feature map of the target object, the face feature map of the relational object and the feature map of the combined image, carrying out affine transformation on the face image of the target object, the face image of the relational object and the combined image to obtain a transformed face image of the target object, a transformed face image of the relational object and a transformed combined image; respectively inputting the transformed target object face image, the transformed relation object face image and the transformed combined image into a pre-trained face feature recognition model to obtain a transformed target object face feature image, a transformed relation object face feature image and a transformed combined image feature image; and taking the face feature map of the target object or the face feature map of the transformed target object as a training sample image, and randomly selecting one feature map from the transformed face feature map of the target object, the transformed face feature map of the relational object and the transformed feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
In some embodiments, the face recognition model is a convolutional neural network model.
In a third aspect, an embodiment of the present application provides a server, where the server includes: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and device for generating a face recognition model provided by the embodiments of the present application, the initial face recognition model is trained not only with training sample image pairs consisting of the face feature map of the target object and the face feature map of any one of at least two relationship objects having a predetermined blood relationship with the target object, but also with training sample image pairs consisting of the face feature map of the target object and the feature map of any combined image generated from the face images of the at least two relationship objects, so that the trained face recognition model can recognize whether the objects corresponding to a face image pair input into the model satisfy the predetermined blood relationship. The feature maps available for training are thereby expanded without increasing the number of face images of the target object and the relationship objects, which reduces the labor, material, and time cost of obtaining training face images and improves the efficiency of training the face recognition model.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating a face recognition model according to the present application;
FIG. 3 is a schematic diagram of the generation of at least one training sample image pair from a set of training samples according to the present application;
FIG. 4 is a schematic diagram of an application scenario of a face recognition model generation method according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method of generating a face recognition model according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for generating a face recognition model according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of the face recognition model generation method or the face recognition model generation apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with a server 105 via a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may provide various services, for example, the server 105 may obtain input data from the terminals 101, 102, 103 through the network 104 so as to implement training of the face recognition model and obtain the trained face recognition model.
It should be noted that the method for generating the face recognition model provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the generating device of the face recognition model is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of generating a face recognition model according to the present application is shown. The generation method of the face recognition model comprises the following steps:
step 201, a training sample set is obtained.
In this embodiment, the execution subject of the method for generating a face recognition model (for example, the server shown in fig. 1) may acquire a training face image from a terminal device through a wired or wireless connection. Here, the training face image may include a face image of the target object, to which annotation information of the identity of the target object has been added in advance.
After receiving the face image of the target object, the execution subject may perform further analysis processing on the face image of the target object by using various analysis methods, so as to obtain a training sample set. The training sample set comprises a plurality of training sample image pairs. At least one training sample image pair of the training sample set may be generated based on the steps shown in fig. 3.
Please refer to fig. 3, which shows a schematic diagram of generation of at least one training sample image pair in the training sample set.
In the schematic generation diagram 300 of at least one training sample image pair in the training sample set shown in fig. 3, at least one training sample image pair in the training sample set may be generated by the following steps.
Step 301, a face feature map of a target object is obtained as one training sample image in a training sample image pair.
The execution subject may obtain the face feature map of the target object by applying various analysis methods to the face image of the target object, for example image processing. The face feature map of the target object can be determined through processes such as light compensation, image graying, Gaussian smoothing, similarity calculation, and binarization.
In this embodiment, the face feature map refers to an image that can describe color features, texture features, and shape features of a face of an object and relative positional relationship features of parts of the face. The face feature map may be a two-dimensional image.
In some optional implementations of the embodiment, for the face image of the target object, the regions of the parts of the face may be detected first in the face image. Then, for each part of the human face, a feature map of the part is extracted by various methods (e.g., a method of image processing). And finally, obtaining a feature image of the face according to the feature images of all parts of the face. The human face parts here may be, for example, eyes, nose and mouth.
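For illustration only, a minimal Python sketch of such an image-processing pipeline is given below, using OpenCV. It is a sketch under assumptions, not the embodiments' specific algorithm: histogram equalization stands in for light compensation, Otsu thresholding stands in for binarization, and the similarity-calculation step is omitted.

    import cv2

    def face_feature_map(image_path):
        # Hypothetical sketch of the preprocessing pipeline described above.
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # image graying
        lit = cv2.equalizeHist(gray)                     # light compensation (assumed: histogram equalization)
        smooth = cv2.GaussianBlur(lit, (5, 5), 0)        # Gaussian smoothing
        _, binary = cv2.threshold(smooth, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization (assumed: Otsu)
        return binary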
In this embodiment, the execution subject may use the face feature map of the target object as one of the training sample images in the training sample image pair.
Step 302, acquiring face images of at least two relation objects having a predetermined blood relationship with a target object.
The training image may further include face images of at least two relationship objects having a predetermined blood relationship with the target object. The predetermined blood relationship may be, for example, a natural direct (lineal) blood relationship or a natural collateral blood relationship. Natural direct relationships include parent/child, grandparent/grandchild, etc., and natural collateral relationships include brothers, sisters, etc.
For example, when the predetermined blood relationship is a parent/child relationship and the target object is a child, the at least two relationship objects may comprise two relationship objects, namely the father and the mother, i.e., the face images of the father and the mother.
When the predetermined blood relationship is a direct blood relationship, the face images of the at least two relationship objects having the predetermined relationship with the target object may further include face images of the target object's paternal grandparents, face images of the target object's maternal grandparents, and the like.
Annotation information of the identity of the relationship object is added in advance to the face image of the relationship object; this annotation information may be, for example, information indicating the predetermined relationship with the target object.
Step 303, generating a target image set, where the target image set includes a feature map generated from the face image of the relationship object and a feature map generated from a combined image of the relationship object, where the combined image of the relationship object is an image generated by cutting a preset feature region of the face image of one of the relationship objects and replacing a feature region corresponding to the face image of the other relationship object with the cut preset feature region.
The executing body may generate the target image set based on the face images of at least two relational objects having a predetermined blood-related relationship with the target object after obtaining the face images of the at least two relational objects.
First, the execution subject may obtain, from the face images of the at least two relationship objects, face feature maps corresponding to the at least two relationship objects respectively through various analysis methods (e.g., image processing methods).
Then, a combined image of the relational objects is generated from the face images in the at least two relational objects. Specifically, the execution subject may intercept a preset feature region of the face image of any one of the at least two relationship objects, and replace a feature region corresponding to the face image of another relationship object with the intercepted preset feature region to generate a combined image. This results in a plurality of combined images. The predetermined characteristic region can be, for example, the eyes, nose and/or mouth.
That is, the execution subject may intercept the eye region in the face image of any one of the at least two relationship objects and replace the corresponding eye region in the face image of another relationship object to generate a combined image; or the execution subject may intercept the nose region in the face image of any one of the at least two relationship objects and replace the corresponding nose region in the face image of another relationship object to generate a combined image; or the execution subject may intercept the mouth region in the face image of any one of the at least two relationship objects and replace the corresponding mouth region in the face image of another relationship object to generate a combined image, and so on. When the number of relationship objects is N and the number of preset feature regions is m, N^m - N combined images may be obtained, where N >= 2 and m >= 2 are positive integers.
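For illustration of this region-swapping step, the following Python/NumPy sketch enumerates every assignment of the m preset feature regions to the N relationship objects and skips the N single-source assignments, yielding the N^m - N combined images; with N = 2 parents and m = 3 regions this gives 2^3 - 2 = 6 combined images. The fixed region coordinates are assumptions of this sketch (the embodiments do not specify how regions are located; a real system might use facial landmark detection):

    import itertools
    import numpy as np

    # Assumed bounding boxes (y0, y1, x0, x1) for the preset feature
    # regions on aligned 128x128 face images; illustrative only.
    REGIONS = {
        "eyes":  (30, 60, 20, 100),
        "nose":  (60, 90, 45, 75),
        "mouth": (90, 115, 35, 85),
    }

    def combined_images(faces):
        """Each preset region is taken from one of the N relationship
        objects; single-source assignments are skipped, leaving
        N**m - N combined images."""
        n, m = len(faces), len(REGIONS)
        results = []
        for choice in itertools.product(range(n), repeat=m):
            if len(set(choice)) == 1:        # all regions from one face
                continue
            base = faces[choice[0]].copy()   # background from the first chosen face
            for (y0, y1, x0, x1), src in zip(REGIONS.values(), choice):
                base[y0:y1, x0:x1] = faces[src][y0:y1, x0:x1]
            results.append(base)
        return results

    father = np.zeros((128, 128, 3), dtype=np.uint8)  # placeholder images
    mother = np.ones((128, 128, 3), dtype=np.uint8)
    print(len(combined_images([father, mother])))     # 2**3 - 2 = 6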
Then, for each combined image, the execution subject may obtain a feature map of the combined image through various analysis methods. The feature map of the set of combined images may be obtained, for example, by means of image processing.
In this way, the target image set is generated from the feature map corresponding to each combined image and the face feature map corresponding to each of the at least two relationship objects.
Step 304, one feature map is arbitrarily selected from the target image set to serve as the other training sample image in the training sample image pair.
In this embodiment, the executing entity may arbitrarily select one feature map from the target image set obtained in step 303 as another training sample image in the training sample image pair. That is to say, one training sample image in one training sample image pair may be a face feature map of a target object, and the other training sample image may be a face feature map corresponding to any one of the at least two relationship objects, or may be a feature map of any one of a plurality of combined images generated from face images of the at least two relationship objects.
Through steps 301 to 304, the execution subject can obtain a plurality of training sample image pairs. Each training sample image pair may include the face feature map corresponding to the target object and any one feature map selected from the target image set. When the number of the at least two relationship objects is N and the preset feature regions are the three regions of eyes, nose and mouth, N^3 training sample image pairs can be obtained.
In this way, the number of training sample image pairs obtained is much greater than the number of pairs that can be formed by combining only the face feature map obtained from the face image of the target object with the face feature map obtained from the face image of any one of the at least two relationship objects having a predetermined blood relationship with the target object.
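A minimal sketch of the pairing logic of steps 301 to 304 (names hypothetical), assuming the target object's face feature map and the target image set have already been computed: each element of the target image set is paired with the target's face feature map, so the N relationship objects' face feature maps plus the N^3 - N combined-image feature maps yield N^3 pairs.

    def build_positive_pairs(target_feature_map, target_image_set):
        # One training sample image pair per element of the target image
        # set: the relationship objects' face feature maps plus the
        # combined-image feature maps, i.e. N**3 pairs when m == 3.
        return [(target_feature_map, feature_map)
                for feature_map in target_image_set]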
Referring back to fig. 2, the method for generating a face recognition model according to the present embodiment further includes:
step 202, inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, so as to obtain a trained face recognition model, wherein the face recognition model is used for recognizing whether objects corresponding to the face image pair input into the face recognition model meet a predetermined blood relationship.
In this embodiment, after obtaining the training sample set in step 201, the executing entity (for example, the server shown in fig. 1) may input the training sample set into the initial face recognition model to train the initial face recognition model, so as to obtain a trained face recognition model.
In this embodiment, the face recognition model may be, for example, an artificial neural network model, or a non-neural-network model such as a support vector machine.
In some optional implementations of the present embodiment, the face recognition model may be a convolutional neural network model. A convolutional neural network (CNN) is a kind of deep artificial neural network. In general, a convolutional neural network may include a plurality of feature extraction layers (also called convolutional layers) and a plurality of downsampling layers (also called pooling layers), with feature extraction layers and downsampling layers connected alternately. Each feature extraction layer may include at least one convolution kernel; for a feature extraction layer, a feature map is obtained by convolving the output of the previous layer with the convolution kernels of that layer. The downsampling layer performs local averaging and dimensionality reduction on the convolution result output by the connected feature extraction layer. The convolution kernels in a feature extraction layer comprise a plurality of weights, which can be trained from multiple samples. Because the convolution kernels of a convolutional neural network share weights locally when extracting feature maps from an image, the complexity of the neural network model can be reduced.
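The embodiments do not disclose a concrete architecture; purely as an illustration of alternating convolutional and pooling layers, a PyTorch sketch of a small network scoring a pair of single-channel face feature maps might look as follows. Stacking the pair as two input channels and the 128x128 input size are assumptions of this sketch:

    import torch
    import torch.nn as nn

    class KinshipCNN(nn.Module):
        """Illustrative only: alternating convolution (feature extraction)
        and max-pooling (downsampling) layers, then a fully connected head
        that outputs a score in (0, 1) for a feature-map pair."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1),  # pair stacked as 2 channels
                nn.ReLU(),
                nn.MaxPool2d(2),                             # downsampling layer
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 32 * 32, 1),                  # assumes 128x128 inputs
                nn.Sigmoid(),
            )

        def forward(self, pair):
            return self.head(self.features(pair))

    model = KinshipCNN()
    pair = torch.rand(1, 2, 128, 128)  # one (target, candidate) feature-map pair
    score = model(pair)                # value in (0, 1)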
In some optional implementations of the present embodiment, the training sample set further includes at least one training sample image pair, where the training sample image pair includes a face feature map corresponding to the target object and a face feature map of a human object that does not satisfy the predetermined consanguinity relationship with the target object.
It will be appreciated that, in these alternative implementations, a training sample image pair comprising a face feature map corresponding to the target object and a face feature map of a person object that does not satisfy the predetermined blood relationship with the target object may serve as a negative sample in the training sample set.
The executing body may input each training sample image pair in the training sample set, including the training sample image pairs obtained in steps 301 to 304 as well as the pairs consisting of the face feature map corresponding to the target object and the face feature map of a person object that does not satisfy the predetermined blood relationship with the target object, into the initial face recognition model to train the initial face recognition model, so as to obtain the trained face recognition model. The trained face recognition model can then recognize whether the objects corresponding to an image pair to be detected that is input into it satisfy the predetermined blood relationship.
Specifically, if the image pair to be detected input into the face recognition model includes a face feature image of the target object to be detected and a face feature image of an object having a predetermined blood relationship with the target object to be detected, the output value of the face recognition model may be a numerical value greater than a preset threshold value. If the image pair to be detected input into the face recognition model includes a face feature image of the target object to be detected and a face feature image of an object which does not have a predetermined blood relationship with the target object to be detected, the output value of the recognition model may be a numerical value smaller than the preset threshold value. The preset threshold may be set according to a specific application, and for example, the preset threshold may be 0.5.
In some optional implementation manners of this embodiment, the executing subject may further train the initial face recognition model with a training sample set formed by a training sample pair obtained in steps 301 to 304 shown in fig. 3 and a training sample pair formed by a face feature map of the target object and a face feature map of an object having no predetermined blood-related relationship with the target object. And obtaining the trained face recognition model, so that if the image pair to be detected input into the face recognition model comprises the face feature map of the target object to be detected and the object face feature map meeting a preset blood relationship with the target object to be detected, the output numerical value of the trained face recognition model is larger than a first preset threshold value. And if the image pair to be detected comprises the face feature image of the target object and the face feature image of the object which does not satisfy the predetermined blood relationship with the target object, the numerical value output by the trained face recognition model is smaller than a second preset threshold value. Here the second predetermined threshold is smaller than the first predetermined threshold.
The specific values of the first preset threshold and the second preset threshold may be set according to the need, and are not limited herein.
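Purely for illustration, the two thresholds could be applied to the model output as follows; the threshold values and the handling of scores falling between them are assumptions, since the embodiments only require the second preset threshold to be smaller than the first:

    def interpret_score(score, first_threshold=0.7, second_threshold=0.3):
        # second_threshold must be smaller than first_threshold
        if score > first_threshold:
            return "satisfies the predetermined blood relationship"
        if score < second_threshold:
            return "does not satisfy the predetermined blood relationship"
        return "indeterminate"   # assumed handling of the in-between case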
It should be noted that the above-mentioned artificial neural network model, convolutional neural network model, and other non-neural network models such as support vector machine are well-known technologies that are widely researched and applied at present, and are not described herein again.
Please refer to fig. 4, which is a schematic diagram of an application scenario of the method for generating a face recognition model according to the present application.
In the application scenario of fig. 4, the server 402 acquires a training person image 403 from the user terminal 401, where the training person image includes a face image of a target object and face images of at least two relationship objects having a predetermined blood relationship with the target object. Then, based on the training person image 403, the server 402 may obtain a training sample set 404. The training sample set may include a plurality of training sample pairs, where at least one training sample pair in the training sample set is obtained as follows: first, a face feature map of the target object is acquired as one training sample image in a training sample image pair; then combined images are generated based on the face images of the at least two relationship objects; then the feature maps of the combined images and the face feature maps of the relationship objects are acquired, and any one of them is taken as the other training sample image. Finally, the server 402 inputs each training sample in the training sample set into the initial face recognition model to train 405 the initial face recognition model and obtains a trained face recognition model 406, so that the face recognition model can recognize whether the objects corresponding to face image pairs input into it satisfy the predetermined blood relationship.
In the method provided by the above embodiment of the present application, the initial face recognition model is trained not only with training sample image pairs consisting of the face feature map of the target object and the face feature map of any one of at least two relationship objects having a predetermined blood relationship with the target object, but also with training sample image pairs consisting of the face feature map of the target object and the feature map of any combined image generated from the face images of the at least two relationship objects, so that the trained face recognition model can recognize whether the objects corresponding to a face image pair input into the model satisfy the predetermined blood relationship. In this way, the face recognition model is trained without increasing the number of training face images of the target object or of the relationship objects: the feature maps available for training are expanded, the labor, material, and time cost of obtaining training face images is reduced, and the efficiency of training the face recognition model is improved.
With further reference to FIG. 5, a flow 500 of yet another embodiment of a method of generating a face recognition model is illustrated. The process 500 of the method for generating a face recognition model includes the following steps:
step 501, acquiring a face image of a target object, face images of at least two relation objects having a predetermined blood relationship with the target object, and a combined image obtained by combining the face images of the at least two relation objects.
In this embodiment, an executing subject (for example, a server shown in fig. 1) of the method for generating a face recognition model may acquire a face image of a target object and face images of at least two relationship objects having a predetermined blood-related relationship with the target object from a terminal device by a wired connection manner or a wireless connection manner. And adding the labeling information of the identity of the target object in the face image of the target object in advance. The annotation information of the identity of the relationship object is added in advance to the face images of at least two relationship objects having a predetermined relationship with the target object, and the annotation information of the identity of the relationship object here may be, for example, information indicating a predetermined relationship with the target object.
The execution subject may derive a plurality of combined images from face images of at least two relationship objects having a predetermined consanguineous relationship with the target object. The above process of obtaining a plurality of combined images can refer to the detailed description of step 303 in the embodiment shown in fig. 3, which is not repeated herein.
Step 502, inputting the face image of the target object, the face image of the relationship object and the combined image into a pre-trained face feature recognition model, and respectively obtaining a face feature map of the target object, a face feature map of the relationship object and a feature map of the combined image.
The execution main body can respectively input the face image of the target object, the face image of at least two relation objects having a preset blood relationship with the target object and the combined image into a pre-trained face feature recognition model to obtain a face feature map of the target object, face feature maps of at least two relation objects having a preset blood relationship with the target object and a feature map of the combined image.
The preset face feature recognition model may be a neural network model (e.g., an artificial neural network model, a convolutional neural network model), a non-neural network model, or the like.
Step 503, using the face feature map of the target object as a training sample image, and arbitrarily selecting one feature map from the face feature map of the relational object and the feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
The executing agent may use a face feature map of a target object obtained by the face feature recognition model as one of the pair of training sample images, and select one feature map from the face feature maps of at least two respective relationship objects having a predetermined blood-related relationship with the target object obtained by the face feature recognition model and the feature map of any combined image as the other training sample image to generate a pair of training sample images in the training sample set. In this way, a plurality of training sample pairs in the set of training samples may be obtained.
In some optional implementation manners of this embodiment, before the face image of the target object, the face images of the at least two relationship objects having a predetermined blood relationship with the target object, and the combined images are respectively input into the pre-trained face feature recognition model, the executing body may further perform affine transformation on the face image of the target object, the face images of the at least two relationship objects, and the combined images to respectively obtain a transformed face image of the target object, transformed face images of the relationship objects, and transformed combined images. The affine transformation may include scaling, translation, rotation, and other processing of the face image.
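A minimal sketch of this affine transformation step (scaling, translation, and rotation), assuming OpenCV and aligned input images; the specific parameter values are illustrative assumptions:

    import cv2
    import numpy as np

    def affine_augment(image, angle=10.0, scale=0.9, tx=5.0, ty=-5.0):
        """Rotate and scale about the image center, then translate."""
        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  # rotation + scaling
        m[:, 2] += (tx, ty)                                        # translation
        return cv2.warpAffine(image, m, (w, h))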
In these optional implementations, the execution subject may further input the face image of the transformed target object, the face image of the transformed relationship object, and the transformed combined image into the face feature recognition model to obtain a face feature map of the transformed target object, a face feature map of the transformed relationship object, and a feature map of the transformed combined image.
The execution subject may further use the face feature map of the target object or the transformed face feature map of the target object as a training sample image, and arbitrarily select one feature map from the transformed face feature map of the target object, the transformed face feature map of the relationship object, and the transformed feature map of the combined image as another training sample image, so as to obtain at least one training sample image pair in the training sample set.
In these alternative implementations, the face feature map of the target object or the transformed face feature map of the target object may be used as one of the training sample pairs, and any one of the face feature map of the transformed relational object or the feature map of the transformed combined image may be used as another training sample image to generate at least one training sample image pair in the training sample set. These training sample image pairs may be used in step 504 for training the initial face recognition model, which may further expand the number of training sample image pairs.
Step 504, inputting each training sample in the training sample set into the initial face recognition model to train the initial face recognition model, so as to obtain a trained face recognition model.
Step 504 is the same as step 202 in the embodiment shown in fig. 2, and is not described here.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the process 500 of the method for generating a face recognition model in this embodiment highlights the step of inputting the face image of the target object, the face images of the relationship objects, and the combined images into the pre-trained face feature recognition model to obtain the face feature map of the target object, the face feature maps of the relationship objects, and the feature maps of the combined images. Therefore, the scheme described in this embodiment can accelerate the acquisition of training data. In addition, in the scheme described in this embodiment, the transformed face feature map of the target object, the transformed face feature maps of the relationship objects, and the feature maps of the transformed combined images are also used as training sample images to train the face recognition model. Since these transformed feature maps reflect the features of the target object's face image, the relationship objects' faces, and the combined images at different angles, using them to train the initial face recognition model can further improve the robustness of the trained face recognition model.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating a face recognition model, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for generating a face recognition model according to the present embodiment includes: an acquisition unit 601 and a face recognition model generation unit 602. The acquiring unit 601 is configured to acquire a training sample set; a face recognition model generating unit 602 configured to input each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, so as to obtain a trained face recognition model, where the face recognition model is used to recognize whether an object corresponding to a face image pair input into the face recognition model satisfies a predetermined blood relationship; wherein the training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated based on the steps of: acquiring a face feature map of a target object as a training sample image in a training sample image pair; acquiring face images of at least two relation objects having a preset blood relationship with a target object; generating a target image set, wherein the target image set comprises a feature map generated by a face image of a relational object and a feature map generated by a combined image of the relational object, and the combined image of the relational object is an image generated by cutting out a preset feature region of the face image of one of the relational objects and replacing a feature region corresponding to the face image of the other relational object by the cut-out preset feature region; and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
In this embodiment, specific processing of the obtaining unit 601 and the face recognition model generating unit 602 of the face recognition model generating device 600 and technical effects thereof can refer to related descriptions of step 201 and step 202 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of the present embodiment, the training sample set further includes at least one training sample image pair comprising a face feature map of the target object and a face feature map of a person object having no predetermined blood relationship with the target object; and the face recognition model generation unit is further configured to: input each training sample in the training sample set into the initial face recognition model to train the initial face recognition model, obtaining the trained face recognition model such that, if an image pair to be detected that is input into the face recognition model comprises a face feature map of a target object to be detected and a face feature map having the predetermined blood relationship with the target object to be detected, the value output by the face recognition model is greater than a first preset threshold, and if the image pair to be detected comprises the face feature map of the target object and a face feature map not having the predetermined blood relationship with the target object, the value output by the face recognition model is less than a second preset threshold, the second preset threshold being smaller than the first preset threshold.
In some optional implementations of this embodiment, the obtaining unit is further configured to: acquiring a face image of a target object, face images of at least two relation objects having a predetermined blood relationship with the target object and a combined image obtained from the face images of the at least two relation objects; inputting the face image of the target object, the face image of the relational object and the combined image into a pre-trained face feature recognition model to respectively obtain a face feature map of the target object, a face feature map of the relational object and a feature map of the combined image; and taking the face feature map of the target object as a training sample image, and randomly selecting one feature map from the face feature map of the relational object and the feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
In some optional implementations of this embodiment, the obtaining unit is further configured to: before inputting the face image of the target object, the face image of the relational object and the combined image into a pre-trained face feature recognition model and respectively obtaining the face feature map of the target object, the face feature map of the relational object and the feature map of the combined image, carrying out affine transformation on the face image of the target object, the face image of the relational object and the combined image to obtain a transformed face image of the target object, a transformed face image of the relational object and a transformed combined image; respectively inputting the transformed target object face image, the transformed relation object face image and the transformed combined image into a pre-trained face feature recognition model to obtain a transformed target object face feature image, a transformed relation object face feature image and a transformed combined image feature image; and taking the face feature map of the target object or the face feature map of the transformed target object as a training sample image, and randomly selecting one feature map from the transformed face feature map of the target object, the transformed face feature map of the relational object and the transformed feature map of the combined image as another training sample image to obtain at least one training sample image pair in the training sample set.
In some optional implementations of the embodiment, the face recognition model is a convolutional neural network model.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709 and/or installed from the removable medium 711. When executed by the Central Processing Unit (CPU) 701, the computer program performs the above-described functions defined in the method of the present application.

It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an acquisition unit and a face recognition model generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires a training sample set".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a training sample set; and input each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, thereby obtaining a trained face recognition model, where the face recognition model is used to recognize whether the objects corresponding to a face image pair input into the face recognition model satisfy a predetermined blood relationship. The training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated by: acquiring a face feature map of a target object as one training sample image in a training sample image pair; acquiring face images of at least two relational objects having the predetermined blood relationship with the target object; generating a target image set, where the target image set comprises feature maps generated from the face images of the relational objects and a feature map generated from a combined image of the relational objects, the combined image being an image generated by cutting out a predetermined feature region from the face image of one relational object and using the cut-out region to replace the corresponding feature region in the face image of the other relational object; and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
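The combined-image construction described above (cutting a predetermined feature region out of one relational object's face and pasting it over the corresponding region of the other's) can be sketched with NumPy slicing. The region coordinates below are hypothetical, and the two face images are assumed to be aligned and of equal size.

```python
import numpy as np

def make_combined_image(face_a, face_b, region):
    """Cut a predetermined feature region out of face_a and paste it
    over the corresponding region of face_b, producing the combined image.

    face_a, face_b -- aligned face images of the two relational objects,
                      assumed to have the same shape
    region         -- (top, bottom, left, right) bounds of the feature
                      region; the application does not fix these values
    """
    top, bottom, left, right = region
    combined = face_b.copy()
    combined[top:bottom, left:right] = face_a[top:bottom, left:right]
    return combined

# Usage with placeholder images and a hypothetical eye-region box.
face_a = np.zeros((128, 128, 3), dtype=np.uint8)
face_b = np.full((128, 128, 3), 255, dtype=np.uint8)
combined = make_combined_image(face_a, face_b, (40, 64, 16, 112))
```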
The above description is only a description of preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention disclosed herein is not limited to the particular combination of features described above, and also covers other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, arrangements in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for generating a face recognition model, the method comprising:
acquiring a training sample set;
inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, thereby obtaining a trained face recognition model, wherein the face recognition model is used to recognize whether objects corresponding to a face image pair input into the face recognition model satisfy a predetermined blood relationship;
wherein the training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated based on:
acquiring a face feature map of a target object as a training sample image in a training sample image pair;
acquiring face images of at least two relational objects having the predetermined blood relationship with the target object;
generating a target image set, wherein the target image set comprises feature maps generated from the face images of the relational objects and a feature map generated from a combined image of the relational objects, the combined image being an image generated by cutting out a predetermined feature region from the face image of one relational object and using the cut-out region to replace the corresponding feature region in the face image of the other relational object;
and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
2. The method of claim 1, wherein the training sample set further comprises at least one training sample image pair comprising a face feature map of the target object and a face feature map of a human object that does not have the predetermined blood relationship with the target object; and
the inputting of each training sample in the training sample set into an initial face recognition model to train the initial face recognition model to obtain a trained face recognition model includes:
inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, thereby obtaining a trained face recognition model, such that: if an image pair to be detected that is input into the face recognition model comprises a face feature map of a target object to be detected and a face feature map of an object having the predetermined blood relationship with the target object to be detected, the value output by the face recognition model is greater than a first preset threshold; and if the image pair to be detected comprises the face feature map of the target object and a face feature map of an object not having the predetermined blood relationship with the target object, the value output by the face recognition model is less than a second preset threshold, wherein the second preset threshold is smaller than the first preset threshold (an illustrative decision sketch follows the claims).
3. The method of claim 1, wherein the acquiring of the training sample set comprises:
acquiring a face image of a target object, face images of at least two relational objects having a predetermined blood relationship with the target object, and a combined image obtained from the face images of the at least two relational objects;
inputting the face image of the target object, the face images of the relational objects, and the combined image into a pre-trained face feature recognition model to obtain, respectively, a face feature map of the target object, face feature maps of the relational objects, and a feature map of the combined image;
and taking the face feature map of the target object as one training sample image, and randomly selecting one feature map from the face feature maps of the relational objects and the feature map of the combined image as the other training sample image, to obtain at least one training sample image pair in the training sample set.
4. The method according to claim 3, wherein, before the inputting of the face image of the target object, the face images of the relational objects, and the combined image into the pre-trained face feature recognition model to obtain, respectively, the face feature map of the target object, the face feature maps of the relational objects, and the feature map of the combined image, the method further comprises:
performing an affine transformation on the face image of the target object, the face images of the relational objects, and the combined image to obtain a transformed face image of the target object, transformed face images of the relational objects, and a transformed combined image; and
the inputting further comprises: inputting the transformed face image of the target object, the transformed face images of the relational objects, and the transformed combined image into the pre-trained face feature recognition model to obtain a transformed face feature map of the target object, transformed face feature maps of the relational objects, and a transformed feature map of the combined image; and
the obtaining of the at least one training sample image pair further comprises:
taking the face feature map of the target object or the transformed face feature map of the target object as one training sample image, and randomly selecting one feature map from the transformed face feature map of the target object, the transformed face feature maps of the relational objects, and the transformed feature map of the combined image as the other training sample image, to obtain at least one training sample image pair in the training sample set.
5. The method of claim 1, wherein the face recognition model is a convolutional neural network model.
6. An apparatus for generating a face recognition model, comprising:
an acquisition unit configured to acquire a training sample set;
a face recognition model generation unit configured to input each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, thereby obtaining a trained face recognition model, wherein the face recognition model is used to recognize whether objects corresponding to a face image pair input into the face recognition model satisfy a predetermined blood relationship;
wherein the training sample set comprises a plurality of training sample image pairs, at least one training sample image pair in the training sample set being generated based on:
acquiring a face feature map of a target object as a training sample image in a training sample image pair;
acquiring face images of at least two relational objects having the predetermined blood relationship with the target object;
generating a target image set, wherein the target image set comprises feature maps generated from the face images of the relational objects and a feature map generated from a combined image of the relational objects, the combined image being an image generated by cutting out a predetermined feature region from the face image of one relational object and using the cut-out region to replace the corresponding feature region in the face image of the other relational object;
and randomly selecting one feature map from the target image set as the other training sample image in the training sample image pair.
7. The apparatus of claim 6, wherein the training sample set further comprises at least one training sample image pair comprising a face feature map of the target object and a face feature map of a human object that does not have the predetermined blood relationship with the target object; and
the face recognition model generation unit is further configured to:
inputting each training sample in the training sample set into an initial face recognition model to train the initial face recognition model, thereby obtaining a trained face recognition model, such that: if an image pair to be detected that is input into the face recognition model comprises a face feature map of a target object to be detected and a face feature map of an object having the predetermined blood relationship with the target object to be detected, the value output by the face recognition model is greater than a first preset threshold; and if the image pair to be detected comprises the face feature map of the target object and a face feature map of an object not having the predetermined blood relationship with the target object, the value output by the face recognition model is less than a second preset threshold, wherein the second preset threshold is smaller than the first preset threshold.
8. The apparatus of claim 6, wherein the acquisition unit is further configured to: acquire a face image of a target object, face images of at least two relational objects having a predetermined blood relationship with the target object, and a combined image obtained from the face images of the at least two relational objects;
input the face image of the target object, the face images of the relational objects, and the combined image into a pre-trained face feature recognition model to obtain, respectively, a face feature map of the target object, face feature maps of the relational objects, and a feature map of the combined image;
and take the face feature map of the target object as one training sample image, and randomly select one feature map from the face feature maps of the relational objects and the feature map of the combined image as the other training sample image, to obtain at least one training sample image pair in the training sample set.
9. The apparatus of claim 8, wherein the acquisition unit is further configured to:
before the face image of the target object, the face images of the relational objects, and the combined image are input into the pre-trained face feature recognition model to obtain, respectively, the face feature map of the target object, the face feature maps of the relational objects, and the feature map of the combined image, perform an affine transformation on the face image of the target object, the face images of the relational objects, and the combined image to obtain a transformed face image of the target object, transformed face images of the relational objects, and a transformed combined image; and
input the transformed face image of the target object, the transformed face images of the relational objects, and the transformed combined image into the pre-trained face feature recognition model to obtain a transformed face feature map of the target object, transformed face feature maps of the relational objects, and a transformed feature map of the combined image; and
take the face feature map of the target object or the transformed face feature map of the target object as one training sample image, and randomly select one feature map from the transformed face feature map of the target object, the transformed face feature maps of the relational objects, and the transformed feature map of the combined image as the other training sample image, to obtain at least one training sample image pair in the training sample set.
10. The apparatus of claim 6, wherein the face recognition model is a convolutional neural network model.
11. A server, comprising:
one or more processors;
a storage device storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
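The decision sketch referenced in claim 2: a minimal, illustrative rendering of the dual-threshold rule described in claims 2 and 7, assuming the model outputs a single scalar score. The concrete threshold values are placeholders, since the claims require only that the second threshold be smaller than the first.

```python
def classify_pair(score, first_threshold=0.7, second_threshold=0.3):
    """Interpret the face recognition model's scalar output.

    A score above the first threshold indicates the predetermined blood
    relationship; a score below the second (smaller) threshold indicates
    its absence. The values 0.7 and 0.3 are illustrative placeholders.
    """
    if score > first_threshold:
        return "has predetermined blood relationship"
    if score < second_threshold:
        return "no predetermined blood relationship"
    return "indeterminate"  # score falls between the two thresholds
```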
CN201810268892.1A 2018-03-29 2018-03-29 Method and device for generating face recognition model Active CN108491812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810268892.1A CN108491812B (en) 2018-03-29 2018-03-29 Method and device for generating face recognition model

Publications (2)

Publication Number Publication Date
CN108491812A CN108491812A (en) 2018-09-04
CN108491812B true CN108491812B (en) 2022-05-03

Family

ID=63317307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810268892.1A Active CN108491812B (en) 2018-03-29 2018-03-29 Method and device for generating face recognition model

Country Status (1)

Country Link
CN (1) CN108491812B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163222B (en) * 2018-10-08 2023-01-24 腾讯科技(深圳)有限公司 Image recognition method, model training method and server
CN111860034A (en) * 2019-04-24 2020-10-30 广州煜煊信息科技有限公司 Household accident handling method
CN111368685B (en) * 2020-02-27 2023-09-29 北京字节跳动网络技术有限公司 Method and device for identifying key points, readable medium and electronic equipment
CN113705276A (en) * 2020-05-20 2021-11-26 武汉Tcl集团工业研究院有限公司 Model construction method, model construction device, computer apparatus, and medium
CN113393265B (en) * 2021-05-25 2023-04-25 浙江大华技术股份有限公司 Feature library construction method for passing object, electronic device and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000059431A (en) * 1999-03-03 2000-10-05 조성우 Family identification system from genetic database using DNA typing method and combination of markers for the system
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
CN105488463A (en) * 2015-11-25 2016-04-13 康佳集团股份有限公司 Lineal relationship recognizing method and system based on face biological features
CN106384116A (en) * 2016-08-29 2017-02-08 北京农业信息技术研究中心 Terahertz imaging based plant vein recognition method and device
CN106709482A (en) * 2017-03-17 2017-05-24 中国人民解放军国防科学技术大学 Method for identifying genetic relationship of figures based on self-encoder
CN106951858A (en) * 2017-03-17 2017-07-14 中国人民解放军国防科学技术大学 A kind of recognition methods of personage's affiliation and device based on depth convolutional network
CN107229902A (en) * 2017-04-12 2017-10-03 南京晓庄学院 A kind of twins' recognition methods based on improvement SVM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. Bottino et al., "A New Problem in Face Image Analysis: Finding Kinship Clues for Siblings Pairs", Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods, 2012-12-31, pp. 405-410. *
Qingyan Duan et al., "From Face Recognition to Kinship Verification: An Adaptation Approach", 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2018-01-23, pp. 1590-1598. *
Li Jue et al., "Recognizing Kinship in Face Images Using Deep Learning" [利用深度学习在人脸图像中识别亲缘关系], Proceedings of the 11th Joint Conference on Harmonious Human-Machine Environment, 2017-07-25, pp. 1-7. *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant