CN111881740B - Face recognition method, device, electronic equipment and medium - Google Patents


Info

Publication number
CN111881740B
Authority
CN
China
Prior art keywords
feature vector
face
sample
face image
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010567110.1A
Other languages
Chinese (zh)
Other versions
CN111881740A (en
Inventor
肖传宝
梁佳
杜永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd
Priority to CN202010567110.1A
Publication of CN111881740A
Application granted
Publication of CN111881740B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; face representation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures


Abstract

The invention discloses a face recognition method in the technical field of face recognition, comprising the following steps: establishing a mapping network, wherein the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region; acquiring a feature vector a, which corresponds to a face image to be recognized with an occlusion region; inputting the feature vector a into the mapping network to obtain a feature vector b; and judging whether the feature vector b matches a feature vector c in the feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region. The method can rapidly recognize face images with occlusion regions, thereby improving user experience. The invention also discloses a face recognition device, an electronic device, and a computer-readable storage medium.

Description

Face recognition method, device, electronic equipment and medium
Technical Field
The present invention relates to the field of face recognition technology, and in particular to a face recognition method, a device, an electronic device, and a medium.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. The term usually covers a series of related techniques, also called image recognition or facial recognition, in which a camera captures images or video streams containing faces, and the faces in those images are automatically detected and tracked so that recognition can be performed on them.
However, when a face carries a large occluding object, such as sunglasses or a mask, recognition either cannot proceed normally or is judged to have failed outright. The person to be recognized then has to take off the occluding object and put it back on after recognition completes, which lowers recognition efficiency and harms user experience.
Disclosure of Invention
To overcome the defects of the prior art, one object of the invention is to provide a face recognition method that can rapidly recognize face images with occlusion regions, thereby improving user experience.
This object of the invention is achieved by the following technical scheme:
a face recognition method, comprising the steps of:
establishing a mapping network, wherein the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
inputting the feature vector a into the mapping network to obtain a feature vector b;
and judging whether the feature vector b matches a feature vector c in a feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
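The four claimed steps can be sketched as follows. This is a minimal illustration, not the patented implementation: `mapping_net` stands in for the trained mapping network, and cosine similarity with a threshold of 0.6 is an assumed matching criterion (the patent does not specify one).

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recognize(feature_a, mapping_net, vector_base, threshold=0.6):
    """Occluded-face recognition: map feature vector a to b, then match b against the base."""
    feature_b = mapping_net(feature_a)            # feature vector b via the mapping network
    for feature_c in vector_base:                 # feature vectors c: unoccluded, pre-stored
        if cosine_similarity(feature_b, feature_c) >= threshold:
            return True                           # recognition success signal
    return False
```

With an identity function as the mapping network and a base containing the same vector, recognition succeeds; an orthogonal probe vector fails to match.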
Further, establishing the mapping network comprises the following steps:
acquiring a source sample and inputting the source sample into a face recognition model to obtain a source feature vector, wherein the source sample is a sample face image without an occlusion region;
inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain a target feature vector, wherein the target sample is a sample face image with an occlusion region;
training the mapping network with the target feature vector as its input and the source feature vector as its output.
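The training-pair construction above can be sketched with toy stand-ins: a fixed random projection plays the face recognition model and a lower-half mask plays the occlusion model. Both are assumptions for illustration only; the patent uses learned models for each role.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "face recognition model": a fixed random projection from an 8x8 image to a 32-d vector.
W_embed = rng.standard_normal((64, 32))

def embed(image):
    return image.reshape(-1) @ W_embed

def occlude(image):
    """Stand-in "occlusion model": zero out the lower half of the face (a mask-like occlusion)."""
    masked = image.copy()
    masked[4:, :] = 0.0
    return masked

source_images = rng.random((100, 8, 8))                                    # unoccluded sample faces
source_vectors = np.stack([embed(img) for img in source_images])           # mapping-network outputs
target_vectors = np.stack([embed(occlude(img)) for img in source_images])  # mapping-network inputs
```

Each (target, source) row pair shares the same underlying face and differs only through the occlusion, which is exactly what the mapping network is trained on.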
Further, acquiring the feature vector a comprises the following steps:
receiving an image to be detected;
inputting the image to be detected into a face detection model to obtain a face region;
judging whether the face region has an occlusion region, and if so, recording the face region as the face image to be recognized and inputting the face image to be recognized into a face recognition model to obtain the feature vector a.
Further, the method also comprises the following steps:
when the face region does not have an occlusion region, recording the face region as a complete face image;
inputting the complete face image into a face recognition model to obtain a feature vector d;
and judging whether the feature vector d matches the feature vector c in the feature vector base, and if so, outputting a recognition success signal.
Further, the mapping network has one or more mapping models, each mapping model being associated with a type of occlusion region; inputting the feature vector a into the mapping network to obtain the feature vector b comprises the following steps:
querying the occlusion region type k of the feature vector a;
obtaining the associated mapping model D according to the occlusion region type k;
and inputting the feature vector a into the mapping model D to obtain the feature vector b.
Further, querying the occlusion region type k of the feature vector a comprises the following steps:
acquiring the face image to be recognized with an occlusion region;
and inputting the face image to be recognized into a classification model to obtain the occlusion region type k.
Further, the occlusion region type k comprises any one or more of left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion.
A second object of the invention is to provide a face recognition device that can rapidly recognize face images with occlusion regions, thereby improving user experience.
This object of the invention is achieved by the following technical scheme:
A face recognition device, comprising: an establishing module for establishing a mapping network, wherein the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region; an acquisition module for acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region; a processing module for inputting the feature vector a into the mapping network to obtain a feature vector b; and a matching module for judging whether the feature vector b matches a feature vector c in a feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
A third object of the invention is to provide an electronic device, comprising a processor, a storage medium, and a computer program stored in the storage medium, wherein the computer program, when executed by the processor, implements the face recognition method described above.
A fourth object of the invention is to provide a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the face recognition method described above.
Compared with the prior art, the invention has the following beneficial effects: because the mapping network holds a mapping relation between the source feature vector and the target feature vector, the feature vector a can be passed through the mapping network to obtain the feature vector b, which can be regarded as corresponding to the face image to be recognized without its occlusion region; the matching step can then proceed, so that face images with occlusion regions are recognized rapidly and user experience is improved. Moreover, the obtained feature vector b is still matched against the feature vector c in the feature vector base, so developers can build on existing face recognition technology, reducing development difficulty.
Drawings
Fig. 1 is a flowchart of a face recognition method according to a first embodiment;
fig. 2 is a flowchart of step S10 in the second embodiment;
fig. 3 is a flowchart of step S30 in the second embodiment;
fig. 4 is a flowchart of step S20 and step S60 in the third embodiment;
fig. 5 is a block diagram of a face recognition device according to a fourth embodiment;
fig. 6 is a block diagram of an electronic device according to a fifth embodiment.
In the figures: 1. establishing module; 2. acquisition module; 3. processing module; 4. matching module; 5. electronic device; 51. processor; 52. memory; 53. input device; 54. output device.
Detailed Description
The invention will now be described in more detail with reference to the accompanying drawings. It should be noted that the following description is given by way of illustration only and not by way of limitation. Various embodiments may be combined with one another to form further embodiments not described below.
Embodiment One
This embodiment provides a face recognition method, aiming to solve the difficulty existing face recognition technology has with face images containing an occlusion region. Specifically, referring to fig. 1, the face recognition method may include steps S10 to S50.
Step S10, establishing a mapping network. The mapping network holds a mapping relation obtained through training, namely a relation between a source feature vector and a target feature vector, where the source feature vector corresponds to a sample face image with an occlusion region and the target feature vector corresponds to a sample face image without an occlusion region. In other words, the mapping network is characterized by the fact that its inputs and outputs are feature vectors, both during training and during inference.
Step S20, acquiring a feature vector a. The feature vector a corresponds to a face image to be recognized that has an occlusion region. It should be noted that the rule defining the occlusion region is not limited here and may be adjusted according to the actual situation.
Step S30, inputting the feature vector a into the mapping network to obtain a feature vector b. It should be noted that, because the feature vector b is obtained through a mapping relation, it does not truly correspond to an unoccluded version of the image to be recognized; it can only be regarded as corresponding to the face image to be recognized without its occlusion region.
Step S40, judging whether the feature vector b matches a feature vector c in the feature vector base; if so, executing step S50; if not, finishing this round of face recognition and proceeding to the next one. If the next face recognition is entered, the process may begin directly with step S20. The feature vector base stores feature vectors c, each of which corresponds to a pre-stored face image without occlusion.
Step S50, outputting a recognition success signal; the person to be recognized is then treated as recognized and is allowed to perform the corresponding operation. The feature vector c that matches the feature vector b is denoted c0, and the recognition success signal may carry c0 and/or the pre-stored face image corresponding to c0. For example, when the method is applied to an access control system, receipt of the recognition success signal may open the door; when the method is applied to a police system, receipt of the recognition success signal may output the pre-stored face image corresponding to c0.
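Steps S40 and S50, including the payload carried by the success signal, can be sketched as below. The cosine-similarity criterion, the 0.6 threshold, and the dictionary-shaped signal are all illustrative assumptions; the patent only requires that the signal may carry c0 and/or its pre-stored image.

```python
import numpy as np

def match_feature(feature_b, vector_base, prestored_images, threshold=0.6):
    """Match b against the base; on success the signal carries c0 and/or its pre-stored image."""
    feature_b = np.asarray(feature_b, dtype=float)
    sims = [float(feature_b @ c / (np.linalg.norm(feature_b) * np.linalg.norm(c)))
            for c in map(np.asarray, vector_base)]
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        # recognition success signal with payload: matched vector c0 and its pre-stored image
        return {"success": True, "c0": vector_base[best], "image": prestored_images[best]}
    return {"success": False}
```

An access control system would open on `result["success"]`; a police system would additionally display `result["image"]`.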
It is worth noting that the steps of the method are performed by an executing device. The executing device may be, for example, a server, a client, or a processor, but is not limited to these types.
In summary, because the mapping network holds a mapping relation between the source feature vector and the target feature vector, the feature vector a can be passed through the mapping network to obtain the feature vector b, which can be regarded as corresponding to the face image to be recognized without its occlusion region; the matching step can then proceed, so that face images with occlusion regions are recognized rapidly and user experience is improved. Moreover, the obtained feature vector b is still matched against the feature vector c in the feature vector base, so developers can build on existing face recognition technology, reducing development difficulty.
Embodiment Two
This embodiment provides a face recognition method built on the first embodiment, as shown in figs. 1 and 2. Specifically, step S10 may include steps S101 to S103.
Step S101, acquiring a source sample and inputting it into a face recognition model to obtain a source feature vector, where the source sample is an unoccluded sample face image. The face recognition model may be, but is not limited to, any model produced by deep learning, as long as it can yield the feature vector of the source sample; MobileNet-V2 is preferably used.
Step S102, inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain a target feature vector. The target sample is a sample face image with an occlusion region; that is, the target sample and the source sample correspond to the same sample face and differ only in the occlusion region. The occlusion model may be, but is not limited to, any model produced by deep learning, as long as it can occlude the unoccluded sample face according to the occlusion requirement to form an occlusion region; CycleGAN is preferably used.
Step S103, training the mapping network with the target feature vector as its input and the source feature vector as its output. The mapping network thereby acquires the mapping relation between the source feature vector and the target feature vector. The mapping network may be, but is not limited to, any deep learning model, as long as the mapping relation can be learned; MobileNet-V2 is preferably used.
Through this technical scheme the mapping network is established, so that the feature vector a, corresponding to an image to be recognized with an occlusion region, can be mapped to the feature vector b, which is regarded as corresponding to the same image without the occlusion region.
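Step S103 can be illustrated with a linear least-squares fit standing in for the MobileNet-V2 mapping network named above; this is an assumption for illustration, as the patent's mapping network is a trained deep model. The toy data makes the occluded features an exact linear transform of the clean ones, so the fitted map recovers them.

```python
import numpy as np

def fit_linear_mapping(target_vectors, source_vectors):
    """Least-squares linear stand-in for the mapping network:
    find M minimising ||target_vectors @ M - source_vectors||."""
    M, *_ = np.linalg.lstsq(target_vectors, source_vectors, rcond=None)
    return M

rng = np.random.default_rng(1)
source_vectors = rng.standard_normal((200, 16))   # features of unoccluded samples (network outputs)
A = rng.standard_normal((16, 16))                 # synthetic distortion caused by occlusion
target_vectors = source_vectors @ A               # features of occluded samples (network inputs)

M = fit_linear_mapping(target_vectors, source_vectors)
recovered = target_vectors @ M                    # occluded features mapped back toward clean ones
```

In this linear toy setting the recovery is exact; a real mapping network only approximates the clean features, which is why the embodiment stresses that b merely "can be regarded as" unoccluded.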
Further, there may be a single occlusion region type, in which case the mapping network may contain only one mapping model D; this case is simpler and is not described in detail here. When there are multiple occlusion region types, the mapping network may contain either one mapping model or several.
When there are n occlusion region types with n > 1 and the mapping network contains only one mapping model: for each source sample in step S101, step S102 produces n target samples, one per occlusion region type, and in step S30 it is not necessary to determine the occlusion region type. This reduces the number of steps, but the accuracy of the mapping is not high.
When there are n occlusion region types with n > 1 and the mapping network contains n mapping models: for each source sample in step S101, step S102 again produces n target samples, one per occlusion region type, but in step S30 the occlusion region type of the feature vector a must first be determined and the corresponding mapping model selected. Although this adds a step, each occlusion region type has its own mapping model, which improves the accuracy of the mapping.
It should be noted that the occlusion region types may include any one or more of left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion; the types are not limited to these and may be added or removed according to the actual situation. Left-eye occlusion may mean that more than half of the left eye is occluded, and likewise for right-eye, nose, and mouth occlusion. For example, when the person to be recognized wears a mask, the mouth and more than half of the nose are occluded, which can be treated as a combination of mouth occlusion and nose occlusion; when the person wears sunglasses, both eyes are occluded, which can be treated as a combination of left-eye occlusion and right-eye occlusion.
As an alternative, when there are n occlusion region types with n > 1 and the mapping network has n mapping models, referring to fig. 3, step S30 may include steps S301 to S303.
Step S301, querying the occlusion region type of the feature vector a, recorded as occlusion region type k. A classification model may be used here: first the face image to be recognized with an occlusion region is acquired, and then it is input into the classification model to obtain the occlusion region type k.
It should be noted that the classification model may be trained with the target samples from step S102, where the number of target samples corresponds to the number of occlusion region types: each target sample serves as an input of the classification model and its occlusion region type as the output. The classification model may be, but is not limited to, any deep learning model, as long as it can determine the occlusion region type.
Step S302, obtaining the associated mapping model according to the occlusion region type k, recorded as mapping model D. Because the target samples used to train a given mapping model all share the same occlusion region type, each occlusion region type is associated with one mapping model.
Step S303, the feature vector a is input into the mapping model D to obtain a feature vector b.
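The type-to-model dispatch of steps S302 and S303 reduces to a lookup table. The type names and the identity functions below are illustrative placeholders for the patent's four occlusion types and its trained mapping models.

```python
OCCLUSION_TYPES = ("left_eye", "right_eye", "nose", "mouth")

def select_mapping_model(occlusion_type_k, mapping_models):
    """Step S302: look up the mapping model D associated with occlusion-region type k."""
    if occlusion_type_k not in mapping_models:
        raise KeyError(f"no mapping model for occlusion type {occlusion_type_k!r}")
    return mapping_models[occlusion_type_k]

# Identity functions stand in for the n trained mapping models, one per occlusion-region type.
mapping_models = {k: (lambda v: list(v)) for k in OCCLUSION_TYPES}
model_D = select_mapping_model("mouth", mapping_models)
feature_b = model_D([0.1, 0.2, 0.3])              # step S303: feature vector a -> b via model D
```

Combined occlusions (e.g. a mask as mouth + nose) would key the table by a tuple of types rather than a single type.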
Through this technical scheme, a single mapping model D is applied instead of the whole mapping network, which improves training efficiency on the one hand and computational efficiency and accuracy on the other.
Embodiment Three
This embodiment provides a face recognition method built on the first or the second embodiment. Referring to figs. 1 and 4, step S20 may further include steps S201 to S204.
Step S201, receiving an image to be detected. The image to be detected may be captured by an attached camera or uploaded by other equipment; its specific source is not limited.
Step S202, inputting the image to be detected into a face detection model to obtain a face region. The face detection model is prior art and is not limited here. It should be noted that the face region is the face occupying the largest proportion of the image to be detected.
Step S203, judging whether the face region has an occlusion region, and if so, executing step S204. A suitable model or algorithm may be chosen for this judgment, which is not limited here. It should be noted that this step is only a rough judgment, for example based on the area of the occlusion region; the specific position of the occlusion region does not need to be determined.
Step S204, recording the face region as the face image to be recognized, and inputting the face image to be recognized into a face recognition model to obtain a feature vector, recorded as the feature vector a.
Through this technical scheme, the input image to be detected undergoes a preliminary judgment, and only face regions with an occlusion region are processed accordingly, improving the quality of the mapping network's input.
As an alternative, referring to fig. 4, the method may further include step S60.
Step S60 is executed after step S203 determines that the face region does not have an occlusion region, and may include steps S601 to S603.
Step S601, recording the face region without an occlusion region as a complete face image.
Step S602, inputting the complete face image into a face recognition model to obtain a corresponding feature vector d.
Step S603, judging whether the feature vector d matches the feature vector c in the feature vector base, and if so, executing step S604, which outputs a recognition success signal. It should be noted that step S40 and step S603 may be merged into a single step, and correspondingly step S50 and step S604 may be merged, reducing the overall number of steps and the memory footprint.
Through this technical scheme, when the face region has an occlusion region it passes through the mapping network before being matched against the feature vector c in the feature vector base, and when it has no occlusion region it is matched directly; face recognition of the image to be detected is achieved either way. Because the mapping network's memory footprint is small, the accuracy of recognizing occluded faces improves without noticeably affecting overall latency.
Further, the face recognition method may also include the following steps: when the matching in step S40 fails, the feature vector c with the highest matching degree is recorded as feature vector c1; it is then judged whether the next matching, based on step S60, succeeds, and if so, the matched feature vector c is recorded as feature vector c2; finally, it is judged whether c1 and c2 are the same, and if so, the pre-stored face image corresponding to that feature vector c is retrieved, used as a source sample, and the mapping network is trained and updated according to step S10.
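The c1/c2 consistency check that triggers retraining reduces to one predicate. This is a minimal sketch under the assumption that base vectors carry comparable identity labels; the patent compares the matched feature vectors themselves.

```python
def should_update_mapping(c1, c2):
    """After an occluded match fails, c1 is the closest base entry; after the same person
    is later recognized without occlusion, c2 is the matched entry. The mapping network
    is retrained (per step S10) only when both point to the same enrolled identity."""
    return c1 is not None and c2 is not None and c1 == c2
```

When the predicate holds, the pre-stored face image behind that entry is fed back as a fresh source sample, so the mapping network gradually adapts to the occlusions it handles worst.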
Embodiment Four
This embodiment provides a face recognition device, a virtual device structure corresponding to the above embodiments, aiming to solve the difficulty existing face recognition technology has with face images containing an occlusion region. The face recognition device may include: an establishing module 1, an acquisition module 2, a processing module 3, and a matching module 4.
The establishing module 1 is used for establishing a mapping network, wherein the mapping network holds a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region. The acquisition module 2 is used for acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region. The processing module 3 is used for inputting the feature vector a into the mapping network to obtain a feature vector b. The matching module 4 is used for judging whether the feature vector b matches a feature vector c in the feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
Embodiment Five
This embodiment provides an electronic device 5, which may be a desktop computer, a notebook computer, a server (physical or cloud), or even a mobile phone or a tablet computer.
Fig. 6 is a schematic structural diagram of the electronic device according to this embodiment. As shown in figs. 1 and 6, the electronic device 5 includes a processor 51, a memory 52, an input device 53, and an output device 54. The number of processors 51 in the electronic device may be one or more; one processor 51 is taken as the example in fig. 6. The processor 51, the memory 52, the input device 53, and the output device 54 in the electronic device 5 may be connected by a bus or by other means; connection by a bus is taken as the example in fig. 6.
The memory 52, as a computer-readable storage medium, may store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the face recognition method in the embodiments of the invention (i.e., the establishing module 1, acquisition module 2, processing module 3, and matching module 4 of the face recognition device). By running the software programs, instructions, and modules stored in the memory 52, the processor 51 executes the various functional applications and data processing of the electronic device 5, that is, implements the face recognition method of any one or combination of embodiments one to three above.
The memory 52 may mainly comprise a program storage area and a data storage area: the program storage area may store an operating system and at least one application required for a function, while the data storage area may store data created through use of the terminal, and so on. In addition, the memory 52 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 52 may further include memory located remotely relative to the processor 51 and connected to the electronic device 5 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that the input device 53 may be used to receive acquired related data. The output device 54 may be a document or a display device such as a display screen. Specifically, when the output device 54 is a document, the corresponding information can be written into the document in a specific format, realizing data storage and data integration; when the output device 54 is a display device such as a display screen, the corresponding information is shown directly on the screen so that the user can view it in real time.
Embodiment Six
This embodiment also provides a computer-readable storage medium containing computer-executable instructions which, when executed by a computer processor, perform the face recognition method described above, the method comprising:
establishing a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
inputting the feature vector a into a mapping network to obtain a feature vector b;
judging whether the feature vector b matches the feature vector c in the feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
Of course, the computer-executable instructions of the computer-readable storage medium provided by the embodiments of the present invention are not limited to the method operations described above.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and the necessary general-purpose hardware, or by hardware alone, although in many cases the former is preferred. Based on such understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions for causing an electronic device (which may be a mobile phone, a personal computer, a server, or a network device, etc.) to execute the face recognition method according to any embodiment or combination of embodiments of the present invention.
It should be noted that, in the above embodiment of the face recognition device, the units and modules included are divided only according to functional logic; the division is not limited to the above, so long as the corresponding functions can be implemented. In addition, the specific names of the functional units are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within the scope claimed by the present invention.

Claims (9)

1. A face recognition method, characterized by comprising the following steps:
establishing a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region;
acquiring a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
inputting the feature vector a into the mapping network to obtain a feature vector b, wherein the feature vector b is regarded as corresponding to the image to be recognized without an occlusion region;
judging whether the feature vector b matches a feature vector c in a feature vector base, and if so, outputting a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region;
wherein establishing the mapping network comprises the following steps:
acquiring a source sample and inputting the source sample into a face recognition model to obtain the source feature vector, wherein the source sample is a sample face image without an occlusion region;
inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain the target feature vector, wherein the target sample is a sample face image with an occlusion region;
training the mapping network by taking the target feature vector as an input of the mapping network and the source feature vector as an output of the mapping network.
2. The face recognition method according to claim 1, wherein acquiring the feature vector a comprises the following steps:
receiving an image to be detected;
inputting the image to be detected into a face detection model to obtain a face region;
judging whether the face region has an occlusion region, and if so, recording the face region as the face image to be recognized and inputting the face image to be recognized into the face recognition model to obtain the feature vector a.
3. The face recognition method according to claim 2, further comprising the following steps:
when the face region has no occlusion region, recording the face region as a complete face image;
inputting the complete face image into the face recognition model to obtain a feature vector d;
and judging whether the feature vector d matches the feature vector c in the feature vector base, and if so, outputting a recognition success signal.
4. The face recognition method according to any one of claims 1 to 3, wherein the mapping network has more than one mapping model, each mapping model being associated with an occlusion region type; and inputting the feature vector a into the mapping network to obtain the feature vector b comprises the following steps:
querying the occlusion region type k of the feature vector a;
obtaining an associated mapping model D according to the occlusion region type k;
and inputting the feature vector a into the mapping model D to obtain the feature vector b.
5. The face recognition method according to claim 4, wherein querying the occlusion region type k of the feature vector a comprises the following steps:
acquiring the face image to be recognized with the occlusion region;
and inputting the face image to be recognized into a classification model to obtain the occlusion region type k.
6. The face recognition method according to claim 4, wherein the occlusion region type k comprises any one or more of left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion.
7. A face recognition device, characterized by comprising:
a mapping network, wherein the mapping network has a mapping relation between a source feature vector and a target feature vector, the source feature vector corresponds to a sample face image with an occlusion region, and the target feature vector corresponds to a sample face image without an occlusion region; wherein establishing the mapping network comprises the following steps:
acquiring a source sample and inputting the source sample into a face recognition model to obtain the source feature vector, wherein the source sample is a sample face image without an occlusion region; inputting the source sample into an occlusion model to obtain a target sample, and inputting the target sample into the face recognition model to obtain the target feature vector, wherein the target sample is a sample face image with an occlusion region; and training the mapping network by taking the target feature vector as an input of the mapping network and the source feature vector as an output of the mapping network;
an acquisition module, configured to acquire a feature vector a, wherein the feature vector a corresponds to a face image to be recognized with an occlusion region;
a processing module, configured to input the feature vector a into the mapping network to obtain a feature vector b;
and a matching module, configured to judge whether the feature vector b matches a feature vector c in a feature vector base, and if so, output a recognition success signal, wherein the feature vector c corresponds to a pre-stored face image without an occlusion region.
8. An electronic device, comprising a processor, a storage medium, and a computer program stored in the storage medium, characterized in that the computer program, when executed by the processor, implements the face recognition method according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the face recognition method according to any one of claims 1 to 6.
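The training procedure recited in claim 1 can be sketched as follows. The occlusion model (zeroing half the pixels), the feature extractor (a fixed random projection), and the least-squares fit standing in for the mapping network are all illustrative assumptions; the patent does not specify these components. The claimed data flow is preserved: source samples without occlusion yield source feature vectors, the occlusion model produces target samples whose feature vectors are the network input, and the source feature vectors are the training output.

```python
import numpy as np

rng = np.random.default_rng(1)
PIX, DIM, N = 32, 16, 200  # assumed image size, feature dimension, sample count
P = rng.normal(size=(PIX, DIM)) / np.sqrt(PIX)

def face_recognition_model(images: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor: a fixed random projection."""
    return images @ P

def occlusion_model(images: np.ndarray) -> np.ndarray:
    """Hypothetical occlusion model: zero out half of the pixels."""
    occluded = images.copy()
    occluded[:, : images.shape[1] // 2] = 0.0
    return occluded

# Claim 1 training steps:
source_samples = rng.normal(size=(N, PIX))        # sample faces without occlusion
source_features = face_recognition_model(source_samples)
target_samples = occlusion_model(source_samples)  # sample faces with occlusion
target_features = face_recognition_model(target_samples)

# Train the mapping network: target feature vectors as input, source feature
# vectors as output. A linear least-squares map stands in for the network.
M, *_ = np.linalg.lstsq(target_features, source_features, rcond=None)

def mapping_network(a: np.ndarray) -> np.ndarray:
    return a @ M

# The mapped occluded features should lie closer to the clean features than
# the raw occluded features do.
err_raw = np.linalg.norm(target_features - source_features)
err_mapped = np.linalg.norm(mapping_network(target_features) - source_features)
print(err_mapped < err_raw)
```

Because the least-squares solution can never do worse than the identity map on the training set, the mapped error is smaller than the raw error, which is the property the mapping network is trained to provide.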
CN202010567110.1A 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium Active CN111881740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010567110.1A CN111881740B (en) 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010567110.1A CN111881740B (en) 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111881740A CN111881740A (en) 2020-11-03
CN111881740B true CN111881740B (en) 2024-03-22

Family

ID=73156522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010567110.1A Active CN111881740B (en) 2020-06-19 2020-06-19 Face recognition method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111881740B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613435A (en) * 2020-12-28 2021-04-06 杭州魔点科技有限公司 Face image generation method, device, equipment and medium
WO2023241817A1 (en) * 2022-06-15 2023-12-21 Veridas Digital Authentication Solutions, S.L. Authenticating a person
CN116128514B (en) * 2022-11-28 2023-10-13 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9405965B2 (en) * 2014-11-07 2016-08-02 Noblis, Inc. Vector-based face recognition algorithm and image search system

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095856A (en) * 2015-06-26 2015-11-25 上海交通大学 Method for recognizing human face with shielding based on mask layer
CN105139000A (en) * 2015-09-16 2015-12-09 浙江宇视科技有限公司 Face recognition method and device enabling glasses trace removal
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
WO2019033572A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Method for detecting whether face is blocked, device and storage medium
CN109960975A (en) * 2017-12-23 2019-07-02 四川大学 A kind of face generation and its face identification method based on human eye
CN107909065A (en) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 The method and device blocked for detecting face
CN110363047A (en) * 2018-03-26 2019-10-22 普天信息技术有限公司 Method, apparatus, electronic equipment and the storage medium of recognition of face
CN110751009A (en) * 2018-12-20 2020-02-04 北京嘀嘀无限科技发展有限公司 Face recognition method, target recognition device and electronic equipment
CN109886167A (en) * 2019-02-01 2019-06-14 中国科学院信息工程研究所 One kind blocking face identification method and device
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face

Also Published As

Publication number Publication date
CN111881740A (en) 2020-11-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant