CN112597854B - Non-matching type face recognition system and method - Google Patents


Info

Publication number
CN112597854B
Application CN202011482942.XA · Publication CN112597854B
Authority
CN
China
Prior art keywords: face, image, images, occlusion, multiple angles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011482942.XA
Other languages
Chinese (zh)
Other versions
CN112597854A (en)
Inventor
王松柏
张家瑞
王峰
沈嘉葆
刘晓冬
陈鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing College of Electronic Engineering
Original Assignee
Chongqing College of Electronic Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Chongqing College of Electronic Engineering
Priority to CN202011482942.XA
Publication of CN112597854A
Application granted
Publication of CN112597854B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to the technical field of face recognition, and in particular to a non-cooperative face recognition system and method, comprising: an acquisition unit for acquiring face images from multiple angles at the same time and in the same place, to obtain face images from multiple angles; a correction unit for obtaining occlusion images from the multi-angle face images and correcting them to obtain corrected images at multiple angles; a centering unit for obtaining face-centered images at multiple angles; a comparison unit for extracting face features from the multi-angle face-centered images to obtain face features at multiple angles, and comparing the multi-angle face features respectively with face features pre-stored in a database; and a display unit for displaying information that face recognition has succeeded or failed. The invention solves the technical problem in the prior art that, in a non-cooperative environment, a face cannot be recognized accurately if it is occluded.

Description

Non-matching type face recognition system and method
Technical Field
The invention relates to the technical field of face recognition, and in particular to a non-cooperative face recognition system and method.
Background
With the development of science and technology, the research results of face recognition technology have gradually entered people's daily life and work. At present, most face recognition acquisition devices are of the cooperative type and require liveness detection, for example opening the mouth or turning the head left and right, and every captured picture must be continuously analyzed and recognized, which greatly reduces processing efficiency. Therefore, how to avoid unnecessary analysis and detection and perform non-cooperative face recognition quickly is very important.
In this regard, Chinese patent CN109740501A discloses a face recognition method that collects face images from multiple angles. After a deep convolutional neural network detects the face in each image, the multi-angle face images are registered and aligned by affine transformation, face features are quickly extracted from each, and the multi-angle face features are compared against the data in an attendance face-information database to judge whether the person belongs to the database to be checked. Non-cooperative face recognition is thus performed quickly, unnecessary analysis and detection are avoided, and recognition efficiency is improved.
In actual face authentication in a non-cooperative environment, besides rotation of the face, occlusion is also inevitable: the face is easily blocked by objects or by other people, for example by the person's own hat or scarf or by bystanders. When the face is occluded, the acquired face data is incomplete, so complete and effective face information cannot be extracted, which affects the accuracy of face authentication. That is, in a non-cooperative environment, if a face is occluded, face recognition cannot be performed accurately.
Disclosure of Invention
The invention provides a non-cooperative face recognition system, which solves the technical problem in the prior art that, in a non-cooperative environment, a face cannot be recognized accurately if it is occluded.
The basic scheme provided by the invention is as follows: a non-cooperative face recognition system, comprising:
the acquisition unit is used for acquiring face images from multiple angles at the same time and in the same place, to obtain face images at multiple angles;
the correction unit is used for obtaining an occlusion image from the multi-angle face images, the occlusion image being an image in which the face is partially occluded and containing a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model and calculating corresponding occlusion information through the feature points, the occlusion information being the ratio of non-face pixels in the face region to all pixels in the face region; and judging from the occlusion information whether the occlusion image meets a preset condition, and filling in the occlusion images that do not, to obtain corrected images at multiple angles;
the centering unit is used for carrying out face detection on the corrected images at multiple angles through a deep convolutional neural network to obtain face detection images at multiple angles; carrying out face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face centered images at multiple angles;
the comparison unit is used for extracting face features from the multi-angle face-centered images to obtain face features at multiple angles; comparing the multi-angle face features respectively with face features pre-stored in a database to obtain similarities, and judging whether the similarity at at least one angle reaches a preset threshold: if so, judging that face recognition succeeds; if not, judging that face recognition fails;
and the display unit is used for displaying the information of the successful face recognition or the failed face recognition.
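The occlusion information defined above is a simple pixel ratio. The patent gives no code; a minimal Python sketch, assuming the face region is represented as a boolean mask (the 0.3 threshold echoes the example given later in the embodiment):

```python
import numpy as np

def occlusion_ratio(face_region_mask: np.ndarray) -> float:
    """Occlusion information: ratio of non-face pixels to all pixels in
    a face region. The mask is True where a pixel belongs to the face
    and False where it is occluded (non-face)."""
    total = face_region_mask.size
    non_face = total - int(np.count_nonzero(face_region_mask))
    return non_face / total

def needs_correction(ratio: float, threshold: float = 0.3) -> bool:
    """An occlusion image fails the preset condition (and must be filled
    in) when the occluded share of the face region exceeds the threshold."""
    return ratio > threshold
```

Whether the boundary case (ratio exactly equal to the threshold) passes is not fixed by the text; strict inequality is an assumption here.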
The working principle and advantages of the invention are as follows:
(1) Face images are collected from multiple angles. Compared with collection from a single angle, this avoids as far as possible the situation in which all collected face images are occlusion images.
(2) Importing the occlusion image, with its feature points, into the face occlusion model yields the occlusion information quickly and accurately, and occlusion images that do not meet the preset condition are filled in: the occluded part is predicted and filled from the known information of the occlusion image, completing as far as possible the identification information it carries and thereby improving the accuracy and efficiency of face recognition.
(3) Registering and aligning the multi-angle face detection images through affine transformation makes face features convenient to extract and effectively avoids omissions in feature extraction; comparing the multi-angle face features with features pre-stored in the database reduces unnecessary analysis and detection, thereby speeding up non-cooperative face recognition.
By predicting and filling the occluded part from the known information of the occlusion image and completing the identification information it carries as far as possible, the invention solves the technical problem in the prior art that face authentication cannot be performed accurately in a non-cooperative environment when the face is occluded.
Further, the correction unit further comprises:
an acquisition subunit for obtaining an occlusion image from the multi-angle face images;
a dividing subunit for dividing the occlusion image into at least one face region through the feature points;
a calculating subunit for calculating, in each face region, the ratio of non-face pixels to all pixels;
and a learning subunit for training the face occlusion model with a machine learning algorithm, taking the occlusion image as input and the occlusion information of each face region in it as output.
Beneficial effects: training the face occlusion model with a machine learning algorithm, with the occlusion image as input and the occlusion information of each face region as output, helps improve the accuracy with which the occlusion information is calculated.
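The patent leaves the machine learning algorithm unspecified. As an illustrative stand-in only, the subunit pipeline can be sketched as an ordinary least-squares model that maps flattened occlusion images to their occlusion ratios (all function names are hypothetical):

```python
import numpy as np

def train_occlusion_model(occlusion_images, region_ratios):
    """Fit a linear model mapping flattened occlusion images to their
    occlusion ratios by ordinary least squares (a stand-in for the
    unspecified machine learning algorithm). Returns the weight vector;
    the last entry is the bias."""
    X = np.stack([img.ravel() for img in occlusion_images]).astype(float)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    y = np.asarray(region_ratios, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_ratio(w, occlusion_image):
    """Predicted occlusion information for one image."""
    x = np.append(occlusion_image.ravel().astype(float), 1.0)
    return float(x @ w)
```

A real implementation would use a stronger regressor or a CNN; the point of the sketch is only the input/output contract the learning subunit describes.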
Further, the correction unit also comprises a filtering subunit for identifying whether a face is present in the occlusion image and deleting occlusion images without a face; the filtering subunit is further configured to judge the picture quality of the occlusion image and delete occlusion images whose picture quality does not meet a preset requirement.
Beneficial effects: deleting in advance the occlusion images that contain no face or whose picture quality does not meet the preset requirement, for example because resolution or sharpness is too low, eliminates occlusion images that need no correction and correspondingly reduces the correction workload.
Further, the correction unit also comprises a pose subunit for correcting the look-down pose of the face in the occlusion image to obtain an occlusion image with the face in a front-view pose.
Beneficial effects: as the target approaches the camera from far to near, the resolution of the face gradually increases and the depression angle gradually grows; the correction reduces the negative influence of this dynamic change on face recognition, such as unclear local images.
Further, the correction unit also comprises a grouping subunit for grouping the occlusion images according to their occlusion information.
Beneficial effects: the occlusion information is the ratio of non-face pixels in the face region to all pixels in the face region, so it reflects how much correction and filling an image needs; the larger the ratio, the larger the workload. Correcting the images group by group in this way improves efficiency.
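The grouping by occlusion ratio admits a compact sketch. The half-open bucket bounds below echo the 0.1–0.2 / 0.2–0.3 example in Embodiment 2; otherwise they are assumptions:

```python
def group_by_occlusion(ratios, bounds=(0.1, 0.2, 0.3)):
    """Bucket occlusion ratios into half-open groups [lo, hi), e.g.
    0.1-0.2 and 0.2-0.3, so that images with a similar correction
    workload are filled in together."""
    groups = {}
    for r in ratios:
        for lo, hi in zip(bounds, bounds[1:]):
            if lo <= r < hi:
                groups.setdefault((lo, hi), []).append(r)
                break
    return groups
```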
The invention also provides a non-cooperative face recognition method, comprising the following steps:
s1, simultaneously and simultaneously acquiring face images from multiple angles to obtain face images from multiple angles;
s2, obtaining an occlusion image from face images of multiple angles, wherein the occlusion image is an image in which a face is partially occluded, and the occlusion image comprises a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model, and calculating corresponding occlusion information through the feature points, wherein the occlusion information is the ratio of non-face pixels in the face region to all pixels in the face region; judging whether the occlusion image meets a preset condition according to the occlusion information, and performing filling on the occlusion image which does not meet the preset condition to obtain corrected images of a plurality of angles;
s3, carrying out face detection on the corrected images at multiple angles through a deep convolutional neural network to obtain face detection images at multiple angles; carrying out face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face centering images at multiple angles;
s4, extracting face features from the face centered images of multiple angles to obtain face features of multiple angles; and comparing the facial features of a plurality of angles with facial features stored in a database in advance respectively to obtain similarity, and judging whether the similarity under at least one angle reaches a preset threshold value: if so, judging that the face recognition is successful; if not, judging that the face recognition fails successfully;
and S5, displaying information of successful face recognition or failed face recognition.
The working principle and advantages of the invention are as follows: collecting face images from multiple angles, rather than from a single angle, avoids as far as possible the situation in which all collected images are occlusion images; occlusion images that do not meet the preset condition are filled in, i.e. the occluded part is predicted and filled from the known information of the occlusion image, completing as far as possible the identification information it carries and thereby improving the accuracy and efficiency of face recognition; registering and aligning the multi-angle face detection images through affine transformation effectively avoids omissions in face feature extraction; and comparing the multi-angle face features with features pre-stored in the database reduces unnecessary analysis and detection, thereby speeding up non-cooperative face recognition.
Further, in S2, the face occlusion model is trained in advance, specifically as follows:
a1, obtaining a shielding image from face images of multiple angles;
a2, dividing the occlusion image into at least one face area through a characteristic point;
a3, calculating the ratio of non-face pixels to all pixels in each face area;
and A4, taking the occlusion image as input, taking the occlusion information of each face area in the occlusion image as output, and training a face occlusion model by adopting a machine learning algorithm.
Beneficial effects: training the face occlusion model with a machine learning algorithm facilitates self-learning and improves the accuracy with which the occlusion information is calculated.
Further, in S2, after the occlusion images are obtained, those without a face or whose picture quality does not meet the preset requirement are deleted.
Beneficial effects: deleting such occlusion images in advance eliminates images that need no correction.
Drawings
Fig. 1 is a block diagram of a system structure of an embodiment of a non-cooperative face recognition system according to the present invention.
Detailed Description
The following is further detailed by way of specific embodiments:
example 1
An embodiment of the non-cooperative face recognition system, substantially as shown in FIG. 1, comprises:
the acquisition unit is used for acquiring face images from multiple angles at the same time and in the same place, to obtain face images at multiple angles;
the correction unit is used for obtaining an occlusion image from the multi-angle face images, the occlusion image being an image in which the face is partially occluded and containing a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model and calculating corresponding occlusion information through the feature points, the occlusion information being the ratio of non-face pixels in the face region to all pixels in the face region; and judging from the occlusion information whether the occlusion image meets a preset condition, and filling in the occlusion images that do not, to obtain corrected images at multiple angles;
the centering unit is used for carrying out face detection on the corrected images at multiple angles through a deep convolution neural network to obtain face detection images at multiple angles; carrying out face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face centered images at multiple angles;
the comparison unit is used for extracting face features from the multi-angle face-centered images to obtain face features at multiple angles; comparing the multi-angle face features respectively with face features pre-stored in a database to obtain similarities, and judging whether the similarity at at least one angle reaches a preset threshold: if so, judging that face recognition succeeds; if not, judging that face recognition fails;
and the display unit is used for displaying the information of the successful face recognition or the failed face recognition.
In this embodiment, the acquisition unit is a camera; the correction unit, centering unit and comparison unit run on a server and are implemented in software; and the display unit is a display screen. The camera and display screen are installed at an access-control gate near the security room at the school entrance, the server is placed in the security room, the camera and display screen are in signal connection with the server, and the face images of all students in the school, together with the corresponding face feature data, are stored in the server's database in advance.
The specific implementation process is as follows:
When a student walks to the access-control gate at the school entrance, the camera collects the student's face images from multiple angles, obtaining face images at multiple angles; the whole collection takes place at the same time and in the same place. For example, the camera can rotate 360 degrees and collect the student's face images at preset angles without requiring the student to stop, and it sends the collected multi-angle face images to the server.
After the server receives the multi-angle face images, the correction unit first obtains the occlusion images from them. In this embodiment, an occlusion image is an image in which the face is partially occluded; that is, the student's face is in a partially occluded state, for example blocked by a hat or scarf and only partially visible. The occlusion image contains a plurality of feature points for marking face features, through which the occluded face can still be located even though it is blocked. Generally, every face has the same number of face features in similar positions, so face features can be marked both in unoccluded and in occluded face images. For example, if the occluded part of the face corresponds to the mouth, the feature points can still mark where the occluded mouth lies.
Then the occlusion image is imported into the pre-trained face occlusion model and the corresponding occlusion information is calculated through the feature points, the occlusion information being the ratio of non-face pixels in the face region to all pixels in the face region; whether the occlusion image meets the preset condition is judged from the occlusion information, and the occlusion images that do not are filled in, yielding corrected images at multiple angles. In this embodiment, the face occlusion model is pre-built and contains a correspondence table between occlusion images and occlusion information, together with a formula that computes the occlusion information numerically from the occlusion image. For example, with a preset condition of 0.3: if the calculated occlusion information is 0.2, i.e. the ratio of non-face pixels in the face region to all pixels in the face region is 0.2, the occluded portion of the face is small and the occlusion image meets the requirement; conversely, if the calculated occlusion information is 0.4, the occluded portion is large and the occlusion image does not meet the requirement and must be corrected. For the correction, this embodiment exploits the near bilateral symmetry of the face: information from the unoccluded side is mirrored onto the symmetric, occluded side, and the multi-angle occlusion images are filled in one by one in this way to obtain corrected images at multiple angles.
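The mirror-image correction above admits a compact sketch, assuming the face crop is already centred on its vertical symmetry axis and a boolean occlusion mask is available:

```python
import numpy as np

def mirror_fill(image: np.ndarray, occluded: np.ndarray) -> np.ndarray:
    """Fill occluded pixels from their horizontal mirror counterparts,
    exploiting the near bilateral symmetry of the face. `occluded` is a
    boolean mask, True where a pixel is blocked."""
    filled = image.copy()
    mirrored = image[:, ::-1]          # left-right flipped image
    mirrored_mask = occluded[:, ::-1]  # flipped occlusion mask
    # copy only where this side is occluded and the mirror side is not
    usable = occluded & ~mirrored_mask
    filled[usable] = mirrored[usable]
    return filled
```

Pixels occluded on both sides are left unchanged; a full system would fall back to an inpainting model for them.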
Meanwhile, in this embodiment, the correction unit further includes an acquisition subunit, a dividing subunit, a calculating subunit and a learning subunit. The acquisition subunit obtains the occlusion image from the multi-angle face images; the dividing subunit divides the occlusion image into at least one face region through the feature points; the calculating subunit calculates, in each face region, the ratio of non-face pixels to all pixels; and the learning subunit trains the face occlusion model with a machine learning algorithm, taking the occlusion image as input and the occlusion information of each face region as output.
Then the centering unit performs face detection on the multi-angle corrected images through a deep convolutional neural network to obtain face detection images at multiple angles, and performs face registration and alignment on them through affine transformation to obtain face-centered images at multiple angles. For example, a deep convolutional neural network detects the face in each corrected image, and the face is then aligned ("centered") through an affine transformation in OpenCV.
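The affine alignment solves a 2×3 matrix from landmark correspondences. The sketch below reproduces in plain numpy the linear system that OpenCV's cv2.getAffineTransform solves for three point pairs; the choice of landmarks and canonical template positions is an assumption:

```python
import numpy as np

def affine_from_landmarks(src_pts, dst_pts):
    """Solve the 2x3 affine matrix mapping three detected landmarks
    (e.g. both eyes and the mouth centre) onto canonical template
    positions -- the same system cv2.getAffineTransform solves."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.zeros((6, 6))
    b = dst.ravel()  # [x0', y0', x1', y1', x2', y2']
    for i, (x, y) in enumerate(src):
        A[2 * i, :3] = x, y, 1.0      # row for x' = a*x + b*y + c
        A[2 * i + 1, 3:] = x, y, 1.0  # row for y' = d*x + e*y + f
    m = np.linalg.solve(A, b)
    return m.reshape(2, 3)
```

In OpenCV itself the matrix would then be applied with cv2.warpAffine to produce the face-centered image.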
Finally, the comparison unit extracts face features from the multi-angle face-centered images to obtain face features at multiple angles, loads the multi-angle features and the database features into memory for comparison to obtain similarities, and judges whether the similarity at at least one angle reaches the preset threshold. If it does, face recognition is judged successful; if no angle reaches the threshold, face recognition is judged failed; and the success or failure information is shown on the display screen.
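The any-angle threshold test can be sketched as follows; the patent does not fix the similarity measure or threshold, so cosine similarity and 0.8 are assumptions:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognise(angle_features, db_features, threshold=0.8) -> bool:
    """Face recognition succeeds as soon as the similarity between any
    angle's feature vector and any enrolled vector reaches the preset
    threshold; otherwise it fails."""
    return any(cosine_similarity(feat, ref) >= threshold
               for feat in angle_features
               for ref in db_features)
```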
Based on the non-cooperative face recognition system, a non-cooperative face recognition method is also disclosed, comprising the following steps:
S1, acquiring face images from multiple angles at the same time and in the same place, to obtain face images at multiple angles;
s2, obtaining an occlusion image from face images of multiple angles, wherein the occlusion image is an image in which a face is partially occluded, and the occlusion image comprises a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model, and calculating corresponding occlusion information through the feature points, wherein the occlusion information is the ratio of non-face pixels in the face region to all pixels in the face region; judging whether the occlusion image meets a preset condition according to the occlusion information, and performing filling on the occlusion image which does not meet the preset condition to obtain corrected images of a plurality of angles;
s3, carrying out face detection on the corrected images at multiple angles through a deep convolution neural network to obtain face detection images at multiple angles; carrying out face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face centering images at multiple angles;
s4, extracting face features from the face centered images of multiple angles to obtain the face features of multiple angles; and comparing the face features of a plurality of angles with face features stored in a database in advance respectively to obtain similarity, and judging whether the similarity under at least one angle reaches a preset threshold value: if yes, judging that the face recognition is successful; if not, judging that the face recognition fails successfully;
and S5, displaying information of successful face recognition or failed face recognition.
Example 2
The only difference from Embodiment 1 is that the correction unit further includes a filtering subunit, a pose subunit and a grouping subunit, and that the occlusion images, once obtained, undergo three preprocessing steps. First, the filtering subunit identifies whether a face is present in each occlusion image and deletes those without one; it also judges picture quality and deletes occlusion images whose quality does not meet the preset requirement, for example low-definition, low-pixel images. Then the pose subunit corrects the look-down pose of the face in the occlusion image to obtain an occlusion image with the face in a front-view pose; for example, the depression angle of the overhead view is converted into a scale factor through a Gaussian function, and the frontal face and the overhead face are weighted and summed according to that factor. Finally, the grouping subunit groups the occlusion images according to their occlusion information, e.g. by the ratio of non-face pixels in the face region to all pixels in the face region: ratios between 0.1 and 0.2 form one group, ratios between 0.2 and 0.3 another.
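The pose correction is described only loosely; a hedged sketch, assuming the Gaussian maps the depression angle to a blending weight (sigma and the weighting direction are both assumptions):

```python
import numpy as np

def blend_pose(frontal_ref, top_down_img, depression_deg, sigma=15.0):
    """Map the depression (look-down) angle to a weight via a Gaussian
    and return the weighted sum of the observed top-down image and a
    frontal reference. At 0 degrees the observed image is kept as-is;
    the steeper the angle, the more the frontal reference dominates."""
    w = float(np.exp(-(depression_deg ** 2) / (2.0 * sigma ** 2)))
    return (w * np.asarray(top_down_img, dtype=float)
            + (1.0 - w) * np.asarray(frontal_ref, dtype=float))
```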
Example 3
The difference from Embodiment 2 is that the camera collects the students' face images from multiple angles not at the same time and place but at different moments and places: the student walks slowly towards the access-control gate, and the camera captures a face image at a preset interval, such as every 2 seconds, as the student walks. The face images pre-stored in the server's database were collected through the students' mobile phones and include the phones' device information, which corresponds one-to-one with the phones, such as the MAC address. As the student walks towards the gate, the phone's WiFi or Bluetooth is on, and the change in the phone's WiFi signal strength is detected. If the detected signal strength changes gradually and the phone's MAC address matches a MAC address stored with a face image in the database, the face image is valid; otherwise it is invalid and the filtering subunit deletes it.
Meanwhile, the correction unit uses an image recognition algorithm to judge from the occlusion image whether the occlusion is active or passive. That is, the algorithm recognises the type of the occluding object: if it is a hat, clothing or another common apparel item, the occlusion is passive, the student is not deliberately hiding, and the face image is valid; conversely, if the occluder is an unusual object such as a helmet or head cover, the occlusion is active, the student is deliberately hiding, the face image is invalid, and the filtering subunit deletes it.
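Once the occluder type is recognised, the active/passive decision is a simple lookup. A toy sketch, with label sets that are purely hypothetical stand-ins for the upstream image recognition model's output:

```python
# Hypothetical label set: unusual coverings indicate deliberate occlusion.
ACTIVE_OCCLUDERS = {"helmet", "head cover", "balaclava"}

def is_valid_capture(occluder_label: str) -> bool:
    """Passive occlusion (hats, scarves and other everyday apparel)
    keeps the face image valid; active occlusion marks it invalid so
    the filtering subunit can delete it."""
    return occluder_label not in ACTIVE_OCCLUDERS
```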
The foregoing are embodiments of the present invention and are not intended to limit the scope of the invention to the particular forms set forth; the scope of protection is defined by the appended claims. It should be noted that those skilled in the art can make several changes and modifications without departing from the structure of the present invention, and these should also be regarded as falling within the protection scope of the present invention without affecting the effect of its implementation or the practicability of the patent. The scope of protection of the present application shall be determined by the contents of the claims, and the description of the embodiments in the specification may be used to interpret the contents of the claims.

Claims (4)

1. A non-matching type face recognition system, comprising:
the acquisition unit is used for acquiring face images from multiple angles at the same time and in the same place to obtain face images from multiple angles;
the correction unit is used for acquiring an occlusion image from the face images of multiple angles, wherein the occlusion image is an image in which the face is partially occluded and comprises a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model, and calculating corresponding occlusion information through the feature points, wherein the occlusion information is the ratio of non-face pixels in the face region to all pixels in the face region; and judging whether the occlusion image meets a preset condition according to the occlusion information, and correcting occlusion images that do not meet the preset condition to obtain corrected images at multiple angles;
the centering unit is used for performing face detection on the corrected images at multiple angles through a deep convolutional neural network to obtain face detection images at multiple angles; and performing face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face-centered images at multiple angles;
the comparison unit is used for extracting face features from the face-centered images of multiple angles to obtain face features of multiple angles; comparing the face features of the multiple angles respectively with face features stored in advance in a database to obtain similarities, and judging whether the similarity at at least one angle reaches a preset threshold: if so, judging that face recognition is successful; if not, judging that face recognition has failed;
the display unit is used for displaying information of successful face recognition or failed face recognition;
the acquisition unit is also used for acquiring face images of the student from multiple angles at different times and places, the acquisition unit acquiring the face images at a preset time interval while the student walks;
the correction unit further includes:
the acquisition subunit is used for acquiring an occlusion image from the face images of a plurality of angles;
the dividing subunit is used for dividing the occlusion image into at least one face area through the feature points;
the calculating subunit is used for calculating the ratio of non-face pixels to all pixels in each face area;
the learning subunit is used for training a face occlusion model by adopting a machine learning algorithm and taking the occlusion image as input and the occlusion information of each face region in the occlusion image as output;
the filtering subunit is used for identifying whether the occlusion image has a face or not and deleting the occlusion image without the face; the filtering subunit is also used for judging the image quality of the occlusion image and deleting the occlusion image of which the image quality does not meet the preset requirement;
the correction unit is also used for judging, by an image recognition algorithm, whether the occlusion in the occlusion image is active or passive, the type of the occluding object being recognized by the image recognition algorithm: if the occluder is a common clothing accessory, such as a hat or clothing, the occlusion is passive and the face image is valid; if the occluder is an unusual object, such as a helmet or a head cover, the occlusion is active and the face image is invalid; and the filtering subunit is also used for deleting invalid face images.
2. The non-matching type face recognition system of claim 1, wherein the correction unit further comprises a pose subunit for correcting the top-view pose of the face in the occlusion image to obtain an occlusion image in which the face is in a front-view pose.
3. The non-matching type face recognition system of claim 2, wherein the correction unit further comprises a grouping subunit for grouping occlusion images according to the occlusion information.
4. A non-matching type face recognition method, characterized by comprising the following steps:
S1, acquiring face images from multiple angles at the same time and in the same place to obtain face images from multiple angles;
S2, acquiring an occlusion image from the face images of multiple angles, wherein the occlusion image is an image in which the face is partially occluded and comprises a plurality of feature points for marking face features; importing the occlusion image into a pre-trained face occlusion model, and calculating corresponding occlusion information through the feature points, wherein the occlusion information is the ratio of non-face pixels in the face region to all pixels in the face region; and judging whether the occlusion image meets a preset condition according to the occlusion information, and correcting occlusion images that do not meet the preset condition to obtain corrected images at multiple angles;
S3, performing face detection on the corrected images at multiple angles through a deep convolutional neural network to obtain face detection images at multiple angles; and performing face registration and alignment on the face detection images at multiple angles through affine transformation to obtain face-centered images at multiple angles;
S4, extracting face features from the face-centered images of multiple angles to obtain face features of multiple angles; comparing the face features of the multiple angles respectively with face features stored in advance in a database to obtain similarities, and judging whether the similarity at at least one angle reaches a preset threshold: if so, judging that face recognition is successful; if not, judging that face recognition has failed;
S5, displaying information of successful or failed face recognition;
in S1, face images of the student are collected from multiple angles at different times and places, and the face images are collected at a preset time interval while the student walks;
in S2, an image recognition algorithm is adopted to judge from the occlusion image whether the occlusion is active or passive, the type of the occluding object being recognized by the image recognition algorithm: if the occluder is a common clothing accessory, such as a hat or clothing, the occlusion is passive and the face image is valid; if the occluder is an unusual object, such as a helmet or a head cover, the occlusion is active, the face image is invalid, and the invalid face image is deleted.
CN202011482942.XA 2020-12-15 2020-12-15 Non-matching type face recognition system and method Active CN112597854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011482942.XA CN112597854B (en) 2020-12-15 2020-12-15 Non-matching type face recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011482942.XA CN112597854B (en) 2020-12-15 2020-12-15 Non-matching type face recognition system and method

Publications (2)

Publication Number Publication Date
CN112597854A CN112597854A (en) 2021-04-02
CN112597854B true CN112597854B (en) 2023-04-07

Family

ID=75196391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011482942.XA Active CN112597854B (en) 2020-12-15 2020-12-15 Non-matching type face recognition system and method

Country Status (1)

Country Link
CN (1) CN112597854B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114792295B (en) * 2022-06-23 2022-11-04 深圳憨厚科技有限公司 Method, device, equipment and medium for correcting blocked object based on intelligent photo frame

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001118159A (en) * 1999-10-20 2001-04-27 Nippon Aleph Corp Human body detection device, human body detection method and computer readable recording medium recording program making computer execute the method
JP5314294B2 (en) * 2008-02-27 2013-10-16 パナソニック株式会社 Face recognition device
CN102737386B (en) * 2012-05-15 2017-06-13 北京硅盾安全技术有限公司 Tracking is blocked in a kind of anti-fusion of moving target
GB201310373D0 (en) * 2013-06-11 2013-07-24 Sony Comp Entertainment Europe Head-Mountable Apparatus and systems
CN104091156A (en) * 2014-07-10 2014-10-08 深圳市中控生物识别技术有限公司 Identity recognition method and device
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A kind of face identification method for eliminating partial occlusion thing and influenceing
CN107909065B (en) * 2017-12-29 2020-06-16 百度在线网络技术(北京)有限公司 Method and device for detecting face occlusion
CN108551552B (en) * 2018-05-14 2020-09-01 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN109035295B (en) * 2018-06-25 2021-01-12 广州杰赛科技股份有限公司 Multi-target tracking method, device, computer equipment and storage medium
CN109740501A (en) * 2018-12-28 2019-05-10 广东亿迅科技有限公司 A kind of Work attendance method and device of recognition of face
CN110110672A (en) * 2019-05-10 2019-08-09 广东工业大学 A kind of facial expression recognizing method, device and equipment
CN110110681A (en) * 2019-05-14 2019-08-09 哈尔滨理工大学 It is a kind of for there is the face identification method blocked
CN110189450A (en) * 2019-06-05 2019-08-30 厦门浩邈科技有限公司 A kind of trailing analysis method based on recognition of face and demographics
CN110414394B (en) * 2019-07-16 2022-12-13 公安部第一研究所 Facial occlusion face image reconstruction method and model for face occlusion detection
CN110599129A (en) * 2019-09-16 2019-12-20 世纪海航(厦门)科技有限公司 Campus attendance checking method, device, identification terminal and system based on image tracking
CN111191506A (en) * 2019-11-26 2020-05-22 万翼科技有限公司 Personnel flow statistical method and device, computer equipment and storage medium
CN110909692A (en) * 2019-11-27 2020-03-24 北京格灵深瞳信息技术有限公司 Abnormal license plate recognition method and device, computer storage medium and electronic equipment
CN111127308B (en) * 2019-12-08 2023-06-30 复旦大学 Mirror image feature rearrangement restoration method for single sample face recognition under partial shielding
CN111767880B (en) * 2020-07-03 2022-09-09 腾讯科技(深圳)有限公司 Living body identity recognition method and device based on facial features and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning

Also Published As

Publication number Publication date
CN112597854A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
JP6549797B2 (en) Method and system for identifying head of passerby
WO2021036436A1 (en) Facial recognition method and apparatus
CN109461003A (en) Plurality of human faces scene brush face payment risk preventing control method and equipment based on multi-angle of view
CN105022999A (en) Man code company real-time acquisition system
WO2022062379A1 (en) Image detection method and related apparatus, device, storage medium, and computer program
CN108171138B (en) Biological characteristic information acquisition method and device
US11062126B1 (en) Human face detection method
CN110414381A (en) Tracing type face identification system
CN112434545A (en) Intelligent place management method and system
CN110148092B (en) Method for analyzing sitting posture and emotional state of teenager based on machine vision
JP7151875B2 (en) Image processing device, image processing method, and program
CN110705500A (en) Attention detection method and system for personnel working image based on deep learning
CN110163126A (en) A kind of biopsy method based on face, device and equipment
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN112597854B (en) Non-matching type face recognition system and method
CN111027517A (en) Sitting posture correction reminding system and method based on vision and application
CN112800950A (en) Large security activity face searching method based on deep learning
CN111178129B (en) Multi-mode personnel identification method based on human face and gesture
CN114898443A (en) Face data acquisition method and device
CN107862298A (en) It is a kind of based on the biopsy method blinked under infrared eye
CN110555393A (en) method and device for analyzing pedestrian wearing characteristics from video data
CN112183287A (en) People counting method of mobile robot under complex background
CN113008380A (en) Intelligent AI body temperature early warning method, system and storage medium
CN110516538B (en) Prison double off-duty violation assessment method based on deep learning target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant