CN112613461A - Intelligent gate communication and attendance checking method and system combining face recognition - Google Patents


Info

Publication number
CN112613461A
CN112613461A (application CN202011604566.7A; granted as CN112613461B)
Authority
CN
China
Prior art keywords
face
attendance
face recognition
gate
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011604566.7A
Other languages
Chinese (zh)
Other versions
CN112613461B (en)
Inventor
张衡
刘光杰
刘伟伟
赵华伟
陆赛杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Publication of CN112613461A
Application granted
Publication of CN112613461B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]


Abstract

The invention discloses an intelligent gate passage and attendance method and system combining face recognition. Employee photos are collected in advance, and each employee's job number, name and face information are stored in a face database. The faces of people entering and exiting are captured and compared with the face information in the database to obtain a final recognition result, completing gate passage and attendance. Registered face information is also updated periodically according to a defined strategy, so the system self-learns the face information. By working in a contactless, non-intrusive way, the invention effectively raises the level of intelligent attendance and improves management security.

Description

Intelligent gate passage and attendance checking method and system combining face recognition
Technical Field
The invention relates to machine vision and image processing technologies, in particular to an intelligent gate passage and attendance checking method and system combining face recognition.
Background
The rapid development of information technology has changed how people live and raised awareness of intelligent systems, and more and more enterprises are introducing intelligent management methods to improve the standardization of their operations. Daily attendance and security management are the first steps of internal intelligent management. For daily attendance, many different schemes have appeared over time, such as early signature sheets, later card swiping, and more recently mobile phone check-in; because these schemes are poorly supervised, impersonation and proxy check-in are common, and later querying of attendance records is inconvenient. For security management, on the one hand, stationing security personnel at entrances to keep strangers out is an obviously non-intelligent arrangement; on the other hand, since entry is granted by swiping a card, a stranger can enter a building at will using another person's employee card. Such schemes lack real-name management and bring risks to the security inside the building.
In recent years, rapid iteration of biometric identification technology has brought more options for attendance check-in, such as fingerprint recognition, palm vein recognition and face recognition. Fingerprint recognition is convenient, easy to operate and fast, but it is sensitive to finger humidity and cleanliness, some people's fingerprints are faint and hard to image, and the required contact with the device raises hygiene concerns. Palm vein recognition offers strong anti-counterfeiting and is simple to use, but its special acquisition hardware and high manufacturing cost limit its range of application. Face recognition is a non-mandatory, contactless and highly accurate biometric technology; the rapid development of deep learning has greatly reduced its computational cost, and its contactless operation provides a safer and more hygienic attendance environment. The invention adds a face recognition terminal to existing gate technology to collect the face information of people entering and exiting a building and to complete employee attendance check-in; at the same time, because face recognition is inherently real-name, it effectively prevents strangers from entering the building under a false identity and improves the building's security management.
Disclosure of Invention
The invention aims to provide an intelligent gate passage and attendance checking method and system based on face recognition.
The technical solution realizing the purpose of the invention is as follows: an intelligent gate passage and attendance checking method combining face recognition, comprising the following steps:
step 1: registering the face information, the employee card number and the employee name of the employee to be registered in a face database;
step 2: collecting a visible light video stream and an infrared video stream by using an MIPI camera, and respectively obtaining corresponding images of each frame, including a visible light image and an infrared image, by performing video decoding;
and step 3: reducing the size of the visible light image, sending the visible light image into an NNIE depth network reasoning frame, detecting the face by using a YOLOv3 network, and obtaining position information of face coordinates and size under the size reduction;
and 4, step 4: reducing the size of the infrared image, cutting the infrared image by using the face coordinate information in the step 3, and then sending the cut infrared image into an MTCNN network for in-vivo detection;
and 5: if the living body detection in the step 4 is unsuccessful, re-collecting, otherwise, cutting the visible light image by using the face coordinate information in the step 3, simultaneously reducing the visible light image to the original size, and then sending the reduced face image into an MTCNN network to obtain five key point coordinates of the face;
step 6: judging the sizes of all the detected faces, aligning the faces with the face areas larger than a certain threshold value according to the face key point coordinates in the step 5, sending the faces into an NNIE deep network reasoning frame, and obtaining a face characteristic value by using a SphereFace network;
and 7: comparing the face features obtained in the step 6 with the registered face information in the database in the step 1 one by one, calculating the similarity through a cosine distance algorithm, comparing the maximum similarity with a set threshold value, and judging whether the recognition is passed or not; after the face recognition is successful, the face recognition terminal can display the job number and the name of the recognized employee for attendance check feedback, meanwhile, a green frame is used for marking the face on a terminal interface to indicate that the attendance check is successful, meanwhile, the face recognition terminal can send a message to a gate, and after the gate is successfully received, the gate is opened for releasing; if the attendance record passes the identification, marking the attendance record by using a red frame to indicate that the attendance record fails;
and 8: the registered face features and the face features of the employees successfully identified in the actual scene are fused, the self-learning updating of the registered face information is periodically completed, and the synchronism of the face information is ensured.
Further, a lightweight YOLOv3 network is adopted in step 3, obtained as follows:
1) the YOLOv3 backbone is replaced with MobileNet, and depthwise separable convolutions replace standard convolutions;
2) the multi-scale feature maps of the YOLOv3 output layer are pruned, keeping only one medium-scale output;
3) the lightweight YOLOv3 network is trained on a Linux server under the Caffe framework, and before deployment the trained Caffe model is converted into a file format supported by the system's development board.
Further, steps 4 and 5 adopt a lightweight MTCNN: the Caffe-based MTCNN is first ported to the dnn module of OpenCV, and then only its ONet sub-network is used to detect face key points. The detection result consists of five points: the left and right eye corners, the nose tip, and the left and right mouth corners.
Further, step 7 adopts an optimized cosine distance algorithm that is accelerated with NEON instructions, operating directly on registers during feature comparison.
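The comparison in step 7 is ordinary cosine similarity over the feature database; the on-device version is NEON-accelerated native code, but the arithmetic can be sketched in NumPy (the function names and the 0.5 threshold below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, gallery, face_threshold=0.5):
    """Compare a query feature against every registered feature one by one,
    keep the maximum similarity, and accept only if it clears the threshold.
    Returns (employee_id, similarity); employee_id is None on rejection."""
    best_id, best_sim = None, -1.0
    for emp_id, feat in gallery.items():
        sim = cosine_similarity(query, feat)
        if sim > best_sim:
            best_id, best_sim = emp_id, sim
    return (best_id, best_sim) if best_sim >= face_threshold else (None, best_sim)
```

On the terminal this inner loop is the hot path over all registered features, which is why the patent accelerates it with NEON rather than scalar code.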
Further, in step 7 the face recognition terminal communicates with the gate over TCP/IP, with the terminal acting as the master and the gate's industrial PC as the slave: the terminal periodically sends a status message to the industrial PC, which executes the corresponding action and returns a gate status message.
Further, the self-learning update algorithm of step 8 fuses the registered face features with face features from the actual scene in a fixed proportion and periodically synchronizes the face information of registered people in the original database. The original registered face feature X and the actual-scene face feature Y are fused according to:
Z=αX+βY
The fused feature Z then replaces the face feature X in the face database to complete the update.
An intelligent gate passage and attendance system combining face recognition, built on any one of the above methods, comprises a face registration module, a video image preprocessing module, a face detection module, a face recognition module, a data communication module, a terminal interface prompting module and a self-learning update module based on feature fusion, wherein:
the face registration module is implemented as PC-side registration software: it detects the face in each enrollee's photo, aligns it, extracts its features, then sends the photo to the face recognition terminal over HTTP and stores the employee name, employee job number and face features in the face database;
the video image preprocessing module, built on a media processing software platform, decodes the video streams captured by the MIPI camera on the face recognition terminal into frame-by-frame images;
the face detection module comprises two parts: 1) liveness detection based on the infrared camera; 2) face detection on visible-light images, using a YOLOv3 network to detect the position and size of the face in each frame and an MTCNN network to detect face key points;
the face recognition module comprises two parts: 1) face alignment using the key points from the face detection module; 2) feature extraction on the aligned face with a SphereFace network, followed by comparison against the face features in the face database to complete identification;
the data communication module, based on TCP/IP, connects the face recognition terminal with the gate, opening the gate when face recognition passes;
the terminal interface prompting module, based on the QT GUI application framework, notifies the recognized person that attendance succeeded and shows their name and job number as attendance feedback;
the self-learning update module based on feature fusion updates the face features of registered people in the face database using the face features recognized in the actual scene.
A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the above method and completes intelligent gate passage and attendance combined with face recognition.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the methods described herein to perform intelligent gate passage and attendance combined with face recognition.
Compared with the prior art, the invention has the following beneficial effects: 1) a face recognition terminal is added to the gate, and the terminal's MIPI camera captures both visible-light and infrared video streams; 2) given the limited computing resources of the embedded development board, a lightweight YOLOv3 algorithm performs face detection, while liveness detection and face key point localization are carried out by the ONet sub-network of a lightweight MTCNN (multi-task cascaded convolutional network) model; 3) a self-learning update mechanism periodically refreshes the face features in the face database, which allows a higher recognition threshold and reduces the probability of false recognition.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1-2, the intelligent gate passage and attendance system combining face recognition comprises: a face registration module, a video image preprocessing module, a face detection module, a face recognition module, a data communication module, a terminal interface prompting module and a self-learning update module based on feature fusion. The face registration module stores employee job numbers, names and face information in a face database. The video image preprocessing module decodes the video streams captured by the MIPI camera into frame-by-frame images. The face detection module performs face detection and face key point localization on the decoded images while also completing liveness detection. The face recognition module extracts features from the detected face and matches them against all face features in the face database. The data communication module handles communication between the face recognition terminal and the gate, and is used to open and close the gate. The terminal interface prompting module displays the job number and name of the recognized person. The self-learning update module based on feature fusion performs the update operation on the face database and keeps the face information synchronized. The attendance method mainly comprises the following steps:
step 1: the face information, the employee card number and the employee name of the employee to be registered are registered to a face database through a face registration module;
step 2: the method comprises the steps of collecting visible light video streams and infrared video streams by using an MIPI camera, and respectively obtaining corresponding images of each frame through a video decoding module, wherein the images comprise visible light images and infrared images, the resolution ratio of the visible light input images is 640 x 480, the resolution ratio of the infrared input images is 1920 x 1080, and the image formats are YUV 420.
Step 3: scale the visible-light image from step 2 to 416 × 416, feed it into the NNIE deep-network inference framework, and detect faces with the lightweight YOLOv3 network, obtaining the face coordinates and size at the scaled resolution.
Further, the lightweight YOLOv3 network is improved in the following respects: 1) the YOLOv3 backbone is replaced with MobileNet, and depthwise separable convolutions replace standard convolutions, greatly reducing the parameter count of the hidden layers and simplifying computation; 2) the multi-scale feature maps of the YOLOv3 output layer are pruned, keeping only one medium-scale output while preserving face detection quality; 3) the lightweight YOLOv3 network is trained on a Linux server under the Caffe framework, and for use in this system the trained Caffe model must be converted into a file format supported by the development board. If the development board uses a HiSilicon Hi35xx-series chip, the trained Caffe model is converted into a .wk file supported by the HiSilicon NNIE deep learning framework using the nnie_mapper plug-in of HiSilicon's RuyiStudio.
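The parameter saving claimed in point 1) is easy to quantify: a standard k × k convolution mapping c_in to c_out channels holds k·k·c_in·c_out weights, while a depthwise separable convolution holds k·k·c_in (depthwise) plus c_in·c_out (1 × 1 pointwise). A quick arithmetic sketch (the channel sizes are illustrative, not layer sizes from the patent):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Depthwise k x k per input channel, then 1 x 1 pointwise."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 256 -> 256 channels.
std = conv_params(3, 256, 256)       # 589,824 weights
sep = separable_params(3, 256, 256)  # 67,840 weights, ~8.7x fewer
```

For 3 × 3 kernels the saving approaches a factor of 9 as channel counts grow, which is what makes the MobileNet backbone fit the NNIE accelerator's budget.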
Step 4: convert the YUV420-format image to a Mat-format image, resize the infrared image, crop it using the face coordinates from step 3, and feed the cropped infrared image into the lightweight MTCNN network for liveness detection.
Further, the lightweight MTCNN is obtained by first porting the Caffe-based MTCNN to the dnn module of OpenCV and then using only its ONet sub-network to detect face key points; the detection result consists of five points: the left and right eye corners, the nose tip, and the left and right mouth corners.
Step 5: if the liveness detection in step 4 succeeds, crop the 416 × 416 visible-light image according to the face coordinates from step 3, scale the crop back to the original resolution (640 × 480), and feed the face image into the lightweight MTCNN network to obtain the coordinates of the five face key points.
Further, step 5 uses the same network as step 4: the Caffe-based MTCNN is ported to the dnn module of OpenCV and only its ONet sub-network is used to detect the face key points, namely the left and right eye corners, the nose tip, and the left and right mouth corners.
Step 6: judging the sizes of all the detected faces, aligning the faces with the face areas larger than a certain threshold value according to the face key point coordinates in the step 5, sending the faces into an NNIE deep network reasoning frame, and obtaining a face characteristic value by using a SphereFace network.
Step 7: compare the face features from step 6 one by one with the face information registered in the database in step 1, compute the similarity with the optimized cosine distance algorithm, take the maximum similarity cos_similarity, and compare it with the set threshold face_threshold; if the threshold is met, recognition passes.
Further, the optimized cosine distance algorithm is accelerated with NEON instructions, greatly improving computational efficiency by operating directly on registers during feature comparison.
Step 8: after successful recognition, the face recognition terminal displays the recognized employee's job number and name as attendance feedback, and marks the face with a green frame on the terminal interface to indicate successful attendance; faces that fail recognition are marked with a red frame to indicate failed attendance.
Step 9: if face recognition passed in step 7, the face recognition terminal sends a message to the gate through the data communication module; once the gate receives the message, it opens to let the person through.
Further, the data communication module connects the face recognition terminal and the gate over TCP/IP, with the terminal in master mode and the gate's industrial PC in slave mode: the terminal periodically sends a status message to the industrial PC, which executes the corresponding action and returns a gate status message.
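The patent only specifies TCP/IP with the terminal as master and the industrial PC as slave; the concrete frame layout below (magic word, message type, sequence number, payload length) is purely an assumed illustration of how such periodic status and open-gate messages could be framed on the wire:

```python
import struct

# Hypothetical wire format for the terminal -> gate messages; all field
# choices here are assumptions, not taken from the patent.
MAGIC = 0xA55A
MSG_STATUS = 1   # periodic status query from the master (terminal)
MSG_OPEN = 2     # open-gate command after a successful recognition

def build_message(msg_type, seq, payload=b""):
    """Frame: magic (u16) | type (u8) | seq (u32) | length (u16) | payload,
    all in network byte order."""
    return struct.pack("!HBIH", MAGIC, msg_type, seq, len(payload)) + payload

def parse_message(frame):
    """Inverse of build_message; raises on a corrupted magic word."""
    magic, msg_type, seq, length = struct.unpack("!HBIH", frame[:9])
    if magic != MAGIC:
        raise ValueError("bad magic")
    return msg_type, seq, frame[9:9 + length]
```

A fixed header with an explicit length field lets the slave read messages off a TCP stream unambiguously, and the sequence number lets the master pair each returned gate status with the query that triggered it.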
Step 10: the registered face features and the face features of the employees successfully identified in the actual scene are fused, the self-learning updating of the registered face information is periodically completed, and the synchronism of the face information is ensured.
Further, the self-learning updating algorithm fuses the registered face features and the face features in the actual scene according to a certain proportion, and periodically performs synchronous updating operation on the face information of the registered people in the original database, wherein the original face registration features X, the actual scene face features Y and the dimensionalities are all 512 dimensions, and the fusion is performed according to the following expression:
X={x1,x2,…,x512},Y={y1,y2,…,y512}
Z=αX+βY
The fused feature Z then replaces the face feature X in the face database to complete the update, where α is 0.8 and β is 0.2.
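With α = 0.8 and β = 0.2 the update is an exponential-style moving average that drifts each registered feature toward recently observed appearances. A minimal sketch of the rule (the L2 re-normalisation of Z is our addition, reasonable when features are compared by cosine similarity, and not stated in the patent):

```python
import numpy as np

def fuse_features(registered, observed, alpha=0.8, beta=0.2):
    """Self-learning update Z = alpha*X + beta*Y from the patent,
    followed by L2 re-normalisation (our addition) so that later
    cosine comparisons stay well scaled."""
    z = alpha * registered + beta * observed
    return z / np.linalg.norm(z)
```

Weighting the registered feature heavily (α = 0.8) keeps the enrollment identity dominant while still letting gradual changes in illumination, pose and appearance accumulate into the database.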
The face recognition terminal is installed above the gate, and each employee's job number, name and face information are registered in the database in advance. Every time an employee enters or exits the building, the terminal captures the face and compares the captured face information with the face information in the database to obtain a final recognition result and complete attendance. The self-learning mechanism reduces the influence of factors such as illumination, pose and expression on recognition accuracy. Meanwhile, because face recognition is inherently real-name, the back-end management system can effectively authenticate the identity of everyone entering and exiting, preventing strangers from passing under a false identity and improving the level of security management.
Examples
To verify the validity of the scheme of the invention, the following simulation experiment was performed.
The hardware of the embodiment consists of a development board based on the HiSilicon Hi3559CV100 chip and an MIPI camera. The face recognition terminal is installed above the gate, and each employee's job number, name and face information are registered in the database in advance. Every time an employee enters or exits, the terminal captures the face and compares the captured face information with the database to obtain a final recognition result and complete attendance; at the same time, the back-end management system can effectively authenticate, count and analyze the people entering and exiting, improving the level of security management. The specific procedure is as follows:
step 1: the face information, the employee card number and the employee name of the employee to be registered are registered to a face database through a face registration module;
step 2: visible light video stream and infrared video stream collected by the MIPI camera are utilized, corresponding images of each frame are obtained through the video decoding module, the visible light images and the infrared images are included, the resolution ratio of the visible light input images is 640 x 480, the resolution ratio of the infrared input images is 1920 x 1080, and the image formats are YUV 420.
And step 3: and (3) scaling the visible light image in the step (2) to 416 x 416, then sending the image into an NNIE depth network reasoning framework, and detecting the face by using a lightweight YOLOv3 network to obtain position information such as face coordinates, size and the like under the scaling size.
And 4, step 4: and (3) calling an IVE interface function in the Hai, converting the YUV420 format image into a Mat format image, performing size conversion on the infrared image by using the face coordinate information in the step 3, cutting the infrared image, and sending the cut infrared image into a light MTCNN (multiple-transmission communication network) for in-vivo detection.
And 5: and (4) if the living body detection in the step 4 is successful, cutting the image (416 × 416) under the visible light according to the face coordinates in the step 3, simultaneously scaling the original size (640 × 480), and then sending the face image into a light-weight MTCNN (transport connectivity network) to obtain five key point coordinates of the face.
Step 6: judging the sizes of all the detected faces, aligning the faces with the face areas larger than a certain threshold value according to the face key point coordinates in the step 5, sending the faces into an NNIE deep network reasoning frame, and obtaining a face characteristic value by using a SphereFace network.
And 7: comparing the face features obtained in the step 6 with the face information registered in the database in the step 1 one by one, calculating the similarity through an optimized cosine distance algorithm, obtaining the maximum similarity omcos _ similarity, comparing the maximum similarity with a set threshold face _ threshold, and if the maximum similarity is satisfied, determining that the recognition is passed.
And 8: after the face recognition is successful, the face recognition terminal can display the job number and the name of the recognized employee for attendance feedback, meanwhile, the face is marked by a green frame on a terminal interface for indicating that the attendance is successful, and the face which is not recognized passes is marked by a red frame for indicating that the attendance fails.
And step 9: and 7, if the face recognition is passed, according to the data communication module, the face recognition terminal machine sends a message to the gate machine, and after the gate machine receives the message successfully, the gate is opened for releasing.
Step 10: the registered face features and the face features of the employees successfully identified in the actual scene are fused, the self-learning updating of the registered face information is periodically completed, and the synchronism of the face information is ensured.
The face library used in the experiment contains 20,000 face records, and one comparison takes 80-110 ms. With the self-learning update mechanism, the comparison similarity of the face recognition algorithm improves substantially, rising from a typical 0.7 to above 0.8. In practice, the invention performs well in gate attendance and passage scenarios, replacing conventional attendance schemes with a green, accurate and efficient solution.

Claims (9)

1. The intelligent gate on-line and attendance checking method combining with the face recognition is characterized by specifically comprising the following steps of:
step 1: registering the face information, the employee card number and the employee name of the employee to be registered in a face database;
step 2: collecting a visible light video stream and an infrared video stream by using an MIPI camera, and respectively obtaining corresponding images of each frame, including a visible light image and an infrared image, by performing video decoding;
and step 3: reducing the size of the visible light image, sending the visible light image into an NNIE depth network reasoning frame, detecting the face by using a YOLOv3 network, and obtaining position information of face coordinates and size under the size reduction;
and 4, step 4: reducing the size of the infrared image, cutting the infrared image by using the face coordinate information in the step 3, and then sending the cut infrared image into an MTCNN network for in-vivo detection;
and 5: if the living body detection in the step 4 is unsuccessful, re-collecting, otherwise, cutting the visible light image by using the face coordinate information in the step 3, simultaneously reducing the visible light image to the original size, and then sending the reduced face image into an MTCNN network to obtain five key point coordinates of the face;
step 6: judging the size of every detected face, aligning faces whose area exceeds a set threshold according to the key-point coordinates from step 5, sending them into the NNIE deep-network inference framework, and obtaining a face feature vector with a SphereFace network;
and 7: comparing the face features obtained in the step 6 with the registered face information in the database in the step 1 one by one, calculating the similarity through a cosine distance algorithm, comparing the maximum similarity with a set threshold value, and judging whether the recognition is passed or not; after the face recognition is successful, the face recognition terminal can display the job number and the name of the recognized employee for attendance check feedback, meanwhile, a green frame is used for marking the face on a terminal interface to indicate that the attendance check is successful, meanwhile, the face recognition terminal can send a message to a gate, and after the gate is successfully received, the gate is opened for releasing; if the attendance check fails, marking the attendance check with a red frame to indicate that the attendance check fails;
and 8: and (3) fusing the registered face features with the face features of the employees successfully identified in the actual scene, and periodically finishing self-learning updating of the registered face information.
2. The intelligent gate passage and attendance method combined with face recognition according to claim 1, wherein a lightweight YOLOv3 network is adopted in step 3, constructed as follows:
1) replacing the YOLOv3 backbone with MobileNet, and introducing depthwise separable convolutions in place of standard convolutions;
2) pruning the multi-scale feature maps of the YOLOv3 output layer, keeping only the single medium-scale output;
3) training the lightweight YOLOv3 network on a Linux server with the Caffe framework, and, before deployment, converting the trained Caffe model into the file format supported by the system development board.
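The parameter saving that motivates swapping standard convolutions for depthwise separable ones (claim 2, item 1) can be checked with a little arithmetic. The 256-to-512-channel 3×3 layer below is an illustrative example, not a layer taken from the patented network:

```python
def std_conv_params(cin, cout, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return cin * cout * k * k

def sep_conv_params(cin, cout, k):
    """Depthwise k x k filter per input channel, then a 1x1 pointwise conv."""
    return cin * k * k + cin * cout

cin, cout, k = 256, 512, 3
std = std_conv_params(cin, cout, k)
sep = sep_conv_params(cin, cout, k)
print(std, sep, round(std / sep, 1))  # roughly an 8-9x reduction for this layer
```

For 3×3 kernels the saving approaches a factor of 9 as the output channel count grows, which is why MobileNet-style backbones fit embedded inference frameworks such as NNIE.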
3. The intelligent gate passage and attendance method combined with face recognition according to claim 1, characterized in that: a lightweight MTCNN is adopted in steps 4 and 5; the Caffe-based MTCNN is first ported to the OpenCV dnn module, and then only the ONet network is used to detect face key points, the result comprising five points, namely the positions of the left and right eye corners, the nose tip, and the left and right mouth corners.
4. The intelligent gate passage and attendance method combined with face recognition according to claim 1, characterized in that: step 7 adopts an optimized cosine-distance algorithm accelerated with NEON instructions, operating directly on registers during feature comparison.
5. The intelligent gate passage and attendance method combined with face recognition according to claim 1, characterized in that: in step 7 the face recognition terminal communicates with the gate over TCP/IP, with the terminal acting as master and the gate industrial PC as slave; the terminal periodically sends a status message to the gate industrial PC, which executes the request and returns a gate-status message.
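The master/slave exchange described in claim 5 can be sketched with plain TCP sockets. The message strings ("OPEN", "GATE_OPENED") and the loopback setup are invented for illustration, since the patent does not specify a wire format:

```python
import socket
import threading

def gate_slave(server_sock):
    """Gate industrial PC (slave): answer one status message from the terminal."""
    conn, _ = server_sock.accept()
    with conn:
        msg = conn.recv(1024).decode()
        conn.sendall(b"GATE_OPENED" if msg == "OPEN" else b"GATE_IDLE")

# Stand-in gate controller listening on an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=gate_slave, args=(server,))
t.start()

# Face-recognition terminal (master): recognition passed, ask the gate to open.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"OPEN")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
print(reply)
```

The master-initiates/slave-responds pattern keeps the gate controller stateless between requests, matching the periodic status polling the claim describes.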
6. The intelligent gate passage and attendance method combined with face recognition according to claim 1, characterized in that: in step 8, the self-learning update algorithm fuses the registered face features with the face features from the actual scene in a set proportion, and periodically synchronizes the face information of registered persons in the original database; the original registered face feature X and the actual-scene face feature Y are fused according to the expression:
Z = αX + βY
and the fused feature Z then replaces the face feature X in the face database to complete the update.
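A minimal sketch of the fusion update Z = αX + βY, assuming unit-normalized features and illustrative weights α = 0.7, β = 0.3 (the claim only says "a set proportion"); re-normalizing Z after fusion is likewise an assumption, made so that later cosine comparisons stay consistent:

```python
import numpy as np

def fuse_features(x, y, alpha=0.7, beta=0.3):
    """Fuse registered feature x with an actual-scene feature y,
    then re-normalize so the result can be cosine-compared directly."""
    z = alpha * x + beta * y
    return z / np.linalg.norm(z)

rng = np.random.default_rng(1)
x = rng.normal(size=512)
x /= np.linalg.norm(x)                 # registered feature
y = x + 0.05 * rng.normal(size=512)
y /= np.linalg.norm(y)                 # scene sample drifted from registration
z = fuse_features(x, y)
# z leans toward the registration while absorbing some of the scene drift
print(float(z @ x), float(z @ y))
```

Weighting the registered feature more heavily (α > β) damps the update, so a single unusual scene capture cannot drag the stored template far from its enrollment state.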
7. An intelligent gate passage and attendance system combined with face recognition, characterized in that the system implements the method of any one of claims 1 to 6 and comprises a face registration module, a video image preprocessing module, a face detection module, a face recognition module, a data communication module, a terminal interface information prompting module, and a feature-fusion-based self-learning update module, wherein:
the face registration module is implemented as PC-side face registration software; it first runs face detection on the picture of the person to be registered, then aligns the face, then extracts features from the detected face, and finally sends them to the face recognition terminal over HTTP, storing the employee name, employee ID, and face features in the face database;
the video image preprocessing module, built on a media processing software platform, decodes the video stream acquired by the MIPI camera on the face recognition terminal into frame-by-frame images;
the face detection module comprises two parts: 1) liveness detection based on the infrared camera; 2) face detection on visible-light images, using a YOLOv3 network to locate the position and size of the face in each frame and an MTCNN network to detect face key points;
the face recognition module comprises two parts: 1) face alignment using the key points obtained by the face detection module; 2) feature extraction on the aligned face with a SphereFace network, followed by comparison against the face features in the face database to complete identification;
the data communication module, based on TCP/IP, connects the face recognition terminal with the gate machine and opens the gate to let the person through when face recognition passes;
the terminal interface information prompting module, built on the QT graphical user interface application development framework, notifies the recognized person that attendance succeeded and shows the name and employee ID for attendance interaction;
the feature-fusion-based self-learning update module updates the face features of registered persons in the face database with the face features recognized in the actual scene.
8. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 1 to 6 to perform intelligent gate passage and attendance combined with face recognition.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 6 to perform intelligent gate passage and attendance combined with face recognition.
CN202011604566.7A 2020-12-14 2020-12-29 Intelligent gate communication and attendance checking method and system combining face recognition Active CN112613461B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020114693241 2020-12-14
CN202011469324 2020-12-14

Publications (2)

Publication Number Publication Date
CN112613461A true CN112613461A (en) 2021-04-06
CN112613461B CN112613461B (en) 2022-11-29

Family

ID=75249173

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011604566.7A Active CN112613461B (en) 2020-12-14 2020-12-29 Intelligent gate communication and attendance checking method and system combining face recognition
CN202111030993.3A Pending CN113870356A (en) 2020-12-14 2021-09-03 Gate passing behavior identification and control method combining target detection and binocular vision

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111030993.3A Pending CN113870356A (en) 2020-12-14 2021-09-03 Gate passing behavior identification and control method combining target detection and binocular vision

Country Status (1)

Country Link
CN (2) CN112613461B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445393B (en) * 2022-02-07 2023-04-07 无锡雪浪数制科技有限公司 Bolt assembly process detection method based on multi-vision sensor
CN116403284B (en) * 2023-04-07 2023-09-12 北京奥康达体育产业股份有限公司 Wisdom running examination training system based on bluetooth transmission technology
CN117765651A (en) * 2023-12-27 2024-03-26 暗物质(北京)智能科技有限公司 Gate passing identification method and system based on top view visual angle depth fusion
CN118097564A (en) * 2024-04-19 2024-05-28 南京国电南自轨道交通工程有限公司 Subway scene image sample simulation construction method based on virtual reality technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507289A (en) * 2017-09-30 2017-12-22 四川长虹电器股份有限公司 A kind of mobile terminal human face identification work-attendance checking method and system
CN109816838A (en) * 2019-03-14 2019-05-28 福建票付通信息科技有限公司 A kind of recognition of face gate and its ticket checking method
US20200202110A1 (en) * 2017-09-19 2020-06-25 Nec Corporation Collation system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569676A (en) * 2021-07-16 2021-10-29 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2023284307A1 (en) * 2021-07-16 2023-01-19 上海商汤智能科技有限公司 Image processing method and apparatus, and electronic device, storage medium and computer program product
CN113569676B (en) * 2021-07-16 2024-06-11 北京市商汤科技开发有限公司 Image processing method, device, electronic equipment and storage medium
CN114241590A (en) * 2022-02-28 2022-03-25 深圳前海清正科技有限公司 Self-learning face recognition terminal

Also Published As

Publication number Publication date
CN112613461B (en) 2022-11-29
CN113870356A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN112613461B (en) Intelligent gate communication and attendance checking method and system combining face recognition
CN111460962B (en) Face recognition method and face recognition system for mask
CN106803289A (en) A kind of false proof method and system of registering of intelligent mobile
CN101739742B (en) Networking type multi-channel access control and attendance system
CN101964056A (en) Bimodal face authentication method with living body detection function and system
CN112733802B (en) Image occlusion detection method and device, electronic equipment and storage medium
CN108108711B (en) Face control method, electronic device and storage medium
KR20220042301A (en) Image detection method and related devices, devices, storage media, computer programs
CN113408465B (en) Identity recognition method and device and related equipment
CN113269091A (en) Personnel trajectory analysis method, equipment and medium for intelligent park
CN101794386A (en) Fingerprint identification system and method for resisting remaining fingerprint
CN116311400A (en) Palm print image processing method, electronic device and storage medium
CN208888897U (en) A kind of intelligent visitor system based on testimony of a witness unification
CN113011544B (en) Face biological information identification method, system, terminal and medium based on two-dimensional code
CN113591603A (en) Certificate verification method and device, electronic equipment and storage medium
WO2018185574A1 (en) Apparatus and method for documents and/or personal identities recognition and validation
CN113538720A (en) Embedded face recognition attendance checking method based on Haisi intelligent AI chip
CN107025435A (en) A kind of face recognition processing method and system
CN103700151A (en) Morning run check-in method
CN105718972B (en) A kind of information intelligent acquisition method
CN210721506U (en) Dynamic face recognition terminal based on 3D camera
CN210052203U (en) Attendance check-in system based on multiple identification methods
CN112241674A (en) Face recognition method and system
CN105139254A (en) Earprint recognition-based bank remote identity authentication method and system
CN111476931A (en) Human code information verification method and device for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventors after change: Liu Guangjie, Liu Weiwei, Zhang Heng, Zhao Huawei, Lu Saijie

Inventors before change: Zhang Heng, Liu Guangjie, Liu Weiwei, Zhao Huawei, Lu Saijie

GR01 Patent grant