CN114241556A - Non-perception face recognition attendance checking method and device
- Publication number: CN114241556A (application CN202111514144.5A)
- Authority: CN (China)
- Prior art keywords: face, identity information, attendance, identity, video
- Prior art date: 2021-12-13
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
Abstract
The invention discloses a non-perception face recognition attendance method in the technical field of image recognition, comprising the following steps: Step S1, acquiring a face video through the camera of the attendance machine, and extracting several consecutive video frames from the video; Step S2, extracting the face features in the video frames using a face detection algorithm, performing similarity matching between the extracted face features and face images pre-stored in a database, and finding the face image with the highest similarity and the corresponding identity information; Step S3, comparing the identity information of the face image found in step S2 with the identity information in an attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent. The invention also discloses a non-perception face recognition attendance device for implementing the method. The method and device eliminate missed check-ins and queuing at the attendance machine, improve work efficiency, and are convenient to use with a high recognition rate.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a non-perception face recognition attendance checking method and a non-perception face recognition attendance checking device.
Background
Artificial Intelligence (AI) refers to systems and machines that mimic human intelligence to perform tasks and iteratively improve themselves based on the information they collect.
AI is less a specific format or function than a process and a capability for advanced reasoning and data analysis. Many people picture AI as highly capable humanoid robots taking over the world; in fact, AI was never intended to replace humans, but to substantially augment human abilities and contributions.
Artificial intelligence algorithms (also called soft computing) are problem-solving algorithms inspired by laws of nature that simulate natural processes. Current artificial intelligence algorithms include artificial neural networks, genetic algorithms, simulated annealing, swarm-intelligence ant colony algorithms, particle swarm algorithms, and the like. With continuous optimization, these algorithms not only help improve work efficiency and living standards, but can also quickly locate the information we need within today's vast information resources.
Mask R-CNN is an instance segmentation framework; by adding different branches it can handle tasks such as object classification, object detection, semantic segmentation, instance segmentation, and human pose estimation. For instance segmentation, it adds a semantic segmentation branch on top of Faster R-CNN (which already provides classification and regression branches).
MTCNN is short for Multi-task Cascaded Convolutional Neural Networks. It was the first method to combine face detection with facial landmark localization, and the landmarks it produces can be used for face alignment. The algorithm consists of three stages: the first stage rapidly generates candidate boxes with a shallow CNN; the second stage refines the candidate windows with a more complex CNN, discarding a large number of overlapping windows; the third stage uses a still stronger CNN to reject remaining false candidates while regressing five facial landmarks.
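As an illustration of the cascade idea behind MTCNN, the sketch below filters hypothetical candidate boxes with non-maximum suppression, the operation each stage applies after scoring; the boxes, scores, and threshold are made-up values, whereas the real algorithm obtains them from the P-Net, R-Net, and O-Net networks.

```python
# Illustrative sketch of the candidate-box filtering used between MTCNN
# stages (hypothetical boxes and scores; a real pipeline scores with CNNs).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring boxes, discarding heavily overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Stage 1 emits many rough candidates; later stages rescore and refine them.
boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (100, 100, 140, 150)]
scores = [0.9, 0.8, 0.95]
kept = nms(boxes, scores)  # the near-duplicate of the first box is dropped
```

The same suppression step runs after each of the three stages, with progressively stricter scoring.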
The FSA-Net algorithm uses two streams that are intended to supply each other with complementary information; the two streams use different structures. In the dual heterogeneous streams, each stream extracts a feature map at each of the K stages. A stage fusion module fuses the features extracted by the two streams at each stage by element-wise multiplication, rather than waiting for a single final fusion. A 1x1 convolution then maps the fused features to c channels; this part constitutes the fine-grained structure mapping module.
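The stage fusion described above can be sketched as follows; the feature maps, weights, and shapes are illustrative assumptions, while a real FSA-Net implementation operates on CNN feature tensors.

```python
# Minimal sketch of FSA-Net-style stage fusion: features from two streams
# are fused by element-wise multiplication, then a 1x1 convolution (here a
# per-pixel weighted channel mix) maps the fused map to output channels.

def fuse(stream1, stream2):
    """Element-wise product of two equally shaped feature maps (H x W)."""
    return [[a * b for a, b in zip(r1, r2)] for r1, r2 in zip(stream1, stream2)]

def conv1x1(maps, weights):
    """1x1 convolution: weighted sum across input channels at each pixel."""
    h, w = len(maps[0]), len(maps[0][0])
    out = []
    for wrow in weights:  # one output channel per weight vector
        out.append([[sum(wk * maps[k][i][j] for k, wk in enumerate(wrow))
                     for j in range(w)] for i in range(h)])
    return out

s1 = [[1.0, 2.0], [3.0, 4.0]]    # toy feature map from stream one
s2 = [[0.5, 0.5], [0.5, 0.5]]    # toy feature map from stream two
fused = fuse(s1, s2)             # element-wise multiplicative fusion
mixed = conv1x1([fused], [[2.0]])  # single in-channel, single out-channel
```

Multiplicative fusion lets each stream gate the other's activations at every stage, instead of deferring all interaction to the end of the network.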
The FaceNet algorithm does not perform classification learning with the traditional softmax; instead it takes the output of a chosen layer as a feature, learns an encoding from images into a Euclidean space, and then performs face recognition, face verification, face clustering, and so on based on that encoding.
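A minimal sketch of this embedding-distance idea, with made-up toy embeddings standing in for real FaceNet outputs and an assumed distance threshold:

```python
# Sketch of FaceNet-style verification: faces are compared by Euclidean
# distance between their embeddings rather than by softmax class scores.
# The vectors below are toy values, not real network outputs.

import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def same_person(emb_a, emb_b, threshold=1.1):
    """Two faces match when their embeddings lie close in Euclidean space."""
    return euclidean(emb_a, emb_b) < threshold

anchor = [0.1, 0.9, 0.2]      # enrolled face embedding
probe_1 = [0.12, 0.88, 0.19]  # same identity: small distance
probe_2 = [0.9, 0.1, 0.8]     # different identity: large distance
```

Because recognition reduces to distance comparisons, the same embedding supports verification, identification, and clustering without retraining a classifier.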
Disclosure of Invention
In view of the needs of current development and the shortcomings of the prior art, the invention provides a non-perception face recognition attendance method and device.
First, the invention provides a non-perception face recognition attendance method, and the technical solution adopted to solve the technical problem is as follows:
A non-perception face recognition attendance method comprises the following steps:
Step S1, acquiring a face video through the camera of the attendance machine, and extracting several consecutive video frames from the video;
Step S2, extracting the face features in the video frames using a face detection algorithm, performing similarity matching between the extracted face features and face images pre-stored in a database, and finding the face image with the highest similarity and the corresponding identity information;
Step S3, comparing the identity information of the face image found in step S2 with the identity information in an attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
In step S2, the face features in the video frames are extracted using a face detection algorithm; the specific operations comprise:
Step S2.1, performing face alignment and coordinate information calculation on the image frames extracted in step S1 using a face detection algorithm;
Step S2.2, extracting the features of the detected face in each image frame;
Step S2.3, performing similarity matching between all extracted face features and face images pre-stored in a database.
In step S2.3, similarity matching is performed between all extracted face features and the face images pre-stored in the database; the specific operations comprise:
Step S2.3.1, for several consecutive image frames of the same identity, obtaining the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
Step S2.3.2, setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
Step S2.3.3, comparing the identity information found in step S2.3.2 with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
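The superposition and thresholding of steps S2.3.1 and S2.3.2 can be sketched as follows; the probability grids, their values, and the first threshold are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of steps S2.3.1-S2.3.2: per-frame spatial position
# distribution probabilities for one identity are superposed (summed), the
# maximum point is located, and a first threshold decides acceptance.

def superpose(prob_maps):
    """Element-wise sum of per-frame probability maps (H x W lists)."""
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    return [[sum(m[i][j] for m in prob_maps) for j in range(w)]
            for i in range(h)]

def max_point(grid):
    """Return ((row, col), value) of the grid's maximum."""
    best = max((v, (i, j)) for i, row in enumerate(grid)
               for j, v in enumerate(row))
    return best[1], best[0]

frames = [
    [[0.1, 0.2], [0.1, 0.6]],  # frame 1: face most likely at (1, 1)
    [[0.0, 0.3], [0.2, 0.5]],  # frame 2: same identity, similar location
]
combined = superpose(frames)
coord, value = max_point(combined)
FIRST_THRESHOLD = 1.0              # illustrative value
accepted = value > FIRST_THRESHOLD  # identity kept only above the threshold
```

Superposing several frames makes the decision robust to a single noisy detection, which is what allows attendance to be taken without the person stopping in front of the camera.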
Optionally, the face detection algorithm adopts a statistics-based method, specifically a face detection algorithm based on histogram coarse segmentation and singular value features.
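The final comparison against the attendance list (step S3 / step S2.3.3) reduces to a set membership check, as sketched below; the identity names are hypothetical.

```python
# Minimal sketch of the attendance comparison in step S3 / S2.3.3: each
# identity recognized from the video is checked against the attendance
# list; matches are marked present, the rest absent. Names are made up.

def check_attendance(recognized_ids, attendance_list):
    """Return {identity: 'present' | 'absent'} for every listed person."""
    recognized = set(recognized_ids)
    return {person: ('present' if person in recognized else 'absent')
            for person in attendance_list}

attendance_list = ['alice', 'bob', 'carol']
recognized_ids = ['alice', 'carol', 'dave']  # 'dave' is not on the list
status = check_attendance(recognized_ids, attendance_list)
```

Identities recognized but absent from the list (like 'dave' above) are simply ignored; only people on the attendance list receive a present or absent mark.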
Second, the invention provides a non-perception face recognition attendance device, and the technical solution adopted to solve the technical problem is as follows:
A non-perception face recognition attendance device, whose structure comprises:
a camera processing module, used to collect a face video and extract several consecutive video frames from the collected video;
a feature extraction and matching module, used to extract the face features in the video frames, perform similarity matching between the extracted face features and the face images pre-stored in the database, and find the face image with the highest similarity and the corresponding identity information;
a comparison and confirmation module, used to compare the identity information of the found face image with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
Optionally, the feature extraction and matching module specifically comprises:
a face detection algorithm unit, used to perform face alignment and coordinate information calculation on the image frames extracted by the camera processing module, using a face detection algorithm;
a feature extraction unit, used to extract the face features in each image frame;
a feature matching unit, used to perform similarity matching between all extracted face features and face images pre-stored in the database.
Further optionally, the specific process by which the feature matching unit performs similarity matching between all extracted face features and the face images pre-stored in the database comprises:
for several consecutive image frames of the same identity, calculating the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
and transmitting the found identity information to the comparison and confirmation module.
Preferably, the face detection algorithm in the feature extraction and matching module adopts a statistics-based method.
Compared with the prior art, the non-perception face recognition attendance method and device of the invention have the following beneficial effects:
(1) the invention obtains the face video imperceptibly and extracts several consecutive video frames from it for feature extraction and comparison, realizing non-perception face attendance, eliminating missed check-ins and queuing at the attendance machine, and improving work efficiency;
(2) the invention is convenient to use and has a high recognition rate, overcoming the high misjudgment rate and low face recognition accuracy of conventional attendance systems.
Drawings
FIG. 1 is a flow chart of a method according to a first embodiment of the present invention;
fig. 2 is a connection block diagram of the second embodiment of the present invention.
Reference numerals in the drawings:
1. camera processing module; 2. feature extraction and matching module; 3. comparison and confirmation module;
4. face detection algorithm unit; 5. feature extraction unit; 6. feature matching unit.
Detailed Description
To make the technical solution of the present invention, the technical problems it solves, and its technical effects clearer, the technical solution of the present invention is described clearly and completely below with reference to specific embodiments.
The first embodiment is as follows:
With reference to fig. 1, this embodiment provides a non-perception face recognition attendance method, whose recognition process comprises:
Step S1, acquiring a face video through the camera of the attendance machine, and extracting several consecutive video frames from the video;
Step S2, performing face alignment and coordinate information calculation on the image frames extracted in step S1 using a face detection algorithm;
Step S3, extracting the features of the detected face in each image frame, and performing similarity matching between all extracted face features and face images pre-stored in a database;
Step S4, for several consecutive image frames of the same identity, obtaining the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
Step S5, setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
Step S6, comparing the identity information found in step S5 with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
In this embodiment, the face detection algorithm may adopt a statistics-based method, specifically a face detection algorithm based on histogram coarse segmentation and singular value features.
Example two:
With reference to fig. 2, this embodiment provides a non-perception face recognition attendance device, whose structure comprises:
a camera processing module 1, used to collect a face video and extract several consecutive video frames from the collected video;
a feature extraction and matching module 2, used to extract the face features in the video frames, perform similarity matching between the extracted face features and the face images pre-stored in the database, and find the face image with the highest similarity and the corresponding identity information;
a comparison and confirmation module 3, used to compare the identity information of the found face image with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
In this embodiment, the feature extraction and matching module 2 specifically comprises:
a face detection algorithm unit 4, used to perform face alignment and coordinate information calculation on the image frames extracted by the camera processing module 1, using a face detection algorithm;
a feature extraction unit 5, used to extract the face features in each image frame;
a feature matching unit 6, used to perform similarity matching between all extracted face features and face images pre-stored in the database.
In this embodiment, the specific process by which the feature matching unit 6 performs similarity matching between all extracted face features and the face images pre-stored in the database comprises:
for several consecutive image frames of the same identity, calculating the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
and transmitting the found identity information to the comparison and confirmation module 3.
In this embodiment, the face detection algorithm may adopt a statistics-based method, specifically a face detection algorithm based on histogram coarse segmentation and singular value features.
In conclusion, the non-perception face recognition attendance method and device of the invention realize imperceptible face attendance, eliminate missed check-ins and queuing, improve work efficiency, and offer convenient use and a high recognition rate.
The principles and embodiments of the present invention have been described in detail with specific examples, which are provided only to aid understanding of the core technical content of the present invention. Any improvements and modifications made by those skilled in the art on the basis of the above embodiments, without departing from the principle of the present invention, shall fall within the protection scope of the present invention.
Claims (8)
1. A non-perception face recognition attendance method, characterized in that the recognition process comprises:
Step S1, acquiring a face video through the camera of the attendance machine, and extracting several consecutive video frames from the video;
Step S2, extracting the face features in the video frames using a face detection algorithm, performing similarity matching between the extracted face features and face images pre-stored in a database, and finding the face image with the highest similarity and the corresponding identity information;
Step S3, comparing the identity information of the face image found in step S2 with the identity information in an attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
2. The non-perception face recognition attendance method according to claim 1, characterized in that, in step S2, extracting the face features in the video frames using a face detection algorithm specifically comprises:
Step S2.1, performing face alignment and coordinate information calculation on the image frames extracted in step S1 using a face detection algorithm;
Step S2.2, extracting the features of the detected face in each image frame;
Step S2.3, performing similarity matching between all extracted face features and face images pre-stored in a database.
3. The non-perception face recognition attendance method according to claim 1, characterized in that, in step S2.3, performing similarity matching between all extracted face features and the face images pre-stored in the database specifically comprises:
Step S2.3.1, for several consecutive image frames of the same identity, obtaining the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
Step S2.3.2, setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
Step S2.3.3, comparing the identity information found in step S2.3.2 with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
4. The non-perception face recognition attendance method according to claim 1, characterized in that the face detection algorithm adopts a statistics-based method.
5. A non-perception face recognition attendance device, characterized in that its structure comprises:
a camera processing module, used to collect a face video and extract several consecutive video frames from the collected video;
a feature extraction and matching module, used to extract the face features in the video frames, perform similarity matching between the extracted face features and the face images pre-stored in the database, and find the face image with the highest similarity and the corresponding identity information;
a comparison and confirmation module, used to compare the identity information of the found face image with the identity information in the attendance list; if they are consistent, attendance for that identity succeeds, and if they are inconsistent, the identity is marked absent.
6. The non-perception face recognition attendance device according to claim 5, characterized in that the feature extraction and matching module specifically comprises:
a face detection algorithm unit, used to perform face alignment and coordinate information calculation on the image frames extracted by the camera processing module, using a face detection algorithm;
a feature extraction unit, used to extract the face features in each image frame;
a feature matching unit, used to perform similarity matching between all extracted face features and face images pre-stored in the database.
7. The non-perception face recognition attendance device according to claim 6, characterized in that the specific process by which the feature matching unit performs similarity matching between all extracted face features and the face images pre-stored in the database comprises:
for several consecutive image frames of the same identity, calculating the spatial position distribution probability of the face, superposing the several spatial position distribution probabilities of the face of that identity, calculating the maximum point, and recording the coordinates of the maximum point and the identity information;
setting a first threshold, and, for the spatial position distribution probability of each face, finding the maximum value exceeding the first threshold, the coordinates of the corresponding maximum point, and the corresponding identity information;
and transmitting the found identity information to the comparison and confirmation module.
8. The non-perception face recognition attendance device according to claim 5, characterized in that the face detection algorithm in the feature extraction and matching module adopts a statistics-based method.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202111514144.5A | 2021-12-13 | 2021-12-13 | Non-perception face recognition attendance checking method and device |

Publications (1)

| Publication Number | Publication Date | Status |
| --- | --- | --- |
| CN114241556A | 2022-03-25 | Pending |

Family ID: 80755017

Cited By (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| WO2023029678A1 | 2022-04-06 | 2023-03-09 | 江苏商贸职业学院 | GIS-based agricultural service management method and system |
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| 2022-12-20 | TA01 | Transfer of patent application right |

TA01 details: effective date of registration 2022-12-20. Applicant after: Shanghai Yunxi Technology Co., Ltd., Room 305-22, Building 2, No. 1158 Zhangdong Road and No. 1059 Dangui Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200120. Applicant before: Inspur Cloud Information Technology Co., Ltd., Building S01, Tidal Science Park, No. 1036 Tidal Road, Jinan High-tech Zone, Shandong Province, 250100.