CN110119673A - Non-perceptual face attendance method, apparatus, device and storage medium - Google Patents
Non-perceptual face attendance method, apparatus, device and storage medium
- Publication number
- CN110119673A (application CN201910239164.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- matrix
- face characteristic
- attendance
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1091—Recording time for administrative or management purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Human Resources & Organizations (AREA)
- Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Strategic Management (AREA)
- General Health & Medical Sciences (AREA)
- Entrepreneurship & Innovation (AREA)
- Data Mining & Analysis (AREA)
- General Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Quality & Reliability (AREA)
- Operations Research (AREA)
- Marketing (AREA)
- Economics (AREA)
- Educational Administration (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a non-perceptual face attendance method, apparatus, device and computer-readable storage medium. The method performs facial landmark localization on consecutive frames of images and, through a convolutional neural network model, extracts the face feature in each frame and saves it to a face feature list; clusters the face feature list to obtain a face feature matrix; performs face image quality analysis on the face features in the face feature matrix to obtain an image quality value for each face feature and constructs a quality analysis matrix; matches the face features in the face feature matrix against the original face features stored in a preset original face database to obtain matching results, which are saved into a recognition similarity matrix; and obtains the face attendance result from the quality analysis matrix and the recognition similarity matrix. By combining image quality analysis with face feature clustering to recognize multi-person, multi-frame images, the method effectively improves the accuracy of face recognition during face attendance.
Description
Technical field
The present invention relates to the field of computer image processing, and more particularly to a non-perceptual face attendance method, apparatus, device and computer-readable storage medium.
Background art
The traditional attendance mode is fingerprint clocking, which inevitably causes congestion during peak hours because of queuing. As artificial intelligence technology keeps advancing, face-based attendance has become a reality. The key technology involved in face attendance is face image matching.
At present, when an ordinary camera is used for attendance, several people often appear in front of the camera at the same time, and the camera captures every frame from the moment a person enters its field of view until the person leaves it. Because the person may lower or turn the head during attendance, multiple recognition results may be produced; in addition, changes in the weather or the person wearing or removing glasses or a hat can reduce the recognition rate and diversify the recognition results. Therefore, how to improve the accuracy of face recognition during ordinary-camera attendance has become an urgent technical problem in the field of face attendance.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a non-perceptual face attendance method, apparatus, device and storage medium that combine image quality to effectively improve the accuracy of face recognition during face attendance, with low operation and maintenance cost.
In a first aspect, an embodiment of the invention provides a non-perceptual face attendance method, comprising the following steps:
performing facial landmark localization on received consecutive frames of images, and extracting the face feature in each frame through a preset convolutional neural network model;
saving the face feature corresponding to each frame to a preset face feature list;
clustering the face feature list to obtain a face feature matrix, wherein each row of the face feature matrix contains the face features of the same person in images of different frames;
performing face image quality analysis on each face feature of each row of the face feature matrix to obtain an image quality value for each face feature of each row, and constructing a quality analysis matrix;
matching each face feature of each row of the face feature matrix against the original face features stored in a preset original face database to obtain a matching result for each face feature in each row, and saving the matching results into a pre-built recognition similarity matrix;
obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix.
Preferably, the matching result includes name information, a similarity value and face image information.
Preferably, obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix specifically includes:
obtaining a face comprehensive analysis matrix according to the quality analysis matrix and the recognition similarity matrix, wherein the face recognition results in each row of the face comprehensive analysis matrix correspond one-to-one with the matching results in the corresponding row of the recognition similarity matrix; each face recognition result includes name information, a face comprehensive score and face image information; and the face comprehensive score is the product of the image quality value in the quality analysis matrix and the similarity in the recognition similarity matrix;
sorting the face recognition results of each row of the face comprehensive analysis matrix by face comprehensive score, and obtaining the face recognition results whose comprehensive score is greater than a preset first score threshold;
taking the name information in those face recognition results as the first target attendance object, and querying the historical broadcast record;
when the first target attendance object has not been broadcast, recording the attendance of the first target attendance object, sending its face recognition result to a voice broadcast module for announcement, and sending its face recognition result to a display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than a preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
Preferably, obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix further includes:
obtaining the face recognition results whose face comprehensive score is not greater than the preset first score threshold as first face screening results;
determining a second target attendance object according to the first face screening results, their face comprehensive scores, and the length of the row of the face comprehensive analysis matrix in which they are located;
when the second target attendance object has not been broadcast, recording the attendance of the second target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than the preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
Preferably, determining the second target attendance object according to the first face screening results, their face comprehensive scores and the length of the row of the face comprehensive analysis matrix in which they are located specifically includes:
for any first face screening result, calculating the ratio of the number of face recognition results in its row of the face comprehensive analysis matrix whose comprehensive score is greater than a preset second score threshold to the total number of face recognition results in that row;
when the ratio is greater than a preset third score threshold, taking the name information corresponding to that first face screening result as the second target attendance object.
Preferably, matching each face feature of each row of the face feature matrix against the original face features stored in the preset original face database to obtain the matching result of each face feature in each row specifically includes:
using a cosine similarity algorithm to compute the similarity between any face feature of each row of the face feature matrix and each original face feature stored in the preset original face database, and obtaining the maximum similarity together with its corresponding original face feature and name information, so as to generate the matching result of that face feature.
Preferably, performing facial landmark localization on the received consecutive frames of images specifically includes:
performing facial landmark localization on the consecutive frames of images using the dlib facial landmark detection algorithm.
In a second aspect, an embodiment of the invention provides a non-perceptual face attendance apparatus, comprising:
a face feature extraction module, for performing facial landmark localization on received consecutive frames of images and extracting the face feature in each frame through a preset convolutional neural network model;
a face feature list construction module, for saving the face feature corresponding to each frame to a preset face feature list;
a face feature clustering module, for clustering the face feature list to obtain a face feature matrix, wherein each row of the face feature matrix contains the face features of the same person in images of different frames;
an image quality analysis module, for performing face image quality analysis on each face feature of each row of the face feature matrix to obtain an image quality value for each face feature of each row, and constructing a quality analysis matrix;
a face feature matching module, for matching each face feature of each row of the face feature matrix against the original face features stored in a preset original face database to obtain a matching result for each face feature in each row, and saving the matching results into a pre-built recognition similarity matrix;
a face attendance module, for obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix.
In a third aspect, an embodiment of the invention provides a non-perceptual face attendance device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the non-perceptual face attendance method of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium containing a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the non-perceptual face attendance method of the first aspect.
The above embodiments have the following beneficial effects:
Facial landmark localization is performed on the received consecutive frames of images, and the face feature in each frame is extracted through a preset convolutional neural network model; the face feature corresponding to each frame is saved to a preset face feature list; the face feature list is clustered to obtain a face feature matrix, in which each row contains the face features of the same person in images of different frames. By recording in the face feature list the face feature vectors of different people in different frames of the images to be recognized and clustering them, the resulting face feature matrix holds, in each row, the feature vectors of the same person across different frames, so that multiple people can be recognized at the same time, realizing multi-person, multi-frame face recognition. Face image quality analysis is performed on each face feature of each row of the face feature matrix to obtain an image quality value for each face feature, and a quality analysis matrix is constructed, so that high-quality images are selected from the consecutive frames. Each face feature of each row of the face feature matrix is matched against the original face features stored in the preset original face database to obtain a matching result for each face feature, and the matching results are saved into a pre-built recognition similarity matrix; the face attendance result is then obtained from the quality analysis matrix and the recognition similarity matrix. Combining image quality analysis with face feature clustering to recognize multi-person, multi-frame images effectively improves the accuracy of face recognition during face attendance, greatly improves attendance efficiency, and keeps the operation and maintenance cost low.
Brief description of the drawings
In order to explain the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the non-perceptual face attendance method provided by the first embodiment of the invention;
Fig. 2 is an overall flow diagram of the face attendance method provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the face attendance system provided by an embodiment of the invention;
Fig. 4 is a structural diagram of the non-perceptual face attendance apparatus provided by the second embodiment of the invention;
Fig. 5 is a structural diagram of the non-perceptual face attendance device provided by the third embodiment of the invention.
Detailed description of the embodiments
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Figs. 1 and 2, Fig. 1 is a flow diagram of the non-perceptual face attendance method provided by an embodiment of the invention, and Fig. 2 is an overall flow diagram of the face attendance method provided by an embodiment of the invention. The first embodiment of the invention provides a non-perceptual face attendance method, which can be executed by a non-perceptual face attendance device and includes the following steps:
S11: performing facial landmark localization on the received consecutive frames of images, and extracting the face feature in each frame through a preset convolutional neural network model.
In the embodiment of the invention, the non-perceptual face attendance device may be a computing device such as a computer, mobile phone, tablet, access control device, laptop or server, and the non-perceptual face attendance method may be integrated as a functional module on the non-perceptual face attendance device and executed by it.
In the embodiment of the invention, the non-perceptual face attendance device receives consecutive frames of image data. It should be noted that the embodiment does not restrict how the target face images are acquired: they may be captured by a camera built into the device and installed at the office entrance, or received over a wired or wireless connection from the network, from a camera installed at the office entrance, or from other equipment.
It should also be noted that the embodiment does not restrict the landmark localization method for the face images to be recognized. For example, the positions of the facial landmarks (eyes, eyebrows, nose, mouth, face outer contour) in the images may be determined by the ASM (Active Shape Model) algorithm, the AAM (Active Appearance Model) algorithm or the dlib face detection algorithm, and a face feature training set is established. The face feature training set is input to a convolutional neural network (CNN) model, which outputs the face feature of each frame to be recognized. Preferably, the consecutive frames of image data are grouped in order, each group containing N consecutive frames, N > 1; facial landmark localization is performed on each group, and the face feature in each frame is extracted through the preset convolutional neural network model. In the embodiment of the invention, N = 10, and accordingly the length of the face feature list is 10. The received consecutive frames are analyzed group by group, 10 frames at a time. It should be noted that the consecutive image data received by the face attendance device is video data; by dividing the video into fixed-length segments and using face image quality analysis to select the target frames of each segment as the face images to be recognized, the amount of computation can be reduced and the recognition efficiency improved.
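For illustration only, the following Python sketch shows how, under the dlib-based option mentioned above, landmarks could be located and a per-face feature vector extracted for each frame of a 10-frame group. The dlib model file names and the `frames` variable are assumptions used for the sketch, not values fixed by the patent.

```python
import dlib

# Assumed: the standard public dlib model files are available locally.
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def extract_face_features(frames):
    """frames: list of RGB uint8 numpy arrays (one group of N=10 consecutive frames).
    Returns a face feature list: one entry per frame, each entry a list of
    128-d descriptors (one per face detected in that frame)."""
    feature_list = []
    for frame in frames:
        per_frame = []
        for det in detector(frame, 1):                       # face detection
            shape = shape_predictor(frame, det)              # facial landmark localization
            desc = face_encoder.compute_face_descriptor(frame, shape)  # CNN face feature
            per_frame.append(list(desc))
        feature_list.append(per_frame)
    return feature_list
```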
S12: saving the face feature corresponding to each frame to a preset face feature list.
S13: clustering the face feature list to obtain a face feature matrix, wherein each row of the face feature matrix contains the face features of the same person in images of different frames.
S14: performing face image quality analysis on each face feature of each row of the face feature matrix to obtain an image quality value for each face feature of each row, and constructing a quality analysis matrix.
In the embodiment of the invention, the face feature vectors in the face feature list are clustered with the CW (Chinese Whispers) clustering algorithm. The CW algorithm builds an undirected graph in which each face is a node and the similarity between two faces is the weight of the edge between the corresponding nodes; classes are found by iteratively accumulating, for each node, the edge weights towards each candidate class and assigning the node to the heaviest class. The embodiment records in the face feature list the face features of different people in different frames of the images to be recognized and clusters them, assigning the features of different frames and different people to a two-dimensional face feature matrix (face_matric). Each row of the face feature matrix contains the face features of the same person across the frames to be recognized, and different rows correspond to different attendance subjects to be recognized, so that multiple people can be recognized at the same time, realizing multi-person, multi-frame face recognition (a sketch follows below).
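As a minimal sketch only: the feature list could be grouped by person with dlib's built-in Chinese Whispers implementation. The 0.5 distance threshold and the flattened input layout are assumptions rather than values fixed by the patent.

```python
import dlib

def cluster_to_face_matrix(feature_list, threshold=0.5):
    """feature_list: per-frame lists of 128-d descriptors, as produced above.
    Returns a face feature matrix: one row per person, holding that person's
    features from the different frames in which they appear."""
    flat = [dlib.vector(desc) for per_frame in feature_list for desc in per_frame]
    labels = dlib.chinese_whispers_clustering(flat, threshold)

    face_matrix = {}
    for label, desc in zip(labels, flat):
        face_matrix.setdefault(label, []).append(list(desc))
    # Each value (row) now holds the same person's features across different frames.
    return list(face_matrix.values())
```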
Furthermore, by performing face image quality analysis on each face feature of each row of the face feature matrix, the key frame images can be located quickly. Compared with the existing approach of matching faces against all the frames captured from the moment a person enters the camera's field of view until the person leaves it, the invention significantly improves the overall running efficiency of the program and the accuracy of key image recognition.
Further, performing face image quality analysis on each face feature of each row of the face feature matrix to obtain the image quality value corresponding to each face feature of each row specifically includes:
calculating the face swing amplitude, face image sharpness, face image brightness and face image size value of each face feature of each row of the face feature matrix;
calculating the image quality value of each face feature according to preset weights and its face swing amplitude, face image sharpness, face image brightness and face image size value.
In the embodiment of the invention, the face swing amplitude r1 (preset weight w1 = 1), face image sharpness q1 (preset weight w2 = 0.8), face image brightness c1 (preset weight w3 = 0.6) and face image size value s1 (preset weight w4 = 0.9) of each face feature are calculated separately and combined by weighting to obtain the image quality value t of the face feature:
t = (r1 × w1 + q1 × w2 + c1 × w3 + s1 × w4) / (w1 + w2 + w3 + w4).
In this way, an optimal face image can be selected from the consecutive video frames for recognition, improving the accuracy of the recognition result.
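A minimal sketch of the weighted quality score above. The functions that measure swing amplitude, sharpness, brightness and size are left as an assumed placeholder callable, since the patent does not fix how each raw term is computed.

```python
# Preset weights from the embodiment: swing 1.0, sharpness 0.8, brightness 0.6, size 0.9.
WEIGHTS = {"swing": 1.0, "sharpness": 0.8, "brightness": 0.6, "size": 0.9}

def image_quality_value(r1, q1, c1, s1, w=WEIGHTS):
    """t = (r1*w1 + q1*w2 + c1*w3 + s1*w4) / (w1 + w2 + w3 + w4)."""
    num = r1 * w["swing"] + q1 * w["sharpness"] + c1 * w["brightness"] + s1 * w["size"]
    return num / sum(w.values())

def build_quality_matrix(face_matrix, measure):
    """measure(feature) -> (r1, q1, c1, s1): a hypothetical callable supplied by the caller.
    Returns a quality analysis matrix with the same shape as the face feature matrix."""
    return [[image_quality_value(*measure(f)) for f in row] for row in face_matrix]
```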
S15: matching each face feature of each row of the face feature matrix against the original face features stored in the preset original face database to obtain the matching result of each face feature in each row, and saving the matching results into a pre-built recognition similarity matrix.
In the embodiment of the invention, before the face matching is performed, the original face images of the attendance subjects are collected and an original face database is established, in which each person corresponds to at least one original face image and personnel attribute information; the personnel attribute information includes name information, position information, and so on.
S16: obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix.
In the embodiment of the invention, face recognition is performed on multi-person, multi-frame images by combining image quality analysis with face feature clustering, which effectively improves the accuracy of face recognition during face attendance, greatly improves attendance efficiency, and keeps the operation and maintenance cost low.
In an alternative embodiment, the matching result includes name information, a similarity value and face image information. The face image information includes the original face image and/or the face feature in the face feature matrix.
In an alternative embodiment, S15, matching each face feature of each row of the face feature matrix against the original face features stored in the preset original face database to obtain the matching result of each face feature in each row, specifically includes:
using a cosine similarity algorithm to compute the similarity between any face feature of each row of the face feature matrix and each original face feature stored in the preset original face database, and obtaining the maximum similarity together with its corresponding original face feature and name information, so as to generate the matching result of that face feature.
In the embodiment of the invention, the specific matching process is as follows: the cosine distance between each face feature of each row of the face feature matrix and all original face images in the original face database is calculated, and the entry with the largest value is taken as the face matching result of the corresponding frame in that row; the result is stored in the recognition similarity matrix. Each row of the recognition similarity matrix corresponds to a different attendance subject to be recognized, and each value in a row stores the matching result of one frame against the original face database, including name, similarity, face image information, etc.
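An illustrative sketch of the cosine-similarity matching step, assuming the database has been pre-encoded into feature vectors; the argument names and the dictionary layout of the matching result are assumptions for the sketch.

```python
import numpy as np

def match_against_database(face_matrix, db_features, db_names):
    """db_features: (M, 128) array of original face features; db_names: list of M names.
    Returns a recognition similarity matrix with the same shape as the face feature
    matrix; each entry is the best (name, similarity) match for that frame's feature."""
    db = np.asarray(db_features, dtype=float)
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)

    similarity_matrix = []
    for row in face_matrix:
        matched_row = []
        for feat in row:
            v = np.asarray(feat, dtype=float)
            sims = db_norm @ (v / np.linalg.norm(v))   # cosine similarity to every original face
            best = int(np.argmax(sims))
            matched_row.append({"name": db_names[best], "similarity": float(sims[best])})
        similarity_matrix.append(matched_row)
    return similarity_matrix
```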
In an alternative embodiment, S16, obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix, specifically includes:
obtaining a face comprehensive analysis matrix according to the quality analysis matrix and the recognition similarity matrix, wherein the face recognition results in each row of the face comprehensive analysis matrix correspond one-to-one with the matching results in the corresponding row of the recognition similarity matrix; each face recognition result includes name information, a face comprehensive score and face image information; and the face comprehensive score is the product of the image quality value in the quality analysis matrix and the similarity in the recognition similarity matrix (see the sketch after this list);
sorting the face recognition results of each row of the face comprehensive analysis matrix by face comprehensive score, and obtaining the face recognition results whose comprehensive score is greater than a preset first score threshold;
taking the name information in those face recognition results as the first target attendance object, and querying the historical broadcast record;
when the first target attendance object has not been broadcast, recording the attendance of the first target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than a preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
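As a sketch only: the face comprehensive analysis matrix is the element-wise combination of the two matrices built earlier, with each entry keeping the matched name and the quality-weighted score. The dictionary layout follows the sketches above and is an assumption.

```python
def build_comprehensive_matrix(quality_matrix, similarity_matrix):
    """Element-wise combination: comprehensive score = image quality value * similarity."""
    comprehensive = []
    for q_row, s_row in zip(quality_matrix, similarity_matrix):
        comprehensive.append([
            {"name": s["name"], "score": q * s["similarity"]}
            for q, s in zip(q_row, s_row)
        ])
    return comprehensive
```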
In this example, the preset time threshold is 30 s. The results of each row of the face comprehensive analysis matrix are sorted in descending order of face comprehensive score, and the matching result whose comprehensive score is greater than the preset first score threshold (e.g. first score threshold = 0.9, used as the first judgment condition) is taken as the final recognition result. If the corresponding attendance subject has not been broadcast, or was broadcast more than 30 s ago, the attendance is recorded and announced by voice, and the result is simultaneously shown by the display module of the web terminal; if the attendance subject was broadcast less than 30 s ago, processing skips to the next group of 10 frames. A schematic block diagram of the face attendance system is shown in Fig. 2.
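A minimal sketch of the first judgment condition and the broadcast cooldown described above, using the example values (first score threshold 0.9, time threshold 30 s). The `record_attendance`, `broadcast` and `display` callables are hypothetical stand-ins for the voice broadcast and display modules.

```python
import time

FIRST_SCORE_THRESHOLD = 0.9   # first judgment condition
TIME_THRESHOLD = 30.0         # seconds between repeated broadcasts

last_broadcast = {}           # name -> timestamp of the last broadcast

def process_row(row, record_attendance, broadcast, display):
    """row: one row of the face comprehensive analysis matrix (one attendance subject)."""
    best = max(row, key=lambda r: r["score"])
    if best["score"] <= FIRST_SCORE_THRESHOLD:
        return False                      # fall through to the second judgment condition
    name, now = best["name"], time.time()
    if name not in last_broadcast or now - last_broadcast[name] > TIME_THRESHOLD:
        record_attendance(name)
        broadcast(best)                   # voice broadcast module
        display(best)                     # web display module
        last_broadcast[name] = now
    return True                           # otherwise: skip and analyse the next group of frames
```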
In an alternative embodiment, S16, obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix, further includes:
obtaining the face recognition results whose face comprehensive score is not greater than the preset first score threshold as first face screening results;
determining a second target attendance object according to the first face screening results, their face comprehensive scores, and the length of the row of the face comprehensive analysis matrix in which they are located;
when the second target attendance object has not been broadcast, recording the attendance of the second target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than the preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
In an alternative embodiment, determining the second target attendance object according to the first face screening results, their face comprehensive scores and the length of the row of the face comprehensive analysis matrix in which they are located specifically includes:
for any first face screening result, calculating the ratio of the number of face recognition results in its row of the face comprehensive analysis matrix whose comprehensive score is greater than a preset second score threshold to the total number of face recognition results in that row;
when the ratio is greater than a preset third score threshold, taking the name information corresponding to that first face screening result as the second target attendance object.
In the embodiment of the invention, when the face comprehensive score is not greater than 0.9, a second condition judgment is carried out to avoid losing information. Specifically, for a row of the face comprehensive analysis matrix whose comprehensive scores are not greater than 0.9, the recognized name information of that row is counted; if the number of face recognition results in that row whose comprehensive score is greater than the preset second score threshold (e.g. second score threshold = 0.85), divided by the length of the row, is greater than 0.8 (the second judgment condition), the condition is met. If this person has not been broadcast, or was broadcast more than 30 s ago, the attendance is recorded and announced by voice, and the recognized face result is shown at the web terminal; if the person was broadcast less than 30 s ago, the next 10 frames are skipped. By combining the judgment of the probability that multiple frames are recognized as the same person, the embodiment further improves the attendance effect and avoids missing people during face recognition (a sketch follows below).
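A sketch of the second judgment condition under the example values above (second score threshold 0.85, ratio threshold 0.8); the row and dictionary layout follows the earlier sketches and is assumed.

```python
SECOND_SCORE_THRESHOLD = 0.85   # per-frame score threshold for the second condition
THIRD_SCORE_THRESHOLD = 0.8     # required ratio of qualifying frames in the row

def second_condition(row):
    """row: one row of the face comprehensive analysis matrix whose best score <= 0.9.
    Returns the name to check in as the second target attendance object, or None."""
    qualifying = [r for r in row if r["score"] > SECOND_SCORE_THRESHOLD]
    if len(qualifying) / len(row) > THIRD_SCORE_THRESHOLD:
        # Most frames in the row agree, so treat the row as a confident identification.
        return max(qualifying, key=lambda r: r["score"])["name"]
    return None
```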
In an alternative embodiment, performing facial landmark localization on the received consecutive frames of images specifically includes:
performing facial landmark localization on the consecutive frames of images using the dlib facial landmark detection algorithm.
The above embodiments have the following beneficial effects:
1. Truly non-perceptual attendance is realized: the person being checked in needs no operation at all, which relieves peak-hour congestion.
2. It can be realized with an ordinary camera, greatly saving cost.
3. Compared with ordinary one-to-one attendance, it is one-to-many attendance, removing the unnecessary step of requiring the user to authenticate first.
4. Combining face quality analysis with recognition similarity significantly improves the attendance effect.
5. Combining clustering with multi-condition judgment to recognize faces in multi-person, multi-frame situations greatly improves recognition accuracy.
Referring to Fig. 4, the second embodiment of the invention provides a non-perceptual face attendance apparatus, comprising:
a face feature extraction module 1, for performing facial landmark localization on received consecutive frames of images and extracting the face feature in each frame through a preset convolutional neural network model;
a face feature list construction module 2, for saving the face feature corresponding to each frame to a preset face feature list;
a face feature clustering module 3, for clustering the face feature list to obtain a face feature matrix, wherein each row of the face feature matrix contains the face features of the same person in images of different frames;
an image quality analysis module 4, for performing face image quality analysis on each face feature of each row of the face feature matrix to obtain an image quality value for each face feature of each row, and constructing a quality analysis matrix;
a face feature matching module 5, for matching each face feature of each row of the face feature matrix against the original face features stored in a preset original face database to obtain a matching result for each face feature in each row, and saving the matching results into a pre-built recognition similarity matrix;
a face attendance module 6, for obtaining the face attendance result according to the quality analysis matrix and the recognition similarity matrix.
In an alternative embodiment, the matching result includes name information, a similarity value and face image information.
In an alternative embodiment, the face attendance module 6 includes:
a face comprehensive analysis matrix construction unit, for obtaining a face comprehensive analysis matrix according to the quality analysis matrix and the recognition similarity matrix, wherein the face recognition results in each row of the face comprehensive analysis matrix correspond one-to-one with the matching results in the corresponding row of the recognition similarity matrix; each face recognition result includes name information, a face comprehensive score and face image information; and the face comprehensive score is the product of the image quality value in the quality analysis matrix and the similarity in the recognition similarity matrix;
a first face recognition result determination unit, for sorting the face recognition results of each row of the face comprehensive analysis matrix by face comprehensive score and obtaining the face recognition results whose comprehensive score is greater than a preset first score threshold;
a first target attendance object determination unit, for taking the name information in those face recognition results as the first target attendance object and querying the historical broadcast record;
a first data transmission unit, for, when the first target attendance object has not been broadcast, recording the attendance of the first target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
a second data transmission unit, for, when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than a preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
a third data transmission unit, for, when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
In an alternative embodiment, the face attendance module 6 further includes:
a second face recognition result determination unit, for obtaining the face recognition results whose face comprehensive score is not greater than the preset first score threshold as first face screening results;
a second target attendance object determination unit, for determining a second target attendance object according to the first face screening results, their face comprehensive scores, and the length of the row of the face comprehensive analysis matrix in which they are located;
a fourth data transmission unit, for, when the second target attendance object has not been broadcast, recording the attendance of the second target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
a fifth data transmission unit, for, when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than the preset time threshold, recording the attendance of the target attendance object, sending its face recognition result to the voice broadcast module for announcement, and sending its face recognition result to the display module for display;
a sixth data transmission unit, for, when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, sorting and analyzing the similarities of the next row of the recognition similarity matrix.
In an alternative embodiment, the second target attendance object determination unit is configured to calculate, for any first face screening result, the ratio of the number of face recognition results in its row of the face comprehensive analysis matrix whose comprehensive score is greater than a preset second score threshold to the total number of face recognition results in that row, and, when the ratio is greater than a preset third score threshold, to take the name information corresponding to that first face screening result as the second target attendance object.
In an alternative embodiment, the face feature matching module 5 is configured to use a cosine similarity algorithm to compute the similarity between any face feature of each row of the face feature matrix and each original face feature stored in the preset original face database, and to obtain the maximum similarity together with its corresponding original face feature and name information, so as to generate the matching result of that face feature.
In an alternative embodiment, the face feature extraction module 1 is configured to perform facial landmark localization on the multi-frame face images to be recognized using the dlib facial landmark detection algorithm.
It should be noted that the apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided by the invention, the connection relationship between modules indicates that there is a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
Referring to Fig. 5, which is a schematic diagram of the non-perceptual face attendance device provided by the third embodiment of the invention. As shown in Fig. 5, the non-perceptual face attendance device includes: at least one processor 11, such as a CPU; at least one network interface 14 or other user interface 13; a memory 15; and at least one communication bus 12 for realizing the connection and communication between these components. The user interface 13 may optionally include a USB interface, other standard interfaces and wired interfaces. The network interface 14 may optionally include a Wi-Fi interface and other wireless interfaces. The memory 15 may include a high-speed RAM memory and may also include non-volatile memory, for example at least one disk memory. The memory 15 may optionally include at least one storage device located remotely from the processor 11.
In some embodiments, the memory 15 stores the following elements, executable modules or data structures, or a subset or superset of them:
an operating system 151, which includes various system programs for realizing various basic services and processing hardware-based tasks;
a program 152.
Specifically, the processor 11 calls the program 152 stored in the memory 15 to execute the non-perceptual face attendance method described in the above embodiments, for example step S11 shown in Fig. 1. Alternatively, when executing the computer program, the processor realizes the functions of the modules/units in the above apparatus embodiments, such as the face feature extraction module.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the non-perceptual face attendance device.
The non-perceptual face attendance device may be a computing device such as a desktop computer, notebook, palmtop computer or cloud server. The non-perceptual face attendance device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the schematic diagram is only an example of a non-perceptual face attendance device and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or have different components.
The processor 11 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 11 is the control center of the non-perceptual face attendance device and uses various interfaces and lines to connect the various parts of the whole device.
The memory 15 may be used to store the computer program and/or modules. The processor 11 realizes the various functions of the non-perceptual face attendance device by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory 15 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phone book, etc.). In addition, the memory 15 may include a high-speed random access memory and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk memory device, a flash memory device or another volatile solid-state memory device.
If the modules/units integrated in the non-perceptual face attendance device are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention realizes all or part of the processes in the methods of the above embodiments, which may also be completed by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the above method embodiments can be realized. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The fourth embodiment of the invention provides a computer-readable storage medium containing a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the non-perceptual face attendance method of the first embodiment.
The above is a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.
Claims (10)
1. a kind of noninductive face Work attendance method characterized by comprising
Facial modeling is carried out to the continuous frame number image received, and by preset convolutional neural networks model, is mentioned
Take out the face characteristic in every frame image;
The corresponding face characteristic of every frame image is saved to preset face characteristic list;
The face characteristic list is clustered, face characteristic matrix is obtained;Wherein, every a line in the face characteristic matrix
For the corresponding face characteristic of image of the different frame numbers of same people;
Quality of human face image analysis is carried out to each face characteristic of every a line in the face characteristic matrix, obtains the people
The corresponding image quality value of each face characteristic of every a line in face eigenmatrix, and construct quality analysis matrix;
The original that will be stored in each face characteristic of every a line in the face characteristic matrix and preset original face database
Beginning face characteristic is matched, and obtains the matching result of each face characteristic in every a line, and by the matching result save to
In the identification similarity matrix constructed in advance;
According to the quality analysis matrix and the identification similarity matrix, face checking-in result is obtained.
2. noninductive face Work attendance method as described in claim 1, which is characterized in that the matching result include name information,
Similarity, human face image information.
3. The noninductive face attendance method according to claim 2, wherein obtaining the face attendance result according to the quality analysis matrix and the identification similarity matrix specifically comprises:
obtaining a face comprehensive analysis matrix according to the quality analysis matrix and the identification similarity matrix, wherein the face recognition results in each row of the face comprehensive analysis matrix correspond one-to-one to the matching results in the corresponding row of the identification similarity matrix; the face recognition result includes name information, a face comprehensive score value, and face image information; and the face comprehensive score value is the product of the image quality value in the quality analysis matrix and the similarity in the identification similarity matrix;
sorting the face recognition results in each row of the face comprehensive analysis matrix by face comprehensive score value, and obtaining the face recognition results whose face comprehensive score values are greater than a preset first score threshold;
taking the name information in the face recognition results whose face comprehensive score values are greater than the preset first score threshold as a first target attendance object, and querying a history broadcast record;
when the first target attendance object has not been broadcast, recording the attendance of the first target attendance object, sending the face recognition result corresponding to the first target attendance object to a voice broadcast module for broadcasting, and sending the face recognition result corresponding to the first target attendance object to a display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than a preset time threshold, recording the attendance of the target attendance object, sending the face recognition result corresponding to the first target attendance object to the voice broadcast module for broadcasting, and sending the face recognition result corresponding to the first target attendance object to the display module for display;
when the first target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, performing sorting analysis on the similarities of the next row in the identification similarity matrix.
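For illustration only: a sketch, under the same assumptions as above, of the per-row decision logic of claim 3. The comprehensive score is formed as image quality value × similarity; the function name `pick_first_target`, the 0.75 first score threshold, and the 300-second time threshold are placeholders, since the claim leaves the concrete threshold values unspecified.

```python
import time
from typing import Dict, List, Optional, Tuple


def pick_first_target(row: List[Tuple[str, float, float]],
                      history_broadcast: Dict[str, float],
                      first_score_threshold: float = 0.75,
                      time_threshold_s: float = 300.0) -> Optional[str]:
    """Decide for one row of the face comprehensive analysis matrix.

    Each entry is (name, image_quality_value, similarity); the face
    comprehensive score value is their product, as recited in claim 3.
    """
    # Sort the row's recognition results by comprehensive score, highest first.
    scored = sorted(((quality * similarity, name)
                     for name, quality, similarity in row), reverse=True)
    candidates = [(score, name) for score, name in scored
                  if score > first_score_threshold]
    if not candidates:
        return None                        # handled by the claim 4/5 branch

    _, name = candidates[0]                # first target attendance object
    last_broadcast = history_broadcast.get(name)
    now = time.time()
    if last_broadcast is None or now - last_broadcast > time_threshold_s:
        history_broadcast[name] = now      # record attendance and (re)broadcast
        return name                        # caller sends result to voice/display
    return None                            # broadcast too recently: next row
```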
4. The noninductive face attendance method according to claim 3, wherein obtaining the face attendance result according to the quality analysis matrix and the identification similarity matrix further comprises:
obtaining the face recognition results whose face comprehensive score values are not greater than the preset first score threshold, as first face screening results;
determining a second target attendance object according to the first face screening results, the face comprehensive score values of the first face screening results, and the length of the row of the face comprehensive analysis matrix in which the first face screening results are located;
when the second target attendance object has not been broadcast, recording the attendance of the second target attendance object, sending the face recognition result corresponding to the second target attendance object to the voice broadcast module for broadcasting, and sending the face recognition result corresponding to the second target attendance object to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is greater than the preset time threshold, recording the attendance of the target attendance object, sending the face recognition result corresponding to the second target attendance object to the voice broadcast module for broadcasting, and sending the face recognition result corresponding to the second target attendance object to the display module for display;
when the second target attendance object has been broadcast and the difference between the broadcast time and the current time is less than the preset time threshold, performing sorting analysis on the similarities of the next row in the identification similarity matrix.
5. The noninductive face attendance method according to claim 4, wherein determining the second target attendance object according to the first face screening results, the face comprehensive score values of the first face screening results, and the length of the row of the face comprehensive analysis matrix in which the first face screening results are located specifically comprises:
for any one first face screening result, calculating the ratio of the number of face recognition results whose face comprehensive score values are greater than a preset second score threshold in the row of the face comprehensive analysis matrix in which the any one first face screening result is located, to the total number of face recognition results in that row;
when the ratio is greater than a preset third score threshold, taking the name information corresponding to the any one first face screening result as the second target attendance object.
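For illustration only: a sketch of the ratio test of claim 5, continuing the assumptions above. The 0.5 second score threshold and 0.6 third score threshold are placeholder values; the claim fixes only the comparison structure, not the thresholds.

```python
from typing import List, Optional


def pick_second_target(name: str,
                       row_scores: List[float],
                       second_score_threshold: float = 0.5,
                       third_score_threshold: float = 0.6) -> Optional[str]:
    """For a row whose best result did not pass the first threshold, count
    how many comprehensive scores in that row still exceed the (lower)
    second threshold, and accept the person if the ratio to the row length
    exceeds the third threshold (claim 5)."""
    if not row_scores:
        return None
    above = sum(1 for score in row_scores if score > second_score_threshold)
    ratio = above / len(row_scores)        # len(row_scores) = length of the row
    return name if ratio > third_score_threshold else None
```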
6. The noninductive face attendance method according to claim 1, wherein matching each face feature in each row of the face feature matrix with the original face features stored in the preset original face database to obtain the matching result for each face feature in each row specifically comprises:
calculating, by a cosine similarity algorithm, the similarity between any one face feature in each row of the face feature matrix and each original face feature stored in the preset original face database, and obtaining the maximum similarity together with its corresponding original face feature and name information, so as to generate the matching result of the any one face feature.
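For illustration only: a sketch of the cosine-similarity matching of claim 6, assuming the face features and database entries are NumPy vectors; the function name `match_feature` and the dict-based original face database are assumptions of the sketch.

```python
from typing import Dict, Tuple

import numpy as np


def match_feature(feature: np.ndarray,
                  original_db: Dict[str, np.ndarray]) -> Tuple[str, float]:
    """Compute the cosine similarity between one face feature and every
    original face feature in the database; keep the maximum similarity
    together with the name information it belongs to (claim 6)."""
    feature = feature / np.linalg.norm(feature)
    best_name, best_similarity = "", -1.0
    for name, original in original_db.items():
        original = original / np.linalg.norm(original)
        similarity = float(np.dot(feature, original))  # cosine similarity
        if similarity > best_similarity:
            best_name, best_similarity = name, similarity
    return best_name, best_similarity
```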
7. The noninductive face attendance method according to claim 1, wherein performing facial feature point localization on the received consecutive frames of images specifically comprises:
performing facial feature point localization on the consecutive frames of images using the dlib facial feature point detection algorithm.
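For illustration only: claim 7 names the dlib facial feature point detection algorithm; the sketch below assumes the dlib Python bindings and the standard 68-point model file shape_predictor_68_face_landmarks.dat, which is distributed separately from dlib.

```python
from typing import List

import dlib
import numpy as np

# Standard dlib components; the 68-point model file is assumed to be
# downloaded separately and placed next to this script.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def locate_feature_points(gray_frame: np.ndarray) -> List[np.ndarray]:
    """Facial feature point localization on one grayscale frame.

    Returns one (68, 2) array of (x, y) landmark coordinates per detected face.
    """
    faces = detector(gray_frame, 1)            # upsample once for small faces
    all_points = []
    for rect in faces:
        shape = predictor(gray_frame, rect)
        points = np.array([(shape.part(i).x, shape.part(i).y)
                           for i in range(shape.num_parts)])
        all_points.append(points)
    return all_points
```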
8. A noninductive face attendance device, characterized by comprising:
a face feature extraction module, configured to perform facial feature point localization on received consecutive frames of images and to extract the face feature in each frame of image through a preset convolutional neural network model;
a face feature list construction module, configured to save the face feature corresponding to each frame of image into a preset face feature list;
a face feature clustering module, configured to cluster the face feature list to obtain a face feature matrix, wherein each row in the face feature matrix contains the face features corresponding to images of different frames of the same person;
an image quality analysis module, configured to perform face image quality analysis on each face feature in each row of the face feature matrix, obtain the image quality value corresponding to each face feature in each row of the face feature matrix, and construct a quality analysis matrix;
a face feature matching module, configured to match each face feature in each row of the face feature matrix with the original face features stored in a preset original face database, obtain the matching result of each face feature in each row, and save the matching results into a pre-constructed identification similarity matrix;
a face attendance module, configured to obtain a face attendance result according to the quality analysis matrix and the identification similarity matrix.
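For illustration only: a hypothetical skeleton showing how the six modules of claim 8 could be composed into one device object. The class name, parameter names, and callables are placeholders; each callable could be one of the sketches above or any other implementation of the corresponding step.

```python
from typing import Callable, List, Sequence


class NoninductiveFaceAttendanceDevice:
    """Composition of the six modules listed in claim 8 (illustrative only)."""

    def __init__(self,
                 extractor: Callable,          # face feature extraction module
                 list_builder: Callable,       # face feature list construction module
                 clusterer: Callable,          # face feature clustering module
                 quality_analyzer: Callable,   # image quality analysis module
                 matcher: Callable,            # face feature matching module
                 attendance: Callable):        # face attendance module
        self.extractor = extractor
        self.list_builder = list_builder
        self.clusterer = clusterer
        self.quality_analyzer = quality_analyzer
        self.matcher = matcher
        self.attendance = attendance

    def process(self, frames: Sequence) -> object:
        """Run the claim 1 pipeline over the received consecutive frames."""
        features: List = [self.extractor(frame) for frame in frames]
        feature_list = self.list_builder(features)
        feature_matrix = self.clusterer(feature_list)
        quality_matrix = self.quality_analyzer(feature_matrix)
        similarity_matrix = self.matcher(feature_matrix)
        return self.attendance(quality_matrix, similarity_matrix)
```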
9. Noninductive face attendance equipment, characterized by comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the noninductive face attendance method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a stored computer program, wherein, when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the noninductive face attendance method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910239164.2A CN110119673B (en) | 2019-03-27 | 2019-03-27 | Non-inductive face attendance checking method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910239164.2A CN110119673B (en) | 2019-03-27 | 2019-03-27 | Non-inductive face attendance checking method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110119673A (en) | 2019-08-13 |
CN110119673B CN110119673B (en) | 2021-01-12 |
Family
ID=67520662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910239164.2A Active CN110119673B (en) | 2019-03-27 | 2019-03-27 | Non-inductive face attendance checking method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110119673B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102968828A (en) * | 2012-11-22 | 2013-03-13 | 成都江法科技有限公司 | Face recognition security and protection attendance system |
US20160042308A1 (en) * | 2014-08-07 | 2016-02-11 | Marc Aptakin | Timesly: A Mobile Solution for Attendance Verification Powered by Face Technology |
CN105741375A (en) * | 2016-01-20 | 2016-07-06 | 华中师范大学 | Large-visual-field binocular vision infrared imagery checking method |
CN105913507A (en) * | 2016-05-03 | 2016-08-31 | 深圳市商汤科技有限公司 | Attendance checking method and system |
CN106469298A (en) * | 2016-08-31 | 2017-03-01 | 乐视控股(北京)有限公司 | Age recognition methodss based on facial image and device |
CN107609466A (en) * | 2017-07-26 | 2018-01-19 | 百度在线网络技术(北京)有限公司 | Face cluster method, apparatus, equipment and storage medium |
CN108564673A (en) * | 2018-04-13 | 2018-09-21 | 北京师范大学 | A kind of check class attendance method and system based on Global Face identification |
CN108765611A (en) * | 2018-05-21 | 2018-11-06 | 中兴智能视觉大数据技术(湖北)有限公司 | A kind of dynamic human face identification Work attendance management system and its management method |
CN108830980A (en) * | 2018-05-22 | 2018-11-16 | 重庆大学 | Security protection integral intelligent robot is received in Study of Intelligent Robot Control method, apparatus and attendance |
CN109063626A (en) * | 2018-07-27 | 2018-12-21 | 深圳市践科技有限公司 | Dynamic human face recognition methods and device |
CN109285259A (en) * | 2018-09-21 | 2019-01-29 | 上海箴安建筑设计咨询中心 | A kind of control system and its control method for intelligent entrance guard |
Non-Patent Citations (3)
Title |
---|
XUDONG SUN et al.: "Face detection using deep learning: An improved faster RCNN approach", Neurocomputing *
叶诗韵 et al.: "Research on examinee identity recognition applications based on face recognition" (基于人脸识别的考生身份识别应用研究), 《软件》 *
蒋晓川: "Research on face recognition attendance technology" (人脸识别考勤技术研究), 《信息与电脑》 *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807440A (en) * | 2019-11-15 | 2020-02-18 | 深圳算子科技有限公司 | Method and system for noninductive class face input |
CN110807440B (en) * | 2019-11-15 | 2023-11-10 | 深圳算子科技有限公司 | Classroom face non-sensing input method and system |
CN111027937A (en) * | 2019-12-10 | 2020-04-17 | 浩云科技股份有限公司 | Attendance system |
CN111079718A (en) * | 2020-01-15 | 2020-04-28 | 中云智慧(北京)科技有限公司 | Quick face comparison method |
CN111325865A (en) * | 2020-03-20 | 2020-06-23 | 广州美电恩智电子科技有限公司 | Non-inductive attendance checking method and device and equipment |
CN111401324A (en) * | 2020-04-20 | 2020-07-10 | Oppo广东移动通信有限公司 | Image quality evaluation method, device, storage medium and electronic equipment |
CN112001219A (en) * | 2020-06-19 | 2020-11-27 | 国家电网有限公司技术学院分公司 | Multi-angle multi-face recognition attendance checking method and system |
CN112001219B (en) * | 2020-06-19 | 2024-02-09 | 国家电网有限公司技术学院分公司 | Multi-angle multi-face recognition attendance checking method and system |
CN112182008A (en) * | 2020-10-27 | 2021-01-05 | 青岛以萨数据技术有限公司 | System, method, terminal and medium for analyzing face picture data acquired by mobile terminal |
CN112686141A (en) * | 2020-12-29 | 2021-04-20 | 杭州海康威视数字技术股份有限公司 | Personnel filing method and device and electronic equipment |
CN112784733A (en) * | 2021-01-21 | 2021-05-11 | 敖客星云(北京)科技发展有限公司 | Emotion recognition method and device based on online education and electronic equipment |
CN114821844A (en) * | 2021-01-28 | 2022-07-29 | 深圳云天励飞技术股份有限公司 | Attendance checking method and device based on face recognition, electronic equipment and storage medium |
CN114821844B (en) * | 2021-01-28 | 2024-05-07 | 深圳云天励飞技术股份有限公司 | Attendance checking method and device based on face recognition, electronic equipment and storage medium |
CN112507315B (en) * | 2021-02-05 | 2021-06-18 | 红石阳光(北京)科技股份有限公司 | Personnel passing detection system based on intelligent brain |
CN112507315A (en) * | 2021-02-05 | 2021-03-16 | 红石阳光(北京)科技股份有限公司 | Personnel passing detection system based on intelligent brain |
CN112948612A (en) * | 2021-03-16 | 2021-06-11 | 杭州海康威视数字技术股份有限公司 | Human body cover generation method and device, electronic equipment and storage medium |
CN112948612B (en) * | 2021-03-16 | 2024-02-06 | 杭州海康威视数字技术股份有限公司 | Human body cover generation method and device, electronic equipment and storage medium |
CN113239218A (en) * | 2021-05-14 | 2021-08-10 | 南京甄视智能科技有限公司 | Method for concurrently executing face search on NPU-equipped device |
CN113239218B (en) * | 2021-05-14 | 2022-08-23 | 南京甄视智能科技有限公司 | Method for concurrently executing face search on NPU-equipped device |
CN113705506A (en) * | 2021-09-02 | 2021-11-26 | 中国联合网络通信集团有限公司 | Nucleic acid detection method, nucleic acid detection device, nucleic acid detection apparatus, and computer-readable storage medium |
CN113705506B (en) * | 2021-09-02 | 2024-02-13 | 中国联合网络通信集团有限公司 | Nucleic acid detection method, apparatus, device, and computer-readable storage medium |
CN113807229A (en) * | 2021-09-13 | 2021-12-17 | 深圳市巨龙创视科技有限公司 | Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom |
Also Published As
Publication number | Publication date |
---|---|
CN110119673B (en) | 2021-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119673A (en) | Noninductive face Work attendance method, device, equipment and storage medium | |
CN110110593A (en) | Face Work attendance method, device, equipment and storage medium based on self study | |
CN110619423B (en) | Multitask prediction method and device, electronic equipment and storage medium | |
WO2021088510A1 (en) | Video classification method and apparatus, computer, and readable storage medium | |
CN106295567B (en) | A kind of localization method and terminal of key point | |
CN111738357B (en) | Junk picture identification method, device and equipment | |
CN110246512A (en) | Sound separation method, device and computer readable storage medium | |
CN107480624B (en) | Permanent resident population's acquisition methods, apparatus and system, computer installation and storage medium | |
CN110363091A (en) | Face identification method, device, equipment and storage medium in the case of side face | |
CN109767757A (en) | A kind of minutes generation method and device | |
CN111708913B (en) | Label generation method and device and computer readable storage medium | |
CN109815936B (en) | Target object analysis method and device, computer equipment and storage medium | |
CN110572570B (en) | Intelligent recognition shooting method and system for multi-person scene and storage medium | |
CN108537017A (en) | A kind of method and apparatus for managing game user | |
CN110210194A (en) | Electronic contract display methods, device, electronic equipment and storage medium | |
CN111079557A (en) | Face recognition-based automatic distribution method and system for electric power business hall service terminals | |
CN115828112A (en) | Fault event response method and device, electronic equipment and storage medium | |
CN110163092A (en) | Demographic method, device, equipment and storage medium based on recognition of face | |
CN111680016A (en) | Distributed server cluster log data processing method, device and system | |
CN110390315A (en) | A kind of image processing method and device | |
CN110135892A (en) | Calling charging method, device, electronic equipment and the storage medium of API | |
CN113269039A (en) | On-duty personnel behavior identification method and system | |
CN113963162A (en) | Helmet wearing identification method and device, computer equipment and storage medium | |
CN112364852B (en) | Action video segment extraction method fusing global information | |
TW202226114A (en) | Information processing method, device, electronic device, storage medium, and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||