CN109829997A - Staff attendance method and system - Google Patents
- Publication number: CN109829997A
- Application number: CN201811556175.5A
- Authority
- CN
- China
- Prior art keywords
- data
- face
- confidence level
- data set
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a staff attendance method and system. Multiple cameras, installed at different positions so that together they cover the entire attendance area, simultaneously capture pictures of the area and transmit them to an intelligent vision analyzer. Faces in the pictures are detected; the angle between each face and the camera is measured, and for faces captured by several cameras in an overlapping region, the image with the smaller face-to-camera angle is used. Features are extracted from each face and matched to obtain a recognition result ID, and the recognition results are denoised by a clustering algorithm before the final result is output. Without changing the face recognition algorithm, the scheme uses an array of cameras to achieve full coverage of the space and a clustering optimization to correct the recognition results, greatly reducing the error rate, so that an accurate attendance result can be output even for a person who appears on video for only a few seconds.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a staff attendance method and system.
Background art
Face attendance uses face recognition technology to identify people by the differences between their faces, allowing contactless attendance checking of the people in a classroom or meeting; it is highly efficient and has broad application prospects. Classrooms and meeting rooms are often large and crowded, and because cameras on the market are limited by CMOS sensor size and lens field of view, a single lens cannot cover the entire attendance area. Existing schemes use a single camera with polling: a standard space is divided into N regions, and the lens rotates to poll each region for several seconds, so attendance for the whole space is only completed after a period of time. However, if people walk around while the camera is moving, they may never be captured in any region; a person who is not identified during the time the camera covers their region will be recorded as absent, producing an incorrect attendance result.
Summary of the invention
The technical problem to be solved by the present invention is how to provide a staff attendance method and system that avoids attendance errors caused by people walking around.
In order to solve the above-mentioned technical problem, the technical solution of the present invention is as follows:
In a first aspect, the present invention proposes a staff attendance method comprising the steps of:
Multiple cameras simultaneously capture pictures of the attendance area and transmit them to an intelligent vision analyzer; the cameras are installed at different positions so as to cover the entire attendance area;
The faces in the pictures are identified and detected by a recognition module;
The angle between each face and the camera is detected in each picture; for face images of an overlapping region captured by different cameras, the image with the smaller face-to-camera angle is used;
Feature extraction and match recognition are performed on each face to obtain a recognition result ID;
The recognition results are denoised by a clustering algorithm, and the final recognition result is output.
Preferably, for pictures of an overlapping region captured by different cameras, feature extraction and match recognition are performed separately on the pictures from each camera, and the faces captured by different cameras are used to cross-validate the recognition result IDs.
Preferably, for a standard classroom, the cameras are arranged on the wall near the lectern so as to cover all student seats; for a round-table meeting room, the cameras are arranged on the surrounding walls so as to cover all seats; for a standard meeting room, the cameras are arranged on the wall near the rostrum and in front of the rostrum so as to cover the seats of all attendees.
Preferably, the clustering algorithm comprises the steps of:
Obtaining and recording the data of every frame of each face in the video stream to form a raw data set, the data including the recognition result ID, the coordinates of the face frame, the confidence level, and the angle values;
Deleting from the raw data set any record whose confidence level is below the lowest threshold or whose angle value is below the minimum threshold;
Extracting a data set A and a data set B from the raw data set;
Comparing each record A1 extracted from data set A with each record B1 in data set B: when the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of the face frames of A1 and B1 in the picture exceeds a preset value, deleting A1.
Preferably, before the step of obtaining and recording the data of every frame of each face in the video stream to form the raw data set (the data including the recognition result ID, the coordinates of the face frame, the confidence level, and the angle values), the method further comprises: presetting the lowest and highest thresholds for the face confidence level, and the minimum and maximum thresholds for the face angle values.
Preferably, data set A consists of all records in the raw data set whose confidence level is below the highest threshold and whose angle values are below the maximum thresholds; data set B consists of all records in the raw data set whose confidence level is above the lowest threshold.
Preferably, the lowest confidence threshold lies in the range 0.2 to 0.5 and the highest confidence threshold in the range 0.5 to 0.9. The angle values include an up-down (pitch) angle between the face and the camera, with minimum threshold J2 and maximum threshold J1, both in the range -45 to 45, and a left-right (yaw) angle, with minimum threshold LR2 and maximum threshold LR1, both in the range -45 to 45.
Preferably, before extracting data set A and data set B from the raw data set, the method further comprises: when the confidence level of a record exceeds the highest threshold, storing that record in a recognition result data set Z1 and deleting it from the raw data set.
Preferably, after the step of comparing each record A1 from data set A with each record B1 in data set B and deleting A1 when the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of their face frames in the picture exceeds the preset value, the method further comprises:
Merging data set A with data set Z1 to form a new data set D;
Grouping the records in data set D by recognition result ID and taking, within each group, the record with the highest confidence level as the final data.
In a second aspect, the invention also provides a staff attendance system comprising units for executing the staff attendance method of the first aspect.
With the above technical scheme, multiple cameras are arranged in the classroom, meeting room, or other place where attendance is to be checked, so that the camera lenses cover the entire attendance area. During attendance checking, pictures are captured by all cameras simultaneously, avoiding the attendance errors of traditional methods in which a rotating camera traverses each region in turn. For overlapping regions captured by several cameras, the angle between each face and each camera is detected, and faces captured at large angles, whose image quality is poor, are discarded, improving the quality of the face images. Without changing the face recognition algorithm, the camera array achieves full coverage of the space, while clustering optimization corrects the recognition results and greatly reduces the error rate, so that an accurate attendance result can be output even for a person who appears on video for only a few seconds.
Detailed description of the invention
Fig. 1 is a flowchart of a staff attendance method according to an embodiment of the present invention;
Fig. 2 is a camera arrangement diagram for a standard classroom in an embodiment of the staff attendance method;
Fig. 3 is a camera arrangement diagram for a round-table meeting room in an embodiment of the staff attendance method;
Fig. 4 is a camera arrangement diagram for a standard meeting room in an embodiment of the staff attendance method;
Fig. 5 is a flowchart of step S50 in Fig. 1;
Fig. 6 is a block diagram of the modules of a staff attendance system according to an embodiment of the present invention;
Fig. 7 is a schematic frame diagram of a staff attendance system according to an embodiment of the present invention.
In the figures: 10 - camera group; 20 - recognition module; 30 - angle detection module; 40 - comparison module; 50 - cluster optimization module.
Specific embodiment
Specific embodiments of the present invention are further described below with reference to the accompanying drawings. It should be noted that the description of these embodiments is intended to help understanding of the present invention and does not limit it. In addition, the technical features involved in the embodiments of the present invention disclosed below can be combined with each other as long as they do not conflict.
Referring to Fig. 1, in a first aspect, the present invention proposes a staff attendance method comprising the steps of:
Multiple cameras simultaneously capture pictures of the attendance area and transmit them to an intelligent vision analyzer; the cameras are installed at different positions so as to cover the entire attendance area.
As shown in Fig. 2, for a standard classroom the cameras are arranged on the wall near the lectern so as to cover all student seats. As shown in Fig. 3, for a round-table meeting room the cameras are arranged on the surrounding walls so as to cover all seats. As shown in Fig. 4, for a standard meeting room the cameras are arranged on the wall near the rostrum and in front of the rostrum so as to cover the seats of all attendees; cameras may also be arranged at positions such as the back wall or ceiling of the room, with their lenses covering the rostrum so as to capture images of the people there. The video detector in the figures carries multiple cameras whose lenses are aimed at the seats of different zones, achieving full-coverage image acquisition of all students or meeting participants.
The faces in the pictures are identified and detected by a recognition module.
The angle between each face and the camera is detected in each picture; for face images of an overlapping region captured by different cameras, the image with the smaller face-to-camera angle is used.
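The selection rule for overlapping regions can be sketched as follows. This is an assumption-laden illustration: the patent does not say how the pitch and left-right angles combine into one "angle between face and camera", so the sketch uses the larger of the two absolute values, and all names are hypothetical.

```python
def pick_best_view(candidates):
    """Given detections of the same face from different cameras in an
    overlap region, keep the one with the smallest face-to-camera angle.
    Each candidate is (camera_id, image, pitch_deg, yaw_deg); the overall
    angle is approximated as max(|pitch|, |yaw|) - an assumption, since
    the patent does not specify how the two angles are combined."""
    return min(candidates, key=lambda c: max(abs(c[2]), abs(c[3])))

views = [("cam1", "img1", 35.0, 10.0), ("cam2", "img2", 5.0, 8.0)]
best = pick_best_view(views)  # cam2 wins: its view is more frontal
```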
Feature extraction and match recognition are performed on each face to obtain a recognition result ID. Match recognition searches the feature data extracted from the face image against the feature templates stored in a database; a threshold is set, and when the similarity exceeds this threshold the matched result is output.
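Match recognition of this kind can be sketched as a nearest-template search with a similarity threshold. The cosine similarity metric and the 0.9 threshold are illustrative assumptions; the patent only requires some similarity measure with a threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_face(embedding, templates, threshold=0.9):
    """Search the stored templates for the best match; return its ID only
    if the similarity exceeds the threshold, else None. The metric and
    threshold value are assumptions, not taken from the patent."""
    best_id, best_sim = None, -1.0
    for person_id, tmpl in templates.items():
        sim = cosine_similarity(embedding, tmpl)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim > threshold else None

db = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}  # hypothetical templates
match_face([0.9, 0.1], db)  # -> "alice"
```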
The recognition results are denoised by a clustering algorithm, and the final recognition result is output.
It should be noted that for pictures of an overlapping region captured by different cameras, feature extraction and match recognition are performed separately on the pictures from each camera, and the recognition result IDs of the faces captured by different cameras are cross-validated.
With the array technique, every enclosed space achieves full attendance coverage with no blind spots. During the attendance period, the video is transferred to an artificial intelligence visual analyzer which, using powerful GPU computing capability and AI techniques, recognizes the faces in every picture. Since face recognition cannot reach 100% accuracy and its false recognition and missed recognition rates cannot reach zero, the face recognition results over a period of time are summarized according to the characteristics of attendance checking, and a clustering algorithm exploits the relationship between successive images to optimize the output. This greatly improves recognition accuracy and reduces the false recognition and missed recognition rates. Moreover, because the full coverage of the array creates overlapping coverage areas, recognition results can be mutually verified, yielding an optimal scheme for contactless attendance.
Referring to Fig. 5, the specific steps of the clustering algorithm are as follows:
Preset the lowest and highest thresholds for the face confidence level and the minimum and maximum thresholds for the face angle values. The lowest confidence threshold lies in the range 0.2 to 0.5 and the highest in the range 0.5 to 0.9. The angle values include the up-down (pitch) angle between the face and the camera, with minimum threshold J2 and maximum threshold J1 both in the range -45 to 45, and the left-right (yaw) angle, with minimum threshold LR2 and maximum threshold LR1 both in the range -45 to 45.
S10: obtain and record the data of every frame of each face in the video stream to form a raw data set; the data include the recognition result ID, the coordinates of the face frame, the confidence level, the generation time, and the angle values.
When the confidence level of a record exceeds the highest threshold, the record is stored in a recognition result data set Z1 and deleted from the raw data set.
S20: delete from the raw data set any record whose confidence level is below the lowest threshold or whose angle value is below the minimum threshold.
S30: extract data set A and data set B from the raw data set. Data set A consists of all records whose confidence level is below the highest threshold and whose angle values are below the maximum thresholds; data set B consists of all records whose confidence level is above the lowest threshold.
S40: compare each record A1 extracted from data set A with each record B1 in data set B; when the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of the face frames of A1 and B1 in the picture exceeds a preset value, delete A1.
Merge data set A with data set Z1 to form a new data set D.
Group the records in data set D by recognition result ID and take, within each group, the record with the highest confidence level as the final data.
It should be noted that in step S40, during classroom or meeting attendance, when the intersection-over-union of the face frames of A1 and B1 in the picture exceeds the preset value but the recognition result IDs of the two pictures differ, one of the two records must be noise; removing the one with the lower confidence level optimizes the data and improves recognition accuracy.
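The denoising procedure (S10 through S40 plus the merge and grouping steps) can be sketched end to end as follows. The record layout, parameter defaults, and some details are assumptions: the angle test is simplified to one symmetric bound, and refinements such as restricting data set A to IDs not already in Z1 are omitted.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) face frames."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def denoise(records, c_low=0.3, c_high=0.7, ang=45.0, iou1=0.5):
    # S20: drop records with too-low confidence or too-extreme angles
    data = [r for r in records
            if r["conf"] >= c_low
            and abs(r["pitch"]) <= ang and abs(r["yaw"]) <= ang]
    # high-confidence records go straight into the final set Z1
    z1 = [r for r in data if r["conf"] > c_high]
    # S30: A holds the remaining uncertain records, B all surviving records
    a_set = [r for r in data if r["conf"] <= c_high]
    b_set = data
    # S40: an uncertain record that overlaps a more confident record
    # carrying a different ID is treated as noise and removed
    kept = [a for a in a_set
            if not any(a["conf"] < b["conf"] and a["id"] != b["id"]
                       and iou(a["box"], b["box"]) > iou1
                       for b in b_set)]
    # merge with Z1, group by ID, keep the best record per person
    best = {}
    for r in kept + z1:
        if r["id"] not in best or r["conf"] > best[r["id"]]["conf"]:
            best[r["id"]] = r
    return best

recs = [
    {"id": "p1", "box": (0, 0, 10, 10), "conf": 0.95, "pitch": 0, "yaw": 0},
    {"id": "p2", "box": (0, 0, 10, 10), "conf": 0.40, "pitch": 0, "yaw": 0},
    {"id": "p3", "box": (50, 50, 60, 60), "conf": 0.50, "pitch": 0, "yaw": 0},
]
final = denoise(recs)  # "p2" is pruned: it overlaps the more confident "p1"
```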
In another embodiment of the invention, the clustering algorithm completes the optimization through the following steps:
1. Set a lowest confidence threshold C2 and a highest confidence threshold C1, and minimum and maximum values for the angle range. It should be noted that in this embodiment, once the pitch or side-face angle of a face relative to the lens exceeds a certain range, recognition performs poorly, so the range must be limited. The angle values are the angles between the face and the camera determined by analyzing a large amount of data, and include the pitch angle of the face and the left-right (side-face) angle of the face.
The lowest confidence threshold lies in the range 0.2 to 0.5 and the highest in the range 0.5 to 0.9; the pitch-angle thresholds J2 (minimum) and J1 (maximum) and the left-right-angle thresholds LR2 (minimum) and LR1 (maximum) all lie in the range -45 to 45.
2. Store the structured data of every face in every frame of the continuous video after face recognition, including the recognition result ID, the left, right, top, and bottom coordinates of the face frame, the confidence level, the generation time, the pitch angle value, and the left-right angle value.
3. Filter out records whose confidence level is below C2 or whose face pitch angle or left-right (side-face) angle is too large.
4. Summarize the filtered data within a preset time period as the raw data.
5. Up to the time point being counted, records in the raw data with high confidence, i.e. greater than the threshold C1, are directly merged into the final recognition result set Z1.
6. Generate data set A: take from the raw data the records whose recognition result ID is not in Z1, whose confidence level is below C1, and whose angle values are within the two angle thresholds.
7. Generate data set B: take from the raw data all records whose confidence level is above C2.
8. Compare the records of A and B pairwise; let A1 and B1 denote records from the two data sets. When three conditions are met - the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of the two face rectangles in the original image exceeds the preset value IOU1 - A1 is judged to be noise and deleted. IOU1 is a preset overlap ratio of the two rectangles, ranging between 0 and 1.
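The overlap ratio compared against IOU1 is the standard intersection-over-union of two rectangles. A sketch, assuming the (x1, y1, x2, y2) corner convention for face frames (the patent itself stores left, right, top, bottom coordinates):

```python
def iou(a, b):
    """Intersection-over-union ("coincidence rate") of two axis-aligned
    rectangles given as (x1, y1, x2, y2); result lies in [0, 1]."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

iou((0, 0, 10, 10), (5, 0, 15, 10))  # 50 / 150 = 0.333...
```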
9. Merge the pruned set A with Z1 to form a new data set D.
10. Group by recognition result ID, putting records with the same recognition result ID into one group, and within each group take the record with the highest confidence level as the final result.
Referring to Fig. 6 and Fig. 7, in a second aspect, the invention also provides a staff attendance system including units for executing the staff attendance method of the first aspect, namely:
Camera group 10: multiple cameras simultaneously capture pictures of the attendance area and transmit them to the intelligent vision analyzer; the cameras are installed at different positions so as to cover the entire attendance area.
Recognition module 20: identifies and detects the faces in the pictures.
Angle detection module 30: detects the angle between each face and the camera in each picture; for face images of an overlapping region captured by different cameras, the image with the smaller face-to-camera angle is used.
Comparison module 40: compares each face with the pre-stored registration photos to obtain a recognition result ID.
Cluster optimization module 50: denoises the face recognition results by the clustering algorithm and outputs the recognition result.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. For a person skilled in the art, various changes, modifications, replacements, and variations made to these embodiments without departing from the principle and spirit of the present invention still fall within the protection scope of the present invention.
Claims (10)
1. A staff attendance method, characterized by comprising the steps of:
multiple cameras simultaneously capturing pictures of the attendance area and transmitting them to an intelligent vision analyzer, the cameras being installed at different positions so as to cover the entire attendance area;
detecting the faces in the pictures;
detecting the angle between each face and the camera in each picture, and for face images of an overlapping region captured by different cameras, using the image with the smaller face-to-camera angle;
performing feature extraction and match recognition on each face to obtain a recognition result ID;
denoising the recognition results by a clustering algorithm and outputting the final recognition result.
2. The staff attendance method according to claim 1, characterized in that: for pictures of an overlapping region captured by different cameras, feature extraction and match recognition are performed separately on the pictures from each camera, and the recognition result IDs of the faces captured by different cameras are cross-validated.
3. The staff attendance method according to claim 1, characterized in that:
for a standard classroom, the cameras are arranged on the wall near the lectern so as to cover all student seats;
for a round-table meeting room, the cameras are arranged on the surrounding walls so as to cover all seats;
for a standard meeting room, the cameras are arranged on the wall near the rostrum and in front of the rostrum so as to cover the seats of all attendees.
4. The staff attendance method according to claim 1, characterized in that the clustering algorithm comprises the steps of:
obtaining and recording the data of every frame of each face in the video stream to form a raw data set, the data including the recognition result ID, the coordinates of the face frame, the confidence level, and the angle values;
deleting from the raw data set any record whose confidence level is below the lowest threshold or whose angle value is below the minimum threshold;
extracting a data set A and a data set B from the raw data set;
comparing each record A1 extracted from data set A with each record B1 in data set B, and deleting A1 when the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of the face frames of A1 and B1 in the picture exceeds a preset value.
5. The staff attendance method according to claim 4, characterized in that, before obtaining and recording the data of every frame of each face in the video stream to form the raw data set, the data including the recognition result ID, the coordinates of the face frame, the confidence level, and the angle values, the method further comprises:
presetting the lowest and highest thresholds for the face confidence level, and the minimum and maximum thresholds for the face angle values.
6. The staff attendance method according to claim 4, characterized in that, in extracting data set A and data set B from the raw data set, data set A consists of all records whose confidence level is below the highest threshold and whose angle values are below the maximum thresholds, and data set B consists of all records whose confidence level is above the lowest threshold.
7. The staff attendance method according to claim 4, characterized in that:
the lowest confidence threshold lies in the range 0.2 to 0.5, and the highest confidence threshold lies in the range 0.5 to 0.9;
the angle values include an up-down (pitch) angle between the face and the camera, with minimum threshold J2 and maximum threshold J1 both in the range -45 to 45;
the angle values further include a left-right angle between the face and the camera, with minimum threshold LR2 and maximum threshold LR1 both in the range -45 to 45.
8. The staff attendance method according to claim 4, characterized in that, before extracting data set A and data set B from the raw data set, the method further comprises:
when the confidence level of a record exceeds the highest threshold, storing the record in a recognition result data set Z1 and deleting it from the raw data set.
9. The staff attendance method according to claim 4, characterized in that, after comparing each record A1 extracted from data set A with each record B1 in data set B and deleting A1 when the confidence level of A1 is lower than that of B1, the recognition result IDs of A1 and B1 differ, and the intersection-over-union of the face frames of A1 and B1 in the picture exceeds the preset value, the method further comprises:
merging data set A with data set Z1 to form a new data set D;
grouping the records in data set D by recognition result ID and taking, within each group, the record with the highest confidence level as the final data.
10. A staff attendance system, characterized by comprising units for executing the staff attendance method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811556175.5A CN109829997A (en) | 2018-12-19 | 2018-12-19 | Staff attendance method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109829997A true CN109829997A (en) | 2019-05-31 |
Family
ID=66858823
CN102306304B (en) | Face occluder identification method and device | |
CN108564052A (en) | MTCNN-based multi-camera dynamic face recognition system and method | |
CN109359625A (en) | Method and system for customer identification based on head-and-shoulder detection and face recognition technology | |
JP2004192378A (en) | Face image processor and method therefor | |
WO2008021584A2 (en) | A system for iris detection, tracking and recognition at a distance | |
WO2018076392A1 (en) | Pedestrian statistical method and apparatus based on recognition of parietal region of human body | |
CN106548148A (en) | Method and system for identifying unknown face in video | |
CN105022999A (en) | Man code company real-time acquisition system | |
CN112149513A (en) | Industrial manufacturing site safety helmet wearing identification system and method based on deep learning | |
CN109145708A (en) | People-flow statistics method based on fusion of RGB and depth (D) information | |
CN109359577B (en) | System for detecting number of people under complex background based on machine learning | |
WO2020249054A1 (en) | Living body detection method and system for human face by using two long-baseline cameras | |
CN106650623A (en) | Face detection-based method for verifying personnel and identity document for exit and entry | |
CN106709438A (en) | Method for collecting statistics of number of people based on video conference | |
JP2017174343A (en) | Customer attribute extraction device and customer attribute extraction program | |
JP2000209578A (en) | Advertisement media evaluation system and advertisement medium evaluation method | |
CN108334870A (en) | Remote monitoring system for AR device data server states | |
CN111241926A (en) | Attendance checking and learning condition analysis method, system, equipment and readable storage medium | |
Abirami et al. | AI-based Attendance Tracking System using Real-Time Facial Recognition | |
KR102423934B1 (en) | Integrated smart person-search solution using face recognition and multi-object tracking of similar clothing colors | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190531 |
WD01 | Invention patent application deemed withdrawn after publication | ||