CN109345427B - Classroom video roll-call method combining face recognition technology and pedestrian recognition technology - Google Patents

Classroom video roll-call method combining face recognition technology and pedestrian recognition technology

Info

Publication number
CN109345427B
CN109345427B (granted from application CN201811139050.2A)
Authority
CN
China
Prior art keywords
human
features
probability
identified
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811139050.2A
Other languages
Chinese (zh)
Other versions
CN109345427A (en
Inventor
周曦
温浩
陈江豪
石君
陈兰
万珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yuncong Tianfu Artificial Intelligence Technology Co., Ltd
Original Assignee
Guangzhou Yuncongkaifeng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yuncongkaifeng Technology Co Ltd filed Critical Guangzhou Yuncongkaifeng Technology Co Ltd
Priority to CN201811139050.2A priority Critical patent/CN109345427B/en
Publication of CN109345427A publication Critical patent/CN109345427A/en
Application granted granted Critical
Publication of CN109345427B publication Critical patent/CN109345427B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

A classroom video roll-call method combining face recognition and pedestrian recognition technologies not only recognizes the faces appearing in a video but also recognizes the upper-body features of each student, such as hairstyle, clothing, wearing features, and build. For each picture, the server first tracks each human body using pedestrian recognition technology, then performs face recognition for each body; if a face is detected and can be recognized, the corresponding human body features are identified and recorded. No hardware or camera modification is needed: most existing cameras meet the requirements, reducing user cost.

Description

Classroom video roll-call method combining face recognition technology and pedestrian recognition technology
Technical Field
The invention relates to the field of face recognition, in particular to a classroom video roll-call method combining face recognition and pedestrian recognition technologies.
Background
In a traditional classroom, teachers and the educational administration department want to know quickly and accurately which students are absent so that subsequent teaching management can be carried out. Calling the roll one student at a time wastes considerable time: for a class of 50 students it takes roughly 3-5 minutes, and it is not always accurate. If students sign an attendance sheet themselves, proxy signing is easy. If an attendance machine based on card swiping, fingerprints, or face recognition is installed, students must queue to check in before or after class, which is inconvenient and adds extra cost.
Face-recognition roll-call methods based on video surveillance are already on the market, represented by surveillance manufacturers such as Hikvision: faces captured in real time from the monitoring picture are feature-extracted in the background and searched 1:N against a face library built from the class roster, confirming students one by one and finally achieving roll call without the students' awareness. The background server decodes the video stream of the monitoring camera and then performs face recognition on the images frame by frame or at frame intervals, i.e., a dynamic recognition method. However, this method depends heavily on whether a student is facing forward; when a student keeps the head lowered or the face is at a steep angle, captures are missed. In a classroom of about 100 people, if results are output within 5 minutes, the student recognition rate is about 80%; if the window is widened to 10 minutes, the rate rises to 85-90%.
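The 1:N search mentioned above can be sketched as a nearest-neighbor lookup over face embeddings. This is an illustrative reconstruction, not the patent's or any vendor's implementation; the function name, embedding dimensionality, and threshold are hypothetical.

```python
import numpy as np

def one_to_n_search(probe, gallery, threshold=0.5):
    """Return the gallery index whose embedding is most similar to the
    probe (cosine similarity), or None if no score reaches the threshold."""
    # Normalize so dot products equal cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery @ probe
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# Example: 3 enrolled students with 4-D toy embeddings.
gallery = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
probe = np.array([0.9, 0.1, 0.0, 0.0])
print(one_to_n_search(probe, gallery))  # matches student 0
```

In a real deployment the gallery rows would be face embeddings of the class roster, which is what makes the search 1:N rather than 1:1 verification.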
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a classroom video roll-call method combining face recognition and pedestrian recognition technologies; the specific technical solution is as follows:
a classroom video frequency arrival method combining face recognition and pedestrian recognition technologies comprises the following steps: the following steps are adopted for the preparation of the anti-cancer medicine,
Step 1: the processing module performs pedestrian re-identification detection and people counting on a single picture at time t in the camera's video stream, and models a classroom position distribution map with the position coordinates of each person; let the set of students enrolled in the class be N with n = |N|; the set of detected pedestrians is M(t) with m(t) = |M(t)|, where m(t) ≤ n;
Step 2: extract the features of the human faces in the picture, retrieve the class roster according to the face features, and confirm a current set k(t); iterate with the face confirmation set K(t-1) of the previous picture to obtain the confirmation set K(t) = k(t) ∪ K(t-1) at time t; a human body feature identification module sequentially identifies the human body features of each person in the set K(t), the human body features comprising hairstyle, clothing, wearing features, and height; the processing module establishes a correspondence between the identified human body features and the face features and stores the identified human body features in a human body feature database;
Step 3: judge whether the number of people in K(t) equals n; if so, go to step 8, otherwise go to step 4;
Step 4: if the number of people in K(t) is less than n, perform human body feature probability estimation for each member object is in the set M(t) − K(t) in turn:
setting the set O = N − K(t), the prior attendance probability P(ia) of each member object ia in the set O, where is ∈ M(t) − K(t) and ia ∈ O, is obtained from a database in advance;
according to the identified hairstyle of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P1(is);
according to the identified clothing of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P2(is);
according to the identified wearing features of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P3(is);
according to the identified height of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P4(is);
for each member object is in M(t) − K(t) in turn, an estimation probability set Q = {P(ia) × P1(is) × P2(is) × P3(is) × P4(is) | ia ∈ O} is taken, and the ia corresponding to the maximum value in the estimation probability set Q is selected as the estimated attendee for the member object is, obtaining a set B(t) composed of the estimated attendees;
Step 5: set the number of iterations to at least p; analyze a new picture at time t+1 and repeat steps 1 to 4, for at least p iterations;
Step 6: after iterating over the set p pictures, if the termination condition has not been reached, stop iterating and go to step 7; otherwise go to step 8;
Step 7: update the prior attendance probability and the human body feature library probabilities of each member, and output M(t), K(t), and B(t), where M(t) = K(t) + B(t);
Step 8: the confirmation of all students is complete and this round of roll call ends; the output results are the student attendance lists N(t), K(t), and M(t).
Further, in step 1, a deep-learning-based CNN is used to recognize the single picture at time t in the camera's video stream.
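Step 4's probability fusion (the prior attendance probability multiplied by the four body-feature similarity probabilities, with the arg-max absentee chosen as the estimated attendee) can be sketched as follows. The function and variable names are hypothetical and the similarity values are toy numbers; only the product-and-argmax structure comes from the method as described.

```python
def estimate_attendees(unconfirmed, absentees, prior, sim):
    """For each unidentified body is_, pick the absentee ia maximizing
    P(ia) * P1(is) * P2(is) * P3(is) * P4(is).  sim[(is_, ia)] holds the
    four feature-similarity probabilities (hairstyle, clothing, wear, height)."""
    B = {}
    for is_ in unconfirmed:
        scores = {}
        for ia in absentees:
            p1, p2, p3, p4 = sim[(is_, ia)]
            scores[ia] = prior[ia] * p1 * p2 * p3 * p4
        B[is_] = max(scores, key=scores.get)  # arg-max estimated attendee
    return B

# Toy numbers: one unidentified body, two absentees on the roster.
prior = {"alice": 0.9, "bob": 0.5}
sim = {("body1", "alice"): [0.8, 0.7, 0.9, 0.6],
       ("body1", "bob"):   [0.2, 0.3, 0.1, 0.4]}
print(estimate_attendees(["body1"], ["alice", "bob"], prior, sim))
# -> {'body1': 'alice'}
```

Multiplying the prior by the four similarities treats the feature matches as independent evidence, which is why a student with a high prior attendance probability and consistent body features dominates the estimate.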
The invention has the following beneficial effects: first, no hardware or camera modification is needed; most existing cameras meet the requirements, reducing user cost;
second, the accuracy and speed of the original dynamic face-recognition roll call are improved. The invention solves the problem of quickly and efficiently calling the roll of students in today's classrooms.
Drawings
FIG. 1 is a flow chart of the operation of the present invention.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken in conjunction with the accompanying drawing, will make the advantages and features of the invention easier for those skilled in the art to understand, thereby defining the scope of protection of the invention more clearly.
As shown in fig. 1, a classroom video roll-call method combining face recognition and pedestrian recognition technologies comprises the following steps:
Step 1: the processing module performs pedestrian re-identification detection and people counting on a single picture at time t in the camera's video stream, and models a classroom position distribution map with the position coordinates of each person;
let the set of students enrolled in the class be N with n = |N|; the set of detected pedestrians is M(t) with m(t) = |M(t)|, where m(t) ≤ n;
Step 2: extract the features of the human faces in the picture, retrieve the class roster according to the face features, and confirm a current set k(t); iterate with the face confirmation set K(t-1) of the previous picture to obtain the confirmation set K(t) = k(t) ∪ K(t-1) at time t; a human body feature identification module sequentially identifies the human body features of each person in the set K(t), the human body features comprising hairstyle, clothing, wearing features, and height; the processing module establishes a correspondence between the identified human body features and the face features and stores the identified human body features in a human body feature database;
Step 3: judge whether the number of people in K(t) equals n; if so, go to step 8, otherwise go to step 4;
Step 4: if the number of people in K(t) is less than n, perform human body feature probability estimation for each member object is in the set M(t) − K(t) in turn:
setting the set O = N − K(t), the prior attendance probability P(ia) of each member object ia in the set O, where is ∈ M(t) − K(t) and ia ∈ O, is obtained from a database in advance;
according to the identified hairstyle of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P1(is);
according to the identified clothing of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P2(is);
according to the identified wearing features of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P3(is);
according to the identified height of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P4(is);
for each member object is in M(t) − K(t) in turn, an estimation probability set Q = {P(ia) × P1(is) × P2(is) × P3(is) × P4(is) | ia ∈ O} is taken, the ia corresponding to the maximum value in the estimation probability set Q is selected as the estimated attendee of the member object is, and a set B(t) composed of the estimated attendees is obtained;
Step 5: set the number of iterations to at least p; analyze a new picture at time t+1 and repeat steps 1 to 4, for at least p iterations;
Step 6: after iterating over the set p pictures, if the termination condition has not been reached, stop iterating and go to step 7; otherwise go to step 8;
Step 7: update the prior attendance probability and the human body feature library probabilities of each member, and output M(t), K(t), and B(t), where M(t) = K(t) + B(t);
Step 8: the confirmation of all students is complete and this round of roll call ends; the output results are the student attendance lists N(t), K(t), and M(t).
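The overall loop of steps 1-8 can be sketched as follows, assuming hypothetical callbacks for pedestrian detection, face identification, and body-feature estimation; this is a structural illustration of the iteration and early-termination logic, not the patent's code.

```python
def roll_call(frames, roster, p, detect_bodies, identify_faces, estimate):
    """Iterate over up to p frames; the confirmed set K grows by union
    with each frame's face matches (step 2); stop early once K covers
    the roster (step 3 -> step 8), otherwise estimate the remaining
    bodies from body features (steps 4 and 7)."""
    K, M = set(), set()
    for frame in frames[:p]:
        M = detect_bodies(frame)      # step 1: pedestrian detection, M(t)
        K |= identify_faces(frame)    # step 2: K(t) = k(t) ∪ K(t-1)
        if K == roster:               # step 3: everyone face-confirmed
            return K, set()           # step 8: B(t) is empty
    return K, estimate(M - K, roster - K)  # step 7: output B(t)

# Toy stubs: two frames; student "b" only shows a face in the second frame.
roster = {"a", "b"}
faces_per_frame = [{"a"}, {"b"}]
K, B = roll_call([0, 1], roster, p=5,
                 detect_bodies=lambda f: {"a", "b"},
                 identify_faces=lambda f: faces_per_frame[f],
                 estimate=lambda bodies, absent: set(absent))
print(K, B)  # K equals the roster, B is empty
```

The early return captures the method's main efficiency claim: once every student is face-confirmed, no further frames or body-feature estimation are needed.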

Claims (2)

1. A classroom video roll-call method combining face recognition and pedestrian recognition technologies, characterized in that:
the method comprises the following steps:
Step 1: the processing module performs pedestrian re-identification detection and people counting on a single picture at time t in the camera's video stream, and models a classroom position distribution map with the position coordinates of each person; let the set of students enrolled in the class be N with n = |N|; the set of detected pedestrians is M(t) with m(t) = |M(t)|, where m(t) ≤ n;
Step 2: extract the features of the human faces in the picture, retrieve the class roster according to the face features, and confirm a current set k(t); iterate with the face confirmation set K(t-1) of the previous picture to obtain the confirmation set K(t) = k(t) ∪ K(t-1) at time t; a human body feature identification module sequentially identifies the human body features of each person in the set K(t), the human body features comprising hairstyle, clothing, wearing features, and height; the processing module establishes a correspondence between the identified human body features and the face features and stores the identified human body features in a human body feature database;
Step 3: judge whether the number of people in K(t) equals n; if so, go to step 8, otherwise go to step 4;
Step 4: if the number of people in K(t) is less than n, perform human body feature probability estimation for each member object is in the set M(t) − K(t) in turn:
setting the set O = N − K(t), the prior attendance probability P(ia) of each member object ia in the set O, where is ∈ M(t) − K(t) and ia ∈ O, is obtained from a database in advance;
according to the identified hairstyle of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P1(is);
according to the identified clothing of the target is, the associated features in the human body feature database are searched for the member object ia to obtain a similarity probability P2(is);
according to the identified wearing features of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P3(is);
according to the identified height of the target is, the human body feature database is searched for the member object ia to obtain a similarity probability P4(is);
for each member object is in M(t) − K(t) in turn, an estimation probability set Q = {P(ia) × P1(is) × P2(is) × P3(is) × P4(is) | ia ∈ O} is taken, the ia corresponding to the maximum value in the estimation probability set Q is selected as the estimated attendee of the member object is, and a set B(t) composed of the estimated attendees is obtained;
Step 5: set the number of iterations to at least p; analyze a new picture at time t+1 and repeat steps 1 to 4, for at least p iterations;
Step 6: after iterating over the set p pictures, if the termination condition has not been reached, stop iterating and go to step 7; otherwise go to step 8;
Step 7: update the prior attendance probability and the human body feature library probabilities of each member, and output M(t), K(t), and B(t), where M(t) = K(t) + B(t);
Step 8: the confirmation of all students is complete and this round of roll call ends; the output results are the student attendance lists N(t), K(t), and M(t).
2. The classroom video roll-call method combining face recognition and pedestrian recognition technologies according to claim 1, characterized in that: in step 1, a deep-learning-based CNN is used to recognize the single picture at time t in the camera's video stream.
CN201811139050.2A 2018-09-28 2018-09-28 Classroom video roll-call method combining face recognition technology and pedestrian recognition technology Active CN109345427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811139050.2A CN109345427B (en) 2018-09-28 2018-09-28 Classroom video roll-call method combining face recognition technology and pedestrian recognition technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811139050.2A CN109345427B (en) 2018-09-28 2018-09-28 Classroom video roll-call method combining face recognition technology and pedestrian recognition technology

Publications (2)

Publication Number Publication Date
CN109345427A CN109345427A (en) 2019-02-15
CN109345427B true CN109345427B (en) 2020-07-03

Family

ID=65307514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811139050.2A Active CN109345427B (en) 2018-09-28 2018-09-28 Classroom video roll-call method combining face recognition technology and pedestrian recognition technology

Country Status (1)

Country Link
CN (1) CN109345427B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978390B (en) * 2019-03-29 2020-03-17 嘉应学院 Office efficiency evaluation system and method based on image recognition
CN110766451B (en) * 2019-09-29 2022-07-19 浙江新再灵科技股份有限公司 Elevator advertisement putting method and system based on human body static label
CN112257628A (en) * 2020-10-29 2021-01-22 厦门理工学院 Method, device and equipment for identifying identities of outdoor competition athletes

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831412A (en) * 2012-09-11 2012-12-19 魏骁勇 Teaching attendance checking method and device based on face recognition

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831412A (en) * 2012-09-11 2012-12-19 魏骁勇 Teaching attendance checking method and device based on face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Muna Saif Al Rahbi et al., "Multi-agent based framework for person re-identification in video surveillance," 2016 Future Technologies Conference, 7 December 2016, pp. 1349-1352. *
O. Kainz et al., "Visual system for student attendance monitoring with non-standard situation detection," 2014 IEEE 12th International Conference on Emerging eLearning Technologies and Applications, 5 December 2014, pp. 221-226. *

Also Published As

Publication number Publication date
CN109345427A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
US11074436B1 (en) Method and apparatus for face recognition
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
Ahmed et al. Vision based hand gesture recognition using dynamic time warping for Indian sign language
Patil et al. Implementation of classroom attendance system based on face recognition in class
WO2019095571A1 (en) Human-figure emotion analysis method, apparatus, and storage medium
CN109345427B (en) Classroom video roll-call method combining face recognition technology and pedestrian recognition technology
CN111241975B (en) Face recognition detection method and system based on mobile terminal edge calculation
US20200257889A1 (en) Facial recognitions based on contextual information
Barnich et al. Frontal-view gait recognition by intra-and inter-frame rectangle size distribution
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN105183758A (en) Content recognition method for continuously recorded video or image
CN108549848B (en) Method and apparatus for outputting information
CN109191341B (en) Classroom video roll-call method based on face recognition and Bayesian learning
CN111382596A (en) Face recognition method and device and computer storage medium
CN109902550A (en) The recognition methods of pedestrian's attribute and device
CN110941992B (en) Smile expression detection method and device, computer equipment and storage medium
CN111382655A (en) Hand-lifting behavior identification method and device and electronic equipment
Chowdhury et al. Development of an automatic class attendance system using cnn-based face recognition
CN111582195B (en) Construction method of Chinese lip language monosyllabic recognition classifier
KR20200060942A (en) Method for face classifying based on trajectory in continuously photographed image
CN103426005A (en) Automatic database creating video sectioning method for automatic recognition of micro-expressions
Abusham Face verification using local graph stucture (LGS)
Wu et al. How do you smile? Towards a comprehensive smile analysis system
CN113837112A (en) Video data processing method and electronic equipment
CN113449560A (en) Technology for comparing human faces based on dynamic portrait library

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 511458 room 1009, No.26, Jinlong Road, Nansha District, Guangzhou City, Guangdong Province (only for office use)

Applicant after: Guangzhou yuncongkaifeng Technology Co., Ltd

Address before: 511457 Room 1009, No. 26 Jinlong Road, Nansha District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU KAIFENG TECHNOLOGY Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200813

Address after: No.99, Jingrong South 3rd Street, Jiancha street, Tianfu New District, Chengdu 610000 China (Sichuan) pilot Free Trade Zone, Chengdu

Patentee after: Sichuan Yuncong Tianfu Artificial Intelligence Technology Co., Ltd

Address before: 511458 room 1009, No.26, Jinlong Road, Nansha District, Guangzhou City, Guangdong Province (only for office use)

Patentee before: Guangzhou yuncongkaifeng Technology Co., Ltd