CN109977850B - Classroom name prompting method based on face recognition - Google Patents

Classroom name prompting method based on face recognition

Info

Publication number
CN109977850B
Authority
CN
China
Prior art keywords
face
image
camera
frame image
images
Prior art date
Legal status: Active (assumption; not a legal conclusion)
Application number
CN201910224566.5A
Other languages
Chinese (zh)
Other versions
CN109977850A (en)
Inventor
姜光
史梦真
马全盟
杨旭元
滕浩
周大为
Current Assignee: Xidian University
Original Assignee: Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910224566.5A
Publication of CN109977850A
Application granted
Publication of CN109977850B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00: ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a classroom name prompting method based on face recognition, implemented as follows: 1) build the dataset of faces to be recognized; 2) train a face recognition network on the dataset; 3) rotate the camera to capture the first frame image and perform face detection, alignment and recognition in sequence; 4) rotate the camera to capture the next frame image and perform face detection, alignment and recognition in sequence; 5) transform the later frame image onto the earlier frame image by a homography and stitch the images; 6) judge whether the camera has rotated to the preset end position: if not, return to step 4); if so, add the face information to the stitched image and display it on the interface; 7) judge whether the class session has ended: if so, finish; if not, return to step 4). The invention solves the problem that teachers in university classrooms do not know students' names, continuously provides a clear large image of the classroom that makes it easy for teachers to observe the attention of back-row students, and can be used to assist classroom teaching.

Description

Classroom name prompting method based on face recognition
Technical Field
The invention belongs to the technical field of face detection and face recognition, and further relates to a classroom name prompting method that can process and display the identity information of students in a classroom in real time while teachers and students interact in class.
Background
A pan-tilt-controlled camera consists mainly of a camera, a high-speed stepper-motor pan-tilt head, an embedded decoder board and other electronics. Its advantage is that the optical center stays fixed while the camera rotates through its full angular range and zooms under control; within the monitored scene, clear images of every location can be obtained by adjusting the camera's attitude, focal length and so on, overcoming the narrow field of view of a fixed camera.
Face recognition technology first judges, from facial features, whether a face is present in an input image or video stream. If faces are present, the position of each face is located, the features of the face region are extracted and compared against known faces, and the identity of each face is thereby recognized. The advent of deep learning has greatly improved recognition accuracy, reaching a practically usable level. The face detection network Single Stage Face Detector is fast, accurate, memory-efficient and scale-invariant: it uses an image at only one scale but analyses feature maps at different scales, in effect achieving multi-scale face detection. The face recognition network ArcFace (Deng J, Guo J, Xue N, et al. ArcFace: Additive Angular Margin Loss for Deep Face Recognition, 2018) proposes a new loss function, the additive angular margin loss, which improves recognition accuracy. Combining the two methods makes face detection and recognition usable in real time.
Image stitching is a technique for combining several overlapping images, captured at different times, from different viewpoints or by different sensors, into one large seamless high-resolution image in which every part of the scene can be seen clearly.
Current applications of face recognition in education include attendance, access control, exam identity verification and classroom behavior management; the more complex of these systems mostly use two or even three cameras, because when a single camera images the whole classroom at once, each face occupies only a few dozen or even a dozen pixels, too few for further processing. Most such applications, however, serve school supervision, examinations and the like. For teachers the problems remain that the names of the many students in a classroom cannot all be remembered and the back rows cannot be seen clearly. When asking questions or otherwise interacting, a teacher can only look a student up on the class roster or point at someone directly, which wastes time and is impolite. Moreover, while lecturing, the teacher cannot observe the attention of back-row students and so receives no timely feedback from them.
Disclosure of Invention
The aim of the invention is to provide a classroom name prompting method based on face recognition that remedies the above deficiencies of the prior art, making it easier for teachers and students to interact and communicate in class, letting teachers observe the attention of back-row students in real time, and improving teaching quality.
The technical idea of the invention is as follows: by means of face recognition and image stitching, a single camera suffices to photograph all students in a classroom, obtain a scene image of the whole classroom, and display the identity information of every student in it. The method comprises the following implementation steps (a minimal control-flow sketch follows the list):
(1) Collect close-up face photographs of the students to be recognized, perform face detection on the collected photographs with a deep neural network, crop the detected faces from the images, align the cropped face images and label them, each label corresponding to a student's specific identity information; all labelled, aligned face images form the face dataset to be recognized;
(2) Train a face recognition network on the face dataset to be recognized, obtaining a trained face recognition network;
(3) Set the end time t and the stitching end angle γ, and start timing;
(4) Perform face detection and image stitching:
(4a) Control the camera through the pan-tilt head to rotate to a series of preset angles and photograph, acquiring clear images of every part of the classroom;
(4b) Perform face detection, face alignment and face recognition in sequence on the captured images, obtaining the position L of each face in each image and the corresponding identity information;
(4c) Following the capture order, compute the homography matrix H of the homography transformation between the later frame image B and the earlier frame image A;
(4d) Compute the result of transforming the later frame image B into the earlier frame image A by the homography:

$$B' = HB;$$
(4e) Compute the result L_B' of transforming the face positions L_B in the later frame B into the earlier frame A by the homography:

$$L_B' = H L_B;$$
(4f) Stitch the earlier frame A and the result B' of (4d) together to obtain a partial stitched image: the seam must avoid, as far as possible, the face positions L_A in the earlier frame A and the transformed positions L_B' of (4e); where a face cannot be avoided, the face in the earlier frame A is kept in the overlap region and a polyline seam is used to keep the face intact;
(4g) Judge whether the camera has rotated to the preset end position γ: if so, go to (5); if not, return to (4a);
(5) Frame each face on the stitched clear large image and add the corresponding name above each face;
(6) Judge whether the first stitch is complete: if so, start the foreground thread, display the name-annotated clear large image on the interface, and begin listening for mouse events; when the mouse clicks on a student's face, display that student's name, student ID, college and class; if not, update the displayed name-annotated clear large image on the interface;
(7) Judge whether the end time t has been reached: if so, finish; if not, return to (4a).
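A minimal control-flow sketch of steps (3) through (7), assuming the capture, recognition, stitching and display primitives are available as the callables passed in below; capture_frame, detect_align_recognize, stitch_pair and render_overlay are hypothetical names introduced for illustration, not names from the patent:

```python
import time

def run_session(presets, end_time, capture_frame, detect_align_recognize,
                stitch_pair, render_overlay):
    """Sweep the preset angles and refresh the display until the end time."""
    first_pass = True
    while time.time() < end_time:                      # step (7): session still running
        pano, faces = None, []
        for pan, tilt in presets:                      # step (4a): rotate and photograph
            frame = capture_frame(pan, tilt)
            dets = detect_align_recognize(frame)       # step (4b)
            if pano is None:
                pano, faces = frame, dets
            else:                                      # steps (4c)-(4f): warp and stitch
                pano, faces = stitch_pair(pano, faces, frame, dets, pan, tilt)
        render_overlay(pano, faces, first=first_pass)  # steps (5)-(6): boxes, names, UI
        first_pass = False
```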
The invention has the following advantages:
first, the invention acquires images with a pan-tilt-controlled camera and combines them by image stitching, so a single camera yields a clear large image of the whole classroom in which even back-row students are clearly visible; the teacher can therefore see the attention of the back rows, and the equipment is cheap and easy to install and use;
second, the invention uses deep neural networks for face detection, face alignment and face recognition; compared with traditional methods this raises recognition accuracy to 99%, so a teacher can learn every student's name at a glance at the interface, saving the time spent searching for names.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic view of the rotation of the pan-tilt-controlled camera used in the present invention.
Detailed Description
The following detailed description of the embodiments of the invention refers to the accompanying drawings.
Referring to fig. 1, the present example is implemented as follows:
step 1, a face data set to be recognized is manufactured.
Firstly, acquiring near pictures of a Face to be recognized, which are rotated by theta angles upwards, downwards, leftwards and rightwards respectively, wherein theta is more than or equal to 5 and less than or equal to 20, and performing Face detection on the acquired Face near pictures by using a Face detection network Single Stage Face Detector;
cutting out the detected face from the image, using a face setting network Multi-task shielded connected conditional Networks to carry out face setting on the cut-out face image and marking a label, wherein the label corresponds to the specific identity information of a person, and the specific identity information comprises a name, a school number, a college and a class;
and forming a face data set to be recognized by using all the marked face images after being placed.
Step 2, train the face recognition network ArcFace.
Before training, randomly select 70% of the images in the face dataset to be recognized as the training set and the remaining 30% as the test set;
during training, tune the number of training epochs and the learning rate of the face recognition network; the training-set images serve as the network's input and their labels as its expected output, and the network learns under this supervision;
after the set number of epochs, test the network: feed the test-set images into it and compute the proportion of outputs that match the corresponding labels, i.e. the accuracy; training ends when the accuracy exceeds 99%.
Step 3, set the end time t and the stitching end angle γ, and start timing.
Step 4, rotate the camera to capture the first frame image.
The pan-tilt-controlled camera rotates to the first preset direction and captures the first frame image. As shown in FIG. 2, turning to a preset direction requires two rotations, one about the X_cam axis and one about the Y_cam axis, where X_cam and Y_cam are two coordinate axes of the coordinate system of the pan-tilt-controlled camera.
Step 5, detect, align and recognize faces.
Perform face detection on the first frame image to obtain the position of every face in it; align each detected face; then traverse the aligned faces and recognize them with the face recognition network trained in step 2, obtaining the specific identity information of each face.
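Per-frame processing (step 5) reduces to the chain sketched below; detector, aligner and recognizer are assumed callables standing in for the trained networks:

```python
def process_frame(frame, detector, aligner, recognizer):
    """Detect, align and identify every face in one frame."""
    results = []
    for (x, y, w, h) in detector(frame):               # position of each face
        face = aligner(frame[y:y + h, x:x + w])        # alignment before recognition
        identity = recognizer(face)                    # network trained in step 2
        results.append(((x, y, w, h), identity))
    return results
```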
Step 6, perform face detection, face alignment and face recognition on the next frame image.
The pan-tilt head turns the camera to the next preset direction and captures an image whose overlap with the previous frame image is no less than 20%;
perform face detection, face alignment and face recognition in sequence on the captured image, obtaining the face positions in the image and the identity information corresponding to each face.
Step 7, stitch the later frame image onto the earlier frame image.
7a) Compute the homography matrix H that transforms the later frame image B into the earlier frame image A:
7a1) Since the pan-tilt-controlled camera has the same optical center when capturing the earlier frame A and the later frame B, a three-dimensional point N imaged as point n_A in frame A and as point n_B in frame B satisfies the homography relation:

$$n_A = H n_B \qquad \langle 1 \rangle$$
7a2) A three-dimensional point N and its image point n on the image plane satisfy

$$n = PN,$$

where P is the camera's projection matrix; taking the world coordinate system to coincide with the camera coordinate system, P = K[R|0], where K is the camera intrinsic matrix, obtainable by camera calibration, and R is the camera's rotation matrix at the time of shooting;
7a3) Define the two projection matrices:
projection matrix when the camera captures the earlier frame A: P_A = K[R_A|0],
projection matrix when the camera captures the later frame B: P_B = K[R_B|0],
where R_A is the camera's rotation matrix when capturing the earlier frame A and R_B its rotation matrix when capturing the later frame B;
7a4) Write the two correspondences:
between the three-dimensional point N and its image point n_A in the earlier frame A:

$$n_A = P_A N,$$

and between N and its image point n_B in the later frame B:

$$n_B = P_B N;$$
7a5) Obtain the relation between n_B and N from 7a3) and 7a4): substituting P_B = K[R_B|0] from 7a3) into n_B = P_B N from 7a4) gives

$$n_B = K R_B N, \qquad \langle 2 \rangle$$

and from formula <2>, N = R_B^{-1} K^{-1} n_B;
7a6) Substituting N = R_B^{-1} K^{-1} n_B from 7a5) into n_A = P_A N from 7a4) gives

$$n_A = K R_A R_B^{-1} K^{-1} n_B. \qquad \langle 3 \rangle$$

Comparing formula <3> with formula <1> yields H = K R_A R_B^{-1} K^{-1},
Wherein: r is A =R XA )R YA ),R B =R XB )R YB ),α A 、β A Respectively winding around Y when shooting the previous frame image A for the camera cam Axial and axial sum of cam Angle of rotation of the shaft, alpha B 、β B Respectively winding Y when the camera takes the next frame image B cam Axial and axial sum of cam Angle of rotation of the shaft, R XA ) Taking the previous frame of image A for the camera and winding X cam Rotation of axis beta A Rotation matrix at angle:
Figure GDA0003943525920000061
R YA ) Taking the previous frame of image A around Y for the camera cam Rotation of the shaft alpha A Rotation matrix at angle:
Figure GDA0003943525920000062
R XB ) B winding X for shooting the next frame of image B by the camera cam Rotation of axis beta B Rotation matrix at angle:
Figure GDA0003943525920000063
R YB ) Taking the previous frame of image B for the camera around Y cam Rotation of the shaft alpha B Rotation matrix at angle:
Figure GDA0003943525920000064
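In NumPy, 7a) amounts to building R_A and R_B from the pan/tilt angles and forming H = K R_A R_B^{-1} K^{-1}; K must come from camera calibration, and the convention R = R_X(β) R_Y(α) follows the text. A sketch, not the patent's code:

```python
import numpy as np

def rot_x(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def homography(K, alpha_a, beta_a, alpha_b, beta_b):
    """H mapping the later frame B into the earlier frame A (angles in radians)."""
    R_a = rot_x(beta_a) @ rot_y(alpha_a)               # R_A = R_X(beta_A) R_Y(alpha_A)
    R_b = rot_x(beta_b) @ rot_y(alpha_b)
    return K @ R_a @ np.linalg.inv(R_b) @ np.linalg.inv(K)
```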
7b) Compute the result of transforming the later frame image B into the earlier frame image A by the homography:

$$B' = HB;$$

7c) Compute the result L_B' of transforming the face positions L_B in the later frame B into the earlier frame A by the homography:

$$L_B' = H L_B;$$
7d) Stitch the earlier frame A and the homography-transformed result B' together to obtain a partial stitched image:
during stitching, the seam must avoid the face positions L_A in the earlier frame A and the transformed face positions L_B' from step 7c); where a face cannot be avoided, the face in frame A is kept in the overlap region and a polyline seam is used to keep the face intact.
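Steps 7b) and 7c) map directly onto OpenCV's warpPerspective and perspectiveTransform; the sketch below replaces the patent's face-avoiding polyline seam, which is not spelled out, with the simpler rule of keeping frame A's pixels wherever they exist, so faces in A stay intact:

```python
import cv2
import numpy as np

def stitch(img_a, img_b, H, out_w, out_h):
    """7b)+7d): warp B into A's image plane, keep A's pixels in the overlap."""
    warped_b = cv2.warpPerspective(img_b, H, (out_w, out_h))   # B' = H applied to B
    canvas = np.zeros((out_h, out_w, 3), dtype=img_a.dtype)
    canvas[:img_a.shape[0], :img_a.shape[1]] = img_a           # frame A kept as-is
    empty = canvas.sum(axis=2) == 0                            # pixels not covered by A
    canvas[empty] = warped_b[empty]                            # B' fills the rest
    return canvas

def transform_face_boxes(boxes, H):
    """7c): L_B' = H L_B for the two corner points of each face box in B."""
    pts = np.float32([p for (x, y, w, h) in boxes
                      for p in ((x, y), (x + w, y + h))]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2, 2)
```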
Step 8, judge whether the camera has turned to the set stitching end angle γ: if so, the camera has captured clear images of every part of the classroom, stitching is complete and a clear large image of the whole classroom is obtained; go to step 9; if not, return to step 6.
Step 9, frame each face in the clear large image of the whole classroom and add the corresponding name above it.
Step 10, judge whether this is the first completed stitch: if so, start the foreground thread, display the name-annotated clear large image on the interface, and begin listening for mouse events; when the mouse clicks on a student's face, display that student's name, student ID, college and class; if not, display the updated name-annotated clear large image on the interface.
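The step-10 interaction is a plain hit test over the annotated face boxes; representing `faces` as (box, info) pairs and the `show_info` callback are assumptions made for illustration:

```python
def on_click(x, y, faces, show_info):
    """Show a student's record when the click lands inside their face box."""
    for (fx, fy, fw, fh), info in faces:
        if fx <= x <= fx + fw and fy <= y <= fy + fh:
            show_info(info)                            # name, student ID, college, class
            break
```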
Step 11, judge whether the end time t has been reached: if so, the classroom session ends; if not, return to step 6.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (9)

1. A name prompting method implemented using face recognition and image stitching, comprising the following steps:
(1) Collecting close-up face photographs of the students to be recognized, performing face detection on the collected photographs with a deep neural network, cropping the detected faces from the images, aligning the cropped face images and labelling them, the labels corresponding to the students' specific identity information, and forming the face dataset to be recognized from all the labelled aligned face images;
(2) Training a face recognition network with the face dataset to be recognized to obtain a trained face recognition network;
(3) Setting the end time t and the stitching end angle γ, and starting timing;
(4) Performing face detection and image stitching:
(4a) Controlling the camera through the pan-tilt head to rotate to a series of preset angles and photograph, acquiring clear images of every part of the classroom;
(4b) Performing face detection, face alignment and face recognition in sequence on the captured images, obtaining the position L of each face in each image and the corresponding identity information;
(4c) Computing, following the capture order, the homography matrix H of the homography transformation between the later frame image B and the earlier frame image A;
(4d) Computing the result of transforming the later frame image B into the earlier frame image A by the homography:

$$B' = HB;$$
(4e) Computing the result L_B' of transforming the face positions L_B in the later frame B into the earlier frame A by the homography:

$$L_B' = H L_B;$$
(4f) Stitching the earlier frame A and the result B' of (4d) together to obtain a partial stitched image: the seam must avoid, as far as possible, the face positions L_A in the earlier frame A and the transformed positions L_B' of (4e); where a face cannot be avoided, the face in frame A is kept in the overlap region and a polyline seam is used to keep the face intact;
(4g) Judging whether the camera has rotated to the preset end position γ: if so, executing (5); if not, returning to (4a);
(5) Framing each face on the stitched clear large image and adding the corresponding name above each face;
(6) Judging whether the first stitch is complete: if so, starting the foreground thread, displaying the name-annotated clear large image on the interface, beginning to listen for mouse events, and, when the mouse clicks on a student's face, displaying that student's name, student ID, college and class; if not, updating the displayed name-annotated clear large image on the interface;
(7) Judging whether the end time t has been reached: if so, ending; if not, returning to (4a).
2. The method of claim 1, wherein the face close-ups in (1) cover five angles: the frontal view and views rotated upward, downward, left and right by θ degrees respectively, where 5 ≤ θ ≤ 20.
3. The method of claim 1, wherein: the neural network used for face detection in (1) is the Single Stage Face Detector.
4. The method of claim 1, wherein: the neural network used for face alignment in (1) is the Multi-task Cascaded Convolutional Networks (MTCNN).
5. The method of claim 1, wherein: the specific identity information in (1) comprises: name, student ID, college and class.
6. The method of claim 1, wherein: the face recognition in (2) uses the deep neural network ArcFace.
7. The method of claim 1, wherein: training the face recognition network with the face dataset to be recognized in (2) is implemented as follows:
before training, randomly selecting 70% of the images in the dataset as the training set and the remaining 30% as the test set;
during training, tuning the number of training epochs and the learning rate of the face recognition network, using the training-set images as the network's input and their labels as its expected output, and training the network with supervision;
after the set number of epochs, testing the network: feeding the test-set images into it and computing the proportion of outputs that match the corresponding labels, i.e. the accuracy; training ends when the accuracy exceeds 99%.
8. The method of claim 1, wherein: the preset photographing angles of the pan-tilt-controlled camera in (4a) are such that images taken at consecutive preset angles overlap by no less than 20% and the images taken at all the preset angles together cover the whole classroom.
9. The method of claim 1, wherein: the homography matrix H in (4c) is computed as:

$$H = K R_A R_B^{-1} K^{-1}$$

where K is the camera intrinsic matrix obtained by camera calibration; R_A and R_B are the camera's rotation matrices when capturing the earlier frame image A and the later frame image B respectively, with:

$$R_A = R_X(\beta_A) R_Y(\alpha_A), \qquad R_B = R_X(\beta_B) R_Y(\alpha_B)$$

α_A, β_A are the camera's rotation angles about the Y_cam axis and the X_cam axis when capturing the earlier frame A; α_B, β_B are the rotation angles about the Y_cam axis and the X_cam axis when capturing the later frame B; X_cam and Y_cam are two coordinate axes of the camera's coordinate system;
R_X(β_A) is the rotation matrix for the camera's rotation by β_A about the X_cam axis when capturing the earlier frame A:

$$R_X(\beta_A)=\begin{bmatrix}1&0&0\\0&\cos\beta_A&-\sin\beta_A\\0&\sin\beta_A&\cos\beta_A\end{bmatrix}$$

R_Y(α_A) is the rotation matrix for the rotation by α_A about the Y_cam axis when capturing the earlier frame A:

$$R_Y(\alpha_A)=\begin{bmatrix}\cos\alpha_A&0&\sin\alpha_A\\0&1&0\\-\sin\alpha_A&0&\cos\alpha_A\end{bmatrix}$$

R_X(β_B) is the rotation matrix for the rotation by β_B about the X_cam axis when capturing the later frame B:

$$R_X(\beta_B)=\begin{bmatrix}1&0&0\\0&\cos\beta_B&-\sin\beta_B\\0&\sin\beta_B&\cos\beta_B\end{bmatrix}$$

R_Y(α_B) is the rotation matrix for the rotation by α_B about the Y_cam axis when capturing the later frame B:

$$R_Y(\alpha_B)=\begin{bmatrix}\cos\alpha_B&0&\sin\alpha_B\\0&1&0\\-\sin\alpha_B&0&\cos\alpha_B\end{bmatrix}$$

Priority Applications (1)

Application Number: CN201910224566.5A; Priority/Filing Date: 2019-03-23; Title: Classroom name prompting method based on face recognition
Publications (2)

CN109977850A, published 2019-07-05
CN109977850B, granted 2023-01-06





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant