CN111144333A - Teacher behavior monitoring method based on sight tracking - Google Patents

Teacher behavior monitoring method based on sight tracking

Info

Publication number
CN111144333A
Authority
CN
China
Prior art keywords
teacher
classroom
reference plane
face
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911388632.9A
Other languages
Chinese (zh)
Other versions
CN111144333B (en)
Inventor
韩鹏
刘日星
骆开庆
邱健
彭力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University
Priority to CN201911388632.9A
Publication of CN111144333A
Application granted
Publication of CN111144333B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Human Computer Interaction (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a teacher behavior monitoring method based on sight tracking, which comprises: acquiring an image sequence; detecting facial feature points; analyzing the feature points to solve the face orientation and calculating the three-dimensional position of the face in the classroom; calculating the position of the teacher's sight on the desktop reference plane; recording the teacher's sight positions and walking positions with a hotspot map; and counting the teacher's speaking time. The invention records not only the distribution of the teacher's sight but also the distribution of the teacher's position in the classroom, so that teachers can clearly understand their own state, adjust their teaching methods accordingly, and improve their teaching results.

Description

Teacher behavior monitoring method based on sight tracking
Technical Field
The invention belongs to the technical field of education informatization, and particularly relates to a teacher behavior monitoring method based on sight tracking.
Background
In traditional teaching, the evaluation of teaching skill is based mostly on subjective manual analysis: several experienced education experts or senior teachers sit in on a lesson and judge the teaching skill of a young teacher from their individual subjective experience. This method costs considerable labor and time, and its results are highly subjective. Using a machine instead to detect the state of a teacher in the classroom provides partly referable objective data for evaluating teaching ability, so that the evaluation can be made in a fairer, more impartial and more rigorous way. A teacher in class generates a large amount of observable data: looking at the textbook or at the students, paying attention to the students or not, standing on or off the platform, speaking or silent. Once collected, such quantifiable data can be analyzed, and experts in the education field can refer to these objective data to give young teachers a more objective teaching evaluation.
Meanwhile, if student teachers or young teachers who have just started working could know their own behavior state in the classroom in real time, they could adjust to a more suitable state promptly, which would certainly improve their teaching level.
Disclosure of Invention
The invention aims to provide a teacher behavior monitoring method based on sight tracking, which can accurately and objectively detect the teaching behavior of teachers in class.
The invention is realized by the following technical scheme:
a teacher behavior monitoring method based on sight tracking comprises the following steps:
(1) acquiring an image sequence through an image acquisition unit;
(2) detecting the image information of the human face characteristic points in each frame of image of the image sequence based on the image sequence;
(3) solving the face orientation based on the face feature points in the image;
(4) determining the three-dimensional position of a teacher in a classroom space based on the two-dimensional position of the face in the image and adding a height prior value of the teacher;
(5) combining the three-dimensional position and face orientation of the teacher in the classroom space, adding a desktop height prior value as a reference plane of the classroom space, and calculating the distribution condition of the sight of the teacher in the reference plane;
(6) establishing a three-dimensional classroom model diagram, taking the height of a desktop as a datum plane, and describing the distribution condition of the sight line and the position of a teacher on the plane by adopting a hotspot diagram;
(7) and counting the speaking time of the teacher.
The step (3) comprises the following substeps:
(3.1) detecting facial feature points in the face image region and identifying at least five feature points;
(3.2) aligning the facial feature points to obtain the spatial coordinates of the user's head;
(3.3) constructing a geometric solid model from the spatial coordinates of the head to obtain the head attitude angle.
The facial feature points comprise the left eye corner, the right eye corner, the left mouth corner, the right mouth corner and the nose tip.
The step (4) comprises the following substeps:
(4.1) taking the center position of the face from its two-dimensional position in the image;
(4.2) taking the plane at the teacher's height as the reference plane of the classroom space, marking four sampling points at the corresponding positions in the image, and constructing a transformation matrix M between the reference plane and the image;
(4.3) converting the face center position to a two-dimensional position on the classroom reference plane through the perspective transformation matrix M, and adding the teacher's prior height to obtain the three-dimensional position of the face in the classroom space.
The step (5) comprises the following substep:
(5.1) taking the plane at the prior desktop height as the reference plane of the classroom space, i.e. the plane on which the sight falls; constructing a spatial triangle model from the teacher's three-dimensional position and face orientation; and calculating the position where the teacher's sight falls on the reference plane.
Said step (6) comprises the following substeps:
(6.1) constructing a three-dimensional classroom model map equal in size to the real classroom, marking four sampling points on its desktop-height reference plane, marking the four corresponding points on the desktop-height reference plane of the classroom, and constructing a transformation matrix M1 between the model-map reference plane and the classroom reference plane;
(6.2) converting the sight position calculated in step (5) through the transformation matrix M1 to the corresponding position on the desktop-height datum plane of the three-dimensional classroom model map;
(6.3) likewise marking four sampling points on the ground reference plane of the model map and the four corresponding points on the ground reference plane of the classroom, and constructing a transformation matrix M2 between the two;
(6.4) converting the teacher position obtained in step (4) through the transformation matrix M2 to the corresponding position on the ground-height datum plane of the three-dimensional classroom model map;
(6.5) describing the distribution of the teacher's sight and position on the three-dimensional classroom model map with a hotspot map.
Said step (7) comprises the following substeps:
(7.1) extracting the face information to obtain the six feature points of the lips;
(7.2) when the aspect ratio of the lips is greater than a threshold k, judging that the mouth is open;
(7.3) when the mouth stays closed for longer than a duration T, judging that the teacher is not speaking;
(7.4) when the lip aspect ratio k1 at time t1 is greater than k, the lip aspect ratio k2 at time t2 is greater than k, and the difference between t1 and t2 is less than T, judging that the teacher is speaking.
The invention has the following advantages: the method captures images and uses image processing and pattern recognition to obtain the face orientation attitude, tracks the teacher's spatial position by localization, and, taking the desktop height as the classroom-space reference plane, combines the teacher's spatial position with the face orientation to calculate the distribution of the teacher's attention on that reference plane, so that teachers can know their own attention distribution in real time. The invention records not only the distribution of the teacher's sight but also the distribution of the teacher's position in the classroom, letting teachers clearly understand their own state, adjust their teaching methods, and improve their teaching results.
Drawings
FIG. 1 is a logic block diagram of the present invention;
FIG. 2 is an embodiment of the image acquisition unit.
Detailed Description
The invention is further described with reference to the following figures and examples:
As shown in FIG. 1, a teacher behavior monitoring method based on sight tracking comprises the following steps:
(1) acquiring an image sequence through an image acquisition unit;
the face image acquisition unit is a camera arranged above the back of a classroom and used for shooting the face area of a teacher, and a user does not need to wear any auxiliary collector and acquires images of the teacher in class through the camera. As shown in fig. 2, the installation position of the camera is the middle position behind the classroom, taking the classroom of fig. 2 as an example, the length and width are 9m and 6m respectively, and the camera is installed at the position 3m in the middle of the rear, the distance between the camera and the blackboard ranges from 6m to 8m, and the situation of the classroom can be more comprehensively collected by the camera at the middle position. The height of camera is H, and the scope of H is 1.6m to 2.2m, and this scope has corresponded most people's height, and when the height of camera and people's height wanted to be close, the angle of pitch deviation of camera and people's face was little, and is more favorable to subsequent calculation.
(2) Detecting the facial feature points in each frame of the image sequence;
(3) analyzing the facial feature points in the image to solve the face orientation, which is taken as the sight direction;
the step (3) comprises the following substeps:
(3.1) detecting facial feature points in the face region and identifying five feature points;
(3.2) aligning the five facial feature points with a method based on a convolutional neural network to obtain the spatial coordinates of the user's head;
(3.3) constructing a geometric solid model from the spatial coordinates of the head and solving the rotation axis and angle;
(3.4) letting the rotation angle be α and the components of the rotation axis in the geometric solid model be βx, βy and βz, which are converted into a quaternion by formula 1;
$$w=\cos\frac{\alpha}{2},\qquad x=\beta_x\sin\frac{\alpha}{2},\qquad y=\beta_y\sin\frac{\alpha}{2},\qquad z=\beta_z\sin\frac{\alpha}{2}\qquad\text{(formula 1)}$$
where w, x, y and z are the components of the quaternion;
(3.5) obtaining the user's head attitude angles from the quaternion through formula 2;
$$\psi=\arctan\frac{2(wz+xy)}{1-2(y^2+z^2)},\qquad \theta=\arcsin\bigl(2(wy-xz)\bigr),\qquad \phi=\arctan\frac{2(wx+yz)}{1-2(x^2+y^2)}\qquad\text{(formula 2)}$$
where ψ is the yaw angle of the face orientation, θ is the pitch angle of the face orientation, and φ is the roll angle of the face orientation;
(3.6) a ray is cast from the nose tip of the face as its end point, in the direction of the head-attitude-angle vector; this direction is taken as the estimated sight direction.
The five facial feature points are the left eye corner, the right eye corner, the left mouth corner, the right mouth corner and the nose tip.
In this embodiment of the invention, a three-dimensional geometric model is constructed from the two-dimensional image sequence through the five facial feature points, the three-dimensional head attitude information is obtained, the face orientation is solved, and the face orientation is then taken as the estimated sight direction.
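As a concrete illustration of substeps (3.1) to (3.6), the following minimal sketch estimates the head pose from the five landmarks and applies formulas 1 and 2. The patent does not name a solver, so OpenCV's solvePnP is assumed here to play the role of the geometric solid model fit, and the 3D face template coordinates are illustrative values, not taken from the patent.

```python
# Sketch of substeps (3.1)-(3.6): head pose from five facial landmarks.
# Assumptions: OpenCV's solvePnP as the geometric model fit; template in mm.
import numpy as np
import cv2

FACE_TEMPLATE = np.array([
    [-30.0,  35.0, -30.0],   # left eye corner
    [ 30.0,  35.0, -30.0],   # right eye corner
    [-25.0, -30.0, -30.0],   # left mouth corner
    [ 25.0, -30.0, -30.0],   # right mouth corner
    [  0.0,   0.0,   0.0],   # nose tip (origin)
], dtype=np.float64)

def head_pose(landmarks_2d, camera_matrix):
    """landmarks_2d: 5x2 pixel coords in the same order as FACE_TEMPLATE."""
    ok, rvec, tvec = cv2.solvePnP(
        FACE_TEMPLATE, np.asarray(landmarks_2d, dtype=np.float64),
        camera_matrix, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    # rvec is axis-angle: its norm is the angle alpha, its direction the
    # axis (beta_x, beta_y, beta_z); formula 1 gives the quaternion.
    alpha = float(np.linalg.norm(rvec))
    axis = (rvec / alpha).ravel() if alpha > 1e-9 else np.array([1.0, 0.0, 0.0])
    w = np.cos(alpha / 2.0)
    x, y, z = np.sin(alpha / 2.0) * axis
    # Formula 2: quaternion -> yaw (psi), pitch (theta), roll (phi), ZYX order.
    yaw   = np.arctan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z))
    pitch = np.arcsin(np.clip(2 * (w*y - x*z), -1.0, 1.0))
    roll  = np.arctan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
    return yaw, pitch, roll
```

The returned yaw ψ and pitch θ are the angles later consumed by the sight drop-point calculation of step (5).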
(4) Determining the three-dimensional position of the teacher in the classroom from the two-dimensional position of the face in the image plus a prior value of the teacher's height;
the step (4) comprises the following substeps:
(4.1) taking the center position of the face from the two-dimensional face coordinates in the image;
(4.2) taking the plane at the teacher's height as the reference plane of the classroom space, marking four sampling points at the corresponding positions in the image, and constructing a transformation matrix between the reference plane and the image;
(4.3) the 3 × 3 transformation matrix M is calculated according to formula 3;
$$\begin{bmatrix}x_i\\ y_i\\ 1\end{bmatrix}\sim M\begin{bmatrix}x_i'\\ y_i'\\ 1\end{bmatrix},\qquad i=1,2,3,4\qquad\text{(formula 3)}$$
where (xi', yi') are the four points marked in the image and (xi, yi) are the four sampling points on the teacher-height reference plane of the classroom space; the four correspondences determine M up to scale;
(4.4) based on the transformation matrix M, the position p(X, Y) of the teacher on the classroom-space reference plane is obtained by formula 4;
$$\begin{bmatrix}u\\ v\\ w\end{bmatrix}=M\begin{bmatrix}x\\ y\\ 1\end{bmatrix},\qquad X=\frac{u}{w},\quad Y=\frac{v}{w}\qquad\text{(formula 4)}$$
where (x, y) is the position of the point in the image, i.e. the center position of the face in the image, and w here is the homogeneous scale factor;
(4.5) the position p(X, Y) of the teacher on the teacher-height reference plane of the classroom space having been obtained as above, the teacher's prior height is added to give the three-dimensional position of the teacher in the classroom space.
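A minimal sketch of substeps (4.2) to (4.5), assuming OpenCV: cv2.getPerspectiveTransform solves formula 3 for M from the four point pairs, and cv2.perspectiveTransform applies formula 4. The point coordinates below are illustrative, not measurements from the patent.

```python
# Sketch of substeps (4.2)-(4.5): image pixels -> classroom reference plane.
import numpy as np
import cv2

# Four points marked in the image (pixels) and their counterparts on the
# teacher-height reference plane (metres). Values are illustrative only.
img_pts   = np.float32([[210, 120], [1050, 130], [1180, 660], [80, 650]])
plane_pts = np.float32([[0, 0], [9, 0], [9, 6], [0, 6]])

# Formula 3: the four correspondences determine the 3x3 matrix M.
M = cv2.getPerspectiveTransform(img_pts, plane_pts)

def face_to_classroom(face_center_px, teacher_height_m=1.7):
    """Formula 4: image point -> (X, Y) on the reference plane; the prior
    teacher height is then appended to give the 3D position (step (4.5))."""
    src = np.float32([[face_center_px]])          # shape (1, 1, 2)
    X, Y = cv2.perspectiveTransform(src, M)[0, 0]
    return float(X), float(Y), teacher_height_m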
The step (5) comprises the following substep:
(5.1) with the desktop height taken as the reference plane of the classroom space, the position where the teacher's sight falls on that plane is calculated; the sight position s(x, y) is given by formula 5;
$$s(x,y)=\Bigl(B_x+\frac{h}{\tan\theta}\sin\psi,\;\; B_y+\frac{h}{\tan\theta}\cos\psi\Bigr)\qquad\text{(formula 5)}$$
where h is the height difference between the teacher's face and the desktop, θ is the pitch angle, ψ is the yaw angle, and (Bx, By) is the teacher's position p(X, Y) on the teacher-height reference plane of the classroom space.
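A minimal sketch of the spatial triangle of substep (5.1), matching formula 5 as reconstructed above. The axis convention (pitch positive when looking downward, yaw measured from the +y axis of the classroom frame) is an assumption for illustration.

```python
# Sketch of substep (5.1): drop point of the sight ray on the desktop plane.
import math

def gaze_point(bx, by, h, pitch, yaw):
    """(bx, by): teacher position on the head-height plane (m);
    h: height difference between face and desktop (m); angles in radians."""
    if pitch <= 0:
        return None                  # level or upward gaze never hits the desk
    d = h / math.tan(pitch)          # horizontal reach of the spatial triangle
    return bx + d * math.sin(yaw), by + d * math.cos(yaw)
```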
Said step (6) comprises the sub-steps of:
(6.1) a three-dimensional classroom model map equal in size to the real classroom is constructed; four sampling points are marked on its desktop-height reference plane, and the four corresponding points are marked on the desktop-height reference plane of the classroom; a transformation matrix M1 between the model-map reference plane and the classroom reference plane is constructed;
(6.2) the sight position s(x, y) calculated in step (5) is converted, via formula 3 and formula 4, to the position s1(x1, y1) on the desktop-height datum plane of the three-dimensional classroom model map;
(6.3) likewise, four sampling points are marked on the ground reference plane of the model map and the four corresponding points on the ground reference plane of the classroom, and a transformation matrix M2 between the two is constructed;
(6.4) the teacher position p(X, Y) obtained in step (4) is converted, via formula 3 and formula 4, to the position p1(X1, Y1) on the ground-height datum plane of the three-dimensional classroom model map;
(6.5) the positions s1(x1, y1) and p1(X1, Y1) are drawn on the three-dimensional classroom model map as a hotspot map with the set pixel values;
(6.6) the hotspot map represents the density of the teacher's sight and position with a graded color scale, from red (highest) through yellow and cyan to blue (lowest).
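A minimal sketch of the hotspot-map accumulation and rendering of substeps (6.2) to (6.6), assuming OpenCV and NumPy. The map resolution, blur width and JET color map are illustrative choices; the patent specifies only a red-to-blue density scale.

```python
# Sketch of substeps (6.2)-(6.6): accumulate samples and render a hotspot map.
import numpy as np
import cv2

# Model map: 1 px = 1 cm for a 9 m x 6 m classroom (illustrative scale).
H_PX, W_PX = 600, 900
heat = np.zeros((H_PX, W_PX), dtype=np.float32)

def accumulate(x_m, y_m):
    """Add one gaze/position sample given in metres on the model-map plane."""
    c, r = int(x_m * 100), int(y_m * 100)
    if 0 <= r < H_PX and 0 <= c < W_PX:
        heat[r, c] += 1.0

def render_hotspot():
    """Blur the samples and colour them red (dense) to blue (sparse)."""
    blurred = cv2.GaussianBlur(heat, (0, 0), sigmaX=15)
    norm = cv2.normalize(blurred, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.applyColorMap(norm.astype(np.uint8), cv2.COLORMAP_JET)
```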
Said step (7) comprises the sub-steps of:
(7.1) the face information is extracted to obtain the six feature points of the lips;
(7.2) when the aspect ratio of the lips is greater than a threshold k, the mouth is judged to be open;
(7.3) when the mouth stays closed for longer than a duration T, the teacher is judged not to be speaking;
(7.4) when the lip aspect ratio k1 at time t1 is greater than k, the lip aspect ratio k2 at time t2 is greater than k, and the difference between t1 and t2 is less than T, the teacher is judged to be speaking.
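A minimal sketch of substeps (7.1) to (7.4). The threshold values k and T and the ordering of the six lip landmarks are illustrative assumptions rather than values given in the patent.

```python
# Sketch of substeps (7.1)-(7.4): speaking detection from six lip landmarks.
import numpy as np

K_OPEN   = 0.35   # lip aspect-ratio threshold k (illustrative)
T_SILENT = 1.5    # closed-mouth duration T in seconds (illustrative)

def lip_aspect_ratio(lips):
    """lips: 6x2 array; assumed order: corners [0],[3], upper lip [1],[2],
    lower lip [5],[4] (this layout is an assumption, not from the patent)."""
    vertical = (np.linalg.norm(lips[1] - lips[5]) +
                np.linalg.norm(lips[2] - lips[4])) / 2.0
    return vertical / np.linalg.norm(lips[0] - lips[3])

class SpeakingTimer:
    """Accumulates speaking time following substeps (7.2)-(7.4)."""
    def __init__(self):
        self.prev_open = None   # timestamp of the previous open-mouth frame
        self.total = 0.0        # accumulated speaking time in seconds

    def update(self, lips, t):
        """Feed one frame at time t; returns True while speaking (7.4)."""
        speaking = False
        if lip_aspect_ratio(lips) > K_OPEN:            # mouth open (7.2)
            if self.prev_open is not None and t - self.prev_open < T_SILENT:
                speaking = True                        # two opens within T
                self.total += t - self.prev_open
            self.prev_open = t
        return speaking
```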
The above describes possible embodiments of the present invention in detail. The embodiments are not intended to limit the scope of the invention; all equivalent implementations or modifications that do not depart from the invention are intended to be included within its scope.

Claims (7)

1. A teacher behavior monitoring method based on sight tracking is characterized by comprising the following steps:
(1) acquiring an image sequence through an image acquisition unit;
(2) detecting facial feature points in each frame of the image sequence;
(3) solving the face orientation from the facial feature points in the image;
(4) determining the three-dimensional position of the teacher in the classroom space from the two-dimensional position of the face in the image plus a prior value of the teacher's height;
(5) combining the teacher's three-dimensional position and face orientation with a prior desktop height taken as the reference plane of the classroom space, and calculating the distribution of the teacher's sight on that reference plane;
(6) establishing a three-dimensional classroom model map with the desktop height as the datum plane, and describing the distribution of the teacher's sight and position on that plane with a hotspot map;
(7) counting the teacher's speaking time.
2. The teacher behavior monitoring method based on sight tracking according to claim 1, wherein said step (3) comprises the following substeps:
(3.1) detecting facial feature points in the face image region and identifying at least five feature points;
(3.2) aligning the facial feature points to obtain the spatial coordinates of the user's head;
(3.3) constructing a geometric solid model from the spatial coordinates of the head to obtain the head attitude angle.
3. The teacher behavior monitoring method based on sight tracking according to claim 1 or 2, wherein the facial feature points comprise the left eye corner, the right eye corner, the left mouth corner, the right mouth corner and the nose tip.
4. The teacher behavior monitoring method based on sight tracking according to claim 1, wherein said step (4) comprises the following substeps:
(4.1) taking the center position of the face from its two-dimensional position in the image;
(4.2) taking the plane at the teacher's height as the reference plane of the classroom space, marking four sampling points at the corresponding positions in the image, and constructing a transformation matrix M between the reference plane and the image;
(4.3) converting the face center position to a two-dimensional position on the classroom reference plane through the perspective transformation matrix M, and adding the teacher's prior height to obtain the three-dimensional position of the face in the classroom space.
5. The teacher behavior monitoring method based on sight tracking according to claim 1, wherein said step (5) comprises the following substep:
(5.1) taking the plane at the prior desktop height as the reference plane of the classroom space, i.e. the plane on which the sight falls; constructing a spatial triangle model from the teacher's three-dimensional position and face orientation; and calculating the position where the teacher's sight falls on the reference plane.
6. The teacher behavior monitoring method based on sight tracking according to claim 1, wherein said step (6) comprises the following substeps:
(6.1) constructing a three-dimensional classroom model map at the same scale as the real classroom, marking four sampling points on its desktop-height reference plane, marking the four corresponding points on the desktop-height reference plane of the classroom, and constructing a transformation matrix M1 between the model-map reference plane and the classroom reference plane;
(6.2) converting the sight position calculated in step (5) through the transformation matrix M1 to the corresponding position on the desktop-height datum plane of the three-dimensional classroom model map;
(6.3) likewise marking four sampling points on the ground reference plane of the model map and the four corresponding points on the ground reference plane of the classroom, and constructing a transformation matrix M2 between the two;
(6.4) converting the teacher position obtained in step (4) through the transformation matrix M2 to the corresponding position on the ground-height datum plane of the three-dimensional classroom model map;
(6.5) describing the distribution of the teacher's sight and position on the three-dimensional classroom model map with a hotspot map.
7. The teacher behavior monitoring method based on sight tracking according to claim 1, wherein said step (7) comprises the following substeps:
(7.1) extracting the face information to obtain the six feature points of the lips;
(7.2) when the aspect ratio of the lips is greater than a threshold k, judging that the mouth is open;
(7.3) when the mouth stays closed for longer than a duration T, judging that the teacher is not speaking;
(7.4) when the lip aspect ratio k1 at time t1 is greater than k, the lip aspect ratio k2 at time t2 is greater than k, and the difference between t1 and t2 is less than T, judging that the teacher is speaking.
CN201911388632.9A 2019-12-30 2019-12-30 Teacher behavior monitoring method based on sight tracking Active CN111144333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911388632.9A CN111144333B (en) 2019-12-30 2019-12-30 Teacher behavior monitoring method based on sight tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911388632.9A CN111144333B (en) 2019-12-30 2019-12-30 Teacher behavior monitoring method based on sight tracking

Publications (2)

Publication Number Publication Date
CN111144333A 2020-05-12
CN111144333B 2023-04-28

Family

ID=70521494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911388632.9A Active CN111144333B (en) 2019-12-30 2019-12-30 Teacher behavior monitoring method based on sight tracking

Country Status (1)

Country Link
CN (1) CN111144333B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743263A (en) * 2021-08-23 2021-12-03 华中师范大学 Method and system for measuring non-verbal behaviors of teacher

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A two-stage gaze tracking method based on facial orientation constraints
CN107861625A (en) * 2017-12-04 2018-03-30 北京易真学思教育科技有限公司 Gaze tracking system and method based on 3D space model
CN110582781A (en) * 2018-04-11 2019-12-17 视信有限责任公司 Sight tracking system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A two-stage gaze tracking method based on facial orientation constraints
CN107861625A (en) * 2017-12-04 2018-03-30 北京易真学思教育科技有限公司 Gaze tracking system and method based on 3D space model
CN110582781A (en) * 2018-04-11 2019-12-17 视信有限责任公司 Sight tracking system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
苏海明; 侯振杰; 梁久祯; 许艳; 李兴: "Gaze tracking method using geometric features of the human eye", Journal of Image and Graphics *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743263A (en) * 2021-08-23 2021-12-03 华中师范大学 Method and system for measuring non-verbal behaviors of teacher
WO2023024155A1 (en) * 2021-08-23 2023-03-02 华中师范大学 Method and system for measuring non-verbal behavior of teacher
CN113743263B (en) * 2021-08-23 2024-02-13 华中师范大学 Teacher nonverbal behavior measurement method and system

Also Published As

Publication number Publication date
CN111144333B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
CN109657553B (en) Student classroom attention detection method
Lim et al. Automated classroom monitoring with connected visioning system
CN110448870B (en) Human body posture training method
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN107103298A (en) Chin-up number system and method for counting based on image procossing
Cote et al. Video summarization for remote invigilation of online exams
CN111814556A (en) Teaching assistance method and system based on computer vision
Zaletelj Estimation of students' attention in the classroom from kinect features
CN109034099A (en) A kind of expression recognition method and device
CN111563449A (en) Real-time classroom attention detection method and system
CN107103293B (en) It is a kind of that the point estimation method is watched attentively based on joint entropy
CN112200138B (en) Classroom learning situation analysis method based on computer vision
CN111611854B (en) Classroom condition evaluation method based on pattern recognition
CN113705349A (en) Attention power analysis method and system based on sight estimation neural network
Xu et al. Classroom attention analysis based on multiple euler angles constraint and head pose estimation
CN115937928A (en) Learning state monitoring method and system based on multi-vision feature fusion
CN115082266A (en) Student education subject comprehensive development analysis and evaluation system based on deep learning
CN111144333A (en) Teacher behavior monitoring method based on sight tracking
CN104063689B (en) Face image identification method based on binocular stereoscopic vision
Guo et al. PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training
CN112926364B (en) Head gesture recognition method and system, automobile data recorder and intelligent cabin
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
CN114639168B (en) Method and system for recognizing running gesture
Zhou et al. Stuart: Individualized Classroom Observation of Students with Automatic Behavior Recognition And Tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant