CN111563449A - Real-time classroom attention detection method and system - Google Patents

Real-time classroom attention detection method and system

Info

Publication number
CN111563449A
Authority
CN
China
Prior art keywords
attention
real
detection method
classroom
dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010366511.0A
Other languages
Chinese (zh)
Inventor
肖翔 (Xiao Xiang)
姜飞 (Jiang Fei)
申瑞民 (Shen Ruimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Information Industry (Group) Co., Ltd.
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202010366511.0A
Publication of CN111563449A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a real-time classroom attention detection method and system. The detection method first performs face recognition and head pose estimation on a real-time image to obtain each object's position and corresponding attention Euler angle, then maps the object positions and attention Euler angles into a three-dimensional space to obtain sight-line rays in that space, and finally derives an attention detection result for each object from the sight-line rays. Compared with the prior art, the method offers high processing speed and high accuracy, and yields good detection results even at low resolution.

Description

Real-time classroom attention detection method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a real-time classroom attention detection method and system.
Background
In traditional teaching, events such as teacher behavior, student behavior, teacher-student interaction, and classroom atmosphere are usually evaluated manually, through student questionnaires, teachers' own observations, or classroom inspections. Such manual evaluation is overly subjective, demands a great deal of observers' time and energy for statistical analysis, and has too long a feedback cycle, so teachers' classroom behavior cannot be adjusted in time.
With the rapid development of artificial intelligence and the leap in computing power, face detection algorithms can accurately and quickly locate faces in an image and extract their features. As the technology has matured, face detection has been widely applied in railway security systems, public security organs, mobile policing, and other specific settings, improving service efficiency and reducing labor costs.
In some schools, face detection algorithms have been applied to daily learning and life.
Patent application CN104517102A discloses an attention detection method comprising: acquiring scene images in a classroom; locating faces and computing face orientation to convert the two-dimensional position of a face in the image into a two-dimensional position on a sitting-height reference plane in the classroom; adding a prior value of student sitting height to obtain the three-dimensional position of the face in the classroom; and combining the three-dimensional face position with the face orientation to compute the student's point of attention on the teaching board. However, that method estimates head pose with a regression forest, which performs poorly: processing many faces is too slow, causing large delays, and the predicted Euler angles are only binned into a few categories, so too few features are extracted and accuracy is hard to guarantee.
Patent application CN109657553A discloses an attention detection method that mainly judges a student's attention from eye state, comprising: collecting teaching images of the teacher and facial images of the students; extracting each student's eye state from the frame images based on the set depth data; computing the gaze direction of the student's eyes; extracting facial features algorithmically and detecting eye state (in practice, prolonged eye closure or prolonged failure to detect the eyes is classified as distraction); and recording the moments at which a student's attention lapses, then cross-referencing them with the teaching images to analyze the cause. This method has the following disadvantages: 1) judging attention from eye openness and gaze direction is too costly: reliable eye data are hard to obtain, installing multiple cameras is expensive, and the achievable benefit is limited; 2) classrooms are large and crowded, so back-row faces reach a resolution of only about 32 × 32; 3) classroom cameras are usually mounted at the four corners of the room, where occlusion is severe in a normal classroom teaching environment.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a real-time classroom attention detection method and system with high processing speed and high accuracy that obtain good results even at low resolution.
The purpose of the invention can be realized by the following technical scheme:
a real-time classroom attention detection method first performs face recognition and head pose estimation on a real-time image to obtain each object's position and corresponding attention Euler angle, then maps the object positions and attention Euler angles into a three-dimensional space to obtain sight-line rays in that space, and finally derives an attention detection result for each object from the sight-line rays.
Further, face recognition is performed with a deep learning model based on a multitask convolutional neural network to obtain face-region and facial-landmark coordinates, and the nose coordinate among the facial landmarks is used as the object position.
Further, the human face region is processed by adopting a head pose estimation deep learning model, the head pose of the object taking the front view as a reference frame is obtained, and the attention Euler angle of the object is obtained based on the head pose.
Further, obtaining the attention euler angle of the subject based on the head pose is specifically:
and setting a reference Euler angle, and subtracting the reference Euler angle from the head posture to obtain the attention Euler angle.
Further, the object position and the attention Euler angle are mapped to the three-dimensional space by a preset perspective transformation matrix.
Further, the perspective transformation matrix is obtained by the following method:
a certain frame of image is acquired, four vertexes forming a square are taken as reference points, and a perspective transformation matrix is generated based on the relation between the reference points and coordinate points of the corresponding square in a three-dimensional space.
Further, the attention detection result includes whether the subject is in a head-down state and whether the subject has an attention abnormality.
Further, an object whose head pitch angle is below α is marked as being in a head-down state, where 30° ≤ α ≤ 60°.
Further, according to the sight-line ray state, an object whose attention slope is greater than tan(θ/180) and whose concentration is greater than twice the average concentration is marked as an attention abnormality, where 45° ≤ θ ≤ 150°, wherein
the attention slope is the slope of the projection of the sight-line ray onto the classroom plane of the three-dimensional space;
the concentration is obtained by the following formula:
f(x) = e^(−(x − l)² / (2σ²))
wherein f(x) represents the concentration, x represents the abscissa of the intersection formed by a sight-line ray and the blackboard axis, l is the central abscissa, the mean of the intersections of all sight-line rays with the blackboard axis being taken as the center, and σ is the Gaussian kernel bandwidth.
The invention provides a real-time classroom attention detection system which comprises a memory and a processor, wherein the memory stores a computer program, and the processor calls the computer program to execute the steps of the method.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention automatically captures face information through remote face detection, extracts multiple facial features, detects students' attention with a dedicated algorithm, and enables qualitative and quantitative statistical analysis of the classroom environment. It can accurately obtain students' attention in real time and feed it back to the teacher, helping the teacher make in-class judgments and adjust his or her own behavior to improve teaching efficiency. Combined with the school's management system, the measured attention can also serve as an index for evaluating classroom quality and a teacher's teaching results, and it can reveal students with poor learning efficiency in time, so that staff can step in promptly, offer counseling, and improve those students' learning efficiency.
2. The MTCNN network is adopted: processing is fast and accurate, and attention detection works even on low-resolution images.
3. In a classroom scene, the invention needs neither multiple cameras nor a high-resolution camera; with a low-resolution camera it still matches the detection performance of other models or algorithms, effectively reducing the cost of attention detection.
4. With MTCNN face detection the delay is small: each frame of a live classroom is processed in under 100 ms, and the total time during testing is under 1 s, so real-time operation is well guaranteed and the detection is effective.
5. The deep learning model used for head pose estimation is faster than traditional machine learning models and yields complete, highly accurate head-pose Euler angles.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a top view of a three-dimensional space.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Abbreviations and Key term definitions
MTCNN: the multitask convolutional neural network can quickly realize the detection of the human face area and the detection of the human face key points.
HopeNet: an accurate and easy-to-use head pose estimation network is used for head pose estimation.
Example 1
As shown in fig. 1, the present embodiment provides a real-time classroom attention detection method, including:
Step S101: capture real-time classroom image information with a camera, sample frames at a set interval, process each sampled image, and determine the correspondence between the image and the three-dimensional space.
The correspondence between the image and the three-dimensional space is determined as follows: take one frame and select as reference points four vertices that form a square in the classroom (typically the square is formed along the first row of desks); these reference points are mapped to the four coordinate points of the corresponding square in the three-dimensional space, and the perspective transformation matrix is obtained from the four image points and the four spatial square points.
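The matrix construction in step S101 can be sketched in pure Python. This is an illustrative stand-in for a library routine such as OpenCV's `cv2.getPerspectiveTransform`; the pixel and floor-plan coordinates below are invented for the example, not taken from the patent.

```python
def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H (with h33 = 1) mapping the four
    image reference points `src` onto the four plane points `dst`."""
    # Each correspondence (x, y) -> (u, v) contributes two linear
    # equations in the eight unknowns h11..h32.
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs.append(v)
    # Gaussian elimination with partial pivoting on the 8x8 system.
    n = 8
    m = [row + [b] for row, b in zip(rows, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (m[r][n] - sum(m[r][c] * h[c] for c in range(r + 1, n))) / m[r][r]
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_h(h, x, y):
    """Map an image point through H, including the projective divide."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Four desk corners seen in the image, mapped to a unit square on the
# classroom floor plan (all coordinates illustrative).
src = [(100, 400), (540, 400), (620, 80), (20, 80)]
dst = [(0, 0), (1, 0), (1, 1), (0, 1)]
H = perspective_matrix(src, dst)
```

Once H is known for a camera position, any detected image point can be pushed through `apply_h` to land in the common classroom frame.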
And step S102, carrying out image analysis by using a multi-target face recognition deep learning model based on a multi-task convolutional neural network (MTCNN), and recognizing the facial area of the student.
MTCNN analyzes the whole picture to obtain the face regions and facial landmarks, and the face-region coordinates and landmark coordinates are retained. A student's position is defined as the nose coordinate among his or her facial landmarks.
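Selecting the nose landmark as the student position can be sketched as below. The dict layout mimics what MTCNN-style detector packages commonly return; the field names (`box`, `keypoints`, `nose`) are illustrative assumptions, not the patent's or any specific library's exact API.

```python
def student_positions(detections):
    """Keep each detected face and use its nose keypoint as the position."""
    positions = []
    for det in detections:
        x1, y1, x2, y2 = det["box"]               # face-region coordinates
        nose_x, nose_y = det["keypoints"]["nose"] # facial landmark
        # Sanity check: a plausible nose landmark lies inside its face box.
        if x1 <= nose_x <= x2 and y1 <= nose_y <= y2:
            positions.append((nose_x, nose_y))
    return positions

dets = [
    {"box": (10, 10, 60, 70), "keypoints": {"nose": (35, 45)}},
    {"box": (200, 30, 250, 90), "keypoints": {"nose": (225, 60)}},
]
print(student_positions(dets))  # [(35, 45), (225, 60)]
```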
In step S103, head pose estimation is performed.
Using the face-region and landmark coordinates of each student from the previous step, crop each student's face from the image and analyze it with a head pose estimation deep learning model to obtain the Euler angles relative to the image's front view, i.e. the student's attention Euler angles, which determine the direction of the student's attention.
In this embodiment, the head pose estimation deep learning model adopts a HopeNet model.
The image obtained from the camera is a perspective view, while head pose estimation yields the head pose in the front-view reference frame, so an additional transformation is needed to obtain the true pose direction. To keep the students' directions relatively accurate, the head pose of looking straight at the front platform is taken as the reference Euler angle; subtracting this reference from each student's model-estimated head pose gives the Euler angle of the student's head deflection, which is the attention angle. The reference Euler angle can be set manually or taken as the average of all students' pose directions.
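The reference-angle correction can be sketched as follows, using the class-average pose as the reference (one of the two options the text gives). Angles are (yaw, pitch, roll) triples in degrees; the values are made up for illustration.

```python
def attention_angles(head_poses, reference=None):
    """Subtract a reference Euler angle from each estimated head pose.

    head_poses: list of (yaw, pitch, roll) tuples in degrees.
    reference:  the reference Euler angle; defaults to the class average.
    """
    if reference is None:
        n = len(head_poses)
        reference = tuple(sum(p[i] for p in head_poses) / n for i in range(3))
    return [tuple(p[i] - reference[i] for i in range(3)) for p in head_poses]

poses = [(12.0, -8.0, 1.0), (-4.0, -12.0, -1.0), (4.0, -10.0, 0.0)]
rel = attention_angles(poses)
# The mean pose is (4.0, -10.0, 0.0), so the third student's attention
# angle is (0, 0, 0): looking "straight ahead" relative to the class.
```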
Step S104: map the student positions and attention information into the three-dimensional space, and build a spatial direction map from the students' positions and attention.
Because student coordinates differ across the perspective views from different camera positions, a perspective transformation of the classroom view is required. For each camera position, the four spatial coordinates of a known square must be manually mapped into the established spatial frame to obtain that position's perspective transformation matrix; every face captured by that camera can then be mapped onto the two-dimensional rectangle of the spatial plane.
Using the perspective transformation matrix obtained in step S101, map all student coordinates in the picture into the three-dimensional space; then map each student's attention angle into the space, and draw the attention direction map from the mapped student coordinates and attention angles to obtain the sight-line rays, as shown in fig. 2.
In step S105, an abnormal state is identified and recorded based on the sight line ray information.
Take the xoy plane of the three-dimensional space; its top view forms the classroom plane, with the horizontal axis as x and the vertical axis as y, and the blackboard axis set on the side facing the students (in this embodiment, the blackboard axis is y = −2000).
Record each intersection of a sight-line ray with the blackboard axis and take the mean of the intersections as the center. The Gaussian kernel of each intersection's distance to the center is used as that student's concentration, i.e.
f(x) = e^(−(x − l)² / (2σ²))
wherein x is the intersection abscissa, l is the central abscissa, and σ is the Gaussian kernel bandwidth.
Convert the Euler angle into a direction vector and project it onto the classroom plane; the attention slope is then computed from this projection. Each attention Euler angle forms a ray in the three-dimensional space, and the slope is that of the ray's projection onto the plane.
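The slope computation can be sketched as below. The yaw/pitch-to-vector convention is an assumption; the patent does not spell out its axis order.

```python
import math

def direction_vector(yaw_deg, pitch_deg):
    """Unit gaze direction from yaw (left/right) and pitch (up/down),
    under an assumed convention: yaw 0 looks along +y, pitch 0 is level."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.cos(pitch) * math.cos(yaw)
    z = math.sin(pitch)
    return (x, y, z)

def attention_slope(yaw_deg, pitch_deg):
    """Slope (dy/dx) of the gaze ray's projection onto the xoy plane."""
    x, y, _ = direction_vector(yaw_deg, pitch_deg)
    return math.inf if x == 0 else y / x
```

A student looking straight ahead (yaw 0) has an infinite slope, i.e. a projection parallel to the y axis; larger sideways yaw flattens the slope.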
In normal teaching, an excessively deviated viewing angle is not normal in class and likely means the student is not concentrating. In this embodiment, abnormal states are therefore identified and recorded as follows: 1) objects whose head pitch angle is below α are marked as heads-down, with α typically between 30° and 60°; 2) objects whose attention slope in the classroom plane is greater than tan(θ/180) and whose concentration is greater than twice the average concentration are marked as abnormal, with θ typically between 45° and 150°.
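The two flagging rules of this embodiment can be sketched as below. The α and θ values are picked from the stated ranges, the sign convention for "pitch below α" is an assumption, and the record fields are illustrative.

```python
import math

ALPHA = 45.0   # heads-down pitch threshold, within the stated 30..60 degrees
THETA = 90.0   # slope threshold parameter, within the stated 45..150 degrees

def flag_students(records, alpha=ALPHA, theta=THETA):
    """records: dicts with 'pitch' (deg), 'slope', and 'concentration'."""
    avg_c = sum(r["concentration"] for r in records) / len(records)
    flags = []
    for r in records:
        # Assumed sign convention: looking down gives a negative pitch.
        heads_down = r["pitch"] < -alpha
        abnormal = (abs(r["slope"]) > math.tan(theta / 180)
                    and r["concentration"] > 2 * avg_c)
        flags.append({"heads_down": heads_down, "abnormal": abnormal})
    return flags

flags = flag_students([
    {"pitch": -50, "slope": 0.2, "concentration": 0.3},
    {"pitch": -10, "slope": 1.0, "concentration": 0.9},
    {"pitch": 0, "slope": 0.1, "concentration": 0.1},
])
```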
The per-frame coordinates, attention Euler angles, and the coordinates of heads-down or abnormal objects are recorded in a database for further analysis and use.
Example 2
This embodiment displays the attention situation in real time on the basis of embodiment 1: the coordinates and head pose angle of each student are drawn, heads-down and abnormal students are labeled, and the average concentration of the class is displayed.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions that can be obtained by a person skilled in the art through logic analysis, reasoning or limited experiments based on the prior art according to the concept of the present invention should be within the protection scope determined by the present invention.

Claims (10)

1. A real-time classroom attention detection method is characterized in that firstly, face recognition and head posture estimation are carried out on real-time images to obtain object positions and corresponding attention Euler angles, then the object positions and the attention Euler angles are mapped to a three-dimensional space to obtain sight rays in the three-dimensional space, and finally, attention detection results of all objects are obtained based on the sight rays.
2. The real-time classroom attention detection method of claim 1, wherein the face recognition is performed using a deep learning model based on a multitask convolutional neural network to obtain face regions and face feature point coordinates, and the nose coordinate position of a face feature point is used as the object position.
3. The real-time classroom attention detection method of claim 2, wherein the face region is processed using a head pose estimation deep learning model to obtain a head pose of the subject with a front view as a reference frame, and the Euler angle of attention of the subject is obtained based on the head pose.
4. The real-time classroom attention detection method of claim 3, wherein obtaining the Euler angle of attention of the subject based on the head pose is specifically:
and setting a reference Euler angle, and subtracting the reference Euler angle from the head posture to obtain the attention Euler angle.
5. The real-time classroom attention detection method of claim 1, wherein object position and attention euler angles are mapped to the three dimensional space through a pre-set perspective transformation matrix.
6. The real-time classroom attention detection method of claim 5, wherein the perspective transformation matrix is obtained by:
a certain frame of image is acquired, four vertexes forming a square are taken as reference points, and a perspective transformation matrix is generated based on the relation between the reference points and coordinate points of the corresponding square in a three-dimensional space.
7. The real-time classroom attention detection method of claim 1, wherein the attention detection result includes whether the subject is in a heads-down state and whether the subject has an attention abnormality.
8. The real-time classroom attention detection method of claim 7, wherein objects having a head pitch angle below α are labeled as being in a heads-down state, where 30° ≤ α ≤ 60°.
9. The real-time classroom attention detection method of claim 7, wherein objects having an attention slope greater than tan(θ/180) and a concentration greater than twice the average concentration are labeled as attention abnormalities according to the sight-line ray status, where 45° ≤ θ ≤ 150°, wherein,
the attention slope is the slope of the projection of the sight ray on a classroom plane of the three-dimensional space;
the concentration is obtained by the following formula:
f(x) = e^(−(x − l)² / (2σ²))
wherein f(x) represents the concentration, x represents the abscissa of the intersection formed by a sight-line ray and the blackboard axis, l is the central abscissa, the mean of the intersections of all sight-line rays with the blackboard axis being taken as the center, and σ is the Gaussian kernel bandwidth.
10. A real-time classroom attention detection system comprising a memory and a processor, the memory storing a computer program, wherein the processor invokes the computer program to perform the steps of the method according to any one of claims 1-9.
CN202010366511.0A 2020-04-30 2020-04-30 Real-time classroom attention detection method and system Pending CN111563449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010366511.0A CN111563449A (en) 2020-04-30 2020-04-30 Real-time classroom attention detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010366511.0A CN111563449A (en) 2020-04-30 2020-04-30 Real-time classroom attention detection method and system

Publications (1)

Publication Number Publication Date
CN111563449A (en) 2020-08-21

Family

ID=72071752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010366511.0A Pending CN111563449A (en) 2020-04-30 2020-04-30 Real-time classroom attention detection method and system

Country Status (1)

Country Link
CN (1) CN111563449A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362659A (en) * 2021-06-17 2021-09-07 上海松鼠课堂人工智能科技有限公司 Dynamic projection control method and system for multimedia teaching
CN113743263A (en) * 2021-08-23 2021-12-03 华中师范大学 Method and system for measuring non-verbal behaviors of teacher
CN113807330A (en) * 2021-11-19 2021-12-17 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Three-dimensional sight estimation method and device for resource-constrained scene
CN113823135A (en) * 2021-09-30 2021-12-21 创泽智能机器人集团股份有限公司 Robot-based auxiliary teaching method and equipment
CN115861427A (en) * 2023-02-06 2023-03-28 成都智元汇信息技术股份有限公司 Indoor personnel dynamic positioning method and device based on image recognition and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517102A (en) * 2014-12-26 2015-04-15 华中师范大学 Method and system for detecting classroom attention of student
CN107609517A (en) * 2017-09-15 2018-01-19 华中科技大学 A kind of classroom behavior detecting system based on computer vision
CN109284737A (en) * 2018-10-22 2019-01-29 广东精标科技股份有限公司 A kind of students ' behavior analysis and identifying system for wisdom classroom
CN109657553A (en) * 2018-11-16 2019-04-19 江苏科技大学 A kind of student classroom attention detection method
CN109886246A (en) * 2019-03-04 2019-06-14 上海像我信息科技有限公司 A kind of personage's attention judgment method, device, system, equipment and storage medium
CN109902630A (en) * 2019-03-01 2019-06-18 上海像我信息科技有限公司 A kind of attention judgment method, device, system, equipment and storage medium
WO2020024400A1 (en) * 2018-08-02 2020-02-06 平安科技(深圳)有限公司 Class monitoring method and apparatus, computer device, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517102A (en) * 2014-12-26 2015-04-15 华中师范大学 Method and system for detecting classroom attention of student
CN107609517A (en) * 2017-09-15 2018-01-19 华中科技大学 A kind of classroom behavior detecting system based on computer vision
WO2020024400A1 (en) * 2018-08-02 2020-02-06 平安科技(深圳)有限公司 Class monitoring method and apparatus, computer device, and storage medium
CN109284737A (en) * 2018-10-22 2019-01-29 广东精标科技股份有限公司 A kind of students ' behavior analysis and identifying system for wisdom classroom
CN109657553A (en) * 2018-11-16 2019-04-19 江苏科技大学 A kind of student classroom attention detection method
CN109902630A (en) * 2019-03-01 2019-06-18 上海像我信息科技有限公司 A kind of attention judgment method, device, system, equipment and storage medium
CN109886246A (en) * 2019-03-04 2019-06-14 上海像我信息科技有限公司 A kind of personage's attention judgment method, device, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BUI NGOC ANH et al.: "A Computer-Vision Based Application for Student Behavior Monitoring in Classroom", Applied Sciences *
XIN XU et al.: "Classroom Attention Analysis Based on Multiple Euler Angles Constraint and Head Pose Estimation", International Conference on Multimedia Modeling *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362659A (en) * 2021-06-17 2021-09-07 上海松鼠课堂人工智能科技有限公司 Dynamic projection control method and system for multimedia teaching
CN113743263A (en) * 2021-08-23 2021-12-03 华中师范大学 Method and system for measuring non-verbal behaviors of teacher
WO2023024155A1 (en) * 2021-08-23 2023-03-02 华中师范大学 Method and system for measuring non-verbal behavior of teacher
CN113743263B (en) * 2021-08-23 2024-02-13 华中师范大学 Teacher nonverbal behavior measurement method and system
CN113823135A (en) * 2021-09-30 2021-12-21 创泽智能机器人集团股份有限公司 Robot-based auxiliary teaching method and equipment
CN113807330A (en) * 2021-11-19 2021-12-17 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Three-dimensional sight estimation method and device for resource-constrained scene
CN115861427A (en) * 2023-02-06 2023-03-28 成都智元汇信息技术股份有限公司 Indoor personnel dynamic positioning method and device based on image recognition and medium

Similar Documents

Publication Publication Date Title
CN111563449A (en) Real-time classroom attention detection method and system
Hold-Geoffroy et al. A perceptual measure for deep single image camera calibration
CN109284737A (en) A kind of students ' behavior analysis and identifying system for wisdom classroom
CN109284738B (en) Irregular face correction method and system
Lim et al. Automated classroom monitoring with connected visioning system
CN110197169A (en) A kind of contactless learning state monitoring system and learning state detection method
CN111507592B (en) Evaluation method for active modification behaviors of prisoners
Wang et al. Investigation into recognition algorithm of helmet violation based on YOLOv5-CBAM-DCN
CN105719248B (en) A kind of real-time Facial metamorphosis method and its system
Nonaka et al. Dynamic 3d gaze from afar: Deep gaze estimation from temporal eye-head-body coordination
CN109886246A (en) A kind of personage's attention judgment method, device, system, equipment and storage medium
CN114820924A (en) Method and system for analyzing museum visit based on BIM and video monitoring
CN115223179A (en) Classroom teaching data processing method and system based on answer codes
CN115933930A (en) Method, terminal and device for analyzing attention of learning object in education meta universe
CN114677644A (en) Student seating distribution identification method and system based on classroom monitoring video
CN112861809B (en) Classroom head-up detection system based on multi-target video analysis and working method thereof
CN114565976A (en) Training intelligent test method and device
CN112801038B (en) Multi-view face in-vivo detection method and system
CN111898552B (en) Method and device for distinguishing person attention target object and computer equipment
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
Su et al. Smart training: Mask R-CNN oriented approach
CN115829234A (en) Automatic supervision system based on classroom detection and working method thereof
Martin et al. An evaluation of different methods for 3d-driver-body-pose estimation
CN111652045B (en) Classroom teaching quality assessment method and system
CN111144333B (en) Teacher behavior monitoring method based on sight tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210909

Address after: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Applicant after: SHANGHAI JIAO TONG University

Applicant after: Shanghai Information Industry (Group) Co.,Ltd.

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Applicant before: SHANGHAI JIAO TONG University

RJ01 Rejection of invention patent application after publication

Application publication date: 20200821
