CN116110091A - Online learning state monitoring system - Google Patents


Info

Publication number
CN116110091A
Authority
CN
China
Prior art keywords
learning state
students
learning
monitoring system
state monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211416724.5A
Other languages
Chinese (zh)
Inventor
于波 (Yu Bo)
叶朝挺 (Ye Chaoting)
石屹 (Shi Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202211416724.5A priority Critical patent/CN116110091A/en
Publication of CN116110091A publication Critical patent/CN116110091A/en
Pending legal-status Critical Current

Classifications

    Leaf codes (each under G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING):
    • G06V 40/161: Human faces; detection, localisation, normalisation
    • G06Q 10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q 50/205: Education administration or guidance
    • G06V 10/82: Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06V 40/168: Human faces; feature extraction, face representation
    • G06V 40/174: Facial expression recognition
    • G06V 40/18: Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Strategic Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Technology (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Game Theory and Decision Science (AREA)

Abstract

The invention provides an online learning state monitoring system and relates to the field of computer vision. The system comprises a face recognition system and a learning state evaluation system. The face recognition system captures video of students during online learning through a camera and, after processing the video frames with a convolutional neural network, obtains facial key-point data for each student in class, including the open/closed state of the eyes and the head pose. These data are input into the learning state evaluation system, which processes them with a fuzzy-matrix analysis algorithm and outputs a score for the student's learning state. The system is designed to address two problems of existing online learning: students' low learning efficiency, and the difficulty teachers face in obtaining direct feedback on students' learning states. With it, teachers can follow students' learning states in real time while teaching online and obtain immediate feedback, helping students improve their online learning efficiency.

Description

Online learning state monitoring system
Technical Field
The invention relates to the field of computer vision, in particular to an online learning state monitoring system.
Background
Image recognition is an important branch of computer vision: image recognition technology enables effective detection and recognition of specific target objects. Face detection is a major direction within image recognition and is widely applicable to fields such as photography, security, smart home appliances, medical care, and epidemic prevention.
When students choose online education, they face the problems of inconvenient teacher-student communication, low motivation, and low learning efficiency. At present there is no comprehensive, reasonable, and reliable solution to this problem on the market. The online learning state monitoring system is therefore designed to evaluate the learning state of students during online study. It allows teachers to supervise students' learning more conveniently and with less effort, makes full use of the convenience of online teaching, and addresses a pain point of online education: poor learning states that are hard for teachers to notice. The system thus has practical significance and good application prospects.
Disclosure of Invention
To address these problems, the invention provides an online learning state monitoring system with the following technical scheme.
The application provides an online learning state monitoring system, which comprises a face recognition system and a learning state evaluation system.
The face recognition system captures video of students during online learning through a camera. After the video frames are processed by a convolutional neural network, it obtains facial key-point data for each student in class, including the open/closed state of the eyes, the head pose, and the emotional expression. These data are input into the learning state evaluation system, which processes them with a fuzzy-matrix analysis algorithm and outputs a score for the student's learning state.
The face recognition system comprises an image capturing module, an image preprocessing module and a face detection module.
The image capturing module records video of students during online learning through the camera and extracts still frames from the video.
The image preprocessing module adjusts the gray level and size of the frames captured by the image capturing module and outputs two processed pictures as input for the subsequent neural network.
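A minimal sketch of this preprocessing step is given below; the 224x224 target size, the luminance weights, and the nearest-neighbour resampling are illustrative assumptions, not values specified in this document.

```python
import numpy as np

def preprocess(frame_rgb, out_size=(224, 224)):
    """Gray-level conversion plus resize, as a minimal stand-in for the
    image preprocessing module (target size is an assumption)."""
    # ITU-R BT.601 luminance weights for RGB -> gray
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])
    # nearest-neighbour resize; a real system would use a proper
    # interpolator such as cv2.resize with bilinear filtering
    rows = np.linspace(0, gray.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_size[1]).astype(int)
    return gray[np.ix_(rows, cols)]
```

In a real pipeline this function would run once per extracted frame before the face detection networks.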
The face detection module comprises several convolutional neural networks. It judges whether usable face information is present in the image and records the eye open/closed state and the head pose from that face information. Taking the pictures as input, the module outputs these two results, which are then fed into the learning state evaluation system to analyze the student's learning state.
The learning state evaluation system comprises a state rating module, in which the eye open/closed state and head pose data output by the face detection module are processed with the fuzzy-matrix analysis algorithm, and the student's learning state score is finally output.
The invention has the advantage that the online learning state of students can be judged more reliably: the score combines multiple indices, so no single index that is too large or too small can dominate the overall result, making the evaluation objective and accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a block diagram of an on-line learning state monitoring system provided by the present invention;
Detailed Description
Embodiments of the invention are described below with reference to the drawings and a detailed example. The following embodiments and drawings are merely illustrative of the invention.
As shown in FIG. 1, the online learning state monitoring system consists of an underlying pyramid neural network, a feature selection layer, and a fuzzy matrix layer:
the pyramid layer mainly recognizes facial key-point features, and the obtained feature results are corrected by the subsequent prediction layer and loss layer;
the feature selection layer screens the features obtained from the pyramid layer, selects the required ones, and processes the data;
the fuzzy matrix layer first uses features such as the eye open/closed state to judge the student's in-class state. If the features are within the standard range, the extracted features are input into the fuzzy matrix for calculation, yielding the final class-state score.
The pyramid layers include a low-level feature pyramid layer, a context-sensitive prediction layer, and a pyramid-box loss layer.
The low-level feature pyramid layer mainly fuses the face context information contained in high-level feature maps into the low-level feature maps. The low-level feature pyramid is built not from the topmost layer downward but starting from a middle layer. As a result, when the captured picture is large, more contextual information within the field of view is retained, so the student's facial expression can be acquired with better accuracy.
The feature selection layer comprises emotion feature selection, head feature selection and eye closing state feature selection:
the emotion feature selection uses a convolutional neural network to identify the possible expression of the student in the class, the algorithm predicts the facial expression of the face by using four channels of x, y, w and h, the result obtained by the face recognition expression of each layer is weighted and fused, a comprehensive anchor is obtained, and the comprehensive anchor is learned and compared with the existing expression in the face library, so that the emotion feature is obtained through the expression of the student in the class.
Head feature selection quantifies the student's head pose as a pitch angle, a yaw angle, and a roll angle. Face recognition is used to obtain the facial key points in the camera's world coordinate system. The transformation between the pixel coordinate system and world coordinates then yields the required pose: the affine transformation from the 3D model to the 2D face in the picture, i.e. the translation matrix T and the rotation matrix R are determined. The Euler angles of T and R are then solved with the axis-angle method to obtain the head feature.
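The final step, recovering Euler angles from the rotation matrix R, can be sketched as below. The decomposition convention chosen here (pitch about x, yaw about y, roll about z) is an assumption, since the text does not fix one; in practice R itself would come from a pose solver such as OpenCV's solvePnP followed by Rodrigues.

```python
import numpy as np

def rotation_to_euler_deg(R):
    """Decompose a 3x3 rotation matrix into (pitch, yaw, roll) in degrees.
    Convention assumed: pitch about x, yaw about y, roll about z."""
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        pitch = np.arctan2(R[2, 1], R[2, 2])  # rotation about x
        yaw   = np.arctan2(-R[2, 0], sy)      # rotation about y
        roll  = np.arctan2(R[1, 0], R[0, 0])  # rotation about z
    else:  # gimbal lock: yaw is near +/-90 degrees
        pitch = np.arctan2(-R[1, 2], R[1, 1])
        yaw   = np.arctan2(-R[2, 0], sy)
        roll  = 0.0
    return np.degrees([pitch, yaw, roll])
```

The three returned angles are exactly the pitch/yaw/roll quantities the head-feature step feeds into the evaluation stage.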
Eye closure feature selection operates on key points, including the outline of the whole eye and key edge points of the upper and lower eyelids. After the relevant mathematical derivation, the shape of the eye can be approximated as an ellipse, so the approximate area of the whole eye can be obtained from the returned key-point coordinates and the ellipse area formula; the approximate area of the open part of the eye is obtained in the same way. The percentage of the eye that is open is then the ratio of the two areas.
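A minimal sketch of this area-ratio computation follows; the specific landmark arguments, and the assumption that both ellipses share the corner-to-corner horizontal axis, are illustrative choices, since the text does not name particular key points.

```python
import math

def eye_open_ratio(outer_corner, inner_corner,
                   upper_lid, lower_lid,
                   upper_outline, lower_outline):
    """Approximate both the whole eye and the visible opening as ellipses
    sharing the corner-to-corner horizontal axis, then return the ratio
    of the opening's area to the whole eye's area."""
    a = math.dist(outer_corner, inner_corner) / 2          # shared semi-major axis
    b_full = math.dist(upper_outline, lower_outline) / 2   # whole-eye semi-minor axis
    b_open = math.dist(upper_lid, lower_lid) / 2           # current lid opening
    area_full = math.pi * a * b_full
    area_open = math.pi * a * b_open
    return area_open / area_full
```

Because the horizontal semi-axis cancels, the ratio reduces to the opening height over the full eye height, which makes the measure robust to the eye's apparent size in the frame.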
Fuzzy comprehensive evaluation is an evaluation method based on fuzzy mathematics. It quantifies factors whose boundaries are unclear and that are hard to quantify directly, and it evaluates multi-factor, multi-level problems well. The invention therefore uses fuzzy comprehensive evaluation to assess the classroom learning state. The mathematical model for fuzzy comprehensive evaluation consists of four components: a factor set, a comment set, a fuzzy comprehensive judgment matrix, and weights:
when determining the factor set, i.e. the evaluation factors, the first-level evaluation indices are defined as facial expression and head pose. For the facial expression index, the second-level indices are positive, neutral, and negative, according to the type of expression; for the head pose index, the second-level indices are determined by dividing the head angle into ranges.
When determining the comment set, the invention classifies the learning state as very focused, inattentive, or very uninterested.
When determining the weights, the invention proceeds as follows:
the weights of the first-level evaluation indices, facial expression and head pose, are set as two variables whose sum equals 1;
for facial expression, the weights of the second-level indices positive, neutral, and negative are set as three variables whose sum equals 1;
for head pose, the weights of the different angle combinations are set as four variables whose sum equals 1.
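The two-level fuzzy comprehensive evaluation described above can be sketched as follows. All membership degrees, weight values, and defuzzification grades below are made-up placeholders: the text fixes only the structure (each weight group summing to 1), not the numbers.

```python
import numpy as np

# Membership matrices R: rows = second-level indices, columns = the comment
# set {very focused, inattentive, very uninterested}. Hypothetical values.
R_expression = np.array([[0.7, 0.2, 0.1],     # positive expression
                         [0.3, 0.5, 0.2],     # neutral expression
                         [0.1, 0.3, 0.6]])    # negative expression
R_head = np.array([[0.8,  0.15, 0.05],        # four head-angle ranges
                   [0.5,  0.3,  0.2],
                   [0.2,  0.4,  0.4],
                   [0.05, 0.25, 0.7]])

w_expression = np.array([0.5, 0.3, 0.2])      # three weights, sum to 1
w_head = np.array([0.4, 0.3, 0.2, 0.1])       # four weights, sum to 1
w_top = np.array([0.6, 0.4])                  # expression vs. head pose, sum to 1

# Second-level evaluation: B_i = w_i . R_i (weighted-average operator)
B_expression = w_expression @ R_expression
B_head = w_head @ R_head

# First-level evaluation over the two index groups
B = w_top @ np.vstack([B_expression, B_head])

# Defuzzify into a single score with hypothetical grade values per comment
score = float(B @ np.array([100.0, 60.0, 20.0]))
```

Because every weight group and every membership row sums to 1, the final membership vector B also sums to 1, so the score always lies between the lowest and highest grade values.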

Claims (6)

1. An on-line learning state monitoring system, characterized in that: the system comprises a face recognition system and a learning state evaluation system.
2. The on-line learning state monitoring system of claim 1 wherein: the face recognition system comprises an image capturing module, an image preprocessing module and a face detection module.
3. The on-line learning state monitoring system of claim 2 wherein: the image capturing module captures video images of students during online learning through the camera, and intercepts the images from the video.
4. The on-line learning state monitoring system of claim 2 wherein: the image preprocessing module is used for adjusting the gray level and the size of the image captured by the image capturing module and outputting two output pictures for the input of a follow-up neural network.
5. The on-line learning state monitoring system of claim 2 wherein: the face detection module comprises a plurality of convolutional neural networks, judges whether usable face information exists in an image, and records the eye open/closed state, the head pose, and the emotional state from the face information; the face detection module takes the pictures as input, outputs these three results, and inputs them into the learning state evaluation system to analyze the learning state of the student.
6. The on-line learning state monitoring system of claim 1 wherein: the learning state evaluation system comprises a state rating module, in which the eye open/closed state and head pose data of claim 5 are processed by the fuzzy-matrix analysis algorithm, and the student's learning state score is finally output.
CN202211416724.5A, filed 2022-11-14: Online learning state monitoring system, CN116110091A (Pending)

Priority Applications (1)

  • CN202211416724.5A (priority date 2022-11-14, filing date 2022-11-14): Online learning state monitoring system

Applications Claiming Priority (1)

  • CN202211416724.5A (priority date 2022-11-14, filing date 2022-11-14): Online learning state monitoring system

Publications (1)

  • CN116110091A, published 2023-05-12

Family

ID: 86253446

Family Applications (1)

  • CN202211416724.5A (priority date 2022-11-14, filing date 2022-11-14): Online learning state monitoring system (Pending)

Country Status (1)

  • CN: CN116110091A

Cited By (2)

* Cited by examiner, † Cited by third party

  • CN116884068A (priority date 2023-07-14, published 2023-10-13, assignee 广州云天数据技术有限公司 / Guangzhou Yuntian Data Technology Co., Ltd.): Operation and maintenance internet of things management method, platform and storage medium based on artificial intelligence
  • CN116884068B (priority date 2023-07-14, published 2024-01-26, same assignee and title)

Similar Documents

Publication Publication Date Title
CN111709409B (en) Face living body detection method, device, equipment and medium
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN111507592B (en) Evaluation method for active modification behaviors of prisoners
Xu et al. Classroom attention analysis based on multiple euler angles constraint and head pose estimation
CN116110091A (en) Online learning state monitoring system
Yuan et al. Online classroom teaching quality evaluation system based on facial feature recognition
CN115937928A (en) Learning state monitoring method and system based on multi-vision feature fusion
Wang et al. Yolov5 enhanced learning behavior recognition and analysis in smart classroom with multiple students
Ashwinkumar et al. Deep learning based approach for facilitating online proctoring using transfer learning
CN113239794B (en) Online learning-oriented learning state automatic identification method
Agarwal et al. Face recognition based smart and robust attendance monitoring using deep CNN
Tang et al. Automatic facial expression analysis of students in teaching environments
Yang et al. Student eye gaze tracking during MOOC teaching
Yang et al. Deep learning based real-time facial mask detection and crowd monitoring
Ray et al. Design and implementation of affective e-learning strategy based on facial emotion recognition
Cheng Video-based Student Classroom Classroom Behavior State Analysis
CN114120443A (en) Classroom teaching gesture recognition method and system based on 3D human body posture estimation
Gao et al. Identifying student behavioural states in business English listening classroom based on SSD algorithm
Zhu et al. Adaptive Gabor algorithm for face posture and its application in blink detection
CN116894978B (en) Online examination anti-cheating system integrating facial emotion and behavior multi-characteristics
Wang et al. Learning Behavior Recognition in Smart Classroom with Multiple Students Based on YOLOv5
CN114140282B (en) Method and device for quickly reviewing answers of general teaching classroom based on deep learning
He Detection method of students' online learning state based on posture recognition
Zhou Multimedia English online learning behavior intelligent monitoring system based on face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination