CN110443226B - Student state evaluation method and system based on posture recognition - Google Patents
- Publication number
- CN110443226B (application CN201910758393.5A)
- Authority
- CN
- China
- Prior art keywords
- student
- students
- classroom
- video clip
- representing
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The invention discloses a student state evaluation method and system based on posture recognition. Classroom videos of all students in a classroom and the teacher's teaching video are acquired in real time; a plurality of video clips are intercepted from the classroom video at intervals, and each video clip is associated with a time label. Single-person video clips of all students are segmented from each video clip. For any single-person video clip, the personal information of the student in the clip is identified with a face recognition algorithm, and the student's posture is identified with a posture recognition model. A classroom performance evaluation is output for each student, together with the time labels corresponding to all postures in the student's posture-time set that do not meet classroom requirements. By scientifically integrating the postures across all of a student's video clips, the method evaluates each student's classroom performance and records the time labels of the non-compliant postures, so that students can review missed knowledge points in a targeted manner after class and teachers can improve their teaching mode accordingly.
Description
Technical Field
The invention relates to the field of intelligent education, in particular to a student state evaluation method and system based on posture recognition.
Background
With the maturity of image technology, artificial intelligence is widely applied to classroom teaching. Colleges and universities have heavy course loads and limited class time, and teachers must complete teaching tasks in a short period, while students may skip class, doze off, or become distracted. To ensure classroom teaching quality, college teachers often have to interact with students, for example by asking them to answer questions on the spot, maintaining classroom discipline, reminding sleepy or distracted students, and recording students' classroom performance. This undoubtedly increases the teaching burden, interrupts the continuity of classroom teaching, and affects teaching quality. Therefore, a way to evaluate each student's classroom performance automatically, scientifically, and accurately is of great significance.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly provides a student state evaluation method and system based on posture recognition.
In order to achieve the above object of the present invention, according to a first aspect of the present invention, there is provided a student status evaluation method based on gesture recognition, comprising:
step S1, acquiring classroom videos of all students in a classroom and teaching videos of teachers in real time, and storing the teaching videos of the teachers;
intercepting a plurality of video clips from the classroom video at intervals;
each video clip is associated with a time tag for recording the actual shooting time of the video clip;
step S2, the following processing is performed for each video clip:
segmenting single video clips of all students in the video clips;
for any single video clip, identifying personal information of students in the single video clip by using a face recognition algorithm, identifying postures of the students in the single video clip based on a posture recognition model, and associating the personal information with the postures;
Step S3, constructing a posture-time set for each student from the personal information obtained from all video clips and the postures associated with that personal information; let the posture-time set of the kth student be U_k = {[s_k1, s_k2, ..., s_ki, ..., s_kN], [T_1, T_2, ..., T_i, ..., T_N]};
wherein 1 ≤ k ≤ n, n represents the total number of students and is a positive integer; N represents the total number of video clips and is a positive integer; s_ki represents the posture of the kth student in the ith video clip; T_i represents the time label of the ith video clip, 1 ≤ i ≤ N; the s_ki and T_i correspond one to one;
obtaining classroom performance evaluation of each student based on the posture time set of all students;
and step S4, outputting classroom performance evaluation of each student and time labels corresponding to all postures which do not meet classroom requirements in each student posture time set.
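The posture-time set U_k of step S3 can be sketched as a simple aggregation over per-clip recognition results. The following is an illustrative sketch only; the function name and the input shape (a chronological list of per-clip results) are assumptions, not the patent's implementation:

```python
from collections import defaultdict

def build_posture_time_sets(clip_results):
    """clip_results: chronological list of (time_label, {student_id: posture})
    pairs, one per video clip. Returns {student_id: ([s_k1..s_kN], [T_1..T_N])},
    i.e. one posture-time set U_k per recognized student."""
    sets = defaultdict(lambda: ([], []))
    for time_label, postures_by_student in clip_results:
        for student_id, posture in postures_by_student.items():
            postures, times = sets[student_id]
            postures.append(posture)   # s_ki
            times.append(time_label)   # T_i (one-to-one with s_ki)
    return dict(sets)
```

A student absent from a clip simply contributes no entry for that clip, so each student's posture list stays aligned with the time labels at which that student was actually recognized.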
The beneficial effects of the above technical scheme are: the method collects classroom videos and the teacher's teaching video in real time, identifies the students' postures in all video clips of the classroom video, and evaluates each student's classroom performance by scientifically integrating the postures across all of that student's clips. At the same time, the time labels corresponding to postures that do not meet classroom requirements are recorded, so that a student can combine these time labels with the teaching video to review missed knowledge points in a targeted manner, and the teacher can check the time labels at which more students were not listening attentively and improve the teaching mode accordingly, thereby better improving teaching quality.
In order to achieve the above object of the present invention, according to a second aspect of the present invention, there is provided a student class state evaluation system comprising a first camera for photographing all students in a class, a second camera for photographing contents of teaching by a teacher in the class, and a server;
the server receives the student classroom videos output by the first camera and the teacher teaching videos output by the second camera, obtains classroom performance evaluation of each student according to the student state evaluation method based on posture identification, stores time labels corresponding to postures of the students which do not accord with classroom requirements, and stores the teacher teaching videos.
The beneficial effects of the above technical scheme are: the system collects classroom videos and the teacher's teaching video in real time and stores the teaching video together with the time labels of postures that do not meet classroom requirements. The students' postures in all video clips are identified from the classroom videos, and each student's classroom performance is evaluated by scientifically integrating the postures across all of that student's clips. The stored time labels allow a student to combine them with the teaching video to review missed knowledge points in a targeted manner, and allow the teacher to check which teaching-video content corresponds to times when students were not listening attentively, improve the teaching mode, and better improve teaching quality.
Drawings
FIG. 1 is a schematic flow chart of a student status evaluation method based on gesture recognition according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a three-dimensional posture in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a gesture recognition model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a system layout according to an embodiment of the present invention;
FIG. 5 is a system connection diagram according to an embodiment of the present invention.
Reference numerals:
a head; b neck; c left shoulder; d back; e left elbow; f coccyx root; g left hand; h left hip; u left knee; v right foot; w left foot; m right knee; z right hip; o right hand; p right elbow; q right shoulder; 1 first camera; 2 second camera; 3 server; 4 student terminal equipment.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
The invention discloses a student state evaluation method based on posture recognition, and in a preferred embodiment, a flow schematic diagram of the method is shown in fig. 1, and specifically comprises the following steps:
step S1, acquiring classroom videos of all students in a classroom and teaching videos of teachers in real time, and storing the teaching videos of the teachers;
intercepting a plurality of video clips from the classroom video at intervals;
each video clip is associated with a time tag for recording the actual shooting time of the video clip;
step S2, the following processing is performed for each video clip:
segmenting single video clips of all students in the video clips;
for any single video clip, identifying the personal information of students in the single video clip by using a face identification algorithm, identifying the postures of the students in the single video clip based on a posture identification model, and associating the personal information with the postures;
Step S3, constructing a posture-time set for each student from the personal information obtained from all video clips and the postures associated with that personal information; let the posture-time set of the kth student be U_k = {[s_k1, s_k2, ..., s_ki, ..., s_kN], [T_1, T_2, ..., T_i, ..., T_N]};
wherein 1 ≤ k ≤ n, n represents the total number of students and is a positive integer; N represents the total number of video clips and is a positive integer; s_ki represents the posture of the kth student in the ith video clip; T_i represents the time label of the ith video clip, 1 ≤ i ≤ N; the s_ki and T_i correspond one to one;
obtaining classroom performance evaluation of each student based on the posture time set of all students;
and step S4, outputting classroom performance evaluation of each student and time labels corresponding to all postures which do not meet classroom requirements in each student posture time set.
In this embodiment, the plurality of video clips may be cut from the classroom video at equal or unequal intervals; for equal intervals, the interval may be 3 to 7 seconds, preferably 5 seconds.
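The equal-interval interception and the time labels of step S1 can be sketched as follows. This is a minimal illustration (the helper name and the use of wall-clock datetimes for time labels are assumptions):

```python
from datetime import datetime, timedelta

def clip_time_labels(recording_start, total_seconds, interval_s=5):
    """Actual shooting time (the patent's 'time label') of each video clip
    cut from the classroom video at equal intervals of interval_s seconds."""
    return [recording_start + timedelta(seconds=t)
            for t in range(0, total_seconds, interval_s)]
```

For a 45-minute lesson at a 5-second interval this yields N = 540 labels, one per intercepted clip.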
In the present embodiment, the teacher teaching video is preferably stored in a common server for the students and teachers to inquire.
In this embodiment, one video clip contains at least one frame of image. The single-person video clips of all students are segmented from each video clip, preferably but not limited to by manual segmentation, by segmentation according to the distribution of the students' desk and chair positions in the image, or by an existing multi-target human-body region segmentation method for video, for example the method disclosed in publication CN108648198A.
In this embodiment, the face recognition algorithm is an existing face recognition algorithm, preferably, all the pictures of the students in the classroom and the personal information associated with the pictures are stored in advance, the face images in the single video clip are compared with the prestored pictures one by one, and if the similarity between the face images and the prestored pictures is greater than or equal to 90%, the students in the single video clip can be considered as the students associated with the pictures. The personal information preferably includes, but is not limited to, student names and/or school numbers, etc.
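The patent compares face images one by one against pre-stored student photos and accepts a match at 90% similarity or above. A common way to realize such a comparison (an assumption here, not the patent's stated method) is to compare fixed-length face embedding vectors by cosine similarity:

```python
import numpy as np

def identify_student(face_embedding, enrolled, threshold=0.90):
    """enrolled: {student_id: embedding vector from the pre-stored photo}.
    Returns the id of the best-matching enrolled student whose cosine
    similarity reaches the threshold, or None if no student matches."""
    best_id, best_sim = None, threshold
    for student_id, emb in enrolled.items():
        sim = float(np.dot(face_embedding, emb) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(emb)))
        if sim >= best_sim:
            best_id, best_sim = student_id, sim
    return best_id
```

How the embeddings are produced (e.g. by a pre-trained face recognition network) is left open, matching the patent's reliance on "an existing face recognition algorithm".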
In this embodiment, the output classroom performance evaluation of each student and the time labels corresponding to all postures that do not meet classroom requirements in each student's posture-time set may be stored on an open server for related personnel to query. Preferably, a student set Stu = {Stu_1, Stu_2, Stu_3, ..., Stu_n} is defined, where Stu_k represents the classroom information set of the kth student, Stu_k = {p_k, class, [s_k1, s_k2, ..., s_ki, ..., s_kN], [T_1, T_2, ..., T_N]}; class represents course information; p_k represents the personal information of the kth student; 1 ≤ k ≤ n;
in a preferred embodiment of the present invention, in step S2, the method further includes the step of checking attendance of the student, and specifically includes:
the method comprises the steps of obtaining pre-stored personal information of all corresponding students in a classroom, accumulating the times that the personal information of each corresponding student is matched with the personal information of the students obtained through face recognition by all single video clips, considering that the corresponding student is absent and reminding the corresponding student to go to class if the times are less than or equal to a preset time threshold, and considering that the corresponding student normally goes out of duty if the times are greater than the preset time threshold.
In the embodiment, the attendance condition of students is automatically checked, and teachers do not need to participate, so that the workload of the teachers is reduced.
In this embodiment, the count threshold is preferably, but not limited to, 70%·N to 90%·N. Absent students may be reminded by having the server automatically send a notification to the absent student's smart device, reminding the student of the time and place of the course.
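The attendance check above reduces to counting face matches per enrolled student and comparing against the threshold. A minimal sketch (function name and data shapes are assumptions):

```python
def check_attendance(match_counts, n_clips, ratio=0.7):
    """match_counts: {student_id: number of single-person clips whose
    recognized identity matched this enrolled student}. Per the patent,
    a count <= ratio * N means absent, a count > ratio * N means present."""
    threshold = ratio * n_clips
    return {student_id: ("absent" if count <= threshold else "present")
            for student_id, count in match_counts.items()}
```

Enrolled students who never appear should be included in `match_counts` with a count of 0 so they are flagged absent rather than silently skipped.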
In a preferred embodiment of the present invention, as shown in fig. 3, in step S2, the step of recognizing the posture of the student in the single-person video clip based on the posture recognition model specifically includes:
establishing a gesture recognition model, inputting the single video clip into the gesture recognition model, and outputting the gestures of students in the single video clip by the gesture recognition model;
the process of establishing the gesture recognition model comprises the following steps:
Step S21, constructing a training data set, denoted V_labeled; the training data set V_labeled comprises a plurality of single-person video clips provided with posture labels;
step S22, extracting the video characteristics of a single person video clip in the training data set through a video characteristic extraction module;
and step S23, training and verifying the random forest classifier by taking the video characteristics of the single video clip in the training data set as input and the attitude label of the single video clip as a classification result to obtain an attitude identification model.
In this embodiment, the posture recognition model is trained by a random-forest classification method combined with deep learning; the degree of intelligence is high, and no manual participation is needed.
In this embodiment, the random forest classifier uses 10 base decision trees, each with a maximum depth of 5, and the trained classifier is used to classify the posture in a video clip.
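The classifier configuration of step S23 can be sketched with scikit-learn (the patent names no library, so this is an assumed realization; the synthetic feature matrix stands in for the video features F_total):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))    # stand-in for per-clip feature vectors F_total
y = rng.integers(0, 8, size=200)  # labels for the 8 preset posture classes

# 10 base decision trees, maximum tree depth (predicted path length) 5
clf = RandomForestClassifier(n_estimators=10, max_depth=5, random_state=0)
clf.fit(X, y)
pred = clf.predict(X[:5])         # predicted posture class per single-person clip
```

In practice X would be the concatenated deep and classroom features extracted in step S22/S23, and y the interviewee-assigned posture labels of V_labeled.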
In a preferred embodiment of the present invention, in step S21, the specific process of constructing the training data set includes:
Step S211, a plurality of video clips are cut from existing student classroom videos, single-person video clips of all students in each video clip are segmented, and all single-person video clips are collected into a single-person video clip set, denoted V_unlabeled;
Step S212, a plurality of postures are preset:
s ∈ {reading, writing, listening, standing up to answer a question, raising a hand, whispering, playing with a mobile phone, dozing};
each single-person video clip in the set is sent to a plurality of interviewees; each interviewee scores the degree of coincidence between the single-person video clip and each posture, and the average coincidence score of each single-person video clip with each posture is calculated:
c̄_{i',m} = (1/n_p) · Σ_{j'=1}^{n_p} c_{i',m,j'}; wherein c̄_{i',m} represents the average coincidence score of the i'th single-person video clip in the set with the mth posture s(m); c_{i',m,j'} is the score given by the j'th interviewee; n_p is the number of interviewees who scored the i'th single-person video clip; i', m, and j' are positive integers, 1 ≤ m ≤ 8, j' is the serial number of the interviewee, 1 ≤ j' ≤ n_p;
Step S213, a posture label is set for each single-person video clip:
if the average coincidence score of the i'th single-person video clip with exactly one posture s(m) satisfies c̄_{i',m} ≥ c_0, a posture label s_{i'} is set for the i'th single-person video clip and the clip is added to the training data set V_labeled, the posture label being s_{i'} = s(m); wherein c_0 is a preset score threshold;
if no posture satisfies c̄_{i',m} ≥ c_0, or the average coincidence scores of the i'th single-person video clip with more than one posture satisfy c̄_{i',m} ≥ c_0, the i'th single-person video clip is not added to the training data set V_labeled.
In this embodiment, the degree of coincidence between a single-person video clip and the eight postures (reading, writing, listening, standing up to answer a question, raising a hand, whispering, playing with a mobile phone, and dozing) may be scored by the interviewees on a 5-point or 10-point scale, where full marks indicate the interviewee considers the clip completely coincident with the posture and 0 indicates completely non-coincident. The preset score threshold c_0 may be 70% of full marks; for example, under a 5-point scale, c_0 may be 3.5.
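The labeling rule of step S213 can be sketched as follows (function name and input shape are assumptions; the thresholding and the exactly-one-qualifying-posture rule follow the text above):

```python
def assign_label(scores_by_pose, full_score=5.0, frac=0.7):
    """Step S213 sketch. scores_by_pose: {posture: [interviewee scores]}
    for one single-person clip. Returns the posture label if exactly one
    posture's mean score reaches frac * full_score; returns None (clip
    excluded from the training set) if none or several qualify."""
    threshold = frac * full_score
    passing = [pose for pose, scores in scores_by_pose.items()
               if sum(scores) / len(scores) >= threshold]
    return passing[0] if len(passing) == 1 else None
```

Clips with ambiguous or uniformly low scores are thus filtered out, keeping only clips whose posture the interviewees agree on.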
In the embodiment, the label classification is carried out on the existing data set by utilizing the scoring system of the interviewee, so that the method is more humanized and accurate.
In a preferred embodiment of the present invention, as shown in fig. 3, in step S22, the process of extracting the video feature of the video clip of a single person in the training data set by the video feature extraction module includes:
Step S221, the three-dimensional posture of each frame of image in the i'th single-person video clip is extracted to obtain a three-dimensional posture set G; preferably, the three-dimensional posture in each frame of image is estimated by the existing OpenPose method (the GitHub open-source human posture recognition project), and a schematic diagram of the three-dimensional posture is shown in fig. 2.
G = {P_1, P_2, ..., P_τ}, where τ is the total number of image frames contained in the i'th single-person video clip and is a positive integer; P_1, P_2, ..., P_τ respectively represent the three-dimensional postures of the 1st, 2nd, ..., τth image frames in the i'th single-person video clip;
Step S222, the deep features F_deep of the three-dimensional posture set G are obtained through an existing long short-term memory model;
Step S223, the classroom features F_class are extracted from the three-dimensional posture set G; the classroom features F_class include posture features F_pose and motion features F_move;
Posture feature F_pose = {F_1,pose, F_2,pose, ..., F_τ,pose}, wherein F_t,pose represents the posture feature of the tth image frame, F_t,pose = {f_1, f_2, ..., f_16}. In the tth image frame: f_1 represents the size of the picture occupied by the human body, i.e. the sum of pixel points in the human body region or the proportion of the human body region in the whole frame; f_2 represents the angle between the line from the left shoulder c to the neck b and the line from the right shoulder q to the neck b, shown in fig. 2 as ∠cbq; f_3 the angle between the line from the left shoulder c to the neck b and the line from the head a to the neck b, ∠cba; f_4 the angle between the line from the right shoulder q to the neck b and the line from the head a to the neck b, ∠qba; f_5 the angle between the line from the head a to the neck b and the line from the back d to the neck b, ∠abd; f_6 the angle between the line from the neck b to the back d and the line from the coccyx root f to the back d, ∠bdf; f_7 the angle between the line from the left shoulder c to the left elbow e and the line from the left hand g to the left elbow e, ∠ceg; f_8 the angle between the line from the right shoulder q to the right elbow p and the line from the right hand o to the right elbow p, ∠qpo; f_9 the angle between the line from the left hip h to the left knee u and the line from the left foot w to the left knee u, ∠huw; f_10 the angle between the line from the right hip z to the right knee m and the line from the right foot v to the right knee m, ∠zmv; f_11 the distance from the right hand o to the coccyx root f, distance of; f_12 the distance from the left hand g to the coccyx root f, distance gf; f_13 the distance from the right foot v to the coccyx root f, distance vf; f_14 the distance from the left foot w to the coccyx root f, distance wf; f_15 the area of the triangle enclosed by the two hands and the neck b, area of △obg; f_16 the area of the triangle enclosed by the two feet and the coccyx root f, area of △vfw; 1 ≤ t ≤ τ;
Motion feature F_move = {F_1,move, F_2,move, ..., F_τ,move}, wherein F_t,move represents the motion feature of the tth image frame, F_t,move = {f_17, f_18, ..., f_25}. In the tth image frame: f_17 represents the speed of the right hand o, f_18 the acceleration of the right hand o, f_19 the jerk of the right hand o, f_20 the speed of the left hand g, f_21 the acceleration of the left hand g, f_22 the jerk of the left hand g, f_23 the speed of the head a, f_24 the acceleration of the head a, and f_25 the jerk of the head a;
the classroom feature of the single-person video clip v_i is F_class = F_pose + F_move;
Step S224, the video feature of the single-person video clip v_i is: F_total = F_deep + F_class.
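The geometric posture features f_2 to f_16 reduce to three primitives on 3-D joint coordinates: the angle at a vertex joint, the distance between two joints, and the area of a triangle of three joints. A self-contained sketch (function names are illustrative):

```python
import math

def angle(a, vertex, b):
    """Angle in degrees at `vertex` between the rays to joints a and b,
    each joint a 3-D point (e.g. f_2 = angle(c, b, q) for shoulders/neck)."""
    v1 = [a[i] - vertex[i] for i in range(3)]
    v2 = [b[i] - vertex[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # clamp guards against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def dist(p, q):
    """Distance between two joints (e.g. f_11 = dist(o, f))."""
    return math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))

def triangle_area(p, q, r):
    """Area of the triangle spanned by three joints, half the cross-product
    norm (e.g. f_15 = triangle_area(o, b, g) for hands and neck)."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
```

Applying these primitives to the OpenPose joint coordinates of one frame yields the 15 geometric components of F_t,pose; f_1 (body area in pixels) comes from the segmentation mask instead.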
In this embodiment, for the three joints of the two hands and the head a, the position of each joint in every image frame is recorded, so that a single-person video clip contains a time-varying displacement record for each of the three joints, which can be represented as a position vector. The first derivative of a joint's position vector gives the joint's velocity; the second derivative gives the acceleration of the joint's motion; and the third derivative gives the rate of change of the acceleration, which is called the jerk.
In this embodiment, a plurality of posture features are constructed together from the three-dimensional posture of each image frame, so that the student's posture is recognized more accurately and scientifically. Of course, due to actions such as sitting, moving, or lying down, the features f_1 to f_25 (25 feature parameters in total) may not all be obtainable from a single image frame; a feature parameter that cannot be obtained can be filled in by averaging its values over the preceding image frames (several historical values of that feature parameter).
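The derivative chain just described (position → velocity → acceleration → jerk) maps directly onto repeated numerical differentiation of the per-frame joint positions. A sketch using central finite differences (the discrete stand-in for the derivatives; the function name is illustrative):

```python
import numpy as np

def motion_features(positions, dt):
    """positions: (tau, 3) array of one joint's coordinates (e.g. the right
    hand o) over the tau frames of a clip; dt: seconds between frames.
    Returns per-frame speed, acceleration, and jerk magnitudes via
    first/second/third discrete derivatives of the position vector."""
    velocity = np.gradient(positions, dt, axis=0)   # 1st derivative
    accel = np.gradient(velocity, dt, axis=0)       # 2nd derivative
    jerk = np.gradient(accel, dt, axis=0)           # 3rd derivative
    return (np.linalg.norm(velocity, axis=1),
            np.linalg.norm(accel, axis=1),
            np.linalg.norm(jerk, axis=1))
```

Running this for the right hand, left hand, and head yields the nine motion components f_17 to f_25 of each frame's F_t,move.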
In a preferred embodiment of the present invention, in step S3, the step of obtaining the classroom performance assessment of each student based on the posture time sets of all students specifically includes:
Step S31, let s denote the posture of the student recognized in the single-person video clip by the posture recognition model, s ∈ {reading, writing, listening, standing up to answer questions, raising hands, whispering, playing with a mobile phone, sleeping};
step S32, obtaining the listening state, the classroom interaction degree, the classroom activity and the inattention degree of each student in the classroom;
The listening state L_k of the k-th student is computed from the per-clip listening states, where L_ki is the listening state of the k-th student in the i-th single-person video clip v_i, and s_ki denotes the posture of the k-th student in the i-th single-person video clip v_i;
preferably, if the kth student is in the ith single-person video clip viIf the lecture listening state in the student can not be identified, the historical lecture listening state information of the student is used as the lecture listening state of the current single-person video clipThe state, such as the lecture listening state in the (i-1) th or (i-2) th single-person video clip is used as the lecture listening state of the current single-person video clip.
The classroom interaction degree is determined by the hand-raising and standing-to-answer postures. The classroom interaction degree T_k of the k-th student is computed from Ω_hand_k and Ω_stand_k together with a preset first weight parameter, where Ω_hand_k is computed from the clips, among the N video clips, in which the k-th student is in a hand-raising posture, and Ω_stand_k is computed from the clips, among the N video clips, in which the k-th student is in a standing-to-answer posture;
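The exact formula for T_k is shown only as an image in the source; the sketch below assumes T_k blends the hand-raising and standing-to-answer proportions with a single preset weight `w`:

```python
def interaction_degree(postures, w=0.5):
    """Classroom interaction degree from hand-raising and standing.

    postures: one recognized posture per video clip.
    w: the preset first weight parameter (the blending of the two
    proportions is an assumption; the patent's formula is an image).
    """
    n = len(postures)
    omega_hand = sum(p == "raising hands" for p in postures) / n
    omega_stand = sum(p == "standing up to answer questions"
                      for p in postures) / n
    return w * omega_hand + (1.0 - w) * omega_stand

clips = ["raising hands", "listening",
         "standing up to answer questions", "listening"]
T_k = interaction_degree(clips)
```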
The class liveness B_k of the k-th student is: B_k = max(0, min(1, 1 - ΔB_k)); where ΔB_k denotes the amount of change of the class liveness B_k, ΔB_k = δ_1*(τ_read_k + τ_write_k + τ_listen_k) + δ_2*(τ_talk_k + τ_phone_k + τ_sleep_k) - δ_3*τ_hand_k - δ_4*τ_stand_k; δ_1, δ_2, δ_3 and δ_4 denote preset first, second, third and fourth reward-and-penalty factors; τ_read_k, τ_write_k, τ_listen_k, τ_talk_k, τ_phone_k, τ_sleep_k, τ_hand_k and τ_stand_k respectively denote the number of times the detected state of the k-th student remains, over M consecutive video clips of the N video clips, reading, writing, listening, whispering, playing with a mobile phone, sleeping, raising hands, or standing up to answer questions, where 1 ≤ M ≤ N; preferably, M is 8 to 15, for example 10.
The class liveness B_k reflects whether the student keeps the same posture (reading, writing, listening, whispering, playing with a mobile phone, dozing, raising hands, or standing up to answer questions) for long stretches. Students with high class liveness typically do not only read or only listen the whole time; rather, they alternate, for example taking notes while listening. The reference value of B_k is 1, and its range is 0 to 1.
Based on the conditions in which the student continuously whispers, plays with a mobile phone, or sleeps, the inattention degree of the student is defined; the inattention degree Z_k of the k-th student is: Z_k = τ_talk_k + τ_phone_k + τ_sleep_k;
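The liveness and inattention formulas above can be sketched directly; the reward-and-penalty factor values δ_1 to δ_4 below are placeholders, not values taken from the patent:

```python
def liveness_and_inattention(tau, deltas=(0.1, 0.2, 0.05, 0.05)):
    """Class liveness B_k and inattention Z_k from streak counts.

    tau: dict of streak counts tau_read .. tau_stand, i.e. how many
    times the student held the same state over M consecutive clips.
    deltas: preset reward-and-penalty factors delta_1..delta_4
    (placeholder values, not from the patent).
    """
    d1, d2, d3, d4 = deltas
    delta_b = (d1 * (tau["read"] + tau["write"] + tau["listen"])
               + d2 * (tau["talk"] + tau["phone"] + tau["sleep"])
               - d3 * tau["hand"] - d4 * tau["stand"])
    b_k = max(0.0, min(1.0, 1.0 - delta_b))          # clamp to [0, 1]
    z_k = tau["talk"] + tau["phone"] + tau["sleep"]  # inattention degree
    return b_k, z_k

tau = {"read": 2, "write": 1, "listen": 3, "talk": 1,
       "phone": 0, "sleep": 0, "hand": 2, "stand": 0}
B_k, Z_k = liveness_and_inattention(tau)
```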
Step S33, obtaining classroom performance evaluation of each student based on the listening state, classroom interaction degree, classroom activity and inattention degree of all students in classroom, specifically comprising:
step S331, constructing an initial decision matrix C;
normalizing the initial decision matrix C to obtain a normalized decision matrix C';
step S332, setting a weight matrix W, where η_1 denotes the weight of the listening state, η_2 the weight of the classroom interaction degree, η_3 the weight of the class liveness, and η_4 the weight of the inattention degree;
step S333, calculating a weighted decision matrix D, where j denotes the column index of the weighted decision matrix D, j = 1, 2, 3, 4;
step S334, calculating Euclidean distance between each student and the positive and negative ideal values;
the classroom performance of the kth student is evaluated as Vk:
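Steps S331 to S334 (normalized decision matrix, weighting, Euclidean distances to the positive and negative ideal values) closely resemble the TOPSIS method; since the patent's own V_k formula survives only as an image, the sketch below assumes the standard relative-closeness score:

```python
import numpy as np

def topsis_scores(C, weights, benefit):
    """TOPSIS-style closeness scores for an initial decision matrix C.

    C: (n_students, 4) matrix of [listening, interaction, liveness,
    inattention] values; benefit[j] is False for cost criteria such
    as inattention. The relative-closeness score V_k is an assumption
    standing in for the patent's image-only formula.
    """
    C = np.asarray(C, dtype=float)
    norm = C / np.linalg.norm(C, axis=0)          # vector normalisation
    D = norm * np.asarray(weights)                # weighted matrix
    best = np.where(benefit, D.max(axis=0), D.min(axis=0))
    worst = np.where(benefit, D.min(axis=0), D.max(axis=0))
    d_pos = np.linalg.norm(D - best, axis=1)      # distance to ideal
    d_neg = np.linalg.norm(D - worst, axis=1)     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

C = [[0.9, 0.3, 0.8, 1.0],    # student 0: attentive, low inattention
     [0.5, 0.1, 0.4, 5.0]]    # student 1: weaker on every criterion
V = topsis_scores(C, weights=[0.4, 0.2, 0.2, 0.2],
                  benefit=np.array([True, True, True, False]))
```

With only two students, the dominating one coincides with the positive ideal, so its score is exactly 1.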
In this embodiment, the classroom performance of students is evaluated along four dimensions: the listening state, the classroom interaction degree, the class liveness, and the inattention degree. Because the evaluation draws on multiple dimensions, the decision matrix directly yields how close each student is to the best-performing student, making the evaluation more scientific and reasonable. A scoring mechanism designed around students' classroom performance helps teachers record and score that performance scientifically, which in turn helps improve teaching quality.
In a preferred embodiment of the present invention, in step S4 the postures that do not meet classroom requirements include whispering, playing with a mobile phone, and sleeping.
In the present embodiment, it is preferable that the method further includes:
if the posture of the student in a single-person video clip is recognized by the posture recognition model as not meeting classroom requirements, the student in that single-person video clip is reminded; if the student's postures in the previous M1 single-person video clips all fail to meet classroom requirements, the teacher is further reminded to admonish the student, where M1 is a preset positive integer and 1 ≤ M1 ≤ N. M1 is preferably, but not limited to, 5 to 10.
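The two-level reminder rule can be sketched as follows (the posture strings are illustrative, and "all of the last M1 clips non-compliant" is taken as the teacher-escalation trigger):

```python
# Postures treated as violating classroom requirements (names are
# illustrative English renderings of the patent's posture set).
NONCOMPLIANT = {"whispering", "playing with a mobile phone", "sleeping"}

def check_reminders(postures, m1=5):
    """Escalation rule for postures violating classroom requirements.

    Returns (remind_student, remind_teacher): the student is reminded
    when the latest clip is non-compliant; the teacher is alerted when
    the last m1 clips were all non-compliant (m1=5 follows the
    preferred 5-10 range).
    """
    bad = [p in NONCOMPLIANT for p in postures]
    remind_student = bool(bad) and bad[-1]
    remind_teacher = len(bad) >= m1 and all(bad[-m1:])
    return remind_student, remind_teacher

rs, rt = check_reminders(["sleeping"] * 5)            # both triggered
rs2, rt2 = check_reminders(["listening", "sleeping"]) # student only
```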
The invention also discloses a student classroom state evaluation system, in a preferred embodiment, as shown in fig. 4 and 5, the system comprises a first camera 1 for shooting all students in the classroom, a second camera 2 for shooting the teaching content of the teacher in the classroom, and a server 3;
the server 3 receives the student class videos output by the first camera 1 and the teacher teaching videos output by the second camera 2, obtains the class performance evaluation of each student according to the student state evaluation method based on the gesture recognition, and stores the time labels corresponding to the gestures of the students which do not accord with the class requirements and the teacher teaching videos.
In the present embodiment, the first camera 1 is preferably positioned at the front of the classroom facing all students, and the second camera 2 is preferably positioned at the back of the classroom facing the teacher, so that it can also capture the blackboard or the projection screen. The server 3 may be located in the classroom or in a school management center and is connected to the first camera 1 and the second camera 2 by wire or wirelessly.
In a preferred embodiment of the present invention, the system further includes a plurality of student terminal devices 4 and teacher terminal devices that establish communication connections with the server 3.
In the present embodiment, the student terminal device 4 and the teacher terminal device are wirelessly connected to the server 3.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (8)
1. A student state evaluation method based on gesture recognition is characterized by comprising the following steps:
step S1, acquiring classroom videos of all students in a classroom and teaching videos of teachers in real time, and storing the teaching videos of the teachers;
intercepting a plurality of video clips from the classroom video at intervals;
each video clip is associated with a time tag for recording the actual shooting time of the video clip;
step S2, the following processing is performed for each video clip:
segmenting single video clips of all students in the video clips;
for any single video clip, identifying personal information of students in the single video clip by using a face recognition algorithm, identifying postures of the students in the single video clip based on a posture recognition model, and associating the personal information with the postures;
the step of recognizing the postures of the students in the single-person video clip based on the posture recognition model specifically comprises the following steps of:
establishing a gesture recognition model, inputting the single video clip into the gesture recognition model, and outputting the gestures of students in the single video clip by the gesture recognition model;
the process of establishing the gesture recognition model comprises the following steps:
step S21, constructing a training data set, denoted V_labled; the training data set V_labled comprises a plurality of single-person video clips provided with posture labels;
step S22, extracting the video characteristics of a single person video clip in the training data set through a video characteristic extraction module; step S23, training and verifying the random forest classifier by taking the video characteristics of the single video clip in the training data set as input and the attitude label of the single video clip as a classification result to obtain an attitude identification model;
in step S21, the specific process of constructing the training data set includes:
step S211, intercepting a plurality of video clips from existing student classroom videos, segmenting the single-person video clips of all students in each video clip, and constructing all the single-person video clips into a single-person video clip set, denoted V_unlabled;
Step S212, presetting a plurality of postures, a posture s ∈ {reading, writing, listening, standing up to answer questions, raising hands, whispering, playing with a mobile phone, sleeping};
sending each single-person video clip in the single-person video clip set to a plurality of visitors, each visitor scoring the degree of coincidence between the single-person video clip and each posture, and calculating the average coincidence score of each single-person video clip with each posture:
where the average denotes the mean coincidence score of the i'-th single-person video clip in the set with the m-th posture s(m); n_p denotes the number of visitors who scored the i'-th single-person video clip; i', m and j' are positive integers, 1 ≤ m ≤ 8, j' is the index of the visitor, and 1 ≤ j' ≤ n_p;
Step S213, setting a posture label for the single video clip:
if the average coincidence score of the i'-th single-person video clip with the m-th posture s(m) reaches a preset score threshold, a posture label s_i' is set for the i'-th single-person video clip and the i'-th single-person video clip is added to the training data set V_labled, the posture label s_i' being the posture whose average score reaches the threshold;
if the average coincidence score of the i'-th single-person video clip with the m-th posture s(m) does not reach the preset score threshold, or the average coincidence scores of the i'-th single-person video clip with more than one posture reach the preset score threshold, the i'-th single-person video clip is not added to the training data set V_labled;
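The labelling rule of steps S212 and S213 (keep a clip only when exactly one posture's average score clears the threshold) can be sketched as follows; the threshold value is an assumption, since the patent gives it only as an image:

```python
def label_clip(scores, threshold=4.0):
    """Assign a posture label from visitors' coincidence scores.

    scores: {posture: [score per visitor]}.  A clip is labelled only
    when exactly one posture's average score reaches the (assumed)
    threshold; otherwise it is excluded from the training set.
    """
    means = {p: sum(v) / len(v) for p, v in scores.items()}
    winners = [p for p, m in means.items() if m >= threshold]
    return winners[0] if len(winners) == 1 else None

clip_scores = {"reading": [5, 4, 5], "writing": [1, 2, 1]}
label = label_clip(clip_scores)                 # unique winner
ambiguous = label_clip({"reading": [5, 5],
                        "writing": [4, 5]})     # two winners: discard
```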
Step S3, constructing a posture time set of each student from the personal information obtained from all the video clips and the postures associated with that personal information, the posture time set of the k-th student being U_k = {[s_k1, s_k2, ..., s_ki, ..., s_kN], [T_1, T_2, ..., T_i, ..., T_N]};
where 1 ≤ k ≤ n, n denotes the total number of students and is a positive integer; N denotes the total number of video clips and is a positive integer; s_ki denotes the posture of the k-th student in the i-th video clip; T_i denotes the time label of the i-th video clip, 1 ≤ i ≤ N; and s_ki and T_i are in one-to-one correspondence;
obtaining classroom performance evaluation of each student based on the posture time set of all students;
and step S4, outputting classroom performance evaluation of each student and time labels corresponding to all postures which do not meet classroom requirements in each student posture time set.
2. The student status evaluation method based on gesture recognition according to claim 1, wherein in step S2, the method further comprises checking attendance of student attendance, specifically comprising:
acquiring pre-stored personal information of all students expected in the classroom; for each such student, accumulating the number of times that student's personal information is matched, across all single-person video clips, by the personal information obtained through face recognition; if the count is less than or equal to a preset count threshold, the student is considered absent and is reminded to attend class; if the count is greater than the preset count threshold, the student is considered normally present.
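The attendance check of claim 2 can be sketched as follows; the threshold value and the student identifiers are illustrative:

```python
def attendance(enrolled, recognized_per_clip, min_matches=3):
    """Mark each enrolled student present or absent.

    enrolled: iterable of student IDs expected in the classroom.
    recognized_per_clip: list of sets of IDs found by face recognition
    in each video clip.  min_matches is the preset count threshold
    (value assumed).  Returns {student_id: present?}.
    """
    counts = {s: 0 for s in enrolled}
    for clip_ids in recognized_per_clip:
        for s in clip_ids:
            if s in counts:
                counts[s] += 1
    # Present only when the match count exceeds the threshold.
    return {s: c > min_matches for s, c in counts.items()}

clips = [{"alice", "bob"}, {"alice"}, {"alice", "bob"}, {"alice"}]
present = attendance(["alice", "bob"], clips, min_matches=3)
# alice matched 4 times (> 3): present; bob matched twice: absent
```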
3. The student status evaluation method based on gesture recognition according to claim 1, wherein in step S22, the process of extracting the video feature of the video clip of a single person in the training data set by the video feature extraction module comprises:
step S221, extracting the three-dimensional posture of each image frame in the i'-th single-person video clip to obtain a three-dimensional posture set G;
G = {P_1, P_2, ..., P_τ}, where τ is the total number of image frames contained in the i'-th single-person video clip and is a positive integer; P_1, P_2, ..., P_τ respectively denote the three-dimensional postures of the 1st, 2nd, ..., τ-th image frames in the i'-th single-person video clip;
step S222, obtaining deep features F_deep of the three-dimensional posture set G through a long short-term memory model;
step S223, extracting class-attending features F_class from the three-dimensional posture set G, the class-attending features F_class including posture features F_pose and motion features F_move;
The posture features F_pose = {F_{t,pose} | 1 ≤ t ≤ τ}, where F_{t,pose} denotes the posture features of the t-th image frame, F_{t,pose} = {f_1, f_2, ..., f_16}. In the t-th image frame, f_1 denotes the size of the frame occupied by the human body; f_2 denotes the angle between the left-shoulder-to-neck line and the right-shoulder-to-neck line; f_3 the angle between the left-shoulder-to-neck line and the head-to-neck line; f_4 the angle between the right-shoulder-to-neck line and the head-to-neck line; f_5 the angle between the head-to-neck line and the back-to-neck line; f_6 the angle between the neck-to-back line and the coccyx-root-to-back line; f_7 the angle between the left-shoulder-to-left-elbow line and the left-hand-to-left-elbow line; f_8 the angle between the right-shoulder-to-right-elbow line and the right-hand-to-right-elbow line; f_9 the angle between the left-hip-to-left-knee line and the left-foot-to-left-knee line; f_10 the angle between the right-hip-to-right-knee line and the right-foot-to-right-knee line; f_11 the distance from the right hand to the coccyx root; f_12 the distance from the left hand to the coccyx root; f_13 the distance from the right foot to the coccyx root; f_14 the distance from the left foot to the coccyx root; f_15 the area of the triangle formed by the two hands and the neck; and f_16 the area of the triangle formed by the two feet and the coccyx root; 1 ≤ t ≤ τ;
the motion features F_move = {F_{t,move} | 1 ≤ t ≤ τ}, where F_{t,move} denotes the motion features of the t-th image frame, F_{t,move} = {f_17, f_18, ..., f_25}. In the t-th image frame, f_17 denotes the right-hand velocity, f_18 the right-hand acceleration, f_19 the jerk of the right-hand motion, f_20 the left-hand velocity, f_21 the left-hand acceleration, f_22 the jerk of the left-hand motion, f_23 the head velocity, f_24 the head acceleration, and f_25 the jerk of the head motion;
the class-attending features of the single-person video clip v_i are F_class = F_pose + F_move;
Step S224, the video features of the single-person video clip v_i are: F_total = F_deep + F_class.
4. The student status evaluation method based on gesture recognition according to claim 1, wherein in step S3, the step of obtaining the classroom performance evaluation of each student based on the gesture time sets of all students specifically includes:
step S31, let s denote the posture of the student recognized in the single-person video clip by the posture recognition model, s ∈ {reading, writing, listening, standing up to answer questions, raising hands, whispering, playing with a mobile phone, sleeping};
step S32, obtaining the listening state, the classroom interaction degree, the classroom activity and the inattention degree of each student in the classroom;
the listening state L_k of the k-th student is computed from the per-clip listening states, where L_ki is the listening state of the k-th student in the i-th single-person video clip v_i, and s_ki denotes the posture of the k-th student in the i-th single-person video clip v_i;
the classroom interaction degree T_k of the k-th student is computed from Ω_hand_k and Ω_stand_k together with a preset first weight parameter, where Ω_hand_k is computed from the clips, among the N video clips, in which the k-th student is in a hand-raising posture, and Ω_stand_k is computed from the clips, among the N video clips, in which the k-th student is in a standing-to-answer posture;
the class liveness B_k of the k-th student is: B_k = max(0, min(1, 1 - ΔB_k)); where ΔB_k denotes the amount of change of the class liveness B_k, ΔB_k = δ_1*(τ_read_k + τ_write_k + τ_listen_k) + δ_2*(τ_talk_k + τ_phone_k + τ_sleep_k) - δ_3*τ_hand_k - δ_4*τ_stand_k; δ_1, δ_2, δ_3 and δ_4 denote preset first, second, third and fourth reward-and-penalty factors; τ_read_k, τ_write_k, τ_listen_k, τ_talk_k, τ_phone_k, τ_sleep_k, τ_hand_k and τ_stand_k respectively denote the number of times the detected state of the k-th student remains, over M consecutive video clips of the N video clips, reading, writing, listening, whispering, playing with a mobile phone, sleeping, raising hands, or standing up to answer questions, where 1 ≤ M ≤ N;
the inattention degree Z_k of the k-th student is: Z_k = τ_talk_k + τ_phone_k + τ_sleep_k;
Step S33, obtaining classroom performance evaluation of each student based on the listening state, classroom interaction degree, classroom activity and inattention degree of all students in classroom, specifically comprising:
step S331, constructing an initial decision matrix C;
normalizing the initial decision matrix C to obtain a normalized decision matrix C';
Step S332, setting a weight matrix W, where η_1 denotes the weight of the listening state, η_2 the weight of the classroom interaction degree, η_3 the weight of the class liveness, and η_4 the weight of the inattention degree;
step S333, calculating a weighted decision matrix D, where j denotes the column index of the weighted decision matrix D, j = 1, 2, 3, 4;
step S334, calculating Euclidean distance between each student and the positive and negative ideal values;
the classroom performance of the kth student is evaluated as Vk:
5. The student status evaluation method based on gesture recognition according to claim 1, wherein in step S4 the postures that do not meet classroom requirements include whispering, playing with a mobile phone, and sleeping.
6. The student status evaluation method based on gesture recognition according to claim 5, further comprising:
and if the posture of the student in a single-person video clip is recognized by the posture recognition model as not meeting classroom requirements, reminding the student in that single-person video clip, and further reminding the teacher to admonish the student if the student's postures in the previous M1 single-person video clips all fail to meet classroom requirements, where M1 is a preset positive integer and 1 ≤ M1 ≤ N.
7. A student classroom state evaluation system is characterized by comprising a first camera for shooting all students in a classroom, a second camera for shooting teaching contents of teachers in the classroom and a server;
the server receives the student class videos output by the first camera and the teacher teaching videos output by the second camera, obtains the class performance evaluation of each student according to the student state evaluation method based on the posture recognition in any one of claims 1 to 6, and stores the time labels corresponding to the postures of the students which do not accord with the class requirements and the teacher teaching videos.
8. The student classroom status evaluation system of claim 7 further comprising a plurality of student terminal devices and teacher terminal devices in connected communication with the server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910758393.5A CN110443226B (en) | 2019-08-16 | 2019-08-16 | Student state evaluation method and system based on posture recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110443226A CN110443226A (en) | 2019-11-12 |
CN110443226B true CN110443226B (en) | 2022-01-25 |
Family
ID=68435975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910758393.5A Active CN110443226B (en) | 2019-08-16 | 2019-08-16 | Student state evaluation method and system based on posture recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443226B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144255B (en) * | 2019-12-18 | 2024-04-19 | 华中科技大学鄂州工业技术研究院 | Analysis method and device for non-language behaviors of teacher |
CN111062844A (en) * | 2020-03-17 | 2020-04-24 | 浙江正元智慧科技股份有限公司 | Intelligent control system for smart campus |
CN111507227B (en) * | 2020-04-10 | 2023-04-18 | 南京汉韬科技有限公司 | Multi-student individual segmentation and state autonomous identification method based on deep learning |
CN111507873A (en) * | 2020-04-14 | 2020-08-07 | 四川聚阳科技集团有限公司 | Classroom participation degree evaluation method based on sound and image joint sampling |
CN111611854B (en) * | 2020-04-16 | 2023-09-01 | 杭州电子科技大学 | Classroom condition evaluation method based on pattern recognition |
CN111553218A (en) * | 2020-04-20 | 2020-08-18 | 南京医科大学 | Intelligent medical skill teaching monitoring system based on human body posture recognition |
CN111680558A (en) * | 2020-04-29 | 2020-09-18 | 北京易华录信息技术股份有限公司 | Learning special attention assessment method and device based on video images |
CN111738177B (en) * | 2020-06-28 | 2022-08-02 | 四川大学 | Student classroom behavior identification method based on attitude information extraction |
CN112686154B (en) * | 2020-12-29 | 2023-03-07 | 杭州晨安科技股份有限公司 | Student standing detection method based on head detection and picture sequence |
CN112905811A (en) * | 2021-02-07 | 2021-06-04 | 广州录臻科技有限公司 | Teaching audio and video pushing method and system based on student classroom behavior analysis |
CN112862643A (en) * | 2021-03-01 | 2021-05-28 | 深圳市微幼科技有限公司 | Multimedia remote education platform system |
CN112990105B (en) * | 2021-04-19 | 2021-09-21 | 北京优幕科技有限责任公司 | Method and device for evaluating user, electronic equipment and storage medium |
CN114285966B (en) * | 2021-12-07 | 2024-03-29 | 浙江东隆教育科技有限公司 | Method and system for processing monitoring data related to labor education |
CN114440884A (en) * | 2022-04-11 | 2022-05-06 | 天津果实科技有限公司 | Intelligent analysis method for human body posture for intelligent posture correction equipment |
CN116055684B (en) * | 2023-01-18 | 2023-12-12 | 广州乐体科技有限公司 | Online physical education system based on picture monitoring |
CN116757524B (en) * | 2023-05-08 | 2024-02-06 | 广东保伦电子股份有限公司 | Teacher teaching quality evaluation method and device |
CN117196237A (en) * | 2023-09-19 | 2023-12-08 | 武汉盟游网络科技有限公司 | Information management method and device based on cloud computing |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085721A (en) * | 2017-06-26 | 2017-08-22 | 厦门劢联科技有限公司 | A kind of intelligence based on Identification of Images patrols class management system |
CN107958351A (en) * | 2017-12-26 | 2018-04-24 | 重庆大争科技有限公司 | Teaching quality assessment cloud service platform |
CN108564022A (en) * | 2018-04-10 | 2018-09-21 | 深圳市唯特视科技有限公司 | A kind of more personage's pose detection methods based on positioning classification Recurrent networks |
CN108875606A (en) * | 2018-06-01 | 2018-11-23 | 重庆大学 | A kind of classroom teaching appraisal method and system based on Expression Recognition |
CN109035089A (en) * | 2018-07-25 | 2018-12-18 | 重庆科技学院 | A kind of Online class atmosphere assessment system and method |
CN109359539A (en) * | 2018-09-17 | 2019-02-19 | 中国科学院深圳先进技术研究院 | Attention appraisal procedure, device, terminal device and computer readable storage medium |
CN109685692A (en) * | 2019-01-15 | 2019-04-26 | 山东仁博信息科技有限公司 | A kind of noninductive acquisition and analysis system of various dimensions student learning behavior |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10388178B2 (en) * | 2010-08-27 | 2019-08-20 | Arthur Carl Graesser | Affect-sensitive intelligent tutoring system |
WO2014061015A1 (en) * | 2012-10-16 | 2014-04-24 | Sobol Shikler Tal | Speech affect analyzing and training |
Non-Patent Citations (5)
Title |
---|
Automated Alertness and Emotion Detection for Empathic Feedback During E-Learning;S L Happy等;《2013 IEEE Fifth International Conference on Technology for Education》;20140303;第47-50页 * |
Estimation of Students’ Attention in the Classroom From Kinect Features;Janez Zaletelj;《10th International Symposium on Image and Signal Processing and Analysis (ISPA 2017)》;20171019;第220-224页 * |
Design and Research of an Intelligent Teaching System Based on Deep Learning; Chen Jinyin et al.; Computer Science; 2019-06-30; vol. 46 (no. 6A); pp. 550-554 *
Video-Based Human Behavior Analysis; Li Xinxin; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-31; vol. 2013 (no. 1); pp. 6, 29-30, 46-50 *
Automatic Evaluation of Classroom Teaching and Preliminary Research Results; Luo Zuying et al.; Modern Educational Technology; 2018-12-31; vol. 28 (no. 8); pp. 38-44 *
Also Published As
Publication number | Publication date |
---|---|
CN110443226A (en) | 2019-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443226B (en) | Student state evaluation method and system based on posture recognition | |
CN110991381B (en) | Real-time classroom student status analysis and indication reminding system and method based on behavior and voice intelligent recognition | |
CN111796752B (en) | Interactive teaching system based on PC | |
Broderick et al. | “Say just one word at first”: The emergence of reliable speech in a student labeled with autism | |
CN106228982A (en) | A kind of interactive learning system based on education services robot and exchange method | |
CN110647842A (en) | Double-camera classroom inspection method and system | |
CN206209894U (en) | Realization of High School Teaching Managing System | |
Costa | 99 tips for creating simple and sustainable educational videos: A guide for online teachers and flipped classes | |
Nguyen et al. | Online feedback system for public speakers | |
CN111275345B (en) | Classroom informatization evaluation and management system and method based on deep learning | |
CN112328077B (en) | College student behavior analysis system, method, device and medium | |
CN111160277A (en) | Behavior recognition analysis method and system, and computer-readable storage medium | |
CN109754653B (en) | Method and system for personalized teaching | |
Maniar et al. | Automated proctoring system using computer vision techniques | |
CN111178263A (en) | Real-time expression analysis method and device | |
Beard et al. | Listening research in the communication discipline | |
Ashwin et al. | Unobtrusive students' engagement analysis in computer science laboratory using deep learning techniques | |
CN112116841A (en) | Personalized remote education system and method based on deep learning | |
JP2022075662A (en) | Information extraction apparatus | |
JP2022075661A (en) | Information extraction apparatus | |
CN112634096A (en) | Classroom management method and system based on intelligent blackboard | |
TWM600908U (en) | Learning state improvement management system | |
TWM600921U (en) | Learning trajectory analysis system | |
CN204965794U (en) | Multimedia teaching system | |
TWI731577B (en) | Learning state improvement management system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |