CN111950472A - Teacher grinding evaluation method and system - Google Patents


Info

Publication number
CN111950472A
CN111950472A (application CN202010820365.4A)
Authority
CN
China
Prior art keywords
teacher
proportion
audio
grinding
lesson
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010820365.4A
Other languages
Chinese (zh)
Inventor
周倩如
须佶成
李川
郭杏荣
李光杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Gosboro Education Technology Co ltd
Original Assignee
Beijing Gosboro Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Gosboro Education Technology Co ltd filed Critical Beijing Gosboro Education Technology Co ltd
Priority to CN202010820365.4A priority Critical patent/CN111950472A/en
Publication of CN111950472A publication Critical patent/CN111950472A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Tourism & Hospitality (AREA)
  • Human Computer Interaction (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a teacher lesson-grinding evaluation method and system. The method comprises the following steps: obtaining the number and proportion of effective teaching actions through body-movement recognition; obtaining the number and proportion of head images in which the teacher presents a frontal face during teaching, through face-orientation detection on head images; obtaining the proportion of good expressions shown by the teacher, through expression recognition on the frontal-face head images; obtaining the teacher's audio rating through audio analysis of the teacher's lesson-grinding audio; and evaluating the teacher's lesson-grinding process by integrating the number and proportion of effective teaching actions, the proportion of frontal-face head images, the overall proportion of good expressions, and the audio rating. The method and system can provide each teacher with accurate, quantitative detection results in every dimension and, from those results, algorithmically produce a comprehensive score and grade, helping to improve the teacher's abilities across the board.

Description

Teacher grinding evaluation method and system
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a teacher lesson-grinding evaluation method and system. ("Lesson grinding" here refers to the iterative rehearsal and refinement of a lesson before it is taught.)
Background
Education is highly valued in China. Chinese families invest heavily in it: parents expect their children to receive the best educational resources and to be taught by the best teachers. Across China, however, well-known excellent teachers are few and are concentrated mostly in first- and second-tier cities. Many newly hired teachers, and teachers who lack particular teaching skills, cannot meet parents' growing expectation that their children receive the best instruction, so the teaching ability of instructors must be improved comprehensively in order to raise the quality of classroom learning. In traditional teacher training, or lesson grinding, experienced senior teachers first watch trial lectures given by the teachers being trained and then comment manually on their performance, pointing out shortcomings and offering targeted suggestions for improvement. In a typical setting, however, the number of teachers awaiting lesson grinding is large, and manual observation and commentary consume a great deal of time and manpower; both labor and time costs are very high. The most important step in lesson grinding is to detect the problems in a teacher's delivery and obtain a concrete evaluation in each dimension, so that targeted suggestions can be made and the teacher's teaching ability ultimately improved.
How to use artificial intelligence, that is, a machine in place of a senior teacher, to automatically evaluate every aspect of a teacher who needs lesson grinding and produce the corresponding evaluations, so that the teacher's teaching ability can be improved in a targeted way, is therefore an urgent problem that has not yet been solved satisfactorily.
Evaluating a teacher involves many dimensions; the overall evaluation can be divided into areas such as expressiveness and content. Expressiveness covers analysis of the body (limbs and head) and of the voice. The content dimension mainly uses technologies such as natural language processing (NLP) to analyze what the teacher teaches. Tasks of this kind can be solved with mainstream machine learning methods. In recent years, deep learning, the most closely watched artificial intelligence technique of the period, has surpassed conventional methods in many fields and has been applied successfully to many aspects of daily life. Deep learning is a method based on artificial neural networks: through self-learning, a network converts raw inputs such as images and sounds into meaningful numerical features and uses those features to complete specific tasks such as classification and recognition. Using deep learning to perform the multi-dimensional evaluation in the teacher lesson-grinding task is therefore a natural solution.
At present, several companies in China have launched similar lesson-grinding solutions, for example one company's lecture-quality analysis product, which quantifies a teacher's abilities in expressiveness, content, and other aspects in order to provide data support for targeted suggestions on improving teaching ability. However, the evaluation dimensions of such products are not fine-grained enough and omit several potentially effective, more detailed dimensions. The present system, by contrast, is designed on the basis of evaluation criteria distilled from many years of teacher-training experience, which makes its evaluation of teachers more authoritative.
Disclosure of Invention
The invention aims to provide a teacher lesson-grinding evaluation method and system that can provide each teacher with accurate, quantitative detection results in every dimension and, from those results, algorithmically produce a comprehensive score and grade, helping to improve the teacher's abilities across the board.
To solve the above technical problem, the invention provides a teacher lesson-grinding evaluation method comprising the following steps: obtaining the number and proportion of effective teaching actions through body-movement recognition; obtaining the number and proportion of head images in which the teacher presents a frontal face during teaching, through face-orientation detection on head images; obtaining the proportion of good expressions shown by the teacher, through expression recognition on the frontal-face head images; obtaining the teacher's audio rating through audio analysis of the teacher's lesson-grinding audio; and evaluating the teacher's lesson-grinding process by integrating the number and proportion of effective teaching actions, the proportion of frontal-face head images, the overall proportion of good expressions, and the audio rating.
In some embodiments, obtaining the number and proportion of effective teaching actions through body-movement recognition includes: performing pose-estimation analysis on each frame extracted from the video to obtain the positions and confidences of the human-body keypoints in each frame; performing specific calculations on each limb joint to obtain detailed per-frame state information such as limb movement, standing state, orientation, and joint amplitude; and obtaining the number and proportion of effective actions by matching each frame's limb movement against the definition of an effective action.
In some embodiments, performing pose-estimation analysis on each frame extracted from the video to obtain the positions and confidences of the human-body keypoints includes: applying either a top-down or a bottom-up pose-estimation method to each frame.
In some embodiments, obtaining the number and proportion of frontal-face head images through face-orientation detection includes: estimating the head pose with a deep learning method to obtain the yaw, pitch, and roll of the head in each frame; the model determines the face orientation of each frame from these three values, and the frontal-face detection result is obtained by aggregating over all frames.
In some embodiments, obtaining the proportion of good expressions includes: performing expression recognition on each preprocessed head image to obtain the teacher's facial expression in each frame, and aggregating the recognized expressions to obtain the final expression detection result.
In some embodiments, good expressions are not limited to smiling expressions.
In some embodiments, obtaining the teacher's audio rating through audio analysis of the lesson-grinding audio includes: quantitatively scoring the cadence of the teacher's delivery by analyzing voice-feature dimensions of the lecture audio such as stress, tone, and pitch.
In some embodiments, analyzing voice-feature dimensions such as stress, tone, and pitch includes: extracting basic time-domain and frequency-domain speech features and scoring the audio with a time-domain analysis method or a deep learning method.
In some embodiments, evaluating the teacher's lesson-grinding process includes: integrating the number and proportion of effective teaching actions, the proportion of frontal-face head images, the overall proportion of good expressions, and the audio rating with a weighting method, TOPSIS, or a supervised machine learning method.
In addition, the invention provides a teacher lesson-grinding evaluation system comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the teacher lesson-grinding evaluation method described above.
With this design, the invention has at least the following advantages:
It automates the detection and grading of teachers' lesson-grinding videos. Detection and rating run without any manual interaction, and the results have been endorsed by professional teacher trainers. The method effectively reduces the labor cost of lesson grinding and can be applied widely in lesson-grinding scenarios.
Drawings
The foregoing is only an overview of the technical solution of the present invention. To make the solution clearer, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a teacher lesson-grinding evaluation method according to an embodiment of the present invention;
fig. 2 is a block diagram of a teacher lesson grinding evaluation system according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention are described below in conjunction with the accompanying drawings; it should be understood that they serve to illustrate and explain the invention, not to limit it.
The invention addresses the problem of building a complete, multi-domain, multi-dimensional lesson-grinding evaluation system for teacher expressiveness: providing each teacher with accurate, quantitative detection results in every dimension and, from those results, algorithmically producing a comprehensive score and grade to help improve the teacher's abilities.
The invention designs a new teacher evaluation method that automatically grades lesson-grinding videos without the involvement of a senior teacher. The evaluation flow consists of two parts, detection and rating, as shown in Fig. 1.
The detection part comprises face-orientation detection, expression recognition, effective-body-movement detection, and speech-rate detection. After a user uploads a video to be evaluated and the server receives the task, frames and audio are extracted from the uploaded video. The process then splits into two branches.
In the first branch, each extracted frame undergoes independent body-movement detection to obtain the proportion of effective actions, followed in turn by face-orientation detection and expression recognition, finally yielding the proportion values of those dimensions. In the second branch, the extracted audio is analyzed to obtain the speech-rate value and the cadence score. Once the results of all dimensions are available, the dimension values are assembled and fed into the rating model to obtain the final grade and score. Each part of this flow is described in turn below.
The purpose of body-movement detection is to measure how expansively the teacher moves while teaching. In classroom teaching, the openness of the teacher's gestures is an important part of the lesson: good body movement attracts students' attention to a certain extent and thereby improves classroom quality. Based on the long experience of professional teacher trainers, the proportion of effective body movements is defined as the evaluation index for this part.
The core of limb analysis is locating the limb joints, i.e. human pose estimation. Pose-estimation methods fall into two main categories, top-down and bottom-up. A top-down method first detects and crops every person in the image, then resizes each crop and feeds it into the network for pose estimation; a bottom-up method first finds all keypoints in the image and then groups them to recover each person.
In the concrete processing logic, a deep learning model first performs pose estimation on each frame extracted from the video, yielding detailed information such as the position and confidence of each human-body keypoint. From these keypoints, specific calculations on each limb joint produce detailed per-frame state information such as limb movement, standing state, orientation, and joint amplitude; finally, the number and proportion of effective actions are obtained by matching each frame's limb movement against the definition of an effective action.
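The per-frame matching step above can be sketched as follows. This is a minimal illustration, not the patent's actual rule set: the keypoint format, the raised-wrist heuristic, and the 0.3 confidence threshold are all assumptions made for the example.

```python
def is_effective_action(keypoints, conf_threshold=0.3):
    """Toy rule for one frame: does it show an 'effective' teaching gesture?

    `keypoints` maps joint names to (x, y, confidence) tuples, as a pose
    estimator (top-down or bottom-up) might produce them. The rule below
    (wrist raised above the shoulder) is an illustrative stand-in for the
    effective-action definitions mentioned in the text.
    """
    wrist = keypoints.get("right_wrist")
    shoulder = keypoints.get("right_shoulder")
    if wrist is None or shoulder is None:
        return False
    if wrist[2] < conf_threshold or shoulder[2] < conf_threshold:
        return False  # low-confidence keypoints are ignored
    # Image y grows downward, so "raised" means a smaller y value.
    return wrist[1] < shoulder[1]

def effective_action_stats(frames):
    """Count effective-action frames and their proportion over the video."""
    count = sum(1 for kp in frames if is_effective_action(kp))
    proportion = (count / len(frames)) if frames else 0.0
    return count, proportion
```

In a real system the per-frame decision would compare the full joint-amplitude state against each defined effective action rather than a single heuristic.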
The frontal-face detection module measures the proportion of the lesson during which the teacher faces the students directly. Facing the students while teaching is a basic, rigid requirement, so frontal-face detection is an important, fundamental detection dimension. To simplify this part and the subsequent expression detection, a preprocessing step is added: a trained object-detection model detects the head, and the corresponding image region is cropped for later use. Specifically, a deep learning method estimates the head pose on each cropped head image, producing the yaw, pitch, and roll of the head in each frame. The model determines the face orientation of each frame from these three values, and the frontal-face result is obtained by aggregating over all frames.
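The frontal-face decision from the yaw/pitch/roll angles can be sketched as below. The patent gives no numeric cutoffs, so the 30-degree yaw and 20-degree pitch thresholds are illustrative assumptions.

```python
def is_frontal(yaw, pitch, roll, yaw_max=30.0, pitch_max=20.0):
    """Classify one frame's head pose (in degrees) as facing the students.

    Roll (head tilt) is accepted here and does not affect the decision;
    the thresholds are assumptions for illustration only.
    """
    return abs(yaw) <= yaw_max and abs(pitch) <= pitch_max

def frontal_face_stats(poses):
    """poses: iterable of per-frame (yaw, pitch, roll) estimates.

    Returns the (count, proportion) of frontal-face frames, i.e. the
    aggregation step described in the text.
    """
    poses = list(poses)
    count = sum(1 for y, p, r in poses if is_frontal(y, p, r))
    proportion = (count / len(poses)) if poses else 0.0
    return count, proportion
```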
The expression-recognition part operates on the cropped head images produced by preprocessing. Its purpose is to measure the expressive appeal of the teacher during the lesson: a teacher with rich expressions captures students' attention and so improves the teaching effect. Given the particularities of the education scenario, and after extensive discussion with professional teacher trainers, two important requirements were identified:
1. The expression-detection model should be as accurate as possible and should find as many good expressions as it can.
2. A good expression is defined as any clearly positive expression, not merely a smile.
In practice, expression images matching the scenario were selected according to these requirements and a dedicated data set was built from scratch. An expression-recognition model was then trained on this data set, yielding a model suited to the scenario. Specifically, expression recognition is performed on each preprocessed head image to obtain the teacher's facial expression in each frame, and the final expression detection result is obtained by aggregating the recognized expressions.
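The aggregation of per-frame expression labels into the good-expression proportion might look like the sketch below. The label set, and which labels count as "good", are assumptions for illustration; the patent only specifies that good expressions are positive ones and not limited to smiles.

```python
# Hypothetical positive-label set; per requirement 2 above, it is
# deliberately broader than just "smile".
GOOD_EXPRESSIONS = {"smile", "enthusiastic", "encouraging"}

def good_expression_proportion(frame_labels):
    """frame_labels: one recognized expression label per head crop,
    or None for frames where no face/expression was recognized.

    Returns the proportion of recognized frames showing a good expression.
    """
    labels = [label for label in frame_labels if label is not None]
    if not labels:
        return 0.0
    good = sum(1 for label in labels if label in GOOD_EXPRESSIONS)
    return good / len(labels)
```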
Certain characteristics of speech, such as fluency and speech rate, are relatively intuitive measures of a teacher's lecturing ability, so speech-rate detection is also an important detection dimension. Speech rate is defined as the number of words a speaker utters per unit time.
Two speech-rate detection methods are adopted. In the first, automatic speech recognition (ASR) transcribes the teacher's audio to obtain the lecture text; voice activity detection (VAD) locates the segments in which the teacher is actually speaking, giving the effective speaking duration; and the word count of the transcript is divided by that duration to obtain the speech rate of the audio. In the second, a syllable-estimation technique regresses a word-count estimate directly from the audio without any speech recognition; VAD again yields the effective speaking duration; and the two are divided to obtain the speech rate.
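The first method (ASR word count divided by VAD speaking time) reduces to the arithmetic below. The energy-threshold VAD here is a toy stand-in for a real voice-activity detector; the frame length and threshold are assumptions.

```python
def vad_speech_seconds(frame_energies, frame_seconds=0.02, threshold=0.01):
    """Toy VAD: total duration of frames whose energy exceeds a threshold.

    A production system would use a proper VAD model; this only shows
    how the effective speaking duration enters the computation.
    """
    return sum(frame_seconds for e in frame_energies if e > threshold)

def speech_rate(word_count, speech_seconds):
    """Words per minute over the actually-spoken portion of the audio."""
    if speech_seconds <= 0:
        return 0.0
    return word_count * 60.0 / speech_seconds
```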
Besides speech rate, another important dimension of classroom expressiveness is the teacher's enthusiasm in lecturing and ability to engage students. The cadence of the teacher's delivery is scored quantitatively, mainly by analyzing voice-feature dimensions of the lecture audio such as stress, tone, and pitch. This part extracts basic time-domain and frequency-domain speech features and scores the audio with a time-domain analysis method or a deep learning method.
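Two of the basic time-domain features mentioned above (frame energy and zero-crossing rate) can be computed as below, with a toy cadence score built from energy variation. Real systems would add pitch and frequency-domain features and a learned scoring model; the scaling factor is an assumption for the example.

```python
def frame_features(samples):
    """Return (energy, zero_crossing_rate) for one audio frame of floats."""
    if not samples:
        return 0.0, 0.0
    energy = sum(s * s for s in samples) / len(samples)
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return energy, crossings / max(len(samples) - 1, 1)

def liveliness_score(frames):
    """Toy cadence score: greater energy spread across frames suggests a
    more varied, livelier delivery. Scaling to 0-100 is an assumption."""
    energies = [frame_features(f)[0] for f in frames]
    if len(energies) < 2:
        return 0.0
    mean = sum(energies) / len(energies)
    variance = sum((e - mean) ** 2 for e in energies) / len(energies)
    return min(100.0, variance ** 0.5 * 100.0)
```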
This part produces a comprehensive expressiveness score for the teaching video from the detection results of all dimensions. The score bands and grades are determined with reference to manual evaluations by senior teachers. A general multi-dimensional evaluation method (such as, without limitation, a weighting method or TOPSIS) or a supervised machine learning method may be used.
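The simplest of the aggregation options named above, a weighting method, can be sketched as follows. The dimension names, equal weights, and grade cutoffs are illustrative assumptions; TOPSIS or a supervised model could replace this step, as the text notes.

```python
# Equal weights are an assumption; in practice they would be tuned
# against senior teachers' manual evaluations.
WEIGHTS = {
    "effective_action": 0.25,
    "frontal_face": 0.25,
    "good_expression": 0.25,
    "audio": 0.25,
}

def overall_score(dimension_scores, weights=WEIGHTS):
    """dimension_scores: dimension name -> score in [0, 100].

    Returns the weighted comprehensive score (normalized by total weight).
    """
    total_weight = sum(weights.values())
    return sum(weights[k] * dimension_scores[k] for k in weights) / total_weight

def grade(score):
    """Map the numeric score to a coarse grade band (illustrative cutoffs)."""
    return "A" if score >= 85 else "B" if score >= 70 else "C"
```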
Fig. 2 shows the structure of the teacher lesson-grinding evaluation system. Referring to Fig. 2, the teacher lesson-grinding evaluation system 200 may serve, for example, as the evaluation host in an online education system, implementing the online evaluation of teachers' lesson-grinding behavior. The system 200 may be implemented in a single node, or its functions may be distributed across multiple nodes in a network. Those skilled in the art will appreciate that the term covers a broad range of devices, of which the system 200 shown in Fig. 2 is only one example; it is presented for clarity and is not intended to limit the invention to one particular embodiment or class of embodiments. At least some of the features and methods described herein may be implemented in a network device or component such as the system 200; for example, they may be implemented in hardware, firmware, and/or software installed to run on hardware. The system 200 may be any device that processes, stores, and/or forwards data frames over a network, such as a server, a client, or a data source. As shown in Fig. 2, the system 200 may include a transceiver (Tx/Rx) 210, which may be a transmitter, a receiver, or a combination of the two. The Tx/Rx 210 may be coupled to a plurality of ports 250 (e.g., uplink and/or downlink interfaces) for transmitting and/or receiving frames from other nodes.
A processor 230 may be coupled to the Tx/Rx 210 to process frames and/or determine the nodes to which frames are sent. The processor 230 may include one or more multi-core processors and/or memory devices 232, which may serve as data stores, buffers, and the like. The processor 230 may be implemented as a general-purpose processor or may be part of one or more application-specific integrated circuits (ASICs) and/or digital signal processors (DSPs).
The above description covers only preferred embodiments of the present invention and is not intended to limit it in any form. Those skilled in the art may make various modifications, equivalent variations, and improvements using the technical content disclosed above without departing from the spirit and scope of the invention.

Claims (10)

1. A teacher lesson-grinding evaluation method, characterized by comprising the following steps:
obtaining the number and proportion of effective teaching actions through body-movement recognition;
obtaining the number and proportion of head images in which the teacher presents a frontal face during teaching, through face-orientation detection on head images;
obtaining the proportion of good expressions shown by the teacher, through expression recognition on the frontal-face head images;
obtaining the teacher's audio rating through audio analysis of the teacher's lesson-grinding audio; and
evaluating the teacher's lesson-grinding process by integrating the number and proportion of effective teaching actions, the proportion of frontal-face head images, the overall proportion of good expressions, and the audio rating.
2. The teacher lesson-grinding evaluation method of claim 1, wherein obtaining the number and proportion of effective teaching actions through body-movement recognition comprises:
performing pose-estimation analysis on each frame extracted from the video to obtain the positions and confidences of the human-body keypoints in each frame;
performing specific calculations on each limb joint to obtain detailed per-frame state information of limb movement, standing state, orientation, and joint amplitude; and
obtaining the number and proportion of effective actions by matching each frame's limb movement against the definition of an effective action.
3. The teacher lesson-grinding evaluation method of claim 2, wherein performing pose-estimation analysis on each frame extracted from the video to obtain the positions and confidences of the human-body keypoints comprises:
applying a top-down or a bottom-up pose-estimation method to each frame extracted from the video.
4. The teacher lesson-grinding evaluation method of claim 1, wherein obtaining the number and proportion of frontal-face head images through face-orientation detection comprises:
estimating the head pose with a deep learning method to obtain the yaw, pitch, and roll of the head in each frame; and
determining, by the model, the face orientation of each frame from these three values, the frontal-face detection result being obtained by aggregating over all frames.
5. The teacher lesson-grinding evaluation method of claim 1, wherein obtaining the proportion of good expressions shown by the teacher through expression recognition on the frontal-face head images comprises:
performing expression recognition on each preprocessed head image to obtain the teacher's facial expression in each frame, and aggregating the recognized expressions to obtain the final expression detection result.
6. The teacher lesson-grinding evaluation method according to claim 5, wherein the good expressions are not limited to smiling expressions.
7. The teacher lesson-grinding evaluation method of claim 1, wherein obtaining the teacher's audio score by performing audio detection on the teacher's lesson-grinding audio comprises:
quantitatively scoring the teacher's vocal delivery by analyzing speech feature dimensions of the lesson-grinding audio such as stress, intonation and pitch.
8. The teacher lesson-grinding evaluation method of claim 7, wherein analyzing speech feature dimensions of the lesson-grinding audio such as stress, intonation and pitch comprises:
extracting basic time-domain and frequency-domain features of the speech, and scoring the audio using a time-domain analysis method or a deep learning method.
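As a minimal sketch of the feature-extraction step, the toy function below computes three classic descriptors from raw samples: RMS energy and zero-crossing rate (time domain) and a DFT-based spectral centroid (frequency domain). The naive DFT is only for illustration; the patent does not prescribe any particular feature set or library.

```python
import math

def basic_audio_features(samples, sr):
    """Toy time/frequency-domain descriptors for a short, non-empty sample list:
    RMS energy, zero-crossing rate, and spectral centroid in Hz."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for i in range(1, n) if samples[i - 1] * samples[i] < 0) / n
    # Magnitude spectrum via a naive O(n^2) DFT (fine for an illustration).
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    centroid = sum((k * sr / n) * m for k, m in enumerate(mags)) / total
    return {"rms": rms, "zcr": zcr, "centroid_hz": centroid}
```

A downstream scorer (rule-based or learned) would then map such features to the audio score the claim refers to.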
9. The teacher lesson-grinding evaluation method according to claim 1, wherein evaluating the teacher's lesson-grinding process by integrating the number and proportion of effective teaching actions, the proportion of front-face head images presented by the teacher, the overall proportion of good expressions shown, and the teacher's audio score comprises:
integrating the number and proportion of effective teaching actions, the proportion of front-face head images, the overall proportion of good expressions and the teacher's audio score by a weighting method, TOPSIS, or a supervised machine-learning method.
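Claim 9 names TOPSIS as one aggregation option. Below is a generic, illustrative TOPSIS implementation (the patent gives no formulas or weights), where each row is one candidate being ranked, and the columns are benefit criteria such as effective-action proportion, front-face proportion, good-expression proportion, and audio score; the weights are hypothetical.

```python
import math

def topsis(matrix, weights):
    """Rank candidates by TOPSIS: vector-normalize and weight each column,
    then score each row by closeness to the ideal solution (all criteria
    treated as benefit criteria). Returns one score in [0, 1] per row."""
    ncols = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0 for j in range(ncols)]
    v = [[row[j] / norms[j] * weights[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best value per criterion
    anti = [min(col) for col in zip(*v)]    # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 0.0)
    return scores
```

A simple weighted sum of pre-normalized metrics is the other classical option the claim mentions; the supervised alternative would fit such weights from labeled evaluations.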
10. A teacher lesson-grinding evaluation system, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the teacher lesson-grinding evaluation method of any one of claims 1-9.
CN202010820365.4A 2020-08-14 2020-08-14 Teacher grinding evaluation method and system Pending CN111950472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010820365.4A CN111950472A (en) 2020-08-14 2020-08-14 Teacher grinding evaluation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010820365.4A CN111950472A (en) 2020-08-14 2020-08-14 Teacher grinding evaluation method and system

Publications (1)

Publication Number Publication Date
CN111950472A true CN111950472A (en) 2020-11-17

Family

ID=73342249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010820365.4A Pending CN111950472A (en) 2020-08-14 2020-08-14 Teacher grinding evaluation method and system

Country Status (1)

Country Link
CN (1) CN111950472A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024155A1 (en) * 2021-08-23 2023-03-02 华中师范大学 Method and system for measuring non-verbal behavior of teacher

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697577A (en) * 2019-02-01 2019-04-30 北京清帆科技有限公司 A kind of voice-based Classroom instruction quality evaluation method
CN109919434A (en) * 2019-01-28 2019-06-21 华中科技大学 A kind of classroom performance intelligent Evaluation method based on deep learning
CN111401797A (en) * 2020-05-09 2020-07-10 华南师范大学 Teaching quality evaluation method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Liangbo; Xu Weisheng: "Understanding Teaching Situations Based on Teachers' Facial Expressions" (基于教师表情的教学情境理解), 系统仿真技术 (System Simulation Technology), no. 04

Similar Documents

Publication Publication Date Title
CN110992741B (en) Learning auxiliary method and system based on classroom emotion and behavior analysis
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN106851216B (en) A kind of classroom behavior monitoring system and method based on face and speech recognition
CN107203953B (en) Teaching system based on internet, expression recognition and voice recognition and implementation method thereof
CN108648757B (en) Analysis method based on multi-dimensional classroom information
CN109359215A (en) Video intelligent method for pushing and system
WO2019024247A1 (en) Data exchange network-based online teaching evaluation system and method
CN108898115B (en) Data processing method, storage medium and electronic device
JP6977901B2 (en) Learning material recommendation method, learning material recommendation device and learning material recommendation program
CN111242049A (en) Student online class learning state evaluation method and system based on facial recognition
CN114298497A (en) Evaluation method and device for classroom teaching quality of teacher
CN110427977B (en) Detection method for classroom interaction behavior
CN111695442A (en) Online learning intelligent auxiliary system based on multi-mode fusion
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
WO2020007097A1 (en) Data processing method, storage medium and electronic device
CN113920534A (en) Method, system and storage medium for extracting video highlight
CN112101074A (en) Online education auxiliary scoring method and system
CN114841841A (en) Intelligent education platform interaction system and interaction method for teaching interaction
CN116050892A (en) Intelligent education evaluation supervision method based on artificial intelligence
CN111523445A (en) Examination behavior detection method based on improved Openpos model and facial micro-expression
CN110956142A (en) Intelligent interactive training system
CN113076885B (en) Concentration degree grading method and system based on human eye action characteristics
CN111950472A (en) Teacher grinding evaluation method and system
CN116825288A (en) Autism rehabilitation course recording method and device, electronic equipment and storage medium
Krishnamoorthy et al. E-Learning Platform for Hearing Impaired Students

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination