CN117994098A - Teaching index data determining method, device, computer equipment and medium - Google Patents


Info

Publication number
CN117994098A
CN117994098A (application CN202410249374.0A)
Authority
CN
China
Prior art keywords
analysis
teaching
result
teacher
results
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410249374.0A
Other languages
Chinese (zh)
Inventor
谢枚
叶忠
解凯彬
杨霞
郭伟
李呈祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xuchuan Technology Co ltd
Original Assignee
Jiangsu Xuchuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Xuchuan Technology Co ltd filed Critical Jiangsu Xuchuan Technology Co ltd
Priority to CN202410249374.0A
Publication of CN117994098A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

The invention relates to the technical field of digital education and discloses a method, a device, computer equipment and a medium for determining teaching index data. The method comprises: collecting teaching data of a teacher; processing the teaching data with a teaching analysis model, which analyzes the teacher's pre-class preparation and classroom behavior, to obtain a teaching analysis result for the teacher; and determining teaching index data based on the teaching analysis result. The scheme achieves high accuracy in determining teaching index data.

Description

Teaching index data determining method, device, computer equipment and medium
Technical Field
The invention relates to the technical field of digital education, in particular to a method, a device, computer equipment and a medium for determining teaching index data.
Background
Timely teaching evaluation during a teacher's teaching can promote the growth of teaching ability, help the teacher team develop in a coordinated and well-rounded way, and provide strong support for carrying out teaching, research, training and evaluation activities scientifically and efficiently.
In teaching evaluation, index rules for assessing teaching level are generally preset, and an experienced teacher then observes the class on site and scores the teaching teacher's classroom performance on each index. However, this approach takes up other teachers' time, incurring high time and labor costs, and the evaluation results are easily affected by the observing teacher's subjective preferences. In addition, it is difficult to ensure that every lesson of every teacher is observed, or that the observing teacher is the same each time, so it is hard to track the growth of the teaching teacher's ability.
A solution is therefore needed for the problem in the prior art that manual analysis of a teacher's teaching score indexes lacks objectivity and uniformity, resulting in low accuracy of the teaching score index analysis results.
Disclosure of Invention
In view of the above, the invention provides a method, a device, a computer device and a storage medium for determining teaching index data, so as to solve the problem of low accuracy of analysis results of teaching score index data.
In a first aspect, the present invention provides a method for determining teaching index data, where the method includes: collecting teaching data of a teacher; processing the teaching data by using a teaching analysis model to obtain a teaching analysis result of the teacher; the teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers; and determining teaching index data based on the teaching analysis result.
In this scheme, the collected teaching data are processed under a single standard by the preset teaching analysis model, yielding an objective and uniform teaching analysis result, which improves the accuracy of the teaching index data determined from it.
In an alternative embodiment, the collecting teaching data of the teacher includes: and collecting video data, audio data and teaching material data of each lesson of each teacher.
In an alternative embodiment, the processing the teaching data using a teaching analysis model includes: performing target recognition on the video data, performing voice recognition and semantic analysis on the audio data, and reading teaching material data to obtain a data processing result; and carrying out teaching analysis by using the data processing result to obtain a teaching analysis result.
In an optional implementation manner, the performing target recognition on the video data, performing voice recognition and semantic analysis on the audio data, and reading teaching material data to obtain a data processing result includes: performing target recognition on the video data to obtain a teacher recognition result, a student recognition result, a courseware recognition result and a blackboard writing recognition result; performing voice recognition and semantic analysis on the audio data to obtain teacher voice results, student voice results, voice texts and keyword extraction results; and reading the teaching material data to obtain the word count, the paragraphs and the document size of the teaching material data.
In an alternative embodiment, the educational analysis includes line of sight analysis, sound analysis, language analysis, face analysis, emotion analysis, gesture analysis, medium analysis, scene analysis, behavioral or performance association analysis; the teaching analysis by using the data processing result comprises the following steps: performing line-of-sight analysis based on the teacher recognition result and the teacher voice result; performing sound analysis based on the teacher voice result; language analysis is carried out based on teacher voice results, student voice results, voice texts and keyword extraction results; carrying out emotion analysis based on teacher recognition results; performing face analysis and gesture analysis based on teacher recognition results and student recognition results; performing medium analysis based on the number of words of the teaching data, the paragraph of the teaching data and the file size of the teaching data; scene analysis is carried out based on courseware recognition results and blackboard writing recognition results; and performing behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result.
In an alternative embodiment, the teaching analysis results include a resource preparation analysis result, a lesson preparation full-degree analysis result, a classroom structure analysis result, a new and old engagement analysis result, an excitation motivation analysis result, a question type analysis result, a volume speech rate analysis result, a positive speech analysis result, a spoken speech analysis result, a teaching gesture analysis result, a teaching writing analysis result, a question response analysis result, an import attraction analysis result, a sight line distribution analysis result, a walking track analysis result, and a multiple evaluation analysis result; the obtaining of the teaching analysis result comprises the following steps: performing medium analysis based on the number of the teaching data words, the teaching data paragraphs and the teaching data file size to obtain a resource preparation analysis result; carrying out gesture analysis based on the teacher identification result to obtain a lesson preparation full-degree analysis result; carrying out emotion analysis based on the teacher recognition result, carrying out face analysis and gesture analysis based on the teacher recognition result and the student recognition result, and obtaining a teaching gesture analysis result; language analysis is carried out based on teacher voice results, voice texts and keyword extraction results, and classroom structure analysis results, new and old connection analysis results, motivation analysis results, question type analysis results, active term analysis results and spoken term analysis results are obtained; performing voice analysis based on the teacher voice result, and performing language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a volume speech rate analysis result; scene analysis is carried out based on courseware recognition results and blackboard writing recognition results, and 
teaching writing analysis results are obtained; language analysis is carried out based on teacher voice results, student voice results, voice texts and keyword extraction results, and question response analysis results are obtained; performing gesture analysis based on the student identification result to obtain an imported attraction analysis result; performing face analysis based on the teacher recognition result, and performing line-of-sight analysis based on the teacher recognition result and the teacher voice result to obtain a line-of-sight distribution analysis result; tracking a track based on the teacher identification result and the student identification result to obtain a walking track analysis result; and performing behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result to obtain a multi-element evaluation analysis result.
In an alternative embodiment, the method further comprises: collecting video data, audio data and teaching material data of a plurality of classes of each teacher, and respectively obtaining teaching analysis results corresponding to the plurality of classes; determining teaching index data of a teacher according to teaching analysis results corresponding to the plurality of lessons; and generating teacher teaching improvement suggestions based on the teaching index data of the teacher.
According to the scheme, the video data, the audio data and the teaching data of a plurality of lessons of each teacher are collected, so that the change rule of the teaching analysis result of each teacher can be obtained through processing, the change rule of the more accurate teaching index data of each teacher is obtained, and the more targeted teaching improvement suggestion of the teacher is generated.
In a second aspect, the present invention provides a teaching index data determining apparatus, the apparatus comprising: the data acquisition module is used for acquiring teaching data of a teacher; the teaching analysis module is used for processing the teaching data by using a teaching analysis model to obtain a teaching analysis result of the teacher; the teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers; and the data determining module is used for determining teaching index data based on the teaching analysis result.
In a third aspect, the present invention provides a computer device comprising a memory and a processor that are communicatively connected; the memory stores computer instructions, and the processor executes the computer instructions to perform the teaching index data determining method of the first aspect or any of its corresponding embodiments.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the teaching index data determining method of the first aspect or any of the embodiments corresponding thereto.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for determining teaching index data according to an embodiment of the present invention;
FIG. 2 is a flow chart of another teaching index data determination method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the results of resource preparation analysis according to an embodiment of the present application;
FIG. 4 is a schematic diagram showing the analysis result of the full extent of lessons preparation according to the embodiment of the present application;
FIG. 5 shows a schematic diagram of gesture analysis in accordance with an embodiment of the present application;
FIG. 6 is a schematic diagram showing the analysis results of a classroom structure according to an embodiment of the present application;
FIG. 7 is a schematic diagram showing the new-and-old knowledge linkage analysis results and the motivation analysis results according to an embodiment of the present application;
FIG. 8 is a schematic diagram showing results of question type analysis according to an embodiment of the present application;
FIG. 9 is a diagram illustrating the results of a positive term analysis according to an embodiment of the present application;
FIG. 10 is a schematic diagram showing the results of spoken word analysis, according to an embodiment of the present application;
FIG. 11 is a schematic diagram showing the results of volume speech rate analysis according to an embodiment of the present application;
FIG. 12 is a schematic diagram showing the results of a teaching writing analysis according to an embodiment of the present application;
FIG. 13 is a schematic view showing the analysis result of the line of sight distribution according to the embodiment of the present application;
FIG. 14 is a diagram showing the analysis result of the walking track according to the embodiment of the present application;
FIG. 15 is a schematic diagram showing the results of a multivariate evaluation analysis according to an embodiment of the present application;
fig. 16 is a block diagram of a teaching index data determination device according to an embodiment of the present invention;
fig. 17 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Timely teaching evaluation during a teacher's teaching can promote the growth of teaching ability, help the teacher team develop in a coordinated and well-rounded way, and provide strong support for carrying out teaching, research, training and evaluation activities scientifically and efficiently.
In teaching evaluation, index rules for assessing teaching level are generally preset, and an experienced teacher then observes the class on site and scores the teaching teacher's classroom performance on each index. However, this method takes up other teachers' time, incurring high time and labor costs; the evaluation results are easily affected by the observing teacher's subjective preferences, and extra manpower is needed to compile the statistics. In addition, it is difficult to ensure that every lesson of every teacher is observed, or that the observing teacher is the same each time, making uniform evaluation of teaching teachers and statistics on teaching ability growth hard to obtain.
Therefore, the embodiment of the invention provides a method for determining teaching index data, and provides a corresponding solution for the problem of low accuracy of analysis results of the teaching score index data caused by the problems of objectivity and uniformity of analysis processing of the teaching score index of a teacher in a manual mode in the related technology.
According to an embodiment of the present invention, there is provided an embodiment of a teaching index data determination method, it should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that shown or described herein.
In this embodiment, a method for determining teaching index data is provided, which may be used in a notebook, a desktop, a teaching system, etc., fig. 1 is a flowchart of a method for determining teaching index data according to an embodiment of the present invention, as shown in fig. 1, where the flowchart includes the following steps:
Step S101, collecting teaching data of a teacher.
Optionally, the teaching data of the teacher is collected through a camera, a voice collecting device and the like, such as teaching courseware, blackboard writing and speaking of the teacher in the class, teaching actions of the teacher in the class and the like.
Optionally, a plurality of target teachers needing to collect teaching data and target courses corresponding to each target teacher are preset for collection.
Step S102, the teaching data is processed by using a teaching analysis model, and a teaching analysis result of the teacher is obtained.
After the required teaching data is collected, the teaching data can be processed through a pre-established teaching analysis model, the teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers, for example, whether the pre-class preparation of the teachers is sufficient or not through collected teaching courseware, whether the teachers speak in the classroom or not, and the like, and the classroom behaviors of the teachers are analyzed through collected teaching actions of the teachers in the classroom.
Step S103, determining teaching index data based on the teaching analysis result.
The acquired teaching data are processed by the computer equipment according to the same standard, so that the acquired teaching analysis result is unified and objective, and the accuracy of subsequent determination of teaching index data can be improved. Furthermore, the teaching index can be set according to the actual requirement, for example, corresponding teaching index data can be set according to the numerical value of the teaching analysis result.
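As a minimal sketch of this last step, the mapping from a numeric analysis result to index data could be a simple threshold table. The cutoffs and grade labels below are illustrative assumptions; the disclosure only states that index data can be set according to the numerical value of the analysis result.

```python
def index_from_score(value, thresholds=((90, "A"), (75, "B"), (60, "C"))):
    """Map a numeric teaching analysis result to an index grade.

    The cutoffs and grades are hypothetical placeholders, not taken
    from the disclosure.
    """
    for cutoff, grade in thresholds:
        if value >= cutoff:
            return grade
    return "D"
```

In practice each teaching index could carry its own threshold table, set according to the actual requirement as the text describes.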
According to the teaching index data determining method, the same standard processing is carried out on the collected teaching data through the preset teaching analysis model, so that objective and unified teaching analysis results are obtained, and the accuracy of the teaching index data determined later can be improved.
In this embodiment, a method for determining teaching index data is provided, which may be used in a notebook, a desktop, a teaching system, etc., fig. 2 is a flowchart of a method for determining teaching index data according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
Step S201, collecting teaching data of a teacher.
Specifically, the step S201 includes:
s2011, collecting video data, audio data and teaching material data of each lesson of each teacher.
Video data of each lesson of each teacher is collected through a camera (the video data comprising image frames and audio), or audio data of each lesson is collected separately through audio-recording equipment; teaching material data uploaded in advance by the teacher, such as the courseware and course outline of each lesson, is read directly.
Step S202, the teaching data is processed by using a teaching analysis model, and a teaching analysis result of the teacher is obtained.
The teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers.
Specifically, the step S202 includes:
Step S2021, performing target recognition on the video data, performing speech recognition and semantic analysis on the audio data, and reading the teaching data to obtain a data processing result.
Specifically, target recognition is performed on the video data to obtain a teacher recognition result, a student recognition result, a courseware recognition result and a blackboard writing recognition result. The teacher recognition result comprises a target detection result of the teacher, the teacher's walking track, line-of-sight distribution, action behaviors, and face recognition and emotion recognition results; the student recognition result comprises a target detection result of the student, the student's walking track, action behaviors, and the like.
Voice recognition and semantic analysis are performed on the audio data to obtain teacher voice results, student voice results, voice texts and keyword extraction results. The teacher voice result is the part of the audio data spoken by the teacher, and the student voice result is the part spoken by students; the voice text is the speech-to-text result, divided into teacher voice text and student voice text; keywords are preset as required, and any keyword matched in the audio data is recorded as a keyword extraction result. Optionally, an automatic speech recognition model processes the audio data, which is divided by time period to obtain the voice text of each period. Further, the voice text is split into sentences, the total number of sentences and the word count of each sentence are counted, and the audio duration corresponding to each sentence is obtained, for example through an audio processing library. Optionally, the total word count and the length of the voice audio are extracted, and the ratio of the total word count to the audio length is taken as the speech rate (usually in words per minute). Optionally, the real-time volume of the audio is calculated by an audio processing algorithm, such as a root mean square (RMS) algorithm, which estimates the overall volume by computing the root mean square of the audio signal, or a peak detection algorithm, which detects the peak volume. When the video data contains audio data, the audio data is first extracted from the video data and then processed.
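The speech-rate ratio and the RMS and peak volume measures described above can be sketched directly; these helpers are a plain reading of the formulas in the text, with units (words per minute, raw sample amplitude) as stated assumptions.

```python
import math

def speech_rate(total_words, audio_seconds):
    # ratio of total word count to audio length, expressed in words per minute
    return total_words / audio_seconds * 60.0

def rms_volume(samples):
    # root mean square of the audio signal, estimating overall volume
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def peak_volume(samples):
    # peak (maximum absolute) amplitude in the audio
    return max(abs(s) for s in samples)
```

For example, a 300-word utterance over 60 seconds of audio gives a speech rate of 300 words per minute.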
Illustratively, ASR (Automatic Speech Recognition) is employed, including hidden Markov models (HMMs) and end-to-end methods based on deep neural networks; both require an input-encode-decode-output process. The Whisper automatic speech recognition system may be employed when performing speech recognition based on deep neural networks.
Reading the teaching data to obtain the word number of the teaching data, the paragraph of the teaching data and the document size of the teaching data. The format of the instructional material data, such as word, ppt, etc., may also be read.
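Reading the word count, paragraphs and file size of a teaching material can be sketched as below. This assumes a plain-text material for simplicity; parsing Word or PPT files, as the text notes, would require format-specific libraries.

```python
import os

def document_stats(path):
    # Read a plain-text teaching material and report the statistics the
    # method extracts: word count, paragraph count, and file size.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # blank-line-separated blocks are treated as paragraphs (an assumption)
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return {
        "word_count": len(text.split()),
        "paragraph_count": len(paragraphs),
        "file_size_bytes": os.path.getsize(path),
    }
```

The same dictionary could be extended with the material's format (Word, PPT, etc.) read from the file extension.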
And step S2022, performing teaching analysis by using the data processing result to obtain a teaching analysis result.
Optionally, the teaching analysis includes line of sight analysis, sound analysis, language analysis, face analysis, emotion analysis, gesture analysis, medium analysis, scene analysis, behavior or performance association analysis, and other teaching analysis items can be set according to requirements.
Specifically, line-of-sight analysis is performed based on the teacher recognition result and the teacher voice result, the teacher voice result is used for dividing each stage of the class, and line-of-sight analysis is performed based on the teacher recognition result after the class stage is divided; performing sound analysis based on the teacher voice result, such as analyzing speech speed, volume, etc.; language analysis is carried out based on the teacher voice result, the student voice result, the voice text and the keyword extraction result, for example, the teacher voice result and the student voice result are firstly divided, then the corresponding voice text is extracted, and the corresponding keyword extraction result is obtained by matching with a preset keyword word library; carrying out emotion analysis based on teacher recognition results; performing face analysis and gesture analysis based on teacher recognition results and student recognition results; performing medium analysis based on the number of words of the teaching data, the paragraph of the teaching data and the file size of the teaching data; scene analysis is carried out based on courseware recognition results and blackboard writing recognition results; and performing behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result, for example, judging whether the behaviors of the teacher and the student have association or not according to the extracted keyword.
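The keyword-matching step in the language analysis above can be sketched as a scan of the transcript sentences against the preset keyword library. Substring matching per sentence is an assumption; the disclosure only says keywords matched in the audio text are recorded as the extraction result.

```python
def extract_keywords(sentences, keyword_library):
    # Record each preset keyword found in the transcript, together with
    # the index of the sentence it appeared in.
    hits = []
    for i, sentence in enumerate(sentences):
        for kw in keyword_library:
            if kw in sentence:
                hits.append((kw, i))
    return hits
```

The recorded hits can then feed the behavior or performance association analysis, which checks whether teacher and student behaviors co-occur with the extracted keywords.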
Optionally, the teaching analysis results include a resource preparation analysis result, a lesson preparation full-degree analysis result, a classroom structure analysis result, a new-and-old linkage analysis result, a motivation analysis result, a question type analysis result, a volume and speech rate analysis result, a positive term analysis result, a spoken term analysis result, a teaching gesture analysis result, a teaching writing analysis result, a question response analysis result, an introduction attraction analysis result, a line-of-sight distribution analysis result, a walking track analysis result and a multiple evaluation analysis result; other teaching analysis result items can be set as required.
Medium analysis is performed based on the word count, paragraphs and file size of the teaching material data to obtain a resource preparation analysis result. Specifically, it is first judged whether the teacher has uploaded teaching materials; if so, the number of material types and the word count, paragraph distribution and file size of each material are analyzed as the resource preparation analysis result. Illustratively, the material types include Word, PDF, Excel and PPT materials, and the number of materials of each type is counted first. Since Word materials are easy to read, their content and file size are read first, and page count reading, word count judgment, paragraph format judgment, and font and font size analysis are performed on the content. PDF materials are then converted to Word format for file size reading, page count reading, word count judgment, paragraph format judgment, and font and font size analysis; for PDF materials that cannot be converted, only file size and page count are read. For Excel and PPT materials, which are difficult to parse, page count and file size are read. FIG. 3 shows an exemplary schematic diagram of a resource preparation analysis result according to an embodiment of the present application.
Gesture analysis is performed based on the teacher recognition result to obtain a lesson preparation sufficiency analysis result. Fig. 4 is a schematic diagram of a lesson preparation sufficiency analysis result according to an embodiment of the present application; as shown in fig. 4, the number of times the teacher lowers the head and the duration of each, and the number of times the teacher turns around and the duration of each, are counted based on the teacher recognition result as the lesson preparation sufficiency analysis result. Specifically, target detection is performed on the teacher at a preset interval (for example, 1 second) to locate the teacher, and head-lowering detection is performed: if a lowered head is detected, the head-lowering count is incremented by one and timing starts until the action ends; the duration is recorded and displayed visually in a chart. Likewise, target detection is performed on the teacher at the preset interval to locate the teacher, and the teacher's posture is classified: if a turn-around is detected, the turn-around count is incremented by one and timing starts until the action ends; the duration is recorded and displayed visually in a chart.
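Counting events and their durations from per-interval detections, as described above for head-lowering and turning around, reduces to run-length counting over a boolean series. The 1-second sampling interval follows the example in the text.

```python
def count_events(detections, interval_s=1.0):
    """Count detection runs and their durations.

    detections: booleans sampled once per interval_s seconds, e.g. the
    per-second head-lowered flag. Returns (event_count, durations).
    """
    durations = []
    run = 0
    for flag in detections:
        if flag:
            run += 1
        elif run:
            durations.append(run * interval_s)
            run = 0
    if run:  # close a run that extends to the end of the recording
        durations.append(run * interval_s)
    return len(durations), durations
```

The same helper serves both counters; only the upstream detector (head-lowering vs. posture classification) differs.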
And carrying out emotion analysis based on the teacher recognition result, and carrying out face analysis and gesture analysis based on the teacher recognition result and the student recognition result, to obtain a teaching gesture analysis result. Specifically, the video data is processed, the part above the platform is cropped out, and target recognition is performed on the teacher to obtain a teacher recognition result; the teacher recognition result is then subjected to skeleton analysis through a pose estimation algorithm to obtain the teacher's skeleton key points, and the posture is judged according to the relative positions of the skeleton key points. For example, the direction the teacher faces is determined from the relative positions of the shoulder key points and leg key points, and the proportion of time spent facing each direction is counted. Fig. 5 is a schematic diagram illustrating gesture analysis according to an embodiment of the present application. As shown in fig. 5, gesture analysis is performed based on the teacher recognition result to obtain, as the gesture analysis result, the duration proportions of the teacher facing the students, facing away from the students, and standing sideways to the students during the class.
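A minimal sketch of judging orientation from shoulder key points, under the assumption that the camera sits at the back of the classroom and that the pose estimator labels left/right shoulders consistently (the threshold and the camera geometry are illustrative assumptions, not the application's actual rule):

```python
from collections import Counter

def facing_direction(l_shoulder_x, r_shoulder_x, min_width=30):
    """Heuristic: with the camera behind the students, the teacher's left
    shoulder appearing left of the right shoulder means facing the
    students; reversed means facing the board; shoulders nearly
    overlapping horizontally means standing sideways."""
    width = r_shoulder_x - l_shoulder_x
    if abs(width) < min_width:
        return "side"
    return "students" if width > 0 else "board"

def direction_proportions(frames):
    """frames: iterable of (l_shoulder_x, r_shoulder_x) per sampled frame.
    Returns the proportion of frames spent facing each direction."""
    counts = Counter(facing_direction(l, r) for l, r in frames)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}
```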
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a classroom structure analysis result. Language analysis is performed on the voice text corresponding to the teacher voice result to obtain a keyword extraction result, so as to divide the classroom structure into three stages: a beginning stage, a core stage and an ending stage, the beginning stage lasting, for example, 5 minutes. Specifically, a large number of standard course samples are collected in advance, phrases corresponding to the beginning, core and ending stages are extracted, and synonym expansion is performed on the phrases to form a classroom structure phrase library. When language analysis is performed on the teacher voice result, the voice text and the keyword extraction result, phrase matching can be performed on the teacher's classroom speech based on the classroom structure phrase library, and a time point at which phrase matching succeeds is selected near a preset time point as the stage switching time point. For example, the switching point between the beginning stage and the core stage is preset to 5 minutes; phrase matching is performed near the 5-minute mark, and if a phrase matches at 5 minutes 5 seconds, then 5 minutes 5 seconds is taken as the switching point between the beginning stage and the core stage. If no match succeeds, it is judged that the class does not use the three-stage classroom structure. Exemplarily, fig. 6 shows a schematic diagram of a classroom structure analysis result according to an embodiment of the present application.
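The "select a match near the preset time point" step can be sketched as a nearest-neighbor pick within a window (the 60-second window is an illustrative assumption; the patent does not specify one):

```python
def stage_switch_time(match_times_s, preset_s, window_s=60):
    """Among timestamps (seconds) where a phrase from the class-structure
    phrase library matched, choose the one closest to the preset switch
    point (e.g. 5 min = 300 s) within +/- window_s. Returns None if no
    match falls in the window, in which case the lesson is judged not to
    use the three-stage structure."""
    candidates = [t for t in match_times_s if abs(t - preset_s) <= window_s]
    return min(candidates, key=lambda t: abs(t - preset_s)) if candidates else None
```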
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a new and old engagement analysis result. Firstly, the voice text corresponding to the teacher voice result is matched against a pre-established new-and-old engagement keyword library to obtain the keyword extraction result corresponding to the new-and-old engagement analysis, which is taken as the new and old engagement analysis result. Keywords for the new-and-old engagement analysis include "review old knowledge", "review", "consolidate", "next", etc. Fig. 7 is a schematic diagram illustrating the results of the new and old engagement analysis and the motivation analysis according to an embodiment of the present application.
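Matching the transcript against a keyword library, as used here and in the motivation analysis that follows, reduces to counting hits per keyword (the keywords shown are the document's illustrative examples):

```python
import re

def match_keywords(transcript: str, keyword_lib) -> dict:
    """Match the lesson transcript against a pre-built keyword library and
    return per-keyword hit counts; keywords with zero hits are omitted."""
    return {kw: len(re.findall(re.escape(kw), transcript))
            for kw in keyword_lib if kw in transcript}
```

The same routine serves the new-and-old engagement analysis and, with a different library, the motivation analysis.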
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a motivation analysis result. Firstly, the voice text corresponding to the teacher voice result is matched against a pre-established motivation keyword library to obtain the keyword extraction result corresponding to the motivation analysis, which is taken as the motivation analysis result. Keywords for the motivation analysis include "think of", "imagine", "experience", "group discussion", "group communication", "group report", and the like.
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a question type analysis result. Firstly, language analysis is performed on the voice text corresponding to the teacher voice result to obtain a keyword extraction result, so as to extract the teacher's question sentences in class; the question sentences are classified into a memory type, an understanding type, an analysis type, an application type, an evaluation type and a creation type, and the number of question sentences of each type is counted as the question type analysis result. Specifically, question sentences are screened out through regular expression matching on questioning keywords such as "what" and "how many" and their similar words; then, according to the memory, understanding, analysis, application, evaluation and creation classifications, matching is performed with different keyword sets and keyword weights respectively; finally, the number of questions in each classification and the corresponding time periods are obtained and displayed visually.
Illustratively, question sentences include Chinese and English. The memory type includes "Can you tell / Do you still remember / Do you remember / Can you name / Can you state / Can you list / Can you identify / Would you please repeat / Can you define" and words such as "say / write / recognize / select / identify / match / recall / remember"; the understanding type includes "Can you explain / Can you describe / What happened / How does this relate to you / Can you paraphrase / Can you generalize / What can you / Can you infer" and words such as "read / answer / solve / exemplify / narrate / describe / compare / interpret / convert / predict / infer / summarize / sort"; the analysis type includes "Can you analyze / Can you compare / Can you classify / How is" and words such as "compare / analyze / why / which factors / what principle / what relation / find the type / prove"; the application type includes "Can you give us / Can you apply / Can you illustrate / Can you report" and words such as "apply / practice"; the evaluation type includes "What do you think" and words such as "criticize / judge / evaluate / grade / prove / certify / debate / which opinion / which standard / which is more important / which is more relevant / which is more reliable / where is the error"; and the creation type includes "Can you assess / Can you evaluate / Can you debate / Can you recommend" and words such as "foresee / compose / summarize / generate / plan / design / construct / develop / produce / propose / invent / what if / how could we prove". Fig. 8 is a schematic diagram illustrating a question type analysis result according to an embodiment of the present application.
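The screening-then-weighted-classification pipeline might be sketched as follows; the keyword sets and weights are illustrative assumptions, far smaller than a real library, and a simple trailing "?" check stands in for the regular-expression screening:

```python
# Illustrative per-category weighted keywords (a real library would be
# much larger and bilingual, per the lists above).
CATEGORY_KEYWORDS = {
    "memory":        {"can you name": 2, "do you remember": 2, "list": 1},
    "understanding": {"can you explain": 2, "describe": 1, "summarize": 1},
    "analysis":      {"why": 1, "compare": 2, "classify": 2},
    "application":   {"can you apply": 2, "illustrate": 1},
    "evaluation":    {"what do you think": 2, "judge": 1, "evaluate": 2},
    "creation":      {"can you design": 2, "invent": 2, "propose": 1},
}

def classify_questions(sentences):
    """Keep only question sentences, score each against every category's
    weighted keywords, and count it under the highest-scoring category."""
    counts = {c: 0 for c in CATEGORY_KEYWORDS}
    for s in sentences:
        if not s.strip().endswith("?"):
            continue  # not a question sentence; skip
        low = s.lower()
        scores = {c: sum(w for kw, w in kws.items() if kw in low)
                  for c, kws in CATEGORY_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            counts[best] += 1
    return counts
```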
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a positive term analysis result. Firstly, language analysis is performed on the voice text corresponding to the teacher voice result to obtain a keyword extraction result, so as to extract the teacher's encouraging terms in class and the corresponding number of utterances, such as "awesome", "applause" and "well done" and their counts. The encouraging terms can be set according to actual demands; for example, after the classroom terms of each teacher are counted, each teacher's habitual encouraging terms are analyzed over time and updated into the encouraging term library. Specifically, the text spoken by the teacher is obtained through a speech-to-text model, then matched against the positive term keyword library, and the high-frequency terms are screened out for display. Illustratively, the positive term keywords include "great job / good job / great / excellent / well done / amazing / good point / really creative / innovative / great idea / wonderful idea / great talent" and "really smart / you can do it / try again / awesome / praise / applause / you're the best" and so on. Exemplarily, FIG. 9 illustrates a schematic diagram of a positive term analysis result according to an embodiment of the present application.
And carrying out language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain a spoken-phrase analysis result. Firstly, language analysis is performed on the voice text corresponding to the teacher voice result to obtain a keyword extraction result, so as to extract the teacher's spoken filler phrases in class and the corresponding number of utterances, such as "that" and "that is" and their counts. The filler phrases can be set according to actual demands; for example, after the classroom terms of each teacher are counted, each teacher's filler-phrase habits are analyzed over time and updated into the filler-phrase library. Specifically, candidate filler words and verbal tics in the teacher's voice text are first matched with a pre-established filler-phrase library through regular expressions to screen out suspected verbal tics, which are then sorted by word frequency from high to low and displayed visually. Illustratively, the filler words include "that", "also", "one", "another", "the" and the like. Exemplarily, FIG. 10 shows a schematic diagram of a spoken-phrase analysis result according to an embodiment of the present application.
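The match-then-rank step above reduces to a frequency count sorted high to low (the library entries are illustrative):

```python
import re
from collections import Counter

def filler_word_ranking(transcript: str, filler_lib):
    """Count each library filler word in the transcript and return
    (word, count) pairs sorted by frequency from high to low, ready for
    visual display; zero-hit words are dropped."""
    counts = Counter()
    for w in filler_lib:
        n = len(re.findall(re.escape(w), transcript))
        if n:
            counts[w] = n
    return counts.most_common()
```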
And performing sound analysis based on the teacher voice result, and performing language analysis based on the teacher voice result, the voice text and the keyword extraction result, to obtain a volume speech rate analysis result. Firstly, language analysis is performed based on the teacher voice result, the voice text and the keyword extraction result; the teacher's volume-change curve and speech-rate-change curve over the class are extracted, and the average volume and average speech rate are counted as the volume speech rate analysis result. Fig. 11 is a schematic diagram illustrating a volume speech rate analysis result according to an embodiment of the present application.
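One plausible way to build the volume-change curve (an assumption; the patent does not specify the signal measure) is per-window RMS over the audio samples; the speech-rate curve would come analogously from per-window word counts in the transcript:

```python
import math

def volume_curve(samples, sr, window_s=1.0):
    """Compute per-window RMS volume over a mono audio signal (a list of
    float samples at sample rate `sr`), returning the curve and its mean
    as the average volume."""
    win = int(sr * window_s)
    curve = []
    for i in range(0, len(samples) - win + 1, win):
        frame = samples[i:i + win]
        curve.append(math.sqrt(sum(x * x for x in frame) / win))
    avg = sum(curve) / len(curve) if curve else 0.0
    return curve, avg
```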
And performing scene analysis based on the courseware recognition result and the blackboard-writing recognition result to obtain a teaching writing analysis result. Specifically, the video data is processed to obtain a courseware recognition result and a blackboard-writing recognition result; the characters on the blackboard writing are recognized and counted by a character recognition algorithm, and the font size, line spacing, inclination, board-surface structure and the like of the blackboard writing are extracted. The character recognition algorithm can adopt an OCR (optical character recognition) algorithm based on deep learning; font-size detection can adopt an object detection technique from image processing, such as YOLO (You Only Look Once) or Faster R-CNN; line-spacing extraction can adopt an image segmentation algorithm, detecting the interval between lines on the blackboard writing based on a text-line detection method such as the MSER (Maximally Stable Extremal Regions) algorithm or the Canny edge detection algorithm; inclination analysis can adopt a character-direction detection algorithm, such as a Hough-transform-based method; and board-surface structure analysis can adopt an image feature extraction algorithm, such as the SURF (Speeded-Up Robust Features) algorithm. A writing start timestamp and a writing end timestamp are recorded through the timestamps of the video frames, and the blackboard-writing speed is calculated by combining the length of the written content with the time interval. Exemplarily, fig. 12 shows a schematic diagram of a teaching writing analysis result according to an embodiment of the present application. As shown in fig. 12, scene analysis is performed based on the courseware recognition result and the blackboard-writing recognition result to obtain, as the teaching writing analysis result, the number of blackboard writings, the blackboard-writing speed, the ratio of the blackboard-writing area to the total blackboard area, the line spacing of the blackboard-writing fonts, the inclination of the blackboard-writing fonts, the board-surface structure of the blackboard writing (for example, a text structure or an image structure), and the like.
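The writing-speed and area-ratio indicators follow from the recognition outputs by simple arithmetic; the sketch below assumes the timestamps, character count and pixel areas have already been produced by the recognition steps described above:

```python
def board_writing_stats(start_ts, end_ts, char_count,
                        writing_area_px, board_area_px):
    """Derive writing speed (recognized characters per second between the
    writing start/end frame timestamps) and the board coverage ratio
    (writing pixel area over total board pixel area)."""
    duration = end_ts - start_ts
    return {
        "writing_speed_chars_per_s": char_count / duration if duration > 0 else 0.0,
        "board_coverage_ratio": writing_area_px / board_area_px,
    }
```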
And carrying out language analysis based on the teacher voice result, the student voice result, the voice text and the keyword extraction result to obtain a question response analysis result. Specifically, based on the question words contained in the keyword extraction result corresponding to the teacher voice result, the teacher's question sentence is first obtained, and the preceding and following sentences are analyzed to determine whether they contain a student voice result answering the question. If so, semantic analysis is performed on the teacher's question sentence; a preliminary answer to the question is then obtained through a pre-established language model; the preliminary answer is matched with the student's answer through the language model, and the matching degree between the two is obtained as the question response analysis result.
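The application matches answers with a language model; as a crude, hypothetical stand-in that illustrates the matching-degree idea, a word-overlap (Jaccard) score in [0, 1] can be computed between the model's preliminary answer and the student's answer:

```python
def answer_match_degree(reference: str, student: str) -> float:
    """Jaccard overlap between the word sets of the reference (preliminary)
    answer and the student's answer. A simplistic proxy for the
    language-model matching step, not the method itself."""
    a, b = set(reference.lower().split()), set(student.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0
```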
And carrying out gesture analysis based on the student identification result to obtain an introduction attraction analysis result. Specifically, the video data is first screened to remove the video data corresponding to exercise classes (which need no introduction); target detection is then performed on the screened video data to obtain a student identification result, and gesture analysis is performed to obtain the students' head-raising rate in class, which is used as the introduction attraction analysis result reflecting student concentration. For example, target detection is performed by an object detection algorithm such as R-CNN, OverFeat, Fast/Faster R-CNN, SSD or YOLO to obtain the student identification result; the student identification result is then input into an image classification model (for example, GoogLeNet) to judge whether each student's head is raised; finally, the students' head-raising proportion is output as the introduction attraction analysis result.
And performing face analysis based on the teacher recognition result, and performing line-of-sight analysis based on the teacher recognition result and the teacher voice result, to obtain a line-of-sight distribution analysis result. Firstly, the audio data is processed with an automatic speech recognition model to divide the class into time periods and obtain the teacher voice result for each period; then, face analysis is performed on the teacher recognition result through a gaze estimation (Gaze Estimation) algorithm and a gaze-following target estimation (Gaze Following) algorithm to obtain a scatter plot of the teacher's line-of-sight distribution over the students and the teacher's emotion changes in each time period of the class; the line-of-sight distribution analysis result is obtained by combining the scatter plot and the emotion changes. Fig. 13 is a schematic diagram illustrating a line-of-sight distribution analysis result according to an embodiment of the present application.
And tracking trajectories based on the teacher identification result and the student identification result to obtain a walking track analysis result. Firstly, the video data is processed through a multi-target tracking algorithm to obtain scatter plots of the teacher's walking track and of the students' positions in each time period of the class; the correlation between the two is then analyzed, for example whether the teacher's walking track covers students in every direction, and whether the teacher gives corresponding walking feedback when a student comes to the front to answer a question, as the walking track analysis result. By way of example, the multi-target tracking algorithm employed may be the DeepSORT, ByteTrack or FairMOT algorithm. Fig. 14 is a schematic diagram illustrating a walking track analysis result according to an embodiment of the present application.
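The "covers students in every direction" check might be sketched, under the assumption that the tracker yields floor-plane (x, y) positions, as a grid-coverage computation (the grid size and coordinate convention are illustrative assumptions):

```python
def zone_coverage(track, room_w, room_d, cols=3, rows=2):
    """Divide the classroom floor (width room_w, depth room_d) into a
    cols x rows grid, mark every zone the teacher's walking track enters,
    and return the set of visited zones plus the coverage ratio."""
    visited = set()
    for x, y in track:
        col = min(int(x / room_w * cols), cols - 1)  # clamp boundary points
        row = min(int(y / room_d * rows), rows - 1)
        visited.add((col, row))
    return visited, len(visited) / (cols * rows)
```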
And performing behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result to obtain a multi-element evaluation analysis result. Firstly, scene capture is performed on the video data through a multi-target tracking algorithm and a target detection algorithm to obtain the teacher identification result and student identification result corresponding to a teacher question and a student standing up to answer; the audio data of the corresponding time is processed with an automatic speech recognition model to obtain the keyword extraction result. Behavior or performance association analysis is then performed on the teacher identification result, the student identification result and the keyword extraction result to obtain the number and time points of teacher-student interactions in the class. For each interaction, it is checked whether the teacher's response to the student contains correctness keywords (such as "correct", "very good", "to be improved") or depth keywords (such as "in depth", "thorough"); if so, the interaction is judged to be a multi-element interaction and recorded, and the number of multi-element interactions is counted as the multi-element evaluation analysis result. Exemplarily, fig. 15 shows a schematic diagram of a multi-element evaluation analysis result according to an embodiment of the present application.
Step S203, determining teaching index data based on the teaching analysis result.
Optionally, video data, audio data and teaching data of a plurality of lessons of each teacher are collected, the teaching analysis results corresponding to those lessons are obtained respectively, the teaching index data of the teacher is determined according to those teaching analysis results, and finally a teaching improvement suggestion for the teacher is generated based on the teaching index data. The interval over which teaching analysis is performed (how long a period, or how many lessons) can be configured as required.
The calculation rule for the teaching index data can likewise be specified by the user, so as to score the teaching analysis results and thereby determine the teaching index data. It should be noted that the present application mainly processes the collected teaching data by a uniform standard to obtain unified teaching analysis results, from which the teaching index data can then be determined; the specific rule for determining the teaching index data is set according to requirements, and the present application is not limited herein. Illustratively, the teaching index data is determined by the following table:
Table 1: and a teaching index data determination table.
Furthermore, the teaching improvement suggestions for the teacher can be set according to requirements, for example by pushing corresponding learning resources to the teacher, and the like.
Furthermore, the teaching analysis results of a plurality of the teacher's lessons can be stored and processed to form a teaching-ability growth file for the teacher, providing education administrators with an intuitive and concrete portrait of each teacher's teaching style, assisting the coordinated development and overall improvement of the teaching staff, and providing strong support for the scientific and efficient conduct of activities such as teaching, training and evaluation.
According to the teaching index data determining method, the collected teaching data is processed by a uniform standard through the preset teaching analysis model to obtain objective and unified teaching analysis results, which can improve the accuracy of the teaching index data determined subsequently.
This embodiment also provides a teaching index data determining device, which is used for implementing the foregoing embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a teaching index data determining apparatus, as shown in fig. 16, including:
a data acquisition module 1601, configured to acquire teaching data of a teacher;
The teaching analysis module 1602 is configured to process the teaching data by using a teaching analysis model to obtain a teaching analysis result of the teacher; the teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers;
a data determining module 1603, configured to determine teaching index data based on the teaching analysis result.
In an alternative embodiment, the data acquisition module is further configured to: and collecting video data, audio data and teaching material data of each lesson of each teacher.
In an alternative embodiment, the teaching analysis module is further configured to: performing target recognition on the video data, performing voice recognition and semantic analysis on the audio data, and reading teaching data to obtain a data processing result; and carrying out teaching analysis by using the data processing result to obtain a teaching analysis result.
In an alternative embodiment, the teaching analysis module is further configured to: performing target recognition on the video data to obtain a teacher recognition result, a student recognition result, a courseware recognition result and an blackboard writing recognition result; performing voice recognition and semantic analysis on the audio data to obtain teacher voice results, student voice results, voice texts and keyword extraction results; reading the teaching data to obtain the word number of the teaching data, the paragraph of the teaching data and the document size of the teaching data.
In an alternative embodiment, the educational analysis includes line of sight analysis, sound analysis, language analysis, face analysis, emotion analysis, gesture analysis, medium analysis, scene analysis, behavioral or performance association analysis; the teaching analysis module is also used for: performing line-of-sight analysis based on the teacher recognition result and the teacher voice result; performing sound analysis based on the teacher voice result; language analysis is carried out based on teacher voice results, student voice results, voice texts and keyword extraction results; carrying out emotion analysis based on teacher recognition results; performing face analysis and gesture analysis based on teacher recognition results and student recognition results; performing medium analysis based on the number of words of the teaching data, the paragraph of the teaching data and the file size of the teaching data; scene analysis is carried out based on courseware recognition results and blackboard writing recognition results; and performing behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result.
In an alternative embodiment, the teaching analysis results include a resource preparation analysis result, a lesson preparation sufficiency analysis result, a classroom structure analysis result, a new and old engagement analysis result, a motivation analysis result, a question type analysis result, a volume speech rate analysis result, a positive term analysis result, a spoken-phrase analysis result, a teaching gesture analysis result, a teaching writing analysis result, a question response analysis result, an introduction attraction analysis result, a line-of-sight distribution analysis result, a walking track analysis result, and a multi-element evaluation analysis result; the teaching analysis module is further configured to: perform medium analysis based on the teaching data word count, the teaching data paragraphs and the teaching data file size to obtain the resource preparation analysis result; perform gesture analysis based on the teacher identification result to obtain the lesson preparation sufficiency analysis result; perform emotion analysis based on the teacher recognition result, and face analysis and gesture analysis based on the teacher recognition result and the student recognition result, to obtain the teaching gesture analysis result; perform language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain the classroom structure analysis result, the new and old engagement analysis result, the motivation analysis result, the question type analysis result, the positive term analysis result and the spoken-phrase analysis result; perform sound analysis based on the teacher voice result, and language analysis based on the teacher voice result, the voice text and the keyword extraction result, to obtain the volume speech rate analysis result; perform scene analysis based on the courseware recognition result and the blackboard-writing recognition result to obtain the teaching writing analysis result; perform language analysis based on the teacher voice result, the student voice result, the voice text and the keyword extraction result to obtain the question response analysis result; perform gesture analysis based on the student identification result to obtain the introduction attraction analysis result; perform face analysis based on the teacher recognition result, and line-of-sight analysis based on the teacher recognition result and the teacher voice result, to obtain the line-of-sight distribution analysis result; track trajectories based on the teacher identification result and the student identification result to obtain the walking track analysis result; and perform behavior or performance association analysis based on the teacher identification result, the student identification result and the keyword extraction result to obtain the multi-element evaluation analysis result.
In an optional implementation manner, the device further comprises a suggestion generation module, configured to collect video data, audio data and teaching data of a plurality of lessons of each teacher and obtain the teaching analysis results corresponding to the plurality of lessons respectively; determine the teaching index data of the teacher according to the teaching analysis results corresponding to the plurality of lessons; and generate a teaching improvement suggestion for the teacher based on the teacher's teaching index data.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The teaching index data determining device in this embodiment is presented in the form of functional units, where a unit may be an ASIC (Application-Specific Integrated Circuit), a processor and memory executing one or more pieces of software or firmware, and/or other devices that can provide the above functions.
The embodiment of the invention also provides a computer device which is provided with the teaching index data determining device shown in the figure 16.
Referring to fig. 17, fig. 17 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 17, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 17.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in implementing the above embodiments.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device further comprises input means 30 and output means 40. The processor 10, memory 20, input device 30, and output device 40 may be connected by a bus or other means, for example in fig. 17.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus, such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 40 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode (LED) displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments described above may be implemented in hardware or firmware, or as computer code that can be recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium and downloaded over a network to be stored on a local storage medium, so that the method described herein can be carried out by software stored on a storage medium and executed by a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above kinds of memory. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for determining teaching index data, the method comprising:
Collecting teaching data of a teacher;
Processing the teaching data by using a teaching analysis model to obtain a teaching analysis result of the teacher, wherein the teaching analysis model is used for analyzing the teacher's pre-class preparation and classroom behavior;
And determining teaching index data based on the teaching analysis result.
2. The method of claim 1, wherein the collecting teaching data of the teacher comprises:
Collecting video data, audio data and teaching material data of each lesson of each teacher.
3. The method of claim 2, wherein processing the teaching data using a teaching analytical model comprises:
Performing target recognition on the video data, performing voice recognition and semantic analysis on the audio data, and reading the teaching material data, to obtain a data processing result;
Performing teaching analysis by using the data processing result to obtain a teaching analysis result.
4. A method according to claim 3, wherein said performing target recognition on said video data, performing voice recognition and semantic analysis on said audio data, and reading the teaching material data to obtain a data processing result comprises:
Performing target recognition on the video data to obtain a teacher recognition result, a student recognition result, a courseware recognition result and a blackboard-writing recognition result;
Performing voice recognition and semantic analysis on the audio data to obtain a teacher voice result, a student voice result, a voice text and a keyword extraction result;
Reading the teaching material data to obtain the word count of the teaching material data, the paragraphs of the teaching material data and the file size of the teaching material data.
5. The method of claim 4, wherein the teaching analysis comprises line-of-sight analysis, sound analysis, language analysis, face analysis, emotion analysis, gesture analysis, media analysis, scene analysis, and behavior-or-performance association analysis;
the performing teaching analysis by using the data processing result comprises:
Performing line-of-sight analysis based on the teacher recognition result and the teacher voice result;
Performing sound analysis based on the teacher voice result;
Performing language analysis based on the teacher voice result, the student voice result, the voice text and the keyword extraction result;
Performing emotion analysis based on the teacher recognition result;
Performing face analysis and gesture analysis based on the teacher recognition result and the student recognition result;
Performing media analysis based on the word count of the teaching material data, the paragraphs of the teaching material data and the file size of the teaching material data;
Performing scene analysis based on the courseware recognition result and the blackboard-writing recognition result;
Performing behavior-or-performance association analysis based on the teacher recognition result, the student recognition result and the keyword extraction result.
6. The method of claim 5, wherein the teaching analysis results include a resource preparation analysis result, a lesson preparation thoroughness analysis result, a classroom structure analysis result, a new-and-old knowledge linkage analysis result, a motivation analysis result, a question type analysis result, a volume and speech rate analysis result, an active wording analysis result, a spoken wording analysis result, a teaching posture analysis result, a teaching writing analysis result, a question response analysis result, a lead-in attraction analysis result, a line-of-sight distribution analysis result, a walking track analysis result and a multi-element evaluation analysis result;
the obtaining of the teaching analysis result comprises:
Performing media analysis based on the word count, the paragraphs and the file size of the teaching material data to obtain the resource preparation analysis result;
Performing gesture analysis based on the teacher recognition result to obtain the lesson preparation thoroughness analysis result;
Performing emotion analysis based on the teacher recognition result, and performing face analysis and gesture analysis based on the teacher recognition result and the student recognition result, to obtain the teaching posture analysis result;
Performing language analysis based on the teacher voice result, the voice text and the keyword extraction result to obtain the classroom structure analysis result, the new-and-old knowledge linkage analysis result, the motivation analysis result, the question type analysis result, the active wording analysis result and the spoken wording analysis result;
Performing sound analysis based on the teacher voice result, and performing language analysis based on the teacher voice result, the voice text and the keyword extraction result, to obtain the volume and speech rate analysis result;
Performing scene analysis based on the courseware recognition result and the blackboard-writing recognition result to obtain the teaching writing analysis result;
Performing language analysis based on the teacher voice result, the student voice result, the voice text and the keyword extraction result to obtain the question response analysis result;
Performing gesture analysis based on the student recognition result to obtain the lead-in attraction analysis result;
Performing face analysis based on the teacher recognition result, and performing line-of-sight analysis based on the teacher recognition result and the teacher voice result, to obtain the line-of-sight distribution analysis result;
Performing trajectory tracking based on the teacher recognition result and the student recognition result to obtain the walking track analysis result;
Performing behavior-or-performance association analysis based on the teacher recognition result, the student recognition result and the keyword extraction result to obtain the multi-element evaluation analysis result.
7. The method according to claim 1, wherein the method further comprises:
Collecting video data, audio data and teaching material data of a plurality of lessons of each teacher, and obtaining teaching analysis results corresponding to the plurality of lessons respectively;
Determining the teaching index data of the teacher according to the teaching analysis results corresponding to the plurality of lessons;
Generating a teaching improvement suggestion for the teacher based on the teaching index data of the teacher.
8. A teaching index data determination apparatus, characterized in that the apparatus comprises:
The data acquisition module is used for acquiring teaching data of a teacher;
The teaching analysis module is used for processing the teaching data by using a teaching analysis model to obtain a teaching analysis result of the teacher; the teaching analysis model is used for analyzing pre-class preparation and classroom behaviors of teachers;
The data determining module is used for determining teaching index data based on the teaching analysis result.
9. A computer device, comprising:
A memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the teaching index data determination method of any of claims 1-7.
10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the teaching index data determination method of any of claims 1 to 7.
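For concreteness, the data flow recited in claims 1-3 and 7 (collect per-lesson data, run the teaching analysis model, aggregate the per-lesson results into teaching index data) can be sketched as follows. This is an illustrative sketch only: every class, function and metric name here is hypothetical and not part of the claimed invention, and the recognition and analysis steps of the teaching analysis model are reduced to toy computations.

```python
# Hypothetical sketch of the claimed pipeline; not the patented implementation.
from dataclasses import dataclass, field


@dataclass
class TeachingData:
    """Per-lesson data as collected in claim 2 (names are illustrative)."""
    video_frames: list      # video data of the lesson
    audio_segments: list    # audio data, here mocked as transcribed strings
    material_text: str      # teaching material data


@dataclass
class AnalysisResult:
    """Stand-in for a teaching analysis result: metric name -> value."""
    scores: dict = field(default_factory=dict)


def process_teaching_data(data: TeachingData) -> AnalysisResult:
    """Stand-in for the 'teaching analysis model' of claim 1.

    Real target recognition and voice recognition are replaced by toy
    computations on the mocked inputs.
    """
    result = AnalysisResult()
    # Media analysis stand-in (claim 5): word count of the teaching material,
    # normalised to [0, 1] against an arbitrary 100-word reference.
    words = data.material_text.split()
    result.scores["resource_preparation"] = min(1.0, len(words) / 100)
    # Speech-rate analysis stand-in: average words per audio segment.
    if data.audio_segments:
        result.scores["speech_rate"] = sum(
            len(seg.split()) for seg in data.audio_segments
        ) / len(data.audio_segments)
    return result


def determine_index_data(results: list) -> dict:
    """Aggregate per-lesson analysis results into teaching index data
    (claim 7: several lessons per teacher), here by simple averaging."""
    collected = {}
    for r in results:
        for name, value in r.scores.items():
            collected.setdefault(name, []).append(value)
    return {name: sum(v) / len(v) for name, v in collected.items()}
```

Under these assumptions, two lessons would be analysed independently and their metric values averaged into a single per-teacher index, which is the shape of aggregation claim 7 describes; how the patented model actually weights or combines the sixteen analysis results of claim 6 is not specified at this level.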
CN202410249374.0A 2024-03-05 2024-03-05 Teaching index data determining method, device, computer equipment and medium Pending CN117994098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410249374.0A CN117994098A (en) 2024-03-05 2024-03-05 Teaching index data determining method, device, computer equipment and medium


Publications (1)

Publication Number Publication Date
CN117994098A true CN117994098A (en) 2024-05-07

Family

ID=90901335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410249374.0A Pending CN117994098A (en) 2024-03-05 2024-03-05 Teaching index data determining method, device, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN117994098A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491890A (en) * 2017-08-29 2017-12-19 广州思涵信息科技有限公司 One kind can quantify Classroom Teaching Quality Assessment system and method
CN111681143A (en) * 2020-04-27 2020-09-18 平安国际智慧城市科技股份有限公司 Multi-dimensional analysis method, device, equipment and storage medium based on classroom voice
WO2020214316A1 (en) * 2019-04-19 2020-10-22 Microsoft Technology Licensing, Llc Artificial intelligence-based generation of event evaluation report
CN115239527A (en) * 2022-06-27 2022-10-25 重庆市科学技术研究院 Teaching behavior analysis system for teaching characteristic fusion and modeling based on knowledge base


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIN Jisheng; LIU Zehua; LI Hao: "Research on the Architecture and Key Technologies of a Big Data Collection and Analysis Platform for Classroom Teaching" (课堂教学大数据采集与分析平台架构及关键技术研究), Journal of 广东轻工职业技术学院, no. 01, 30 March 2020 (2020-03-30), pages 15 - 21 *

Similar Documents

Publication Publication Date Title
US10546508B2 (en) System and method for automated literacy assessment
CN111292751B (en) Semantic analysis method and device, voice interaction method and device, and electronic equipment
CN111524578B (en) Psychological assessment device, method and system based on electronic psychological sand table
CN108280065B (en) Foreign text evaluation method and device
Dobbs et al. Using new vocabulary in writing: Exploring how word and learner characteristics relate to the likelihood that writers use newly taught vocabulary
CN112069329B (en) Text corpus processing method, device, equipment and storage medium
US10380490B1 (en) Systems and methods for scoring story narrations
CN113505786A (en) Test question photographing and judging method and device and electronic equipment
CN110852071B (en) Knowledge point detection method, device, equipment and readable storage medium
CN116821377A (en) Primary school Chinese automatic evaluation system based on knowledge graph and large model
Agarwal et al. Autoeval: A nlp approach for automatic test evaluation system
CN111539207A (en) Text recognition method, text recognition device, storage medium and electronic equipment
Al-Ajlan et al. Towards the development of an automatic readability measurements for Arabic language
CN118193701A (en) Knowledge tracking and knowledge graph based personalized intelligent answering method and device
JP2016085284A (en) Program, apparatus and method for estimating evaluation level with respect to learning item on the basis of person's remark
CN117473078A (en) Visual reading system of long literature based on cross-domain named entity recognition
CN117592470A (en) Low-cost gazette data extraction method driven by large language model
CN112116181B (en) Classroom quality model training method, classroom quality evaluation method and classroom quality evaluation device
KR101072100B1 (en) Document processing apparatus and method for extraction of expression and description
CN117994098A (en) Teaching index data determining method, device, computer equipment and medium
Rüdian et al. Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture?
CN110442862B (en) Data processing method and device based on recruitment information
Luong et al. Building a corpus for vietnamese text readability assessment in the literature domain
CN114020863A (en) Visual question-answer analysis method, device and system and readable storage medium
CN113435213A (en) Method and device for returning answers aiming at user questions and knowledge base

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination