CN108776794B - Teaching effect image drawing method based on big data and artificial intelligence and robot system - Google Patents

Teaching effect image drawing method based on big data and artificial intelligence and robot system

Info

Publication number
CN108776794B
Authority
CN
China
Prior art keywords
evaluation unit
teaching
teacher
preset
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810634275.9A
Other languages
Chinese (zh)
Other versions
CN108776794A
Inventor
朱定局
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN201810634275.9A priority Critical patent/CN108776794B/en
Publication of CN108776794A publication Critical patent/CN108776794A/en
Application granted granted Critical
Publication of CN108776794B publication Critical patent/CN108776794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition


Abstract

A teaching effect portrait method based on big data and artificial intelligence, and a robot system, comprising: taking each evaluation unit of each teacher as an evaluation unit label of that teacher's teaching effect portrait; computing the proportion of the total duration of the first preset action of all students, identified from the teaching video corresponding to the evaluation unit, to the total duration of the evaluation unit; and taking that proportion as the value of the evaluation unit label. The method and the system portray a teacher's teaching effect through the students' reactions to the course captured in teaching-process big data, reflect the teaching effect more truly and objectively, and can greatly improve the objectivity and accuracy of teaching portraits and teaching evaluation.

Description

Teaching effect image drawing method based on big data and artificial intelligence and robot system
Technical Field
The invention relates to the technical field of information, in particular to a teaching effect image drawing method based on big data and artificial intelligence and a robot system.
Background
In the prior art, a teaching effect portrait is formed from students scoring the teacher at the end of the term.
In the course of implementing the invention, the inventor found that the prior art has at least the following problem: a student's evaluation of a teacher depends not only on how the teacher lectures but also on the student's preferences. The existing teaching effect portrait therefore cannot evaluate the teaching effect objectively; it is influenced by the students' subjectivity, and its accuracy is low.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
Therefore, it is necessary to provide a teaching effect portrait method and robot system based on big data and artificial intelligence to overcome the strong subjectivity and low accuracy of teaching effect portraits in the prior art.
In a first aspect, there is provided a teaching effect portrait method, the method comprising:
an evaluation labeling step, in which each evaluation unit of each teacher is taken as an evaluation unit label of that teacher's teaching effect portrait;
a video identification step, in which the proportion of the total duration of the first preset action of all students, identified from the teaching video corresponding to the evaluation unit, to the total duration of the evaluation unit is computed;
a label assignment step, in which the proportion is taken as the value of the evaluation unit label of the teacher's teaching effect portrait.
Preferably, before the evaluation labeling step, the method further comprises:
a data acquisition step, in which teaching-process big data are acquired, the big data comprising the teaching video corresponding to each evaluation unit of each teacher;
a preset action step, in which a preset action of attending class attentively is acquired as the first preset action.
preferably, the tag assigning step further comprises:
and storing the value of the evaluation unit label of the teaching effect portrait of each teacher into a teaching effect portrait knowledge base.
Preferably, after the label assignment step, the method further comprises:
a query receiving step, in which the teacher to be queried and the evaluation unit to be queried are acquired;
a search and evaluation step, in which the teaching effect portrait of the teacher to be queried is retrieved from the teaching effect portrait knowledge base, and the values of all evaluation unit labels belonging to the evaluation unit to be queried are obtained from that portrait;
an effect calculation step, in which the weights of all evaluation units belonging to the evaluation unit to be queried are acquired, and the weighted average of the label values according to those weights is taken as the teaching effect of the evaluation unit of the teacher to be queried.
Preferably, the evaluation unit comprises a course over a preset period of time; the first preset action comprises the student raising the head with eyes looking forward, and/or taking notes by hand.
In a second aspect, there is provided a teaching effect portrait system, the system comprising:
an evaluation label module, used for taking each evaluation unit of each teacher as an evaluation unit label of that teacher's teaching effect portrait;
a video identification module, used for computing the proportion of the total duration of the first preset action of all students, identified from the teaching video corresponding to the evaluation unit, to the total duration of the evaluation unit;
a label assignment module, used for taking the proportion as the value of the evaluation unit label of the teacher's teaching effect portrait.
Preferably, the system further comprises:
the system comprises a data acquisition module, a data storage module and a data processing module, wherein the data acquisition module is used for acquiring big data of a teaching process, and the big data of the teaching process comprises teaching videos corresponding to each evaluation unit of each teacher;
and the preset action module is used for acquiring a preset action of carefully listening to the class as a first preset action.
Preferably, the system further comprises:
the knowledge base storage module is used for storing the value of the evaluation unit label of the teaching effect picture of each teacher into a teaching effect picture knowledge base;
the query receiving module is used for acquiring teachers to be queried and evaluation units to be queried;
the search evaluation module is used for searching and acquiring the teaching effect portrait of the teacher to be queried from a teaching effect portrait knowledge base and acquiring values of all evaluation unit labels belonging to the evaluation unit to be queried from the teaching effect portrait of the teacher to be queried;
and the effect calculation module is used for acquiring the weights of all the evaluation units belonging to the evaluation unit to be inquired, and taking the value obtained by carrying out weighted average on the values of all the evaluation unit labels according to the weights of all the evaluation units as the teaching effect of the evaluation unit of the teacher to be inquired.
Preferably, the evaluation unit includes a lesson for a preset period of time; the preset action of carefully attending the class comprises that the student raises the head and eyes to look forward or/and takes notes manually.
In a third aspect, there is provided a teaching effect portrait robot system, in which the teaching effect portrait system according to the second aspect is disposed.
The embodiment of the invention has the following advantages and beneficial effects:
according to the teaching effect portrait method and the robot system based on the big data and the artificial intelligence, each evaluation unit of each teacher is used as an evaluation unit label of the teaching effect portrait of each teacher, the proportion of the total duration of the first preset action of all students identified from the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit is used as the value of the evaluation unit label of the teaching effect portrait of each teacher, so that the teaching effect of teachers is portrait through the reactions of the students on courses in the big data in the teaching process, the teaching effect of the teachers is reflected more truly and objectively, and the objectivity and the accuracy of the teaching portrait and the teaching evaluation can be greatly improved.
Drawings
FIG. 1 is a flow chart of a method for representing a teaching effect according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for representing educational effect images according to a preferred embodiment of the present invention;
FIG. 3 is a schematic block diagram of a teaching effect representation system according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a teaching effect representation system according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the embodiments.
The embodiment of the invention provides a teaching effect image drawing method based on big data and artificial intelligence and a robot system. The big data technology comprises big data acquisition and processing technology, and the artificial intelligence technology comprises identification technology and teaching effect portrait technology.
Big data and artificial intelligence based teaching effect portrait method
1. Example 1
As shown in FIG. 1, one embodiment provides a method for representing a teaching effect, the method comprising:
An evaluation labeling step S300, in which each evaluation unit of each teacher is taken as an evaluation unit label of that teacher's teaching effect portrait. Preferably, the teaching effect portrait is a user portrait; user portraits are a core technique of artificial intelligence.
A video identification step S400, in which the proportion of the total duration of the first preset action of all students, identified from the teaching video corresponding to the evaluation unit, to the total duration of the evaluation unit is computed.
A label assignment step S500, in which the proportion is taken as the value of the evaluation unit label of the teacher's teaching effect portrait.
The portrait method obtains the teaching effect portrait by recognition from videos of the teaching process, rather than from students' subjective scores, judges' scores, or students' exam results alone, so the teaching effect portrait can objectively reflect the actual effect of the teaching process.
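The pipeline of steps S300 to S500 can be sketched in a few lines, assuming the per-student durations of the first preset action have already been extracted from the video by an upstream recognizer; all names and data below are illustrative and not part of the patent:

```python
def portrait_label_value(student_action_minutes, unit_total_minutes):
    """S400/S500: average, over all students, of the proportion of the
    evaluation unit's total duration spent on the first preset action."""
    ratios = [m / unit_total_minutes for m in student_action_minutes]
    return sum(ratios) / len(ratios)

def build_portrait(units):
    """S300: each evaluation unit becomes an evaluation unit label of the
    teacher's portrait; S500: the proportion becomes the label's value."""
    return {label: portrait_label_value(minutes, total)
            for label, (minutes, total) in units.items()}

# Illustrative data: one evaluation unit, 5 students, a 2000-minute video
portrait = build_portrait({
    "Advanced Mathematics, 2018-5-23 to 2018-8-12":
        ([1000, 400, 600, 1200, 800], 2000),
})
print(round(portrait["Advanced Mathematics, 2018-5-23 to 2018-8-12"], 3))  # 0.4
```

The per-student proportions here are 50%, 20%, 30%, 60% and 40%, so the label value is their average, 40%, matching the worked example later in the description.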
2. Example 2
Based on embodiment 1, in a preferred embodiment, the evaluation labeling step S300 includes:
S301, reading each evaluation unit of each teacher (e.g. Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12; Zhang San, 2018002, English, 2018 school year; Li Si, 2018003, Chemistry, 2017 school term; etc.) from the big data storage system.
S302, establishing a teaching effect portrait for each teacher (e.g. the teaching effect portrait of Zhang San, the teaching effect portrait of Li Si, etc.).
S303, taking each evaluation unit of each teacher as an evaluation unit label of that teacher's teaching effect portrait (e.g. "Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12" becomes an evaluation unit label of Zhang San's teaching effect portrait; "Zhang San, 2018002, English, 2018 school year" becomes another evaluation unit label of Zhang San's teaching effect portrait; "Li Si, 2018003, Chemistry, 2017 school term" becomes an evaluation unit label of Li Si's teaching effect portrait; etc.).
The evaluation labeling step S300 establishes a label for each evaluation unit of each teacher, which makes the teaching effect portrait more fine-grained and lays an objective foundation for portrait-based evaluation of the teaching effect.
3. Example 3
On the basis of embodiment 1, in a preferred embodiment, the video identification step S400 includes:
S401, identifying each student from the teaching video corresponding to each evaluation unit of each teacher through face recognition, and numbering the students. Preferably, the teaching video includes footage of classroom teaching activities such as students attending class, doing experiments, practising, taking notes, answering questions, reading aloud, and the like.
S402, acquiring the first preset action set from the teaching effect recognition knowledge base, and obtaining from it the preset set of attentive (carefully attending class) actions and the preset set of inattentive actions.
s403, identifying the motion of each student in the teaching video corresponding to each evaluation unit of each teacher and matching the motion with each motion in a set of predefined actions (if the characteristics of the predefined actions include duration, then matching needs to be performed in combination with corresponding motions in video frames or photos adjacent to the identified motion), obtaining at least one first matching degree (for example, if there are 2 motions in the set of actions, 2 first matching degrees can be obtained), if there is one first matching degree greater than or equal to the first preset matching degree, then the identified motion is the first preset motion, if the first matching degree is less than the first preset matching degree, then matching the identified motion with each motion in the set of actions not being followed carefully, if there is duration in the characteristics of the predefined actions, then matching needs to be performed in combination with corresponding motions in video frames or photos adjacent to the identified motion), obtaining at least one second matching degree, if each second matching degree is less than the second preset motion, then matching is obtained. For example, zhang san, 2018002, high-grade mathematics, 2018-5-23 to 2018-8-12 video of teaching video or a collection of captured photographs recognizes each student from left to right, from top to bottom, and matches the action of each student in each frame of video or each photograph with the preset action of carefully listening to a speech, note taking, or the like, and if there is a degree of matching, for example, with the speech, of 0.7, which is greater than the first preset degree of matching, for example, of 0.6, it may be determined that the recognized action is the action of carefully listening to the lesson. 
For another example, each student is identified from left to right and from top to bottom in a video or a snapshot of a teaching video of the third school year, 2018002, english, 2018 school year, and the action of each student in each frame of video or each snapshot is matched with the action of a preset serious lesson such as speaking, recording notes, and the like, all matching degrees are smaller than a first preset matching degree, for example, 0.6, then the identified action is matched with the action of a preset unreliated lesson such as sleeping, playing a mobile phone, and all matching degrees are smaller than a second preset matching degree, for example, 0.8, and then the identified action is the first preset action. For another example, if the video or the captured photo of the teaching video of lee four, 2018003, chemistry, 2017 school period is collected from left to right, and each student is identified from top to bottom, and the action of each student in each frame of video or each photo is matched with the action of the predetermined serious listening lesson such as speaking, recording notes, and the like, and all the matching degrees are less than the first predetermined matching degree, for example, 0.6, the identified action is matched with the predetermined action of the predetermined serious listening lesson such as sleeping, playing a mobile phone, and the like, and if one matching degree, for example, the matching degree with playing a mobile phone is 0.82 greater than the second predetermined matching degree, for example, 0.8, the identified action is not the first predetermined action.
S404, counting, for each student identified in the teaching video corresponding to each evaluation unit of each teacher (for example, the video or the set of captured photos of the teaching video of Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12), the duration (or the number of video frames or photos) occupied by the first preset action, and computing its proportion of the total duration (or total number of video frames or photos) of the evaluation unit. For example, for the student numbered 001, the recognized durations are: taking notes 150 minutes, speaking 50 minutes, sleeping 200 minutes, playing with a mobile phone 1000 minutes, and 600 minutes remaining; the duration occupied by the first preset action of student 001 is obtained as 1000 minutes, i.e. a proportion of 50% of the evaluation unit's total recording duration of 2000 minutes.
S405, averaging, over all students in the teaching video corresponding to each evaluation unit of each teacher (for example, the video or the set of captured photos of the teaching video of Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12), the proportions of the duration (or number of video frames or photos) occupied by each student's first preset action to the total duration of the evaluation unit. For example, if the teaching video contains 5 students in total with proportions of 50%, 20%, 30%, 60%, and 40%, the average is (50% + 20% + 30% + 60% + 40%) / 5 = 40%.
The video identification step S400 recognizes the teaching video and determines the actions corresponding to the various teaching effects; for example, attentive actions correspond to a good teaching effect and inattentive actions to a poor one, so that the portrait can serve as an objective basis for evaluating the teaching effect.
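The decision rule of step S403 can be sketched as follows; the thresholds 0.6 and 0.8 are the example values from the description, and the matching degrees are assumed to be produced by an action recognizer that is not specified here:

```python
FIRST_PRESET_MATCH = 0.6   # threshold for attentive actions (example value)
SECOND_PRESET_MATCH = 0.8  # threshold for inattentive actions (example value)

def is_first_preset_action(attentive_degrees, inattentive_degrees):
    """S403: an identified action counts as the first preset action if it
    matches some attentive action with degree >= 0.6, or, failing that,
    matches every inattentive action with degree < 0.8."""
    if any(d >= FIRST_PRESET_MATCH for d in attentive_degrees):
        return True
    return all(d < SECOND_PRESET_MATCH for d in inattentive_degrees)

# The three cases from the description:
print(is_first_preset_action([0.7, 0.2], [0.1, 0.1]))   # True: listening at 0.7
print(is_first_preset_action([0.4, 0.5], [0.3, 0.5]))   # True: no strong match either way
print(is_first_preset_action([0.4, 0.5], [0.1, 0.82]))  # False: phone-playing at 0.82
```

Note the asymmetry of the rule: an action is treated as attentive both when it clearly matches an attentive action and when it matches nothing strongly, which mirrors S203's "complement of the inattentive set".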
4. Example 4
Based on embodiment 1, in a preferred embodiment, the label assignment step S500 includes:
S501, taking the average obtained in step S405 as the value of the evaluation unit label of each teacher's teaching effect portrait (for example, the value of the evaluation unit label "Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12" of Zhang San's teaching effect portrait is 40%).
The label assignment step S500 takes the averaged proportion as the label value of the portrait, which makes portrait-based evaluation more objective.
5. Example 5
On the basis of embodiment 1, in a preferred embodiment, before the evaluation labeling step S300, the method further includes:
A data acquisition step S100, in which teaching-process big data are acquired, the big data comprising the teaching video corresponding to each evaluation unit of each teacher. Preferably, the video recording includes time information and time zone information.
A preset action step S200, in which a preset action of attending class attentively is acquired as the first preset action.
The steps before the evaluation labeling step S300 provide an objective data basis for establishing the portrait and an objective standard for judging the teaching effect, through the acquisition of teaching big data and of the preset actions.
(1) In a further preferred embodiment, the data acquisition step S100 comprises:
S101, acquiring the name and number of each teacher (e.g. Zhang San, 2018002; Li Si, 2018003; Wang Wu, 2018005; etc.) and storing them into a big data repository (e.g. HBase).
S102, acquiring each evaluation unit, comprising a course name and a start and end time (e.g. Advanced Mathematics, 2018-5-23 to 2018-8-12; English, 2018 school year; Chemistry, 2017 school term; Art, the first three weeks of the 2016 school term; etc.), and storing the evaluation units into the big data repository.
S103, acquiring each evaluation unit of each teacher (e.g. Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12; Zhang San, 2018002, English, 2018 school year; Li Si, 2018003, Chemistry, 2017 school term; etc.) and storing the evaluation units into the big data repository.
S104, acquiring the teaching videos of each evaluation unit of each teacher (e.g. all of Zhang San's teaching videos of Advanced Mathematics during 2018-5-23 to 2018-8-12; all of Zhang San's teaching videos of English in the 2018 school year; all of Li Si's teaching videos of Chemistry in the 2017 school term; etc.) and storing the teaching videos into a big data store (e.g. HDFS).
(2) In a further preferred embodiment, the preset action step S200 comprises:
S201, prompting the user to preset the attentive (carefully attending class) actions, each including the name of the action and its features (e.g. speaking: head forward and mouth moving; note taking: head down and holding a pen; etc.).
S202, prompting the user to preset the inattentive actions, each including the name of the action and its features (e.g. sleeping: eyes closed for more than 1 minute; playing with a mobile phone: looking down at the phone for more than 1 minute; etc.).
S203, receiving the user's input, adding the preset set of attentive actions and the complement of the preset set of inattentive actions into the first preset action set, and storing the first preset action set into the teaching effect recognition knowledge base.
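The user-defined action sets of steps S201 to S203 might be stored as simple records; the structure below is purely illustrative of what the teaching effect recognition knowledge base could hold, using the example actions from the description:

```python
# Preset attentive actions (S201): action name -> feature list
attentive = {
    "speaking": ["head forward", "mouth moving"],
    "note taking": ["head down", "holding a pen"],
}

# Preset inattentive actions (S202): action name -> feature list
inattentive = {
    "sleeping": ["eyes closed", "longer than 1 minute"],
    "playing a mobile phone": ["looking down at phone", "longer than 1 minute"],
}

# S203: both sets are stored in the teaching effect recognition knowledge base
knowledge_base = {"attentive": attentive, "inattentive": inattentive}
print(sorted(knowledge_base["inattentive"]))  # ['playing a mobile phone', 'sleeping']
```

A duration feature such as "longer than 1 minute" is what forces step S403 to look at adjacent video frames when matching, since a single frame cannot establish it.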
6. Example 6
As shown in fig. 2, on the basis of embodiment 1, in a preferred embodiment, after the label assignment step S500, the method further includes:
A knowledge base storage step S600, in which the value of the evaluation unit label of each teacher's teaching effect portrait is stored into a teaching effect portrait knowledge base.
A query receiving step S700, in which the teacher to be queried and the evaluation unit to be queried are acquired.
A search and evaluation step S800, in which the teaching effect portrait of the teacher to be queried is retrieved from the teaching effect portrait knowledge base, and the values of all evaluation unit labels belonging to the evaluation unit to be queried are obtained from that portrait.
An effect calculation step S900, in which the weights of all evaluation units belonging to the evaluation unit to be queried are acquired, and the weighted average of the label values according to those weights is taken as the teaching effect of the evaluation unit of the teacher to be queried, which is then output to the user.
In the effect calculation step S900, the higher the weighted-average value, the better the teaching effect of the queried evaluation unit of the teacher; the lower the value, the poorer the teaching effect. By comparing the weighted-average values, the relative merits of the teaching effects of different queried teacher evaluation units can be judged. For example, if evaluation unit A of the first teacher has a weighted average of 70%, evaluation unit B of the first teacher 30%, evaluation unit B of the second teacher 50%, and evaluation unit C of the second teacher 10%, the teaching effects rank from best to worst as: unit A of the first teacher > unit B of the second teacher > unit B of the first teacher > unit C of the second teacher.
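The comparison described above amounts to sorting evaluation units by their weighted-average values; a sketch with the four example units (teacher and unit names are placeholders):

```python
def rank_by_effect(effects):
    """Order (teacher, evaluation unit) pairs from best to worst effect."""
    return sorted(effects, key=effects.get, reverse=True)

# Weighted-average values from the example above
effects = {
    ("teacher 1", "unit A"): 0.70,
    ("teacher 1", "unit B"): 0.30,
    ("teacher 2", "unit B"): 0.50,
    ("teacher 2", "unit C"): 0.10,
}
for teacher, unit in rank_by_effect(effects):
    print(teacher, unit)
```

The printed order reproduces the ranking in the text: unit A of teacher 1, unit B of teacher 2, unit B of teacher 1, unit C of teacher 2.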
The steps after the label assignment step S500 obtain the teaching effect of the queried evaluation unit by looking up the unit's label value in the teaching effect portrait, so that teaching evaluation is performed on the basis of the teaching effect portrait, which is in turn built from teaching-process big data. Teaching evaluation based on this embodiment can therefore objectively reflect the teaching effect of the teaching process, whereas traditional teaching evaluation relies only on students' scores at the end of the term, which on the one hand is overly subjective and on the other hand ignores the teaching process.
(1) In a further preferred embodiment, the knowledge base storage step S600 comprises:
S601, storing the value of the evaluation unit label of each teacher's teaching effect portrait into the teaching effect portrait knowledge base (for example, the value of the evaluation unit label "Zhang San, 2018002, Advanced Mathematics, 2018-5-23 to 2018-8-12" of Zhang San's teaching effect portrait is 40%; the value of the evaluation unit label "Zhang San, 2018002, English, 2018 school year" of Zhang San's teaching effect portrait is 80%; the value of the evaluation unit label "Li Si, 2018003, Chemistry, 2017 school term" of Li Si's teaching effect portrait is 30%; etc.).
(2) In a further preferred embodiment, the query receiving step S700 comprises:
S701, acquiring the name and number of the teacher to be queried (e.g. Zhang San, 2018002);
S702, acquiring the evaluation unit to be queried, comprising a course name and a start and end time (example 1: Advanced Mathematics, 2018-5-23 to 2018-8-12; example 2: all courses, year 2018).
(3) In a further preferred embodiment, the search evaluation step S800 includes:
s801, searching and acquiring a teaching effect portrait (for example, a teaching effect portrait of Zhang III) of the teacher to be inquired including name and number (for example, zhang III and 2018002) from a teaching effect portrait knowledge base.
S802, each evaluation unit label (Zusanli, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12; zusanli, 2018002, english, 2018 schoolyear; and the like) is obtained from the teaching effect portrait of the teacher to be inquired (Zusanli, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12 in example 1; all courses and 2018 degrees in example 2), and then all evaluation unit labels (Zusanli, 2018002, advanced mathematics, 2018-5-23 to 2018-12 in example 1; zusanli, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12 in example 2; zusanli, 2018002, english and 2018 schoolyear) belonging to the evaluation unit to be inquired are selected from the teaching effect portrait of the teacher to be inquired (for example three, 2018002, 2018 schoolyear).
S803, retrieving and acquiring, from the teaching effect portrait knowledge base, the values of all the evaluation unit labels belonging to the evaluation unit to be queried in the teaching effect portrait of the teacher to be queried (in example 1, the value of the evaluation unit label "Zhang San, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12" of the teaching effect portrait of Zhang San is 40%; in example 2, the value of the evaluation unit label "Zhang San, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12" is 40%, and the value of the evaluation unit label "Zhang San, 2018002, English, 2018 academic year" is 80%).
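Steps S801 to S803 can be sketched as a label-selection routine. The tuple layout and the matching rule below (exact course and period match, with an "all courses" wildcard and a substring test for the period) are assumptions made for illustration only; the patent does not prescribe a particular matching algorithm.

```python
def select_labels(labels, teacher_id, course, period):
    """Return the labels of one teacher that belong to the queried evaluation unit.

    `labels` is a list of (teacher_id, course, period, value) tuples.
    The matching rule (exact match, or an 'all courses' wildcard with a
    substring period test) is an assumption for illustration only.
    """
    selected = []
    for tid, c, p, value in labels:
        if tid != teacher_id:
            continue
        course_ok = course == "all courses" or c == course
        period_ok = p == period or period in p  # e.g. "2018" matches "2018 academic year"
        if course_ok and period_ok:
            selected.append((c, p, value))
    return selected


labels = [
    ("2018002", "advanced mathematics", "2018-5-23 to 2018-8-12", 0.40),
    ("2018002", "English", "2018 academic year", 0.80),
]
# Example 1: one specific course and period -> one label selected
ex1 = select_labels(labels, "2018002", "advanced mathematics", "2018-5-23 to 2018-8-12")
# Example 2: all courses in the 2018 academic year -> both labels selected
ex2 = select_labels(labels, "2018002", "all courses", "2018")
```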
(4) In a further preferred embodiment, the effect calculation step S900 comprises:
S901, the credits corresponding to the courses of all the evaluation units belonging to the evaluation unit to be queried are acquired as weights (in example 1, the course "advanced mathematics, 2018-5-23 to 2018-8-12" is worth 1 credit, so the weight corresponding to the evaluation unit "Zhang San, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12" is set to 1; in example 2, the course "advanced mathematics, 2018-5-23 to 2018-8-12" is worth 1 credit, so the weight of the evaluation unit "Zhang San, 2018002, advanced mathematics, 2018-5-23 to 2018-8-12" is set to 1, and the course "English, 2018 academic year" is worth 3 credits, so the weight of the evaluation unit "Zhang San, 2018002, English, 2018 academic year" is set to 3).
S902, the values of all the evaluation unit labels are weighted-averaged according to the weights of all the evaluation units (in example 1, the label value is 40% and the corresponding weight is 1, so the weighted average is 40% × 1 = 40%; in example 2, the label values are 40% and 80% with corresponding weights 1 and 3, so the weighted average is (40% × 1 + 80% × 3)/4 = 70%).
S903, the value obtained after the weighted average (40% in example 1; 70% in example 2) is taken as the teaching effect of the evaluation unit of the teacher to be queried.
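The weighted average of steps S901 to S903 can be sketched as follows. Treating course credits as the weights follows the example in S901; the function name is hypothetical.

```python
def teaching_effect(label_values, weights):
    """Weighted average of evaluation unit label values (steps S901-S903).

    `label_values` are the ratios from the teaching effect portrait,
    `weights` are the corresponding course credits.
    """
    assert len(label_values) == len(weights) and weights
    total = sum(weights)
    return sum(v * w for v, w in zip(label_values, weights)) / total


# Example 1: a single course with label value 40% and credit weight 1
effect1 = teaching_effect([0.40], [1])
# Example 2: two courses, (40% * 1 + 80% * 3) / (1 + 3) = 70%
effect2 = teaching_effect([0.40, 0.80], [1, 3])
```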
7. Example 7
On the basis of any one of embodiments 1 to 6, in a preferred embodiment, the evaluation unit comprises a course within a preset time period; the first preset action comprises that the student raises his or her head with eyes looking forward, or/and takes notes by hand.
The evaluation unit covers a course and its time period, so it can be set individually as needed, can be used to evaluate both formal courses of various types and informal courses, and can be extended to course-like occasions. The preset actions are set by the user and can be updated at any time, so this embodiment can adopt whatever actions best indicate the teaching effect; moreover, by combining various actions of attending class seriously with various actions of not attending class seriously, the preset actions improve the accuracy and precision of judging the teaching effect from class-listening behavior.
(1) In a further preferred embodiment, the course within the preset time period comprises: the course name with its start time and end time, or the course name with its academic year, or the course name with its semester.
(2) In a further preferred embodiment, the courses within the preset time period further comprise informal courses, such as lectures, salons, experiments, and the like.
(3) In a further preferred embodiment, the preset actions of attending class seriously further comprise any action other than the preset actions of not attending class seriously; an elimination method is adopted during recognition: if a recognized action is not one of the preset actions of not attending class seriously, it is determined to be a preset action of attending class seriously.
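The elimination method of this sub-embodiment can be sketched as a three-branch classifier. Matching an action to a set by exact name, and the example action names, are assumptions for illustration; the patent matches by feature similarity (matching degrees) rather than names.

```python
def classify_action(action, serious_set, not_serious_set):
    """Elimination-method sketch: decide whether an action counts as
    'attending class seriously' (a first preset action).

    An action counts if it is in the preset serious set, OR if it is NOT
    in the preset not-serious set (the complement / elimination rule).
    Exact name matching is an assumption for illustration only.
    """
    if action in serious_set:
        return True
    if action in not_serious_set:
        return False
    # Elimination: anything not recognized as a not-serious action
    # is treated as attending class seriously.
    return True


serious = {"head up, eyes forward", "taking notes"}
not_serious = {"sleeping", "using phone"}
print(classify_action("taking notes", serious, not_serious))    # True
print(classify_action("sleeping", serious, not_serious))        # False
print(classify_action("drinking water", serious, not_serious))  # True, by elimination
```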
(4) In a further preferred embodiment, the preset actions of attending class seriously further comprise changes of expression, voice, mouth shape, pupils, and the like.
(II) big data and artificial intelligence based teaching effect portrait system
1. Example 1
As shown in FIG. 3, one embodiment provides a teaching effect portrait system, comprising:
an evaluation label module 300, configured to use each evaluation unit of each teacher as an evaluation unit label of the teaching effect image of each teacher;
the video identification module 400, configured to identify, from the teaching video corresponding to each evaluation unit of each teacher, the ratio of the total duration of the first preset actions of all students to the total duration of each evaluation unit;
and a label assignment module 500, configured to use the ratio as the value of the evaluation unit label of the teaching effect portrait of each teacher.
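The core computation shared by modules 400 and 500 (and steps S400 and S500) can be sketched as a duration ratio. Normalizing by the number of students times the unit duration is an assumption made here so the ratio stays in [0, 1]; the patent text leaves this normalization detail open.

```python
def label_value(per_student_action_seconds, unit_duration_seconds):
    """Sketch of modules 400/500: ratio of the total duration of first
    preset actions of all students to the total duration of the unit.

    `per_student_action_seconds` holds, per student, the seconds during
    which a first preset action was recognized. Dividing by
    (number of students x unit duration) is an assumption for illustration.
    """
    n = len(per_student_action_seconds)
    return sum(per_student_action_seconds) / (n * unit_duration_seconds)


# Two students: one attentive for 60 s, one for 20 s, in a 100 s evaluation unit
print(label_value([60, 20], 100))  # 0.4
```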
The teaching effect portrait system has the same beneficial effects as the teaching effect portrait method, and the description is omitted here.
2. Example 2
In a preferred embodiment, on the basis of embodiment 1, the evaluation label module 300 comprises units 301, 302, and 303, which correspond one to one to steps S301, S302, and S303 in the foregoing preferred embodiment and are used to perform S301, S302, and S303, respectively; the description is not repeated here.
The evaluation label module 300 has the same beneficial effects as the evaluation label step S300, and will not be described herein again.
3. Example 3
In a preferred embodiment, on the basis of embodiment 1, the video identification module 400 comprises units 401, 402, 403, 404, and 405, which correspond one to one to steps S401 to S405 in the foregoing preferred embodiment and are used to perform S401 to S405, respectively; the description is not repeated here.
The video identification module 400 has the same beneficial effects as the video identification step S400, and is not described herein again.
4. Example 4
In a preferred embodiment, on the basis of embodiment 1, the label assignment module 500 comprises a unit 501, which corresponds to step S501 in the foregoing preferred embodiment and is used to perform S501; details are not repeated here.
The label assignment module 500 has the same beneficial effects as the label assignment step S500, which are not described herein again.
5. Example 5
In a preferred embodiment, on the basis of embodiment 1, the system further comprises, before the evaluation label module 300:
the data acquisition module 100 is configured to acquire teaching process big data, where the teaching process big data includes a teaching video corresponding to each evaluation unit of each teacher;
and a preset action module 200, configured to acquire a preset action of attending class seriously as the first preset action.
the previous module of the evaluation tag module 300 has the same beneficial effects as the previous step of the evaluation tag module S300, and is not described herein again.
(1) In a further preferred embodiment, the data acquisition module 100 comprises units 101, 102, 103, and 104, which correspond one to one to steps S101 to S104 in the foregoing preferred embodiment and are used to perform S101 to S104, respectively; the description is not repeated here.
(2) In a further preferred embodiment, the preset action module 200 comprises units 201, 202, and 203, which correspond one to one to steps S201, S202, and S203 in the foregoing preferred embodiment and are used to perform S201, S202, and S203, respectively; the description is not repeated here.
6. Example 6
As shown in FIG. 4, in a preferred embodiment, on the basis of embodiment 1, the system further comprises, after the label assignment module 500:
a knowledge base storage module 600, configured to store the value of the evaluation unit label of the teaching effect portrait of each teacher into a teaching effect portrait knowledge base;
a query receiving module 700, configured to acquire the teacher to be queried and the evaluation unit to be queried;
a search evaluation module 800, configured to search and obtain a teaching effect portrait of the teacher to be queried from a teaching effect portrait knowledge base, and obtain values of all evaluation unit tags belonging to the evaluation units to be queried from the teaching effect portrait of the teacher to be queried;
and the effect calculation module 900 is configured to obtain weights of all the evaluation units belonging to the evaluation unit to be queried, and use a value obtained by performing weighted average on the values of all the evaluation unit labels according to the weights of all the evaluation units as a teaching effect of the evaluation unit of the teacher to be queried.
The modules after the label assignment module 500 have the same beneficial effects as the steps after the label assignment step S500, which are not described herein again.
(1) In a further preferred embodiment, the knowledge base storage module 600 comprises a unit 601, which corresponds to step S601 in the foregoing preferred embodiment and is used to perform S601; details are not repeated here.
(2) In a further preferred embodiment, the query receiving module 700 comprises units 701 and 702, which correspond one to one to steps S701 and S702 in the foregoing preferred embodiment and are used to perform S701 and S702, respectively; the description is not repeated here.
(3) In a further preferred embodiment, the search evaluation module 800 comprises units 801, 802, and 803, which correspond one to one to steps S801, S802, and S803 in the foregoing preferred embodiment and are used to perform S801, S802, and S803, respectively; details are not repeated here.
(4) In a further preferred embodiment, the effect calculation module 900 comprises units 901, 902, and 903, which correspond one to one to steps S901, S902, and S903 in the foregoing preferred embodiment and are used to perform S901, S902, and S903, respectively; details are not repeated here.
7. Example 7
In a preferred embodiment, on the basis of any one of embodiments 1 to 6, the evaluation unit comprises a course within a preset time period; the preset action of attending class seriously comprises that the student raises his or her head with eyes looking forward, or/and takes notes by hand.
The beneficial effects of the evaluation unit and the preset action are as described above.
(III) teaching effect portrait robot system based on big data and artificial intelligence
One embodiment provides a teaching effect portrait robot system, wherein the teaching effect portrait system is configured in the robot system.
The teaching effect portrait robot system has the same beneficial effects as the teaching effect portrait system, and the description is omitted here.
The teaching effect portrait based on teaching process big data serves as the standard for teaching effect evaluation, and using it for teaching evaluation reduces or eliminates the subjectivity of evaluation by human judges. On the one hand, the method can be used for fully automatic teaching evaluation; on the other hand, the teaching effect portrait can also assist a review panel in teaching evaluation, for example, by providing the teaching effect portrait or the teaching evaluation result of the embodiments of the present invention to the review panel for reference.
According to the teaching effect portrait method and robot system based on big data and artificial intelligence, each evaluation unit of each teacher is used as an evaluation unit label of the teaching effect portrait of that teacher; the ratio of the total duration of the first preset actions of all students, identified from the teaching video corresponding to each evaluation unit, to the total duration of that evaluation unit is computed; and this ratio is used as the value of the evaluation unit label. The teaching effect of a teacher is thereby portrayed through the students' reactions to the course recorded in the teaching process big data, which reflects the teaching effect more truly and objectively and can greatly improve the objectivity and accuracy of teaching portraits and teaching evaluation.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A teaching effect portrait method, the method comprising:
an evaluation labeling step, wherein each evaluation unit of each teacher is used as an evaluation unit label of the teaching effect portrait of each teacher;
identifying each student from the teaching video corresponding to each evaluation unit of each teacher through a face recognition technology, and numbering the students;
prompting the user to preset actions of attending class seriously, including the names and features of the actions; prompting the user to preset actions of not attending class seriously, including the names of the actions and features of the actions including the duration of the actions; receiving the input of the user, adding the preset set of actions of attending class seriously and the complement of the preset set of actions of not attending class seriously into a first preset action set, and storing the first preset action set into a teaching effect recognition knowledge base;
acquiring the first preset action set from the teaching effect recognition knowledge base, and acquiring the preset set of actions of attending class seriously and the preset set of actions of not attending class seriously from the first preset action set;
identifying the actions of each student in the teaching video corresponding to each evaluation unit of each teacher and matching each identified action against each action in the preset set of actions of attending class seriously to obtain at least one first matching degree; if one first matching degree is greater than or equal to a first preset matching degree, the identified action is a first preset action; if every first matching degree is less than the first preset matching degree, matching the identified action against each action in the preset set of actions of not attending class seriously to obtain at least one second matching degree; and if every second matching degree is less than a second preset matching degree, the identified action is a first preset action;
a video identification step, in which the ratio of the total duration of the first preset actions of all students identified from the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit is computed;
a label assignment step, in which the ratio is used as the value of the evaluation unit label of the teaching effect portrait of each teacher;
searching and evaluating, namely searching and acquiring a teaching effect portrait of a teacher to be queried from a teaching effect portrait knowledge base, and acquiring values of all evaluation unit labels belonging to evaluation units to be queried from the teaching effect portrait of the teacher to be queried;
an effect calculation step, namely acquiring the weights of all the evaluation units belonging to the evaluation unit to be inquired, and taking the value obtained by carrying out weighted average on the values of all the evaluation unit labels according to the weights of all the evaluation units as the teaching effect of the evaluation unit of the teacher to be inquired;
the higher the value obtained after the weighted average, the better the teaching effect of the evaluation unit of the teacher to be queried; the lower the value obtained after the weighted average, the poorer the teaching effect of the evaluation unit of the teacher to be queried; and the relative merits of the teaching effects of the evaluation units of different teachers to be queried are judged by comparing the values obtained after their respective weighted averages.
2. The teaching effect portrait method of claim 1, wherein the evaluation label step is preceded by:
a data acquisition step, in which teaching process big data are acquired, the teaching process big data comprising the teaching video corresponding to each evaluation unit of each teacher;
and a preset action step, in which an acquired preset action of attending class seriously is taken as the first preset action.
3. The teaching effect portrait method of claim 1, wherein the label assignment step is followed by:
a knowledge base storage step, in which the value of the evaluation unit label of the teaching effect portrait of each teacher is stored into a teaching effect portrait knowledge base.
4. The teaching effect portrait method of any one of claims 1 to 3, wherein the label assignment step is followed by:
a query receiving step, in which the teacher to be queried and the evaluation unit to be queried are acquired.
5. The teaching effect portrait method of any one of claims 1 to 3, wherein the evaluation unit comprises a course within a preset time period; the first preset action comprises that the student raises his or her head with eyes looking forward, or/and takes notes by hand.
6. A teaching effect portrait system, the system comprising:
the evaluation label module is used for taking each evaluation unit of each teacher as an evaluation unit label of the teaching effect portrait of each teacher;
identifying each student from the teaching video corresponding to each evaluation unit of each teacher through a face recognition technology, and numbering the students;
prompting the user to preset actions of attending class seriously, including the names and features of the actions; prompting the user to preset actions of not attending class seriously, including the names of the actions and features of the actions including duration; receiving the input of the user, adding the preset set of actions of attending class seriously and the complement of the preset set of actions of not attending class seriously into a first preset action set, and storing the first preset action set into a teaching effect recognition knowledge base;
acquiring the first preset action set from the teaching effect recognition knowledge base, and acquiring the preset set of actions of attending class seriously and the preset set of actions of not attending class seriously from the first preset action set;
identifying the actions of each student in the teaching video corresponding to each evaluation unit of each teacher and matching each identified action against each action in the preset set of actions of attending class seriously to obtain at least one first matching degree; if one first matching degree is greater than or equal to a first preset matching degree, the identified action is a first preset action; if every first matching degree is less than the first preset matching degree, matching the identified action against each action in the preset set of actions of not attending class seriously to obtain at least one second matching degree; and if every second matching degree is less than a second preset matching degree, the identified action is a first preset action;
the video identification module is used for identifying the proportion of the total duration of the first preset actions of all students in the teaching video corresponding to each evaluation unit of each teacher to the total duration of each evaluation unit;
a label assignment module, configured to use the ratio as a value of the label of the evaluation unit of the representation of the teaching effect of each teacher;
the search evaluation module is used for searching and acquiring a teaching effect portrait of a teacher to be queried from the teaching effect portrait knowledge base and acquiring values of all evaluation unit labels belonging to evaluation units to be queried from the teaching effect portrait of the teacher to be queried;
the effect calculation module is used for acquiring the weights of all the evaluation units belonging to the evaluation unit to be queried, and taking the value obtained by performing weighted average on the values of all the evaluation unit labels according to the weights of all the evaluation units as the teaching effect of the evaluation unit of the teacher to be queried;
the higher the value obtained after the weighted average, the better the teaching effect of the evaluation unit of the teacher to be queried; the lower the value obtained after the weighted average, the poorer the teaching effect of the evaluation unit of the teacher to be queried; and the relative merits of the teaching effects of the evaluation units of different teachers to be queried are judged by comparing the values obtained after their respective weighted averages.
7. The teaching effect portrait system of claim 6, further comprising:
the system comprises a data acquisition module, a data storage module and a data processing module, wherein the data acquisition module is used for acquiring big data of a teaching process, and the big data of the teaching process comprises teaching videos corresponding to each evaluation unit of each teacher;
and the preset action module, configured to acquire a preset action of attending class seriously as the first preset action.
8. The teaching effect portrait system of claim 6, further comprising:
the knowledge base storage module is used for storing the value of the evaluation unit label of the teaching effect picture of each teacher into a teaching effect picture knowledge base;
and the query receiving module is used for acquiring teachers to be queried and evaluation units to be queried.
9. The teaching effect portrait system of claim 7, wherein the evaluation unit comprises a course within a preset time period; the preset action of attending class seriously comprises that the student raises his or her head with eyes looking forward, or/and takes notes by hand.
10. A teaching effect portrait robot system, wherein the robot system is provided with the teaching effect portrait system of any one of claims 6 to 9.
CN201810634275.9A 2018-06-20 2018-06-20 Teaching effect image drawing method based on big data and artificial intelligence and robot system Active CN108776794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810634275.9A CN108776794B (en) 2018-06-20 2018-06-20 Teaching effect image drawing method based on big data and artificial intelligence and robot system

Publications (2)

Publication Number Publication Date
CN108776794A CN108776794A (en) 2018-11-09
CN108776794B true CN108776794B (en) 2023-03-28

Family

ID=64026186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810634275.9A Active CN108776794B (en) 2018-06-20 2018-06-20 Teaching effect image drawing method based on big data and artificial intelligence and robot system

Country Status (1)

Country Link
CN (1) CN108776794B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228293A (en) * 2016-07-18 2016-12-14 重庆中科云丛科技有限公司 teaching evaluation method and system
CN107918821A (en) * 2017-03-23 2018-04-17 广州思涵信息科技有限公司 Teachers ' classroom teaching process analysis method and system based on artificial intelligence technology

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040009461A1 (en) * 2000-04-24 2004-01-15 Snyder Jonathan Scott System for scheduling classes and managing educational resources
JP2007034507A (en) * 2005-07-25 2007-02-08 Nec Corp Diagnostic method, device, and program for equipment using radio
CN102024151B (en) * 2010-12-02 2012-12-26 中国科学院计算技术研究所 Training method of gesture motion recognition model and gesture motion recognition method
CN103559812B (en) * 2013-11-07 2016-01-20 大连东方之星信息技术有限公司 A kind of educational supervision's appraisal report generation system
CN107085721A (en) * 2017-06-26 2017-08-22 厦门劢联科技有限公司 A kind of intelligence based on Identification of Images patrols class management system

Similar Documents

Publication Publication Date Title
CN109359215B (en) Video intelligent pushing method and system
CN108281052B (en) A kind of on-line teaching system and online teaching method
CN106547815B (en) Big data-based targeted job generation method and system
CN108829842B (en) Learning expression image method and robot system based on big data and artificial intelligence
CN108765229B (en) Learning performance evaluation method based on big data and artificial intelligence and robot system
CN110753256B (en) Video playback method and device, storage medium and computer equipment
CN110580470A (en) Monitoring method and device based on face recognition, storage medium and computer equipment
Ma A contextualised study of EFL learners' vocabulary learning approaches: Framework, learner approach and degree of success
JP2004279808A (en) Telelearning system
CN108629715A (en) Accurate teaching method and robot system based on big data and artificial intelligence
CN108876677A (en) Assessment on teaching effect method and robot system based on big data and artificial intelligence
CN110033662A (en) A kind of method and system of topic information acquisition
CN108804705B (en) Review recommendation method based on big data and artificial intelligence and education robot system
CN108805770A (en) Content of courses portrait method based on big data and artificial intelligence and robot system
CN108776794B (en) Teaching effect image drawing method based on big data and artificial intelligence and robot system
CN108921405A (en) Accurate learning evaluation method and robot system based on big data and artificial intelligence
CN108764757A (en) Accurate Method of Teaching Appraisal and robot system based on big data and artificial intelligence
Rachmat et al. “I USE MULTIPLE-CHOICE QUESTION IN MOST ASSESSMENT I PREPARED”: EFL TEACHERS’VOICE ON SUMMATIVE ASSESSMENT
Nisa Photovoice activities to teach writing for high school students
JP7427906B2 (en) Information processing device, control method and program
CN113793539A (en) Auxiliary teaching method and device, electronic equipment and storage medium
CN117036117B (en) Classroom state assessment method based on artificial intelligence
CN112000798A (en) Chinese question type answer obtaining method and device
CN117455126B (en) Ubiquitous practical training teaching and evaluation management system and method
KR101023901B1 (en) System and method for learning management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant