CN112052800A - Intelligent teaching auxiliary system for foreign language teaching based on Internet of things - Google Patents

Intelligent teaching auxiliary system for foreign language teaching based on Internet of things

Info

Publication number
CN112052800A
CN112052800A (application CN202010935320.1A)
Authority
CN
China
Prior art keywords
student
image data
teacher
image
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010935320.1A
Other languages
Chinese (zh)
Inventor
郭猛
刘荣辉
张敬普
王可
王卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Urban Construction
Original Assignee
Henan University of Urban Construction
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Urban Construction filed Critical Henan University of Urban Construction
Priority to CN202010935320.1A
Publication of CN112052800A
Withdrawn (current legal status)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278 - Subtitling

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Educational Technology (AREA)
  • Signal Processing (AREA)
  • Educational Administration (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses an intelligent teaching auxiliary system for foreign language teaching based on the Internet of things, which comprises a display screen, a camera, a database, an identification unit, an analysis unit and a judgment unit. The camera acquires images of the in-class states of students and teachers in real time and transmits the image information to the identification unit; the database stores the teaching record information of students and teachers; the identification unit compares the stored records with the image information to identify and label the relevant data; the analysis unit analyzes the labeled data to obtain teacher lip and head state signals and student lip and head state signals; the judgment unit judges the data obtained by the analysis, which improves the accuracy of the data analysis, increases the persuasiveness of the data, judges the analysis result quickly, saves the time consumed by manual judgment and improves working efficiency; the display screen then shows a reminding subtitle so that inattentive students adjust their learning state in time.

Description

Intelligent teaching auxiliary system for foreign language teaching based on Internet of things
Technical Field
The invention relates to the technical field of teaching assistance, in particular to an intelligent teaching assistance system for foreign language teaching based on the Internet of things.
Background
Teaching is a distinctively human activity for cultivating talent, consisting of the teacher's teaching and the students' learning. Through this activity, teachers purposefully, systematically and in an organized way guide students to learn and master cultural and scientific knowledge and skills and promote the improvement of the students' qualities, so that the students become the people society needs.
At present, many problems still exist in teaching. For example, a teacher cannot observe whether all students are listening attentively during class, so teaching quality and efficiency cannot be guaranteed; students who do not listen attentively also end up learning less.
Disclosure of Invention
The invention aims to provide an intelligent teaching auxiliary system for foreign language teaching based on the Internet of things. The camera acquires images of the in-class states of students and teachers in real time, automatically obtains the image information and transmits it to the identification unit. The database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification, so that the relevant data in the image information are identified quickly and labeled separately, saving identification time and improving working efficiency. The judgment unit performs a judgment operation on the teacher lip correct signal, the teacher head state correct signal, the student lip error signal and the student head state error signal, which improves the accuracy of the data analysis, increases the persuasiveness of the data, judges the analysis result quickly, saves the time consumed by manual judgment and improves working efficiency. The display screen displays a reminder in response to the reminding signal, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name; the students are thus reminded to adjust their learning state in time, which improves the quality and efficiency of classroom teaching.
The purpose of the invention can be realized by the following technical scheme: the intelligent teaching auxiliary system for foreign language teaching based on the Internet of things comprises a display screen, a camera, a database, an identification unit, an analysis unit and a judgment unit;
the camera is used for acquiring images of the in-class states of students and teachers in real time, automatically obtaining the image information and transmitting the image information to the identification unit;
the database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification to obtain first time data, desk image data, podium image data, image information, second time data, student image data, student name data, teacher image data and teacher name data, and transmits them to the analysis unit;
the analysis unit is used for analyzing the first time data, the desk image data, the podium image data, the image information, the second time data, the student image data, the student name data, the teacher image data and the teacher name data to obtain a teacher lip correct signal, a teacher head state correct signal, a student lip error signal and a student head state error signal, and transmitting these signals to the judgment unit;
the judgment unit is used for performing a judgment operation on the teacher lip correct signal, the teacher head state correct signal, the student lip error signal and the student head state error signal to obtain a reminding signal, and transmitting the reminding signal to the display screen;
the display screen is used for displaying a reminder in response to the reminding signal, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name.
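By way of illustration only, the data flow between the units described above can be pictured with the following minimal Python sketch; every function, field and value in it is an illustrative assumption rather than the disclosed implementation.

```python
# Hedged end-to-end sketch of the unit chain described above
# (camera -> identification unit -> analysis unit -> judgment unit -> display screen).
# All names, data structures and thresholds are illustrative assumptions.
from typing import Dict

def identification_unit(frame: Dict, records: Dict) -> Dict:
    """Keep only the fields that could be matched against the stored teaching records."""
    return {k: frame[k] for k in ("student_name", "teacher_name", "time") if k in frame}

def analysis_unit(labeled: Dict) -> Dict[str, bool]:
    """Stand-in for the lip/head analysis; the real logic is sketched with steps K1-K4 below."""
    return {"teacher_ok": True, "student_error": labeled.get("student_name") == "Student A"}

def judgment_unit(signals: Dict[str, bool]) -> bool:
    """Generate a reminding signal when the teacher is teaching but a student is not following."""
    return signals["teacher_ok"] and signals["student_error"]

def display_screen(remind: bool, student_name: str) -> None:
    if remind:
        print("reminding signal generated for", student_name)

frame = {"student_name": "Student A", "teacher_name": "Teacher B", "time": 12.0}
labeled = identification_unit(frame, records={})
display_screen(judgment_unit(analysis_unit(labeled)), frame["student_name"])
```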
As a further improvement of the invention: the specific operation process of the comparison and identification operation is as follows:
step one: acquire the teaching record information; label the videos of the students therein as student image data, denoted XTi, i = 1, 2, 3, ..., n1; label the names of the students therein as student name data, denoted XMi, i = 1, 2, 3, ..., n1; label the videos of the teachers therein as teacher image data, denoted JTi, i = 1, 2, 3, ..., n1; label the names of the teachers therein as teacher name data, denoted JMi, i = 1, 2, 3, ..., n1; label the desk images therein as desk image data, denoted KTi, i = 1, 2, 3, ..., n1; and label the podium images therein as podium image data, denoted TTi, i = 1, 2, 3, ..., n1;
step two: acquire the image information and identify and match it against the teacher image data, specifically: when the image information matches the teacher image data, judge that a teacher image exists in the image, label that image as teacher image data and extract the corresponding teacher name data; when the image information does not match the teacher image data, judge that no teacher image exists in the image and do not label it; then acquire the image information and identify and match it against the student image data, specifically: when the image information matches the student image data, judge that a student image exists in the image, label that image as student image data and extract the corresponding student name data; when the image information does not match the student image data, judge that no student image exists in the image and do not label it;
step three: acquire the teacher image data and mark the time in the teacher image data as first time data; acquire the student image data and mark the time in the student image data as second time data.
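To make steps one to three concrete, the following Python sketch labels the stored teaching records and tags a captured frame with first or second time data when a teacher or student is matched; the record layout, the matches() stand-in and the toy usage values are assumptions for illustration only.

```python
# Hedged sketch of the comparison-and-identification operation (steps one to three).
# The record layout and the matches() stand-in are assumptions, not the patented method.
from dataclasses import dataclass
from typing import List

@dataclass
class TeachingRecord:
    student_videos: List   # XTi, i = 1..n1
    student_names: List    # XMi
    teacher_videos: List   # JTi
    teacher_names: List    # JMi
    desk_images: List      # KTi
    podium_images: List    # TTi

def matches(frame, reference) -> bool:
    """Stand-in for whatever image-matching method the identification unit uses."""
    return frame == reference   # placeholder comparison

def identify(frame, capture_time: float, record: TeachingRecord) -> dict:
    labels: dict = {}
    # Step two: match against teacher image data, then against student image data.
    for video, name in zip(record.teacher_videos, record.teacher_names):
        if matches(frame, video):
            labels["teacher_image"], labels["teacher_name"] = frame, name
            labels["first_time"] = capture_time    # step three: first time data
            break
    for video, name in zip(record.student_videos, record.student_names):
        if matches(frame, video):
            labels["student_image"], labels["student_name"] = frame, name
            labels["second_time"] = capture_time   # step three: second time data
            break
    return labels

# Usage with toy string "frames": the student frame is matched and time-tagged.
record = TeachingRecord(["s1"], ["Student A"], ["t1"], ["Teacher B"], ["desk"], ["podium"])
print(identify("s1", 10.0, record))   # {'student_image': 's1', 'student_name': 'Student A', 'second_time': 10.0}
```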
As a further improvement of the invention: the specific operation process of the analysis operation is as follows:
k1: acquire the image information and match it with the podium image data, specifically: when the image information matches the podium image data, judge that a podium exists in the image and label the podium in the image as a podium image; when the image information does not match the podium image data, judge that no podium exists in the image and do not label it; then acquire the image information and match it with the desk image data, specifically: when the image information matches the desk image data, judge that a desk exists in the image and label the desk in the image as a desk image;
k2: acquire the student name data and extract the corresponding student image data according to the student name data; establish a virtual spatial rectangular coordinate system and calibrate the position coordinates of the student image data and the desk image data in it, calibrating the position of the student's head as the student head coordinate and the desk corresponding to that student as the desk coordinate; substitute the student head coordinate and the desk coordinate into a difference calculation formula to obtain a head sag difference value, the difference calculation formula being: calculate the distance between the student head coordinate and the desk coordinate according to the Pythagorean theorem; set a preset head sag difference value and compare it with the head sag difference value, and record a head deviation whenever the head sag difference value is smaller than the preset head sag difference value; count the number of head deviations, divide it by the total number of head sag difference calculations to obtain a ratio, and set a preset ratio; when the ratio is larger than the preset ratio, calibrate the head state as poor and generate a student head state error signal, the calculation formula of the ratio being: ratio = number of head deviations / total number of head sag difference calculations;
k3: mark the upper lip as a plurality of corner points SZl(Xl, Yl), l = 1, 2, 3, ..., n2, and mark the lower lip as a plurality of corner points XZv(Xv, Yv), v = 1, 2, 3, ..., n3; because the upper lip and the lower lip are arranged symmetrically, there is a difference on the Y axis; when the X-axis values are the same, calculate the difference between the two Y-axis values and mark it as the lip distance value; acquire the time point corresponding to each lip distance value and plot them together in a rectangular coordinate system, taking the time points as the X axis and the lip distance values as the Y axis; connect the coordinates of the lip distance values at the different time points and judge the type of the connecting line; when the connecting line is identified as a straight line, determine that the student's lips are not moving and generate a student lip error signal; the rectangular coordinate system is a preset coordinate system, identifying a straight line is prior art and the same conclusion can also be obtained by comparing and identifying the lines, and the upper lip and the lower lip are identified in the same way as the straight line;
k4: according to the judgment method in K2, determine the teacher's head deviation ratio; set a preset head deviation ratio and compare it with the head deviation ratio, and when the head deviation ratio is larger than the preset head deviation ratio, calibrate the head state as normal and generate a teacher head state correct signal; according to the student lip identification method in K3, obtain the Y-axis difference values of the teacher's lips, connect the coordinates in the rectangular coordinate system and judge the type of the connecting line, and when the connecting line is identified as a wavy line, judge that the teacher's lips are moving and generate a teacher lip correct signal.
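A minimal numerical sketch of the K2 head-sag test and the K3 lip-distance test is given below; the thresholds, coordinates and the straight-line check are assumptions chosen only to make the two rules executable.

```python
# Hedged sketch of the K2 head-sag ratio and the K3 straight-line lip test.
# All thresholds and coordinates are assumptions for illustration.
import math

def head_sag_difference(head_xyz, desk_xyz) -> float:
    """K2: distance between head and desk coordinates via the Pythagorean theorem."""
    return math.dist(head_xyz, desk_xyz)

def student_head_state_error(head_coords, desk_coords,
                             sag_preset=0.25, ratio_preset=0.3) -> bool:
    """K2: error when the share of frames with head sag difference below the preset
    (a 'head deviation') exceeds the preset ratio."""
    deviations = sum(1 for h, d in zip(head_coords, desk_coords)
                     if head_sag_difference(h, d) < sag_preset)
    ratio = deviations / len(head_coords)   # ratio = head deviations / total calculations
    return ratio > ratio_preset

def student_lip_error(times, lip_distances, tolerance=1e-3) -> bool:
    """K3: if the lip-distance-vs-time points lie on one straight line,
    the lips are judged not to move (a talking mouth gives a wavy line)."""
    (t0, y0), (t1, y1) = (times[0], lip_distances[0]), (times[-1], lip_distances[-1])
    slope = (y1 - y0) / (t1 - t0)
    return all(abs(y - (y0 + slope * (t - t0))) <= tolerance
               for t, y in zip(times, lip_distances))

# Toy usage: a head that stays near the desk and a constant lip distance both raise errors.
heads = [(0.1, 0.2, 0.4), (0.1, 0.2, 0.1), (0.1, 0.2, 0.1)]
desks = [(0.1, 0.2, 0.0)] * 3
print(student_head_state_error(heads, desks))               # True
print(student_lip_error([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # True: straight line, lips still
```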
As a further improvement of the invention: the specific operation process of the judgment operation is as follows:
h1: acquire the teacher lip correct signal and the teacher head state correct signal and identify them; when the two signals are identified at the same time, automatically acquire the corresponding time point from the image data and calibrate it as the teaching time point;
h2: set a preset reaction time and add it to the teaching time point to obtain the learning time point, the calculation formula of the learning time point being: learning time point = teaching time point + preset reaction time;
h3: extract the signals generated at the learning time point; when a student lip error signal and a student head state error signal are identified at the same time, judge that the student's learning state is poor and generate a reminding signal; when a student lip error signal and a student head state error signal are not identified at the same time, judge that the student is in a learning state.
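The judgment steps H1 to H3 reduce to a time shift and a joint signal test, as in the short sketch below; the half-second reaction preset and the set-based signal container are assumptions for illustration.

```python
# Hedged sketch of the H1-H3 judgment operation; presets and names are assumptions.

def learning_time_point(teaching_time_point: float, reaction_preset: float = 0.5) -> float:
    """H2: learning time point = teaching time point + preset reaction time."""
    return teaching_time_point + reaction_preset

def needs_reminder(signals_at_learning_time: set) -> bool:
    """H3: remind only when the student lip error signal and the student head state
    error signal are present at the same time."""
    return {"student_lip_error", "student_head_state_error"} <= signals_at_learning_time

# H1: suppose both teacher signals were identified at t = 12.0 s, so 12.0 s is a teaching time point.
t_learn = learning_time_point(12.0)   # 12.5 s with the assumed 0.5 s reaction preset
print(t_learn)
print(needs_reminder({"student_lip_error", "student_head_state_error"}))  # True -> reminding signal
print(needs_reminder({"student_lip_error"}))                              # False -> student is learning
```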
As a further improvement of the invention: the display screen is a teaching projection screen and is arranged on one side of the blackboard.
The invention has the beneficial effects that:
(1) The camera acquires images of the in-class states of students and teachers in real time, automatically obtains the image information and transmits it to the identification unit; the database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification, so that the relevant data in the image information are identified quickly and labeled separately, saving identification time and improving working efficiency;
(2) Through the arrangement of the analysis unit, the first time data, the desk image data, the podium image data, the image information, the second time data, the student image data, the student name data, the teacher image data and the teacher name data are analyzed to obtain a teacher lip correct signal, a teacher head state correct signal, a student lip error signal and a student head state error signal, which are transmitted to the judgment unit; the judgment unit performs a judgment operation on these signals, which improves the accuracy of the data analysis, increases the persuasiveness of the data, judges the analysis result quickly, saves the time consumed by manual judgment and improves working efficiency.
(3) The display screen displays a reminder in response to the reminding signal, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name; the students are thus reminded to adjust their learning state in time, which improves the quality and efficiency of classroom teaching.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the invention relates to an intelligent teaching auxiliary system for foreign language teaching based on internet of things, which comprises a display screen, a camera, a database, an identification unit, an analysis unit and a judgment unit;
the camera is used for acquiring images of the in-class states of students and teachers in real time, automatically obtaining the image information and transmitting the image information to the identification unit;
the database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification, the specific operation process of the comparison and identification operation being as follows:
step one: acquire the teaching record information; label the videos of the students therein as student image data, denoted XTi, i = 1, 2, 3, ..., n1; label the names of the students therein as student name data, denoted XMi, i = 1, 2, 3, ..., n1; label the videos of the teachers therein as teacher image data, denoted JTi, i = 1, 2, 3, ..., n1; label the names of the teachers therein as teacher name data, denoted JMi, i = 1, 2, 3, ..., n1; label the desk images therein as desk image data, denoted KTi, i = 1, 2, 3, ..., n1; and label the podium images therein as podium image data, denoted TTi, i = 1, 2, 3, ..., n1;
step two: acquire the image information and identify and match it against the teacher image data, specifically: when the image information matches the teacher image data, judge that a teacher image exists in the image, label that image as teacher image data and extract the corresponding teacher name data; when the image information does not match the teacher image data, judge that no teacher image exists in the image and do not label it; then acquire the image information and identify and match it against the student image data, specifically: when the image information matches the student image data, judge that a student image exists in the image, label that image as student image data and extract the corresponding student name data; when the image information does not match the student image data, judge that no student image exists in the image and do not label it;
step three: acquire the teacher image data and mark the time in the teacher image data as first time data; acquire the student image data and mark the time in the student image data as second time data;
step four: extract the first time data, the desk image data, the podium image data, the image information, the second time data, the student image data, the student name data, the teacher image data and the teacher name data, and transmit them to the analysis unit;
the analysis unit performs an analysis operation on the first time data, the desk image data, the podium image data, the image information, the second time data, the student image data, the student name data, the teacher image data and the teacher name data; the specific operation process of the analysis operation is as follows:
k1: acquire the image information and match it with the podium image data, specifically: when the image information matches the podium image data, judge that a podium exists in the image and label the podium in the image as a podium image; when the image information does not match the podium image data, judge that no podium exists in the image and do not label it; then acquire the image information and match it with the desk image data, specifically: when the image information matches the desk image data, judge that a desk exists in the image and label the desk in the image as a desk image;
k2: acquire the student name data and extract the corresponding student image data according to the student name data; establish a virtual spatial rectangular coordinate system and calibrate the position coordinates of the student image data and the desk image data in it, calibrating the position of the student's head as the student head coordinate and the desk corresponding to that student as the desk coordinate; substitute the student head coordinate and the desk coordinate into a difference calculation formula to obtain a head sag difference value, the difference calculation formula being: calculate the distance between the student head coordinate and the desk coordinate according to the Pythagorean theorem; set a preset head sag difference value and compare it with the head sag difference value, and record a head deviation whenever the head sag difference value is smaller than the preset head sag difference value; count the number of head deviations, divide it by the total number of head sag difference calculations to obtain a ratio, and set a preset ratio; when the ratio is larger than the preset ratio, calibrate the head state as poor and generate a student head state error signal, the calculation formula of the ratio being: ratio = number of head deviations / total number of head sag difference calculations;
k3: mark the upper lip as a plurality of corner points SZl(Xl, Yl), l = 1, 2, 3, ..., n2, and mark the lower lip as a plurality of corner points XZv(Xv, Yv), v = 1, 2, 3, ..., n3; because the upper lip and the lower lip are arranged symmetrically, there is a difference on the Y axis; when the X-axis values are the same, calculate the difference between the two Y-axis values and mark it as the lip distance value; acquire the time point corresponding to each lip distance value and plot them together in a rectangular coordinate system, taking the time points as the X axis and the lip distance values as the Y axis; connect the coordinates of the lip distance values at the different time points and judge the type of the connecting line; when the connecting line is identified as a straight line, determine that the student's lips are not moving and generate a student lip error signal; the rectangular coordinate system is a preset coordinate system, identifying a straight line is prior art and the same conclusion can also be obtained by comparing and identifying the lines, and the upper lip and the lower lip are identified in the same way as the straight line;
k4: according to the judgment method in K2, determine the teacher's head deviation ratio; set a preset head deviation ratio and compare it with the head deviation ratio, and when the head deviation ratio is larger than the preset head deviation ratio, calibrate the head state as normal and generate a teacher head state correct signal; according to the student lip identification method in K3, obtain the Y-axis difference values of the teacher's lips, connect the coordinates in the rectangular coordinate system and judge the type of the connecting line, and when the connecting line is identified as a wavy line, judge that the teacher's lips are moving and generate a teacher lip correct signal;
k5: transmitting the teacher lip correct signal, the teacher head state correct signal, the student lip error signal and the student head state error signal to a judging unit;
the judgment unit performs a judgment operation on the teacher lip correct signal, the teacher head state correct signal, the student lip error signal and the student head state error signal; the specific operation process of the judgment operation is as follows:
h1: acquire the teacher lip correct signal and the teacher head state correct signal and identify them; when the two signals are identified at the same time, automatically acquire the corresponding time point from the image data and calibrate it as the teaching time point;
h2: set a preset reaction time and add it to the teaching time point to obtain the learning time point, the calculation formula of the learning time point being: learning time point = teaching time point + preset reaction time;
h3: extract the signals generated at the learning time point; when a student lip error signal and a student head state error signal are identified at the same time, judge that the student's learning state is poor and generate a reminding signal; when a student lip error signal and a student head state error signal are not identified at the same time, judge that the student is in a learning state;
h4: transmitting the reminding signal to a display screen;
the display screen displays the reminder, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name; the display screen is a teaching projection screen and is installed on one side of the blackboard.
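For the display step, a single string operation is enough to build the reminding subtitle; the English wording of the appended phrase is an assumption, standing in for the fixed character group of the original.

```python
# Hedged sketch of the reminding-subtitle generation; the exact wording is an assumption.

def reminding_subtitle(student_name: str) -> str:
    """Append the fixed reminder phrase after the student name data."""
    return f"{student_name}, pay attention and listen carefully"

print(reminding_subtitle("Student A"))   # shown on the teaching projection screen beside the blackboard
```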
In operation, the camera acquires images of the in-class states of students and teachers in real time, automatically obtains the image information and transmits it to the identification unit; the database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification to obtain first time data, desk image data, podium image data, image information, second time data, student image data, student name data, teacher image data and teacher name data, and transmits them to the analysis unit; the analysis unit analyzes these data to obtain a teacher lip correct signal, a teacher head state correct signal, a student lip error signal and a student head state error signal and transmits them to the judgment unit; the judgment unit performs a judgment operation on these signals to obtain a reminding signal and transmits it to the display screen; the display screen displays a reminder in response to the reminding signal, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (5)

1. An intelligent teaching auxiliary system for foreign language teaching based on the Internet of things, characterized by comprising a display screen, a camera, a database, an identification unit, an analysis unit and a judgment unit;
the camera is used for acquiring images of the in-class states of students and teachers in real time, automatically obtaining the image information and transmitting the image information to the identification unit;
the database stores the teaching record information of students and teachers; the identification unit acquires the teaching record information from the database and compares it with the image information for identification to obtain first time data, desk image data, podium image data, image information, second time data, student image data, student name data, teacher image data and teacher name data, and transmits them to the analysis unit;
the analysis unit is used for analyzing the first time data, the desk image data, the podium image data, the image information, the second time data, the student image data, the student name data, the teacher image data and the teacher name data to obtain a teacher lip correct signal, a teacher head state correct signal, a student lip error signal and a student head state error signal, and transmitting these signals to the judgment unit;
the judgment unit is used for performing a judgment operation on the teacher lip correct signal, the teacher head state correct signal, the student lip error signal and the student head state error signal to obtain a reminding signal, and transmitting the reminding signal to the display screen;
the display screen is used for displaying a reminder in response to the reminding signal, specifically: when the display screen receives the reminding signal, the corresponding student name data are extracted and a corresponding reminding subtitle is generated from the student name data, namely the phrase 'pay attention and listen carefully' is appended after the student name.
2. The intelligent teaching assistance system for foreign language teaching based on the internet of things as claimed in claim 1, wherein the specific operation process of the comparison and identification operation is as follows:
step one: acquire the teaching record information; label the videos of the students therein as student image data, denoted XTi, i = 1, 2, 3, ..., n1; label the names of the students therein as student name data, denoted XMi, i = 1, 2, 3, ..., n1; label the videos of the teachers therein as teacher image data, denoted JTi, i = 1, 2, 3, ..., n1; label the names of the teachers therein as teacher name data, denoted JMi, i = 1, 2, 3, ..., n1; label the desk images therein as desk image data, denoted KTi, i = 1, 2, 3, ..., n1; and label the podium images therein as podium image data, denoted TTi, i = 1, 2, 3, ..., n1;
step two: acquire the image information and identify and match it against the teacher image data, specifically: when the image information matches the teacher image data, judge that a teacher image exists in the image, label that image as teacher image data and extract the corresponding teacher name data; when the image information does not match the teacher image data, judge that no teacher image exists in the image and do not label it; then acquire the image information and identify and match it against the student image data, specifically: when the image information matches the student image data, judge that a student image exists in the image, label that image as student image data and extract the corresponding student name data; when the image information does not match the student image data, judge that no student image exists in the image and do not label it;
step three: acquire the teacher image data and mark the time in the teacher image data as first time data; acquire the student image data and mark the time in the student image data as second time data.
3. The intelligent teaching assistance system for foreign language teaching based on the internet of things as claimed in claim 1, wherein the specific operation process of the analysis operation is:
k1: acquire the image information and match it with the podium image data, specifically: when the image information matches the podium image data, judge that a podium exists in the image and label the podium in the image as a podium image; when the image information does not match the podium image data, judge that no podium exists in the image and do not label it; then acquire the image information and match it with the desk image data, specifically: when the image information matches the desk image data, judge that a desk exists in the image and label the desk in the image as a desk image;
k2: acquire the student name data and extract the corresponding student image data according to the student name data; establish a virtual spatial rectangular coordinate system and calibrate the position coordinates of the student image data and the desk image data in it, calibrating the position of the student's head as the student head coordinate and the desk corresponding to that student as the desk coordinate; substitute the student head coordinate and the desk coordinate into a difference calculation formula to obtain a head sag difference value, the difference calculation formula being: calculate the distance between the student head coordinate and the desk coordinate according to the Pythagorean theorem; set a preset head sag difference value and compare it with the head sag difference value, and record a head deviation whenever the head sag difference value is smaller than the preset head sag difference value; count the number of head deviations, divide it by the total number of head sag difference calculations to obtain a ratio, and set a preset ratio; when the ratio is larger than the preset ratio, calibrate the head state as poor and generate a student head state error signal, the calculation formula of the ratio being: ratio = number of head deviations / total number of head sag difference calculations;
k3: mark the upper lip as a plurality of corner points SZl(Xl, Yl), l = 1, 2, 3, ..., n2, and mark the lower lip as a plurality of corner points XZv(Xv, Yv), v = 1, 2, 3, ..., n3; because the upper lip and the lower lip are arranged symmetrically, there is a difference on the Y axis; when the X-axis values are the same, calculate the difference between the two Y-axis values and mark it as the lip distance value; acquire the time point corresponding to each lip distance value and plot them together in a rectangular coordinate system, taking the time points as the X axis and the lip distance values as the Y axis; connect the coordinates of the lip distance values at the different time points and judge the type of the connecting line; when the connecting line is identified as a straight line, determine that the student's lips are not moving and generate a student lip error signal; the rectangular coordinate system is a preset coordinate system, identifying a straight line is prior art and the same conclusion can also be obtained by comparing and identifying the lines, and the upper lip and the lower lip are identified in the same way as the straight line;
k4: according to the judgment method in K2, determine the teacher's head deviation ratio; set a preset head deviation ratio and compare it with the head deviation ratio, and when the head deviation ratio is larger than the preset head deviation ratio, calibrate the head state as normal and generate a teacher head state correct signal; according to the student lip identification method in K3, obtain the Y-axis difference values of the teacher's lips, connect the coordinates in the rectangular coordinate system and judge the type of the connecting line, and when the connecting line is identified as a wavy line, judge that the teacher's lips are moving and generate a teacher lip correct signal.
4. The intelligent teaching assistance system for foreign language teaching based on the internet of things as claimed in claim 1, wherein the specific operation process of the determination operation is:
h1: acquire the teacher lip correct signal and the teacher head state correct signal and identify them; when the two signals are identified at the same time, automatically acquire the corresponding time point from the image data and calibrate it as the teaching time point;
h2: set a preset reaction time and add it to the teaching time point to obtain the learning time point, the calculation formula of the learning time point being: learning time point = teaching time point + preset reaction time;
h3: extract the signals generated at the learning time point; when a student lip error signal and a student head state error signal are identified at the same time, judge that the student's learning state is poor and generate a reminding signal; when a student lip error signal and a student head state error signal are not identified at the same time, judge that the student is in a learning state.
5. The intelligent teaching assistance system for foreign language teaching based on the internet of things as claimed in claim 1, wherein the display screen is a teaching projection screen, and the display screen is installed at one side of a blackboard.
CN202010935320.1A 2020-09-08 2020-09-08 Intelligent teaching auxiliary system for foreign language teaching based on Internet of things Withdrawn CN112052800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010935320.1A CN112052800A (en) 2020-09-08 2020-09-08 Intelligent teaching auxiliary system for foreign language teaching based on Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010935320.1A CN112052800A (en) 2020-09-08 2020-09-08 Intelligent teaching auxiliary system for foreign language teaching based on Internet of things

Publications (1)

Publication Number Publication Date
CN112052800A true CN112052800A (en) 2020-12-08

Family

ID=73610273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010935320.1A Withdrawn CN112052800A (en) 2020-09-08 2020-09-08 Intelligent teaching auxiliary system for foreign language teaching based on Internet of things

Country Status (1)

Country Link
CN (1) CN112052800A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887656A (en) * 2021-01-26 2021-06-01 黄旭诗 Multi-person online conference system based on virtual reality
CN113570484A (en) * 2021-09-26 2021-10-29 广州华赛数据服务有限责任公司 Online primary school education management system and method based on big data
CN117557428A (en) * 2024-01-11 2024-02-13 深圳市华视圣电子科技有限公司 Teaching assistance method and system based on AI vision
CN117557428B (en) * 2024-01-11 2024-05-07 深圳市华视圣电子科技有限公司 Teaching assistance method and system based on AI vision

Similar Documents

Publication Publication Date Title
CN112052800A (en) Intelligent teaching auxiliary system for foreign language teaching based on Internet of things
CN110991381B (en) Real-time classroom student status analysis and indication reminding system and method based on behavior and voice intelligent recognition
CN109711371A (en) A kind of Estimating System of Classroom Teaching based on human facial expression recognition
JP2010107787A (en) Evaluation analysis system
CN110059978B (en) Teacher evaluation system based on cloud computing auxiliary teaching evaluation
CN106373444A (en) English teaching tool-equipped multifunctional English classroom
CN108876195A (en) A kind of intelligentized teachers ' teaching quality evaluating system
JP2020016871A (en) Information processing apparatus and program
CN113537801B (en) Blackboard writing processing method, blackboard writing processing device, terminal and storage medium
CN111444389A (en) Conference video analysis method and system based on target detection
Chen et al. WristEye: Wrist-wearable devices and a system for supporting elderly computer learners
CN108304779B (en) Intelligent regulation and control method for student education management
CN108428073A (en) A kind of intelligent evaluation system for teachers ' teaching quality
KR20200056760A (en) System for evaluating educators and improving the educational achievement of the trainees using artificial intelligence and method thereof
CN115600922A (en) Multidimensional intelligent teaching quality assessment method and system
CN108985290A (en) A kind of intelligent check system for teachers ' teaching quality
CN111667128B (en) Teaching quality assessment method, device and system
CN115689340A (en) Classroom quality monitoring system based on colorful dynamic human face features
CN102903066A (en) System and method for displaying growth recording and quality early warning based on different terminals
CN113570484B (en) Online primary school education management system and method based on big data
CN114549253A (en) Online teaching system for evaluating lecture listening state in real time
JP7427906B2 (en) Information processing device, control method and program
US20210375149A1 (en) System and method for proficiency assessment and remedial practice
TWM600908U (en) Learning state improvement management system
CN111640050A (en) Intelligent teaching system suitable for English teaching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201208