CN110033659B - Remote teaching interaction method, server, terminal and system - Google Patents

Info

Publication number: CN110033659B
Application number: CN201910341620.4A
Authority: CN (China)
Prior art keywords: teaching, teacher, information, terminal, expression
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110033659A
Inventors: 谭星, 舒景辰, 梁光, 张岱, 王正博
Current assignee: Beijing Dami Technology Co Ltd
Original assignee: Beijing Dami Technology Co Ltd
Application filed by Beijing Dami Technology Co Ltd
Priority to CN201910341620.4A
Publication of CN110033659A
Priority to PCT/CN2020/081095 (WO2020215966A1)
Application granted
Publication of CN110033659B

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G09B5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses a remote teaching interaction method, a server, a terminal and a system in the technical field of multimedia. The method comprises the following steps: receiving a first teaching streaming media video from a first terminal; extracting a first video frame from the first teaching video data and obtaining first expression information according to the first video frame; converting the first teaching audio data to obtain first voice text information and obtaining first semantic information according to the first voice text information; obtaining display content according to the first expression information and the first semantic information; and pushing the display content to a second terminal for display. From the teaching streaming media video transmitted by the first terminal, the server can obtain the teacher's display content for the students and show it on the student terminal, or obtain the students' display content for the teacher and show it on the teacher terminal. The teacher and the students can thus each view, on their own terminals, the interactive information carried by the real-time display content, realizing timely interaction between students and teacher.

Description

Remote teaching interaction method, server, terminal and system
Technical Field
The application relates to the technical field of multimedia, in particular to a remote teaching interaction method, a server, a terminal and a system.
Background
With the development of networks and technology, learning has become increasingly diversified and convenient, and remote teaching has become an important mode of learning.
In the prior art, students and teachers log in to their own terminals. The teacher terminal transmits the teacher's operations, demonstration images and audio to the student terminals over the network; the student terminals collect the students' feedback and transmit their operations, images and audio back to the teacher terminal, thereby realizing teaching and interaction between students and teachers.
However, because teachers and students in remote teaching communicate over a distance, interaction between them may be insufficient, which can lead to poor learning efficiency for the students.
Disclosure of Invention
The main purpose of the present application is to provide a remote teaching interaction method, server, terminal and system that solve the prior-art problem of insufficient teacher-student interaction in remote teaching and the resulting poor learning efficiency of students.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a remote teaching interaction method, where the method includes:
receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data;
extracting a first video frame in the first teaching video data, and obtaining first expression information according to the first video frame;
converting the first teaching audio data to obtain first voice text information, and obtaining first semantic information according to the first voice text information;
obtaining display content according to the first expression information and the first semantic information;
and pushing the display content to a second terminal for displaying.
In a second aspect, an embodiment of the present application provides a remote teaching interaction server, where the server includes:
the video receiving module is used for receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data;
the expression acquisition module is used for extracting a first video frame in the first teaching video data and obtaining first expression information according to the first video frame;
the semantic acquisition module is used for converting the first teaching audio data to obtain first voice text information and obtaining first semantic information according to the first voice text information;
the display content acquisition module is used for acquiring display content according to the first expression information and the first semantic information;
and the display content pushing module is used for pushing the display content to a second terminal for displaying.
In a third aspect, an embodiment of the present application provides a remote teaching interaction method, where the method includes:
receiving display content from a server, wherein the display content is obtained by the server by deriving first expression information and first semantic information from a first teaching streaming media video sent by a first terminal and then combining the first expression information and the first semantic information;
and displaying the display content on a user interface.
In a fourth aspect, an embodiment of the present application provides a remote teaching interactive terminal, where the terminal includes:
the display content receiving module is used for receiving display content from a server, wherein the display content is obtained by the server by deriving first expression information and first semantic information from a first teaching streaming media video sent by a first terminal and then combining the first expression information and the first semantic information;
and the display content display module is used for displaying the display content on a user interface.
In a fifth aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions adapted to be loaded by a processor to execute the steps of the method described above.
In a sixth aspect, an embodiment of the present application provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the method described above are implemented.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
the application provides a remote teaching interaction method, a server, a terminal and a system, wherein the method comprises the following steps: the method comprises the steps of receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data, extracting a first video frame in the first teaching video data, obtaining first expression information according to the first video frame, converting the first teaching audio data to obtain first voice text information, obtaining first semantic information according to the first voice text information, obtaining display content according to the first expression information and the first semantic information, and pushing the display content to a second terminal for displaying. Because the server can obtain expression and semantic information in the teaching streaming media video transmitted according to the first terminal, the teacher's display content to the students is obtained through expression and semantic information, and the display content is displayed on the student terminal, or the display content of the students to the teacher is obtained, and the display content is displayed on the teacher terminal, the teacher and the students can check the interactive information represented by the real-time display content on the terminals respectively, and timely interaction between the students and the teacher is realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of an online teaching interaction system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a remote teaching interaction method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating a teaching process of a remote teaching interaction method according to an embodiment of the present application;
fig. 4 is a course schedule information diagram in an online teaching interaction method according to an embodiment of the present application;
fig. 5 is another schematic flowchart of a remote teaching interaction method according to an embodiment of the present application;
fig. 6 is a schematic flowchart illustrating a first expression obtaining process in a remote teaching interaction method according to an embodiment of the present application;
fig. 7 is a schematic flowchart illustrating a first voice obtaining process in a remote teaching interaction method according to an embodiment of the present application;
fig. 8 is another schematic flowchart of a remote teaching interaction method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application;
fig. 10 is another schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a first expression obtaining module in a remote teaching interaction server according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a first semantic acquisition module in a remote teaching interaction server according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application;
fig. 14 is a schematic flowchart of a remote teaching interaction method according to an embodiment of the present application;
fig. 15 is a schematic display diagram of display contents in a remote teaching interaction method according to an embodiment of the present application;
fig. 16 is another schematic display diagram of display content in a remote teaching interaction method according to an embodiment of the present application;
fig. 17 is another schematic display diagram of display content in a remote teaching interaction method according to an embodiment of the present application;
fig. 18 is another schematic display diagram of display content in a remote teaching interaction method according to an embodiment of the present application;
fig. 19 is a schematic flowchart illustrating student feedback in a remote teaching interaction method according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a remote teaching interactive terminal according to an embodiment of the present application;
fig. 21 is a system interaction diagram of a remote teaching interaction system according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 23 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the purpose, features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art in light of the specific case. In addition, in the description of this application, "plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
FIG. 1 illustrates an exemplary system architecture 100 that may be applied to the remote instructional interaction method of the present application.
As shown in fig. 1, the system architecture 100 may include a terminal device 101, a terminal device 102, a network 103, and a server 104, where terminal device 101 is the first terminal and terminal device 102 is the second terminal, or terminal device 102 is the first terminal and terminal device 101 is the second terminal. Network 103 is the medium used to provide communication links between the terminal devices 101 and 102. Network 103 may include various types of wired or wireless communication links; for example, a wired communication link may be an optical fiber, a twisted pair, or a coaxial cable, and a wireless communication link may be a Bluetooth link, a Wireless Fidelity (Wi-Fi) link, a microwave link, or the like.
A user may use the terminal devices 101 and 102 to interact with the server 104 through the network 103, to receive messages from the server 104 or to send messages to it. Various communication client applications may be installed on the terminal devices 101 and 102, for example: video recording applications, video playing applications, voice acquisition applications, voice interaction applications, search applications, instant messaging tools, mailbox clients, social platform software, and the like.
The terminal devices 101 and 102 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module, which is not particularly limited herein.
When the terminal devices 101 and 102 are hardware, a display device may also be installed on them. The display device may be any device capable of displaying, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink screen, a liquid crystal display (LCD), or a plasma display panel (PDP). Using the display device on the terminal devices 101 and 102, a user can view displayed text, pictures, videos and other information.
The server 104 may be a server that provides various services. The server 104 may be hardware or software. When the server 104 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server 104 is software, it may be implemented as a plurality of software or software modules (for example, for providing distributed services), or may be implemented as a single software or software module, and is not limited in particular herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks, and servers may be used, as desired for implementation.
It can be understood that, in this embodiment of the present application, the first terminal or the second terminal may be a terminal used by a teacher, a student, or a parent, and the teaching in the embodiment may include activities such as lecturing, learning, and accompanied reading. The display content in the embodiment may therefore be a teacher's evaluation of a student's performance in a course, a student's evaluation of or feedback on a teacher's course, or a parent's feedback on a teacher's course. Since the data processing is similar whichever user operates the terminal, for convenience of description the remote teaching interaction method, server, terminal and system are described below with the first terminal taken as the teacher terminal and the second terminal as the student terminal.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 2, the method includes:
s201, receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data.
The first terminal can be a terminal used by a teacher in remote teaching. The teacher can create and log in to a teacher account on the first terminal through a designated application or web page. The teacher account information is stored on the server and includes the teacher's personal information, course schedule information, teacher identity information and other related information. Besides basic personal information, the teacher identity information also includes information such as the language and pronunciation habits of the teacher's location and the teacher's pronunciation in various languages.
The first teaching streaming media video may be the video of the teacher's lesson collected by the first terminal during remote teaching and sent to the server in real time. To achieve real-time video transmission between the first terminal and the server, the first terminal collects the lesson video and transmits it to the server as streaming media. Specifically, the first terminal captures the lesson video over a preset time period, compresses the media data for that period, and promptly sends the compressed video to the server; the server decodes the compressed video data to recover the lesson video for that period. The lesson video is thus transmitted over the internet as a sequence of segments, and because the preset time period is extremely short, the video transmission between the first terminal and the server can be regarded as real time.
Further, the first teaching streaming media video comprises first teaching video data and first teaching audio data and is captured by a video acquisition module on the first terminal. Specifically, the acquisition module comprises a camera, which collects the first teaching video data while the teacher teaches, and a sound card, which collects the first teaching audio data. After collecting both, the first terminal packages them and transmits them to the server as streaming media.
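As a rough illustration of the segmented capture-compress-send scheme described above, the following Python sketch packs one capture window's video and audio bytes into a compressed segment and unpacks it on the server side. The length-prefixed layout, the function names and the use of zlib are illustrative assumptions, not the patent's actual container or codec:

```python
import zlib

def package_segment(video_bytes: bytes, audio_bytes: bytes) -> bytes:
    # Length-prefix the video data so the server can split the two
    # streams apart again, then compress the whole segment for sending.
    header = len(video_bytes).to_bytes(4, "big")
    return zlib.compress(header + video_bytes + audio_bytes)

def unpack_segment(segment: bytes) -> tuple[bytes, bytes]:
    # Server side: decompress one segment and split it back into
    # video data and audio data.
    raw = zlib.decompress(segment)
    vlen = int.from_bytes(raw[:4], "big")
    return raw[4:4 + vlen], raw[4 + vlen:]
```

A production system would instead use a streaming container and codecs (for example RTMP/FLV carrying H.264 and AAC); the sketch only shows the per-segment round trip between terminal and server.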
S202, extracting a first video frame in the first teaching video data, and obtaining first expression information according to the first video frame.
After receiving the first teaching streaming media video transmitted by the first terminal, the server separates the first teaching video data and the first teaching audio data from it. The first teaching video data can be divided into a number of first video frames. The server extracts the first video frames and further analyses their key frames: static images that contain the teacher's face during remote teaching. By applying image recognition to these key frames, the teacher's expression information can be obtained.
Key frames are selected according to the intensity of change of the facial pixels in the current first teaching video data: the stronger the change, the higher the probability that an expression is occurring. Extracting key frames therefore allows the teacher's expression information to be obtained more quickly.
Further, the first expression information, i.e. the teacher expression information, includes but is not limited to: happiness, surprise, sadness, anger, disgust, and distinguishable compound expressions such as happy surprise (happy + surprised) and sad anger (sad + angry).
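The key-frame selection rule above (keep frames whose facial pixels change strongly) can be sketched in pure Python over grayscale face-region pixel lists. The list-of-ints representation and the threshold value are hypothetical simplifications of a real image pipeline:

```python
def change_intensity(prev_face: list[int], cur_face: list[int]) -> float:
    # Mean absolute pixel difference over the (already cropped) face region.
    return sum(abs(p - c) for p, c in zip(prev_face, cur_face)) / len(cur_face)

def select_key_frames(face_frames: list[list[int]], threshold: float = 10.0) -> list[int]:
    # Keep the indices of frames whose facial change intensity exceeds
    # the threshold: these are the frames most likely to show a new expression.
    return [
        i for i in range(1, len(face_frames))
        if change_intensity(face_frames[i - 1], face_frames[i]) > threshold
    ]
```

Only the selected key frames would then be passed to the expression-recognition model, which is what makes the extraction faster than classifying every frame.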
S203, converting the first teaching audio data to obtain first voice text information, and obtaining first semantic information according to the first voice text information.
After the server receives the first teaching streaming media video transmitted by the first terminal, it can separate the first teaching video data and the first teaching audio data from it. The first teaching audio data is the audio of the teacher's lesson collected in real time during remote teaching, and it includes both the teacher's explanation of the course and the audio of the teacher's interaction with the students.
The first teaching audio data is an original analog signal; the teacher's specific teaching speech must be recognized by a speech recognition model and converted into the first voice text information, i.e. the teacher's voice text information, which is the text form of the teacher's teaching speech.
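The conversion step depends on the speech recognition model used; as a minimal sketch of the pipeline shape, the following feeds audio chunks through a recognizer callable and joins the partial hypotheses into one voice-text string. The dictionary-backed recognizer below is a stand-in that merely mimics an ASR model's output:

```python
from typing import Callable, Iterable

def transcribe(audio_chunks: Iterable[bytes],
               recognizer: Callable[[bytes], str]) -> str:
    # Feed each audio chunk through the speech-recognition model and
    # join the non-empty partial hypotheses into the voice text information.
    parts = [recognizer(chunk) for chunk in audio_chunks]
    return " ".join(p for p in parts if p)

# Usage with a stand-in recognizer (a real system would call an ASR model):
fake_asr = {b"chunk1": "this word is", b"chunk2": "excellent"}.get
text = transcribe([b"chunk1", b"chunk2"], lambda c: fake_asr(c, ""))
```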
Further, the first semantic information, i.e. the teacher semantic information, is different from a direct transcription of the voice: it contains not only the literal meaning of the teacher's speech but also the meaning obtained by combining the teacher's current context and tone of voice.
For example, when teaching vocabulary, a teacher may pronounce the word "excellent" as part of the lesson. If the speech were directly transcribed and interpreted, it could wrongly be recognized as the teacher praising the students, when in fact the teacher is teaching the word "excellent". Therefore, to increase the accuracy of speech recognition, when the teacher's voice text information contains a preset vocabulary word that could represent an evaluation, the word's context and the teacher's tone must also be considered. In the above example, after the word "excellent" is recognized, the context (word-class learning, the teacher reading an article aloud) and the teacher's tone (a lecture tone) show that "excellent" is not an evaluation of the student, so the recognized information is not output as teacher semantic information. Teacher semantic information obtained in this way accords with the actual meaning of the teacher's speech and improves the efficiency of interaction between teacher and students.
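The disambiguation logic of the "excellent" example might be sketched as follows. The word lists, context labels and tone labels are hypothetical; a real system would obtain context and tone from separate classifiers:

```python
# Hypothetical lexicons; the patent does not fix the actual word lists.
EVALUATION_WORDS = {"excellent", "great", "well done"}
TEACHING_CONTEXTS = {"word-class learning", "reading an article aloud"}

def classify_utterance(speech_text: str, context: str, tone: str) -> str:
    # A detected evaluation word alone is not enough: the current
    # teaching context and the teacher's tone must agree before the
    # word is output as teacher semantic information.
    if not any(w in speech_text.lower() for w in EVALUATION_WORDS):
        return "no-evaluation"
    if context in TEACHING_CONTEXTS and tone == "lecture":
        return "vocabulary-instruction"  # e.g. teaching the word "excellent"
    if tone == "praise":
        return "praise"
    return "ambiguous"
```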
And S204, obtaining display content according to the first expression information and the first semantic information.
Generally, when a person expresses a certain meaning, for example when a teacher praises or criticizes a student, the corresponding expression appears on the person's face. The teacher expression information and the teacher semantic information are therefore associated, and when they express the same or a similar meaning, the teacher evaluation information, i.e. the display content, can be obtained.
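One way to sketch this agreement check: display content is emitted only when the expression/semantic pair appears in an agreement table. The pairs and the messages below are illustrative assumptions, not the patent's actual mapping:

```python
from typing import Optional

# Agreement table: display content is produced only when the expression
# and the semantics convey the same or a similar meaning.
AGREEMENT = {
    ("happy", "praise"): "The teacher is pleased with your performance!",
    ("sad", "criticism"): "The teacher thinks you can do better.",
}

def derive_display_content(expression: str, semantic: str) -> Optional[str]:
    # Returns None when expression and semantics disagree, in which
    # case no evaluation is pushed to the second terminal.
    return AGREEMENT.get((expression, semantic))
```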
The display content expresses the teacher's evaluation of the students' performance in the course during remote teaching. Through the display content, the students can clearly understand their current learning state and know whether the teacher is satisfied with their learning, realizing timely interaction between students and teacher.
When the students learn from the display content that the teacher has praised their performance, they are encouraged to keep it up, and their confidence in learning grows; when they learn that the teacher has criticized their performance, they are spurred to study well and strive to improve, which raises their learning efficiency.
When the first terminal is a student terminal and the second terminal is a teacher terminal, the teacher can learn the students' learning state from the display content shown on the second terminal, correspondingly understand his or her own teaching state, and thereby continuously improve the teaching scheme.
When the first terminal is a parent terminal and the second terminal is a teacher terminal, the teacher can learn from the display content shown on the second terminal how satisfied the parents are with the course, correspondingly understand his or her own teaching state, and thereby continuously improve the teaching scheme.
S205, pushing the display content to the second terminal for displaying.
After the server obtains the display content, i.e. the teacher evaluation information, it immediately pushes it to the second terminal, i.e. the student terminal, for display, so that the students can see the teacher's evaluation of their learning state while studying the course on the second terminal.
The embodiment of the application provides a remote teaching interaction method, wherein the method comprises the following steps: receiving a first teaching streaming media video, comprising first teaching video data and first teaching audio data, from a first terminal; extracting a first video frame from the first teaching video data and obtaining first expression information according to the first video frame; converting the first teaching audio data to obtain first voice text information and obtaining first semantic information according to the first voice text information; obtaining display content according to the first expression information and the first semantic information; and pushing the display content to a second terminal for display. Because the server can obtain expression and semantic information from the teaching streaming media video transmitted by the first terminal, it can derive the teacher's display content for the students and show it on the student terminal, or derive the students' display content for the teacher and show it on the teacher terminal. The teacher and the students can therefore each view, on their own terminals, the interactive information carried by the real-time display content, realizing timely interaction between students and teacher.
One possible scheme is that, when the first terminal is the terminal used by the teacher and the second terminal is the terminal used by the student, the display content is a teaching evaluation, and the teaching evaluation comprises a positive evaluation and a negative evaluation.
A positive evaluation is one in which the teacher exerts a positive, encouraging influence on the student, while a negative evaluation is one in which the teacher exerts a negative, discouraging influence on the student; the two categories can be customized for different audience groups. For example, young students or pupils who are just beginning a course need more encouragement from the teacher, so the positive evaluation can be defined broadly to stimulate their learning. Senior high school students or adults, by contrast, make progress by facing their own shortcomings and filling in gaps, so the negative evaluation can be defined broadly or normally to remind and motivate them to improve their own level.
The teaching evaluation is a positive evaluation when the first expression information represents happiness and/or the first semantic information represents praise;
or the teaching evaluation is a negative evaluation when the first expression information represents sadness and/or the first semantic information represents criticism.
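As a rough illustration of this rule, the mapping from expression and semantic labels to a teaching evaluation could be sketched as follows. The label names, the "neutral" fallback, and the precedence of positive over negative signals are assumptions for illustration only; the patent does not specify them.

```python
# Hypothetical label sets; the patent only names happiness/praise and
# sadness/criticism, so these sets and the "neutral" fallback are assumed.
POSITIVE_EXPRESSIONS = {"happy"}
NEGATIVE_EXPRESSIONS = {"sad"}
POSITIVE_SEMANTICS = {"praise"}
NEGATIVE_SEMANTICS = {"criticism"}

def teaching_evaluation(expression: str, semantic: str) -> str:
    """Map first expression/semantic information to a teaching evaluation.

    Positive signals are checked first; the patent does not specify a
    precedence for mixed signals, so this ordering is an assumption.
    """
    if expression in POSITIVE_EXPRESSIONS or semantic in POSITIVE_SEMANTICS:
        return "positive"
    if expression in NEGATIVE_EXPRESSIONS or semantic in NEGATIVE_SEMANTICS:
        return "negative"
    return "neutral"
```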
Furthermore, before the server receives the teaching streaming media video from the first terminal, connections must be established between the server and the second terminal and between the server and the first terminal; at the same time, each student attending class must establish a corresponding association with the teacher giving the lesson, and teaching can proceed only when these conditions are satisfied.
Referring to fig. 3, fig. 3 is a schematic flow chart illustrating a teaching process of a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 3, the method includes:
S301, the server receives a course selection request message sent by the second terminal, wherein the course selection request message comprises course schedule information and a student identification.
When a student needs to learn a course, the student first logs in to the second terminal through a student account; the second terminal then sends the server a login request carrying the student identification, which represents the identity of the student and may be a string of characters, an ID tag, or the like. The login request also associates the student identification with the second terminal, that is, the student identification represents not only the identity of the student but also the identity of the second terminal the student uses. After receiving the login request carrying the student identification from the second terminal, the server checks, via the student identification, whether the student's account information already exists on the server; if so, the server responds to the login request and returns a login-permission instruction to the second terminal, after which the student can perform course learning operations through the second terminal.
When the student uses the second terminal to select a course, the student logs in through the student account and the second terminal displays the relevant course selection information, which includes the course name, course date, course content, course instructor, and so on. The student selects a course according to personal needs. After the selection is finished, the second terminal generates course schedule information from the course selection data submitted by the student, packages the course schedule information together with the student identification into a course selection request message, and sends it to the server, which receives the course selection request message from the second terminal.
Referring to fig. 4, the course schedule information can be presented as a chart; fig. 4 is a course schedule information chart in a remote teaching interaction method provided by the embodiment of the present application. As shown in fig. 4, the chart presents the student name, course selection date, course name, course date, course content, and course instructor.
And S302, binding the student identification with the teacher identification according to the course schedule information, and storing the binding information.
After receiving the course selection request message sent by the second terminal, the server obtains the student identification and course schedule information from the message, binds the student identification to the teacher identification of the teacher corresponding to the selected course according to the course information recorded in the course schedule information, and stores the binding information in a designated storage area.
The process of binding the student identification and the teacher identification is described below using the course schedule information chart of fig. 4 as an example. As shown in fig. 4, the student Xiaohong performs course selection on the second terminal on March 1; the selected course is a grammar course, the lesson time of the grammar course is March 5, and the course teacher is Marie. After the server receives the message containing the course schedule information and the student identification, it binds Xiaohong's student identification to Marie's teacher identification, associates the binding relationship with the specific course information (the grammar course on March 5), combines this information into the binding information, and stores the binding information for Xiaohong and Marie in the designated storage area.
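The binding step S302 can be sketched as a small in-memory store; the data layout and function names below are illustrative assumptions, since the patent only requires that the binding information be stored in a designated storage area.

```python
# Illustrative in-memory "designated storage area" for binding information:
# teacher identification -> list of (student identification, course info).
bindings: dict = {}

def bind(student_id: str, teacher_id: str, course_info: dict) -> None:
    """Bind a student identification to a teacher identification together
    with the specific course information (S302)."""
    bindings.setdefault(teacher_id, []).append((student_id, course_info))

def students_for(teacher_id: str) -> list:
    """Query the student identifications bound to a teacher identification,
    as later needed in step S305."""
    return [sid for sid, _ in bindings.get(teacher_id, [])]
```

For the fig. 4 example, the call would be `bind("Xiaohong", "Marie", {"course": "grammar", "date": "March 5"})`.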
S303, receiving a login request carrying the student identification from the second terminal, and receiving a teaching request message sent by the first terminal, wherein the teaching request message carries the teacher identification corresponding to the first terminal. At this point, the student is waiting for class to begin and the teacher is ready to teach.
Before the first terminal sends a teaching request, the teacher must log in to the first terminal through a teacher account, while the student logs in to the second terminal through the student account. When the teacher needs to give a lesson, the teacher operates the first terminal to send a teaching request to the server. To allow the server to identify the first terminal, the teaching request carries the teacher identification corresponding to the first terminal, which represents both the identity of the first terminal used by the teacher and the identity of the teacher.
S304, responding to the teaching request message, distributing the virtual classroom, and adding the first terminal into the virtual classroom.
The virtual classroom concept is similar to a physical classroom: a virtual classroom is a virtual space that accommodates the second terminal corresponding to the student identification and the first terminal corresponding to the teacher identification. After the server receives the teacher's teaching request message, it allocates a virtual classroom to the first terminal according to the teacher identification carried in the message.
And S305, inquiring the student identification corresponding to the teacher identification according to the binding information.
The binding information comprises the teacher identification, the student identification bound with the teacher identification and the specific course information, so that the student identification corresponding to the teacher identification can be quickly inquired from the binding information.
S306, adding the second terminal corresponding to the student identification into the virtual classroom.
One feasible scheme is one-to-one teaching: a teacher identification corresponds to a single student identification, and the server queries the student identification corresponding to the teacher identification. That student identification in turn corresponds to one second terminal, namely the terminal where the student is logged in, and the server adds that second terminal to the virtual classroom.
Another feasible scheme is one-to-many teaching, that is, one teacher gives a class to a plurality of students at the same time. In this case the server finds, in the binding information, a plurality of student identifications corresponding to one teacher identification; each student identification corresponds to one second terminal, and the server adds all the corresponding second terminals to the virtual classroom.
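Steps S304 and S306 for both schemes can be sketched as follows; the class and function names are hypothetical, and the dictionary argument stands in for the stored binding information.

```python
class VirtualClassroom:
    """A virtual space holding the teacher's terminal and the second
    terminals of the bound students (one or many)."""
    def __init__(self, teacher_id: str):
        self.teacher_id = teacher_id
        self.student_terminals = []

def open_classroom(teacher_id: str, bound_students: dict) -> VirtualClassroom:
    """Allocate a virtual classroom for the teacher (S304) and add every
    second terminal bound to the teacher identification (S306). The same
    code covers one-to-one (one bound student) and one-to-many teaching."""
    room = VirtualClassroom(teacher_id)
    for student_id in bound_students.get(teacher_id, []):
        room.student_terminals.append(student_id)
    return room
```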
The embodiment of the application provides a remote teaching interaction method in which the student and the second terminal the student uses correspond to a student identification, the teacher and the first terminal the teacher uses correspond to a teacher identification, and, when the teacher gives a lesson according to the course schedule information, the second terminal corresponding to the student identification and the first terminal corresponding to the teacher identification are added to the same virtual classroom, thereby realizing remote teaching from the teacher to the student.
Referring to fig. 5, fig. 5 is another schematic flow chart of a remote teaching interaction method according to an embodiment of the present application.
Because a teaching video may show the teacher, a student, or a parent accompanying the student, video-related information about the teacher, the student and/or the parent can be obtained from it. The technical scheme of this embodiment is described using a teacher's teaching video as an example.
The technical solution described in fig. 5 is based on the technical solution described in fig. 2, and different from the embodiment in fig. 2, the method further includes:
S501, obtaining a teaching video within a preset time period, and obtaining an expression database and a voice database of the teacher, student and/or parent from the teaching video.
The preset time period may be the period from the time point at which the teacher begins course teaching up to a preset time point; a teaching video covering this period can reflect the teacher's current expression and speech state to the greatest extent.
Because the teacher's teaching video within the preset time period contains a large amount of teaching video data and teaching audio data of the teacher, the teacher's expression database and voice database can be extracted and constructed from these data.
S502, taking the expression database of the teacher, the students and/or the parents as expression training data, and performing expression big data training according to the identity information of the teacher, the students and/or the parents to obtain the special expression training model of the teacher, the students and/or the parents.
To make expression recognition for the teacher more accurate, recognition can start from an expression training model tailored to the teacher; in general, an expression recognition model is trained on an expression database assembled from public face data.
In this embodiment, a personal expression database for the teacher is obtained by collecting the individual teacher's teaching videos; it contains detailed information such as the expressions the teacher frequently shows and the motion amplitude of each expression. This personal expression database is used as expression training data, and the teacher's identity information is obtained at the same time, chiefly the general facial feature information of residents of the teacher's place of residence, which further improves expression recognition for the teacher. By using the teacher's expression database as training data and performing expression big-data training in combination with the teacher's identity information, a dedicated expression training model that accurately recognizes the specific teacher's expressions can be obtained. One teacher may correspond to one dedicated expression training model, that is, each teacher has his or her own dedicated training model.
S503, the voice database of the teacher, the students and/or the parents is used as voice training data, and voice big data training is carried out according to the identity information of the teacher, the students and/or the parents to obtain the exclusive voice training model of the teacher, the students and/or the parents.
Similarly to the dedicated expression training model, in this embodiment a dedicated voice database for the teacher is obtained by collecting the individual teacher's teaching videos; it contains detailed information such as the long and short sentences and vocabulary the teacher frequently uses in teaching, and the teacher's speaking tone and volume in different scenes. This dedicated voice database is used as voice training data, the teacher's identity information is obtained at the same time, and voice big-data training yields the teacher's dedicated voice training model. One teacher may correspond to one dedicated voice training model, that is, each teacher has his or her own dedicated training model.
S504, storing the special expression training models and the special voice training models of the teachers, the students and/or the parents, and marking the identifications of the teachers, the students and/or the parents corresponding to the special expression training models and the special voice training models.
Because each teacher's dedicated expression training model and dedicated voice training model are marked with that teacher's identification, the server can directly locate the teacher's dedicated expression training model and dedicated voice training model according to the teacher identification whenever needed.
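Step S504 amounts to keying both dedicated models by the teacher identification. A minimal sketch, with placeholder objects standing in for the trained models:

```python
# teacher identification -> {"expression": model, "voice": model}.
# The model objects themselves are placeholders for the trained
# dedicated models; the storage layout is an assumption.
model_store: dict = {}

def store_models(teacher_id: str, expression_model, voice_model) -> None:
    """Store the dedicated models marked with the teacher identification."""
    model_store[teacher_id] = {"expression": expression_model,
                               "voice": voice_model}

def lookup_models(teacher_id: str) -> dict:
    """Directly locate a teacher's dedicated models by identification."""
    return model_store[teacher_id]
```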
Referring to fig. 6, fig. 6 is a schematic flow chart illustrating obtaining of a first expression in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 6, the technical solution described in fig. 6 is based on the technical solution described in fig. 5, and unlike the embodiment in fig. 5, it includes:
S601, preprocessing the first video frame to extract facial features, and obtaining real-time expression information according to the facial features.
Because a video frame is a static image, the first video frame can be preprocessed to extract facial feature data and thereby accelerate image recognition. The facial feature data comprise the specific pixel positions of the teacher's facial organs, from which the teacher's expression information can be obtained. Since this expression information is derived from the real-time video of the teacher giving the lesson, it can also be called real-time expression information, namely the teacher's real-time expression information.
S602, inquiring the exclusive expression training model of the teacher, the student and/or the parent corresponding to the first terminal according to the teacher, the student and/or the parent identification carried in the first teaching video data.
The special expression training model of each teacher corresponds to the teacher identification of the teacher, and the storage position of the special expression training model of the teacher in the server can be found by inquiring the teacher identification of the teacher.
S603, inputting the real-time expression information of the teacher, the students and/or the parents into the special expression training model of the teacher, the students and/or the parents, and obtaining first expression information after recognition.
Because the teacher's dedicated expression training model is trained on the teacher's personal expression database, the accuracy of recognizing the teacher's expressions can be greatly improved.
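Steps S601 through S603 can be summarized as a three-stage pipeline. The feature extractor and model are passed in as plain callables because the patent does not name a concrete recognition library; every name here is illustrative.

```python
def recognize_expression(frame, teacher_id, extract_features, expression_models):
    """Sketch of S601-S603 for one video frame.

    extract_features: callable doing the preprocessing of S601.
    expression_models: mapping from teacher identification to that
    teacher's dedicated expression model (S602).
    """
    features = extract_features(frame)      # S601: pixel positions of facial organs
    model = expression_models[teacher_id]   # S602: dedicated model by identification
    return model(features)                  # S603: first expression information
```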
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a first voice obtaining process in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 7, the technical solution described in fig. 7 is based on the technical solution described in fig. 6, and different from the embodiment in fig. 6, the method includes:
S701, querying the exclusive voice training model of the teacher, the student and/or the parent corresponding to the first terminal according to the identification of the teacher, the student and/or the parent carried in the first teaching video data.
The specific voice training model of each teacher corresponds to the teacher identification of the teacher, so that the storage position of the specific voice training model of the teacher in the server can be found by inquiring the teacher identification of the teacher.
S702, inputting the first teaching audio data into a special voice training model, and obtaining voices of teachers, students and/or parents after recognition.
Because the exclusive voice training model of the teacher is trained according to the personal voice database of the teacher, the accuracy of recognizing the voice of the teacher can be greatly improved.
And S703, obtaining first voice text information according to the voice conversion of the teacher, the student and/or the parent.
In order to subsequently obtain the first semantic information, namely the teacher's semantic information, the teacher's teaching speech must be converted into the first voice text information, namely the teacher's voice text information, which can be regarded as a textual transcription of the teacher's teaching speech.
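Steps S701 through S703 can be sketched in the same style as the expression pipeline; the per-teacher speech recognizer is a stand-in callable, since the patent specifies no concrete speech API.

```python
def audio_to_text(audio_chunks, teacher_id, voice_models) -> str:
    """Sketch of S701-S703: look up the teacher's dedicated voice model
    (S701), recognize each audio chunk with it (S702), and join the
    results into the first voice text information (S703). The chunking
    and joining scheme is an assumption for illustration."""
    model = voice_models[teacher_id]
    return " ".join(model(chunk) for chunk in audio_chunks)
```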
Further, when the server needs to send the obtained teacher evaluation information to the second terminal, in order to ensure that the evaluation information is delivered correctly, that is, that the teacher evaluation information reaches the second terminal of the student attending the teacher's class, the student identification associated with the teacher identification must be queried from the binding information.
Because the binding information contains the student's course schedule information as well as the student identification and teacher identification associated with the course, the server can query the student identification associated with the teacher identification through the binding information.
The teacher evaluation information is then sent in real time to the second terminal corresponding to the student identification. Since the login request sent by the second terminal carries the student identification corresponding to that terminal, the second terminal used by the student can be found through the student identification, and once found, the teacher evaluation information is sent to it.
The above embodiment describes only one pass of obtaining the teacher evaluation information from the first teaching streaming media video collected by the first terminal within a very short time and displaying it on the second terminal. In practical application, the collection, conversion and display of the teacher evaluation information are performed within a very short time and repeated continuously in a loop, so that the teacher evaluation information is displayed to the student in real time throughout the remote learning process.
Referring to fig. 8, fig. 8 is another schematic flow chart of a remote teaching interaction method according to an embodiment of the present application. Compared with the above embodiment in which the students can check the evaluation of the teacher on themselves through the second terminal, another interaction method between the teacher and the students is described in fig. 8, that is, the teacher checks the feedback of the students on the teaching of the teacher through the first terminal. Since the acquisition of the feedback information of the student is similar to the acquisition of the evaluation information of the teacher, for a part of the technical features, please refer to the above specific scheme, which is not described herein again.
As shown in fig. 8, the method specifically includes:
S801, receiving a second teaching streaming media video sent by the second terminal, wherein the second teaching streaming media video comprises second teaching video data and second teaching audio data.
The second terminal may be the terminal the student uses for course learning. The student can create and log in to a student account on the second terminal through a specified application program or web page; the student account information is stored on the server and includes the student's personal information, purchased course information, specific course schedule information and other related information.
The second teaching streaming media video is collected by the second terminal and sent to the server.
S802, extracting a second video frame in the second teaching video data, and obtaining second expression information according to the second video frame.
And S803, converting the second teaching audio data to obtain second voice text information, and obtaining second semantic information according to the second voice text information.
And S804, obtaining feedback information according to the second expression information and the second semantic information.
It is possible that the feedback information is student feedback information including but not limited to: full understanding, need to suspend thinking, confusion, and complete incomprehension.
And S805, pushing the feedback information to the first terminal for displaying, and/or sending pre-stored teaching suggestions or test questions corresponding to the feedback information to the first terminal and/or the second terminal.
After the student's feedback information is obtained from the student's second teaching streaming media video, it is pushed to the teacher's terminal, namely the first terminal, so that the teacher can observe the feedback and adjust the teaching progress or teaching mode in time. Meanwhile, according to the content of the feedback information, the server can, automatically or under the teacher's operation, send pre-stored learning suggestions or learning test questions corresponding to the feedback to the second terminal, or send pre-stored teaching suggestions corresponding to the feedback to the first terminal. The learning suggestions, learning test questions and teaching suggestions are all stored in advance; the learning suggestions and test questions are prepared or selected by teachers for the different feedback that students give in different courses.
For example, when a student learning a grammar course shows a confused expression and utters speech indicating incomprehension, the server obtains the student's expression information and semantic information from the learning streaming media video sent by the second terminal, determines from them that the student's feedback information is "confusion", and pushes this feedback information to the first terminal for display. Through the feedback displayed on the first terminal, the teacher learns that the student is confused about the current course knowledge point and can teach it again so that the student understands and masters it.
Meanwhile, the server can look up the relevant learning suggestions or learning test questions according to the course the student is currently taking and the "confusion" feedback, and send the learning suggestions or learning test questions to the second terminal.
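The lookup in S805 can be sketched as a table keyed by the feedback labels listed earlier; the materials table itself is hypothetical, since the patent only says the suggestions and test questions are stored in advance.

```python
# Feedback labels taken from the text; the pre-stored materials and the
# error handling for unknown labels are assumptions for illustration.
FEEDBACK_LABELS = {"full understanding", "need to suspend thinking",
                   "confusion", "complete incomprehension"}

def material_for_feedback(feedback: str, materials: dict):
    """Return the pre-stored learning suggestion or test question that
    corresponds to a student's feedback label, or None if none is stored."""
    if feedback not in FEEDBACK_LABELS:
        raise ValueError("unknown feedback label: " + repr(feedback))
    return materials.get(feedback)
```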
The embodiment of the application provides a remote teaching interaction method in which the teacher evaluation comprises positive and negative evaluations, enabling interaction between teacher and students in multiple respects. Meanwhile, the teacher's dedicated expression training model and dedicated voice training model are obtained from the teacher's teaching videos; these dedicated models greatly improve the recognition of the teacher's expressions and speech, and thereby also improve the efficiency of interaction between teacher and students.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application.
As shown in fig. 9, the server 90 includes:
the first video receiving module 901 is configured to receive a first teaching streaming media video from a first terminal, where the first teaching streaming media video includes first teaching video data and first teaching audio data.
The first expression obtaining module 902 is configured to extract a first video frame in the first teaching video data, and obtain first expression information according to the first video frame.
The first semantic obtaining module 903 is configured to convert the first teaching audio data to obtain first voice text information, and obtain first semantic information according to the first voice text information.
And a display content obtaining module 904, configured to obtain display content according to the first expression information and the first semantic information.
The display content pushing module 905 is configured to push the display content to the second terminal for displaying.
Further, the server 90 should be constructed with a processor, a hard disk, a memory, a system bus, and so on, to meet requirements for processing power, stability, reliability, security, scalability and manageability. In a network environment, the server 90 may be classified as a file server, database server, application server, WEB server, or the like, according to the type of service provided.
A database server consists of one or more computers running on a local area network together with database management system software, and provides data services for client application programs. A database server is built on a database system, has the characteristics of a database system, and also has its own distinctive features. Its main functions are: database management, including system configuration and management, data access and update management, data integrity management, and data security management; database query and manipulation, including database retrieval and modification; database maintenance, including data import/export management, database structure maintenance, data recovery and performance monitoring; and parallel operation, because more than one user may access the database at the same time, the database server must support a parallel running mechanism to handle multiple simultaneous events.
Since the server 90 in the embodiment of the present application needs to store and process video and audio files, the server 90 may use a database server to ensure efficiency in processing data such as video and audio files.
The display content is a teacher evaluation, and the teacher evaluation comprises a positive evaluation and a negative evaluation:
the teacher evaluation is a positive evaluation when the teacher's expression represents happiness and/or the teacher's semantics represent praise;
or the teacher evaluation is a negative evaluation when the teacher's expression represents sadness and/or the teacher's semantics represent criticism.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application.
As shown in fig. 10, the server 90 further includes:
the training data acquisition module 906 is configured to acquire a teaching video within a preset time period, and obtain an expression database and a voice database of a teacher, a student and/or a parent through the teaching video.
The expression training module 907 is configured to use the expression database of the teacher, the student and/or the parent as expression training data and perform expression big data training according to the identity information of the teacher, the student and/or the parent to obtain a special expression training model of the teacher, the student and/or the parent.
And the voice training module 908 is used for taking the voice database of the teacher, the student and/or the parent as voice training data, and performing voice big data training according to the identity information of the teacher, the student and/or the parent to obtain a special voice training model of the teacher, the student and/or the parent.
The model storage module 909 is configured to store the specific expression training models and the specific voice training models of the teachers, the students and/or the parents, and mark the identities of the teachers, the students and/or the parents corresponding to the specific expression training models and the specific voice training models.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a first expression obtaining module in a remote teaching interaction server according to an embodiment of the present application.
As shown in fig. 11, the first expression obtaining module 902 includes:
the real-time expression obtaining module 9021 is configured to perform preprocessing on the first video frame to extract facial features, and obtain real-time expression information according to the facial features.
The expression model query module 9022 is configured to query, according to the identifier of the teacher, the student and/or the parent carried in the first teaching video data, an exclusive expression training model of the first terminal corresponding to the teacher, the student and/or the parent.
The expression output module 9023 is configured to input real-time expression information of the teacher, the student and/or the parent into an exclusive expression training model of the teacher, the student and/or the parent, and obtain first expression information after recognition.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a first semantic obtaining module in a remote teaching interaction server according to an embodiment of the present application.
As shown in fig. 12, the first semantic acquiring module 903 includes:
and the voice model query module 9031 is configured to query, according to the identifier of the teacher, the student and/or the parent carried in the first teaching video data, an exclusive voice training model of the teacher, the student and/or the parent corresponding to the first terminal.
And the voice output module 9032 is configured to input the first teaching audio data into the dedicated voice training model, and obtain voices of a teacher, a student and/or a parent after recognition.
And the voice conversion module 9033 is configured to obtain the first voice text information according to voice conversion of a teacher, a student and/or a parent.
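The voice pipeline of modules 9031-9033 (query the dedicated voice model by identity, recognize speech from the audio, convert the recognized speech into text) can be sketched analogously. The toy "model" below merely lowercases word tokens; it is an assumption standing in for real speech recognition.

```python
# Illustrative sketch of modules 9031-9033. Names are hypothetical.

def speech_to_text(audio_data, identity, voice_models):
    model = voice_models[identity]   # module 9031: query dedicated model
    words = model(audio_data)        # module 9032: speech recognition
    return " ".join(words)           # module 9033: conversion to text

# Toy dedicated voice model: treats audio as word tokens and normalizes them.
voice_models = {"teacher_01": lambda audio: [w.lower() for w in audio]}
text = speech_to_text(["Well", "Done"], "teacher_01", voice_models)
```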
Referring to fig. 13, fig. 13 is a schematic structural diagram of a remote teaching interaction server according to an embodiment of the present application.
As shown in fig. 13, the server 90 further includes:
the second video receiving module 9010 is configured to receive a second teaching streaming media video sent from the second terminal, where the second teaching streaming media video includes second teaching video data and second teaching audio data;
the second expression obtaining module 9011 is configured to extract a second video frame in the second teaching video data, and obtain second expression information according to the second video frame;
the second semantic acquiring module 9012 is configured to convert the second teaching audio data to obtain second voice text information, and obtain second semantic information according to the second voice text information;
the feedback information acquisition module 9013 is configured to obtain feedback information according to the second expression information and the second semantic information;
the feedback information pushing module 9014 is configured to push the feedback information to the first terminal for display, and/or send a teaching suggestion or a test question corresponding to the feedback information, which is stored in advance, to the first terminal and/or the second terminal.
The embodiment of the application provides a remote teaching interaction server. The server comprises: a video receiving module, configured to receive a first teaching streaming media video from a first terminal, where the first teaching streaming media video comprises first teaching video data and first teaching audio data; an expression acquisition module, configured to extract a first video frame from the first teaching video data and obtain first expression information according to the first video frame; a semantic acquisition module, configured to convert the first teaching audio data into first voice text information and obtain first semantic information according to the first voice text information; a display content acquisition module, configured to obtain display content according to the first expression information and the first semantic information; and a display content pushing module, configured to push the display content to a second terminal for display. Because the server can obtain expression and semantic information from the teaching streaming media video sent by the first terminal, it can derive the teacher's display content for the students and show it on the student terminal, or derive the students' display content for the teacher and show it on the teacher terminal. Teachers and students can therefore each view, on their own terminals, the interaction information represented by the real-time display content, enabling timely interaction between students and teachers.
Referring to fig. 14, fig. 14 is a schematic flowchart illustrating a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 14, the method includes:
S1401, receiving display content from a server, where the display content is obtained by the server according to first expression information and first semantic information extracted from a first teaching streaming media video sent by a first terminal.
Further, for the specific steps of obtaining the display content, please refer to the description of the above embodiments, which is not repeated here.
The second terminal, that is, the student terminal, also needs to receive the streaming media video sent by the first terminal, that is, the teacher terminal, so that students can watch the teacher's real-time remote teaching video. The display information may therefore be carried within the streaming media video that the server transmits to the second terminal, or the display information and the streaming media video may be transmitted to the second terminal separately, with the second terminal receiving each of them.
And S1402, displaying the display content on the user interface.
One possible case is that the display content is a teaching evaluation, and the teaching evaluation includes a positive evaluation and a negative evaluation.
A positive evaluation is an evaluation by the teacher that has an encouraging, positive influence on students, while a negative evaluation is one that points out shortcomings and has a cautionary influence; the boundary between the two can be customized for different audience groups. For example, young students or pupils who are just beginning a course need more encouragement from the teacher, so the positive evaluation can be defined broadly to better motivate them to learn. Senior high school students or adults, by contrast, make progress by facing their own weaknesses and checking for gaps, so the negative evaluation can be defined broadly or normally to remind and spur them to improve.
The teaching evaluation may be a positive evaluation in the case that the first expression represents happiness and/or the first semantic represents praise;
or, in the case that the first expression represents sadness and/or the first semantic represents criticism, the teaching evaluation is a negative evaluation.
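The evaluation rule just stated maps the two signal pairs (happy/praise, sad/criticism) onto the two evaluation classes. A minimal sketch, assuming string-valued labels (the label names and the behavior for unlisted combinations are illustrative assumptions, not specified by the embodiment):

```python
def teaching_evaluation(expression, semantic):
    # Rule from the embodiment: happy expression and/or praising semantics
    # yield a positive evaluation; sad expression and/or criticizing
    # semantics yield a negative one.
    if expression == "happy" or semantic == "praise":
        return "positive"
    if expression == "sad" or semantic == "criticism":
        return "negative"
    return None  # assumption: no evaluation is produced for other combinations
```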
Further, since students learn about their own learning state by observing the display content, the manner in which the display content is shown on the user interface must take into account whether students can observe it clearly.
One possibility is that displaying the display content includes:
displaying the display content on the user interface through a statically shown or scroll-played pattern identifier, where the display position of the pattern identifier can be a fixed position or the full screen, and the pattern can either remain continuously at a certain position or disappear after being displayed for a preset time; the pattern identifiers include emoticons, text symbols, animated emoticons, and the like.
The display content can also be presented in an audio playing form.
To make the display manner of the content in this embodiment easier to understand, it is explained below by way of example; the examples are for reference only and are not to be taken as limiting the display manner.
Referring to fig. 15, fig. 15 is a schematic view illustrating a display content in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 15, the second terminal is a portable computer provided with a camera 1501 and a microphone 1502. The screen of the portable computer, that is, the user interface, is used to display specific teaching information. Specifically, in addition to showing the student's name, the teacher's name, the subject and the learning content, the user interface of the second terminal includes a teacher image display window and a student image display window. When the student's display content is positive evaluation information, the display content shows a pattern identifier statically; the display position is the upper right corner of the screen, the display ends after 5 seconds, and the specific pattern is a little red flower.
Referring to fig. 16, fig. 16 is another schematic display diagram illustrating content displayed in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 16, different from the solution of fig. 15, when the student's display content is positive evaluation information, the display content shows the pattern identifier in a scrolling manner; the display position is at the top of the screen, the pattern remains displayed until the next display content arrives, and the specific pattern is an emoticon.
Referring to fig. 17, fig. 17 is another schematic view illustrating display contents in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 17, different from the solution of fig. 15, when the student's display content is positive evaluation information, it is presented in audio form, with an audio icon displayed in the upper right corner of the screen; the audio plays for 5 seconds, and the student can click the audio icon on the screen to close the audio early.
Referring to fig. 18, fig. 18 is another schematic view illustrating display contents in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 18, different from the solution of fig. 15, when the student's display content is positive evaluation information, the display content shows the pattern identifier in full-screen mode. Since a full-screen display may block the lesson the student is studying, the full-screen display duration is short, for example 2 seconds, and the specific pattern is a clapping-hands identifier.
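The display parameters varied across the examples of figs. 15-18 (mode, pattern, position, duration) can be gathered into a small configuration record. The structure and field names below are illustrative assumptions; only the parameter values come from the examples above.

```python
from dataclasses import dataclass

@dataclass
class DisplayConfig:
    mode: str          # "static", "scroll", "fullscreen", or "audio"
    pattern: str       # emoticon / text symbol / animated emoticon, etc.
    position: str      # e.g. "top-right", "top", "fullscreen"
    duration_s: float  # seconds before the pattern disappears

# Configurations matching the examples of figs. 15, 16 and 18:
little_red_flower = DisplayConfig("static", "little red flower", "top-right", 5.0)
scrolling_emoji = DisplayConfig("scroll", "emoticon", "top", float("inf"))  # shown until next content
clap_fullscreen = DisplayConfig("fullscreen", "clap", "fullscreen", 2.0)
```

Keeping these choices in data rather than code is one plausible way to let the server vary the display manner per audience group, as the embodiment suggests.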
Referring to fig. 19, fig. 19 is a schematic view illustrating a process of student feedback in a remote teaching interaction method according to an embodiment of the present application.
As shown in fig. 19, the method includes:
s1901, collecting a second streaming media video, wherein the second streaming media video comprises second video data and second audio data.
S1902, sending the second streaming media video to a server, so that the server obtains feedback information according to the second streaming media video, and the server pushes the feedback information to the first terminal for displaying.
And S1903, receiving teaching suggestions or test questions which are sent by the server and correspond to the feedback information and are stored in advance.
And S1904, displaying teaching suggestions or test questions on the user interface.
After the student's feedback information is obtained from the student's second streaming media video, it needs to be pushed to the first terminal, so that the teacher, having observed the student's feedback through the first terminal, can adjust the teaching pace or teaching method in time. The teaching suggestions and test questions are stored in advance; they are suggestions written, or test questions selected, by teachers for the feedback of different students in different courses.
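Since the suggestions and test questions of S1903 are stored in advance and matched to feedback, one plausible shape for the lookup is a table keyed by student, course and feedback. The table contents and key scheme below are hypothetical illustrations, not from the patent.

```python
# Hypothetical pre-stored suggestion table (S1903): keys and entries are
# illustrative; a real system would populate this from teacher-authored data.
suggestions = {
    ("student_01", "math_lesson_3", "confused"): "Review the worked example before the exercises.",
}

def pick_suggestion(student_id, course_id, feedback, table):
    # Return the stored teaching suggestion matching this student, course
    # and feedback, or None if no entry was pre-stored.
    return table.get((student_id, course_id, feedback))
```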
The embodiment of the application provides a remote teaching interaction method, which comprises: receiving display content from a server and displaying the display content on a user interface. Because the display content is presented through statically shown or scroll-played pattern identifiers, or in an audio playing form, it can be shown on the second terminal promptly and clearly, and the variety of display manners improves the effectiveness of teacher-student interaction.
Referring to fig. 20, fig. 20 is a schematic structural diagram of a remote teaching interactive terminal according to an embodiment of the present application.
As shown in fig. 20, the terminal 200 includes:
The display content receiving module 2001 is configured to receive display content from a server, where the display content is obtained by the server according to first expression information and first semantic information extracted from a first teaching streaming media video sent by a first terminal.
A display content display module 2002 for displaying the display content on the user interface.
One possible case is that the display content is a teaching evaluation, and the teaching evaluation includes a positive evaluation and a negative evaluation.
The display content may be a positive evaluation in the case that the first expression represents happiness and/or the first semantic represents praise;
or, in the case that the first expression represents sadness and/or the first semantic represents criticism, the display content is a negative evaluation.
Further, the presentation content display module 2002 includes:
The pattern display module 20021 is configured to display the display content on the user interface through statically shown or scroll-played pattern identifiers, where the pattern identifiers include any one of emoticons, text symbols and animated emoticons.
And the audio module 20022 is used for displaying the display content in an audio playing form.
Further, the terminal 200 further includes:
and the streaming media collection module 2003 is configured to collect a second streaming media video, where the second streaming media video includes second video data and second audio data.
The streaming media sending module 2004 is configured to send the second streaming media video to the server, so that the server obtains feedback information according to the second streaming media video, and the server pushes the feedback information to the first terminal for displaying.
And the suggestion and test question receiving module 2005 is configured to receive teaching suggestions or test questions corresponding to the feedback information, which are sent from the server and stored in advance.
And a suggestion and test question display module 2006, configured to display teaching suggestions or test questions on a user interface.
The embodiment of the application provides a remote teaching interaction terminal. The terminal comprises a display content receiving module and a display content display module. Because the display content display module presents the display content through statically shown or scroll-played pattern identifiers, or in an audio playing form, the display content can be shown on the second terminal promptly and clearly, and the variety of display manners improves the effectiveness of teacher-student interaction.
Referring to fig. 21, fig. 21 is a system interaction diagram of a remote teaching interaction system according to an embodiment of the present application.
As shown in fig. 21, the system includes: a second terminal 101, a first terminal 102 and a server 104.
The first terminal 102 is configured to send a first teaching streaming media video, where the first teaching streaming media video includes first teaching video data and first teaching audio data.
And the server 104 is configured to extract a first video frame in the first teaching video data, and obtain first expression information according to the first video frame.
The server 104 is further configured to convert the first teaching audio data to obtain first speech text information, and obtain first semantic information according to the first speech text information.
The server 104 is further configured to obtain the display content according to the first expression information and the first semantic information.
The server 104 is further configured to push the display content to the second terminal for displaying.
And the second terminal 101 is used for displaying the display content.
Further, a method corresponding to the remote teaching interactive system comprises the following steps:
s2101, a first teaching streaming media video is sent.
S2102, extracting a first video frame in the first teaching video data, and obtaining first expression information according to the first video frame.
S2103, converting the first teaching audio data to obtain first voice text information, and obtaining first semantic information according to the first voice text information.
S2104, display content is obtained according to the first expression information and the first semantic information.
S2105, pushing the display content.
S2106, receiving the display content.
S2107, displaying the display content on the user interface.
For the specific explanation of each step, refer to the explanation contents in the above embodiments, which are not repeated herein.
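The round trip of steps S2101-S2107 can be condensed into a single end-to-end sketch. Every recognizer below is a toy stand-in (the smile flag, the keyword rule and all names are assumptions); only the step ordering follows the method above, and the push/receive/display steps are collapsed into a return value.

```python
# End-to-end sketch of S2101-S2107 with toy recognizers.

def run_round(video_frame, audio_words):
    # S2102: first expression information from the first video frame (toy rule).
    expression = "happy" if video_frame.get("smiling") else "sad"
    # S2103: audio -> voice text -> semantic information (toy keyword rule).
    text = " ".join(audio_words).lower()
    semantic = "praise" if "well done" in text else "criticism"
    # S2104: display content from expression + semantic information.
    if expression == "happy" or semantic == "praise":
        content = "positive"
    else:
        content = "negative"
    # S2105-S2107: pushing, receiving and displaying collapsed into the return.
    return content

result = run_round({"smiling": True}, ["Well", "done", "everyone"])
```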
Further, please refer to fig. 22, which provides a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 22, the terminal device 2200 may include: at least one processor 2201, at least one network interface 2204, a user interface 2203, memory 2205, at least one communication bus 2202.
A communication bus 2202 is used, among other things, to enable communications among the components.
The user interface 2203 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 2203 may also include a standard wired interface and a wireless interface.
The network interface 2204 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
The processor 2201 may include one or more processing cores. The processor 2201 connects various parts of the terminal device 2200 using various interfaces and lines, and performs the various functions of the terminal device 2200 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 2205 and by invoking data stored in the memory 2205. Optionally, the processor 2201 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 2201 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It is to be understood that the modem may also not be integrated into the processor 2201 and may instead be implemented by a separate chip.
The Memory 2205 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 2205 includes a non-transitory computer-readable medium. The memory 2205 may be used to store instructions, programs, code sets or instruction sets. The memory 2205 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-mentioned method embodiments, and the like; the storage data area may store data and the like referred to in the above respective method embodiments. The memory 2205 may optionally be at least one storage device located remotely from the processor 2201. As shown in fig. 22, the memory 2205, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a tutorial operation response application.
Further, please refer to fig. 23, which provides a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 23, the server 2300 may include: at least one processor 2301, at least one network interface 2304, memory 2305, and at least one communication bus 2302.
Wherein a communication bus 2302 is used to enable connection communication between these components.
The network interface 2304 may optionally include a standard wired interface or a wireless interface (e.g., WI-FI interface).
The processor 2301 may include one or more processing cores. The processor 2301 connects various parts of the server 2300 using various interfaces and lines, and performs the various functions of the server 2300 and processes data by running or executing instructions, code sets or instruction sets stored in the memory 2305 and by invoking data stored in the memory 2305. Optionally, the processor 2301 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA).
The Memory 2305 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 2305 includes non-transitory computer-readable media. The memory 2305 may be used to store instructions, code, a set of codes, or a set of instructions.
The embodiment of the application provides a computer storage medium. The computer storage medium stores a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any of the above embodiments.
The embodiment of the application provides computer equipment. The apparatus comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps of the method according to any of the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the remote teaching interaction method, server, terminal and system provided by the present application, those skilled in the art will be able to change the embodiments and application scope according to the ideas of the embodiments of the present application.

Claims (14)

1. A remote teaching interaction method, the method comprising:
receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data;
extracting a first video frame in the first teaching video data, and obtaining first expression information according to the first video frame;
converting the first teaching audio data to obtain first voice text information, and obtaining first semantic information according to the first voice text information, wherein the first semantic information comprises the literal meaning of the first voice text information combined with the current teaching context and teaching tone to obtain the spoken meaning;
obtaining display content according to the first expression information and the first semantic information, wherein the display content is teaching evaluation;
and pushing the display content to a second terminal for displaying.
2. The method of claim 1, wherein the instructional rating comprises a positive rating and/or a negative rating;
under the condition that the first expression information represents happiness and/or the first semantic information represents praise, the teaching evaluation is a positive evaluation;
or under the condition that the first expression information represents sadness and/or the first semantic information represents criticism, the teaching evaluation is negative evaluation.
3. The method of claim 1, wherein receiving the instructional streaming video from the first terminal comprises, prior to:
the method comprises the steps of obtaining a teaching video in a preset time period, and obtaining an expression database and a voice database of a teacher, a student and/or a parent through the teaching video;
taking the expression database of the teacher, the students and/or the parents as expression training data, and performing expression big data training according to the identity information of the teacher, the students and/or the parents to obtain an exclusive expression training model of the teacher, the students and/or the parents;
taking the voice database of the teacher, the students and/or the parents as voice training data, and performing voice big data training according to the identity information of the teacher, the students and/or the parents to obtain an exclusive voice training model of the teacher, the students and/or the parents;
storing the exclusive expression training model and the exclusive voice training model of the teacher, the student and/or the parent, and marking the exclusive expression training model and the exclusive voice training model with the identification of the corresponding teacher, student and/or parent.
4. The method of claim 3, wherein obtaining the first expression information according to the first video frame comprises:
preprocessing the first video frame to extract facial features, and obtaining real-time expression information according to the facial features;
inquiring an exclusive expression training model of the teacher, the student and/or the parent corresponding to the first terminal according to the identifier of the teacher, the student and/or the parent carried in the first teaching video data;
and inputting the real-time expression information of the teacher, the student and/or the parents into the special expression training model of the teacher, the student and/or the parents, and obtaining the first expression information after recognition.
5. The method of claim 4, wherein said converting first phonetic text information from the first instructional audio data comprises:
inquiring a dedicated voice training model of the first terminal corresponding to the teacher, the student and/or the parent according to the teacher, the student and/or the parent identification carried in the first teaching video data;
inputting the first teaching audio data into the exclusive voice training model, and obtaining voices of the teacher, the students and/or the parents after recognition;
and obtaining the first voice text information according to the voice conversion of the teacher, the student and/or the parent.
6. The method of claim 1, further comprising:
receiving a second teaching streaming media video sent by a second terminal, wherein the second teaching streaming media video comprises second teaching video data and second teaching audio data;
extracting a second video frame in the second teaching video data, and obtaining second expression information according to the second video frame;
converting the second teaching audio data to obtain second voice text information, and obtaining second semantic information according to the second voice text information;
obtaining feedback information according to the second expression information and the second semantic information;
and pushing the feedback information to a first terminal for displaying, and/or sending pre-stored teaching suggestions or test questions corresponding to the feedback information to the first terminal and/or a second terminal.
7. A remote teaching interaction server, the server comprising:
the video receiving module is used for receiving a first teaching streaming media video from a first terminal, wherein the first teaching streaming media video comprises first teaching video data and first teaching audio data;
the expression acquisition module is used for extracting a first video frame in the first teaching video data and obtaining first expression information according to the first video frame;
the semantic acquisition module is used for converting the first teaching audio data to obtain first voice text information and obtaining first semantic information according to the first voice text information, wherein the first semantic information comprises the literal meaning of the first voice text information combined with the current teaching context and teaching tone to obtain the spoken meaning;
the display content acquisition module is used for obtaining display content according to the first expression information and the first semantic information, and the display content is teaching evaluation;
and the display content pushing module is used for pushing the display content to a second terminal for displaying.
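The module structure of the claim-7 server can be sketched as one method per claimed module. All recognizers below are stubs with made-up rules; a real server would back them with trained expression and speech models.

```python
# Structural sketch of the claim-7 server: one method per claimed module
# (video receiving, expression acquisition, semantic acquisition,
# display content acquisition, display content pushing).
class TeachingInteractionServer:
    def receive_video(self, stream):            # video receiving module
        return stream["video"], stream["audio"]

    def get_expression(self, video_frames):     # expression acquisition module
        return "smile" if "smile" in video_frames else "neutral"

    def get_semantics(self, audio):             # semantic acquisition module
        text = audio                            # stub speech-to-text
        return "praise" if "good" in text else "neutral"

    def build_display(self, expression, semantics):  # display content acquisition
        if expression == "smile" or semantics == "praise":
            return "positive evaluation"
        return "neutral evaluation"

    def push(self, content):                    # display content pushing module
        return {"to": "second terminal", "content": content}


server = TeachingInteractionServer()
video, audio = server.receive_video({"video": ["smile"], "audio": "good job"})
expression = server.get_expression(video)
semantics = server.get_semantics(audio)
print(server.push(server.build_display(expression, semantics)))
```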
8. A remote teaching interaction method, the method comprising:
receiving display content from a server, wherein the display content is a teaching evaluation obtained by the server according to first expression information and first semantic information derived from a first teaching streaming media video sent by a first terminal, the first semantic information comprising the literal meaning of voice text information combined with the current teaching context and teaching tone to obtain the spoken meaning;
and displaying the display content on a user interface.
9. The method of claim 8, wherein the teaching evaluation comprises a positive evaluation and/or a negative evaluation;
in the case that the first expression information represents joy and/or the first semantic information represents praise, the teaching evaluation is a positive evaluation;
or, in the case that the first expression information represents sadness and/or the first semantic information represents criticism, the teaching evaluation is a negative evaluation.
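The claim-9 polarity rule (joy or praise yields a positive evaluation, sadness or criticism a negative one) reduces to a small mapping. The signal names here are illustrative labels, and the behavior for other combinations is an assumption, since the claim leaves it open.

```python
# Minimal sketch of the claim-9 evaluation rule.
def teaching_evaluation(expression, semantics):
    if expression == "joy" or semantics == "praise":
        return "positive"
    if expression == "sadness" or semantics == "criticism":
        return "negative"
    return None  # assumption: no evaluation emitted for other combinations


print(teaching_evaluation("joy", "neutral"))
```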
10. The method of claim 9, wherein the displaying the presentation on the user interface comprises:
displaying the display content on a user interface in a static or scrolling-play mode of a pattern identifier, wherein the pattern identifier comprises any one of an emoticon, a character symbol, and an animated expression;
or the display content is displayed in an audio playing mode.
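The claim-10 display options (a pattern identifier shown statically or scrolled, or audio playback) amount to a mode dispatch. The emoticon table and mode strings below are illustrative assumptions, not values from the patent.

```python
# Sketch of the claim-10 display modes: static pattern identifier,
# scrolling pattern identifier, or audio playback.
EMOTICONS = {"positive": ":-)", "negative": ":-("}


def render(evaluation, mode):
    symbol = EMOTICONS[evaluation]
    if mode == "static":
        return symbol
    if mode == "scroll":
        return f"<marquee>{symbol}</marquee>"   # scrolling-play placeholder
    if mode == "audio":
        return f"[play audio: {evaluation}]"    # audio-playback placeholder
    raise ValueError(f"unknown display mode: {mode}")


print(render("positive", "scroll"))
```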
11. The method according to any one of claims 8 to 10, further comprising:
acquiring a second streaming media video, wherein the second streaming media video comprises second video data and second audio data;
sending the second streaming media video to a server, so that the server obtains feedback information according to the second streaming media video, and the server pushes the feedback information to a first terminal for displaying;
receiving a pre-stored teaching suggestion or test question corresponding to the feedback information sent by the server;
displaying the teaching suggestions or the test questions on a user interface.
12. A remote teaching interaction terminal, wherein the terminal comprises:
the display content receiving module is used for receiving display content from a server, wherein the display content is a teaching evaluation obtained by the server according to first expression information and first semantic information derived from a first teaching streaming media video sent by a first terminal, the first semantic information comprising the literal meaning of voice text information combined with the current teaching context and teaching tone to obtain the spoken meaning;
and the display content display module is used for displaying the display content on a user interface.
13. A computer storage medium storing instructions adapted to be loaded by a processor and to perform the steps of the method according to any one of claims 1 to 6 and 8 to 11.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 1 to 6 and 8 to 11 when executing the program.
CN201910341620.4A 2019-04-26 2019-04-26 Remote teaching interaction method, server, terminal and system Active CN110033659B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910341620.4A CN110033659B (en) 2019-04-26 2019-04-26 Remote teaching interaction method, server, terminal and system
PCT/CN2020/081095 WO2020215966A1 (en) 2019-04-26 2020-03-25 Remote teaching interaction method, server, terminal and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910341620.4A CN110033659B (en) 2019-04-26 2019-04-26 Remote teaching interaction method, server, terminal and system

Publications (2)

Publication Number Publication Date
CN110033659A CN110033659A (en) 2019-07-19
CN110033659B true CN110033659B (en) 2022-01-21

Family

ID=67240422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910341620.4A Active CN110033659B (en) 2019-04-26 2019-04-26 Remote teaching interaction method, server, terminal and system

Country Status (2)

Country Link
CN (1) CN110033659B (en)
WO (1) WO2020215966A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033659B (en) * 2019-04-26 2022-01-21 北京大米科技有限公司 Remote teaching interaction method, server, terminal and system
CN110599829A (en) * 2019-09-20 2019-12-20 许昌学院 Background anti-interference modern education and teaching interaction method and interaction system
CN110610628B (en) * 2019-09-30 2021-05-04 浙江学海教育科技有限公司 Remote teaching method and device based on voice interaction, electronic equipment and medium
CN110675674A (en) * 2019-10-11 2020-01-10 广州千睿信息科技有限公司 Online education method and online education platform based on big data analysis
CN110991329A (en) * 2019-11-29 2020-04-10 上海商汤智能科技有限公司 Semantic analysis method and device, electronic equipment and storage medium
CN111260975B (en) * 2020-03-16 2022-09-06 安博思华智能科技有限责任公司 Method, device, medium and electronic equipment for multimedia blackboard teaching interaction
CN111507754B (en) * 2020-03-31 2023-11-14 北京大米科技有限公司 Online interaction method and device, storage medium and electronic equipment
CN111522971A (en) * 2020-04-08 2020-08-11 广东小天才科技有限公司 Method and device for assisting user in attending lessons in live broadcast teaching
CN111598746A (en) * 2020-04-15 2020-08-28 北京大米科技有限公司 Teaching interaction control method, device, terminal and storage medium
CN111556156A (en) * 2020-04-30 2020-08-18 北京大米科技有限公司 Interaction control method, system, electronic device and computer-readable storage medium
CN111711834B (en) * 2020-05-15 2022-08-12 北京大米未来科技有限公司 Recorded broadcast interactive course generation method and device, storage medium and terminal
CN111796846B (en) * 2020-07-06 2023-12-12 广州一起精彩艺术教育科技有限公司 Information updating method, device, terminal equipment and readable storage medium
CN112163491B (en) * 2020-09-21 2023-09-01 百度在线网络技术(北京)有限公司 Online learning method, device, equipment and storage medium
CN112232066A (en) * 2020-10-16 2021-01-15 腾讯科技(北京)有限公司 Teaching outline generation method and device, storage medium and electronic equipment
CN112580896A (en) * 2020-12-31 2021-03-30 南京谦萃智能科技服务有限公司 Knowledge point prediction method, knowledge point prediction device, knowledge point prediction equipment and storage medium
CN112767755A (en) * 2021-01-18 2021-05-07 黄河科技学院 Biochemical experiment teaching remote display interactive system
CN112861650A (en) * 2021-01-19 2021-05-28 北京百家科技集团有限公司 Behavior evaluation method, device and system
CN113420132A (en) * 2021-06-15 2021-09-21 读书郎教育科技有限公司 Method for quickly responding to questions asked in large live class forum
CN113506482A (en) * 2021-06-15 2021-10-15 浙江传媒学院 Remote teaching intelligent blackboard writing system
CN113625985B (en) * 2021-08-25 2024-01-23 京东方科技集团股份有限公司 Intelligent blackboard and display method and device thereof
CN113689300A (en) * 2021-08-26 2021-11-23 杭州高能投资咨询有限公司 Securities investment interactive teaching system
CN113723354A (en) * 2021-09-14 2021-11-30 联想(北京)有限公司 Information processing method and device
CN114267213A (en) * 2021-12-16 2022-04-01 郑州捷安高科股份有限公司 Real-time demonstration method, device, equipment and storage medium for practical training
CN114005079B (en) * 2021-12-31 2022-04-19 北京金茂教育科技有限公司 Multimedia stream processing method and device
CN114549249B (en) * 2022-02-24 2023-02-24 江苏兴教科技有限公司 Online teaching resource library management system and method for colleges
CN117391541A (en) * 2023-11-29 2024-01-12 宏湃信息技术(南京)有限公司 Online education quality monitoring method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203953A * 2017-07-14 2017-09-26 深圳极速汉语网络教育有限公司 An internet-based tutoring system using expression recognition and speech recognition, and an implementation method thereof
CN107801097A * 2017-10-31 2018-03-13 上海高顿教育培训有限公司 A video lesson playing method based on user interaction
CN108281052A * 2018-02-09 2018-07-13 郑州市第十中学 An online teaching system and online teaching method
CN108831222A * 2018-06-26 2018-11-16 肖哲睿 A cloud tutoring system
CN109147440A * 2018-09-18 2019-01-04 周文 An interactive education system and method
CN109147430A * 2018-10-19 2019-01-04 渭南师范学院 A remote education system based on a cloud platform
CN109377797A * 2018-11-08 2019-02-22 北京葡萄智学科技有限公司 Virtual figure teaching method and device
CN109448477A * 2018-12-28 2019-03-08 广东新源信息技术有限公司 A remote interactive teaching system and method
CN109614849A * 2018-10-25 2019-04-12 深圳壹账通智能科技有限公司 Remote teaching method, apparatus, device and storage medium based on biometric recognition
CN109615961A * 2019-01-31 2019-04-12 华中师范大学 A classroom teaching interaction network system and method

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872535B2 (en) * 2009-07-24 2020-12-22 Tutor Group Limited Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups
US8798521B2 (en) * 2011-07-20 2014-08-05 Collaborize Inc. Content creation in an online learning environment
CN104750380A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Information processing method and electronic equipment
CN104635574B * 2014-12-15 2017-07-25 山东大学 An early-education companion robot system for children
CN204322085U * 2014-12-15 2015-05-13 山东大学 An early-education companion robot for children
CN105045122A * 2015-06-24 2015-11-11 张子兴 Intelligent household natural interaction system based on audio and video
US20170278067A1 * 2016-03-25 2017-09-28 International Business Machines Corporation Monitoring activity to detect potential user actions
CN106205245A * 2016-07-15 2016-12-07 深圳市豆娱科技有限公司 Immersive online teaching system, method and apparatus
CN106878677B * 2017-01-23 2020-01-07 西安电子科技大学 Student classroom mastery degree evaluation system and method based on multiple sensors
CN106851216B * 2017-03-10 2019-05-28 山东师范大学 A classroom behavior monitoring system and method based on face and speech recognition
US10832587B2 * 2017-03-15 2020-11-10 International Business Machines Corporation Communication tone training
US20190065464A1 * 2017-08-31 2019-02-28 EMR.AI Inc. Artificial intelligence scribe
CN107657017B * 2017-09-26 2020-11-13 百度在线网络技术(北京)有限公司 Method and apparatus for providing voice service
CN108877336A * 2018-03-26 2018-11-23 深圳市波心幻海科技有限公司 Teaching method, cloud service platform and teaching system based on augmented reality
CN109034011A * 2018-07-06 2018-12-18 成都小时代科技有限公司 A method and system for applying emotional design to vehicle-owner tag identification
CN109147433A * 2018-10-25 2019-01-04 重庆鲁班机器人技术研究院有限公司 Children's language assisted teaching method, device and robot
CN109360458A * 2018-10-25 2019-02-19 重庆鲁班机器人技术研究院有限公司 Interest-based assisted teaching method, device and robot
CN109147765B (en) * 2018-11-16 2021-09-03 安徽听见科技有限公司 Audio quality comprehensive evaluation method and system
CN110033659B (en) * 2019-04-26 2022-01-21 北京大米科技有限公司 Remote teaching interaction method, server, terminal and system

Also Published As

Publication number Publication date
WO2020215966A1 (en) 2020-10-29
CN110033659A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN110033659B (en) Remote teaching interaction method, server, terminal and system
CN110570698B (en) Online teaching control method and device, storage medium and terminal
CN105632251B (en) 3D virtual teacher system and method with phonetic function
Ibrahim Implications of designing instructional video using cognitive theory of multimedia learning
CN110600033B (en) Learning condition evaluation method and device, storage medium and electronic equipment
CN107992195A A method, device, server and storage medium for processing teaching content
CN110491218A An online teaching interaction method, device, storage medium and electronic device
CN111651497B (en) User tag mining method and device, storage medium and electronic equipment
CN110405791B (en) Method and system for simulating and learning speech by robot
CN110569364A (en) online teaching method, device, server and storage medium
Zhan et al. The role of technology in teaching and learning Chinese characters
CN111711834B (en) Recorded broadcast interactive course generation method and device, storage medium and terminal
Sherwani et al. Orality-grounded HCID: Understanding the oral user
Mehta et al. Automated 3D sign language caption generation for video
CN110880324A (en) Voice data processing method and device, storage medium and electronic equipment
CN102881199A (en) Method and system for interactively reciting words
US20220309949A1 (en) Device and method for providing interactive audience simulation
CN111383493A (en) English auxiliary teaching system based on social interaction and data processing method
KR101227131B1 (en) Interactive language education system
Andrei et al. Designing an American Sign Language avatar for learning computer science concepts for deaf or hard-of-hearing students and deaf interpreters
CN116010569A (en) Online answering method, system, electronic equipment and storage medium
CN110046290B (en) Personalized autonomous teaching course system
CN110867187B (en) Voice data processing method and device, storage medium and electronic equipment
CN111933128B (en) Method and device for processing question bank of questionnaire and electronic equipment
CN109272983A (en) Bilingual switching device for child-parent education

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant