CN112614400A - Control method and system for educational robot and classroom teaching - Google Patents

Control method and system for educational robot and classroom teaching

Info

Publication number
CN112614400A
CN112614400A (application CN202011510458.3A)
Authority
CN
China
Prior art keywords
multimedia
mode
student
teaching
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011510458.3A
Other languages
Chinese (zh)
Other versions
CN112614400B (en)
Inventor
何宋西莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Luban Robot Technology Research Institute Co ltd
Chongqing Normal University
Original Assignee
Chongqing Luban Robot Technology Research Institute Co ltd
Chongqing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Luban Robot Technology Research Institute Co ltd, Chongqing Normal University filed Critical Chongqing Luban Robot Technology Research Institute Co ltd
Priority to CN202011510458.3A
Publication of CN112614400A
Application granted
Publication of CN112614400B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of educational robots, and in particular discloses a control method and system for an educational robot and classroom teaching. The system comprises a multimedia dual-arm education robot, a multimedia server, a multimedia student end and a multimedia teacher end, wherein the multimedia dual-arm education robot comprises a robot body, a multimedia communication module, a multimedia acquisition module, a multimedia information processing module, a teaching module and a state analysis module. The multimedia teacher end and the multimedia student end are used for sending teaching mode instructions to the multimedia communication module, the teaching mode instructions comprising a classroom teaching mode and a classroom tutoring mode; the classroom teaching mode comprises a teacher-speaking mode, a robot-assisted mode and a robot-speaking mode. In the teacher-speaking mode, the state analysis module analyzes and feeds back the students' learning condition, and the teaching module tutors and prompts the students based on that condition. By adopting the technical scheme of the invention, interaction in classroom teaching can be made more efficient.

Description

Control method and system for educational robot and classroom teaching
Technical Field
The invention relates to the technical field of multimedia dual-arm education robots, in particular to a control method and a control system for education robots and classroom teaching.
Background
With the development of society, education has become a topic of ever-increasing concern. To make effective use of precious learning time, it is important to improve students' learning efficiency, and increasing the participation and engagement of teachers and students is one means of doing so.
With the development of robot technology, multimedia dual-arm education robots are gradually being applied to teaching. However, the existing multimedia dual-arm education robots mainly answer simple questions and cannot form good interaction with students and teachers. The experience for students and teachers is poor, which affects their participation and engagement; an immersive, dynamic language learning environment cannot be provided, and the teaching and learning efficiency of teachers and students cannot be improved effectively.
Therefore, a control method and system for an educational robot and classroom teaching that enable efficient interaction are needed.
Disclosure of Invention
The invention provides a control method and system for an educational robot and classroom teaching that make interaction in classroom teaching more efficient.
In order to solve the technical problem, the present application provides the following technical solutions:
a control system for education robots and classroom teaching comprises a multimedia dual-arm education robot, a multimedia server, a multimedia student end and a multimedia teacher end;
the multimedia dual-arm education robot comprises a robot body, a multimedia communication module, a multimedia acquisition module, a multimedia information processing module, a teaching module and a state analysis module;
the multimedia server stores education resource data and a teaching control model; the education resource data comprises a case library, a student feature library and a teacher feature library;
the state analysis module is connected with the multimedia server, and a state analysis model is stored in the state analysis module; the state analysis model comprises a student expression recognition analysis model and a student learning state analysis model;
the multimedia acquisition module is used for acquiring sound wave signals and image signals of teachers or students; the multimedia information processing module is used for acquiring sound wave signals and image signals of teachers or students from the multimedia acquisition module, processing the sound wave signals to generate voice information, identifying the voice information to generate character information, and processing the image signals to generate image information; the multimedia communication module is used for sending voice information, character information or image information to a multimedia teacher end or a multimedia student end;
the multimedia student end and the multimedia teacher end are also used for collecting voice information and character information and selecting a multimedia communication mode; the multimedia communication mode comprises a broadcasting mode and a bidirectional mode; the bidirectional mode includes one-to-one, one-to-many and many-to-many;
the multimedia teacher end and the multimedia student end are used for sending teaching mode instructions to the multimedia communication module, and the teaching mode instructions comprise a classroom teaching mode and a classroom tutoring mode; the classroom teaching mode comprises a teacher-speaking mode, a robot-assisted mode and a robot-speaking mode; the robot-assisted mode types include discussion, personalized and game modes;
in the teacher-speaking mode, the state analysis module is used for analyzing and feeding back the learning condition of students, and the teaching module is used for guiding and prompting the students based on the learning condition of the students;
in the robot-speaking mode, the teaching module is used for playing the content of a preset teaching content library for teaching explanation, acquiring the voice information, character information and image information of the students from the multimedia information processing module, and tracking the students' learning process; after the content of the teaching content library has been played, the classroom tutoring mode is entered automatically.
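Purely as an illustration of how the teaching mode instructions described above could be dispatched, the following is a minimal Python sketch. It is not part of the patent: the enum values mirror the modes named above, while the robot object and all of its methods are hypothetical placeholders for the corresponding modules.

```python
from enum import Enum

class TeachingMode(Enum):
    TEACHER_SPEAKING = "teacher-speaking"
    ROBOT_ASSISTED = "robot-assisted"        # sub-types: discussion / personalized / game
    ROBOT_SPEAKING = "robot-speaking"
    CLASSROOM_TUTORING = "classroom-tutoring"

def handle_mode_instruction(robot, mode, assist_type=None):
    """Dispatch a teaching-mode instruction received from a teacher or student end."""
    if mode is TeachingMode.TEACHER_SPEAKING:
        robot.analyze_and_feed_back_student_state()   # state analysis module
        robot.tutor_and_prompt_students()             # teaching module
    elif mode is TeachingMode.ROBOT_ASSISTED:
        robot.run_assisted_session(assist_type)       # "discussion", "personalized" or "game"
    elif mode is TeachingMode.ROBOT_SPEAKING:
        robot.play_teaching_content_library()         # explain the preset teaching content
        robot.track_learning_process()
        # after the content library has been played, enter classroom tutoring automatically
        handle_mode_instruction(robot, TeachingMode.CLASSROOM_TUTORING)
    else:
        robot.run_classroom_tutoring()
```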
The basic scheme principle and the beneficial effects are as follows:
In this scheme, teachers and students can send teaching mode instructions through the multimedia teacher end and the multimedia student end according to the actual teaching situation, so that the multimedia dual-arm education robot carries out different actions in different teaching modes. This is more targeted and allows better interaction with teachers and students. Meanwhile, teachers and students can also select a multimedia communication mode to realize broadcast and bidirectional information transmission.
The multimedia dual-arm education robot manages the students and analyzes their learning condition in real time, and the teaching module guides and prompts the students based on that condition, which can improve learning efficiency.
The multimedia information processing module changes the traditional single-channel teaching mode into multiple channels (sound + text + images). The interactivity is stronger, and the multiple streams of dynamic information stimulate the students' brains, which helps improve their long-term memory and learning efficiency.
In conclusion, the scheme makes the interaction between students, teachers and the multimedia dual-arm education robot more efficient and closer.
Furthermore, the teaching module is also used for acquiring the text information of the teacher class explanation content sent by the multimedia teacher end, analyzing the text information, generating an explanation result, and answering the students based on the explanation result and the preset teaching content.
This makes it convenient to answer students in accordance with the teacher's explanation.
Furthermore, the multimedia dual-arm education robot further comprises a broadcasting module, and the broadcasting module is further used for acquiring voice information from the multimedia information processing module and playing the voice information.
Furthermore, the multimedia communication module is also used for acquiring the updated data of the case library, the student feature library and the teacher feature library from the multimedia server.
The case library, the student feature library and the teacher feature library are convenient to update.
The multimedia teacher end is further used for sending a multimedia control instruction to the multimedia communication module, and the multimedia communication module is further used for sending a control signal to the multimedia end according to the multimedia control instruction.
This allows the robot to assist teachers in controlling the multimedia end, simplifying the teachers' operations.
A control method for an educational robot and classroom teaching comprises the following steps:
S1, selecting a teaching mode by the multimedia student end or the multimedia teacher end, wherein the teaching mode comprises a teacher-speaking mode, a robot-speaking mode and a robot-assisted mode; jumping to S3 when the teacher-speaking mode or the robot-speaking mode is selected, and jumping to S2 when the robot-assisted mode is selected;
S2, acquiring mode type selection information in the robot-assisted mode, and jumping to S3; the mode type selection information comprises a personalized mode, a discussion mode and a game mode;
S3, the multimedia dual-arm education robot starts teaching and collects the students' learning state information during learning, the learning state information comprising image information, character information and voice information;
S4, the multimedia dual-arm education robot analyzes each student's learning state based on the image information, character information and voice information collected during learning, and judges whether the student is in a good state, wherein poor states include being distracted (dazing), looking around, and fidgeting; jumping to S5 when the student's state is poor, and jumping to S6 when the student's state is good;
S5, the multimedia dual-arm education robot reminds the student and jumps to S3 after the reminder;
S6, the multimedia dual-arm education robot gives each student a classroom test;
S7, the multimedia dual-arm education robot collects the students' learning result information and analyzes each student's test result based on the learning result information;
S8, judging whether a single student's answer accuracy is lower than N1; if it is lower than N1, jumping to S9, and if not, jumping to S11;
S9, judging whether the number of students whose answer accuracy is lower than N1 exceeds N; if so, jumping to S14, and if not, jumping to S10;
S10, regrouping the students, adjusting the teaching method, and jumping to S1;
S11, judging whether the single student's answer accuracy is lower than N2; if it is lower than N2, jumping to S2, and if not, jumping to S12;
S12, judging whether the single student's answer accuracy is higher than N3; if not, jumping to S13, and if so, jumping to S14; wherein N1 < N2 < N3;
S13, placing students whose answer accuracy is not higher than N3 into a study group, and jumping to S6;
S14, acquiring the teacher's summary and playing it.
In this scheme, teachers and students can select the teaching mode through the teacher end and the student end according to the actual teaching situation, so that the multimedia dual-arm education robot carries out different actions in different teaching modes. This is more targeted and allows better interaction with teachers and students. The multimedia dual-arm education robot manages the students and analyzes their learning condition in real time, and the teaching module guides and prompts the students based on that condition, which can improve learning efficiency. In conclusion, the scheme makes the interaction between students, teachers and the multimedia dual-arm education robot more efficient and closer.
Drawings
FIG. 1 is a logic block diagram of a control system for an educational robot and classroom teaching;
FIG. 2 is a schematic diagram of information acquisition by the visual tracking unit, the voice acquisition unit and the communication module in the control system for the educational robot and classroom teaching according to the second embodiment;
FIG. 3 is a schematic diagram of the information transfer of the visual tracking unit in the control system for the educational robot and classroom teaching according to the second embodiment;
FIG. 4 is a flowchart of the communication modes in the control system for the educational robot and classroom teaching according to the second embodiment;
FIG. 5 is a flowchart of the control method for the educational robot and classroom teaching according to the third embodiment.
Detailed Description
The following is further detailed by way of specific embodiments:
example one
As shown in fig. 1, the control system for education robot and classroom teaching of the present embodiment includes a multimedia dual-arm education robot, a multimedia server, a multimedia student terminal, a multimedia teacher terminal, and a multimedia terminal.
The multimedia dual-arm education robot comprises a robot body, a multimedia communication module, a multimedia acquisition module, a multimedia information processing module, a teaching module, a state analysis module and a broadcasting module.
The multimedia server stores education resource data and a teaching control model; the education resource data comprises a case library, a student feature library and a teacher feature library;
the state analysis module is connected with the multimedia server, and a state analysis model is stored in the state analysis module; the state analysis model comprises a student expression recognition analysis model and a student learning state analysis model.
The multimedia teacher end and the multimedia student end are used for sending teaching mode instructions to the multimedia communication module, and the teaching mode instructions comprise a classroom teaching mode and a classroom tutoring mode; the classroom teaching mode comprises a teacher-speaking mode, a robot-assisted mode and a robot-speaking mode; the robot-assisted mode types include discussion, personalized and game modes;
in the teacher-speaking mode, the state analysis module is used for analyzing and feeding back the learning condition of students, and the teaching module is used for guiding and prompting the students based on the learning condition of the students;
in the robot-speaking mode, the teaching module is used for playing the content of a preset teaching content library for teaching explanation, acquiring the voice information, character information and image information of the students from the multimedia information processing module, and tracking the students' learning process; after the content of the teaching content library has been played, the classroom tutoring mode is entered automatically;
the multimedia acquisition module is used for acquiring sound wave signals and image signals of teachers or students; the multimedia information processing module is used for acquiring sound wave signals and image signals of teachers or students from the multimedia acquisition module, processing the sound wave signals to generate voice information, identifying the voice information to generate character information, and processing the image signals to generate image information; the multimedia communication module is used for sending voice information, character information or image information to a multimedia teacher end or a multimedia student end;
the multimedia student end and the multimedia teacher end are also used for collecting voice information and character information and selecting a multimedia communication mode; the multimedia communication mode comprises a broadcasting mode and a bidirectional mode; the bidirectional mode includes one-to-one, one-to-many and many-to-many. Through the relay of the multimedia dual-arm education robot, two-way communication between the multimedia teacher end and the multimedia student ends can be realized.
The teaching module is also used for acquiring the text information of the teacher classroom explanation content sent by the multimedia teacher end, analyzing the text information, generating an explanation result, and answering the students based on the explanation result and the preset teaching content.
The broadcasting module is used for acquiring the voice information from the multimedia information processing module and playing the voice information. The broadcasting module can play a role in enhancing and amplifying the sound of the teacher.
The multimedia communication module is also used for acquiring the updated data of the case library, the student feature library and the teacher feature library from the multimedia server.
The multimedia teacher end is further used for sending a multimedia control instruction to the multimedia communication module, and the multimedia communication module is further used for sending a control signal to the multimedia end according to the multimedia control instruction.
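To make the information flow among these modules concrete, the following is a minimal Python sketch of the processing pipeline described above (sound wave to voice, voice to text, image signal to image information, and relay to the teacher and student ends). It is an illustration only: every class and method name is an assumption, and speech recognition and image processing are stubbed out rather than implemented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimediaMessage:
    """Bundle of the three information channels produced by the processing module."""
    voice: bytes                    # processed speech audio
    text: Optional[str] = None      # character (text) information recognized from the speech
    image: Optional[bytes] = None   # processed image frame

class MultimediaInformationProcessing:
    """Turns raw acoustic/image signals into voice, text and image information."""

    def process(self, sound_wave: bytes, image_signal: bytes) -> MultimediaMessage:
        voice = self._denoise(sound_wave)           # sound wave signal -> voice information
        text = self._recognize_speech(voice)        # voice information -> text information
        image = self._process_image(image_signal)   # image signal -> image information
        return MultimediaMessage(voice=voice, text=text, image=image)

    def _denoise(self, sound_wave: bytes) -> bytes:
        return sound_wave             # placeholder: a real system would filter/encode the audio

    def _recognize_speech(self, voice: bytes) -> str:
        return "<recognized text>"    # placeholder for an ASR engine

    def _process_image(self, image_signal: bytes) -> bytes:
        return image_signal           # placeholder: crop/compress/annotate the frame

class MultimediaCommunication:
    """Relays processed information to the multimedia teacher or student ends."""

    def send(self, msg: MultimediaMessage, targets) -> None:
        for end in targets:
            end.receive(msg)          # each end is assumed to expose a receive() method
```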
Example two
This embodiment differs from the first embodiment in that the multimedia end includes an electronic blackboard and a projector.
In this embodiment, the robot body structurally comprises a control module, a humanoid head, a liftable neck, two arms, a body, a wheeled base and the like. The base is fixedly connected to the lower part of the body, the upper part of the body is rotatably connected to the two symmetrical arms, the top of the body is rotatably connected to the lower end of the neck, and the upper end of the neck is fixedly connected to the head.
As shown in fig. 2 and fig. 3, the multimedia acquisition module in this embodiment comprises a visual tracking unit and a voice acquisition unit. The visual tracking unit comprises a first concentrator, a computer and several cameras, the cameras being connected to the computer through the concentrator. The voice acquisition unit comprises a second concentrator, a computer and several sound pickups, the pickups being connected to the computer through the concentrator. The visual tracking unit and the voice acquisition unit may share one computer or use two different computers; two different computers are used in this embodiment. In this embodiment there are 2 cameras, installed in the visual zone of the humanoid head's face, and 4 sound pickups, installed at the left and right ears of the humanoid head, at the mouth area, and in the central zone at the back of the head.
The neck comprises 2 drive motors in signal connection with the control module. The control module controls the drive motors to rotate and thereby drive the neck, so that the humanoid head can rotate 360° and move up and down to adjust the shooting angle of the cameras. This embodiment further includes touch-sensitive electronic skin arranged on the outside of the neck to provide human-like perception.
The two arms consist of 2 symmetrical small 3-degree-of-freedom mechanical arms, each ending in a flexible manipulator with 5 rotatable fingers, so that various gesture actions can be performed or a handheld wireless microphone can be held. This is prior art and is not described further here.
The base comprises 2 drive motors, 2 driving wheels, 1 driven wheel and anti-collision sensors. The anti-collision sensors consist of ultrasonic and radar sensors and are used to prevent collisions. The robot body moves by means of the base; this is prior art and is not described further here.
The multimedia teacher end and the multimedia student ends are mobile phones and/or tablet computers. A multimedia student end can be provided for each student, or several students can share one.
A remote control is used to control the multimedia end; for example, the teacher controls the electronic blackboard through the remote control.
The state analysis module is used for acquiring image information from the multimedia acquisition module, identifying each student, analyzing the student's state based on the image information, and judging whether the state is good; if the state is poor, it generates voice reminder information, and the multimedia communication module sends the voice reminder information to that student's multimedia student end. The state analysis module also records how many times voice reminder information has been sent; when the count exceeds a threshold, it generates a touch reminder instruction, the control module controls the base to move to the student's position according to the instruction, and, once there, controls the flexible manipulator to touch the student. Positioning based on image information, path planning during movement and manipulator control are widely used in existing robots and belong to the prior art, so they are not described further here. In this embodiment, poor states include staring blankly, looking around, fidgeting, not speaking within a preset time, discussion content deviating from the topic, sleeping, playing with a mobile phone, and the like.
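A minimal sketch of the reminder escalation just described is given below, assuming hypothetical module interfaces. The class and method names, the list of poor states and the reminder threshold are illustrative only; expression recognition, path planning and manipulator control are treated as black boxes because, as noted above, they are prior art.

```python
BAD_STATES = {"staring blankly", "looking around", "fidgeting",
              "silent too long", "off topic", "sleeping", "playing phone"}

class StateAnalysisModule:
    REMINDER_THRESHOLD = 3  # assumed value: after this many voice reminders, escalate

    def __init__(self, communication, control):
        self.communication = communication   # multimedia communication module (assumed API)
        self.control = control               # robot body control module (assumed API)
        self.reminder_count = {}             # student id -> number of voice reminders sent

    def handle_state(self, student_id: str, state: str) -> None:
        """Analyze one recognized state and remind the student if it is a poor state."""
        if state not in BAD_STATES:
            return
        count = self.reminder_count.get(student_id, 0) + 1
        self.reminder_count[student_id] = count
        if count <= self.REMINDER_THRESHOLD:
            # send voice reminder information to that student's multimedia student end
            self.communication.send_voice_reminder(student_id)
        else:
            # escalate: move to the student and touch them with the flexible manipulator
            self.control.move_to_student(student_id)
            self.control.touch_with_manipulator(student_id)
```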
As shown in fig. 4, in this embodiment, when the multimedia communication mode is broadcast, the multimedia student end or the multimedia teacher end sends the voice information or text information through the multimedia communication module to all other multimedia student ends and multimedia teacher ends;
when the multimedia communication mode is bidirectional one-to-one, the multimedia student end or the multimedia teacher end sends the voice information or text information through the multimedia communication module to a single designated multimedia student end or multimedia teacher end;
when the multimedia communication mode is bidirectional one-to-many, the multimedia student end or the multimedia teacher end sends the voice information or text information through the multimedia communication module to several designated multimedia student ends or multimedia teacher ends;
when the multimedia communication mode is bidirectional many-to-many, the multimedia student end or the multimedia teacher end sends the voice information or text information through the multimedia communication module to the designated multimedia student ends or multimedia teacher ends, and receives voice information or text information from those designated ends through the multimedia communication module.
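The four routing cases above can be summarized in a single dispatch function. This is only an illustrative sketch: the mode strings and the receive()/collect_reply() interface on each end are assumptions, and delivery is reduced to a simple fan-out.

```python
def route_message(mode, sender, message, all_ends, targets=None):
    """Deliver a voice/text message according to the selected multimedia communication mode.

    mode     -- "broadcast", "one-to-one", "one-to-many" or "many-to-many"
    sender   -- the multimedia student end or teacher end that produced the message
    all_ends -- every registered multimedia student end and teacher end
    targets  -- the designated end(s) for the bidirectional modes
    """
    if mode == "broadcast":
        # send to all other student ends and teacher ends
        for end in all_ends:
            if end is not sender:
                end.receive(message)
    elif mode == "one-to-one":
        targets[0].receive(message)            # a single designated end
    elif mode == "one-to-many":
        for end in targets:                    # several designated ends
            end.receive(message)
    elif mode == "many-to-many":
        # send to the designated ends and accept their replies in return
        for end in targets:
            end.receive(message)
            reply = end.collect_reply()        # assumed method on each end
            if reply is not None:
                sender.receive(reply)
    else:
        raise ValueError(f"unknown multimedia communication mode: {mode}")
```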
For example, these communication modes are used as follows:
1. Broadcast interaction mode
(1) The teacher teaches interactively in broadcast mode
The teacher speaks, and the sound reaches the students through the air. At the same time, the multimedia dual-arm education robot collects the teacher's voice information, recognizes it to generate text information, and sends the text information to the multimedia end, such as the electronic blackboard. The robot also sends the voice information to each multimedia student end, and the multimedia student ends play it.
(2) The multimedia dual-arm education robot interacts with teachers and students in broadcast mode
The multimedia dual-arm education robot receives a multimedia control instruction from the multimedia teacher end, for example an instruction to start the projector; it sends a control signal to the projector, and the projector is turned on.
The multimedia teacher end sends a robot-assisted teaching mode instruction to the multimedia dual-arm education robot, and the robot automatically enters the robot-assisted mode. The robot sends the teacher's voice information to the multimedia student ends or to other multimedia teacher ends, converts the voice information into text information, and sends the text information to the electronic blackboard, the multimedia student ends and the multimedia teacher end for display.
The multimedia dual-arm education robot judges the students' states and reminds students who are in a poor state.
After the teacher finishes the teaching content, the multimedia dual-arm education robot asks questions, answers questions and assigns homework targeted at the students.
When the multimedia teacher end sends a robot-speaking teaching mode instruction to the multimedia dual-arm education robot, the robot automatically enters the robot-speaking mode and plays the content of the preset teaching content library for teaching.
(3) Students interact with the teacher and other students in broadcast mode
The multimedia student end collects a student's voice information and sends it to the multimedia dual-arm education robot; the robot sends the voice information to the multimedia teacher end and to the other students' multimedia student ends. The robot also converts the voice information into text information and sends it to the multimedia end, such as the electronic blackboard, for display.
2. Two-way interactive mode
(1) Interaction of the teacher with other teachers, students and the multimedia dual-arm education robot
The teacher can deliver the lecture content to students or other teachers in two ways: directly as sound through the air, and via the multimedia dual-arm education robot.
Students or other teachers raise questions (as text information or voice information) through the multimedia student end or multimedia teacher end, and the questions are sent through the multimedia dual-arm education robot to the multimedia teacher end of the lecturing teacher. The lecturing teacher answers the received questions in a one-to-one or one-to-many mode.
In the one-to-many case, the multimedia dual-arm education robot can also transmit the questions or answers to the electronic blackboard.
In the one-to-one case, the multimedia dual-arm education robot transmits the question or answer only to the user end (multimedia teacher end or multimedia student end) of the lecturing teacher or of the questioner.
(2) Interaction of students with the teacher, other students and the multimedia dual-arm education robot
First, the student sends a teaching mode instruction through the multimedia student end.
Second, the student selects a multimedia communication mode through the multimedia student end: a bidirectional mode, for example one-to-one, one-to-many or many-to-many.
Third, the student carries out the interaction. The student receives the teacher's instructions and the feedback of the multimedia dual-arm education robot through the multimedia student end, and asks and answers questions. The multimedia student end can also carry out voice or text interaction with the teacher, other students and the multimedia dual-arm education robot.
Example three
As shown in fig. 5, the present embodiment provides a control method for an educational robot and classroom teaching, comprising the following steps:
S1, selecting a teaching mode by the multimedia student end or the multimedia teacher end, wherein the teaching mode comprises a teacher-speaking mode, a robot-speaking mode and a robot-assisted mode; jumping to S3 when the teacher-speaking mode or the robot-speaking mode is selected, and jumping to S2 when the robot-assisted mode is selected;
S2, acquiring mode type selection information in the robot-assisted mode, and jumping to S3; the mode type selection information comprises a personalized mode, a discussion mode and a game mode;
S3, the multimedia dual-arm education robot starts teaching and collects the students' learning state information during learning, the learning state information comprising image information, character information and voice information;
S4, the multimedia dual-arm education robot analyzes each student's learning state based on the image information, character information and voice information collected during learning, and judges whether the student is in a good state, wherein poor states include being distracted (dazing), looking around, and fidgeting; jumping to S5 when the student's state is poor, and jumping to S6 when the student's state is good;
S5, the multimedia dual-arm education robot reminds the student and jumps to S3 after the reminder;
S6, the multimedia dual-arm education robot gives each student a classroom test;
S7, the multimedia dual-arm education robot collects the students' learning result information and analyzes each student's test result based on the learning result information;
S8, judging whether a single student's answer accuracy is lower than N1; if it is lower than N1, jumping to S9, and if not, jumping to S11;
S9, judging whether the number of students whose answer accuracy is lower than N1 exceeds N; if so, jumping to S14, and if not, jumping to S10;
S10, regrouping the students, adjusting the teaching method, and jumping to S1;
S11, judging whether the single student's answer accuracy is lower than N2; if it is lower than N2, jumping to S2, and if not, jumping to S12;
S12, judging whether the single student's answer accuracy is higher than N3; if not, jumping to S13, and if so, jumping to S14; wherein N1 < N2 < N3. In this embodiment, N1 is 60%, N2 is 80% and N3 is 90%; by setting different values for N1, N2 and N3, students can be classified as passing, good or excellent according to how well they have mastered the knowledge (a sketch of this grading logic is given after these steps);
S13, placing students whose answer accuracy is not higher than N3 into a study group, and jumping to S6;
S14, acquiring the teacher's summary and playing it.
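The branching of steps S8 to S14 can be written compactly as a decision function. The sketch below uses the N1/N2/N3 values given in this embodiment; the class-level threshold N and the function name are assumptions, and the returned strings merely label the step that would be executed next.

```python
# Thresholds from this embodiment; N is an assumed class-level limit on low scorers
N1, N2, N3 = 0.60, 0.80, 0.90   # passing / good / excellent accuracy cut-offs
N = 10

def next_step(accuracy: float, num_below_n1: int) -> str:
    """Return the next step (S1-S14) implied by one student's test accuracy."""
    if accuracy < N1:                               # S8
        if num_below_n1 > N:                        # S9
            return "S14: play the teacher's summary"
        return "S10: regroup students, adjust the teaching method, return to S1"
    if accuracy < N2:                               # S11
        return "S2: re-enter the robot-assisted mode (personalized/discussion/game)"
    if accuracy <= N3:                              # S12 -> S13
        return "S13: place the student in a study group and retest (S6)"
    return "S14: play the teacher's summary"        # accuracy above N3
```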
Example four
This embodiment differs from the first embodiment in that it gives a worked example of the control system for the educational robot and classroom teaching: the class has 50 students, grouped into teams of 5 on average, and the lesson is Lu Xun's "Village Opera".
During teaching, the teacher can control the multimedia dual-arm education robot through the multimedia teacher end and thereby control the pace of the class.
For example, in the teacher-speaking mode, the lecturing teacher begins by asking: "What is the village opera in people's eyes and minds?" The students are given five minutes to answer. Each student's answer is transmitted through the microphone of the multimedia student end to the multimedia dual-arm education robot, which displays the answers in real time as slides on the electronic blackboard and on the multimedia student ends (e.g. iPads).
Then the lecturing teacher shows the village opera to everyone through a video, for example saying through the microphone: "You all have wonderful ideas; do they match the real situation? Let us verify them with a video. Please enjoy the village opera of Anjiu." The multimedia dual-arm education robot recognizes the lecturing teacher's voice information to generate text information and extracts the keywords, namely "village opera of Anjiu". The robot then responds: it searches for the video online and controls the projector to project it onto the electronic blackboard.
After watching the village opera video, the teacher may ask again: "What new ideas do you have about the village opera? Please discuss and then summarize." While the students discuss, they can ask the multimedia dual-arm education robot through the multimedia student ends to retrieve related material such as the historical background, origin and cultural significance of the village opera. After receiving the instruction, the robot retrieves the material online, selects the relevant content and transmits it to the students' iPads. Students can ask the teacher questions through the iPad or discuss among themselves, so that the teacher can assist one group while still attending to the other groups in time. During the group discussion, the multimedia dual-arm education robot monitors the whole process to ensure that the discussion goes smoothly. If it finds that a student has not spoken for a long time, that a student's discussion content deviates from the topic, or that a student is sleeping during the group discussion, it sends voice reminder information to that student's iPad; it can also send the voice reminder information to the multimedia student ends of nearby students or to the multimedia teacher end of the lecturing teacher to report the situation. The robot can also move to the student's side and touch the student with the flexible manipulator as a prompt.
After detecting that the students' discussion has ended (for example, a student concludes that the village opera is performed on a stage built over the water, is performed at night, and the audience must sit in boats to watch it), the multimedia dual-arm education robot enters the robot-assisted mode, extracts the effective information to generate a summary, and, after the group has confirmed and revised it, sends the summary to the teacher so that the teacher can grasp the overall learning progress and move the class forward in time.
In the student-led discussion mode, the 10 groups discuss separately; each group leader puts forward a prepared proposition for discussion, and records and summaries are made. Each student communicates in multiple directions through voice, text and video via the multimedia dual-arm education robot.
For example, the question posed by a group leader is: "Why did the author say he never again ate beans as good as those of that night?" The students then discuss around this question: was it really because the beans that night were delicious? Which sentences in the text support that conclusion? Or is it related to the historical background? While the students discuss and exchange ideas, the multimedia dual-arm education robot records and analyzes their voice information and combs out the threads of the discussion: Was it really because the beans that night tasted good? No. In the text, the beans were picked at night, seasoned only with salt, and shelled by the young author and his companions in a busy rush, which suggests the cooked beans were not especially delicious. Why, then, did the author say he never again ate such good beans? From the context, the author had just seen the long-awaited village opera and cooked the beans with his companions in a playful, makeshift way; what made them memorable was not the taste but the mood of that moment, a pure happiness and contentment that turned everything of that day into a special memory, which can be linked to the author's reflections when set against his adult life. The conclusion is that the beans are in fact a symbol of a simple, beautiful life and of its people and events, and express the author's yearning for, love of and faith in such a life.
In the classroom tutoring mode, the multimedia dual-arm education robot also poses questions to each group or each student according to the teaching content and the students' answering performance, requires the students to answer, analyzes their learning condition from the answers, and gives each student individual guidance aimed at their weaknesses, so that everyone participates and the teaching content is mastered together. For example, when the group is discussing "Why did the author say he never again ate beans as good as those of that night?" and a student cannot answer, the robot can ask follow-up questions: "Is it because the beans that night really tasted good? How does the original text describe the cooked beans?", thereby helping the student move the discussion forward. After the discussion ends, it asks each student: "Why did Mr. Lu Xun remember only the beans of that evening?" If a student answers, "Because that is a memory of Mr. Lu Xun's childhood," the robot can tell that the student has not really understood the meaning of the text, and can guide the student to read part of the text again to explore its characteristics and the purpose of the writing.
Example five
This embodiment differs from the first embodiment in that it further comprises a progress control module. The progress control module marks the students as speaker No. n-1, speaker No. n, speaker No. n+1, and so on, according to the order in which they speak during the classroom discussion, where n ≥ 2.
The progress control module collects the speech information of speaker No. n-1, extracts keywords from it, counts how often each keyword occurs, and sorts the keywords from high to low frequency.
The progress control module likewise collects the speech information of speaker No. n, extracts keywords from it, counts how often each keyword occurs, and sorts the keywords from high to low frequency.
The progress control module then compares the keywords of speaker No. n-1 with those of speaker No. n and judges whether the high-frequency keywords of speaker No. n are all either low-frequency keywords of speaker No. n-1 or keywords that did not appear in speaker No. n-1's speech. If so, it acquires the image information of speakers No. n-1 and No. n from the acquisition module, judges based on the image information whether their expressions match a preset expression, and records the result if they do. "High" and "low" frequency can be defined by a set count standard (at or above the standard is high, below it is low), by a ranking midpoint (above the midpoint is high, below it is low), or by requiring both the count standard and the ranking midpoint to be met; in this embodiment the ranking-midpoint approach is used.
The progress control module also starts timing after speaker No. n finishes speaking and measures the interval before speaker No. n+1 starts. If the interval is smaller than a threshold, it acquires the image information of speaker No. n+1, judges based on the image information whether that speaker's expression matches a preset expression, and records the result if it does. In this embodiment, the preset expressions include laughing and anger.
The progress control module generates an off-topic discussion prompt when the recorded expression changes of speaker No. n-1, speaker No. n and speaker No. n+1 all match the preset expressions.
Existing semantic recognition involves keyword recognition, vector recognition and other approaches. When applied to judging whether students' discussion has deviated from the topic, keyword recognition requires building a keyword library that matches the classroom content and a keyword library of content unsuited to the discussion, and then collecting the students' speech and matching it against these libraries; its recognition accuracy is low and depends on how rich the keyword libraries are. Vector recognition demands a lot of computing power and is slow: if the multimedia dual-arm education robot performs the recognition locally, the hardware requirements and cost are high; if the recognition is done by a remote multimedia server, the server's load is heavy because many classes run at the same time, so processing is even slower and the result may arrive only after the students' discussion has ended, which means off-topic discussion cannot be corrected in time.
This embodiment supplements keyword recognition by judging whether the high-frequency keywords of speaker No. n are all either low-frequency keywords of speaker No. n-1 or keywords that did not appear in speaker No. n-1's speech. This reveals whether speaker No. n is continuing the discussion from the main content of speaker No. n-1. If all of speaker No. n's high-frequency keywords are low-frequency keywords of speaker No. n-1, speaker No. n may not have grasped the key point of speaker No. n-1's remarks, or may have found a new idea inspired by them. If all of the high-frequency keywords never appeared in speaker No. n-1's speech, speaker No. n may have drifted away from the discussion topic entirely. Which of these is the case is judged by also looking at expressions: in a normal discussion students' expressions do not change much, but if a student seizes on some non-essential small point in an utterance and blows it up, an angry expression may appear, and when students chat about things unrelated to the class, happy expressions may appear; both increase the likelihood that the discussion has gone off topic.
Finally, if speaker No. n+1 starts speaking within less than the threshold interval after speaker No. n finishes, i.e. follows up very quickly, and speaker No. n+1's expression matches the preset expression, the students are probably chatting about unrelated topics (chatting happily with little thought) or arguing over some point, all of which deviate from the discussion topic, so an off-topic discussion prompt is generated. This embodiment can effectively detect deviations from the discussion topic during student discussions and remind the students in time, enhancing the teaching effect.
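As a concrete illustration of this mechanism, the Python sketch below combines the median-rank keyword split with the expression and speaking-interval checks. The function names, the gap threshold value and the boolean expression flags are assumptions made for the sketch only; keyword extraction and facial expression recognition are taken as given inputs.

```python
from collections import Counter

def split_by_median_rank(utterance_keywords):
    """Rank one speaker's keywords by frequency and split them at the median rank
    into high-frequency and low-frequency sets (the strategy used in this embodiment)."""
    counts = Counter(utterance_keywords)
    ranked = [kw for kw, _ in counts.most_common()]
    mid = len(ranked) // 2 or 1
    return set(ranked[:mid]), set(ranked[mid:])

def discussion_off_topic(kw_prev, kw_curr,
                         expr_prev_ok, expr_curr_ok,
                         gap_seconds, expr_next_ok,
                         gap_threshold=2.0):
    """Heuristic off-topic check over speakers n-1, n and n+1.

    kw_prev / kw_curr -- keyword lists extracted from speakers n-1 and n
    expr_*_ok         -- True if that speaker's expression matches a preset
                         expression (e.g. laughing or angry)
    gap_seconds       -- pause between speaker n finishing and speaker n+1 starting
    gap_threshold     -- assumed value; the embodiment only requires it to be a threshold
    """
    high_prev, low_prev = split_by_median_rank(kw_prev)
    high_curr, _ = split_by_median_rank(kw_curr)

    # Speaker n's dominant keywords are all either minor or absent in speaker n-1's speech
    drifted = all(kw in low_prev or kw not in (high_prev | low_prev) for kw in high_curr)

    quick_follow_up = gap_seconds < gap_threshold and expr_next_ok
    return drifted and expr_prev_ok and expr_curr_ok and quick_follow_up
```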
The above are merely embodiments of the present invention. Common general knowledge about the known specific structures and characteristics in these schemes is not described here in detail: a person skilled in the art knows the common technical knowledge in the field before the application date or the priority date, can access all the prior art in the field, and is able to apply conventional experimental means, so such a person can perfect and implement this scheme in combination with their own abilities in light of the teaching provided in this application, and certain typical known structures or known methods should not become obstacles to implementing the invention. It should also be noted that a person skilled in the art can make several variations and improvements without departing from the structure of the invention; these should likewise be regarded as falling within the protection scope of the invention and do not affect the effect of implementing the invention or the practicability of the patent. The scope of protection of this application is defined by the claims, and the description, including the specific embodiments, may be used to interpret the content of the claims.

Claims (6)

1. A control system for education robots and classroom teaching comprises a multimedia dual-arm education robot, a multimedia server, a multimedia student end and a multimedia teacher end;
the multimedia dual-arm education robot is characterized by comprising a robot body, a multimedia communication module, a multimedia acquisition module, a multimedia information processing module, a teaching module and a state analysis module;
the multimedia server stores education resource data and a teaching control model; the education resource data comprises a case library, a student feature library and a teacher feature library;
the state analysis module is connected with the multimedia server, and a state analysis model is stored in the state analysis module; the state analysis model comprises a student expression recognition analysis model and a student learning state analysis model;
the multimedia acquisition module is used for acquiring sound wave signals and image signals of teachers or students; the multimedia information processing module is used for acquiring sound wave signals and image signals of teachers or students from the multimedia acquisition module, processing the sound wave signals to generate voice information, identifying the voice information to generate character information, and processing the image signals to generate image information; the multimedia communication module is used for sending voice information, character information or image information to a multimedia teacher end or a multimedia student end;
the multimedia student end and the multimedia teacher end are also used for collecting voice information and character information and selecting a multimedia communication mode; the multimedia communication mode comprises a broadcasting mode and a bidirectional mode; the bidirectional mode includes one-to-one, one-to-many and many-to-many;
the multimedia teacher end and the multimedia student end are used for sending teaching mode instructions to the multimedia communication module, and the teaching mode instructions comprise a classroom teaching mode and a classroom tutoring mode; the classroom teaching mode comprises a teacher-speaking mode, a robot-assisted mode and a robot-speaking mode; the robot-assisted mode types include discussion, personalized and game modes;
in the teacher-speaking mode, the state analysis module is used for analyzing and feeding back the learning condition of students, and the teaching module is used for guiding and prompting the students based on the learning condition of the students;
in the robot-speaking mode, the teaching module is used for playing the content of a preset teaching content library for teaching explanation, acquiring the voice information, character information and image information of the students from the multimedia information processing module, and tracking the students' learning process; after the content of the teaching content library has been played, the classroom tutoring mode is entered automatically.
2. The control system for educational robots and classroom teaching according to claim 1, wherein: the teaching module is also used for acquiring the text information of the teacher classroom explanation content sent by the multimedia teacher end, analyzing the text information, generating an explanation result, and answering the students based on the explanation result and the preset teaching content.
3. The control system for educational robots and classroom teaching according to claim 2, wherein: the multimedia dual-arm education robot further comprises a broadcasting module, and the broadcasting module is further used for acquiring voice information from the multimedia information processing module and playing the voice information.
4. The control system for educational robots and classroom teaching according to claim 3, wherein: the multimedia communication module is also used for acquiring the updated data of the case library, the student feature library and the teacher feature library from the multimedia server.
5. The control system for educational robots and classroom teaching according to claim 4, wherein: the multimedia teacher end is further used for sending a multimedia control instruction to the multimedia communication module, and the multimedia communication module is further used for sending a control signal to the multimedia end according to the multimedia control instruction.
6. A control method for an educational robot and classroom teaching is characterized by comprising the following steps:
s1, selecting a teaching mode by the multimedia student end or the multimedia teacher end, wherein the teaching mode comprises a teacher-speaking mode, a robot-speaking mode and a robot-assisted mode; jumping to S3 when the teacher-speaking mode or the robot-speaking mode is selected, and jumping to S2 when the robot-assisted mode is selected;
s2, acquiring mode type selection information in the robot assistance mode, and jumping to S3; the mode type selection information comprises a personalized mode, a discussion mode and a game mode;
s3, the multimedia dual-arm education robot starts teaching and collects learning state information of students during learning, wherein the learning state information comprises image information, character information and voice information;
s4, the multimedia double-arm education robot analyzes the learning state of the student based on the image information, the character information and the voice information during learning; judging whether the student is in good condition, wherein the bad condition comprises poor appetite, eastern desire and hyperactivity; jumping to S5 when the student status is not good, and jumping to S6 when the student status is good;
s5, the multimedia dual-arm education robot reminds students and jumps to S3 after reminding;
s6, the multimedia double-arm education robot performs classroom tests on each student;
s7, the multimedia dual-arm education robot collects learning result information of students and analyzes test results of the students based on the learning result information;
s8, judging whether the answer accuracy of a single student is lower than N1; if the value is lower than the preset value, jumping to S9, and if the value is not lower than the preset value, jumping to S11;
s9, judging whether the number of student tests with answer accuracy lower than N1 exceeds N, if so, jumping to S14, and if not, jumping to S10;
S10, regrouping the students, adjusting the teaching method, and jumping back to S1;
S11, judging whether the answer accuracy of a single student is lower than N2; jumping back to S2 if it is lower than N2, and jumping to S12 otherwise;
S12, judging whether the answer accuracy of a single student is higher than N3; jumping to S13 if it is not higher, and jumping to S14 if it is; wherein N1 &lt; N2 &lt; N3;
S13, classifying the students whose answer accuracy is not higher than N3 into a study group, and jumping back to S6;
and S14, acquiring the teacher's summary and playing it (a control-flow sketch of steps S1 to S14 follows this claim).
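The following is a hedged control-flow sketch of steps S1-S14. The thresholds N1 &lt; N2 &lt; N3 and the head-count limit N come from the claim; the concrete values, function names and every stubbed body are illustrative assumptions, and the jumps back to S1/S2/S6 are simplified so the sketch terminates.

```python
import random

N1, N2, N3 = 0.4, 0.6, 0.85   # illustrative accuracy thresholds, N1 < N2 < N3
N = 5                          # illustrative limit on the number of low scorers


def select_mode():                      # S1: teacher / robot / robot-assisted
    return "robot_assisted"


def select_sub_mode():                  # S2: personalized / discussion / game
    return random.choice(["personalized", "discussion", "game"])


def observe_state(student):             # S3/S4: image, text and voice analysis
    return random.choice(["good", "good", "poor"])


def remind(student):                    # S5
    print(f"[robot] reminding {student}")


def classroom_test(students):           # S6/S7: per-student answer accuracy
    return {s: random.random() for s in students}


def play_teacher_summary():             # S14
    print("[robot] playing the teacher's summary")


def run_lesson(students):
    if select_mode() == "robot_assisted":                  # S1
        select_sub_mode()                                  # S2
    for student in students:                               # S3-S5 monitoring loop
        while observe_state(student) == "poor":
            remind(student)
    scores = classroom_test(students)                      # S6-S7
    low = [s for s, acc in scores.items() if acc < N1]     # S8
    if low:
        if len(low) > N:                                   # S9
            return play_teacher_summary()                  # -> S14
        print("[robot] regrouping students, adjusting the teaching method")  # S10
        return run_lesson(students)                        # back to S1
    if any(acc < N2 for acc in scores.values()):           # S11 (claim: back to S2)
        select_sub_mode()
    if all(acc > N3 for acc in scores.values()):           # S12
        return play_teacher_summary()                      # -> S14
    study_group = [s for s, acc in scores.items() if acc <= N3]   # S13
    classroom_test(study_group)                            # claim: back to S6 (retest)
    return play_teacher_summary()                          # lesson ends with S14


run_lesson([f"student_{i}" for i in range(1, 7)])
```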
CN202011510458.3A 2020-12-18 2020-12-18 Control method and system for educational robot and classroom teaching Active CN112614400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011510458.3A CN112614400B (en) 2020-12-18 2020-12-18 Control method and system for educational robot and classroom teaching

Publications (2)

Publication Number Publication Date
CN112614400A true CN112614400A (en) 2021-04-06
CN112614400B CN112614400B (en) 2021-08-31

Family

ID=75241112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011510458.3A Active CN112614400B (en) 2020-12-18 2020-12-18 Control method and system for educational robot and classroom teaching

Country Status (1)

Country Link
CN (1) CN112614400B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113823135A (en) * 2021-09-30 2021-12-21 创泽智能机器人集团股份有限公司 Robot-based auxiliary teaching method and equipment
CN114549245A (en) * 2022-02-09 2022-05-27 武汉颂大教育技术有限公司 Classroom information management method and system based on grouping teaching
CN114596748A (en) * 2022-03-12 2022-06-07 重庆师范大学 Interactive computer remote education system
CN115817063A (en) * 2022-10-27 2023-03-21 重庆鲁班机器人技术研究院有限公司 Double-arm drawing robot teaching system and drawing control method and device thereof

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001242780A (en) * 2000-02-29 2001-09-07 Sony Corp Information communication robot device, information communication method, and information communication robot system
US20070150099A1 (en) * 2005-12-09 2007-06-28 Seung Ik Lee Robot for generating multiple emotions and method of generating multiple emotions in robot
CN105869468A (en) * 2016-06-24 2016-08-17 苏州美丽澄电子技术有限公司 Intelligent family education robot
US20160284232A1 (en) * 2013-11-27 2016-09-29 Engino. Net Ltd. System and method for teaching programming of devices
CN107705643A (en) * 2017-11-16 2018-02-16 四川文理学院 Teaching method and its device are presided over by a kind of robot
CN108039081A (en) * 2017-12-22 2018-05-15 四川文理学院 Robot teaching's assessment method and device
CN108735017A (en) * 2018-08-17 2018-11-02 安徽爱依特科技有限公司 A kind of man-machine collaboration robot teaching system and its teaching method
CN108877361A (en) * 2018-07-17 2018-11-23 安徽爱依特科技有限公司 The man-machine robot system for teaching mode altogether
CN109087222A (en) * 2018-08-01 2018-12-25 阔地教育科技有限公司 Classroom data analysing method and system
CN109147433A (en) * 2018-10-25 2019-01-04 重庆鲁班机器人技术研究院有限公司 Childrenese assistant teaching method, device and robot
CN109697904A (en) * 2019-02-28 2019-04-30 苏州阿杜机器人有限公司 Robot wisdom classroom assisted teaching system and method
CN110176163A (en) * 2019-06-13 2019-08-27 天津塔米智能科技有限公司 A kind of tutoring system
JP2019211762A (en) * 2018-05-30 2019-12-12 カシオ計算機株式会社 Learning device, robot, learning support system, learning device control method, and program

Also Published As

Publication number Publication date
CN112614400B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN112614400B (en) Control method and system for educational robot and classroom teaching
Engwall et al. Interaction and collaboration in robot-assisted language learning for adults
Kory et al. Storytelling with robots: Learning companions for preschool children's language development
Neumann Social robots and young children’s early language and literacy learning
Kagan Influencing human interaction.
CN107633719B (en) Anthropomorphic image artificial intelligence teaching system and method based on multi-language human-computer interaction
CN107967830A (en) Method, apparatus, equipment and the storage medium of online teaching interaction
Freed " This is the fluffy robot that only speaks french": language use between preschoolers, their families, and a social robot while sharing virtual toys
Ferm et al. Participation and enjoyment in play with a robot between children with cerebral palsy who use AAC and their peers
Sihem Using video techniques to develop students’ speaking skill
Chen et al. Designing long-term parent-child-robot triadic interaction at home through lived technology experiences and interviews
Mahardika et al. Camera roll, action! non-specialist undergraduate English learners’ perceptions of using video production in learning English
Godley Literacy learning as gendered identity work
CN112651860B (en) Discussion type robot teaching system, method and device
Romano EJ in focus: Defining fun and seeking flow in English language arts
Brady-Myerov Listen Wise: Teach Students to Be Better Listeners
Yamamoto et al. Trial of using robotic pet as human interface of multimedia education system for pre-school aged child in kindergarten
Eisenring et al. THE USE OF CHATBOTS IN THE ENGLISH LANGUAGE TEACHING TO PROMOTE MODERN LANGUAGE LEARNING: A LITERATURE REVIEW
Gritter et al. Can pop culture and Shakespeare exist in the same classroom?: Using student interest to bring complex texts to life
Chang et al. Using a humanoid robot to develop a dialogue-based interactive learning environment for elementary foreign language classrooms
Nengsih THE EFFECT OF USING YOUTUBE VIDEO BASED MEDIA FOR STUDENTS' LISTENING COMPREHENSION AT SENIOR HIGH SCHOOL 1 BANGKINANG
Wen Trends in approaches to teaching: Flipped learning
Ran A Study on the Application of Intelligent Educational Robot in Teaching of “Communicative German” Course
Lee Reading with Robots
AHMAD A COMPARATIVE STUDY BETWEEN STUDENTS' READING COMPREHENSION USING CLASSICAL MUSIC AND VIDEO IN 10TH GRADER OF SMAN 10 BANDAR LAMPUNG IN THE ACADEMIC YEAR OF 2019/2020

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant