CN109637207B - Preschool education interactive teaching device and teaching method - Google Patents

Preschool education interactive teaching device and teaching method

Info

Publication number
CN109637207B
CN109637207B
Authority
CN
China
Prior art keywords
information
module
user
acquisition module
limb
Prior art date
Legal status
Active
Application number
CN201811424917.9A
Other languages
Chinese (zh)
Other versions
CN109637207A (en)
Inventor
曹臻祎
李晓红
赵华
袁芳
李宁
王会莉
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201811424917.9A
Publication of CN109637207A
Application granted
Publication of CN109637207B

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Abstract

The invention discloses an interactive teaching system for preschool education comprising a camera, a facial information acquisition module, a matching module, a question rating module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen, where the camera captures the user's facial information. The system further comprises a cloud server and a plurality of actual scene acquisition devices arranged in different scenes; the actual scene acquisition devices are communicatively connected to the cloud server, which is in turn communicatively connected to the scene image acquisition module. The beneficial effects of the invention are: (1) the interest points of a child user can be identified more accurately; (2) real scenes matching those interest points can be selected to interact with the child user in real time.

Description

Preschool education interactive teaching device and teaching method
Technical Field
The invention relates to the field of teaching, in particular to a preschool education interactive teaching device and a teaching method.
Background
Preschool education is a topic of broad concern; its goal is to help children learn and grow happily and healthily. Preschool education has important characteristics of its own: children have a strong curiosity about the world and are eager to explore and understand it through their own activities. For children, interest is the motivation: with strong interest they actively pursue and explore, and derive pleasant emotional experiences from learning. Creating a pleasant and relaxed psychological environment, maintaining and continuously developing children's curiosity, respecting their choices of interest, and stimulating their interest in learning are therefore constant aims. Preschool education is the foundation of basic education, the start of lifelong education, and sets the tone for a child's future development.
However, existing preschool education systems usually provide children only with isolated picture-recognition or language training, and often merely present simple content without considering the children's points of interest or the quality of interaction.
Disclosure of Invention
An object of the present invention is to provide a preschool education interactive teaching device and teaching method that can collect material in real time according to children's interests and deliver all-round cognitive education.
Specifically, the invention is realized by the following technical scheme:
the utility model provides an interactive teaching system of preschool education, the system includes camera, facial information acquisition module, matching module, problem rating module, scene image acquisition module, animation production module, display image acquisition module and display screen, and wherein, camera and facial information acquisition module communication connection, facial information acquisition module, matching module, problem rating module and scene image acquisition module connect gradually to scene image acquisition module still is connected with the display screen, animation production module and display image acquisition module are connected, display image acquisition module is connected with the display screen, animation production module still directly is connected with the display screen for send the animation that makes to the display screen. The preschool education interactive teaching system further comprises a cloud server and a plurality of actual scene acquisition devices arranged in different scenes, wherein the actual scene acquisition devices are in communication connection with the cloud server, and the cloud server is also in communication connection with the scene image acquisition module.
An interactive teaching method for preschool education, which uses the interactive teaching system for preschool education, comprises the following steps:
s1: the facial information acquisition module acquires, from images captured by the camera, facial feature information indicative of the current user's age bracket;
s2: the matching module matches the acquired facial feature information against preset child facial feature information; the facial feature information comprises at least one of skin state, facial proportions and face shape, and is used to determine the child's age range;
s3: the question rating module acquires an individual feature set of the current child user according to the matched child age range;
s4: the question rating module acquires rating questions matched to the individual feature set and determines the current user's focus information from the child's responses to those questions, the focus information comprising interest points, strong points and weak knowledge points;
s5: the scene image acquisition module projects a predetermined scene stored in a computer and a real-time scene transmitted over the Internet onto the display screen to form a virtual-reality teaching environment;
s6: the display image acquisition module acquires the interactive scene image data currently in the display screen;
s7: the display image acquisition module generates teaching-content text based on a target object, the interactive scene image data having been determined according to the current user's interest points, strong points and weak knowledge points;
s8: the animation production module acquires the original picture in the display screen;
s9: the animation production module performs contour detection and extraction on the original picture through deep learning, divides the picture into a plurality of picture blocks, matches each block with different candidate colors to generate several differently-colored versions of each block, and determines the color range of each block;
s10: the animation production module produces the material for each part according to the identified content;
s11: the animation production module extracts a prefabricated animation from an animation library according to the identified content;
s12: the animation production module applies the produced materials to the corresponding parts of the prefabricated animation, so that the result uses the materials drawn from the picture;
s13: the animation production module returns the finished intelligent animation to the display screen.
Preferably, in S2, if the face width at its widest point included in the acquired facial feature information is less than or equal to 10 cm, the cheekbone-shadow area is less than or equal to 4 cm², the skin moisture value included in the skin state is greater than or equal to 35, and the maximum distance between the facial features included in the facial proportions is less than or equal to 4 cm, the matching module determines that the acquired facial feature information correctly matches the preset child facial feature information.
Preferably, the S4 includes:
questions suited to the child's age group are randomly selected from the question bank and put to the user, and the user's focus points are determined from the user's correct-answer rate and facial expressions while answering.
Preferably, the S5 includes:
the scene image acquisition module receives the user's interest-point information and classifies it, then sends the classified type information to the cloud server; the cloud server sends an identification instruction to the corresponding real-time scene acquisition device according to the type information; the device determines whether the interest-point object is currently recognized in its view, and if so, its picture is acquired.
Preferably, the S7 includes:
the display image acquisition module analyzes the interactive scene image data, extracts object image information from it, and judges whether an extracted object can be used for language teaching; an extracted object is determined usable for language teaching if it is within the teaching outline corresponding to the child user.
Preferably, in S9, after the determining the color range of each picture block, the method further includes:
outputting the differently-colored versions of each picture block to the display screen, receiving the child user's instruction, and retaining one version of each block according to that instruction;
and combining all the picture blocks retained by the user and outputting the combined picture to the display screen.
Preferably, the S10 includes:
the picture is reduced to 6x6 (36 pixels in total) to strip away detail, keeping only basic information such as structure and brightness and discarding differences caused by size and proportion; at the same time a 32x32 copy (1024 pixels in total) is kept for extracting pixel information;
the color position information of each part of the animation is stored; the original comparison picture is reduced to 32x32 pixels, and five pixel coordinates are taken for each movable part: four extremes (the top, bottom, left and right corner coordinates) and one middle value;
and pixel values are read from the backed-up 32x32 picture at these five coordinates, and the material for the corresponding position is produced with a ring-gradient color.
Preferably, the preschool education interactive teaching system further comprises a voice acquisition device, and the method further comprises:
s14: the cloud server receives and records in real time the classroom interaction frequency of the children captured by the voice acquisition device;
s15: the cloud server computes the average volume over the interaction time from the collected voice information, and determines and records how active the children are in class according to that average volume;
s16: the cloud server extracts voice emotion feature information corresponding to each face in the classroom video through a speech recognition algorithm, matches it against preset voice emotion feature parameters, and determines each child's emotional state and concentration in class.
Preferably, the preschool education interactive teaching system further includes a command sending device, and after the teaching-content text is generated based on the target object in S7, the method further includes:
s71: the command sending device sends an action-gesture command and voice information;
s72: the command sending device acquires the motion information of each skeleton point corresponding to the action-gesture command, collects real-time human motion information through the camera, and sends the motion information of each skeleton point to the server;
s73: the server analyzes the motion information of each skeleton point, generates the user's limb-motion thread, searches a pre-generated association table of motion threads and limb actions for the limb action corresponding to the user's thread, and controls a virtual character on the interactive education interface of the terminal display device to perform and display that limb action;
the S73 includes:
s731: the cloud server is configured in advance to generate the association table of motion threads and limb actions;
s732: the cloud server analyzes the motion information of each skeleton point in a three-dimensional coordinate system, obtains each point's displacement along the x, y and z axes, and generates the user's limb-motion thread from the displacement information;
s733: the cloud server searches the pre-generated association table for the limb action corresponding to the user's limb-motion thread;
s734: the cloud server judges whether a matching limb action is found in the association table; if so, the virtual character in the display screen is controlled to perform that limb action; if not, the limb-motion thread is analyzed to determine the user's limb-movement content, which comprises the movement direction and displacement of the skeleton points, a limb action is formed from it, and the step of controlling the virtual character to perform and display the limb action is then executed.
The invention has the beneficial effects that: (1) interest points of the child user can be better identified; (2) real scenes can be selected according to interest points of the child user to interact with the child user in real time.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an interactive teaching system for preschool education according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an interactive teaching method for preschool education according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of an interactive teaching method for preschool education according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of an interactive teaching method for preschool education according to a third embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described below do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms; they are only used to distinguish one type of information from another. For example, first information may also be called second information and, similarly, second information may be called first information without departing from the scope of the present invention. The word "if" as used herein may be interpreted, depending on the context, as "upon", "when" or "in response to determining".
The present invention will be described in detail below by way of examples.
The invention provides an interactive teaching system for preschool education. The system comprises a camera, a facial information acquisition module, a matching module, a question rating module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen. The camera is communicatively connected to the facial information acquisition module; the facial information acquisition module, the matching module, the question rating module and the scene image acquisition module are connected in sequence, and the scene image acquisition module is also connected to the display screen. The animation production module is connected to the display image acquisition module, the display image acquisition module is connected to the display screen, and the animation production module is also connected directly to the display screen so that it can send produced animations to the display screen. The system further comprises a cloud server and a plurality of actual scene acquisition devices arranged in different scenes; the actual scene acquisition devices are communicatively connected to the cloud server, and the cloud server is also communicatively connected to the scene image acquisition module. Each communication connection may be wired or wireless.
An interactive teaching method for preschool education using the above interactive teaching system, as shown in fig. 2, comprises:
s1: the facial information acquisition module acquires, from images captured by the camera, facial feature information indicative of the current user's age bracket;
s2: the matching module matches the acquired facial feature information against preset child facial feature information;
the facial feature information comprises at least one of skin state, facial proportions and face shape, and is used to determine the child's age range.
For example, the facial information acquisition module may use the camera as the facial image acquisition device. When it determines that the current user needs to use the teaching interaction device, it can instruct the camera to start and acquire facial image information indicative of the current user's age bracket, for example the width of the face at its widest point, the skin moisture value, the distances between facial features, and the color variation of the facial skin, and then combine the extracted information into facial feature information.
S3: the question rating module acquires an individual feature set of the current child user according to the matched child age range.
The matching module matches the acquired facial feature information against the preset child facial feature information.
For example, the terminal may store the preset child facial feature information at initialization; it describes the facial characteristics of a child user.
The matching module can match the skin state, facial proportions, face shape and other information included in the acquired facial feature information against the preset child facial feature information.
Further, if the face width at its widest point included in the acquired facial feature information is less than or equal to 10 cm, the cheekbone-shadow area is less than or equal to 4 cm², the skin moisture value is greater than or equal to 35, and the maximum distance between the facial features is less than or equal to 4 cm, the acquired facial feature information is determined to correctly match the preset child facial feature information. The current user's age range is then determined from a comprehensive comparison table of face width, cheekbone shadow, skin state and facial proportions. The comparison table is preset in the matching module and is obtained through repeated statistical tests so that its error rate falls within a preset range.
S4: and the question rating module acquires the individual feature set of the current child user according to the matched child age range.
The question rating module acquires rating questions matched with the individual feature sets, and determines the attention point information of the current user according to responses of children to the rating questions;
randomly selecting questions in the question bank of children of the age group from the question bank according to the age range of the children to ask the user, and determining the attention points of the user according to the right answer rate of the user and the expression characteristics of the user during answer.
And comparing the expression characteristics with a standard template in a database to obtain expressions with pleasure, impatience, confusion, disappointment, fatigue, excitement, expectation, vitality or dislike, and completing expression analysis.
Analyzing the expression data with an expression recognition algorithm and generating the corresponding emotional state information comprises the following steps:
s41: pre-processing the expression data with gray-image histogram equalization;
s42: performing face recognition on the pre-processed expression data with a face recognition classifier to obtain the face region;
s43: extracting expression features from the face region with an LDA (Linear Discriminant Analysis) feature extraction algorithm;
s44: classifying the expression features with a support vector machine to obtain the facial expression class;
s45: recognizing the classified facial expression and generating the corresponding emotional state information.
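A sketch of how steps S41-S45 could be assembled from off-the-shelf components (OpenCV for S41-S42, scikit-learn for S43-S44) is given below. The training corpus, the 48x48 crop size and the default classifier settings are assumptions; the patent names the techniques but gives no parameters:

```python
# Sketch of the S41-S45 expression pipeline; training data is assumed to exist.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

FACE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def train(X_train: np.ndarray, y_train: np.ndarray):
    """X_train: flattened 48x48 face crops; y_train: emotion labels."""
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)  # S43 features
    svm = SVC().fit(lda.transform(X_train), y_train)          # S44 classifier
    return lda, svm

def classify_expression(frame_bgr: np.ndarray, lda, svm) -> str:
    gray = cv2.equalizeHist(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))  # S41
    faces = FACE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)  # S42
    if len(faces) == 0:
        return "no-face"
    x, y, w, h = faces[0]
    crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).reshape(1, -1)
    return str(svm.predict(lda.transform(crop))[0])           # S45 emotion label
```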
Here, the individual feature set is a set of one or more individual features, each describing a characteristic of the child user (e.g. age, learning experience, interest bias). These features can be derived from the child user's identity information; for example, the average learning-experience sample of children in the child's city, determined from the child's age range and city, can serve as the child user's basic individual features. Suppose the question rating module determines that the child is aged 5-8 and located in city B of country A. According to the average learning-experience sample for children in city B stored in the question rating module (or a database networked with it), children aged 5-8 there can generally identify common household goods, and because city B is a coastal city they generally have a higher interest in marine organisms such as fish. The question rating module will therefore output questions that relate to marine life and suit the difficulty level of 5-8-year-olds as the questions matching the current individual feature set, and it corrects the stored average learning-experience sample in real time according to the correct-answer rate of the child users.
Rating questions matched to the child user's individual feature set are acquired and output to the child user in a multi-modal manner. The child user's response input to the rating questions is then acquired; the child user's ability level is determined from that input; and behavior output information is configured for the child user based on that ability level. Finally, in subsequent interaction with the child user, multi-modal output is performed using the behavior output information.
Since the multi-modal output for the child user is based on the behavior output information, and the behavior output information is configured from the child user's own ability level, the question rating module achieves interactive output matched to each child's personal ability. The interactive teaching device can thus provide educational coaching content better suited to the child's own development, realizing teaching adapted to individual ability. Compared with the prior art, this not only greatly improves the user experience of the interactive teaching device during human-computer interaction with children, but also effectively improves the teaching quality.
The question rating module acquires the rating questions matched to the individual feature set and determines the current user's focus information, comprising interest points, strong points and weak knowledge points, from the child's responses. For example, children aged 5-8 in city B are asked questions related to marine organisms; the children's answer accuracy is judged across different question types, and the question content is continuously corrected according to the historical accuracy until the child's accuracy stabilizes at a certain level. The questions at which the accuracy stabilizes are taken to be the questions the child is interested in, and the user's current focus information is determined from them.
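The correction loop described here can be pictured as follows; the window size, the 50% target accuracy and the integer difficulty levels are illustrative assumptions, not values from the patent:

```python
# Sketch of the question-correction loop: adjust difficulty until accuracy
# stabilises, then report that level as a focus point of the child.
import random
from collections import deque

def rate_topic(bank: dict, ask, window: int = 10, tol: float = 0.1):
    """bank: {difficulty_level: [questions]}; ask(q) -> bool (answered correctly)."""
    level = min(bank)
    history = deque(maxlen=window)
    while True:
        history.append(ask(random.choice(bank[level])))
        if len(history) < window:
            continue
        rate = sum(history) / window
        if abs(rate - 0.5) <= tol:   # accuracy has stabilised at this level
            return level, rate
        # Too easy -> harder questions; too hard -> easier questions.
        level = min(max(bank), level + 1) if rate > 0.5 else max(min(bank), level - 1)
        history.clear()
```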
S5: the scene image acquisition module projects a predetermined scene stored in a computer and a real-time scene transmitted over the Internet onto the display screen to form a virtual-reality teaching environment.
Further, this projection by the scene image acquisition module includes:
the scene image acquisition module receives the user's interest-point information and classifies it, then sends the classified type information to the cloud server; the cloud server sends an identification instruction to the corresponding real-time scene acquisition device according to the type information; the device determines whether the interest-point object is currently recognized in its view, and if so, its picture is acquired.
For example, suppose the plurality of real-time scene acquisition devices are arranged in a zoo, an aquarium and a gymnasium respectively. When the scene image acquisition module receives "turtle" as the user's interest-point information, it classifies the interest point and obtains the category "marine organism", which it sends to the cloud server. The cloud server sends an identification instruction to the real-time scene acquisition device arranged in the aquarium according to that category; that device then automatically begins analyzing the images it captures and judges whether the interest-point object "turtle" is present in them. If so, the device's picture is acquired, i.e. the captured images are transmitted to the cloud server in real time.
In this way the cloud server does not need to stream every real-time scene acquisition device's feed to the scene image acquisition module, which saves network resources, and each real-time scene acquisition device only needs to store the recognition signatures of the interest points belonging to its own scene.
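A minimal sketch of this dispatch logic, with invented class and method names, shows why only the matching device's feed is ever streamed:

```python
# Sketch of category-based dispatch; device objects are assumed to expose
# identifies() and current_frame(), which are illustrative placeholders.
class CloudServer:
    def __init__(self):
        self.devices_by_category = {}  # e.g. {"marine organism": [aquarium_cam]}

    def register(self, category: str, device) -> None:
        self.devices_by_category.setdefault(category, []).append(device)

    def request_scene(self, category: str, poi: str):
        # Only devices registered for this category receive the identification
        # instruction, so idle feeds are never streamed.
        for device in self.devices_by_category.get(category, []):
            if device.identifies(poi):         # device-side recognition
                return device.current_frame()  # stream this feed in real time
        return None
```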
The scene image acquisition module projects the predetermined scene stored in the computer and/or the real-time scene transmitted over the Internet onto the display screen to form the virtual-reality teaching environment. For example, relevant videos can be collected from existing footage or from the Internet, or produced in advance, stored in the computer, and called up when needed to create the virtual-reality teaching environment, while the real-time scene transmitted over the Internet is captured by the actual scene acquisition devices.
The display screen may further be replaced by a display-and-touch device, i.e. a terminal device with a touch screen such as a PAD, a mobile phone or a notebook computer. The display-and-touch device can be connected to the computer device in a wired or wireless manner, preferably wireless. Its touch screen can receive a scene picture set by the computer device, display the scene's image information, and pass touch operations back to the computer device for interaction. Besides calling up, from the cloud server, the scenes corresponding to the focus information output by the question rating module, the scene image acquisition module lets children use the display-and-touch device to learn the contents of a conventional learning module and an interest selection module, and to customize personal demand items through a personalized customization module. For example, when selecting interests, the child may choose subjects of interest through sub-modules such as music, dance, literature, sports, science, handicrafts, animals and plants in the interest selection module. A child who likes marine life such as turtles can search for turtles in the animal sub-module and browse the basic knowledge stored under the turtle item, including pictures, videos, simple animations and knowledge quizzes; progressively deeper topics lead the child step by step into turtle-related knowledge. In addition, the child can use the system's camera control device to access the camera resources of protocol units (partner organizations under agreement), such as the video monitoring devices in the sea-turtle hall of an ocean hall, and observe the daily life and habits of sea turtles in a real scene. A child who wants to go further and participate can enter the personalized customization module, create an account and publish demand information; an administrator processes the demand and pushes it to the corresponding protocol unit, satisfying the child's individual needs as far as possible, stimulating interest, and encouraging long-term, continuous attention to sea turtles and comprehensive, deep knowledge of them.
The actual scene acquisition device controls the operation and signal transmission of preset cameras and is connected through the network to public camera resources and to the personal camera resources of protocol units. The camera control device transmits live-action video into the system so that children can browse it on the display-and-touch device; or transmits video required by the personalized customization module to the computer for the observation, practice and research of children with a personalized customization service; or transmits live-action video to the computer for use by a scene-building device. The computer can pass the relevant video data to the scene image acquisition module for the children's learning and interactive participation. For example, when a child requests live-action video in the conventional learning module or the interest selection module, the system connects the relevant opened video devices and transmits the live-action video data to the computer and the display-and-touch device for the child to watch. If the open live-action video does not contain the content the child requested, a system administrator can match the request against existing resources and communicate with a protocol unit so that the opened resources meet the child's needs. If the child raises requirements in the personalized customization service, the system administrator pushes them to the protocol unit, which then works out feasible personalized service content together with the child, satisfying the child's individual needs as far as possible. Within the system, children can continuously follow their personal items on their own terminals and can save all or part of the videos that interest them on the computer.
S6: the display image acquisition module acquires the interactive scene image data currently in the display screen.
S7: the display image acquisition module generates teaching-content text based on a target object, the interactive scene image data having been determined according to the current user's interest points, strong points and weak knowledge points.
For example, it cannot be guaranteed that the real-time scene captured by the actual scene acquisition device is one the child user is interested in. Suppose the child is interested in turtles and the interactive scene image in the display screen contains water weeds, fish, turtles and pebbles. The display image acquisition module acquires the interactive scene image data from the display screen and then specifically recognizes the target object "turtle"; once the turtle is recognized, the text prompt "let's draw the turtle together" is generated on the display screen to invite the child user to draw it. If the interactive scene image in the display screen contains no turtle, a new actual scene acquisition device is called for recognition.
Further, the display image acquisition module analyzes the interactive scene image data, extracts object image information from it, and judges whether each extracted object can be used for language teaching; an extracted object is determined usable if it is within the teaching outline corresponding to the child user.
For example, a scene with a turtle often also contains objects such as algae, coral and rock. The display image acquisition module analyzes the interactive scene image data, extracts the image information of the algae, coral and rock, and judges by the user's age: if the teaching outline for the current age expects children to know basic marine objects such as algae and rock but does not require them to recognize complex marine organisms such as coral, then the extracted algae and rock are determined usable for language teaching.
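As a toy illustration of the outline check (the outline contents below are invented, not taken from any real syllabus):

```python
# Sketch of the teaching-outline check: an extracted object is usable for
# language teaching only if it appears in the outline for the child's age.
SYLLABUS = {
    (5, 8): {"algae", "rock", "fish", "turtle"},  # basic marine objects
}

def usable_for_teaching(obj: str, age: int) -> bool:
    return any(lo <= age <= hi and obj in topics
               for (lo, hi), topics in SYLLABUS.items())
```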
S8: the animation production module acquires an original picture in a display screen;
i.e. a picture containing the target object "turtle".
S9: the animation production module performs contour detection and extraction on the original picture through deep learning, divides the picture into a plurality of picture blocks, matches each block with different candidate colors to generate several differently-colored versions of each block, and determines the color range of each block.
Specifically, the animation production module outputs the differently-colored versions of each picture block to the display screen, receives the child user's instruction, and retains one version of each block according to that instruction; all the blocks retained by the user are combined and output to the display screen.
S10: and the animation production module produces the material of the part according to the identified content.
Specifically, the picture is reduced to 6x6 size, total 36 pixels are used for removing the details of the picture, only the structure, brightness and other basic information are kept, and the picture difference caused by different sizes and proportions is abandoned; simultaneously backing up a 32x32 size picture, and taking 1024 pixels in total for extracting picture pixel information;
storing color position information of each part of the animation, reducing the original picture for comparison to 32x32 pixel size, taking five pixel coordinates of the same movable part on the picture, wherein four are maximum values, namely, upper, lower, left and right angle coordinates and a middle value;
and taking pixel values from the backed-up 32x 32-size picture by using the five values, and making corresponding position materials by annular gradient colors, wherein the corresponding position materials are the materials with the highest color fitting degree with the position, specifically, the materials with the highest fitting degree can be circularly compared with the materials in the material library aiming at the color of each position, and the materials with the highest fitting degree are taken as the corresponding position materials of the position.
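A sketch of this sampling scheme with Pillow follows; the function boundary and taking the sample coordinates as an input are assumptions made for illustration:

```python
# Sketch of S10: a 6x6 thumbnail keeps only structure/brightness, a 32x32
# backup keeps colour, and five coordinates per movable part (four extremes
# plus the midpoint) pick the colours that drive the ring-gradient material.
from PIL import Image

def sample_part_colors(path: str, part_coords):
    """part_coords: five (x, y) points for one movable part on the 32x32 grid
    (top, bottom, left and right extremes plus the middle point)."""
    img = Image.open(path).convert("RGB")
    thumb = img.resize((6, 6))     # 36 px: structural comparison only
    backup = img.resize((32, 32))  # 1024 px: colour source
    colors = [backup.getpixel(p) for p in part_coords]
    return thumb, colors           # colors feed the ring-gradient material
```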
S11: the animation production module extracts a prefabricated animation from the animation library according to the identified content. For example, when a turtle is recognized, a turtle animation is called from the animation library preset in the animation production module, adding interest and entertainment.
S12: the animation production module applies the produced materials to the corresponding parts of the prefabricated animation, so that the result uses the materials drawn from the picture.
S13: the animation production module returns the finished intelligent animation to the display screen.
Further, the preschool education interactive teaching system further comprises a voice acquisition device, and the method further comprises the following steps:
s14: the cloud server receives and records in real time the classroom interaction frequency of the children captured by the voice acquisition device.
S15: the cloud server computes the average volume over the interaction time from the collected voice information, and determines and records how active the children are in class according to that average volume;
s16: the cloud server extracts voice emotion feature information corresponding to each face in the classroom video through a speech recognition algorithm, matches it against preset voice emotion feature parameters, and determines each child's emotional state and concentration in class.
The speech emotion feature recognition specifically comprises the following steps:
(a) Testers record sounds corresponding to different states such as happiness, anger, sadness and a normal state in advance; features are extracted from and analyzed on these sound signals to build a sound corpus, and a speech model is built from attributes such as speech rate and pitch contained in the corpus's speech signals.
(b) The voice acquisition device captures the students' voices and, as required by the cloud server, samples the time periods of interest during the interaction.
(c) The pitch attributes of the speech under test are extracted and fed to the speech model for discrimination. The discrimination types comprise four categories: happiness, anger, sadness and normal.
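Steps (a)-(c) could look like the following sketch, assuming a labelled corpus exists; librosa's pYIN tracker supplies the pitch contour, and a nearest-centroid rule stands in for the unspecified speech model:

```python
# Sketch of pitch-based emotion discrimination over the four categories.
import numpy as np
import librosa

LABELS = ("happy", "angry", "sad", "normal")

def pitch_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=400, sr=sr)
    f0 = f0[voiced]
    f0 = f0[~np.isnan(f0)]
    if f0.size == 0:
        return np.zeros(2)
    return np.array([f0.mean(), f0.std()])  # mean pitch and pitch variability

def classify(path: str, centroids: dict) -> str:
    """centroids: label -> feature vector averaged over the labelled corpus."""
    x = pitch_features(path)
    return min(LABELS, key=lambda lab: np.linalg.norm(x - centroids[lab]))
```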
Further, the preschool education interactive teaching system further includes a command sending device, and after the teaching-content text is generated based on the target object, the method, as shown in fig. 3, further includes:
s71: the command sending device sends an action-gesture command and voice information to the child user;
s72: the command sending device acquires the pre-stored motion information of each skeleton point corresponding to the action-gesture command, collects real-time human motion information through the camera, and sends the motion information of each skeleton point to the cloud server;
s73: the cloud server analyzes the motion information of each skeleton point, generates the user's limb-motion thread, searches the pre-generated association table of motion threads and limb actions for the limb action corresponding to the user's thread, and controls a virtual character on the interactive education interface of the terminal display device to perform and display that limb action.
Specifically, as shown in fig. 4, the S73 includes:
s731: the cloud server is configured in advance to generate the association table of motion threads and limb actions.
A motion thread is formed by the successive positions of each skeleton point, and the limb actions include jumping, squatting, raising the right hand, raising the left hand, raising a hand forward, raising a hand backward, sliding, raising an arm sideways, lifting the left foot, lifting the right foot, standing with the feet apart in parallel, and standing with the feet apart front-to-back;
s732: the cloud server analyzes the motion information of each skeleton point in a three-dimensional coordinate system, obtains each point's displacement along the x, y and z axes, and generates the user's limb-motion thread from the displacement information.
S733: the cloud server searches the pre-generated association table for the limb action corresponding to the user's limb-motion thread.
S734: the cloud server judges whether a matching limb action is found in the association table; if so, the virtual character in the display screen is controlled to perform that limb action; if not, the limb-motion thread is analyzed to determine the user's limb-movement content, which comprises the movement direction and displacement of the skeleton points, a limb action is formed from it, and the step of controlling the virtual character to perform and display the limb action is then executed.
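A compact sketch of S732-S734 follows; representing a motion thread as the per-axis displacement signs of each skeleton point is an illustrative simplification, and the association-table entries are placeholders:

```python
# Sketch of motion-thread generation and association-table lookup (S732-S734).
import numpy as np

def motion_thread(frames: np.ndarray, threshold: float = 0.05):
    """frames: (T, joints, 3) joint positions; returns a hashable signature of
    per-joint displacement signs (-1, 0, +1) along the x, y and z axes."""
    disp = frames[-1] - frames[0]
    signs = np.where(np.abs(disp) < threshold, 0, np.sign(disp)).astype(int)
    return tuple(map(tuple, signs))

def match_action(frames: np.ndarray, table: dict) -> str:
    key = motion_thread(frames)
    if key in table:            # S734: found in the association table
        return table[key]       # e.g. "jump", "raise right hand"
    # Else-branch of S734: derive an action from raw direction/displacement.
    return "custom:" + repr(key)
```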
For example, the preschool education interactive teaching system generates a virtual child character on the display screen, sends the voice command "please feed the little turtle", and displays virtual "turtle food" at a certain position on the screen. The child user is thereby required to move and perform the two actions "take the food" and "feed the food"; the command sending device, the camera and the cloud server analyze the child's actions according to steps S71-S73, and the virtual character in the display screen performs the corresponding actions, further increasing the interactivity and fun of the preschool education interactive teaching system.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. An interactive teaching method of preschool education using an interactive teaching system of preschool education, the system comprising a camera, a facial information acquisition module, a matching module, a question rating module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen, wherein the camera is communicatively connected to the facial information acquisition module; the facial information acquisition module, the matching module, the question rating module and the scene image acquisition module are connected in sequence, and the scene image acquisition module is also connected to the display screen; the animation production module is connected to the display image acquisition module, the display image acquisition module is connected to the display screen, and the animation production module is also connected directly to the display screen for sending produced animations to the display screen; the system further comprising a cloud server and a plurality of actual scene acquisition devices arranged in different scenes, the actual scene acquisition devices being communicatively connected to the cloud server and the cloud server also being communicatively connected to the scene image acquisition module; characterized in that the method comprises:
s1: the facial information acquisition module acquires, from images captured by the camera, facial feature information indicative of the current user's age bracket;
s2: the matching module matches the acquired facial feature information against preset child facial feature information; the facial feature information comprises at least one of skin state, facial proportions and face shape, and is used to determine the child's age range;
s3: the question rating module acquires an individual feature set of the current child user according to the matched child age range;
s4: the question rating module acquires rating questions matched to the individual feature set and determines the current user's focus information from the child's responses to those questions, the focus information comprising interest points, strong points and weak knowledge points;
s5: the scene image acquisition module projects a predetermined scene stored in a computer and a real-time scene transmitted over the Internet onto the display screen to form a virtual-reality teaching environment;
s6: the display image acquisition module acquires the interactive scene image data currently in the display screen;
s7: the display image acquisition module generates teaching-content text based on a target object, the interactive scene image data having been determined according to the current user's interest points, strong points and weak knowledge points;
s8: the animation production module acquires the original picture in the display screen;
s9: the animation production module performs contour detection and extraction on the original picture through deep learning, divides the picture into a plurality of picture blocks, matches each block with different candidate colors to generate several differently-colored versions of each block, and determines the color range of each block;
s10: the animation production module produces the material for each part according to the identified content;
s11: the animation production module extracts a prefabricated animation from an animation library according to the identified content;
s12: the animation production module applies the produced materials to the corresponding parts of the prefabricated animation, so that the result uses the materials drawn from the picture;
s13: the animation production module returns the finished intelligent animation to the display screen.
2. The interactive teaching method of preschool education of claim 1, wherein in S2, if the face width at its widest point included in the acquired facial feature information is less than or equal to 10 cm, the cheekbone-shadow area is less than or equal to 4 cm², the skin moisture value included in the skin state is greater than or equal to 35, and the maximum distance between the facial features included in the facial proportions is less than or equal to 4 cm, the acquired facial feature information is determined to correctly match the preset child facial feature information.
3. The interactive teaching method of preschool education of claim 1, wherein the S4 includes: randomly selecting questions suited to the child's age group from the question bank and putting them to the user, and determining the user's focus points from the user's correct-answer rate and facial expressions while answering.
4. The interactive teaching method of preschool education of claim 1, wherein the S5 includes: the scene image acquisition module receives the user's interest-point information and classifies it, then sends the classified type information to the cloud server; the cloud server sends an identification instruction to the corresponding real-time scene acquisition device according to the type information; the device determines whether the interest-point object is currently recognized in its view, and if so, its picture is acquired.
5. The interactive teaching method of preschool education of claim 1, wherein the S7 includes: the display image acquisition module analyzes the interactive scene image data, extracts object image information from it, and judges whether an extracted object can be used for language teaching, an extracted object being determined usable for language teaching if it is within the teaching outline corresponding to the child user.
6. The interactive teaching method of preschool education of claim 1, wherein in S9, after determining the color range of each picture block, the method further comprises: outputting the differently-colored versions of each picture block to the display screen, receiving the child user's instruction, and retaining one version of each block according to that instruction; and combining all the picture blocks retained by the user and outputting the combined picture to the display screen.
7. The interactive teaching method of preschool education of claim 1, wherein the interactive teaching system of preschool education further includes a voice acquisition device, and the method further comprises: S14: the cloud server receives and records in real time the classroom interaction frequency of the children captured by the voice acquisition device; S15: the cloud server computes the average volume over the interaction time from the collected voice information, and determines and records how active the children are in class according to that average volume; S16: the cloud server extracts voice emotion feature information corresponding to each face in the classroom video through a speech recognition algorithm, matches it against preset voice emotion feature parameters, and determines each child's emotional state and concentration in class.
8. The interactive teaching method of preschool education of claim 1, wherein the interactive teaching system of preschool education further includes a command sending device, and S7, after generating the teaching content text information based on the target object, further comprises: S71: the command sending device sends the action gesture command and the voice information; S72: the command sending device acquires the motion information of each bone point corresponding to the action gesture command, collects real-time human body motion information through the camera, and sends the motion information of each bone point to the cloud server; S73: the cloud server analyzes the motion information of each bone point, generates the limb motion thread of the user, searches for and matches the limb action corresponding to the user's limb motion thread in a pre-generated association information table of motion threads and limb actions, and controls the virtual character on the interactive education project interface of the terminal display device to execute and display the limb action; the S73 includes:
S731: the cloud server is configured in advance to generate the association information table of motion threads and limb actions;
S732: the cloud server analyzes the motion information of each bone point in the three-dimensional coordinate system, obtains the displacement of each bone point on the x axis, the y axis and the z axis, and generates the limb motion thread of the user from the displacement information of each bone point;
S733: the cloud server searches for and matches the limb action corresponding to the user's limb motion thread in the pre-generated association information table of motion threads and limb actions;
S734: the cloud server judges whether a limb action corresponding to the user's limb motion thread is found in the association information table; if yes, it controls the virtual character in the display screen to execute the limb action; if not, it analyzes the limb motion thread, determines the user's limb movement content, which comprises the movement direction and movement displacement of the bone points, forms a limb action from it, and then proceeds to the step of controlling the virtual character in the display screen to execute and display the limb action.
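A compact sketch of steps S731 through S734 follows, assuming a motion thread reduces to the dominant signed axis of a bone point's displacement in the three-dimensional coordinate system. The association table entries, the threshold, and the fallback action naming are invented for illustration.

```python
# Pre-generated association information table: motion thread -> limb action
# (illustrative entries, per S731).
ASSOCIATION_TABLE = {
    ("right_hand", "+y"): "raise_right_hand",
    ("left_foot", "+x"): "step_left",
}

def motion_thread(bone: str, dx: float, dy: float, dz: float, eps: float = 0.05):
    """S732: reduce a bone point's x/y/z displacement to its dominant signed axis."""
    axis, value = max(zip("xyz", (dx, dy, dz)), key=lambda p: abs(p[1]))
    if abs(value) < eps:
        return None  # no significant movement detected
    return (bone, ("+" if value > 0 else "-") + axis)

def match_limb_action(bone: str, dx: float, dy: float, dz: float) -> str:
    """S733/S734: look up the thread; fall back to direction-and-displacement."""
    thread = motion_thread(bone, dx, dy, dz)
    if thread is None:
        return "idle"
    action = ASSOCIATION_TABLE.get(thread)
    if action is not None:
        return action  # matched branch of S734
    # Unmatched branch of S734: derive an action from the movement content.
    return f"move_{thread[0]}_{thread[1]}"

print(match_limb_action("right_hand", 0.01, 0.32, -0.02))  # 'raise_right_hand'
```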
CN201811424917.9A 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method Active CN109637207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811424917.9A CN109637207B (en) 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method

Publications (2)

Publication Number Publication Date
CN109637207A (en) 2019-04-16
CN109637207B (en) 2020-09-01

Family

ID=66069260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811424917.9A Active CN109637207B (en) 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method

Country Status (1)

Country Link
CN (1) CN109637207B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781966A (en) * 2019-10-23 2020-02-11 史文华 Method and device for identifying character learning sensitive period of infant and electronic equipment
CN110826510A (en) * 2019-11-12 2020-02-21 电子科技大学 Three-dimensional teaching classroom implementation method based on expression emotion calculation
JP6733027B1 (en) * 2019-11-28 2020-07-29 株式会社ドワンゴ Content control system, content control method, and content control program
CN110909702B (en) * 2019-11-29 2023-09-22 侯莉佳 Artificial intelligence-based infant sensitive period direction analysis method
CN111311460A (en) * 2020-04-08 2020-06-19 上海乂学教育科技有限公司 Development type teaching system for children
CN111613100A (en) * 2020-04-30 2020-09-01 华为技术有限公司 Interpretation and drawing method and device, electronic equipment and intelligent robot
CN111638783A (en) * 2020-05-18 2020-09-08 广东小天才科技有限公司 Man-machine interaction method and electronic equipment
CN112381699A (en) * 2020-12-04 2021-02-19 湖北致未来智能教育科技有限公司 Automatic interactive intelligent education management system
CN112734609A (en) * 2021-01-06 2021-04-30 西安康宸科技有限公司 Artificial intelligence-based early child development management system
CN112954235B (en) * 2021-02-04 2021-10-29 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN115951851B (en) * 2022-09-27 2023-07-28 武汉市公共交通集团有限责任公司信息中心 Comprehensive display system for bus station dispatching
CN116226411B (en) * 2023-05-06 2023-07-28 深圳市人马互动科技有限公司 Interactive information processing method and device for interactive project based on animation
CN117218912B (en) * 2023-05-09 2024-03-26 华中师范大学 Intelligent education interaction system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180053431A1 (en) * 2016-05-19 2018-02-22 Timothy J. Young Computer architecture for customizing the content of publications and multimedia
US10573193B2 (en) * 2017-05-11 2020-02-25 Shadowbox, Llc Video authoring and simulation training tool

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604382A (en) * 2009-06-26 2009-12-16 华中师范大学 A kind of learning fatigue recognition interference method based on human facial expression recognition
CN105493130A (en) * 2013-10-07 2016-04-13 英特尔公司 Adaptive learning environment driven by real-time identification of engagement level
CN107729491A (en) * 2017-10-18 2018-02-23 广东小天才科技有限公司 Improve the method, apparatus and equipment of the accuracy rate of topic answer search
CN108877336A (en) * 2018-03-26 2018-11-23 深圳市波心幻海科技有限公司 Teaching method, cloud service platform and tutoring system based on augmented reality

Also Published As

Publication number Publication date
CN109637207A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109637207B (en) Preschool education interactive teaching device and teaching method
CN110850983B (en) Virtual object control method and device in video live broadcast and storage medium
CN107203953B (en) Teaching system based on internet, expression recognition and voice recognition and implementation method thereof
CN110334626B (en) Online learning system based on emotional state
CN107633719B (en) Anthropomorphic image artificial intelligence teaching system and method based on multi-language human-computer interaction
CN109584648B (en) Data generation method and device
CN111290568A (en) Interaction method and device and computer equipment
CN112199002B (en) Interaction method and device based on virtual role, storage medium and computer equipment
US20190184573A1 (en) Robot control method and companion robot
CN108833941A (en) Man-machine dialogue system method, apparatus, user terminal, processing server and system
CN109844735A (en) Affective state for using user controls the technology that virtual image generates system
CN109614849A (en) Remote teaching method, apparatus, equipment and storage medium based on bio-identification
CN110418095B (en) Virtual scene processing method and device, electronic equipment and storage medium
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
CN110598576A (en) Sign language interaction method and device and computer medium
JP7278307B2 (en) Computer program, server device, terminal device and display method
CN109145871A (en) Psychology and behavior recognition methods, device and storage medium
CN110134863B (en) Application program recommendation method and device
CN108615439A (en) Method, apparatus, equipment and medium for formulating ability training scheme
CN110531849A (en) A kind of intelligent tutoring system of the augmented reality based on 5G communication
CN109278051A (en) Exchange method and system based on intelligent robot
CN109409199A (en) Micro- expression training method, device, storage medium and electronic equipment
CN113012490A (en) Language learning system and device based on virtual scene
CN115731751A (en) Online teaching system integrating artificial intelligence and virtual reality technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant