CN109637207A - Preschool education interactive teaching device and teaching method - Google Patents

Preschool education interactive teaching device and teaching method

Info

Publication number
CN109637207A
Authority
CN
China
Prior art keywords
module
information
user
children
cloud server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811424917.9A
Other languages
Chinese (zh)
Other versions
CN109637207B (en)
Inventor
曹臻祎
李晓红
赵华
袁芳
李宁
王会莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201811424917.9A
Publication of CN109637207A
Application granted
Publication of CN109637207B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Abstract

The invention discloses a preschool education interactive teaching system comprising a camera, a facial information acquisition module, a matching module, a question grading module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen. The system further comprises a cloud server and a plurality of real-scene acquisition devices arranged in different scenes; the real-scene acquisition devices are communicatively connected with the cloud server, and the cloud server is also communicatively connected with the scene image acquisition module. The beneficial effects of the invention are that: (1) the points of interest of a child user can be better identified; and (2) a real-world scene selected according to the child user's points of interest can interact with the child user in real time.

Description

Preschool education interactive teaching device and teaching method
Technical field
The present invention relates to the field of teaching, and in particular to a preschool education interactive teaching device and teaching method.
Background technique
Preschool education is a topic of great general concern, and enabling children to learn and grow happily and healthily has always been a shared goal. It is well known that preschool education has its own important characteristics: children are born with a strong curiosity about the world, eager to explore and understand it, and they come to know the world through their own exploratory behaviour. At the same time, for children, interest is motivation; a keen interest drives them to actively pursue and explore and produces a pleasant emotional experience during learning. How to create a relaxed and happy psychological environment, how to preserve and develop children's curiosity, how to respect their curiosity and their choice of interests, and how to stimulate their interest in learning have therefore always been the goals and direction of our efforts. Preschool education is the foundation of basic education and the starting point of lifelong education, setting the "keynote" for a child's future development.
However, existing preschool education systems generally only give children isolated training with knowledge cards or speech exercises, often merely presenting simple content to them without considering the children's points of interest or the quality of the interaction.
Summary of the invention
It is an object of the present invention to provide a preschool education interactive teaching device and teaching method that can acquire material in real time according to a child's interests and carry out comprehensive cognitive education.
Specifically, the present invention is achieved through the following technical solutions:
A preschool education interactive teaching system comprises a camera, a facial information acquisition module, a matching module, a question grading module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen. The camera is communicatively connected with the facial information acquisition module; the facial information acquisition module, the matching module, the question grading module and the scene image acquisition module are connected in sequence; the scene image acquisition module is also connected with the display screen; the animation production module is connected with the display image acquisition module; the display image acquisition module is connected with the display screen; and the animation production module is also connected directly with the display screen so as to send the produced animation to the display screen. The preschool education interactive teaching system further comprises a cloud server and a plurality of real-scene acquisition devices arranged in different scenes; the real-scene acquisition devices are communicatively connected with the cloud server, and the cloud server is also communicatively connected with the scene image acquisition module.
A preschool education interactive teaching method uses the aforementioned preschool education interactive teaching system, and the method comprises:
S1: the facial information acquisition module obtains facial feature information, captured by the camera, that characterizes the current user's age group;
S2: the matching module matches the acquired facial feature information against preset children's facial feature information; the facial feature information includes at least one of skin condition, facial-feature proportions and face-shape features, and is used to determine the child's age range;
S3: the question grading module obtains the personal feature set of the current child user according to the matched child age range;
S4: the question grading module obtains grading questions matching the personal feature set and determines the current user's focus information, including points of interest, strong points and knowledge gaps, from the child's answers to the grading questions;
S5: the scene image acquisition module retrieves from the cloud server a scene corresponding to the focus information output by the question grading module; the scene image acquisition module projects predetermined scenes stored in a computer and real-time scenes transmitted over the Internet onto the display screen, forming a virtual-reality teaching environment;
S6: the display image acquisition module obtains the interaction scene image data currently on the display screen;
S7: the display image acquisition module generates teaching content text information based on the target object; the interaction scene image data is determined according to the current user's points of interest, strong points and knowledge gaps;
S8: the animation production module obtains the original picture on the display screen;
S9: the animation production module performs contour detection and extraction on the original picture by deep learning, divides the original picture into a plurality of picture blocks, matches each picture block with different colours to generate picture blocks of multiple different colours, and determines the colour range of each picture block;
S10: the animation production module produces part materials according to the recognized content;
S11: the animation production module extracts a prefabricated animation from an animation library according to the recognized content;
S12: the animation production module assigns the produced materials to the corresponding parts of the prefabricated animation, so that the result corresponds to what was drawn;
S13: the animation production module displays the finally formed intelligent animation back on the screen.
Preferably, in S2, if, in the facial feature information obtained by the matching module, the distance across the widest part of the face included in the face-shape features is less than or equal to 10 cm and the cheekbone shadow is less than or equal to 4 square centimetres, the skin moisture value included in the skin condition is greater than or equal to 35, and the maximum facial-feature spacing included in the facial-feature proportions is less than or equal to 4 cm, it is determined that the acquired facial feature information correctly matches the preset children's facial feature information.
Preferably, S4 includes:
randomly selecting questions from the question pool for the child's age group according to the child age range and putting them to the user, and determining the user's focus according to the user's answer accuracy and the expression features shown while answering.
Preferably, S5 includes:
the scene image acquisition module receives the user's point-of-interest information, classifies it, and sends the resulting classification information to the cloud server; the cloud server sends an identification instruction to the corresponding real-time scene acquisition device according to the classification information and determines whether the point-of-interest object is identified by that device; if so, the picture from the corresponding real-time scene acquisition device is acquired.
Preferably, S7 includes:
the display image acquisition module parses the interaction scene image data, extracts object image information from it, and judges whether each extracted object can be used for language teaching; if an extracted object is within the syllabus of the corresponding child user, it is determined that the extracted object can be used for language teaching.
Preferably, in S9, after the colour range of each picture block is determined, the method further includes:
outputting the colour variants corresponding to each picture block onto the display screen, receiving the child user's instruction, and retaining one variant for each picture block according to the child user's instruction;
combining all the picture blocks retained by the user and outputting them onto the display screen.
Preferably, S10 includes:
scaling the picture down to 6x6, 36 pixels in total, so as to remove detail and retain only structure, light-and-shade and other essential information, discarding picture differences caused by different sizes and aspect ratios; at the same time backing up a 32x32 copy of the picture, 1024 pixels in total, for extracting pixel information;
storing colour-sampling position information for each part of the animation; scaling the picture originally used for comparison down to 32x32 pixels, and taking five pixel coordinates at the same movable part of the picture, four being the extreme corner coordinates (top, bottom, left and right) and one being the central (median) point;
using these five values to sample pixel values from the backed-up 32x32 picture, and producing the corresponding part material with an annular colour gradient.
Preferably, the preschool education interactive teaching system further includes voice acquisition devices, and the method further includes:
S14: the cloud server receives in real time, and records, the children's classroom interaction frequency obtained by the multiple voice acquisition devices;
S15: the cloud server calculates the average volume over the interaction time from the collected voice information, and determines and records the children's level of classroom engagement according to the average volume;
S16: the cloud server extracts, by a speech recognition algorithm, the voice emotion feature information of each face in the classroom video information, and performs matching recognition on the voice emotion feature information against preset voice emotion feature parameters to determine the children's emotional state and concentration in class.
Preferably, the preschool education interactive teaching system further includes a command sending device, and in S7, after the teaching content text information is generated based on the target object, the method further includes:
S71: the command sending device issues action-posture instructions and voice information;
S72: the command sending device obtains the motion information of each skeleton point corresponding to the action instruction, acquires the real-time body motion information through the camera, and sends the motion information of each skeleton point to the server;
S73: the server parses the motion information of each skeleton point, generates the user's limb motion thread, looks up, in a pre-generated association table of motion threads and limb actions, the limb action matching the user's limb motion thread, and controls the virtual character on the interactive education interface of the terminal display device to show that limb action;
S73 includes:
S731: the cloud server pre-configures the association table of generated motion threads and limb actions;
S732: the cloud server parses the motion information of each skeleton point, obtains the displacement of each skeleton point along the x-, y- and z-axes in a three-dimensional coordinate system, and generates the user's limb motion thread according to the displacement information of each skeleton point;
S733: the cloud server looks up, in the pre-generated association table of motion threads and limb actions, the limb action matching the user's limb motion thread;
S734: the cloud server judges whether the lookup of the limb action corresponding to the user's limb motion thread in the association table succeeds; if so, it controls the virtual character on the display screen to show that limb action; if not, it parses the limb motion thread, determines the user's limb motion content, forms a limb action, and then continues with the step of controlling the virtual character on the display screen to show the limb action, the limb motion content including skeleton point moving direction and moving displacement.
The beneficial effects of the present invention are that: (1) the points of interest of a child user can be better identified; and (2) a real-world scene selected according to the child user's points of interest can interact with the child user in real time.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a preschool education interactive teaching system provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a preschool education interactive teaching method provided by a first embodiment of the present invention;
Fig. 3 is a schematic diagram of a preschool education interactive teaching method provided by a second embodiment of the present invention;
Fig. 4 is a schematic diagram of a preschool education interactive teaching method provided by a third embodiment of the present invention.
Specific embodiment
Exemplary embodiments are described in detail here, and examples of them are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary description do not represent all embodiments consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
The terminology used in the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms "a", "said" and "the" used in the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present invention to describe various information, this information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the invention, the first information may also be referred to as the second information and, similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
The present invention will be described in detail by way of examples below.
A preschool education interactive teaching system comprises a camera, a facial information acquisition module, a matching module, a question grading module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen. The camera is communicatively connected with the facial information acquisition module; the facial information acquisition module, the matching module, the question grading module and the scene image acquisition module are connected in sequence; the scene image acquisition module is also connected with the display screen; the animation production module is connected with the display image acquisition module; the display image acquisition module is connected with the display screen; and the animation production module is also connected directly with the display screen so as to send the produced animation to the display screen. The preschool education interactive teaching system further comprises a cloud server and a plurality of real-scene acquisition devices arranged in different scenes; the real-scene acquisition devices are communicatively connected with the cloud server, and the cloud server is also communicatively connected with the scene image acquisition module. The communication connection referred to herein includes both wireless and wired connections.
A preschool education interactive teaching method uses the aforementioned preschool education interactive teaching system. As shown in Fig. 2, the method includes:
S1: the facial information acquisition module obtains facial feature information, captured by the camera, that characterizes the current user's age group;
S2: the matching module matches the acquired facial feature information against preset children's facial feature information.
The facial feature information includes at least one of skin condition, facial-feature proportions and face-shape features, and is used to determine the child's age range.
For example, the facial information acquisition module may use the camera as the facial image acquisition device. When it is determined that the current user needs to use the teaching interactive device, the facial information acquisition module can instruct the camera to turn on and collect facial image information characterizing the current user's age group, such as the distance across the widest part of the face, the skin moisture value, the facial-feature spacing and the colour differences of the facial skin, and then form the extracted information into facial feature information.
S3: the question grading module obtains the personal feature set of the current child user according to the matched child age range.
The matching module matches the acquired facial feature information against the preset children's facial feature information.
For example, the preset children's facial feature information, which describes the facial characteristics of child users, can be stored in the terminal at initialization.
The facial information acquisition module can match information such as the skin condition, facial-feature proportions and face-shape features included in the acquired facial feature information against the preset children's facial feature information.
Further, if, in the facial feature information obtained by the matching module, the distance across the widest part of the face included in the face-shape features is less than or equal to 10 cm and the cheekbone shadow is less than or equal to 4 square centimetres, the skin moisture value included in the skin condition is greater than or equal to 35, and the maximum facial-feature spacing included in the facial-feature proportions is less than or equal to 4 cm, it is determined that the acquired facial feature information correctly matches the preset children's facial feature information. The current user's age range is then determined from a comprehensive lookup table of face-width distance, cheekbone shadow, skin condition and facial-feature proportions. The comprehensive lookup table is preset in the matching module and is obtained through repeated statistical tests so that its error rate lies within a preset range.
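Purely as an illustration of how such a threshold check followed by a table lookup might be realized, the minimal Python sketch below uses the threshold values from the embodiment above; the measurement names, the contents of the lookup table and the helper functions are assumptions made only for illustration, not part of the claimed system.

```python
# Minimal sketch of the matching step: threshold check + age-range lookup.
# Thresholds follow the embodiment; the table rows are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    face_width_cm: float           # distance across the widest part of the face
    cheekbone_shadow_cm2: float    # area of the cheekbone shadow
    skin_moisture: float           # skin moisture value
    max_feature_spacing_cm: float  # largest spacing between facial features

def matches_child_profile(f: FaceFeatures) -> bool:
    """Return True if the features fall inside the preset children's ranges."""
    return (f.face_width_cm <= 10.0
            and f.cheekbone_shadow_cm2 <= 4.0
            and f.skin_moisture >= 35.0
            and f.max_feature_spacing_cm <= 4.0)

# Hypothetical comprehensive lookup table: (max face width, max spacing) -> age range.
AGE_TABLE = [
    (8.0, 3.0, (3, 5)),
    (9.0, 3.5, (5, 8)),
    (10.0, 4.0, (8, 10)),
]

def age_range(f: FaceFeatures):
    """Pick the first table row whose limits contain the measured values."""
    if not matches_child_profile(f):
        return None
    for width_limit, spacing_limit, age in AGE_TABLE:
        if f.face_width_cm <= width_limit and f.max_feature_spacing_cm <= spacing_limit:
            return age
    return None

if __name__ == "__main__":
    print(age_range(FaceFeatures(9.5, 3.2, 40.0, 3.8)))  # -> (8, 10)
```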
S4: the question grading module obtains the personal feature set of the current child user according to the matched child age range.
The question grading module obtains grading questions matching the personal feature set and determines the current user's focus information from the child's answers to the grading questions.
Questions from the question pool for the child's age group are randomly selected from the question bank according to the child age range and put to the user, and the user's focus is determined according to the user's answer accuracy and the expression features shown while answering.
The expression features are compared with standard templates in a database to obtain expressions such as happy, impatient, doubtful, disappointed, tired, excited, expectant, angry or disgusted, completing the expression analysis.
Analysing the expression data with an expression recognition algorithm and generating the emotional state information corresponding to the expression data includes the following steps:
S41: pre-processing the expression data using grey-level histogram equalization;
S42: performing face recognition on the pre-processed expression data using a face recognition classifier to generate a face region;
S43: extracting expression features from the face region by an LDA (Latent Dirichlet Allocation) feature extraction algorithm;
S44: classifying the expression features using a support vector machine to obtain the classified facial expression;
S45: recognizing the classified facial expression and generating the emotional state information corresponding to the facial expression.
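A minimal sketch of such a pipeline is given below, assuming OpenCV's Haar cascade as the face recognition classifier and scikit-learn's SVM for step S44. Since the embodiment only names "LDA" for feature extraction, the sketch substitutes a simple PCA projection as an illustrative stand-in; the emotion labels and training data are likewise placeholders.

```python
# Sketch of steps S41-S45: histogram equalization -> face detection ->
# feature extraction (stand-in) -> SVM classification -> emotion label.
from typing import Optional

import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

EMOTIONS = ["happy", "impatient", "doubtful", "sad", "neutral"]  # illustrative labels

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_region(gray_img: np.ndarray) -> Optional[np.ndarray]:
    """S41 + S42: equalize the grey-level histogram and return the first face crop."""
    eq = cv2.equalizeHist(gray_img)
    faces = face_cascade.detectMultiScale(eq, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(eq[y:y + h, x:x + w], (48, 48))

def train(crops, labels):
    """S43 + S44: fit the feature extractor and classifier on labelled 48x48 crops.
    `labels` are integers indexing EMOTIONS."""
    X = np.stack([c.flatten() for c in crops]).astype(np.float32)
    pca = PCA(n_components=min(30, len(crops) - 1)).fit(X)   # stand-in for "LDA"
    clf = SVC(kernel="rbf").fit(pca.transform(X), labels)
    return pca, clf

def classify(pca, clf, crop: np.ndarray) -> str:
    """S45: map the classified expression to an emotional state string."""
    feats = pca.transform(crop.flatten().reshape(1, -1).astype(np.float32))
    return EMOTIONS[int(clf.predict(feats)[0])]
```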
Here, a personal feature set is a set of one or more personal features, each describing one characteristic of the child user (for example age, learning experience, interest bias). These features can further be obtained using the child user's identity information; for example, an average sample of children's learning experience in a city is determined from the child age range and the location information of the city where the child lives, and used as the basic personal feature set of that child user. For instance, if the question grading module obtains an age range of 5 to 8 years and a location in city B of country A, then, according to the average learning-experience sample for children in city B of country A stored in the question grading module (or in a database networked with it), children aged 5 to 8 can usually already recognize common everyday objects, and, since city B is a coastal city, children there generally have a higher interest in marine life such as fish. The question grading module therefore outputs marine-life questions of a difficulty suited to 5- to 8-year-olds, matched to the personal feature set of the current child user, and corrects in real time, according to the child user's answer accuracy, the average learning-experience sample stored in the question grading module (or the networked database).
Grading questions matching the child user's personal feature set are obtained and output to the child user in a multi-modal output mode according to the personal feature set. The child user's responses to the grading questions are then obtained; the child user's ability level is determined from the responses; and behaviour output information for the child user is configured based on the ability level. Finally, multi-modal output is performed using the behaviour output information in the subsequent interaction with the child user.
Since the multi-modal output for the child user is based on the behaviour output information, and the behaviour output information is configured according to the child user's own ability level, the question grading module can realize interactive output matched to the individual child's ability level, so that in educational application scenarios the interactive teaching device can provide tutoring content better suited to the child's own development and realize teaching in accordance with the child's aptitude. Compared with the prior art, the method of the present invention not only greatly improves the user experience of the interactive teaching device in human-computer interaction with children, but also effectively improves the teaching quality.
The question grading module obtains the grading questions matching the personal feature set and determines the current user's focus information, including points of interest, strong points and knowledge gaps, from the child's answers. For example, for 5- to 8-year-old children in city B of country A, questions of different types related to marine life are asked, the children's answer accuracy is judged, and the content of the questions is continually corrected according to the historical answer accuracy until the accuracy stabilizes at a certain level. The questions asked after the accuracy has stabilized are then determined to be the questions the child is interested in; that is, the current user's focus information has been determined.
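The adaptive loop described above (ask, score, adjust, stop when the accuracy stabilizes) could be sketched as follows; the question bank, the stabilization window and the difficulty-adjustment thresholds are illustrative assumptions, not values taken from the embodiment.

```python
# Sketch of the question-grading loop: adjust question difficulty until the
# child's answer accuracy stabilizes, then report that topic as a focus point.
import random
from collections import deque

QUESTION_BANK = {  # hypothetical bank: topic -> difficulty level -> questions
    "marine_life": {1: ["Is a turtle an animal?"],
                    2: ["What do sea turtles eat?"],
                    3: ["Where do sea turtles lay their eggs?"]},
}

def accuracy_stable(history: deque, window: int = 5, tol: float = 0.1) -> bool:
    """Accuracy is 'stable' when the last `window` values vary by less than `tol`."""
    if len(history) < window:
        return False
    recent = list(history)[-window:]
    return max(recent) - min(recent) < tol

def grade(topic: str, ask) -> dict:
    """`ask(question)` returns True/False for a correct/incorrect answer."""
    level, history, correct, asked = 1, deque(maxlen=20), 0, 0
    while not accuracy_stable(history):
        question = random.choice(QUESTION_BANK[topic][level])
        correct += int(ask(question))
        asked += 1
        history.append(correct / asked)
        # raise difficulty while the child keeps answering well, lower it otherwise
        if history[-1] > 0.8 and level < 3:
            level += 1
        elif history[-1] < 0.4 and level > 1:
            level -= 1
    return {"topic": topic, "level": level, "accuracy": history[-1]}

if __name__ == "__main__":
    print(grade("marine_life", ask=lambda q: random.random() < 0.7))
```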
S5: the scene image acquisition module retrieves from the cloud server a scene corresponding to the focus information output by the question grading module; the scene image acquisition module projects predetermined scenes stored in a computer and real-time scenes transmitted over the Internet onto the display screen, forming a virtual-reality teaching environment.
Further, the scene image acquisition module projecting the predetermined scenes stored in the computer and the real-time scenes transmitted over the Internet onto the display screen to form the virtual-reality teaching environment comprises:
the scene image acquisition module receives the user's point-of-interest information, classifies it, and sends the resulting classification information to the cloud server; the cloud server sends an identification instruction to the corresponding real-time scene acquisition device according to the classification information and determines whether the point-of-interest object is identified by that device; if so, the picture from the corresponding real-time scene acquisition device is acquired.
For example, the multiple real-time scene acquisition devices correspond respectively to a zoo, a botanical garden and an aquarium. When the point-of-interest information received by the scene image acquisition module is "sea turtle", the scene image acquisition module classifies the point-of-interest information, obtaining the classification "marine life", and sends the classification information "marine life" to the cloud server. The cloud server sends an identification instruction to the corresponding real-time scene acquisition device installed in the aquarium; that device then automatically begins to analyse the images it captures and judges whether the point-of-interest object "sea turtle" is present in them; if so, the picture from the corresponding real-time scene acquisition device is acquired. "Acquired" here means that the captured images are transmitted to the cloud server in real time.
In this way, the cloud server does not need to send the pictures from all real-time scene acquisition devices to the scene image acquisition module in real time, which saves network resources, and each real-time scene acquisition device only needs to store the identification signals of the point-of-interest objects belonging to the scene that corresponds to that device.
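A minimal sketch of this dispatch logic is shown below; the category map, the device registry and the detection and streaming callbacks are placeholders assumed for illustration rather than an interface defined by the embodiment.

```python
# Sketch of S5: classify a point of interest, ask only the matching scene
# device to look for it, and stream its pictures only on a positive detection.
from typing import Callable

CATEGORY_MAP = {"sea turtle": "marine_life", "lion": "zoo_animal", "rose": "plant"}
DEVICE_FOR_CATEGORY = {"marine_life": "aquarium_cam", "zoo_animal": "zoo_cam",
                       "plant": "botanical_garden_cam"}

class CloudServer:
    def __init__(self, detect: Callable[[str, str], bool],
                 stream: Callable[[str], None]):
        self.detect = detect    # detect(device_id, target) -> bool
        self.stream = stream    # stream(device_id): start real-time transmission

    def handle_interest(self, interest: str) -> bool:
        category = CATEGORY_MAP.get(interest)
        if category is None:
            return False
        device = DEVICE_FOR_CATEGORY[category]
        # identification instruction: only this device analyses its own pictures
        if self.detect(device, interest):
            self.stream(device)
            return True
        return False

if __name__ == "__main__":
    server = CloudServer(detect=lambda dev, target: dev == "aquarium_cam",
                         stream=lambda dev: print(f"streaming from {dev}"))
    print(server.handle_interest("sea turtle"))  # streams, then prints True
```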
The scene image acquisition module projects the predetermined scenes stored in the computer and/or the real-time scenes transmitted over the Internet onto the display screen to form the virtual-reality teaching environment. For example, relevant videos can be taken from existing material, collected from the Internet, or pre-produced, stored in the computer and called up when needed to create the virtual-reality teaching environment. The real-time scenes transmitted over the Internet are obtained by the real-scene acquisition devices.
The display screen may further be replaced by a display and touch device, which may be a terminal device with a touch screen, such as a PAD, a mobile phone or a laptop. The display and touch device can be connected to the computer device in a wired or wireless manner, preferably wirelessly. The touch screen can receive the scene pictures set by the computer device and show the picture information of the scene, and can pass touch operations from the display and touch device to the computer device so as to interact with it. In addition to the scene image acquisition module retrieving from the cloud server the scene corresponding to the focus information output by the question grading module, children can, through the display and touch device, browse and learn the content of a conventional learning module and an interest selection module, and can customize their own personalized projects through a personalized customization module. For example, when making an interest selection, a child can choose a subject of interest through sub-modules of the interest selection module such as music, dance, literature, sport, science, craft, animals and plants. A child who likes marine life, for example sea turtles, can search for "sea turtle" in the animal sub-module and browse the stored entries about sea turtles, including pictures, videos, simple animations, basic knowledge and quizzes. Through questions of gradually increasing depth, the child can learn sea-turtle-related knowledge step by step. In addition, the child can make full use of the camera control device in the system to call up the video monitoring device of a cooperating unit, such as the sea-turtle tank of an aquarium, to observe and learn about the daily life and habits of sea turtles from real scenes. If the child wants to go further and seek a sense of participation, the child can enter the personalized customization module, create an account and publish demand information; an administrator processes the demand information and pushes it to the corresponding cooperating unit, so as to satisfy the child's individual demands to the greatest extent, stimulate the child's interest, encourage long-term and continuous attention to sea turtles, and help the child obtain comprehensive and deep knowledge.
The real-scene acquisition devices are used to control the operation and signal transmission of predetermined cameras and to connect, through the network, to public camera resources of society, cooperating units and individuals. The camera control device is used to transmit real-scene video into the system for children to browse on the display and touch device, or to transmit the video needed by the personalized customization module to the computer for the child's observation, practice and research in a predetermined personalized customization service. Real-scene video can also be transferred to the computer for use by the scene construction device. The computer can pass the relevant video data to the scene image acquisition module for children's learning and interactive participation. For example, when a child issues a real-scene video request in the conventional learning module or the interest selection module, the system connects the relevant open video equipment and transmits the real-scene video data to the computer and the display and touch device for the child to watch. If the open real-scene video does not contain the content requested by the child, the system administrator matches the request against existing resources and communicates with cooperating units to open up resources that meet the child's demand. If the child raises a corresponding demand in the personalized customization service, the system administrator pushes it to the relevant cooperating unit, which can then work out feasible personalized service content together with the child who raised the demand, satisfying the child's individual needs to the greatest extent. Within the system, a child can continuously follow his or her own personalized project on his or her own terminal, and can store all or part of the interesting videos on the computer.
S6: the display image acquisition module obtains the interaction scene image data currently on the display screen.
S7: the display image acquisition module generates teaching content text information based on the target object; the interaction scene image data is determined according to the current user's points of interest, strong points and knowledge gaps.
For example, since it cannot be guaranteed that the real-time scene obtained by the real-scene acquisition device will contain an object the child user is interested in, for instance a sea turtle, when the interaction scene image on the display screen contains content such as water plants, fish, a sea turtle and pebbles, the display image acquisition module collects the interaction scene image data on the display screen and then focuses its recognition on the target object "sea turtle". After recognizing the target object "sea turtle", it generates the text prompt "let's draw a sea turtle together" on the display screen to attract the child user to paint a sea turtle. If the interaction scene image on the display screen does not contain a sea turtle, a new real-scene acquisition device is called and recognition is performed again.
Further, the display image acquisition module parses the interaction scene image data, extracts object image information from it, and judges whether each extracted object can be used for language teaching; if an extracted object is within the syllabus of the corresponding child user, it is determined that the extracted object can be used for language teaching.
For example, in a scene containing a sea turtle there are often also objects such as seaweed, coral and rocks. The display image acquisition module parses the interaction scene image data, extracts the image information of the seaweed, coral and rocks, and determines, according to the user's age, that in the syllabus corresponding to the current age the user should already be able to, or is expected to, recognize basic marine objects such as seaweed and rocks, but is not required by the syllabus to recognize complex marine life such as coral; it is therefore determined that the extracted objects seaweed and rock can be used for language teaching.
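A small sketch of this syllabus check is given below; the syllabus contents per age range are assumptions made purely for illustration.

```python
# Sketch of S7's teachability check: keep only extracted objects that appear
# in the syllabus for the child's age range.
SYLLABUS = {  # hypothetical syllabus: age range -> objects the child should know
    (5, 8): {"seaweed", "rock", "fish", "sea turtle"},
    (8, 10): {"seaweed", "rock", "fish", "sea turtle", "coral"},
}

def teachable_objects(extracted: set, age_range: tuple) -> set:
    return extracted & SYLLABUS.get(age_range, set())

if __name__ == "__main__":
    print(teachable_objects({"seaweed", "coral", "rock"}, (5, 8)))
    # -> {'seaweed', 'rock'}: coral lies outside the 5-8 syllabus
```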
S8: the animation production module obtains the original picture on the display screen;
that is, the picture containing the target object "sea turtle".
S9: the animation production module performs contour detection and extraction on the original picture by deep learning, divides the original picture into a plurality of picture blocks, matches each picture block with different colours to generate picture blocks of multiple different colours, and determines the colour range of each picture block.
Specifically, the animation production module outputs the colour variants corresponding to each picture block onto the display screen, receives the child user's instruction, retains one variant for each picture block according to the child user's instruction, combines all the picture blocks retained by the user, and outputs them onto the display screen.
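The embodiment names deep learning for the contour step; purely as an illustrative stand-in, the sketch below uses classical OpenCV edge detection and contours to split a picture into blocks and produce tinted colour variants of each block for the child to choose from. The tint palette and block-size threshold are assumptions.

```python
# Illustrative stand-in for S9: split the picture into blocks along detected
# contours and generate several colour variants of each block.
import cv2
import numpy as np

PALETTE = [(0, 255, 0), (0, 165, 255), (255, 0, 0)]  # BGR tints (assumed)

def picture_blocks(img: np.ndarray, min_area: int = 400) -> list:
    """Return crops of the picture bounded by detected contours."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blocks = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:
            blocks.append(img[y:y + h, x:x + w].copy())
    return blocks

def colour_variants(block: np.ndarray) -> list:
    """Blend the block with each palette colour to offer the child a choice."""
    variants = []
    for colour in PALETTE:
        tint = np.zeros_like(block)
        tint[:] = colour                      # solid colour layer
        variants.append(cv2.addWeighted(block, 0.6, tint, 0.4, 0))
    return variants
```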
S10: the animation production module produces part materials according to the recognized content.
Specifically, the picture is scaled down to 6x6, 36 pixels in total, so as to remove detail and retain only structure, light-and-shade and other essential information, discarding picture differences caused by different sizes and aspect ratios; at the same time a 32x32 copy of the picture, 1024 pixels in total, is backed up for extracting pixel information;
colour-sampling position information for each part of the animation is stored; the picture originally used for comparison is scaled down to 32x32 pixels, and five pixel coordinates are taken at the same movable part of the picture, four being the extreme corner coordinates (top, bottom, left and right) and one being the central (median) point;
these five values are used to sample pixel values from the backed-up 32x32 picture, and the corresponding part material is produced with an annular colour gradient. The corresponding part material is the material whose colour best matches that part: specifically, for the colour of each part, the degree of match with each material in the material library is compared in turn, and the material with the greatest degree of match is taken as the corresponding part material for that part.
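The sketch below illustrates this sampling-and-matching idea with Pillow and NumPy; the material library, its representative colours and the sampling box for a part are assumptions made only to show the shape of the computation.

```python
# Sketch of S10: downscale, sample five pixels of a part, and pick the
# best-matching material from a (hypothetical) material library by colour distance.
import numpy as np
from PIL import Image

MATERIAL_LIBRARY = {  # hypothetical: material name -> representative RGB colour
    "shell_green": (34, 139, 34),
    "sand_yellow": (237, 201, 175),
    "sea_blue": (0, 105, 148),
}

def sample_part_colour(img: Image.Image, part_box: tuple) -> np.ndarray:
    """Average five samples (four extreme corners + centre) of a part in the 32x32 backup."""
    backup = np.asarray(img.convert("RGB").resize((32, 32)), dtype=np.float32)
    left, top, right, bottom = part_box     # coordinates in the 32x32 picture
    points = [(top, left), (top, right), (bottom, left), (bottom, right),
              ((top + bottom) // 2, (left + right) // 2)]
    return np.mean([backup[r, c] for r, c in points], axis=0)

def best_material(part_colour: np.ndarray) -> str:
    """Material with the smallest Euclidean colour distance to the part."""
    return min(MATERIAL_LIBRARY,
               key=lambda name: np.linalg.norm(part_colour - np.array(MATERIAL_LIBRARY[name])))

if __name__ == "__main__":
    img = Image.new("RGB", (128, 128), (40, 130, 40))   # stand-in picture
    print(best_material(sample_part_colour(img, (4, 4, 12, 12))))  # -> shell_green
```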
S11: the animation production module extracts a prefabricated animation from the animation library according to the recognized content. For example, when "sea turtle" is recognized, the "sea turtle" animation is retrieved from the preset animation library of the animation production module, adding interest and entertainment.
S12: the animation production module assigns the produced materials to the corresponding parts of the prefabricated animation, so that the result corresponds to what was drawn.
S13: the animation production module displays the finally formed intelligent animation back on the screen.
Further, the preschool education interactive teaching system further includes voice acquisition devices, and the method further includes:
S14: the cloud server receives in real time, and records, the children's classroom interaction frequency obtained by the multiple voice acquisition devices.
S15: the cloud server calculates the average volume over the interaction time from the collected voice information, and determines and records the children's level of classroom engagement according to the average volume;
S16: the cloud server extracts, by a speech recognition algorithm, the voice emotion feature information of each face in the classroom video information, and performs matching recognition on the voice emotion feature information against preset voice emotion feature parameters to determine the children's emotional state and concentration in class.
The voice emotion feature recognition method specifically includes the following steps:
(a) testers record in advance sounds corresponding to different states such as happiness, anger, sadness and a normal state; feature extraction and analysis are performed on these voice signals, a sound corpus is established, and a speech model is built from attributes of the voices contained in the corpus, such as their pitch.
(b) the voice acquisition devices collect the students' voices, and the period of the interaction to be examined is selected for sample detection according to the needs of the cloud server.
(c) the pitch attribute of the voice to be detected is extracted and input into the speech model for discrimination. The discrimination types include four categories: happiness, anger, sadness and normal.
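A minimal sketch of this idea, using plain NumPy, is given below: the average volume is computed as an RMS value for step S15, the pitch is estimated by a simple autocorrelation for step (c), and the per-emotion pitch templates are invented placeholders rather than values from any real corpus.

```python
# Sketch of S15 and steps (a)-(c): RMS volume, autocorrelation pitch estimate,
# and nearest-template emotion classification. Templates are placeholders.
import numpy as np

PITCH_TEMPLATES_HZ = {"happy": 320.0, "angry": 280.0, "sad": 180.0, "normal": 230.0}

def average_volume(signal: np.ndarray) -> float:
    """S15: average volume of the interaction period as an RMS value."""
    return float(np.sqrt(np.mean(signal.astype(np.float64) ** 2)))

def estimate_pitch(signal: np.ndarray, sample_rate: int = 16000) -> float:
    """Crude autocorrelation pitch estimate for the 80-400 Hz speech range."""
    x = signal.astype(np.float64) - np.mean(signal)
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 80     # candidate lags
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

def classify_emotion(signal: np.ndarray, sample_rate: int = 16000) -> str:
    pitch = estimate_pitch(signal, sample_rate)
    return min(PITCH_TEMPLATES_HZ, key=lambda e: abs(PITCH_TEMPLATES_HZ[e] - pitch))

if __name__ == "__main__":
    sr, t = 16000, np.arange(16000) / 16000.0
    voice = 0.3 * np.sin(2 * np.pi * 220 * t)          # synthetic 220 Hz tone
    print(round(average_volume(voice), 3), classify_emotion(voice, sr))  # ~"normal"
```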
Further, the preschool education interactive teaching system further includes a command sending device. After the teaching content text information is generated based on the target object, as shown in Fig. 3, the method further includes:
S71: the command sending device issues action-posture instructions and voice information to the child user;
S72: the command sending device obtains the prestored motion information of each skeleton point corresponding to the action instruction, acquires the real-time body motion information through the camera, and sends the motion information of each skeleton point to the cloud server;
S73: the cloud server parses the motion information of each skeleton point, generates the user's limb motion thread, looks up, in the pre-generated association table of motion threads and limb actions, the limb action matching the user's limb motion thread, and controls the virtual character on the interactive education interface of the terminal display device to show that limb action.
Specifically, as shown in Fig. 4, S73 includes:
S731: the cloud server pre-configures the association table of generated motion threads and limb actions.
The motion thread is formed by each skeleton point being located at different position points, and the limb actions include jumping, squatting, raising the right hand, raising the left hand, both hands forward, both hands backward, sliding, leaning to one side, lifting the left foot, lifting the right foot, standing with feet parallel and apart, and standing with one foot in front of the other;
S732: the cloud server parses the motion information of each skeleton point, obtains the displacement of each skeleton point along the x-, y- and z-axes in a three-dimensional coordinate system, and generates the user's limb motion thread according to the displacement information of each skeleton point.
S733: the cloud server looks up, in the pre-generated association table of motion threads and limb actions, the limb action matching the user's limb motion thread.
S734: the cloud server judges whether the lookup of the limb action corresponding to the user's limb motion thread in the association table succeeds; if so, it controls the virtual character on the display screen to show that limb action; if not, it parses the limb motion thread, determines the user's limb motion content, forms a limb action, and then continues with the step of controlling the virtual character on the display screen to show the limb action, the limb motion content including skeleton point moving direction and moving displacement.
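The lookup and fallback described in S731 to S734 might be sketched as follows; the skeleton point names, the displacement threshold and the two-entry association table are assumptions chosen only to make the example self-contained.

```python
# Sketch of S731-S734: turn per-skeleton-point displacements into a motion
# "thread", look it up in an association table, and fall back to a coarse rule.
import numpy as np

# S731: hypothetical association table, motion thread -> limb action
ASSOCIATION_TABLE = {
    ("right_hand:+y",): "raise right hand",
    ("left_foot:+y",): "lift left foot",
}

def motion_thread(displacements: dict, thresh: float = 0.15) -> tuple:
    """S732: keep, per skeleton point, the dominant axis of any large displacement."""
    thread = []
    for point, d in displacements.items():
        axis = int(np.argmax(np.abs(d)))
        if abs(d[axis]) >= thresh:
            thread.append(f"{point}:{'+' if d[axis] > 0 else '-'}{'xyz'[axis]}")
    return tuple(sorted(thread))

def limb_action(displacements: dict) -> str:
    thread = motion_thread(displacements)
    action = ASSOCIATION_TABLE.get(thread)          # S733
    if action is not None:                          # S734: lookup succeeded
        return action
    # fallback: derive an action from the raw moving direction and displacement
    return "custom action: " + ", ".join(thread) if thread else "hold still"

if __name__ == "__main__":
    d = {"right_hand": np.array([0.02, 0.40, 0.01]),
         "head": np.array([0.0, 0.01, 0.0])}
    print(limb_action(d))   # -> raise right hand
```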
For example, the preschool education interactive teaching system generates a virtual character of the child on the display screen, issues the voice command "please feed the little turtle", and shows virtual "turtle food" at a certain position on the display screen. This requires the child user to perform limb activity, executing the two actions "take the food" and "feed". The command sending device, the camera and the cloud server analyse the child's movements according to steps S71 to S73 above and make the virtual character on the display screen perform the corresponding actions, further increasing the interactivity and fun of the preschool education interactive teaching system.
The foregoing merely describes preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A preschool education interactive teaching system, characterized in that the system comprises a camera, a facial information acquisition module, a matching module, a question grading module, a scene image acquisition module, an animation production module, a display image acquisition module and a display screen, wherein the camera is communicatively connected with the facial information acquisition module; the facial information acquisition module, the matching module, the question grading module and the scene image acquisition module are connected in sequence; the scene image acquisition module is also connected with the display screen; the animation production module is connected with the display image acquisition module; the display image acquisition module is connected with the display screen; and the animation production module is also connected directly with the display screen so as to send the produced animation to the display screen; the preschool education interactive teaching system further comprises a cloud server and a plurality of real-scene acquisition devices arranged in different scenes, the real-scene acquisition devices being communicatively connected with the cloud server, and the cloud server also being communicatively connected with the scene image acquisition module.
2. A preschool education interactive teaching method using the preschool education interactive teaching system of claim 1, characterized in that the method comprises:
S1: the facial information acquisition module obtains facial feature information, captured by the camera, that characterizes the current user's age group;
S2: the matching module matches the acquired facial feature information against preset children's facial feature information; the facial feature information includes at least one of skin condition, facial-feature proportions and face-shape features, and is used to determine the child's age range;
S3: the question grading module obtains the personal feature set of the current child user according to the matched child age range;
S4: the question grading module obtains grading questions matching the personal feature set and determines the current user's focus information, including points of interest, strong points and knowledge gaps, from the child's answers to the grading questions;
S5: the scene image acquisition module retrieves from the cloud server a scene corresponding to the focus information output by the question grading module; the scene image acquisition module projects predetermined scenes stored in a computer and real-time scenes transmitted over the Internet onto the display screen, forming a virtual-reality teaching environment;
S6: the display image acquisition module obtains the interaction scene image data currently on the display screen;
S7: the display image acquisition module generates teaching content text information based on the target object; the interaction scene image data is determined according to the current user's points of interest, strong points and knowledge gaps;
S8: the animation production module obtains the original picture on the display screen;
S9: the animation production module performs contour detection and extraction on the original picture by deep learning, divides the original picture into a plurality of picture blocks, matches each picture block with different colours to generate picture blocks of multiple different colours, and determines the colour range of each picture block;
S10: the animation production module produces part materials according to the recognized content;
S11: the animation production module extracts a prefabricated animation from an animation library according to the recognized content;
S12: the animation production module assigns the produced materials to the corresponding parts of the prefabricated animation, so that the result corresponds to what was drawn;
S13: the animation production module displays the finally formed intelligent animation back on the screen.
3. The preschool education interactive teaching method according to claim 2, characterized in that, in S2, if, in the facial feature information obtained by the matching module, the distance across the widest part of the face included in the face-shape features is less than or equal to 10 cm and the cheekbone shadow is less than or equal to 4 square centimetres, the skin moisture value included in the skin condition is greater than or equal to 35, and the maximum facial-feature spacing included in the facial-feature proportions is less than or equal to 4 cm, it is determined that the acquired facial feature information correctly matches the preset children's facial feature information.
4. The preschool education interactive teaching method according to claim 2, characterized in that S4 comprises:
randomly selecting questions from the question pool for the child's age group according to the child age range and putting them to the user, and determining the user's focus according to the user's answer accuracy and the expression features shown while answering.
5. The preschool education interactive teaching method according to claim 2, characterized in that S5 comprises:
the scene image acquisition module receiving the user's point-of-interest information, classifying it, and sending the resulting classification information to the cloud server; the cloud server sending an identification instruction to the corresponding real-time scene acquisition device according to the classification information and determining whether the point-of-interest object is identified by that device; and, if so, acquiring the picture from the corresponding real-time scene acquisition device.
6. The preschool education interactive teaching method according to claim 2, characterized in that S7 comprises:
the display image acquisition module parsing the interaction scene image data, extracting object image information from it, and judging whether each extracted object can be used for language teaching, wherein, if an extracted object is within the syllabus of the corresponding child user, it is determined that the extracted object can be used for language teaching.
7. The preschool education interactive teaching method according to claim 2, characterized in that, in S9, after the colour range of each picture block is determined, the method further comprises:
outputting the colour variants corresponding to each picture block onto the display screen, receiving the child user's instruction, and retaining one variant for each picture block according to the child user's instruction;
combining all the picture blocks retained by the user and outputting them onto the display screen.
8. The preschool education interactive teaching method according to claim 2, characterized in that S10 comprises:
scaling the picture down to 6x6, 36 pixels in total, so as to remove detail and retain only structure, light-and-shade and other essential information, discarding picture differences caused by different sizes and aspect ratios; at the same time backing up a 32x32 copy of the picture, 1024 pixels in total, for extracting pixel information;
storing colour-sampling position information for each part of the animation; scaling the picture originally used for comparison down to 32x32 pixels, and taking five pixel coordinates at the same movable part of the picture, four being the extreme corner coordinates (top, bottom, left and right) and one being the central (median) point;
using these five values to sample pixel values from the backed-up 32x32 picture, and producing the corresponding part material with an annular colour gradient.
9. The preschool education interactive teaching method according to claim 2, characterized in that the preschool education interactive teaching system further includes voice acquisition devices, and the method further comprises:
S14: the cloud server receives in real time, and records, the children's classroom interaction frequency obtained by the multiple voice acquisition devices;
S15: the cloud server calculates the average volume over the interaction time from the collected voice information, and determines and records the children's level of classroom engagement according to the average volume;
S16: the cloud server extracts, by a speech recognition algorithm, the voice emotion feature information of each face in the classroom video information, and performs matching recognition on the voice emotion feature information against preset voice emotion feature parameters to determine the children's emotional state and concentration in class.
10. The interactive teaching method for preschool education according to claim 1, characterized in that the preschool education interactive teaching system further comprises a command sending device, and in S7, after the teaching content text information is generated based on the target object, the method further comprises:
S71: the command sending device sends an action posture instruction and voice information;
S72: the command sending device obtains the motion information of each skeleton point corresponding to the action instruction, acquires real-time body motion information through the camera, and sends the motion information of each skeleton point to the server;
S73: the server parses the motion information of each skeleton point, generates the user's limb motion thread, searches a pre-generated association table of motion threads and limb actions for the limb action matching the user's limb motion thread, and controls the virtual character on the interactive education program interface presented by the terminal presentation device to perform the limb action;
S73 comprises:
S731: the cloud server pre-configures the association table of motion threads and limb actions;
S732: the cloud server parses the motion information of each skeleton point to obtain the displacement of each skeleton point along the x-axis, y-axis and z-axis in a three-dimensional coordinate system, and generates the user's limb motion thread from the displacement information of each skeleton point;
S733: the cloud server searches the pre-generated association table of motion threads and limb actions for the limb action corresponding to the user's limb motion thread;
S734: the cloud server judges whether the search in the association table for the limb action matching the user's limb motion thread succeeds; if so, it controls the virtual character on the display screen to perform the limb action; if not, it parses the limb motion thread, determines the user's limb motion content, forms a limb action, and continues to execute the step of controlling the virtual character on the display screen to perform the limb action, wherein the limb motion content comprises the skeleton point moving direction and moving displacement.
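Steps S731 to S734 describe a table lookup with a fallback path. A hedged sketch in Python follows; the thread encoding (dominant displacement axis per skeleton point), the example table entries and the jitter threshold are all assumptions, since the claim states only that a pre-configured table maps motion threads to limb actions and that an unmatched thread is parsed into a new action from its direction and displacement:

# Sketch of S731-S734. ASSOCIATION_TABLE contents and the thread encoding are illustrative only.
ASSOCIATION_TABLE = {                       # S731: pre-configured motion-thread -> limb-action table
    ("right_arm:+y",): "raise_right_arm",
    ("left_leg:+x",): "step_left",
}

def motion_thread(displacements):
    """S732: displacements maps a skeleton point name to its (dx, dy, dz) displacement."""
    thread = []
    for point, (dx, dy, dz) in sorted(displacements.items()):
        axis, value = max(zip("xyz", (dx, dy, dz)), key=lambda p: abs(p[1]))
        if abs(value) > 0.05:               # ignore tiny jitter (arbitrary threshold)
            thread.append(f"{point}:{'+' if value > 0 else '-'}{axis}")
    return tuple(thread)

def resolve_limb_action(displacements):
    thread = motion_thread(displacements)
    action = ASSOCIATION_TABLE.get(thread)  # S733: look up the matching limb action
    if action is None:                      # S734: no match, derive an action from the motion content
        action = "custom:" + ",".join(thread)
    return action                           # the virtual character on the display screen performs this action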
CN201811424917.9A 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method Active CN109637207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811424917.9A CN109637207B (en) 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method

Publications (2)

Publication Number Publication Date
CN109637207A (en) 2019-04-16
CN109637207B (en) 2020-09-01

Family

ID=66069260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811424917.9A Active CN109637207B (en) 2018-11-27 2018-11-27 Preschool education interactive teaching device and teaching method

Country Status (1)

Country Link
CN (1) CN109637207B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604382A (en) * 2009-06-26 2009-12-16 华中师范大学 A kind of learning fatigue recognition interference method based on human facial expression recognition
CN105493130A (en) * 2013-10-07 2016-04-13 英特尔公司 Adaptive learning environment driven by real-time identification of engagement level
US20180053431A1 (en) * 2016-05-19 2018-02-22 Timothy J. Young Computer architecture for customizing the content of publications and multimedia
US20180330630A1 (en) * 2017-05-11 2018-11-15 Shadowbox, Llc Video authoring and simulation training tool
CN107729491A (en) * 2017-10-18 2018-02-23 广东小天才科技有限公司 Improve the method, apparatus and equipment of the accuracy rate of topic answer search
CN108877336A (en) * 2018-03-26 2018-11-23 深圳市波心幻海科技有限公司 Teaching method, cloud service platform and tutoring system based on augmented reality

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781966A (en) * 2019-10-23 2020-02-11 史文华 Method and device for identifying character learning sensitive period of infant and electronic equipment
CN110826510A (en) * 2019-11-12 2020-02-21 电子科技大学 Three-dimensional teaching classroom implementation method based on expression emotion calculation
CN114402277A (en) * 2019-11-28 2022-04-26 多玩国株式会社 Content control system, content control method, and content control program
CN110909702A (en) * 2019-11-29 2020-03-24 侯莉佳 Artificial intelligence-based infant sensitivity period direction analysis method
CN110909702B (en) * 2019-11-29 2023-09-22 侯莉佳 Artificial intelligence-based infant sensitive period direction analysis method
CN111311460A (en) * 2020-04-08 2020-06-19 上海乂学教育科技有限公司 Development type teaching system for children
CN111613100A (en) * 2020-04-30 2020-09-01 华为技术有限公司 Interpretation and drawing method and device, electronic equipment and intelligent robot
CN111638783A (en) * 2020-05-18 2020-09-08 广东小天才科技有限公司 Man-machine interaction method and electronic equipment
CN112381699A (en) * 2020-12-04 2021-02-19 湖北致未来智能教育科技有限公司 Automatic interactive intelligent education management system
CN112734609A (en) * 2021-01-06 2021-04-30 西安康宸科技有限公司 Artificial intelligence-based early child development management system
CN112954235A (en) * 2021-02-04 2021-06-11 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN112954235B (en) * 2021-02-04 2021-10-29 读书郎教育科技有限公司 Early education panel interaction method based on family interaction
CN115951851A (en) * 2022-09-27 2023-04-11 武汉市公共交通集团有限责任公司信息中心 Integrated display system for dispatching of bus station
CN116226411A (en) * 2023-05-06 2023-06-06 深圳市人马互动科技有限公司 Interactive information processing method and device for interactive project based on animation
CN116226411B (en) * 2023-05-06 2023-07-28 深圳市人马互动科技有限公司 Interactive information processing method and device for interactive project based on animation
CN117218912A (en) * 2023-05-09 2023-12-12 华中师范大学 Intelligent education interaction system
CN117218912B (en) * 2023-05-09 2024-03-26 华中师范大学 Intelligent education interaction system

Also Published As

Publication number Publication date
CN109637207B (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN109637207A (en) A kind of preschool education interactive teaching device and teaching method
Park et al. A metaverse: Taxonomy, components, applications, and open challenges
JP6888096B2 (en) Robot, server and human-machine interaction methods
US11410570B1 (en) Comprehensive three-dimensional teaching field system and method for operating same
Fothergill et al. Instructing people for training gestural interactive systems
Radvansky et al. Event cognition
CN109176535B (en) Interaction method and system based on intelligent robot
CN109584648B (en) Data generation method and device
CN112199002B (en) Interaction method and device based on virtual role, storage medium and computer equipment
CN110227266A (en) Reality-virtualizing game is constructed using real world Cartographic Virtual Reality System to play environment
CN109614849A (en) Remote teaching method, apparatus, equipment and storage medium based on bio-identification
CN109885595A (en) Course recommended method, device, equipment and storage medium based on artificial intelligence
CN103377568B (en) Multifunctional child somatic sensation educating system
CN109902912B (en) Personalized image aesthetic evaluation method based on character features
CN108460707A (en) A kind of the operation intelligent supervision method and its system of student
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
Gsöllpointner et al. Digital synesthesia: a model for the aesthetics of digital art
CN110531849A (en) A kind of intelligent tutoring system of the augmented reality based on 5G communication
CN113506624A (en) Autism child cognitive ability assessment intervention system based on layer-by-layer generalization push logic
Campbell The theatre of the oppressed in practice today: An introduction to the work and principles of Augusto Boal
Lupton Creating and expressing: Information-as-it-is-experienced
CN117055724A (en) Generating type teaching resource system in virtual teaching scene and working method thereof
CN110245253A (en) A kind of Semantic interaction method and system based on environmental information
CN117651960A (en) Interactive avatar training system
KR102323601B1 (en) System for analyzing growth and development of infants by using big data analytics with deep learning algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant