CN108537321A - Robot teaching method, apparatus, server and storage medium - Google Patents

Robot teaching method, apparatus, server and storage medium

Info

Publication number
CN108537321A
CN108537321A CN201810230256.XA
Authority
CN
China
Prior art keywords
target user
robot
input information
teaching
modal input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810230256.XA
Other languages
Chinese (zh)
Inventor
朱辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Rubu Technology Co.,Ltd.
Original Assignee
Beijing Intelligent Housekeeper Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Intelligent Housekeeper Technology Co Ltd
Priority to CN201810230256.XA
Publication of CN108537321A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/008 Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 Manipulators not otherwise provided for
    • B25J 11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot teaching method, apparatus, server and storage medium. The method includes: collecting multi-modal input information of a current scene; analyzing the multi-modal input information using an emotion engine to determine the emotional features of a target user; generating the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results; and performing adaptive teaching for the target user according to the dedicated robot portrait. By collecting multi-modal input information and mining the target user's emotional features, the invention can simulate a real person's interaction and teaching style, guide the target user to learn in the form of the target user's ideal robot portrait, improve the intelligence of the robot's interaction with the target user and the adaptivity of its teaching, and enhance the target user's interest in learning and learning experience.

Description

Robot teaching method, apparatus, server and storage medium
Technical field
The present invention relates to the field of intelligent robot technology, and in particular to a robot teaching method, apparatus, server and storage medium.
Background technology
With the rapid development of machine information technology, computer technology and artificial intelligence, intelligent robot products have penetrated more and more areas of people's life and work, and the education sector is no exception.
A wide variety of electronic education devices currently exist to assist children's or teenagers' study, such as early-education machines, learning machines and similar electronic products, but intelligent robots that can be used for teaching are very few. Moreover, the existing intelligent robots usable for teaching provide only one or a few simple presentation forms for the user to choose from; after the user triggers the intelligent robot to enter teaching mode, the intelligent robot teaches with fixed course content and assesses the user's learning state according to the user's feedback results during learning.
The existing intelligent robot teaches the user in a rigid, stereotyped presentation form, can usually only complete actions according to a preset instruction set and, when interacting with the user, triggers commands through the interactive components carried by its system, so its interactive contact with the user is very limited. It therefore cannot provide the user with adaptive teaching that matches the user's needs and state, and the user experience is poor.
Summary of the invention
Embodiments of the present invention provide a robot teaching method, apparatus, server and storage medium, which can improve the intelligence of a robot's interaction with the user and the adaptivity of its teaching.
In a first aspect, an embodiment of the present invention provides a robot teaching method, including:
collecting multi-modal input information of a current scene;
analyzing the multi-modal input information using an emotion engine to determine the emotional features of a target user;
generating the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results;
performing adaptive teaching for the target user according to the dedicated robot portrait.
In a second aspect, an embodiment of the present invention provides a robot teaching apparatus, including:
an information collection module, configured to collect multi-modal input information of a current scene;
an emotion analysis module, configured to analyze the multi-modal input information using an emotion engine to determine the emotional features of a target user;
a portrait generation module, configured to generate the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results;
an adaptive teaching module, configured to perform adaptive teaching for the target user according to the dedicated robot portrait.
In a third aspect, an embodiment of the present invention provides a server, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the robot teaching method described in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the robot teaching method described in any embodiment of the present invention is implemented.
By collecting multi-modal input information and analyzing it with an emotion engine, the present invention combines the multi-modal input information in the current scene, the direct feedback results of the target user's interaction with the robot, and the emotional features the target user expresses from the inside out to generate the target user's dedicated robot portrait, and performs adaptive teaching for the target user in the form of the dedicated robot portrait. Through the collection of multi-modal input information and the mining of the user's emotional features, the present invention can simulate a real person's interaction and teaching style, guide the target user to learn in the form of the target user's ideal robot portrait, improve the intelligence of the robot's interaction with the target user and the adaptivity of its teaching, and enhance the target user's interest in learning and learning experience.
Description of the drawings
Fig. 1 is a flowchart of a robot teaching method provided by Embodiment One of the present invention;
Fig. 2 is a flowchart of a robot teaching method provided by Embodiment Two of the present invention;
Fig. 3 is a structural schematic diagram of a robot teaching apparatus provided by Embodiment Three of the present invention;
Fig. 4 is a structural schematic diagram of a server provided by Embodiment Four of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment One
Fig. 1 is a flowchart of a robot teaching method provided by Embodiment One of the present invention. This embodiment is applicable to the case where a robot teaches a user, and the method can be executed by a robot teaching apparatus. The method specifically includes the following steps:
Step 110: collect multi-modal input information of a current scene.
In this specific embodiment of the invention, the current scene is the scene where the robot is located, which may be the home of the target user (the teaching object) or any other scene where the robot can be used. The scene may contain any objects or people, such as the target user using the robot or the target user's family members, or it may contain nothing at all.
Multi-modal input information is information about many aspects of a target collected from multiple information sources. Although these pieces of information differ in expression, confidence and emphasis, they are inevitably related to one another. Traditional single-modality acquisition captures only local information about the target's features: not only is the amount of information limited, but, affected by the modality's own characteristics, performance and scope of use, the information a single modality can provide is often incomplete and uncertain, and sometimes even wrong and irreparable. A single modality alone is therefore hard-pressed to capture the features of the target object accurately, which limits the robot's intelligence.
The robot of this embodiment may use multiple sensors to collect the multi-modal input information in the current scene. For example, an audio sensor collects the voice information in the current scene, a video sensor collects the image information in the current scene, and input devices such as keyboards and touch screens collect text or button input. Various sensors commonly used in the industry can also be used to perceive information in the environment, for example a distance sensor to perceive the presence of objects or a temperature sensor to perceive the temperature. This embodiment does not limit the sensing means used by the robot system. The collection of multi-modal input information provides a comprehensive and reliable basis for analyzing the scene and the user: when one modality deviates, the system can obtain the target object's features from the other modalities, improving the system's fault tolerance and reliability.
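By way of illustration only (this sketch is not part of the patent disclosure), the following Python fragment shows one way such fault-tolerant multi-sensor collection could be organized; the MultiModalInput class, the collect_scene function and all field names are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultiModalInput:
    """One snapshot of the current scene across several modalities."""
    audio: Optional[bytes] = None       # voice data from the audio sensor
    image: Optional[bytes] = None       # frame data from the video sensor
    text: Optional[str] = None          # keyboard / touch-screen input
    distance_m: Optional[float] = None  # distance-sensor reading
    temperature_c: Optional[float] = None

def collect_scene(sensors: dict) -> MultiModalInput:
    """Poll every available sensor; a failed modality is simply left empty,
    so the remaining modalities can still describe the scene (fault tolerance)."""
    snapshot = MultiModalInput()
    for name, read in sensors.items():
        try:
            setattr(snapshot, name, read())
        except Exception:
            pass  # a deviating modality is compensated by the others
    return snapshot

# Usage with stubbed sensor callbacks:
sensors = {
    "audio": lambda: b"\x00\x01",
    "image": lambda: b"\xff\xd8",
    "distance_m": lambda: 1.8,
}
print(collect_scene(sensors))
```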
In this embodiment, a person may trigger the robot to enter teaching mode, and the robot performs adaptive teaching for the target user, i.e. the robot's teaching object, according to the user's settings and teaching records. Alternatively, through the collection, recognition and processing of the multi-modal input information in the current scene, the robot can actively start adaptive teaching and guide the target user to learn whenever the target user appears in the current scene, thereby increasing the target user's learning frequency and learning time.
Step 120: analyze the multi-modal input information using an emotion engine and determine the target user's emotional features.
In this specific embodiment of the invention, the target user is the robot's teaching object, such as a child or a teenager. The target user and the robot may be in a one-to-one teaching relationship, i.e. the robot is the target user's dedicated robot and provides teaching services only to that target user. Alternatively, target users and the robot may be in a many-to-one teaching relationship, i.e. the robot establishes a separate user account for each of several target users so as to teach different target users with different teaching methods and teaching content.
The emotion engine integrates multiple intelligent information processing technologies to process, recognize and analyze the collected multi-modal input information and compute the target user's emotional features during interaction. Since emotion is a composite human state that includes both the experience of the outside world and physiological reactions from within a person, a person's emotional features can indirectly yet fully reflect information such as the target user's mood, cognitive state, preferences and intentions. Specifically, the emotion engine can analyze the target user's image information and voice information separately. Through image recognition it can recognize the target user's mood (for example happy, sad, afraid, disgusted, surprised, contemptuous or neutral), the target user's cognitive state (for example tired, thinking, focused or distracted), and the target user's facial actions (for example nodding, shaking the head, opening the eyes, closing the eyes or blinking). By recognizing special modal particles and intonation changes in the voice information, it can likewise recognize emotional features such as the target user's mood and cognitive state. The emotion engine's processing results are analyzed comprehensively to determine the target user's emotional features.
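As an illustrative sketch only, the fragment below shows how an emotion engine of this kind might fuse per-modality recognizers into the emotional features described above; the recognizer functions are hypothetical stand-ins for real classifiers, and the fusion rule (letting the vocal cue override the facial cue) is one possible choice, not the patent's prescribed method.

```python
from dataclasses import dataclass

@dataclass
class EmotionalFeatures:
    mood: str             # e.g. happy, sad, afraid, disgusted, surprised, contemptuous, neutral
    cognitive_state: str  # e.g. tired, thinking, focused, distracted
    facial_action: str    # e.g. nod, head_shake, eyes_open, eyes_closed, blink

def recognize_mood_from_image(image: bytes) -> str:
    return "happy"        # stand-in for a facial-expression classifier

def recognize_state_from_image(image: bytes) -> str:
    return "focused"      # stand-in for a cognitive-state classifier

def recognize_action_from_image(image: bytes) -> str:
    return "nod"          # stand-in for a facial-action detector

def recognize_mood_from_voice(audio: bytes) -> str:
    return "happy"        # stand-in for modal-particle / intonation analysis

def emotion_engine(image: bytes, audio: bytes) -> EmotionalFeatures:
    """Analyze image and voice separately, then combine the results:
    the voice-based mood confirms or overrides the image-based mood."""
    mood = recognize_mood_from_image(image)
    voice_mood = recognize_mood_from_voice(audio)
    if voice_mood != mood:
        mood = voice_mood  # trust the vocal cue when the two disagree
    return EmotionalFeatures(
        mood=mood,
        cognitive_state=recognize_state_from_image(image),
        facial_action=recognize_action_from_image(image),
    )

print(emotion_engine(b"frame", b"speech"))
```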
For example, when expressing liking or interest, the target user's face may show a happy expression such as a smile, accompanied by nodding, and the user may also express the meaning of "I like it" in language. Likewise, when expressing incomprehension, the target user's face may show a puzzled expression such as a frown or a pout, accompanied by head shaking, and the user may express the meaning of "I don't understand" with a questioning tone. From such multi-modal input information about the target user, implicit emotional features such as the target user's preferences, degree of concentration and learning state can be determined by analysis, enabling more intelligent interaction with the user.
Step 130: generate the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results.
In this specific embodiment of the invention, in order to interact with the target user more intelligently, attract the target user's attention as much as possible and raise the target user's interest in learning, the robot can comprehensively generate the target user's dedicated robot portrait from the collected multi-modal input information, the target user's emotional features determined by the emotion engine's analysis, and the target user's direct interaction results with the robot. The robot portrait includes not only directly visible image features, such as gender, appearance and expression, but also features that express other aspects of the persona, such as age, character, personality, voice and intonation. In addition, the target user can give the robot a specific name, so that the robot comprehensively simulates a real person. During later continuous interaction, certain features of the robot portrait, such as expression and intonation, can also be updated synchronously in real time according to the target user's multi-modal input information, emotional features and interaction results, improving the intelligence of the interaction.
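For illustration, here is a minimal sketch of how a dedicated robot portrait and its synchronous update could be represented; the RobotPortrait class, its field names and the mood-to-expression mirror table are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RobotPortrait:
    """Dedicated robot portrait: visible features plus persona features."""
    name: str = ""
    gender: str = ""
    age: int = 0
    appearance: str = "default"
    character: str = "friendly"
    voice_timbre: str = "neutral"
    expression: str = "neutral"  # updated synchronously during interaction
    intonation: str = "calm"     # updated synchronously during interaction

    def sync_update(self, user_mood: str) -> None:
        """Mirror the target user's current emotional features in real time."""
        mirror = {"happy": ("smile", "lively"),
                  "sad": ("concerned", "soft"),
                  "afraid": ("startled", "gentle")}
        self.expression, self.intonation = mirror.get(user_mood, ("neutral", "calm"))

portrait = RobotPortrait(name="Xiaozhi", gender="girl", age=6, voice_timbre="girl")
portrait.sync_update("happy")
print(portrait.expression, portrait.intonation)  # smile lively
```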
For example, when the robot is purchased and used for the first time, it is activated. The information of the target user and the user's family members, such as simple personal information, face image information and voiceprint information, can be entered into the robot. The robot completes the initialization of the robot portrait according to the entered information and gradually refines the portrait during later interaction. For instance, when the robot detects a face through face recognition, it may say: "Hello, you are the first human I have met. Are you my little master? May I ask your name?" The target user replies: "Yes, I am Xiao Ming." The robot thereby locks onto the target user according to the feedback and obtains the target user's name. If, according to the multi-modal input information and analysis results at this point, the target user is tentatively judged to be a boy, the robot continues: "I think you are a boy, right?" The target user replies: "Yes." The robot thus obtains the target user's gender. The robot continues: "How old are you?" The target user replies: "Six." The robot thus obtains the target user's age. The robot continues: "Do you think I am a boy or a girl?" The target user replies: "You are a girl." The robot then changes its voice timbre to a girl's voice and continues: "Do you like my voice now?" The target user replies: "Yes!" This completes the robot's gender setting, i.e. girl. In subsequent continuous interaction, information such as the robot's age, gender and personality is determined and adaptively set through this kind of exchange, eventually generating the target user's favorite robot portrait.
Step 140: perform adaptive teaching for the target user according to the dedicated robot portrait.
In this specific embodiment of the invention, according to the target user's dedicated robot portrait, the robot outputs multi-modal information with the set age, character, personality, voice, intonation and other features and teaches the user adaptively. Through face recognition and voiceprint recognition of the target user, when the target user appears in the current scene the robot can recognize the image of the current scene; if the current scene contains scene content that can be used for teaching, it generates presentation-type multi-modal output data for that scene content and actively leads the target user into interactive teaching, thereby raising the target user's interest in learning, learning time and learning frequency. Alternatively, at a fixed time, likewise when the target user appears in the current scene, formal teaching begins according to the teaching records in the robot system; in formal teaching, multiple teaching modes such as course teaching, review, practice/error correction, testing, reading and conversation can be implemented. During teaching, the robot can intelligently recognize the target user's learning state and the time actually spent learning, and at the same time comprehensively, objectively and accurately assess the target user's learning situation according to the multi-modal input information, the target user's emotional features and the target user's interaction results, so as to dynamically adjust the teaching content and teaching difficulty for the target user according to the assessment result. This further implements adaptivity and intelligence in teaching, enabling the robot to teach the target user in a targeted way, achieve the goal of teaching students according to their aptitude, and improve the user's learning efficiency.
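The hypothetical sketch below illustrates the decision logic described in this step: actively starting teaching when the target user appears, and running formal teaching modes on schedule. The mode-selection heuristic and all function names are assumptions for demonstration, not the patent's fixed rules.

```python
import datetime

def choose_formal_mode(teaching_record: dict) -> str:
    """Pick the next formal teaching mode (course teaching, review, practice/error
    correction, testing, reading or conversation) from the stored teaching record:
    new content first, review from the next day on."""
    last = teaching_record.get("last_lesson")
    if last is None:
        return "course"
    days = (datetime.date.today() - last).days
    return "review" if days >= 1 else "practice"

def adaptive_teaching_step(user_present: bool, scheduled: bool,
                           teaching_record: dict) -> str:
    """One decision step: actively start teaching when the target user appears,
    or run formal teaching at the scheduled time."""
    if not user_present:
        return "idle"
    if scheduled:
        return choose_formal_mode(teaching_record)
    return "interactive"  # scene-driven guidance outside the schedule

record = {"last_lesson": datetime.date.today() - datetime.timedelta(days=2)}
print(adaptive_teaching_step(True, True, record))  # review
```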
In the technical solution of this embodiment, multi-modal input information is collected and analyzed with an emotion engine, so that the target user's dedicated robot portrait is generated by combining the multi-modal input information in the current scene, the direct feedback results of the target user's interaction with the robot, and the emotional features the target user expresses from the inside out, and adaptive teaching is performed for the target user in the form of the dedicated robot portrait. Through the collection of multi-modal input information and the mining of the user's emotional features, the present invention can simulate a real person's interaction and teaching style, guide the target user to learn in the form of the target user's ideal robot portrait, improve the intelligence of the robot's interaction with the target user and the adaptivity of its teaching, and enhance the target user's interest in learning and learning experience.
Embodiment Two
On the basis of Embodiment One, this embodiment provides a preferred implementation of the robot teaching method that can make full use of multi-modal input information to teach the target user adaptively. Fig. 2 is a flowchart of a robot teaching method provided by Embodiment Two of the present invention. As shown in Fig. 2, the method includes the following steps:
Step 201: enter the information of the target user and the information of the target user's family members.
In this specific embodiment of the invention, the robot is activated when it is purchased and used for the first time. According to the user's needs, the information of the target user and the user's family members can be entered into the robot; this information includes but is not limited to personal identity information, face images and voiceprint information. The robot then performs feature extraction and recognition on the face images and voiceprint information, associating each entered person's identity with that person's face features and voiceprint features, so that the robot can later recognize a person's identity by face or voice.
Step 202: initialize the target user's dedicated robot portrait according to the target user's information.
In this specific embodiment of the invention, the purpose of setting the target user's dedicated robot portrait is to make interacting and learning with the robot more interesting for the target user, so the robot completes the initialization of the robot portrait according to the entered information. For example, if the entered personal identity information indicates that the target user is a six-year-old boy who loves sports, the robot portrait can accordingly be initialized to resemble the target user: an initial robot portrait that is six years old, male, with a voice full of energy.
Step 203: collect multi-modal input information of the current scene.
In this specific embodiment of the invention, multiple sensors may be used to collect the multi-modal input information in the current scene, and the information in the environment is perceived through analysis of the multi-modal input information. The collection of multi-modal input information provides a comprehensive and reliable basis for analyzing the scene and the user. This embodiment does not limit the sensing means used by the robot system.
Step 204: recognize each single-modality input information in the multi-modal input information separately.
Step 205: analyze the current scene according to the recognition result of each single-modality input information.
Step 206: if the target user appears in the current scene, actively start adaptive teaching and guide the target user to learn.
In this specific embodiment of the invention, the multi-modal input information includes at least the image information and sound information of the current scene. Therefore, feature extraction, matching and recognition are performed on the current scene's image information and sound information according to the personal identity information, face images and voiceprint information of the target user and family members entered at activation. Since face images and voiceprint features differ between people, a designated person can be identified by face image and/or voiceprint information. When the target user is recognized as appearing in the current scene, adaptive teaching starts to actively guide the target user to learn, increasing the target user's learning time and learning frequency. Alternatively, at a fixed time, likewise when the target user appears in the current scene, formal teaching begins according to the teaching records in the robot system.
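As a hedged illustration of this step, the sketch below matches the scene's face and voiceprint features against the features enrolled at activation, identifying the target user by either modality alone; the cosine-similarity matching and the 0.8 threshold are assumptions, not values from the patent.

```python
def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def is_target_user(face_vec: list, voice_vec: list, enrolled: dict,
                   threshold: float = 0.8) -> bool:
    """Match the scene's face and voiceprint features against the features
    enrolled at activation; either modality alone is enough to identify."""
    return (cosine_similarity(face_vec, enrolled["face"]) >= threshold or
            cosine_similarity(voice_vec, enrolled["voiceprint"]) >= threshold)

enrolled = {"face": [0.9, 0.1, 0.4], "voiceprint": [0.2, 0.8, 0.5]}
if is_target_user([0.88, 0.12, 0.41], [0.0, 0.0, 0.0], enrolled):
    print("target user detected: start adaptive teaching")
```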
Step 207: obtain the target user's image information and voice information according to the multi-modal input information.
Step 208: analyze the target user's image information and voice information using the emotion engine and determine the target user's emotional features, where the emotional features include but are not limited to the target user's mood, cognitive state and facial actions.
In this specific embodiment of the invention, the emotion engine integrates multiple intelligent information processing technologies to process, recognize and analyze the target user's image information and voice information in the collected multi-modal input information and compute the target user's emotional features during interaction. Specifically, the emotion engine can analyze the target user's image information and voice information separately: through image recognition it recognizes the target user's mood (for example happy, sad, afraid, disgusted, surprised, contemptuous or neutral), cognitive state (for example tired, thinking, focused or distracted) and facial actions (for example nodding, shaking the head, opening the eyes, closing the eyes or blinking); by recognizing special modal particles and intonation changes in the voice information, it recognizes emotional features such as the target user's mood and cognitive state. The emotion engine's processing results are analyzed comprehensively to determine the target user's emotional features.
Step 209: obtain the target user's preference features and current emotional features according to the multi-modal input information, the emotional features and the target user's interaction results.
In this specific embodiment of the invention, the multi-modal input information and the target user's interaction results are visible information fed back directly to the robot, while the emotional features are the hidden information mined from it; combining the visible information and the hidden information determines the target user's preference features and current emotional features. For example, when the robot asks "Little master, do you like the night?" and the target user frowns, shakes his head and says "No!", then from information such as the target user's expression and facial actions, the emotion engine's analysis can detect the target user's fear and disgust, and combined with the interaction information it can be clearly judged that the target user particularly dislikes the night.
Step 210: refine or update the target user's dedicated robot portrait according to the preference features and current emotional features.
In this specific embodiment of the invention, according to the target user's preference features, the robot portrait is refined or updated toward an appearance the target user likes and away from aspects the target user dislikes, drawing the robot closer to the target user. And according to the target user's current emotional features, the robot portrait, such as the robot's expression or intonation, is updated to match the target user, increasing the intelligence of the interaction. For example, in the example above, when the target user says he does not like the night, the robot also shows a frightened or disgusted expression, echoing the target user's emotional state.
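A minimal hypothetical sketch of this refine-or-update step follows; the preference sets, field names and the mood-to-expression mirror table are placeholders, not the patent's specified data model.

```python
def update_portrait(portrait: dict, liked: set, disliked: set, current_mood: str) -> dict:
    """Refine the portrait toward what the user likes, away from what the
    user dislikes, and mirror the user's current emotional state."""
    portrait["themes"] = [t for t in portrait.get("themes", []) if t not in disliked]
    portrait["themes"] += [t for t in liked if t not in portrait["themes"]]
    portrait["expression"] = {"fear": "startled", "disgust": "disgusted"}.get(
        current_mood, "smile")
    return portrait

p = {"themes": ["night", "sports"], "expression": "neutral"}
print(update_portrait(p, liked={"fruit"}, disliked={"night"}, current_mood="fear"))
# {'themes': ['sports', 'fruit'], 'expression': 'startled'}
```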
Step 211: perform adaptive teaching for the user according to the dedicated robot portrait.
In this specific embodiment of the invention, according to the target user's dedicated robot portrait, the robot outputs multi-modal information with the set age, character, personality, expression, voice, intonation and other features and teaches the user adaptively.
Preferably, the scene content in the current scene that can be used for teaching is determined according to the multi-modal input information; presentation-type multi-modal output data for the scene content is generated; and the presentation-type multi-modal output data is output according to the dedicated robot portrait, providing interactive teaching to the target user.
In this specific embodiment of the invention, the objects in the scene where the target user is located can be recognized automatically according to the multi-modal input information; when a recognized object is suitable for the current teaching, presentation-type interactive teaching is carried out by combining that object in the scene with the target user and actively communicating with the target user.
For example, at 11:30 at noon, by collecting the multi-modal input information in the scene the robot recognizes that the mother walks past carrying food and puts it on the dining table; at the same time the target user, a child, appears in the current scene. The robot then immediately enters interactive teaching and says to the target user: "What delicious food, it looks yummy. I'm a little hungry. I am hungry, I am hungry. Are you hungry?" Combined with the mealtime scene, presentation-type interactive teaching is carried out by actively communicating with the child.
Preferably, the multi-modal input information from when the target user is learning is extracted according to the multi-modal input information, the target user's emotional features and the target user's interaction results; the target user's learning state and the learning time actually spent learning are determined according to the target user's emotional features and interaction results corresponding to the multi-modal input information during learning; a correspondence between the target user's learning content, learning state and learning time is established; and formalized multi-modal output data for the correspondence is generated and output according to the dedicated robot portrait, providing multi-mode teaching to the target user.
In this specific embodiment of the invention, from the collected multi-modal input information, only the multi-modal input information from when the target user is actually learning is extracted, so as to screen out multi-modal input information from the learning period that is affected by the environment. Then, according to the target user's emotional features and interaction results corresponding to the multi-modal input information during learning, the target user's learning state and the time actually spent learning are determined, and the target user's learning content, learning state and learning time are associated with one another. Based on this association, multiple teaching modes such as course teaching, review, practice/error correction, testing, reading and conversation can be implemented for the target user.
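For illustration, the sketch below shows one hypothetical shape for the learning-content/learning-state/learning-time correspondence and how a next teaching mode might be derived from it; the selection heuristic and thresholds are assumptions, not rules taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LearningRecord:
    """Correspondence between learning content, learning state and learning time."""
    content: str    # e.g. "fruit words: apple, banana, pear"
    state: str      # e.g. "focused", "distracted"
    minutes: float  # time actually spent learning

def next_teaching_mode(records: list) -> str:
    """Derive the next multi-mode teaching action from the stored records:
    distracted or very short sessions are repeated as review, focused ones
    move on to practice."""
    if not records:
        return "course"
    last = records[-1]
    return "review" if last.state == "distracted" or last.minutes < 5 else "practice"

history = [LearningRecord("fruit words: apple, banana, pear", "focused", 12.0)]
print(next_teaching_mode(history))  # practice
```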
For example, it is set that the target user is taught at 19:30 every evening, or the robot concludes from the target user's learning patterns that the target user should be taught at 19:30 every evening. Then, at 19:30, by collecting the multi-modal input information in the scene, the robot recognizes that the target user appears in the current scene and enters formal teaching. The robot says to the target user: "Little master, you know the English for apple is apple. Let's learn more fruit together!" It then teaches the child words and sentences related to fruit according to professional educational theory and a course framework.
For another example, on the second day after the words and sentences related to fruit were learned, when the robot recognizes that the target user appears in the current scene, it enters formal teaching. The robot says to the target user: "Little master, let me test you. Do you know how to say apple in English?" The target user replies: "Apple." In this and similar ways, the purpose of review is achieved.
For another example, one week after the words and sentences related to fruit were learned, when the robot recognizes that the target user appears in the current scene, it enters formal teaching. The robot says to the target user: "Little master, this week you learned the words for three fruits: apple, banana, pear. Now let's play a game together, and I will give you a reward. What is this?" The robot shows a picture of an apple on the screen, and the target user replies: "Apple." The robot responds: "Excellent, your pronunciation is very accurate. And what is this?" The robot shows a picture of a banana on the screen, and the target user replies: "Banana." The robot responds: "Good job! Little master, your pronunciation of banana is not quite perfect. Please read banana after me." In this way the target user's pronunciation is corrected and the child is guided to repeat the pronunciation, teaching the child the correct pronunciation. In this and similar ways, the purpose of practice/error correction is achieved.
Preferably, the target user's learning situation is assessed according to the multi-modal input information, the target user's emotional features, the target user's interaction results and the correspondence, and an assessment report is generated; according to the assessment report, the teaching content and teaching difficulty for the target user are dynamically adjusted.
In this specific embodiment of the invention, the target user's learning situation can be assessed not only from the target user's interaction results with the robot but also by combining other multi-modal input information, for example the target user's degree of concentration, expression changes, reaction time and open answers obtained by the above analysis, and an assessment report is generated. The teaching content and teaching difficulty for the target user are then dynamically adjusted according to the assessment report.
For example, during interactive learning with the robot, the target user is answering one of the questions, namely "Tell me, how many apples are on the screen?" If at this moment the child's mother asks the child where his schoolbag was put, so that the target user starts answering only after a pause of more than 30 seconds and then takes 10 seconds to reply "There are 7 apples", the robot can recognize, according to the multi-modal input information, the target user's emotional features, the target user's interaction results and the correspondence, that the time the target user actually spent answering is 10 seconds rather than 40 seconds, and judges the difficulty of this question in combination with the correctness of the target user's feedback. It can be understood that the difficulty of the teaching content is directly related to the target user's feedback time and the correctness of the feedback result: when the feedback time is longer and/or the feedback result is incorrect, the teaching content is more difficult; conversely, when the feedback time is shorter and the feedback result is correct, the teaching content is less difficult. When the teaching content is too difficult and the target user can hardly take it in or takes it in slowly, or when the teaching content is too easy and below the target user's level of mastery, the teaching content and teaching difficulty for the target user should be adjusted dynamically. Excluding the interruption avoids misjudging the question's difficulty because of a wrongly calculated reaction time, helping the target user learn adaptively more intelligently.
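The following sketch reproduces the reasoning of this example under stated assumptions: interrupted spans are excluded when computing the effective reaction time, and difficulty is then judged from that time together with the correctness of the answer. The 3-second and 20-second thresholds are illustrative only.

```python
def effective_reaction_time(events: list) -> float:
    """Sum only the spans in which the target user was actually answering,
    skipping spans where the multi-modal input shows an external interruption
    (e.g. a family member talking to the child)."""
    return sum(end - start for start, end, interrupted in events if not interrupted)

def judge_difficulty(reaction_s: float, correct: bool) -> str:
    """Difficulty rises with feedback time and with incorrect feedback."""
    if not correct or reaction_s > 20:
        return "too hard: lower the teaching difficulty"
    if reaction_s < 3:
        return "too easy: raise the teaching difficulty"
    return "appropriate"

# 30 s interrupted by the mother's question, then 10 s of real answering:
events = [(0, 30, True), (30, 40, False)]
t = effective_reaction_time(events)          # 10 s, not 40 s
print(t, judge_difficulty(t, correct=True))  # 10 appropriate
```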
In the technical solution of this embodiment, multi-modal input information is collected and analyzed with an emotion engine, so that the target user's dedicated robot portrait is generated by combining the multi-modal input information in the current scene, the direct feedback results of the target user's interaction with the robot, and the emotional features the target user expresses from the inside out, and adaptive teaching such as interactive teaching and multi-mode teaching is performed for the target user in the form of the dedicated robot portrait. Finally, the target user's learning situation is intelligently assessed, and the teaching content and teaching difficulty for the target user are dynamically adjusted according to the assessment result. Through the collection of multi-modal input information and the mining of the user's emotional features, the present invention can simulate a real person's interaction and teaching style, guide the target user to learn in the form of the target user's ideal robot portrait, intelligently assess the target user's learning situation according to the multi-modal input information so as to dynamically adjust the teaching content and teaching difficulty for the target user, improve the intelligence of the robot's interaction with the user and the adaptivity of its teaching, and enhance the target user's interest in learning and learning experience.
Embodiment Three
Fig. 3 is a structural schematic diagram of a robot teaching apparatus provided by Embodiment Three of the present invention. This embodiment is applicable to the case where a robot teaches a user, and the apparatus can implement the robot teaching method described in any embodiment of the present invention. The apparatus specifically includes:
an information collection module 310, configured to collect multi-modal input information of a current scene;
an emotion analysis module 320, configured to analyze the multi-modal input information using an emotion engine to determine the emotional features of a target user;
a portrait generation module 330, configured to generate the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results;
an adaptive teaching module 340, configured to perform adaptive teaching for the target user according to the dedicated robot portrait.
Further, the apparatus includes:
an information recognition module 350, configured to recognize each single-modality input information in the multi-modal input information separately after the multi-modal input information of the current scene is collected;
a scene analysis module 360, configured to analyze the current scene according to the recognition result of each single-modality input information;
a teaching starting module 370, configured to actively start adaptive teaching and guide the target user to learn if the target user appears in the current scene.
Preferably, the emotion analysis module 320 includes:
an information obtaining unit, configured to obtain the target user's image information and voice information according to the multi-modal input information;
an emotion determination unit, configured to analyze the target user's image information and voice information using the emotion engine to determine the target user's emotional features, where the emotional features include but are not limited to the target user's mood, cognitive state and facial actions.
Further, the apparatus also includes:
a data entry module 380, configured to enter the information of the target user and the information of the target user's family members before the multi-modal input information of the current scene is collected;
an activation module 390, configured to initialize the target user's dedicated robot portrait according to the target user's information.
Preferably, the portrait generation module 330 includes:
a feature obtaining unit, configured to obtain the target user's preference features and current emotional features according to the multi-modal input information, the emotional features and the target user's interaction results;
a portrait updating unit, configured to refine or update the target user's dedicated robot portrait according to the preference features and current emotional features.
Preferably, the adaptive teaching module 340 includes:
a scene determination unit, configured to determine, according to the multi-modal input information, the scene content in the current scene that can be used for teaching;
a data generation unit, configured to generate presentation-type multi-modal output data for the scene content;
an interactive teaching unit, configured to output the presentation-type multi-modal output data according to the dedicated robot portrait and provide interactive teaching to the target user.
Preferably, the adaptive teaching module 340 includes:
an information extraction unit, configured to extract the multi-modal input information from when the target user is learning, according to the multi-modal input information, the target user's emotional features and the target user's interaction results;
a learning situation determination unit, configured to determine the target user's learning state and the learning time actually spent learning, according to the target user's emotional features and interaction results corresponding to the multi-modal input information during learning;
a relationship establishing unit, configured to establish the correspondence between the target user's learning content, learning state and learning time;
a multi-mode teaching unit, configured to generate formalized multi-modal output data for the correspondence, output the formalized multi-modal output data according to the dedicated robot portrait, and provide multi-mode teaching to the target user.
Preferably, the adaptive teaching module 340 also includes:
an assessment unit, configured to assess the target user's learning situation according to the multi-modal input information, the target user's emotional features, the target user's interaction results and the correspondence, and generate an assessment report;
an adjustment unit, configured to dynamically adjust the teaching content and teaching difficulty for the target user according to the assessment report.
In the technical solution of this embodiment, through the cooperation of the function modules, the robot's activation, the collection of multi-modal input information, the recognition and analysis of information, the determination of the user's emotional features, the generation of the robot portrait, adaptive teaching, and the dynamic adjustment of teaching content and difficulty are realized. Through the collection of multi-modal input information and the mining of the user's emotional features, the present invention can simulate a real person's interaction and teaching style, guide the target user to learn in the form of the target user's ideal robot portrait, improve the intelligence of the robot's interaction with the target user and the adaptivity of its teaching, and enhance the target user's interest in learning and learning experience.
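As an illustration of how these modules cooperate (not an implementation from the patent), the hypothetical sketch below wires the four top-level modules into one processing pass, with each module reduced to a callable stub so the data flow between them is visible.

```python
class RobotTeachingDevice:
    """Wiring of the function modules described in this embodiment."""

    def __init__(self, collect, analyze_emotion, generate_portrait, teach):
        self.collect = collect                      # information collection module 310
        self.analyze_emotion = analyze_emotion      # emotion analysis module 320
        self.generate_portrait = generate_portrait  # portrait generation module 330
        self.teach = teach                          # adaptive teaching module 340

    def run_once(self, interaction_results):
        inputs = self.collect()
        features = self.analyze_emotion(inputs)
        portrait = self.generate_portrait(inputs, features, interaction_results)
        return self.teach(portrait)

device = RobotTeachingDevice(
    collect=lambda: {"image": b"...", "audio": b"..."},
    analyze_emotion=lambda x: {"mood": "happy"},
    generate_portrait=lambda x, f, r: {"gender": "girl", "expression": "smile"},
    teach=lambda p: f"teaching with portrait {p}",
)
print(device.run_once(interaction_results=[]))
```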
Embodiment Four
Fig. 4 is a kind of structural schematic diagram for server that the embodiment of the present invention four provides.As shown in figure 4, the service utensil Body includes:One or more processors 410, in Fig. 4 by taking a processor 410 as an example;Memory 420, for store one or Multiple programs, when one or more programs are executed by one or more processors 410 so that one or more processors 410 are real Robot teaching's method described in existing any embodiment of the present invention.Processor 410 and memory 420 can by bus or other Mode connects, in Fig. 4 for being connected by bus.
Memory 420 can be used for storing software program, computer executable as a kind of computer readable storage medium Program and module, if the corresponding program instruction of robot teaching's method in the embodiment of the present invention is (for example, multi-modal input is believed The generation of the acquisition of breath and identification and the analysis of emotion and robot portrait).Processor 410 is stored in memory by operation Software program, instruction in 420 and module, the various function application to execute server and data processing, that is, realize Above-mentioned robot teaching's method.
Memory 420 can include mainly storing program area and storage data field, wherein storing program area can store operation system Application program needed for system, at least one function;Storage data field can be stored uses created data etc. according to server. Can also include nonvolatile memory in addition, memory 420 may include high-speed random access memory, for example, at least one A disk memory, flush memory device or other non-volatile solid state memory parts.In some instances, memory 420 can be into One step includes the memory remotely located relative to processor 410, these remote memories can pass through network connection to servicing Device.The example of above-mentioned network includes but not limited to internet, intranet, LAN, mobile radio communication and combinations thereof.
Embodiment Five
Embodiment Five of the present invention also provides a computer-readable storage medium on which a computer program (or computer-executable instructions) is stored. When executed by a processor, the program performs a robot teaching method, and the method includes:
collecting multi-modal input information of a current scene;
analyzing the multi-modal input information using an emotion engine to determine the emotional features of a target user;
generating the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results;
performing adaptive teaching for the target user according to the dedicated robot portrait.
Of course, in the computer-readable storage medium provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above; they can also perform related operations in the robot teaching method provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disk, and includes several instructions to make a computer device (which may be a personal computer, a server or a network device, etc.) execute the methods described in the embodiments of the present invention. It is worth noting that, in the above embodiment of the apparatus, the included units and modules are divided only according to function logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention is described in further detail through the above embodiments, the present invention is not limited to the above embodiments; without departing from the inventive concept, it may also include other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A robot teaching method, characterized by including:
collecting multi-modal input information of a current scene;
analyzing the multi-modal input information using an emotion engine to determine the emotional features of a target user;
generating the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results;
performing adaptive teaching for the target user according to the dedicated robot portrait.
2. The method according to claim 1, characterized in that, after the collecting of the multi-modal input information of the current scene, the method includes:
recognizing each single-modality input information in the multi-modal input information separately;
analyzing the current scene according to the recognition result of each single-modality input information;
if the target user appears in the current scene, actively starting adaptive teaching and guiding the target user to learn.
3. The method according to claim 1, characterized in that the analyzing of the multi-modal input information using an emotion engine to determine the emotional features of the target user includes:
obtaining the target user's image information and voice information according to the multi-modal input information;
analyzing the target user's image information and voice information using the emotion engine to determine the target user's emotional features, where the emotional features include but are not limited to the target user's mood, cognitive state and facial actions.
4. The method according to claim 1, characterized in that, before the collecting of the multi-modal input information of the current scene, the method includes:
entering the information of the target user and the information of the target user's family members;
initializing the target user's dedicated robot portrait according to the target user's information.
5. The method according to claim 1 or 4, characterized in that the generating of the target user's dedicated robot portrait according to the multi-modal input information, the emotional features and the target user's interaction results includes:
obtaining the target user's preference features and current emotional features according to the multi-modal input information, the emotional features and the target user's interaction results;
refining or updating the target user's dedicated robot portrait according to the preference features and current emotional features.
6. The method according to claim 1, characterized in that the performing of adaptive teaching for the user according to the dedicated robot portrait includes:
determining, according to the multi-modal input information, the scene content in the current scene that can be used for teaching;
generating presentation-type multi-modal output data for the scene content;
outputting the presentation-type multi-modal output data according to the dedicated robot portrait, and providing interactive teaching to the target user.
7. The method according to claim 1, characterized in that the performing of adaptive teaching for the user according to the dedicated robot portrait includes:
extracting the multi-modal input information from when the target user is learning, according to the multi-modal input information, the target user's emotional features and the target user's interaction results;
determining the target user's learning state and the learning time actually spent learning, according to the target user's emotional features and interaction results corresponding to the multi-modal input information during learning;
establishing the correspondence between the target user's learning content, learning state and learning time;
generating formalized multi-modal output data for the correspondence, outputting the formalized multi-modal output data according to the dedicated robot portrait, and providing multi-mode teaching to the target user.
8. The method according to claim 1 or 7, characterized in that the performing of adaptive teaching for the user according to the dedicated robot portrait further includes:
assessing the target user's learning situation according to the multi-modal input information, the target user's emotional features, the target user's interaction results and the correspondence, and generating an assessment report;
dynamically adjusting the teaching content and teaching difficulty for the target user according to the assessment report.
9. A robot teaching device, characterized by comprising:
an information acquisition module, configured to acquire multi-modal input information of a current scene;
a sentiment analysis module, configured to analyze the multi-modal input information using an emotion engine to determine the affective characteristics of a target user;
a portrait generation module, configured to generate an exclusive robot portrait of the target user according to the multi-modal input information, the affective characteristics and the interaction results of the target user;
an adaptive teaching module, configured to carry out adaptive teaching on the target user according to the exclusive robot portrait.
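The four modules of claim 9 map naturally onto a simple composition of the helpers sketched above. The RobotTeachingDevice class below is only a structural illustration of those module boundaries (reusing the hypothetical EmotionEngine, update_portrait and teach_from_scene sketches), not an implementation of the claimed device.

```python
# Structural sketch of the claim-9 device: one method or field per claimed
# module boundary; sensor capture is stubbed out.
class RobotTeachingDevice:
    def __init__(self, emotion_engine):
        self.emotion_engine = emotion_engine  # backs the sentiment analysis module

    def acquire(self):
        # information acquisition module: camera frames + microphone clip
        return [], b""  # placeholder capture

    def run_cycle(self, portrait):
        frames, clip = self.acquire()
        affect = self.emotion_engine.analyze(frames, clip)    # sentiment analysis module
        portrait = update_portrait(portrait, {}, affect, [])  # portrait generation module
        return teach_from_scene(portrait,                     # adaptive teaching module
                                {"detected_objects": ["apple"]})
```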
10. A server, characterized by comprising:
one or more processors; and
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the robot teaching method according to any one of claims 1 to 8.
11. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the robot teaching method according to any one of claims 1 to 8.
CN201810230256.XA 2018-03-20 2018-03-20 Robot teaching method, apparatus, server and storage medium Pending CN108537321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810230256.XA CN108537321A (en) 2018-03-20 2018-03-20 Robot teaching method, apparatus, server and storage medium

Publications (1)

Publication Number Publication Date
CN108537321A true CN108537321A (en) 2018-09-14

Family

ID=63484195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810230256.XA Pending CN108537321A (en) 2018-03-20 2018-03-20 Robot teaching method, apparatus, server and storage medium

Country Status (1)

Country Link
CN (1) CN108537321A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258450A (en) * 2013-03-22 2013-08-21 华中师范大学 Intelligent learning platform for children with autism
US20160303737A1 (en) * 2015-04-15 2016-10-20 Abb Technology Ltd. Method and apparatus for robot path teaching
CN106462384A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Multi-modality-based intelligent robot interaction method and intelligent robot
CN106228982A (en) * 2016-07-27 2016-12-14 华南理工大学 Interactive learning system and interaction method based on an educational service robot
CN106919251A (en) * 2017-01-09 2017-07-04 重庆邮电大学 Natural interaction method for collaborative virtual learning environments based on multi-modal emotion recognition
CN106886162A (en) * 2017-01-13 2017-06-23 深圳前海勇艺达机器人有限公司 Smart home management method and robot device therefor
CN106844675A (en) * 2017-01-24 2017-06-13 北京光年无限科技有限公司 Multi-modal output method for a children's robot, and children's robot
CN107689020A (en) * 2017-09-11 2018-02-13 深圳市鼎盛智能科技有限公司 Data processing method and system for an intelligent robot
CN107765852A (en) * 2017-10-11 2018-03-06 北京光年无限科技有限公司 Multi-modal interaction processing method and system based on a virtual human

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU, Lifang et al.: "Research on Intelligent Education and Educational Intelligence Technology" (智能教育与教育智能化技术研究), Education Modernization (《教育现代化》) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166365A (en) * 2018-09-21 2019-01-08 深圳市科迈爱康科技有限公司 Method and system for multi-eye robot language teaching
CN109524085A (en) * 2018-10-25 2019-03-26 广州市和声信息技术有限公司 Interaction-based cognitive analysis method and system
CN109754810A (en) * 2019-02-21 2019-05-14 珠海格力电器股份有限公司 Voice control method and device, storage medium, and air conditioner
CN109918409A (en) * 2019-03-04 2019-06-21 珠海格力电器股份有限公司 Device portrait construction method and apparatus, storage medium, and device
CN109949619A (en) * 2019-04-19 2019-06-28 安徽智训机器人技术有限公司 Self-learning household teaching robot system
CN110120170A (en) * 2019-04-19 2019-08-13 安徽智训机器人技术有限公司 Educational robot with emotion settings
CN110111159A (en) * 2019-05-15 2019-08-09 邵美琪 Questionnaire service system for small and medium-sized enterprises
CN110825824A (en) * 2019-10-16 2020-02-21 天津大学 User relationship portrait method based on semantic visual/non-visual user personality expression
CN110992222A (en) * 2019-11-05 2020-04-10 深圳追一科技有限公司 Teaching interaction method and device, terminal equipment and storage medium
CN111695777A (en) * 2020-05-11 2020-09-22 深圳追一科技有限公司 Teaching method, teaching device, electronic device and storage medium
CN111785109A (en) * 2020-07-07 2020-10-16 上海茂声智能科技有限公司 Medical robot answering method, device, system, equipment and storage medium
CN111785109B (en) * 2020-07-07 2022-07-12 上海茂声智能科技有限公司 Medical robot answering method, device, system, equipment and storage medium
CN111966221A (en) * 2020-08-10 2020-11-20 广州汽车集团股份有限公司 In-vehicle interaction processing method and device
CN111966221B (en) * 2020-08-10 2024-04-26 广州汽车集团股份有限公司 In-vehicle interaction processing method and device
CN112733994A (en) * 2020-12-10 2021-04-30 中国科学院深圳先进技术研究院 Autonomous emotion generation method and system for robots, and applications thereof
JP2023021878A (en) * 2021-08-02 2023-02-14 ベアー ロボティックス,インコーポレイテッド Method, system, and non-transitory computer-readable recording medium for controlling serving robot
JP7382991B2 (en) 2021-08-02 2023-11-17 ベアー ロボティックス,インコーポレイテッド Method, system and non-transitory computer-readable recording medium for controlling a serving robot
CN113658467A (en) * 2021-08-11 2021-11-16 岳阳天赋文化旅游有限公司 Interactive system and method for optimizing user behavior

Similar Documents

Publication Publication Date Title
CN108537321A (en) Robot teaching method, apparatus, server and storage medium
Hortensius et al. The perception of emotion in artificial agents
McDuff et al. Designing emotionally sentient agents
Wang et al. Examining the use of nonverbal communication in virtual agents
Sarrafzadeh et al. “How do you know that I don’t understand?” A look at the future of intelligent tutoring systems
Vinciarelli et al. A survey of personality computing
Amirova et al. 10 years of human-nao interaction research: A scoping review
KR101604593B1 (en) Method for modifying a representation based upon a user instruction
Beck et al. Interpretation of emotional body language displayed by a humanoid robot: A case study with children
US20150072322A1 (en) Situated simulation for training, education, and therapy
US20170169715A1 (en) User state model adaptation through machine driven labeling
Hu et al. Storytelling agents with personality and adaptivity
Imbernon Cuadrado et al. ARTIE: An integrated environment for the development of affective robot tutors
Maroto-Gómez et al. Active learning based on computer vision and human–robot interaction for the user profiling and behavior personalization of an autonomous social robot
Chen et al. Dyadic affect in parent-child multimodal interaction: Introducing the dami-p2c dataset and its preliminary analysis
KR20160051020A (en) User-interaction toy and interaction method of the toy
Kenny et al. Embodied conversational virtual patients
Ince et al. An audiovisual interface-based drumming system for multimodal human–robot interaction
Rach et al. Emotion recognition based preference modelling in argumentative dialogue systems
Yu Robot behavior generation and human behavior understanding in natural human-robot interaction
Reed et al. Negotiating Experience and Communicating Information Through Abstract Metaphor
Robben et al. It’s nao or never! facilitate bonding between a child and a social robot: Exploring the possibility of a robot adaptive to personality
Johal Companion Robots Behaving with Style: Towards Plasticity in Social Human-Robot Interaction
Ritschel Real-time generation and adaptation of social companion robot behaviors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 508-598, Xitian Gezhuang Town Government Office Building, No. 8 Xitong Road, Miyun District Economic Development Zone, Beijing 101500

Applicant after: BEIJING ROOBO TECHNOLOGY Co.,Ltd.

Address before: Room 508-598, Xitian Gezhuang Town Government Office Building, No. 8 Xitong Road, Miyun District Economic Development Zone, Beijing 101500

Applicant before: BEIJING INTELLIGENT STEWARD Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210824

Address after: Room 301-112, floor 3, building 2, No. 18, YANGFANGDIAN Road, Haidian District, Beijing 100089

Applicant after: Beijing Rubu Technology Co.,Ltd.

Address before: Room 508-598, Xitian Gezhuang Town Government Office Building, No. 8 Xitong Road, Miyun District Economic Development Zone, Beijing 101500

Applicant before: BEIJING ROOBO TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180914
