CN106541408B - Child behavior guidance method and system based on an intelligent robot - Google Patents
- Publication number
- CN106541408B (application CN201610887338.2A)
- Authority
- CN
- China
- Prior art keywords
- perception
- result
- setting
- decision
- child behavior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
The invention discloses a child behavior guidance method and system based on an intelligent robot. The guidance method includes: obtaining multi-modal input information from a parent and a child, and parsing the multi-modal input information; judging, according to the parsing result, whether the current child behavior needs to be guided, and generating a behavioral-guidance decision when guidance is judged to be needed; and outputting a multi-modal expression based on the decision result. The guidance method achieves a good communication effect and addresses the present difficulty of interacting with children that arises when parents lack child-care experience.
Description
Technical field
The invention belongs to the field of intelligent robotics, and in particular relates to a child behavior guidance method and system based on an intelligent robot.
Background art
With the development of intelligent robot technology, the application of intelligent robots has gradually penetrated every aspect of people's lives. From aided education and health care to public services, intelligent robots can be seen busily at work. Meanwhile, people's demands on intelligent robots keep growing: users are no longer content with mere execution of instructions, and increasingly hope that the robot's own capabilities can offer them suggestions and help. It is therefore necessary to continuously improve the interaction capabilities of intelligent robots in order to meet users' interaction demands.
Summary of the invention
The first technical problem to be solved by the present invention is the need to provide an intelligent robot that meets users' interaction demands.
In order to solve the above technical problem, embodiments of the present application first provide a child behavior guidance method based on an intelligent robot, including: obtaining multi-modal input information from a parent and a child, and parsing the multi-modal input information; judging, according to the parsing result, whether the current child behavior needs to be guided, and generating a behavioral-guidance decision when guidance is judged to be needed; and outputting a multi-modal expression based on the decision result.
Preferably, parsing the multi-modal input information includes performing crowd perception, intention perception and scene perception in combination with the multi-modal input information.
Preferably, judging according to the parsing result whether the current child behavior needs to be guided includes: scoring the degree of need for guiding the current child behavior according to the results of intention perception and scene perception; when the score is higher than a set score threshold, judging that the current child behavior needs to be guided; and when the score is lower than or equal to the set score threshold, judging that the current child behavior does not need to be guided.
Preferably, scoring the degree of need for guiding the current child behavior according to the results of intention perception and scene perception includes: extracting set vocabulary and/or set postures from the results of intention perception and scene perception; and obtaining the score of the degree of need for guiding the current child behavior by a weighted sum of the scores corresponding to the extracted set vocabulary and/or set postures.
Preferably, outputting a multi-modal expression based on the decision result includes speech output, action output and expression output.
Embodiments of the present application further provide a child behavior guidance system based on an intelligent robot, including: a parsing module, which obtains multi-modal input information from a parent and a child and parses the multi-modal input information; a decision module, which judges, according to the parsing result, whether the current child behavior needs to be guided and generates a behavioral-guidance decision when guidance is judged to be needed; and an output module, which outputs a multi-modal expression based on the decision result.
Preferably, the parsing module performs crowd perception, intention perception and scene perception in combination with the multi-modal input information.
Preferably, the decision module scores the degree of need for guiding the current child behavior according to the results of intention perception and scene perception; when the score is higher than a set score threshold, it judges that the current child behavior needs to be guided; when the score is lower than or equal to the set score threshold, it judges that the current child behavior does not need to be guided.
Preferably, when scoring the degree of need for guiding the current child behavior according to the results of intention perception and scene perception, the decision module extracts set vocabulary and/or set postures from the results of intention perception and scene perception, and obtains the score of the degree of need for guiding the current child behavior by a weighted sum of the scores corresponding to the extracted set vocabulary and/or set postures.
Preferably, the multi-modal expression output by the output module includes speech output, action output and expression output.
Compared with the prior art, one or more of the above embodiments can have the following advantages or beneficial effects: through the multi-modal input capability of the intelligent robot, information on the crowd, the scene, postures and other aspects of the parent-child interaction scenario is obtained; a comprehensive judgment and decision is made on the various kinds of perception information; and the intelligent robot then outputs a multi-modal expression by means of speech, action, expression and other modalities according to the judgment and decision result. Behavioral guidance of the child is thereby achieved through the intelligent robot, solving the problem that a parent who lacks experience cannot guide the child's behavior correctly, which affects the child's physical and mental health.
Other advantages, objects and features of the present invention will be set forth to some extent in the following description and, to some extent, will be apparent to those skilled in the art upon examination of what follows, or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and attained by the structure particularly pointed out in the following specification, claims and accompanying drawings.
Description of the drawings
The accompanying drawings are provided for a further understanding of the technical solution of the application or of the prior art, and constitute a part of the specification. The drawings of the embodiments of the application, together with the embodiments themselves, serve to explain the technical solution of the application, but do not constitute a limitation of it.
Fig. 1 is a flow diagram of the child behavior guidance method based on an intelligent robot according to the first embodiment of the invention;
Fig. 2 is a flow diagram of the child behavior guidance method based on an intelligent robot according to the second embodiment of the invention;
Fig. 3 is a structural diagram of the child behavior guidance system based on an intelligent robot according to the third embodiment of the invention.
Detailed description of the embodiments
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings and examples, so that how the invention applies technical means to solve technical problems and achieve the relevant technical effects can be fully understood and implemented. The features of the embodiments of the present application may be combined with one another provided they do not conflict, and the resulting technical solutions all fall within the protection scope of the present invention.
Because children are unfamiliar with everyday scenes and common sense, they often need appropriate behavioral guidance. However, most parents have not been trained in emotion management and correct communication, and consciously or unconsciously resort to threats or bribes. These approaches are all incorrect and affect the child's physical and mental health.
For example, when a parent takes the child to kindergarten, and especially at the moment of separation between parent and child, the child is prone to reluctance and anxiety.
The present invention proposes a solution to the above problem.
First embodiment:
Fig. 1 is a flow diagram of the child behavior guidance method based on an intelligent robot according to the first embodiment of the invention. As shown, the method includes the following steps:
Step S110: obtain multi-modal input information from the parent and the child, and parse the multi-modal input information.
Step S120: judge, according to the parsing result, whether the current child behavior needs to be guided, and generate a behavioral-guidance decision when guidance is judged to be needed.
Step S130: output a multi-modal expression based on the decision result.
Specifically, in step S110, the intelligent robot receives multi-modal input information from both parties in the parent-child interaction scenario. The multi-modal input information may be speech input, image input, information collected by sensors, and so on. This embodiment does not limit the manner in which the multi-modal information is received or the form the information takes.
The intelligent robot parses the received multi-modal input information, which includes performing crowd perception, intention perception and scene perception in combination with the various kinds of multi-modal input.
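The parsing step just described can be sketched in code. This is a minimal illustration only: the patent does not specify data structures or algorithms, so every field name and the naive keyword heuristic below are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PerceptionResult:
    """Aggregated result of parsing one round of multi-modal input."""
    crowd: dict = field(default_factory=dict)      # who is present (age, mood)
    intention: dict = field(default_factory=dict)  # what each party wants, from speech
    scene: dict = field(default_factory=dict)      # time, place, and similar context

def parse_multimodal(speech: dict, image: dict, sensors: dict) -> PerceptionResult:
    """Combine the three perception channels named in the text."""
    result = PerceptionResult()
    # Crowd perception: identify the participants from the image channel.
    result.crowd = {p["role"]: {"age": p["age"], "mood": p["mood"]}
                    for p in image.get("persons", [])}
    # Intention perception: toy keyword spotting on each utterance.
    for role, utterance in speech.items():
        result.intention[role] = "refuse" if "don't want" in utterance else "request"
    # Scene perception: pass through sensor readings.
    result.scene = {"time": sensors.get("time"), "place": sensors.get("place")}
    return result
```

A real system would replace the keyword test with a speech-analysis module, but the shape of the output, three perception results derived from one round of input, matches the flow described above.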
After obtaining the parsing result, in step S120 the intelligent robot judges according to the parsing result whether the current child behavior needs to be guided.
In this embodiment, crowd perception refers to determining the person information of the current scene, so as to judge whether the situation falls within the scope of child behavioral guidance. The crowd perception process can be realized by the intelligent robot parsing multi-modal input such as image information.
For example, the intelligent robot first captures the current scene with a camera; it may take static images or obtain dynamic video. This embodiment does not limit this, which is determined by the functional configuration of the intelligent robot's vision module.
By analyzing the image or video information, the intelligent robot can determine the person information of the current scene, for example the gender and age of each person, and thereby establish the gender and age of the parent and of the child, so as to make a preliminary positioning of the current parent-child interaction scenario.
Further, by analyzing the image or video information, the intelligent robot can also judge the emotional state of each person in the current scene, for example whether the parent is irritable or angry, whether the child is listless or crying, or whether parent and child are each in one of the above states at the same time.
The intelligent robot determines from the emotional state of each person in the current scene whether the interaction between the parent and the child needs to be intervened in. For example, if the parent and the child are engaged in a calm, relaxed interaction in the current scene, the current interaction does not need to be intervened in. If, in the current scene, either the parent or the child shows strong or unfriendly, uncooperative emotions, the current interaction needs to be intervened in.
In this embodiment, intention perception refers to judging the intention of each person in the current interaction scenario from the user's speech, and then determining from those intentions whether the current interaction needs to be intervened in. This is because the emotional states of the parent and/or child learned from image information alone may be insufficient for a judgment, or may lead to a wrong judgment.
In embodiments of the present invention, intention perception is therefore added to assist the judgment. The intelligent robot first collects and stores the user's speech input, then calls its internal speech analysis module to analyze the potential user intention revealed in the speech. For example, by analyzing the parent's speech input "Speak nicely!", it can be learned that the parent is currently handling the child's uncooperative behavior in a rather coercive or repressive way, which is likely to escalate the current exchange between parent and child into a quarrel.
If the actions of the parent and the child in the above interaction scenario do not appear violent, image recognition alone cannot yield a correct judgment. After obtaining the parent's intention through intention perception, the intelligent robot therefore combines the information from crowd perception and intention perception to make a comprehensive judgment.
Combining crowd perception with intention perception further improves the accuracy with which the intelligent robot judges the actual situation of the current interaction scenario.
In this embodiment, scene perception refers to obtaining characteristic information of the current environment using the image information from the camera or using various sensors. The scene information obtained may include time information, spatial information, temperature information and so on. The intelligent robot identifies the current interaction scenario in combination with the above scene information, and then judges whether the interaction between the parent and the child needs to be intervened in.
A concrete example is given below of how the results of crowd perception, intention perception and scene perception are combined to judge whether the interaction between the parent and the child needs to be intervened in.
In example 1, the parent and the child are in the following interaction scenario: the child is unwilling to go to kindergarten. He may express his wish through language, "No, I don't want to go to kindergarten", while also expressing it through actions, for example constantly rocking his body or refusing to face the parent. The parent, meanwhile, intends to send the child to kindergarten, saying to the child "Go to kindergarten", but is helpless in the face of the child's resistance and does not know how to persuade him; the interaction between parent and child falls into a communication predicament.
The intelligent robot first obtains image and video information of the current interaction scenario through the camera, and performs crowd perception by means such as image recognition, thereby determining that the current interaction scenario is one between a parent and a child.
Further, the intelligent robot combines crowd perception with intention perception: for example, it recognizes from "Go to kindergarten" that the parent's intention is for the child to go to kindergarten, and from "No, I don't want to go to kindergarten" that the child's intention is not to go. The two intentions are opposed and may lead to an interaction conflict.
The intelligent robot can also make a comprehensive judgment in combination with the emotions of the persons obtained through crowd perception. For example, from the child constantly rocking his body or refusing to face the parent, and from emotional cues such as the parent standing with both hands on hips, it judges that communication between parent and child is not going smoothly and that the parent does not seem to know how to persuade the child to go to kindergarten.
Taking into account the various information from crowd perception and intention perception, the intelligent robot judges that the interaction between the parent and the child needs to be intervened in.
Further, suppose the current interaction scenario is at night and the place is a bedroom; that is, the parent is actually talking with the child about something imminent the next day. Considering that the child may soon go to sleep and that the question of going to kindergarten can be discussed again the next morning, and provided some other conditions also hold, for example the intelligent robot has determined that the conflict between parent and child is not very fierce, the intelligent robot will combine the above scene information and judge that the interaction between the parent and the child does not need to be intervened in for now, judging again the next day.
Of course, if the current interaction scenario is in the living room at 7 in the morning, then even if the intelligent robot has determined that the conflict between parent and child is not very fierce, the robot is still likely to combine the current scene information and judge that the interaction between the parent and the child needs to be intervened in.
As the above examples show, the intelligent robot uses the scene information obtained by scene perception to assist its judgment, which helps improve the accuracy of the judgment and the quality of service.
The intelligent robot performs scene perception mainly by obtaining spatial information such as location and temperature through the camera or various sensors, while time information can be obtained from the robot's own timekeeping system or over the network. This embodiment does not limit the specific means by which the intelligent robot performs scene perception.
In this embodiment, the intelligent robot obtains multi-modal information and makes a comprehensive judgment by combining the various kinds of multi-modal information, which helps improve the accuracy of its judgment on whether the current child behavior needs to be guided.
Next, in step S120, when it judges that the child behavior needs to be guided, the intelligent robot also generates a specific decision. In step S130, the intelligent robot outputs a multi-modal expression according to the specific decision.
For example, in the same scenario as above, the decision the intelligent robot makes in order to persuade the child to go to kindergarten is to use guiding language, for example changing the subject, and to communicate with the child using friendly, appropriate gestures and expressions.
Then, the decision result of the intelligent robot may include saying to the child, "Sweetie, Daddy loves you very much, and Mummy loves you too, so we want you to go to kindergarten", and then changing the subject: "This afternoon, would you like Daddy or Mummy to pick you up?". While outputting speech, the intelligent robot can accompany it with gestures, body movements and facial expressions; for example, it performs an action output, reaching out to take the child's hand, encouraging him to adjust his mood.
The multi-modal output in this embodiment includes, but is not limited to, the intelligent robot's speech output, action output and expression output.
The judgment and decision of the intelligent robot can be completed by a cloud brain. The intelligent robot collects the various kinds of perception information and sends them to the cloud brain; the cloud brain can obtain the decision result by searching a knowledge base, or by means of a decision model. This embodiment does not limit this.
After obtaining the decision result, the cloud brain generates a multi-modal expression instruction according to the decision result and transmits the instruction to the intelligent robot. The intelligent robot performs the specific multi-modal output expression according to the multi-modal expression instruction.
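The division of labour between the robot and the cloud brain described above might be sketched as follows. The instruction format, the single hard-coded rule and the wording are all assumptions for illustration; the patent leaves the knowledge-base search and the decision model unspecified.

```python
def cloud_brain_decide(perceptions: dict) -> dict:
    """Stand-in for the cloud brain: turn perception data into a
    multi-modal expression instruction. A real system would search a
    knowledge base or query a decision model; one rule is hard-coded here."""
    if perceptions.get("needs_guidance"):
        return {
            "speech": "Sweetie, Mum and Dad love you very much.",
            "action": "reach_for_hand",
            "expression": "friendly_smile",
        }
    return {"speech": None, "action": "idle", "expression": "neutral"}

def robot_express(instruction: dict) -> list:
    """The robot executes each non-empty modality of the instruction."""
    return [m for m, v in instruction.items() if v and v != "idle"]
```

The point of the split is that the robot only collects perceptions and renders instructions, while all judgment logic lives on the cloud side and can be updated without touching the robot.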
In embodiments of the present invention, child behavioral guidance is carried out by means of the intelligent robot's perception, decision and guidance capabilities. Specifically, based on the multi-modal input capability of the intelligent robot, information on the crowd, the scene, postures and other aspects of the parent-child interaction scenario is obtained; a comprehensive judgment and decision is made on the various kinds of perception information; and the intelligent robot then outputs a multi-modal expression by means of speech, action, expression and other modalities according to the judgment and decision result. Behavioral guidance of the child is thereby achieved through the intelligent robot, solving the problem that a parent who lacks experience cannot guide the child's behavior correctly, which affects the child's physical and mental health.
Second embodiment:
Fig. 2 is a flow diagram of the child behavior guidance method based on an intelligent robot according to the second embodiment of the invention. As shown, the method includes the following steps:
Step S210: obtain multi-modal input information from the parent and the child, and parse the multi-modal input information.
Step S220: score the degree of need for guiding the current child behavior according to the results of intention perception and scene perception.
Step S230: when the score is lower than or equal to the set score threshold, judge that the current child behavior does not need to be guided.
Step S240: when the score is higher than the set score threshold, judge that the current child behavior needs to be guided.
Step S250: generate a behavioral-guidance decision when guidance is judged to be needed.
Step S260: output a multi-modal expression based on the decision result.
Specifically, in step S210, the intelligent robot receives multi-modal input information from both parties in the parent-child interaction scenario. The multi-modal input information may be speech input, image input, information collected by sensors, and so on.
The intelligent robot parses the received multi-modal input information, which includes performing crowd perception, intention perception and scene perception in combination with the various kinds of multi-modal input.
This step performs the same operation as step S110 and is not described again here.
In step S220, scoring the degree of need for guiding the current child behavior according to the results of intention perception and scene perception may specifically include:
Step S221: extract set vocabulary and/or set postures from the results of intention perception and scene perception.
Step S222: obtain the score of the degree of need for guiding the current child behavior by a weighted sum of the scores corresponding to the extracted set vocabulary and/or set postures.
For example, if in the current scene the parent says to the child "Behave, or I won't like you any more", this slightly threatening manner does not meet the requirements of correct communication. The intelligent robot may therefore define "won't like you" in advance as set vocabulary and preset a score corresponding to that vocabulary, for example 2 points. Then, when the intelligent robot extracts the phrase "won't like you" from the intention perception of the parent, it adds 2 to the score of the degree of need for guiding the current child behavior.
For another example, if in the current scene the parent gradually loses patience with the communication and angrily puts both hands on hips, this manner likewise does not meet the requirements of correct communication. The intelligent robot may therefore define the action "hands on hips" in advance as a set posture and preset a corresponding score, for example 4 points. Then, when the intelligent robot extracts the action "hands on hips" from the emotion sensing of the parent, it adds 4 to the score of the degree of need for guiding the current child behavior.
After the score of the degree of need for guiding the current child behavior is obtained, it can be compared with the set score threshold to decide whether to guide the child behavior. Specifically, the judgment is made in step S230 or step S240.
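The weighted-sum scoring of the two examples above can be sketched in a few lines. The 2-point and 4-point values come from the text; the threshold of 5 and the exact phrase and posture names are assumptions made for illustration.

```python
# Scores for pre-defined vocabulary and postures; the point values match
# the 2-point and 4-point examples in the text.
VOCAB_SCORES = {"won't like you": 2}
POSTURE_SCORES = {"hands_on_hips": 4}
SCORE_THRESHOLD = 5  # assumed set score threshold, not taken from the patent

def guidance_score(utterance: str, postures: list) -> int:
    """Weighted sum of the set vocabulary and set postures extracted
    from intention perception and scene perception."""
    score = sum(pts for phrase, pts in VOCAB_SCORES.items() if phrase in utterance)
    score += sum(POSTURE_SCORES.get(p, 0) for p in postures)
    return score

def needs_guidance(utterance: str, postures: list) -> bool:
    """Guide only when the score exceeds the set threshold (steps S230/S240)."""
    return guidance_score(utterance, postures) > SCORE_THRESHOLD
```

With these values, the threatening phrase alone (2 points) or the posture alone (4 points) stays below the threshold, while the two together (6 points) triggers guidance; tuning the per-item scores is what the text calls setting different scores according to communication requirements.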
Finally, in steps S250 and S260, the cloud brain generates the behavioral-guidance decision, and the intelligent robot then outputs a multi-modal expression according to the decision result. The operation is the same as in the first embodiment and is not described again here.
In this embodiment, by predefining set vocabulary and/or set postures and establishing corresponding scores, the intelligent robot can conveniently measure the degree of need for guiding the current child behavior. Setting different scores according to communication requirements helps produce a more reasonable result and improves the intelligent robot's judgment on whether the child behavior needs to be guided.
Third embodiment:
Fig. 3 is a structural diagram of the child behavior guidance system based on an intelligent robot according to the third embodiment of the invention. As shown, the system includes:
a parsing module 31, which obtains multi-modal input information from the parent and the child and parses the multi-modal input information;
a decision module 32, which judges according to the parsing result whether the current child behavior needs to be guided and generates a behavioral-guidance decision when guidance is judged to be needed;
an output module 33, which outputs a multi-modal expression based on the decision result.
Further, the decision module 32 can also be divided into an extraction unit 321 and a scoring unit 322, wherein:
the extraction unit 321 extracts set vocabulary and/or set postures from the results of intention perception and scene perception;
the scoring unit 322 obtains the score of the degree of need for guiding the current child behavior by a weighted sum of the scores corresponding to the extracted set vocabulary and/or set postures.
The decision module 32 judges according to the scoring result:
when the score is higher than the set score threshold, it judges that the current child behavior needs to be guided;
when the score is lower than or equal to the set score threshold, it judges that the current child behavior does not need to be guided.
The concrete operation of each of the above functional modules can be obtained by referring to the corresponding method steps of the first and second embodiments, and is not described again here.
The child behavior guidance system based on an intelligent robot of this embodiment of the invention obtains multi-modal information through the intelligent robot and makes a comprehensive judgment by combining the various kinds of multi-modal information, which helps improve the accuracy of the intelligent robot's judgment on whether the current child behavior needs to be guided.
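The module decomposition of this embodiment might be wired together as in the following sketch. The class names mirror the modules of Fig. 3; everything else (the threshold, the toy score pass-through) is a placeholder assumption rather than the patent's implementation.

```python
class ParsingModule:
    """Module 31: in a full system this would run crowd, intention and
    scene perception; here it simply forwards a precomputed score."""
    def parse(self, raw: dict) -> dict:
        return {"score": raw.get("score", 0)}

class DecisionModule:
    """Module 32: compares the score against a set threshold."""
    THRESHOLD = 5  # assumed set score threshold

    def decide(self, parsed: dict):
        if parsed["score"] > self.THRESHOLD:
            return {"speech": "guide", "action": "guide", "expression": "guide"}
        return None  # no guidance needed

class OutputModule:
    """Module 33: renders the decision as a multi-modal expression."""
    def express(self, decision) -> str:
        return "multi-modal output" if decision else "no action"

class GuidanceSystem:
    """Wires the three modules of the third embodiment together."""
    def __init__(self):
        self.parser = ParsingModule()
        self.decider = DecisionModule()
        self.out = OutputModule()

    def run(self, raw: dict) -> str:
        return self.out.express(self.decider.decide(self.parser.parse(raw)))
```

Keeping parsing, decision and output in separate modules matches the system claim's structure and lets the scoring logic (extraction unit 321 and scoring unit 322) be swapped without changing the input or output sides.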
Although the embodiments disclosed herein are as described above, their content is merely an implementation adopted to facilitate understanding of the present invention and does not limit it. Any person skilled in the art to which this invention pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the invention, but the patent protection scope of the invention shall still be subject to the scope defined by the appended claims.
Claims (8)
1. A child behavior guidance method based on an intelligent robot, comprising:
obtaining multi-modal input information from a parent and a child, and parsing the multi-modal input information;
judging, according to the parsing result, whether the current child behavior needs to be guided, and generating a behavioral-guidance decision when guidance is judged to be needed;
outputting a multi-modal expression based on the decision result;
wherein the parsing result includes intention perception and scene perception, and judging according to the parsing result whether the current child behavior needs to be guided includes:
scoring the degree of need for guiding the current child behavior according to the results of intention perception and scene perception;
when the score is higher than a set score threshold, judging that the current child behavior needs to be guided; and
when the score is lower than or equal to the set score threshold, judging that the current child behavior does not need to be guided.
2. The method according to claim 1, wherein the parsing of the multi-modal input information further comprises performing crowd perception in combination with the multi-modal input information.
3. The method according to claim 1 or 2, wherein the scoring the degree of need to guide the current child behavior according to the results of the intention perception and the scene perception comprises:
extracting set vocabulary and/or set graphics from the results of the intention perception and the scene perception;
performing a weighted sum of the scores corresponding to the extracted set vocabulary and/or set graphics to obtain the score for the degree of need to guide the current child behavior.
4. The method according to claim 1 or 2, wherein the outputting a multi-modal expression based on the decision result comprises voice output, action output, and expression output.
5. A child behavior guidance system based on an intelligent robot, comprising:
a parsing module, which obtains multi-modal input information of a parent and a child and parses the multi-modal input information;
a decision-making module, which judges, according to a result of the parsing, whether the current child behavior needs to be guided, and generates a behavior guidance decision when it is judged that guidance is needed;
an output module, which outputs a multi-modal expression based on the decision result;
wherein the result of the parsing comprises intention perception and scene perception, and the decision-making module scores the degree of need to guide the current child behavior according to the results of the intention perception and the scene perception;
when the score is higher than a set score threshold, it is judged that the current child behavior needs to be guided;
when the score is less than or equal to the set score threshold, it is judged that the current child behavior does not need to be guided.
6. The system according to claim 5, wherein the parsing module further performs crowd perception in combination with the multi-modal input information.
7. The system according to claim 5 or 6, wherein when the decision-making module scores the degree of need to guide the current child behavior according to the results of the intention perception and the scene perception, the scoring comprises:
extracting set vocabulary and/or set graphics from the results of the intention perception and the scene perception;
performing a weighted sum of the scores corresponding to the extracted set vocabulary and/or set graphics to obtain the score for the degree of need to guide the current child behavior.
8. The system according to claim 5 or 6, wherein the multi-modal expression output by the output module comprises voice output, action output, and expression output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610887338.2A CN106541408B (en) | 2016-10-11 | 2016-10-11 | Child behavior bootstrap technique based on intelligent robot and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106541408A CN106541408A (en) | 2017-03-29 |
CN106541408B true CN106541408B (en) | 2018-10-12 |
Family
ID=58368498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610887338.2A Active CN106541408B (en) | 2016-10-11 | 2016-10-11 | Child behavior bootstrap technique based on intelligent robot and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106541408B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109614955A (en) * | 2018-12-29 | 2019-04-12 | 苏州科技大学 | A kind of child growth acquired behavior bootstrap technique based on home intelligent robot |
CN110517690A (en) * | 2019-08-30 | 2019-11-29 | 四川长虹电器股份有限公司 | The bootstrap technique and system of voice control function |
CN110948496A (en) * | 2019-10-12 | 2020-04-03 | 安徽奇智科技有限公司 | Child behavior guiding method and system, electronic equipment and robot |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101083703B1 (en) * | 2009-09-08 | 2011-11-16 | 주식회사 유진로봇 | Education System Capable of Interaction Between Mobile Robot and Students |
CH709251B1 (en) * | 2014-02-01 | 2018-02-28 | Gostanian Nadler Sandrine | System for telepresence. |
CN105082150B (en) * | 2015-08-25 | 2017-04-05 | 国家康复辅具研究中心 | A kind of robot man-machine interaction method based on user emotion and intention assessment |
CN205184786U (en) * | 2015-12-01 | 2016-04-27 | 南通唐人文化传播有限公司 | Children intelligent robot that grows up |
CN105598972B (en) * | 2016-02-04 | 2017-08-08 | 北京光年无限科技有限公司 | A kind of robot system and exchange method |
CN105868827B (en) * | 2016-03-25 | 2019-01-22 | 北京光年无限科技有限公司 | A kind of multi-modal exchange method of intelligent robot and intelligent robot |
- 2016-10-11: CN application CN201610887338.2A filed in China; patent CN106541408B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107030691B (en) | Data processing method and device for nursing robot | |
CN106096717B (en) | Information processing method towards intelligent robot and system | |
CN104985599B (en) | Study of Intelligent Robot Control method, system and intelligent robot based on artificial intelligence | |
CN102354349B (en) | Human-machine interaction multi-mode early intervention system for improving social interaction capacity of autistic children | |
US8903176B2 (en) | Systems and methods using observed emotional data | |
AU2007342471B2 (en) | Situated simulation for training, education, and therapy | |
CN107765852A (en) | Multi-modal interaction processing method and system based on visual human | |
Kupferberg et al. | Biological movement increases acceptance of humanoid robots as human partners in motor interaction | |
Yang et al. | AI-enabled emotion-aware robot: The fusion of smart clothing, edge clouds and robotics | |
CN106541408B (en) | Child behavior bootstrap technique based on intelligent robot and system | |
Broekens | Emotion and reinforcement: affective facial expressions facilitate robot learning | |
WO2019207896A1 (en) | Information processing system, information processing method, and recording medium | |
CN106447042B (en) | Psychological analysis method and device based on drawing projection | |
US20160078366A1 (en) | Computer system of an artificial intelligence of a cyborg or an android, wherein a received signal-reaction of the computer system of the artificial intelligence of the cyborg or the android, a corresponding association of the computer system of the artificial intelligence of the cyborg or the android, a corresponding thought of the computer system of the artificial intelligence of the cyborg or the android are physically built, and a working method of the computer system of the artificial intelligence of the artificial intelligence of the cyborg or the android | |
CN108899081B (en) | Man-machine interaction system for assisted rehabilitation of autism | |
CN106503043A (en) | A kind of interaction data processing method for intelligent robot | |
CN107943276A (en) | Based on the human body behavioral value of big data platform and early warning | |
Tanevska et al. | A cognitive architecture for socially adaptable robots | |
CN105945949A (en) | Information processing method and system for intelligent robot | |
CN106503786A (en) | Multi-modal exchange method and device for intelligent robot | |
Wang et al. | Research progress of artificial psychology and artificial emotion in China | |
CN113053492B (en) | Self-adaptive virtual reality intervention system and method based on user background and emotion | |
CN105005691A (en) | Social emotion accompanying system | |
KR20100087599A (en) | Method for responding to user emotion with multiple sensors | |
Sarne-Fleischmann et al. | Multimodal communication for guiding a person following robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||