CN106537425A - Method and system for generating robot interaction content, and robot - Google Patents
- Publication number
- CN106537425A (application CN201680001752.7A)
- Authority
- CN
- China
- Prior art keywords
- robot
- life
- user
- parameter
- time axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
The invention provides a method for generating robot interaction content, comprising the steps of: actively waking up a robot; acquiring multi-modal information of a user; determining the user's intention according to the multi-modal information; and generating robot interaction content according to the multi-modal information and the user's intention, in combination with the robot's current life timeline. By adding the robot's life timeline to the generation of interaction content, the method makes the robot more human-like when it interacts with people, giving it a human-style daily routine along its life timeline. The method improves the human-likeness of generated interaction content, enhances the human-computer interaction experience, and raises intelligence.
Description
Technical field
The present invention relates to the field of robot interaction technology, and in particular to a method and system for generating robot interaction content, and to a robot.
Background technology
During human-computer interaction, the human typically wakes the robot actively, and interaction begins only after the robot picks up sound. When humans converse with one another, by contrast, a person will usually initiate the dialogue actively on meeting someone, and will give a reasoned expressive response after hearing the other speaker and mentally analyzing that speaker's words and facial expression. Current robots, however, generally start interaction via sound pickup and then give expressive feedback. This makes robot interaction very passive and unintelligent, and leaves the following problem: a typical actively woken robot mainly serves to greet the user, with language and expressions pre-set for the user. In this mode the robot still outputs expressions according to an interaction pattern designed in advance by humans, so it does not behave in a human-like way; unlike a person, it cannot see who the other party is, analyze the other party's expression, actively address the other party, and respond with a corresponding expression of its own.

Therefore, how to make an actively woken robot automatically detect human facial expressions and generate machine expressions, improving the human-likeness of generated robot interaction content, is a technical problem that urgently needs to be solved in this field.
Summary of the invention
The object of the present invention is to provide a method, a system, and a robot for generating robot interaction content, so that an actively woken robot automatically detects human facial expressions and generates machine expressions, improving the human-likeness of generated interaction content, enhancing the human-computer interaction experience, and raising intelligence.
The purpose of the present invention is achieved through the following technical solutions:
A method for generating robot interaction content, comprising:
actively waking up a robot;
acquiring multi-modal information of a user;
determining a user intention according to the multi-modal information;
generating robot interaction content according to the multi-modal information and the user intention, in combination with a current robot life timeline.
Preferably, the step of actively waking up the robot comprises:
acquiring multi-modal information of the user;
matching the multi-modal information against preset wake-up parameters;
actively waking up the robot if the multi-modal information reaches the preset wake-up parameters.
Preferably, the method further comprises: acquiring and analyzing a voice signal;
and the step of generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline, further comprises:
generating robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
Preferably, the parameters of the robot life timeline are generated by:
extending the self-cognition of the robot;
acquiring parameters of the life timeline;
fitting the parameters of the robot's self-cognition to the parameters of the life timeline to generate the robot life timeline.
Preferably, the step of extending the self-cognition of the robot specifically comprises: combining living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline.
Preferably, the step of fitting the parameters of the robot's self-cognition to the parameters of the life timeline specifically comprises: using a probabilistic algorithm to calculate, for the robot on the life timeline, the probability that each parameter changes after a scene parameter on the time axis changes, thereby forming a fitted curve.
Preferably, the life timeline is a time axis covering the 24 hours of a day, and the parameters on the life timeline at least include the daily activities the user performs on that timeline and parameter values representing those activities.
The present invention also discloses a system for generating robot interaction content, comprising:
a light-sensing automatic detection module, for actively waking up the robot;
an expression analysis cloud processing module, for acquiring multi-modal information of a user;
an intent recognition module, for determining a user intention according to the multi-modal information;
a content generation module, for generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline.
Preferably, the light-sensing automatic detection module is specifically configured to:
acquire multi-modal information of the user;
match the multi-modal information against preset wake-up parameters;
actively wake up the robot if the multi-modal information reaches the preset wake-up parameters.
Preferably, the system further comprises: a speech analysis cloud processing module, for acquiring and analyzing a voice signal;
and the content generation module is further configured to generate robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
Preferably, the system comprises a time-axis-based artificial intelligence cloud processing module, configured to:
extend the self-cognition of the robot;
acquire the parameters of the life timeline;
fit the parameters of the robot's self-cognition to the parameters of the life timeline to generate the robot life timeline.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: combine living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probabilistic algorithm to calculate, for the robot on the life timeline, the probability that each parameter changes after a scene parameter on the time axis changes, thereby forming a fitted curve.
Preferably, the life timeline is a time axis covering the 24 hours of a day, and the parameters on the life timeline at least include the daily activities the user performs on that timeline and parameter values representing those activities.
The present invention also discloses a robot, comprising a system for generating robot interaction content as described in any of the above.
Compared with the prior art, the present invention has the following advantages. Existing robots generally generate interaction content through question-and-answer interaction in fixed application scenarios, and cannot generate robot expressions more accurately based on the current scene. The generation method of the present invention comprises: actively waking up a robot; acquiring multi-modal information of a user; determining a user intention according to the multi-modal information; and generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline. In this way, when the user is within a certain distance of the robot, the robot wakes up actively, and interaction content is generated more accurately from the user's multi-modal information and recognized intention combined with the robot's life timeline, so that the robot interacts and communicates with people more accurately and in a more human-like way. A person's daily life has a certain regularity; to make the robot communicate more like a human, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup. The present invention therefore adds the robot's life timeline to the generation of its interaction content, making the robot more human-like in interaction and giving it a human lifestyle along the life timeline. The method improves the human-likeness of generated interaction content, enhances the human-computer interaction experience, and raises intelligence.
Description of the drawings
Fig. 1 is a flow chart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
Specific embodiments
Although the flow charts describe operations as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
The computer equipment includes user equipment and network equipment. The user equipment or client includes but is not limited to computers, smartphones, PDAs, and the like; the network equipment includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. The computer equipment may operate alone to implement the present invention, or may access a network and implement the present invention through interaction with other computer equipment in that network. The network in which the computer equipment resides includes but is not limited to the Internet, wide area networks, metropolitan area networks, local area networks, VPNs, and the like.
The terms "first", "second", and the like may be used herein to describe units, but the units should not be limited by these terms; these terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it may be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. As used herein, the singular forms "a" and "an" are intended to include the plural as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprising" and/or "including" as used herein specify the presence of the stated features, integers, steps, operations, units, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
The present invention is described further below with reference to the accompanying drawings and preferred embodiments.
Embodiment one
As shown in Fig. 1, this embodiment discloses a method for generating robot interaction content, comprising:
S101, actively waking up a robot;
S102, acquiring multi-modal information of a user;
S103, determining a user intention according to the multi-modal information;
S104, generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline 300.
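Steps S101–S104 can be read as a simple pipeline. The sketch below is illustrative only: the wake-up test, the intent rule, the timeline table, and all names in it are assumptions for this illustration, not the patent's actual implementation.

```python
from datetime import datetime

# Hypothetical life-timeline table: hour of day -> robot activity/state.
LIFE_TIMELINE = {7: "waking up", 12: "having lunch", 18: "off work", 23: "sleeping"}

def wake_up_actively(multimodal):
    # S101: wake only if the preset wake-up parameter is reached.
    return multimodal.get("user_present", False)

def determine_intention(multimodal):
    # S103: a stand-in for the patent's intent-recognition module.
    return "greet" if multimodal.get("expression") == "happy" else "comfort"

def generate_interaction(multimodal, now=None):
    now = now or datetime.now()
    if not wake_up_actively(multimodal):        # S101 fails: stay dormant
        return None
    intention = determine_intention(multimodal)  # S102-S103
    timeline_state = LIFE_TIMELINE.get(now.hour, "idle")
    # S104: combine the intention with the current life-timeline state.
    if intention == "greet":
        return f"Hello! I see you are back ({timeline_state})."
    return f"Let me play a song for you ({timeline_state})."

print(generate_interaction({"user_present": True, "expression": "happy"},
                           datetime(2016, 1, 1, 18, 0)))
```

The point of the sketch is only the data flow: the same multi-modal input yields different content depending on where the robot currently is on its life timeline.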
Existing robots generally generate interaction content through question-and-answer interaction in fixed application scenarios, and cannot generate robot expressions more accurately based on the current scene. The generation method of the present invention comprises: actively waking up a robot; acquiring multi-modal information of a user; determining a user intention according to the multi-modal information; and generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline. In this way, when the user is within a certain distance of the robot, the robot wakes up actively, and interaction content is generated more accurately from the user's multi-modal information and recognized intention combined with the robot's life timeline, so that the robot interacts and communicates with people more accurately and in a more human-like way. A person's daily life has a certain regularity; to make the robot communicate more like a human, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup. The present invention therefore adds the robot's life timeline to the generation of its interaction content, making the robot more human-like in interaction and giving it a human lifestyle along the life timeline. The method improves the human-likeness of generated interaction content, enhances the human-computer interaction experience, and raises intelligence. The interaction content may be an expression, text, speech, and so on. The robot life timeline 300 is fitted and established in advance; specifically, it is a collection of parameters, and these parameters are passed to the system to generate interaction content.
The multi-modal information in this embodiment may be one or more of the user's expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light sensing information, and fingerprint information. In this embodiment the user's expression is preferred, since its recognition is accurate and efficient.
In this embodiment, being based on the life timeline means the following: the robot is fitted to the time axis of human daily life, and the robot's behavior follows this fitted schedule, so that over a day the robot has its own behavior and carries it out according to the life timeline, for example generating interaction content to communicate with humans. If the robot stays awake, it acts according to the behavior on this time axis, and the robot's self-cognition changes correspondingly along it. The life timeline and variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information. For example, if there was previously no anger value, scenes based on the life timeline and variable factors will automatically add one by simulating human self-cognition, thereby extending the robot's self-cognition.
For example, when the user is not in front of the robot, its light-sensing automatic detection module is not triggered, and the robot stays dormant. When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot actively wakes up and recognizes the user's expression, combining this with its life timeline. For instance, if the current time is 6 p.m., the user's time to get off work, then when the robot recognizes a happy expression it actively wakes up and greets the user with a happy expression of its own; when the expression is unhappy, it actively plays a song and shows a sympathetic expression.
According to one example, the step of actively waking up the robot comprises:
acquiring multi-modal information of the user;
matching the multi-modal information against preset wake-up parameters;
actively waking up the robot if the multi-modal information reaches the preset wake-up parameters.
In this way, collected multi-modal information, such as the user's actions and expressions, is compared with the preset wake-up parameters; the robot wakes up actively only if the preset wake-up parameters are reached. For example, when a human approaches the robot, its detection module detects the approach and the robot actively wakes itself to interact with the human. The robot can also be woken by a human expression, action, or other dynamic behavior. If the human stays in place, makes no expression or action, or remains static, for example lying down, the preset wake-up parameters are not reached; this is not regarded as a request to wake the robot, and the robot does not actively wake itself when it detects such behavior.
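The wake-up logic above can be sketched as a threshold match against preset parameters. The parameter names and numeric thresholds below are invented for illustration; the patent does not specify the matching rule.

```python
# Hypothetical preset wake-up parameters: minimal motion and maximal distance.
PRESET_WAKEUP = {"min_motion": 0.2, "max_distance_m": 1.5}

def should_wake(multimodal):
    """Return True only if the user's multi-modal information reaches the
    preset wake-up parameters: a moving user who is close enough."""
    motion = multimodal.get("motion", 0.0)                  # 0 = perfectly still
    distance = multimodal.get("distance_m", float("inf"))   # unseen = infinitely far
    return (motion >= PRESET_WAKEUP["min_motion"]
            and distance <= PRESET_WAKEUP["max_distance_m"])

# A person walking up to the robot wakes it; a motionless or absent one does not.
print(should_wake({"motion": 0.8, "distance_m": 0.5}))   # True
print(should_wake({"motion": 0.0, "distance_m": 0.5}))   # False
```

A static, lying-down user fails the motion threshold, matching the passage's example of behavior that is not regarded as a wake-up request.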
According to one example, the parameters of the robot life timeline are generated as follows:
extending the self-cognition of the robot;
acquiring the parameters of the life timeline;
fitting the parameters of the robot's self-cognition to the parameters of the life timeline to generate the robot life timeline. In this way the life timeline is added to the robot's own self-cognition, giving the robot a human-like daily life; for example, the cognition of having lunch at noon is added to the robot.
According to another example, the step of extending the robot's self-cognition specifically comprises: combining living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline. In this way the life timeline can be concretely added to the robot's own parameters.
According to another example, the step of fitting the parameters of the robot's self-cognition to the parameters of the life timeline specifically comprises: using a probabilistic algorithm to calculate, for the robot on the life timeline, the probability that each parameter changes after a scene parameter on the time axis changes, thereby forming a fitted curve. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters of the life timeline. The probabilistic algorithm may be a Bayesian algorithm.
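The passage above says only that a probabilistic, possibly Bayesian, algorithm fits the probability that a self-cognition parameter changes after a scene change on the time axis. One minimal reading, with entirely invented probabilities, is a conditional-probability table combined by the law of total probability and Bayes' rule:

```python
# P(scene) along the time axis and P(fatigue changes | scene); numbers invented.
p_scene = {"meal": 0.2, "sleep": 0.3, "exercise": 0.1, "idle": 0.4}
p_change_given_scene = {"meal": 0.1, "sleep": 0.8, "exercise": 0.9, "idle": 0.05}

# Law of total probability: chance the fatigue parameter changes at a random
# point on the life timeline.
p_change = sum(p_scene[s] * p_change_given_scene[s] for s in p_scene)

# Bayes' rule: which scene most likely caused an observed fatigue change?
posterior = {s: p_scene[s] * p_change_given_scene[s] / p_change for s in p_scene}
best = max(posterior, key=posterior.get)
print(round(p_change, 2), best)  # prints: 0.37 sleep
```

Evaluating such probabilities at each point of the time axis would trace out the "fitted curve" the passage refers to; the scene set and the change model are assumptions here.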
For example, within the 24 hours of a day, the robot is given actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup. Each action affects the robot's own self-cognition; the parameters on the life timeline are combined with the robot's self-cognition, and after fitting the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and so on. The robot can identify the location scene it is in, such as a coffee shop or a bedroom. Over the time axis of a day the robot carries out different actions, such as sleeping at night, eating at noon, and exercising during the day, and every scene on the life timeline affects the self-cognition. The changes of these values are fitted on the time axis as occurrence probabilities, in the manner of a probability model. Scene recognition: recognition of the location scene changes the geographic scene value in the self-cognition.
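As a concrete illustration of the paragraph above, the sketch below steps a small self-cognition state through a one-day schedule on the life timeline. The schedule, the parameter names, and the per-action increments are all assumptions for illustration, not values from the patent.

```python
# Hypothetical daily schedule on the 24-hour life timeline: (hour, activity).
SCHEDULE = [(0, "sleep"), (8, "exercise"), (12, "meal"), (13, "read"),
            (19, "dance"), (22, "sleep")]

# Effect of each activity on two self-cognition parameters; numbers invented.
EFFECTS = {"sleep":    {"fatigue": -2, "mood": 1},
           "exercise": {"fatigue": 3,  "mood": 2},
           "meal":     {"fatigue": -1, "mood": 1},
           "read":     {"fatigue": 1,  "mood": 1},
           "dance":    {"fatigue": 2,  "mood": 3}}

def run_day(state):
    """Apply each scheduled activity's effect to the self-cognition state."""
    for _, activity in SCHEDULE:
        for param, delta in EFFECTS[activity].items():
            state[param] = state.get(param, 0) + delta
    return state

print(run_day({"fatigue": 5, "mood": 0}))  # prints: {'fatigue': 6, 'mood': 9}
```

In the patent's fitted version the increments would be probabilities rather than fixed deltas, but the structure is the same: each timeline scene nudges the self-cognition parameters.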
According to another example, the method further comprises: acquiring and analyzing a voice signal;
and the step of generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline, further comprises:
generating robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
In this way robot interaction content can be generated more accurately by also taking the voice signal into account.
Embodiment two
As shown in Fig. 2, this embodiment discloses a system for generating robot interaction content, comprising:
a light-sensing automatic detection module 201, for actively waking up the robot;
an expression analysis cloud processing module 202, for acquiring multi-modal information of a user;
an intent recognition module 203, for determining a user intention according to the multi-modal information;
a content generation module 204, for generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline sent by the robot life timeline module 301.
In this way, when the user is within a certain distance of the robot, the robot wakes up actively, and interaction content is generated more accurately from the user's multi-modal information and recognized intention combined with the robot's life timeline, so that the robot interacts and communicates with people more accurately and in a more human-like way. A person's daily life has a certain regularity; to make the robot communicate more like a human, within the 24 hours of a day the robot is also given actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup. The present invention therefore adds the robot's life timeline to the generation of its interaction content, making the robot more human-like in interaction and giving it a human lifestyle along the life timeline. The method improves the human-likeness of generated interaction content, enhances the human-computer interaction experience, and raises intelligence.
For example, when the user is not in front of the robot, its light-sensing automatic detection module is not triggered, and the robot stays dormant. When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot actively wakes up and recognizes the user's expression: when the expression is happy, it actively wakes up and greets the user with a happy expression of its own; when the expression is unhappy, it actively plays a song and shows a sympathetic expression.
According to one example, the light-sensing automatic detection module is specifically configured to:
acquire multi-modal information of the user;
match the multi-modal information against preset wake-up parameters;
actively wake up the robot if the multi-modal information reaches the preset wake-up parameters.
In this way, collected multi-modal information, such as the user's actions and expressions, is compared with the preset wake-up parameters, and the robot wakes up actively only if the preset wake-up parameters are reached. For example, when a human approaches the robot, its detection module detects the approach and the robot actively wakes itself to interact with the human. The robot can also be woken by a human expression, action, or other dynamic behavior; if the human stays in place, makes no expression or action, or remains static, for example lying down, the preset wake-up parameters are not reached, this is not regarded as a request to wake the robot, and the robot does not actively wake itself when it detects such behavior.
According to one example, the system comprises a time-axis-based artificial intelligence cloud processing module, configured to:
extend the self-cognition of the robot;
acquire the parameters of the life timeline;
fit the parameters of the robot's self-cognition to the parameters of the life timeline to generate the robot life timeline.
In this way the life timeline is added to the robot's own self-cognition, giving the robot a human-like daily life; for example, the cognition of having lunch at noon is added to the robot.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: combine living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline. In this way the life timeline can be concretely added to the robot's own parameters.
According to another example, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probabilistic algorithm to calculate, for the robot on the life timeline, the probability that each parameter changes after a scene parameter on the time axis changes, thereby forming a fitted curve. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters of the life timeline.
For example, within the 24 hours of a day, the robot is given actions such as sleeping, exercising, eating, dancing, reading, and putting on makeup. Each action affects the robot's own self-cognition; the parameters on the life timeline are combined with the robot's self-cognition, and after fitting the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and so on. The robot can identify the location scene it is in, such as a coffee shop or a bedroom. Over the time axis of a day the robot carries out different actions, such as sleeping at night, eating at noon, and exercising during the day, and every scene on the life timeline affects the self-cognition. The changes of these values are fitted on the time axis as occurrence probabilities, in the manner of a probability model. Scene recognition: recognition of the location scene changes the geographic scene value in the self-cognition.
According to another example, the system further comprises: a speech analysis cloud processing module, for acquiring and analyzing a voice signal;
and the content generation module is further configured to generate robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
In this way robot interaction content can be generated more accurately by also taking the voice signal into account.
The present invention also discloses a robot, comprising a system for generating robot interaction content as described in any of the above.
The above content is a further detailed description of the present invention with reference to specific preferred embodiments, and the specific implementation of the present invention should not be considered limited to these descriptions. A person of ordinary skill in the technical field of the present invention may make several simple deductions or substitutions without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.
Claims (15)
1. A method for generating robot interaction content, characterized by comprising:
actively waking up a robot;
acquiring multi-modal information of a user;
determining a user intention according to the multi-modal information;
generating robot interaction content according to the multi-modal information and the user intention, in combination with a current robot life timeline.
2. The generation method according to claim 1, characterized in that the step of actively waking up the robot comprises:
acquiring multi-modal information of the user;
matching the multi-modal information against preset wake-up parameters;
actively waking up the robot if the multi-modal information reaches the preset wake-up parameters.
3. The generation method according to claim 1, characterized in that the method further comprises: acquiring and analyzing a voice signal;
and the step of generating robot interaction content according to the multi-modal information and the user intention, in combination with the current robot life timeline, further comprises:
generating robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
4. The generation method according to claim 1, wherein the method for generating the parameters of the life timeline of the robot comprises:
extending a self-cognition of the robot;
acquiring parameters of a life timeline;
fitting the parameters of the self-cognition of the robot to the parameters of the life timeline, so as to generate the life timeline of the robot.
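One plausible reading of claim 4's fitting step is a weighted blend of the robot's self-cognition parameters with the life-timeline parameters; the claim does not fix the fitting function, so this linear blend and the `weight` parameter are assumptions for illustration only.

```python
def fit_life_timeline(self_cognition: dict, timeline_params: dict,
                      weight: float = 0.5) -> dict:
    """Blend the robot's self-cognition parameters with the life-timeline
    parameters; the result is one point of the generated robot life timeline."""
    keys = set(self_cognition) | set(timeline_params)
    return {k: weight * self_cognition.get(k, 0.0)
               + (1 - weight) * timeline_params.get(k, 0.0)
            for k in keys}
```

For example, if the robot's self-cognition says `energy = 1.0` but the timeline slot says `energy = 0.2`, an equal-weight fit yields `energy = 0.6`.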
5. The generation method according to claim 4, wherein the step of extending the self-cognition of the robot specifically comprises: combining a living scenario with the self-recognition of the robot to form a self-cognition curve based on the life timeline.
6. The generation method according to claim 4, wherein the step of fitting the parameters of the self-cognition of the robot to the parameters of the life timeline specifically comprises: using a probability algorithm to calculate, for the robot on the life timeline, the probability of a change of each parameter after a change of a scenario parameter on the timeline, so as to form a fitted curve.
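The "probability algorithm" of claim 6 is not specified; a minimal sketch, assuming it means per-scenario probability distributions over a robot parameter (here a hypothetical mood parameter with made-up probabilities), could look like this:

```python
import random

# Hypothetical distributions over the robot's mood parameter, one per
# timeline scenario; the scenarios and probabilities are illustrative only.
SCENARIO_MOOD_PROB = {
    "mealtime": {"cheerful": 0.7, "calm": 0.2, "tired": 0.1},
    "midnight": {"cheerful": 0.1, "calm": 0.3, "tired": 0.6},
}

def sample_mood(scenario: str, rng: random.Random) -> str:
    """Draw the robot's mood after a scenario change, according to the
    probabilities of each parameter change for that scenario."""
    probs = SCENARIO_MOOD_PROB[scenario]
    return rng.choices(list(probs), weights=list(probs.values()))[0]
```

Sampling many times per scenario and plotting the resulting frequencies over the 24-hour timeline would give the "fitted curve" the claim refers to.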
7. The generation method according to claim 4, wherein the life timeline refers to a timeline covering the 24 hours of a day, and the parameters of the life timeline at least include daily life behaviors carried out by the user on the life timeline and parameter values representing those behaviors.
8. A system for generating robot interaction content, characterized by comprising:
a light-sensing automatic detection module, configured to actively wake up a robot;
an expression analysis cloud processing module, configured to acquire multi-modal information of a user;
an intention recognition module, configured to determine a user intention according to the multi-modal information of the user;
a content generation module, configured to generate robot interaction content according to the multi-modal information of the user and the user intention, in combination with a current life timeline of the robot.
9. The generation system according to claim 8, wherein the light-sensing automatic detection module is specifically configured to:
acquire multi-modal information of the user;
match the multi-modal information of the user against a preset wake-up parameter;
actively wake up the robot if the multi-modal information of the user reaches the preset wake-up parameter.
10. The generation system according to claim 8, wherein the system further comprises: a speech analysis cloud processing module, configured to acquire and analyze a voice signal; and
the content generation module is further configured to: generate robot interaction content according to the multi-modal information of the user, the voice signal and the user intention, in combination with the current life timeline of the robot.
11. The generation system according to claim 8, wherein the system comprises a timeline-based artificial intelligence cloud processing module, configured to:
extend a self-cognition of the robot;
acquire parameters of a life timeline;
fit the parameters of the self-cognition of the robot to the parameters of the life timeline, so as to generate the life timeline of the robot.
12. The generation system according to claim 11, wherein the timeline-based artificial intelligence cloud processing module is further configured to: combine a living scenario with the self-recognition of the robot to form a self-cognition curve based on the life timeline.
13. The generation system according to claim 11, wherein the timeline-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate, for the robot on the life timeline, the probability of a change of each parameter after a change of a scenario parameter on the timeline, so as to form a fitted curve.
14. The generation system according to claim 11, wherein the life timeline refers to a timeline covering the 24 hours of a day, and the parameters of the life timeline at least include daily life behaviors carried out by the user on the life timeline and parameter values representing those behaviors.
15. A robot, characterized by comprising the system for generating robot interaction content according to any one of claims 8 to 14.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/087739 WO2018000260A1 (en) | 2016-06-29 | 2016-06-29 | Method for generating robot interaction content, system, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106537425A true CN106537425A (en) | 2017-03-22 |
Family
ID=58335767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680001752.7A Pending CN106537425A (en) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106537425A (en) |
WO (1) | WO2018000260A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086392A (en) * | 2018-07-27 | 2018-12-25 | 北京光年无限科技有限公司 | Dialogue-based interaction method and system |
CN112497217A (en) * | 2020-12-02 | 2021-03-16 | 深圳市香蕉智能科技有限公司 | Robot interaction method and device, terminal equipment and readable storage medium |
CN112497217B (en) * | 2020-12-02 | 2022-12-13 | 深圳市香蕉智能科技有限公司 | Robot interaction method and device, terminal equipment and readable storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11407116B2 (en) * | 2017-01-04 | 2022-08-09 | Lg Electronics Inc. | Robot and operation method therefor |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN105490918A (en) * | 2015-11-20 | 2016-04-13 | 深圳狗尾草智能科技有限公司 | System and method for enabling robot to interact with master initiatively |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11212934A (en) * | 1998-01-23 | 1999-08-06 | Sony Corp | Information processing device and method and information supply medium |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
2016
- 2016-06-29 WO PCT/CN2016/087739 patent/WO2018000260A1/en active Application Filing
- 2016-06-29 CN CN201680001752.7A patent/CN106537425A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2018000260A1 (en) | 2018-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106537294A (en) | Method, system and robot for generating interactive content of robot | |
CN106462254A (en) | Robot interaction content generation method, system and robot | |
CN107632706B (en) | Application data processing method and system of multi-modal virtual human | |
CN105345818B (en) | Band is in a bad mood and the 3D video interactives robot of expression module | |
CN106463118B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
CN106471444A (en) | A kind of exchange method of virtual 3D robot, system and robot | |
CN107797663A (en) | Multi-modal interaction processing method and system based on visual human | |
CN106096717B (en) | Information processing method towards intelligent robot and system | |
CN106662932A (en) | Method, system and robot for recognizing and controlling household appliances based on intention | |
CN107894833A (en) | Multi-modal interaction processing method and system based on visual human | |
CN103919537B (en) | Emotion record analysis guidance system and its implementation | |
CN107861626A (en) | The method and system that a kind of virtual image is waken up | |
CN107340865A (en) | Multi-modal virtual robot exchange method and system | |
CN106537293A (en) | Method and system for generating robot interactive content, and robot | |
CN106462255A (en) | A method, system and robot for generating interactive content of robot | |
CN106462124A (en) | Method, system and robot for identifying and controlling household appliances based on intention | |
CN107704169A (en) | The method of state management and system of visual human | |
CN106471572B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
CN112233211B (en) | Animation production method, device, storage medium and computer equipment | |
CN106537425A (en) | Method and system for generating robot interaction content, and robot | |
CN114492831A (en) | Method and device for generating federal learning model | |
CN111312222A (en) | Awakening and voice recognition model training method and device | |
EP3566180A1 (en) | Systems and methods for artificial intelligence interface generation, evolution, and/or adjustment | |
CN106489114A (en) | A kind of generation method of robot interactive content, system and robot | |
CN106462804A (en) | Method and system for generating robot interaction content, and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170322 |