CN106537293A - Method and system for generating robot interactive content, and robot - Google Patents
- Publication number
- CN106537293A (application number CN201680001750.8A)
- Authority
- CN
- China
- Prior art keywords
- robot
- life
- user
- information
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
Abstract
The present invention provides a method for generating robot interaction content, comprising: actively waking up a robot; acquiring multi-modal information of a user; determining a user intention based on the multi-modal information; acquiring location scene information; and generating robot interaction content based on the multi-modal information, the user intention, and the location scene information, in conjunction with the current robot life timeline. The invention adds the robot's life timeline to the generation of its interaction content, making the robot more humanlike when interacting with people, so that the robot has a humanlike way of life along the life timeline. The method can improve the human-likeness of generated robot interaction content, enhance the human-computer interaction experience, and increase intelligence.
Description
Technical field
The present invention relates to the field of robot interaction technology, and more particularly to a method, system, and robot for generating robot interaction content.
Background art
During interaction with a computer, humans generally wake the robot actively, and interaction begins after the robot picks up sound. When giving expression feedback, a human usually initiates dialogue actively after meeting someone, hears the speaker, analyzes the speaker's language and expression in the brain, and then gives reasonable expression feedback. For robots, however, the current interaction mode typically starts with sound pickup and then feeds back an expression. This mode makes the robot's interactivity and intelligence very low, leaving the following problem: a robot that has been actively woken mainly serves to greet, with language and expressions preset for the user; in such a case the robot still outputs expressions according to an interaction mode pre-designed by humans, so the robot does not behave in a humanlike way; it cannot, as a human does, see who the other party is, analyze the other party's expression, then actively inquire after the other party and feed back a corresponding expression.
How to provide a machine expression generation method with active wake-up and automatic detection of human facial expressions, capable of improving the human-likeness of robot interaction content generation, is therefore a technical problem urgently needing a solution in the art.
Summary of the invention
It is an object of the present invention to provide a method, system, and robot for generating robot interaction content, enabling the robot to actively wake up and automatically detect human facial expressions, thereby improving the human-likeness of robot interaction content generation, enhancing the human-computer interaction experience, and increasing intelligence.
The purpose of the present invention is achieved through the following technical solutions:
A method for generating robot interaction content, comprising:
actively waking up a robot;
acquiring multi-modal information of a user;
determining a user intention according to the multi-modal information;
acquiring location scene information;
generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with a current robot life timeline.
Preferably, the step of actively waking up the robot comprises:
acquiring multi-modal information of the user;
matching the multi-modal information against a preset wake-up parameter; and
actively waking up the robot if the multi-modal information reaches the preset wake-up parameter.
Preferably, the parameters of the robot life timeline are generated by:
extending the self-cognition of the robot;
acquiring the parameters of the life timeline; and
fitting the self-cognition parameters of the robot to the parameters of the life timeline to generate the robot life timeline.
Preferably, the step of extending the self-cognition of the robot specifically comprises: combining living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline.
Preferably, the step of fitting the self-cognition parameters of the robot to the parameters of the life timeline specifically comprises: using a probabilistic algorithm to calculate the probability of each parameter of the robot on the life timeline changing after a time-axis scene parameter changes, thereby forming a fitted curve.
Preferably, the life timeline refers to a 24-hour daily timeline, and the parameters of the life timeline at least include the daily activities the user performs on the life timeline and parameter values representing those activities.
Preferably, the method further comprises: acquiring and analyzing a voice signal; and the step of generating robot interaction content according to the multi-modal information and the user intention in combination with the current robot life timeline further comprises:
generating the robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information from video information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information from picture information.
Preferably, the step of acquiring location scene information specifically comprises: acquiring the location scene information from gesture information.
The present invention further discloses a system for generating robot interaction content, comprising:
a light-sensing automatic detection module, for actively waking up a robot;
an expression analysis cloud processing module, for acquiring multi-modal information of a user;
an intention recognition module, for determining a user intention according to the multi-modal information;
a scene recognition module, for acquiring location scene information;
a content generation module, for generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with the current robot life timeline.
Preferably, the light-sensing automatic detection module is specifically configured to:
acquire multi-modal information of the user;
match the multi-modal information against a preset wake-up parameter; and
actively wake up the robot if the multi-modal information reaches the preset wake-up parameter.
Preferably, the system includes a time-axis-based artificial intelligence cloud processing module, configured to:
extend the self-cognition of the robot;
acquire the parameters of the life timeline; and
fit the self-cognition parameters of the robot to the parameters of the life timeline to generate the robot life timeline.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: combine living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline.
Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probabilistic algorithm to calculate the probability of each parameter of the robot on the life timeline changing after a time-axis scene parameter changes, thereby forming a fitted curve.
Preferably, the life timeline refers to a 24-hour daily timeline, and the parameters of the life timeline at least include the daily activities the user performs on the life timeline and parameter values representing those activities.
Preferably, the system further includes a speech analysis cloud processing module, for acquiring and analyzing a voice signal; and the content generation module is further configured to generate the robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline.
Preferably, the scene recognition module is specifically configured to acquire the location scene information from video information.
Preferably, the scene recognition module is specifically configured to acquire the location scene information from picture information.
Preferably, the scene recognition module is specifically configured to acquire the location scene information from gesture information.
The present invention further discloses a robot, comprising a system for generating robot interaction content as described in any one of the above.
Compared with the prior art, the present invention has the following advantages. Existing robots generate interaction content mainly through question-and-answer interaction in fixed application scenes and cannot generate robot expressions more accurately based on the current scene. The method for generating robot interaction content comprises: actively waking up a robot; acquiring multi-modal information of a user; determining a user intention according to the multi-modal information; acquiring location scene information; and generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with the current robot life timeline. Thus, when the user is at a specific distance from the robot, the robot wakes up actively and, according to the recognized multi-modal information and intention, combines the location scene information with the robot's life timeline to generate interaction content more accurately, so as to interact and communicate with people more accurately and in a more humanlike way. A person's daily life has a certain regularity; to make the robot more humanlike when communicating with people, within the 24 hours of a day the robot is also given activities such as sleeping, exercising, eating, dancing, reading, and making up. The present invention therefore adds the robot's life timeline to the generation of robot interaction content, making the robot more humanlike in interaction and giving it a humanlike way of life along the life timeline. The method can improve the human-likeness of robot interaction content generation, enhance the human-computer interaction experience, and increase intelligence.
Description of the drawings
Fig. 1 is a flow chart of a method for generating robot interaction content according to Embodiment one of the present invention;
Fig. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment two of the present invention.
Detailed description of the embodiments
Although the flow charts describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. The order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, function, procedure, subroutine, subprogram, and so on.
Computer devices include user devices and network devices. User devices or clients include but are not limited to computers, smart phones, PDAs, etc.; network devices include but are not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. A computer device may operate alone to implement the present invention, or may access a network and implement the present invention through interaction with other computer devices in the network. The network in which the computer device resides includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, etc.
The terms "first", "second", etc. may be used herein to describe units, but these units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the listed associated items. When a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. As used herein, the singular forms "a" and "an" are intended to include the plural as well, unless the context clearly indicates otherwise. It should also be understood that the terms "including" and/or "comprising" as used herein specify the presence of the stated features, integers, steps, operations, units, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
The present invention will be further described below with reference to the accompanying drawings and preferred embodiments.
Embodiment one
As shown in Fig. 1, the method for generating robot interaction content disclosed in this embodiment includes:
S101, actively waking up a robot;
S102, acquiring multi-modal information of a user;
S103, determining a user intention according to the multi-modal information;
S104, acquiring location scene information;
S105, generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with the current robot life timeline 300.
Existing robots generate interaction content mainly through question-and-answer interaction in fixed application scenes and cannot generate robot expressions more accurately based on the current scene. The method for generating robot interaction content includes: actively waking up a robot; acquiring multi-modal information of a user; determining a user intention according to the multi-modal information; acquiring location scene information; and generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with the current robot life timeline. Thus, when the user is at a specific distance from the robot, the robot wakes up actively and, according to the recognized multi-modal information and intention, combines the location scene information with the robot's life timeline to generate interaction content more accurately, so as to interact and communicate with people more accurately and in a more humanlike way. A person's daily life has a certain regularity; to make the robot more humanlike when communicating with people, within the 24 hours of a day the robot is also given activities such as sleeping, exercising, eating, dancing, and reading. The present invention therefore adds the robot's life timeline to the generation of robot interaction content, making the robot more humanlike in interaction and giving it a humanlike way of life along the life timeline. The method can improve the human-likeness of robot interaction content generation, enhance the human-computer interaction experience, and increase intelligence. The robot life timeline 300 is fitted and configured in advance; specifically, the robot life timeline 300 is a collection of parameter values, and these parameters are passed to the system for generating the interaction content.
The multi-modal information in this embodiment can be one or more of the user's expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information. In this embodiment the user's expression is preferred, since its recognition is accurate and efficient.
In this embodiment, being based on the life timeline specifically means: the robot is fitted to the timeline of human daily life, and the robot's behavior follows this fitted timeline; that is, the robot obtains its own behavior over the course of a day, so that it carries out its own activities based on the life timeline, for example generating interaction content to communicate with humans. If the robot remains awake, it will act according to the behavior on this timeline, and the robot's self-cognition will also change accordingly along this timeline. The life timeline and variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information: for example, if there was previously no anger value, scenes based on the life timeline and variable factors will automatically simulate human self-cognition scenes as before, thereby extending the robot's self-cognition.
For example, when the user is not in front of the robot, the light-sensing automatic detection module of the robot is not triggered, so the robot remains dormant. When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot wakes up actively and recognizes the user's expression, combining this with the location scene information and the robot's life timeline. For example, if the current time is 6 p.m. and the location scene is the doorway, it is the user's time to come home from work; then, when the robot recognizes that the user's expression is happy, it actively wakes up, greets the user, and shows a happy expression; when the expression is unhappy, it actively plays a song and shows a sympathetic expression. If instead the current time is 9 a.m. and the location scene is a room, then when the robot recognizes that the user's expression is happy, it actively wakes up and greets the user with a "good morning" expression; when unhappy, it actively plays a song and shows a pitying expression. The interaction content can be an expression, text, voice, and so on.
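The selection logic in the example above can be sketched as a small rule table. This is only an illustrative sketch: the function name, scene labels, and time windows are assumptions, and the patent leaves the concrete mapping to the fitted life timeline.

```python
from datetime import time

def generate_interaction_content(expression, scene, now):
    """Pick a hypothetical response from the current time, location scene,
    and recognized user expression (all rule values are illustrative)."""
    if scene == "doorway" and time(17, 0) <= now <= time(20, 0):
        # Evening at the doorway: the user is presumably coming home from work.
        if expression == "happy":
            return {"speech": "Welcome home!", "expression": "happy"}
        return {"speech": None, "action": "play_song", "expression": "sympathetic"}
    if scene == "room" and time(6, 0) <= now <= time(11, 0):
        # Morning in the room.
        if expression == "happy":
            return {"speech": "Good morning!", "expression": "cheerful"}
        return {"speech": None, "action": "play_song", "expression": "pitying"}
    # Fallback when no timeline rule matches.
    return {"speech": "Hello.", "expression": "neutral"}

print(generate_interaction_content("happy", "doorway", time(18, 0)))
```

In a full system these rules would not be hand-written but derived from the fitted life timeline described below.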
According to one example, the step of actively waking up the robot includes:
acquiring multi-modal information of the user;
matching the multi-modal information against a preset wake-up parameter; and
actively waking up the robot if the multi-modal information reaches the preset wake-up parameter.
In this way, the user's multi-modal information, such as the user's actions and expressions, can be collected and compared with the preset wake-up parameter; if the preset wake-up parameter is reached, the robot wakes itself up actively, and if not, it does not wake up. For example, when a human approaches the robot, the robot's detection module detects the human's approach and the robot actively wakes itself so that it can interact with the human. The robot can also be woken up by a human expression, action, or other dynamic behavior; if the human stays in place without making any expression or action, or remains static, e.g., lying down, the preset wake-up parameter is not reached, this is not regarded as a request to wake the robot, and the robot will not wake itself up when it detects such behavior.
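The matching step above can be sketched as a threshold comparison. The parameter names and threshold values here are assumptions for illustration; the patent does not specify concrete wake-up parameters.

```python
# Hypothetical preset wake-up parameters: the minimum light-sensor change and
# motion score required before the robot wakes itself up.
PRESET_WAKE_PARAMS = {"light_delta": 0.3, "motion_score": 0.5}

def should_wake(readings):
    """Return True when every collected reading reaches its preset
    wake-up parameter; otherwise the robot stays dormant."""
    return all(readings.get(name, 0.0) >= threshold
               for name, threshold in PRESET_WAKE_PARAMS.items())

# A person walking up: both readings exceed the thresholds, so the robot wakes.
assert should_wake({"light_delta": 0.6, "motion_score": 0.9})
# A person standing motionless: the motion score stays below threshold.
assert not should_wake({"light_delta": 0.6, "motion_score": 0.0})
```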
According to one example, the parameters of the robot life timeline are generated by:
extending the self-cognition of the robot;
acquiring the parameters of the life timeline; and
fitting the self-cognition parameters of the robot to the parameters of the life timeline to generate the robot life timeline. In this way the life timeline is added to the robot's own self-cognition, giving the robot a humanlike life; for example, the cognition of eating lunch at noon is added to the robot.
According to some other examples, the step of extending the self-cognition of the robot specifically includes: combining living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline. In this way the life timeline can be concretely added to the robot's own parameters.
According to some other examples, the step of fitting the self-cognition parameters of the robot to the parameters of the life timeline specifically includes: using a probabilistic algorithm to calculate the probability of each parameter of the robot on the life timeline changing after a time-axis scene parameter changes, thereby forming a fitted curve. In this way the self-cognition parameters of the robot can be concretely fitted to the parameters of the life timeline.
For example, within the 24 hours of a day, the robot is given activities such as sleeping, exercising, eating, dancing, reading, and making up. Each activity affects the robot's own self-cognition; the parameters on the life timeline are combined with the robot's own self-cognition, and after fitting, the robot's self-cognition includes mood, fatigue value, affinity, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, etc. The robot can identify the location scene it is in, such as a coffee shop or a bedroom. Different activities are carried out along the robot's daily timeline, such as sleeping at night, eating at noon, and exercising in the daytime; all of these scenes on the life timeline affect the self-cognition. The changes of these values are fitted on the timeline with a probabilistic model, so that the occurrence probability of each activity is fitted over time. Scene recognition: this location scene recognition changes the geographic scene value in the self-cognition.
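One simple reading of the "fitted curve" above is a per-hour probability of each activity, estimated from logged observations of a simulated daily routine. This is a minimal sketch under that assumption; the patent does not specify the fitting procedure beyond "probabilistic algorithm".

```python
from collections import Counter

def fit_activity_curve(observations):
    """Fit, for each hour of the day, the probability of each observed activity.

    `observations` is a list of (hour, activity) pairs, e.g. several logged
    days of a simulated human routine. The result maps
    hour -> {activity: probability}.
    """
    by_hour = {}
    for hour, activity in observations:
        by_hour.setdefault(hour, Counter())[activity] += 1
    curve = {}
    for hour, counts in by_hour.items():
        total = sum(counts.values())
        curve[hour] = {act: n / total for act, n in counts.items()}
    return curve

# Two of three noon observations are "eat", so P(eat | 12:00) = 2/3.
logs = [(12, "eat"), (12, "eat"), (12, "read"), (23, "sleep")]
curve = fit_activity_curve(logs)
print(curve[12]["eat"])
```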
According to some other examples, the method further includes: acquiring and analyzing a voice signal; and the step of generating robot interaction content according to the multi-modal information and the user intention in combination with the current robot life timeline further includes: generating the robot interaction content according to the multi-modal information, the voice signal, and the user intention, in combination with the current robot life timeline. In this way the robot interaction content can be generated in combination with the voice signal, which is more accurate.
According to some other examples, the step of acquiring location scene information specifically includes: acquiring the location scene information from video information. Acquiring the location scene information from video in this way is more accurate.
According to some other examples, the step of acquiring location scene information specifically includes: acquiring the location scene information from picture information. Acquisition from pictures saves the robot computation, making the development of the robot more rapid.
According to some other examples, the step of acquiring location scene information specifically includes: acquiring the location scene information from gesture information. Acquisition from gestures broadens the robot's range of application; for example, a disabled person, or an owner who sometimes does not want to speak, can convey information to the robot through gestures.
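The three preferred sources of location scene information can be sketched as a dispatcher over interchangeable acquisition routines. The handler bodies here are stand-ins (a real system would run actual scene classification); all names and return values are illustrative assumptions.

```python
def scene_from_video(frames):
    # Placeholder: a real system would classify the scene from video frames.
    return "doorway"

def scene_from_picture(image):
    # Placeholder: classification from a single still picture is cheaper.
    return "bedroom"

def scene_from_gesture(gesture):
    # E.g. an owner pointing at the door could indicate the doorway scene.
    return {"point_at_door": "doorway"}.get(gesture, "unknown")

def acquire_location_scene(source_kind, data):
    """Dispatch to whichever information source is available."""
    handlers = {"video": scene_from_video,
                "picture": scene_from_picture,
                "gesture": scene_from_gesture}
    return handlers[source_kind](data)

print(acquire_location_scene("gesture", "point_at_door"))
```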
Embodiment two
As shown in Fig. 2, the system for generating robot interaction content disclosed in this embodiment includes:
a light-sensing automatic detection module 201, for actively waking up a robot;
an expression analysis cloud processing module 202, for acquiring multi-modal information of a user;
an intention recognition module 203, for determining a user intention according to the multi-modal information;
a scene recognition module 204, for acquiring location scene information;
a content generation module 205, for generating robot interaction content according to the multi-modal information, the user intention, and the location scene information, in combination with the current robot life timeline sent by a robot life timeline module 301.
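The five modules above can be pictured as one pass through a pipeline. The functions below are illustrative stand-ins for modules 201-205 (the real modules are cloud services and sensor drivers, and every name, rule, and value here is an assumption):

```python
def light_detect_and_wake(sensor):                 # module 201
    return sensor.get("light_delta", 0) > 0.3

def analyze_expression(frame):                     # module 202 (cloud-hosted)
    return {"expression": frame.get("face", "neutral")}

def recognize_intention(multimodal):               # module 203
    return "greet" if multimodal["expression"] == "happy" else "comfort"

def recognize_scene(context):                      # module 204
    return context.get("scene", "unknown")

def generate_content(multimodal, intention, scene, timeline_hour):  # module 205
    # The life timeline (module 301) is reduced here to just the hour of day.
    mood = "daytime" if 6 <= timeline_hour < 18 else "evening"
    return f"{intention}:{scene}:{mood}"

# A single pass through the hypothetical pipeline.
if light_detect_and_wake({"light_delta": 0.8}):
    mm = analyze_expression({"face": "happy"})
    out = generate_content(mm, recognize_intention(mm),
                           recognize_scene({"scene": "doorway"}), 18)
    print(out)  # greet:doorway:evening
```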
Thus, when the user is at a specific distance from the robot, the robot wakes up actively and, according to the recognized multi-modal information and intention, combines the location scene information with the robot's life timeline to generate interaction content more accurately, so as to interact and communicate with people more accurately and in a more humanlike way. A person's daily life has a certain regularity; to make the robot more humanlike when communicating with people, within the 24 hours of a day the robot is also given activities such as sleeping, exercising, eating, dancing, and reading. The present invention therefore adds the robot's life timeline to the generation of robot interaction content, making the robot more humanlike in interaction and giving it a humanlike way of life along the life timeline. The method can improve the human-likeness of robot interaction content generation, enhance the human-computer interaction experience, and increase intelligence.
For example, when the user is not in front of the robot, the light-sensing automatic detection module of the robot is not triggered, so the robot remains dormant. When the user walks up to the robot, the light-sensing automatic detection module detects the user's approach, so the robot wakes up actively and recognizes the user's expression, combining this with the location scene information and the robot's life timeline. For example, if the current time is 6 p.m. and the location scene is the doorway, it is the user's time to come home from work; then, when the robot recognizes that the user's expression is happy, it actively wakes up, greets the user, and shows a happy expression; when the expression is unhappy, it actively plays a song and shows a sympathetic expression.
According to one example, the light-sensing automatic detection module is specifically configured to:
acquire multi-modal information of the user;
match the multi-modal information against a preset wake-up parameter; and
actively wake up the robot if the multi-modal information reaches the preset wake-up parameter.
In this way, the user's multi-modal information, such as the user's actions and expressions, can be collected and compared with the preset wake-up parameter; if the preset wake-up parameter is reached, the robot wakes itself up actively. For example, when a human approaches the robot, the robot's detection module detects the human's approach and the robot actively wakes itself so that it can interact with the human. The robot can also be woken up by a human expression, action, or other dynamic behavior; if the human stays in place without making any expression or action, or remains static, e.g., lying down, the preset wake-up parameter is not reached, this is not regarded as a request to wake the robot, and the robot will not wake itself up when it detects such behavior.
According to one example, the system includes a time-axis-based artificial intelligence cloud processing module, configured to:
extend the self-cognition of the robot;
acquire the parameters of the life timeline; and
fit the self-cognition parameters of the robot to the parameters of the life timeline to generate the robot life timeline.
In this way the life timeline is added to the robot's own self-cognition, giving the robot a humanlike life; for example, the cognition of eating lunch at noon is added to the robot.
According to some other examples, the time-axis-based artificial intelligence cloud processing module is further configured to: combine living scenes with the robot's self-recognition to form a self-cognition curve based on the life timeline. In this way the life timeline can be concretely added to the robot's own parameters.
According to some other examples, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probabilistic algorithm to calculate the probability of each parameter of the robot on the life timeline changing after a time-axis scene parameter changes, thereby forming a fitted curve. In this way the self-cognition parameters of the robot can be concretely fitted to the parameters of the life timeline. The probabilistic algorithm can be a Bayesian probability algorithm.
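Since the passage names a Bayesian probability algorithm as one option, a minimal two-hypothesis Bayes update of a single self-cognition parameter can be sketched as follows. The parameter ("tired"), the evidence ("it is late"), and all probability values are illustrative assumptions, not values from the patent.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(state | evidence) from a two-hypothesis Bayes rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Prior belief that the robot is "tired" at this point of the life timeline,
# updated by the scene evidence "it is 11 p.m.".
prior_tired = 0.4
p_late_given_tired = 0.9      # P(it is late | tired)
p_late_given_rested = 0.2     # P(it is late | not tired)

posterior = bayes_update(prior_tired, p_late_given_tired, p_late_given_rested)
print(round(posterior, 3))  # 0.75
```

Repeating such updates along the 24-hour timeline, one per scene-parameter change, traces out one plausible form of the fitted curve described above.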
For example, within the 24 hours of a day, the robot is made to sleep, exercise, eat, dance, read, put on makeup, and so on. Each action affects the robot's own self-cognition. By combining the parameters on the life time axis with the robot's own self-cognition and fitting them, the robot's self-cognition comes to include mood, fatigue value, affinity, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value, location object value, and so on. The robot can identify the location scene it is in, such as a coffee shop or a bedroom. The robot carries out different actions along the time axis of a day, such as sleeping at night, eating at noon and exercising in the daytime; all of these scenes on the life time axis have an influence on its self-cognition. The changes of these values are handled with the fitting approach of a probability model, fitting out the probability of each of these actions on the time axis. Scene recognition: this kind of location scene recognition changes the geographic scene value in the self-cognition.
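The enumerated self-cognition parameters can be held in a simple container, with scene recognition updating the location scene value. This is a hedged sketch: the field names and the scene weights are assumptions for illustration, not values from the patent.

```python
# Illustrative self-cognition parameter set (names are assumptions).
self_cognition = {
    "mood": 0.5,                  # current emotional state, 0..1
    "fatigue": 0.2,
    "affinity": 0.7,              # closeness to the user
    "favorability": 0.6,
    "interaction_count": 0,
    "age": 1,                     # robot persona attributes
    "height": 1.2,
    "weight": 30.0,
    "game_scene_value": 0.0,
    "game_object_value": 0.0,
    "location_scene_value": 0.0,  # e.g. coffee shop vs bedroom
    "location_object_value": 0.0,
}

def apply_scene(cognition: dict, scene: str) -> dict:
    """Location scene recognition updates the geographic scene value."""
    scene_values = {"coffee shop": 0.4, "bedroom": 0.9}  # invented weights
    updated = dict(cognition)  # leave the original untouched
    updated["location_scene_value"] = scene_values.get(scene, 0.0)
    return updated

print(apply_scene(self_cognition, "bedroom")["location_scene_value"])  # 0.9
```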
According to some other examples, the system further includes a speech analysis cloud processing module for obtaining and analyzing a voice signal.
The content generating module is further configured to: generate robot interactive content according to the user multimodal information, the voice signal and the user intention, in combination with the current robot life time axis. Robot interactive content generated with the voice signal taken into account is thus more accurate.
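The content-generation step can be pictured as combining the recognized intention, the scene, the voice signal and the current position on the life time axis. The template logic below is purely illustrative; the patent does not describe a concrete generation algorithm.

```python
def generate_interaction(intention: str, scene: str, voice_text: str,
                         hour: int) -> str:
    """Pick a reply from intention plus life-time-axis context (a sketch)."""
    if intention == "greet" and 11 <= hour <= 13:
        # The time axis says it is lunchtime, so the greeting reflects that.
        return f"Hello! It is lunchtime, shall we eat here in the {scene}?"
    if intention == "greet":
        return f"Hello! Nice to see you in the {scene}."
    # Fall back to echoing the analyzed voice signal.
    return f"I heard: '{voice_text}'. How can I help?"

print(generate_interaction("greet", "coffee shop", "hi there", hour=12))
```

The same greeting intention thus yields different content at noon than at night, which is the point of conditioning generation on the life time axis.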
According to some other examples, the scene recognition module is specifically configured to obtain the location scene information through video information. Obtaining the location scene information through video in this way is more accurate.
According to some other examples, the scene recognition module is specifically configured to obtain the location scene information through picture information. Obtaining it through pictures saves the robot computation, making robot development faster.
According to some other examples, the scene recognition module is specifically configured to obtain the location scene information through gesture information. Obtaining it through gestures widens the robot's scope of application: for example, a disabled person, or an owner who sometimes does not want to speak, can convey information to the robot by gesture.
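One simple way to realize gesture-based scene information is a lookup from recognized gestures to scene hints. The gesture names and mappings below are assumptions for illustration; real gesture recognition would sit in front of this table.

```python
# Hypothetical mapping from recognized gestures to location scene hints.
GESTURE_SCENES = {
    "cup_to_mouth": "coffee shop",
    "hands_under_head": "bedroom",
}

def scene_from_gesture(gesture: str, default: str = "unknown") -> str:
    """Return the scene hint for a gesture, or a default if unrecognized."""
    return GESTURE_SCENES.get(gesture, default)

print(scene_from_gesture("cup_to_mouth"))  # coffee shop
```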
A robot disclosed in this embodiment includes a system for generating robot interactive content as described in any of the examples above.
The above content is a further detailed description of the present invention made with reference to specific preferred embodiments, and the specific implementation of the present invention cannot be concluded to be limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, a number of simple deductions or substitutions may also be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.
Claims (21)
1. A method for generating robot interactive content, characterized by comprising:
actively waking up a robot;
obtaining user multimodal information;
determining a user intention according to the user multimodal information;
obtaining location scene information; and
generating robot interactive content according to the user multimodal information, the user intention and the location scene information, in combination with a current robot life time axis.
2. The generation method according to claim 1, characterized in that the step of actively waking up the robot comprises:
obtaining user multimodal information;
matching the user multimodal information against a preset wake-up parameter; and
actively waking up the robot if the user multimodal information reaches the preset wake-up parameter.
3. The generation method according to claim 1, characterized in that the method further comprises: obtaining and analyzing a voice signal;
and the step of generating robot interactive content according to the user multimodal information and the user intention in combination with the current robot life time axis further comprises:
generating robot interactive content according to the user multimodal information, the voice signal and the user intention, in combination with the current robot life time axis.
4. The generation method according to claim 1, characterized in that the method for generating the parameters of the robot life time axis comprises:
extending the self-cognition of the robot;
obtaining the parameters of the life time axis; and
fitting the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis.
5. The generation method according to claim 4, characterized in that the step of extending the self-cognition of the robot specifically comprises: combining living scenes with the robot's self-recognition to form a self-cognition curve based on the life time axis.
6. The generation method according to claim 4, characterized in that the step of fitting the parameters of the robot's self-cognition to the parameters of the life time axis specifically comprises: using a probabilistic algorithm, calculating the probability of each parameter of the robot on the life time axis changing after a time-axis scenario parameter changes, to form a fitted curve.
7. The generation method according to claim 4, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis at least include the daily life behaviors the user carries out on the life time axis and the parameter values representing those behaviors.
8. The generation method according to claim 1, characterized in that the step of obtaining the location scene information specifically comprises: obtaining the location scene information through video information.
9. The generation method according to claim 1, characterized in that the step of obtaining the location scene information specifically comprises: obtaining the location scene information through picture information.
10. The generation method according to claim 1, characterized in that the step of obtaining the location scene information specifically comprises: obtaining the location scene information through gesture information.
11. A system for generating robot interactive content, characterized by comprising:
a light-sensing automatic detection module for actively waking up a robot;
an expression analysis cloud processing module for obtaining user multimodal information;
an intention recognition module for determining a user intention according to the user multimodal information;
a scene recognition module for obtaining location scene information; and
a content generating module for generating robot interactive content according to the user multimodal information, the user intention and the location scene information, in combination with a current robot life time axis.
12. The generation system according to claim 11, characterized in that the light-sensing automatic detection module is specifically configured to:
obtain user multimodal information;
match the user multimodal information against a preset wake-up parameter; and
actively wake up the robot if the user multimodal information reaches the preset wake-up parameter.
13. The generation system according to claim 11, characterized in that the system further comprises: a speech analysis cloud processing module for obtaining and analyzing a voice signal;
and the content generating module is further configured to: generate robot interactive content according to the user multimodal information, the voice signal and the user intention, in combination with the current robot life time axis.
14. The generation system according to claim 11, characterized in that the system comprises a time-axis-based artificial intelligence cloud processing module configured to:
extend the self-cognition of the robot;
obtain the parameters of the life time axis; and
fit the parameters of the robot's self-cognition to the parameters of the life time axis to generate the robot life time axis.
15. The generation system according to claim 14, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: combine living scenes with the robot's self-recognition to form a self-cognition curve based on the life time axis.
16. The generation system according to claim 14, characterized in that the time-axis-based artificial intelligence cloud processing module is further configured to: use a probabilistic algorithm to calculate the probability of each parameter of the robot on the life time axis changing after a time-axis scenario parameter changes, forming a fitted curve.
17. The generation system according to claim 14, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis at least include the daily life behaviors the user carries out on the life time axis and the parameter values representing those behaviors.
18. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to obtain the location scene information through video information.
19. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to obtain the location scene information through picture information.
20. The generation system according to claim 11, characterized in that the scene recognition module is specifically configured to obtain the location scene information through gesture information.
21. A robot, characterized by comprising a system for generating robot interactive content according to any one of claims 11 to 20.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/087740 WO2018000261A1 (en) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interaction content, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106537293A true CN106537293A (en) | 2017-03-22 |
Family
ID=58335931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680001750.8A Pending CN106537293A (en) | 2016-06-29 | 2016-06-29 | Method and system for generating robot interactive content, and robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106537293A (en) |
WO (1) | WO2018000261A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108363492A (en) * | 2018-03-09 | 2018-08-03 | 南京阿凡达机器人科技有限公司 | A kind of man-machine interaction method and interactive robot |
CN109176535A (en) * | 2018-07-16 | 2019-01-11 | 北京光年无限科技有限公司 | Exchange method and system based on intelligent robot |
CN112099630A (en) * | 2020-09-11 | 2020-12-18 | 济南大学 | Man-machine interaction method for reverse active fusion of multi-mode intentions |
KR20220011078A (en) * | 2020-07-20 | 2022-01-27 | 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. | Active interaction method, device, electronic equipment and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080195566A1 (en) * | 2007-02-08 | 2008-08-14 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing behavior of software robot |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105409197A (en) * | 2013-03-15 | 2016-03-16 | 趣普科技公司 | Apparatus and methods for providing persistent companion device |
CN105345818B (en) * | 2015-11-04 | 2018-02-09 | 深圳好未来智能科技有限公司 | Band is in a bad mood and the 3D video interactives robot of expression module |
CN105490918A (en) * | 2015-11-20 | 2016-04-13 | 深圳狗尾草智能科技有限公司 | System and method for enabling robot to interact with master initiatively |
-
2016
- 2016-06-29 WO PCT/CN2016/087740 patent/WO2018000261A1/en active Application Filing
- 2016-06-29 CN CN201680001750.8A patent/CN106537293A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080195566A1 (en) * | 2007-02-08 | 2008-08-14 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing behavior of software robot |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108363492A (en) * | 2018-03-09 | 2018-08-03 | 南京阿凡达机器人科技有限公司 | A kind of man-machine interaction method and interactive robot |
CN108363492B (en) * | 2018-03-09 | 2021-06-25 | 南京阿凡达机器人科技有限公司 | Man-machine interaction method and interaction robot |
CN109176535A (en) * | 2018-07-16 | 2019-01-11 | 北京光年无限科技有限公司 | Exchange method and system based on intelligent robot |
CN109176535B (en) * | 2018-07-16 | 2021-10-19 | 北京光年无限科技有限公司 | Interaction method and system based on intelligent robot |
KR20220011078A (en) * | 2020-07-20 | 2022-01-27 | 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. | Active interaction method, device, electronic equipment and readable storage medium |
KR102551835B1 (en) | 2020-07-20 | 2023-07-04 | 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. | Active interaction method, device, electronic equipment and readable storage medium |
US11734392B2 (en) | 2020-07-20 | 2023-08-22 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Active interaction method, electronic device and readable storage medium |
CN112099630A (en) * | 2020-09-11 | 2020-12-18 | 济南大学 | Man-machine interaction method for reverse active fusion of multi-mode intentions |
CN112099630B (en) * | 2020-09-11 | 2024-04-05 | 济南大学 | Man-machine interaction method for multi-modal intention reverse active fusion |
Also Published As
Publication number | Publication date |
---|---|
WO2018000261A1 (en) | 2018-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106537294A (en) | Method, system and robot for generating interactive content of robot | |
CN106462254A (en) | Robot interaction content generation method, system and robot | |
CN107894833B (en) | Multi-modal interaction processing method and system based on virtual human | |
CN107632706B (en) | Application data processing method and system of multi-modal virtual human | |
CN109658928A (en) | A kind of home-services robot cloud multi-modal dialog method, apparatus and system | |
CN106537293A (en) | Method and system for generating robot interactive content, and robot | |
CN106662932A (en) | Method, system and robot for recognizing and controlling household appliances based on intention | |
CN107797663A (en) | Multi-modal interaction processing method and system based on visual human | |
CN109789550A (en) | Control based on the social robot that the previous role in novel or performance describes | |
CN107944542A (en) | A kind of multi-modal interactive output method and system based on visual human | |
CN106462255A (en) | A method, system and robot for generating interactive content of robot | |
CN106463118B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
CN106462124A (en) | Method, system and robot for identifying and controlling household appliances based on intention | |
Chen et al. | Cp-robot: Cloud-assisted pillow robot for emotion sensing and interaction | |
CN106471572B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
CN111312222A (en) | Awakening and voice recognition model training method and device | |
CN106489114A (en) | A kind of generation method of robot interactive content, system and robot | |
CN106537425A (en) | Method and system for generating robot interaction content, and robot | |
CN106815321A (en) | Chat method and device based on intelligent chat robots | |
CN106774797A (en) | Robot automatic power-saving method, device and robot | |
US20230237059A1 (en) | Managing engagement methods of a digital assistant while communicating with a user of the digital assistant | |
CN106462804A (en) | Method and system for generating robot interaction content, and robot | |
Thakur et al. | A context-driven complex activity framework for smart home | |
Chen et al. | Designing an elderly virtual caregiver using dialogue agents and WebRTC | |
KR101330268B1 (en) | Method for building emotional-speech recognition model by using neuro-fuzzy network with a weighted fuzzy membership function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170322 |