CN109284811A - Human-computer interaction method and apparatus for an intelligent robot - Google Patents
- Publication number
- CN109284811A (application CN201811014450.0A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/008—Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
Abstract
A human-computer interaction method for an intelligent robot, comprising: step 1, obtaining multimodal interaction information input by a user; step 2, performing memory parsing on the multimodal interaction information and, depending on whether a memory parsing result can be obtained, invoking different interaction models to generate and output corresponding multimodal feedback information. Compared with existing human-computer interaction methods, this method can effectively identify the user attribute information contained in the multimodal interaction information input by the user during interaction, and can also reflect that attribute information in the feedback it outputs. The interaction process therefore conforms more closely to human interaction habits, improving the interaction experience.
Description
Technical field
The present invention relates to the field of robotics, and in particular to a human-computer interaction method and apparatus for an intelligent robot; it also provides a human-computer interaction system for an intelligent robot and a children's smart device.
Background technique
With the development of society, robots are not only widely used in industry, medicine, agriculture, and the military, but have also gradually begun to take part in human social life. Robots used in social settings are deployed at event venues or in the home; at event venues in particular, an interacting robot tends to attract the attention and interest of a crowd.
However, during interaction with an intelligent robot, users often reveal information about themselves, yet existing intelligent robots cannot parse and extract the key entity information related to the user, so the interaction process fails to meet the user's personalized needs.
Summary of the invention
To solve the above problems, the present invention provides a human-computer interaction method for an intelligent robot, the method comprising:
Step 1: obtaining multimodal interaction information input by a user;
Step 2: performing memory parsing on the multimodal interaction information and, depending on whether a memory parsing result can be obtained, invoking different interaction models to generate and output corresponding multimodal feedback information.
According to one embodiment of the present invention, in step 2, if a memory parsing result can be obtained, a memory interaction model is invoked to generate the corresponding multimodal feedback information; in the memory interaction model, the user graph corresponding to the user is used, together with the memory parsing result, to generate the corresponding multimodal feedback information.
According to one embodiment of the present invention, in step 2:
the user graph corresponding to the user is obtained;
the memory parsing result is compared with the user graph, the user graph is updated according to the comparison result, and the corresponding multimodal feedback information is generated from the updated user graph.
According to one embodiment of the present invention, the step of performing memory parsing on the multimodal interaction information comprises:
judging whether context information exists for the multimodal interaction information;
if it exists, performing rule parsing and context parsing on the multimodal interaction information to obtain a rule parsing result and a context parsing result, and integrating the rule parsing result and the context parsing result to obtain the memory parsing result.
According to one embodiment of the present invention, if no context information exists, rule parsing is performed on the multimodal interaction information to obtain a rule parsing result, and the memory parsing result is then obtained from the rule parsing result.
According to one embodiment of the present invention, when memory parsing is performed on the multimodal interaction information, algorithm parsing is also performed on it, wherein:
if context information exists for the multimodal interaction information, rule parsing, algorithm parsing, and context parsing are performed to obtain a rule parsing result, an algorithm parsing result, and a context parsing result, and the three results are integrated to obtain the memory parsing result;
if no context information exists for the multimodal interaction information, rule parsing and algorithm parsing are performed to obtain a rule parsing result and an algorithm parsing result, and the two results are integrated to obtain the memory parsing result.
According to one embodiment of the present invention, the priority of rule parsing is higher than the priorities of algorithm parsing and context parsing.
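As a rough sketch, the two-step method above might be organized as follows. This is only an illustrative interpretation, not the patent's implementation; the names `memory_parse` and `interact` and the toy name-extraction rule are assumptions made for the example.

```python
def memory_parse(text: str) -> dict:
    # Toy stand-in for step 2's memory parsing: extract one user
    # attribute from a fixed pattern.  A real system would combine
    # rule, algorithm, and context parsing as in the embodiments above.
    if "my name is" in text:
        return {"name": text.split("my name is")[-1].strip()}
    return {}

def interact(text: str) -> str:
    memory_result = memory_parse(text)   # step 2: memory parsing
    if memory_result:                    # result obtained -> memory interaction model
        return f"Nice to meet you, {memory_result['name']}!"
    return "Hello!"                      # no result -> a different interaction model
```

The key structural point is the branch on whether a memory parsing result was obtained, which is what selects between interaction models.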
The present invention also provides a human-computer interaction system for an intelligent robot, the system being configured with an execution program used to implement the human-computer interaction method of any of the above items.
The present invention also provides a human-computer interaction apparatus for an intelligent robot, the apparatus comprising:
an interaction information acquisition module, configured to obtain multimodal interaction information input by a user;
a feedback information generation module, configured to perform memory parsing on the multimodal interaction information and, depending on whether a memory parsing result can be obtained, invoke different interaction models to generate and output corresponding multimodal feedback information.
According to one embodiment of the present invention, if a memory parsing result can be obtained, the feedback information generation module is configured to invoke a memory interaction model to generate the corresponding multimodal feedback information; in the memory interaction model, the user graph corresponding to the user is used, together with the memory parsing result, to generate the corresponding multimodal feedback information.
According to one embodiment of the present invention, the feedback information generation module is configured to:
obtain the user graph corresponding to the user;
compare the memory parsing result with the user graph, update the user graph according to the comparison result, and generate the corresponding multimodal feedback information from the updated user graph.
According to one embodiment of the present invention, the feedback information generation module is configured to perform memory parsing on the multimodal interaction information using the following steps:
judging whether context information exists for the multimodal interaction information;
if it exists, performing rule parsing and context parsing on the multimodal interaction information to obtain a rule parsing result and a context parsing result, and integrating the two results to obtain the memory parsing result.
According to one embodiment of the present invention, if no context information exists, the feedback information generation module is configured to perform rule parsing on the multimodal interaction information to obtain a rule parsing result, and then obtain the memory parsing result from the rule parsing result.
According to one embodiment of the present invention, when performing memory parsing on the multimodal interaction information, the feedback information generation module is configured to also perform algorithm parsing on it, wherein:
if context information exists for the multimodal interaction information, the feedback information generation module performs rule parsing, algorithm parsing, and context parsing to obtain a rule parsing result, an algorithm parsing result, and a context parsing result, and integrates the three results to obtain the memory parsing result;
if no context information exists, the feedback information generation module performs rule parsing and algorithm parsing to obtain a rule parsing result and an algorithm parsing result, and integrates the two results to obtain the memory parsing result.
The present invention also provides a children's smart device, the device comprising the human-computer interaction apparatus of any of the above items.
The human-computer interaction method and system for an intelligent robot provided by the present invention perform memory parsing on the multimodal interaction information input by the user to obtain a user portrait (i.e., information related to the user's own attributes), and combine the user portrait with a preset knowledge graph to generate the corresponding feedback information. Compared with existing human-computer interaction methods, this method can effectively identify the user attribute information contained in the multimodal interaction information input by the user during interaction, and can also reflect that attribute information in the feedback it outputs, so the interaction process conforms more closely to human interaction habits and improves the interaction experience.
At the same time, because the method generates multimodal feedback information with different interaction models depending on the memory parsing result of the multimodal interaction information, the feedback it generates is more diverse and personalized than that of existing human-computer interaction methods, which further increases the enjoyment of the interaction and improves the user experience.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained through the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
In order to explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below:
Fig. 1 is a schematic flowchart of a human-computer interaction method for an intelligent robot according to an embodiment of the invention;
Fig. 2 is a schematic flowchart of performing memory parsing on multimodal interaction information according to an embodiment of the invention;
Fig. 3 is a schematic flowchart of generating multimodal feedback information according to an embodiment of the invention;
Fig. 4 is a schematic diagram of a human-computer interaction scenario of an intelligent robot according to an embodiment of the invention;
Fig. 5 is a schematic structural diagram of a human-computer interaction apparatus for an intelligent robot according to an embodiment of the invention.
Detailed description of the embodiments
Hereinafter, the embodiments of the present invention are described in detail with reference to the drawings and examples, so that how the invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented. It should be noted that, as long as no conflict arises, the embodiments of the invention and the features within them may be combined with one another, and the resulting technical solutions all fall within the scope of the invention.
Meanwhile in the following description, for illustrative purposes and numerous specific details are set forth, to provide to of the invention real
Apply the thorough understanding of example.It will be apparent, however, to one skilled in the art, that the present invention can not have to tool here
Body details or described ad hoc fashion are implemented.
In addition, the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
Users usually reveal information about themselves while interacting with an intelligent robot, yet existing human-computer interaction systems cannot parse and extract the key entity information useful for the interaction. Moreover, even for extracted user attribute information, existing systems cannot make effective use of it during interaction, or can only give simple replies when the user explicitly asks the system about their own information; they lack replies that embody reasoning logic and proactive interaction related to that information.
To address the above problems in the prior art, the present invention provides a human-computer interaction method and system for an intelligent robot that can make effective use of user information and thereby improve the interaction experience.
Fig. 1 shows a schematic flowchart of the human-computer interaction method for an intelligent robot provided by this embodiment.
As shown in Fig. 1, the method provided by this embodiment first obtains the multimodal interaction information input by the user in step S101. In this embodiment, the multimodal interaction information obtained in step S101 preferably includes voice information input by the user. Of course, in other embodiments of the invention, depending on the actual situation, the multimodal interaction information may also include other appropriate information; the invention is not limited in this respect. For example, in one embodiment, the multimodal interaction information obtained in step S101 may also include text information input by the user through a device such as a keyboard.
After obtaining the multimodal interaction information input by the user, the method performs memory parsing on it in step S102 and judges in step S103 whether a memory parsing result can be obtained. In this embodiment, the memory parsing result is preferably attribute information related to the user, for example, multidimensional information such as the user's name, age, gender, zodiac sign, and birthday.
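A memory parsing result of this kind can be represented as a simple attribute record. The field names below are illustrative assumptions drawn from the dimensions the embodiment mentions, not a structure prescribed by the patent:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MemoryParsingResult:
    """Multidimensional user attribute information extracted in step S102."""
    name: Optional[str] = None
    age: Optional[int] = None
    gender: Optional[str] = None
    zodiac_sign: Optional[str] = None
    birthday: Optional[str] = None

    def is_empty(self) -> bool:
        # Step S103: a result counts as "obtained" only if some field was filled.
        return all(v is None for v in asdict(self).values())
```

The `is_empty` check corresponds to the judgment in step S103 that decides which interaction model is invoked next.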
Depending on whether a memory parsing result can be obtained, the method invokes different interaction models to generate the corresponding multimodal feedback information. If a memory parsing result can be obtained, the method preferably invokes a memory interaction model in step S104 to generate the corresponding multimodal feedback information.
Specifically, as shown in Fig. 2, in this embodiment the method judges in step S201 whether context information exists for the multimodal interaction information obtained in step S101. If such context information exists, the method performs rule parsing, algorithm parsing, and context parsing on the multimodal interaction information in step S202, correspondingly obtaining a rule parsing result, an algorithm parsing result, and a context parsing result. By parsing the multimodal interaction information, the method can accurately obtain information such as the structure, semantics, and topic of the sentences it contains.
After obtaining the rule parsing result, the algorithm parsing result, and the context parsing result, the method integrates them in step S203 to obtain the memory parsing result.
If no context information exists for the multimodal interaction information, in this embodiment the method performs rule parsing and algorithm parsing on it in step S204, obtaining a rule parsing result and an algorithm parsing result. The method then integrates these two results in step S205 to obtain the memory parsing result.
In this embodiment, rule parsing, algorithm parsing, and context parsing preferably have specific priorities. For example, the priority of rule parsing is higher than that of algorithm parsing, and the priority of algorithm parsing is in turn higher than that of context parsing.
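The branch structure of steps S201 to S205 and the priority-based integration might be sketched as follows. This is one illustrative reading of "priority", in which a higher-priority parser's value wins when two parsers disagree on the same attribute; the patent does not fix the integration rule, so `integrate` and `PRIORITY` are assumptions.

```python
from typing import Optional

PRIORITY = {"rule": 3, "algorithm": 2, "context": 1}  # rule parsing is highest

def integrate(partial_results: dict) -> dict:
    """Merge per-parser attribute dicts; on conflict the higher-priority parser wins."""
    merged = {}
    for parser in sorted(partial_results, key=PRIORITY.get):  # low to high priority
        merged.update(partial_results[parser])                # later update overwrites
    return merged

def memory_parse(rule_res: dict, algo_res: dict,
                 context_res: Optional[dict] = None) -> dict:
    if context_res is not None:       # S201 yes -> S202/S203: all three parsers
        return integrate({"rule": rule_res, "algorithm": algo_res,
                          "context": context_res})
    # S201 no -> S204/S205: rule and algorithm parsing only
    return integrate({"rule": rule_res, "algorithm": algo_res})
```

Reordering `PRIORITY` changes which parser prevails, which matches the later remark that other embodiments may configure the priorities differently.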
Of course, in other embodiments of the invention, the priorities of the above three parsing methods may be configured to other reasonable values as needed, or the three parsing methods may be performed in parallel; the invention is not limited in this respect.
It should also be noted that in other embodiments of the invention, depending on actual needs, the method may omit the algorithm parsing process, performing rule parsing and context parsing when context information exists for the multimodal interaction information, and only rule parsing when it does not; the invention is likewise not limited in this respect.
In this embodiment, if a memory parsing result can be obtained, the method preferably invokes a memory interaction model to generate the corresponding multimodal feedback information. In the memory interaction model, the method uses the user graph corresponding to the current user to generate the feedback from the obtained memory parsing result. If no memory parsing result can be obtained, in this embodiment the method generates the multimodal feedback information with other logic.
Fig. 3 shows a schematic flowchart of invoking the memory interaction model to generate multimodal feedback information in this embodiment.
As shown in Fig. 3, in this embodiment, after obtaining the memory parsing result, the method obtains the user graph corresponding to the user in step S301. The user graph stores multidimensional information related to the user's own attributes (such as the user's name, age, gender, zodiac sign, and birthday) and is preferably stored in advance in a data storage chip.
Then, in step S302, the method compares the obtained memory parsing result with the user graph, and in step S303 updates the user graph obtained in step S301 according to the comparison result. Specifically, in this embodiment, the method judges in step S302 whether the memory parsing result is consistent with the corresponding parameters in the user graph.
If the user graph does not contain the user attribute information contained in the memory parsing result, the method preferably adds the memory parsing result to the user graph in step S303, thereby updating the knowledge graph.
If the user graph already contains the user attribute information contained in the memory parsing result and the two values are identical, the method performs no additional operation on the user graph. If the user graph contains that attribute but the two values differ, the method generates and outputs a corresponding confirmation prompt to ask the user whether the user graph needs to be updated.
If the user confirms that the user graph needs to be updated, the method updates it with the memory parsing result; if the user confirms that no update is needed, the method performs no additional operation on the user graph, i.e., it keeps the user graph in its existing state.
Of course, in other embodiments of the invention, the method may also update the user graph from the memory parsing result in other reasonable ways, depending on the actual situation; the invention is not limited in this respect.
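The three-way comparison of steps S302/S303 can be sketched as follows. The `confirm` callback standing in for the confirmation prompt shown to the user is an illustrative assumption:

```python
def update_user_graph(graph: dict, memory_result: dict, confirm) -> dict:
    """Update the user graph from a memory parsing result (steps S302/S303).

    - attribute absent from the graph -> add it
    - attribute present, same value   -> no operation
    - attribute present, differs      -> ask the user before overwriting
    """
    for attr, value in memory_result.items():
        if attr not in graph:
            graph[attr] = value                     # case 1: add new attribute
        elif graph[attr] == value:
            pass                                    # case 2: already consistent
        elif confirm(f"Update {attr} from {graph[attr]!r} to {value!r}?"):
            graph[attr] = value                     # case 3a: user confirmed update
        # case 3b: user declined -> keep the graph in its existing state
    return graph
```

`confirm` would in practice route through the robot's multimodal output (e.g. a spoken question) rather than a function call.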
After the update of the user graph is completed, as shown in Fig. 3, in this embodiment the method generates the corresponding multimodal feedback information from the updated user graph in step S304. In this way, during interaction with the user, if the user mentions information related to their own attributes, the system generates and outputs, through the knowledge graph, a reply that conforms to human logic.
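Step S304 might then draw on the updated graph when wording a reply; the template below is only one illustrative possibility, not the patent's generation logic:

```python
def generate_feedback(user_graph: dict) -> str:
    # Toy step S304: personalize the reply with attributes stored in the
    # user graph, so remembered information shows up in the output feedback.
    name = user_graph.get("name", "friend")
    if "birthday" in user_graph:
        return f"Got it, {name} - I'll remember your birthday is {user_graph['birthday']}."
    return f"Got it, {name}!"
```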
From the foregoing description it can be seen that the human-computer interaction method for an intelligent robot provided by the present invention performs memory parsing on the multimodal interaction information input by the user to obtain a user portrait (i.e., information related to the user's own attributes), and combines the user portrait with a preset knowledge graph to generate the corresponding feedback information. Compared with existing human-computer interaction methods, this method can effectively identify the user attribute information contained in the multimodal interaction information input during interaction and reflect it in the output feedback, so the interaction process conforms more closely to human interaction habits and improves the interaction experience.
At the same time, because the method generates multimodal feedback information with different interaction models depending on the memory parsing result, the feedback it generates is more diverse and personalized than that of existing human-computer interaction methods, which further increases the enjoyment of the interaction and improves the user experience.
As shown in Fig. 4, the human-computer interaction method provided by the present invention is preferably deployed in an intelligent robot and can be executed by the robot operating system built into it. When the built-in operating system implements the method provided by the invention, the user 400 can input corresponding voice interaction information to the intelligent robot 401 according to their own habits, and the intelligent robot 401 can then generate reasonable feedback based on the knowledge graph and the dialogue information input by the user, realizing a more anthropomorphic human-computer dialogue process.
It should be pointed out that in different embodiments of the invention, the intelligent robot 401 may be a system with human-computer dialogue capability in various forms. For example, in one embodiment, the intelligent robot 401 may be a humanoid robot equipped with an intelligent operating system, while in another embodiment it may be a specific piece of software or application capable of executing the interaction method provided by the invention.
The present invention also provides a human-computer interaction system configured with an execution program that implements the human-computer interaction method described above during execution.
Meanwhile the present invention also provides a kind of human-computer interaction device towards intelligent robot, Fig. 5 shows the present embodiment
In the human-computer interaction device structural schematic diagram.As shown in figure 5, the man-machine friendship towards intelligent robot provided by the present embodiment
Mutual device preferably includes interactive information and obtains module 501 and feedback information generation module 502.Wherein, interactive information obtains mould
Block 501 is used to obtain the multi-modal interactive information of user's input.
In this embodiment, the interaction information acquisition module 501 preferably includes a voice acquisition device through which the voice interaction information input by the user can be obtained. Of course, in other embodiments of the invention, the interaction information acquisition module 501 may also include, or be implemented with, other reasonable devices; the invention is not limited in this respect. For example, in one embodiment, the interaction information acquisition module 501 may also include a keyboard (such as a virtual or physical keyboard) through which it can obtain the text information input by the user.
The interaction information acquisition module 501 transmits the acquired multimodal input information to the feedback information generation module 502 connected to it, so that the feedback information generation module 502 parses the multimodal interaction information and, depending on whether a memory parsing result can be obtained, invokes different interaction models to generate and output the corresponding multimodal feedback information.
Specifically, in this embodiment, if a memory parsing result can be obtained, the feedback information generation module 502 is configured to invoke the memory interaction model to generate the corresponding multimodal feedback information; in the memory interaction model, the user graph corresponding to the user is used, together with the memory parsing result, to generate the feedback.
In this embodiment, the principle and process by which the feedback information generation module 502 realizes its functions are similar to what is described above for steps S102 to S104 of Fig. 1, so the details of the feedback information generation module 502 are not repeated here.
In addition, the present invention also provides a children's smart device containing the human-computer interaction apparatus described above. For a children's smart device, the memory may store a knowledge graph corresponding to the child user, and this knowledge graph may include multidimensional information such as the child's age, level of education, interests, and favorite cartoons. In this way the device can interact better with the child user and meet the child's personalized interaction needs.
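Such a child-oriented user graph might take the following shape; the dimensions are those the embodiment names, while the concrete values and the greeting function are purely illustrative:

```python
# Illustrative user graph for a child user, as stored in the device's memory.
child_user_graph = {
    "age": 6,
    "education_level": "kindergarten",
    "interests": ["drawing", "dinosaurs"],
    "favorite_cartoon": "Peppa Pig",   # example value, not from the patent
}

def personalize_greeting(graph: dict) -> str:
    # Use a remembered attribute so the reply reflects the stored graph.
    cartoon = graph.get("favorite_cartoon")
    return f"Want to hear a story about {cartoon}?" if cartoon else "Want to hear a story?"
```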
It should be noted that the smart device may be, without limitation: a humanoid intelligent robot, a children's intelligent robot, a children's story machine, a tablet, a smartphone, a children's picture-book reader, and the like.
It should be understood that the disclosed embodiments of the invention are not limited to the specific structures or processing steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is only for describing specific embodiments and is not intended to be limiting.
" one embodiment " or " embodiment " mentioned in specification means the special characteristic described in conjunction with the embodiments, structure
Or characteristic is included at least one embodiment of the present invention.Therefore, the phrase " reality that specification various places throughout occurs
Apply example " or " embodiment " the same embodiment might not be referred both to.
Although the above examples are used to illustrate the principle of the invention in one or more applications, it will be obvious to those skilled in the art that various modifications in form, detail of usage, and implementation may be made without creative effort and without departing from the principles and ideas of the invention. Therefore, the invention is defined by the appended claims.
Claims (15)
1. A human-computer interaction method for an intelligent robot, characterized in that the method comprises:
step 1: acquiring multi-modal interaction information input by a user;
step 2: performing memory parsing on the multi-modal interaction information, and, depending on whether a memory parsing result can be obtained, invoking different interaction models to generate and output corresponding multi-modal feedback information.
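The two claimed steps can be sketched in Python as follows. This is a minimal sketch of the branching logic only; the parsing rule and both model functions are hypothetical stand-ins, not the patent's implementation.

```python
def memory_parse(info: dict):
    # Hypothetical stand-in for memory parsing: succeed only when the
    # input mentions something worth remembering about the user.
    text = info.get("text", "")
    return text if "my favorite" in text else None

def memory_interaction_model(memory_result: str) -> str:
    # Invoked when a memory parsing result was obtained.
    return f"I remember: {memory_result}"

def general_interaction_model(info: dict) -> str:
    # Fallback model when no memory parsing result exists.
    return f"You said: {info.get('text', '')}"

def interact(multimodal_input: dict) -> str:
    """Step 2 of the claimed method: memory-parse the input, then invoke
    a different interaction model depending on whether a memory parsing
    result could be obtained (step 1, acquiring the input, is assumed
    done by the caller)."""
    memory_result = memory_parse(multimodal_input)
    if memory_result is not None:
        return memory_interaction_model(memory_result)
    return general_interaction_model(multimodal_input)
```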
2. The method according to claim 1, characterized in that, in step 2, if a memory parsing result can be obtained, a memory interaction model is invoked to generate the corresponding multi-modal feedback information; in the memory interaction model, the corresponding multi-modal feedback information is generated from the memory parsing result using a user graph corresponding to the user.
3. The method according to claim 2, characterized in that, in step 2:
the user graph corresponding to the user is acquired;
the memory parsing result is compared with the user graph, the user graph is updated according to the comparison result, and the corresponding multi-modal feedback information is generated from the updated user graph.
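The compare-update-generate cycle of claim 3 can be sketched as below. The merge policy (new values overwrite stored ones) and all names are assumptions for illustration; the claim does not fix a concrete update rule.

```python
def update_user_graph(user_graph: dict, memory_result: dict) -> dict:
    """Compare the memory parsing result against the stored user graph
    and merge any new or changed attributes (the claimed update step)."""
    for key, value in memory_result.items():
        if user_graph.get(key) != value:
            user_graph[key] = value
    return user_graph

def generate_feedback(user_graph: dict) -> str:
    """Generate multi-modal feedback from the updated graph (here
    reduced to a text reply for brevity)."""
    hobby = user_graph.get("hobby", "chatting")
    return f"Since you like {hobby}, let's talk about it!"
```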
4. The method according to any one of claims 1 to 3, characterized in that the step of performing memory parsing on the multi-modal interaction information comprises:
judging whether contextual information for the multi-modal interaction information exists;
if it exists, performing rule parsing and context parsing on the multi-modal interaction information to obtain a rule parsing result and a context parsing result respectively, and obtaining the memory parsing result by integrating the rule parsing result and the context parsing result.
5. The method according to claim 4, characterized in that, if no contextual information exists, rule parsing is performed on the multi-modal interaction information to obtain a rule parsing result, and the memory parsing result is then obtained from the rule parsing result.
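Claims 4 and 5 together describe a two-branch memory parsing flow: integrate rule and context parsing when context exists, fall back to rule parsing alone otherwise. A minimal sketch, with hypothetical parsers standing in for the real ones:

```python
def rule_parse(text: str) -> dict:
    # Hypothetical rule parsing: detect a simple preference pattern.
    return {"rule": "statement"} if "I like" in text else {}

def context_parse(text: str, context: list) -> dict:
    # Hypothetical context parsing: recover the topic from recent turns.
    return {"topic": context[-1]} if context else {}

def perform_memory_parsing(text: str, context=None) -> dict:
    """Claims 4-5: with context, integrate the rule parsing result and
    the context parsing result; without context, the memory parsing
    result comes from rule parsing alone."""
    rule_result = rule_parse(text)
    if context:
        return {**context_parse(text, context), **rule_result}
    return rule_result
```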
6. The method according to claim 4 or 5, characterized in that, when performing memory parsing on the multi-modal interaction information, arithmetic parsing is also performed on the multi-modal interaction information, wherein:
if contextual information for the multi-modal interaction information exists, rule parsing, arithmetic parsing, and context parsing are performed on the multi-modal interaction information to obtain a rule parsing result, an arithmetic parsing result, and a context parsing result respectively, and the memory parsing result is obtained by integrating the rule parsing result, the arithmetic parsing result, and the context parsing result;
if no contextual information for the multi-modal interaction information exists, rule parsing and arithmetic parsing are performed on the multi-modal interaction information to obtain a rule parsing result and an arithmetic parsing result respectively, and the memory parsing result is obtained by integrating the rule parsing result and the arithmetic parsing result.
7. The method according to claim 6, characterized in that the priority of the rule parsing is higher than the priorities of the arithmetic parsing and the context parsing.
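One natural reading of the integration in claims 6 and 7 is a priority-ordered merge, where higher-priority results overwrite lower-priority ones on conflicting keys. A sketch under that assumption (the claims do not specify the merge mechanics):

```python
def integrate(rule_result: dict, arithmetic_result: dict,
              context_result: dict) -> dict:
    """Merge the three parsing results into one memory parsing result.
    Per claim 7, rule parsing outranks arithmetic and context parsing,
    so its entries are applied last and win any conflicts."""
    merged = {}
    merged.update(context_result)     # lowest priority first
    merged.update(arithmetic_result)
    merged.update(rule_result)        # highest priority applied last
    return merged
```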
8. A human-computer interaction system for an intelligent robot, characterized in that the system is configured with an executable program for implementing the human-computer interaction method according to any one of claims 1 to 7.
9. A human-computer interaction device for an intelligent robot, characterized in that the device comprises:
an interaction information acquisition module, configured to acquire multi-modal interaction information input by a user;
a feedback information generation module, configured to perform memory parsing on the multi-modal interaction information and, depending on whether a memory parsing result can be obtained, invoke different interaction models to generate and output corresponding multi-modal feedback information.
10. The device according to claim 9, characterized in that, if a memory parsing result can be obtained, the feedback information generation module is configured to invoke a memory interaction model to generate the corresponding multi-modal feedback information; in the memory interaction model, the corresponding multi-modal feedback information is generated from the memory parsing result using the user graph corresponding to the user.
11. The device according to claim 10, characterized in that the feedback information generation module is configured to:
acquire the user graph corresponding to the user;
compare the memory parsing result with the user graph, update the user graph according to the comparison result, and generate the corresponding multi-modal feedback information from the updated user graph.
12. The device according to any one of claims 9 to 11, characterized in that the feedback information generation module is configured to perform memory parsing on the multi-modal interaction information by:
judging whether contextual information for the multi-modal interaction information exists;
if it exists, performing rule parsing and context parsing on the multi-modal interaction information to obtain a rule parsing result and a context parsing result respectively, and obtaining the memory parsing result by integrating the rule parsing result and the context parsing result.
13. The device according to claim 12, characterized in that, if no contextual information exists, the feedback information generation module is configured to perform rule parsing on the multi-modal interaction information to obtain a rule parsing result, and to then obtain the memory parsing result from the rule parsing result.
14. The device according to claim 12 or 13, characterized in that, when performing memory parsing on the multi-modal interaction information, the feedback information generation module is configured to also perform arithmetic parsing on the multi-modal interaction information, wherein:
if contextual information for the multi-modal interaction information exists, the feedback information generation module performs rule parsing, arithmetic parsing, and context parsing on the multi-modal interaction information to obtain a rule parsing result, an arithmetic parsing result, and a context parsing result respectively, and obtains the memory parsing result by integrating the rule parsing result, the arithmetic parsing result, and the context parsing result;
if no contextual information for the multi-modal interaction information exists, the feedback information generation module performs rule parsing and arithmetic parsing on the multi-modal interaction information to obtain a rule parsing result and an arithmetic parsing result respectively, and obtains the memory parsing result by integrating the rule parsing result and the arithmetic parsing result.
15. A children's smart device, characterized in that the device comprises the human-computer interaction device according to any one of claims 9 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811014450.0A CN109284811B (en) | 2018-08-31 | 2018-08-31 | Intelligent robot-oriented man-machine interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109284811A true CN109284811A (en) | 2019-01-29 |
CN109284811B CN109284811B (en) | 2021-05-25 |
Family
ID=65183460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811014450.0A Active CN109284811B (en) | 2018-08-31 | 2018-08-31 | Intelligent robot-oriented man-machine interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109284811B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090099693A1 (en) * | 2007-10-16 | 2009-04-16 | Electronics And Telecommunications Research Institute | System and method for control of emotional action expression |
CN105824935A (en) * | 2016-03-18 | 2016-08-03 | 北京光年无限科技有限公司 | Method and system for information processing for question and answer robot |
CN106292423A (en) * | 2016-08-09 | 2017-01-04 | 北京光年无限科技有限公司 | Music data processing method and device for anthropomorphic robot |
CN106297789A (en) * | 2016-08-19 | 2017-01-04 | 北京光年无限科技有限公司 | The personalized interaction method of intelligent robot and interactive system |
CN106462384A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | Multi-modal based intelligent robot interaction method and intelligent robot |
CN106446141A (en) * | 2016-09-21 | 2017-02-22 | 北京光年无限科技有限公司 | Interaction data processing method for intelligent robot system and robot system |
CN106557464A (en) * | 2016-11-18 | 2017-04-05 | 北京光年无限科技有限公司 | A kind of data processing method and device for talking with interactive system |
CN107870994A (en) * | 2017-10-31 | 2018-04-03 | 北京光年无限科技有限公司 | Man-machine interaction method and system for intelligent robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||