CN105843382A - Man-machine interaction method and device - Google Patents
- Publication number
- CN105843382A (application CN201610157698.7A)
- Authority
- CN
- China
- Prior art keywords
- information
- feedback information
- interim
- interaction
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a man-machine interaction method and device. The method comprises the following steps: an input information acquisition step of acquiring multi-modal interaction information input by a user; and a data processing step of generating effective interaction information according to the multi-modal interaction information, and generating and outputting corresponding interaction feedback information according to the effective interaction information. Because the feedback-generation process can integrate different types of interaction information, the output interaction feedback information better matches the user's expectations, and the output of feedback information unsuitable for the current user can be avoided, thereby improving the user experience of the product.
Description
Technical field
The present invention relates to the field of human-computer interaction technology, and in particular to a man-machine interaction method and device.
Background art
With the development of science and technology and the introduction of information technology, computer technology and artificial intelligence, robot research has gradually moved beyond the industrial field and extended to areas such as medical treatment, health care, the home, entertainment and the service industry. At the same time, people's requirements for robots have been raised from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots, so that human-computer interaction has become a key factor in the development of intelligent robots.
In traditional human-computer interaction, users mainly interact with devices such as computers and mobile phones through a mouse, a keyboard and a touch screen. For an intelligent robot, however, retaining this interaction mode would make human-computer interaction extremely inefficient and ineffective.
Summary of the invention
To solve the above problems, the invention provides a man-machine interaction method, the method comprising:
an input information acquisition step of obtaining the multi-modal interaction information input by a user;
a data processing step of generating effective interaction information according to the multi-modal interaction information, and generating and outputting corresponding interaction feedback information according to the effective interaction information.
According to one embodiment of the invention, the effective interaction information includes user state information; in the data processing step, the user state information is determined according to the image information, voice information and action information in the multi-modal interaction information, and a dialog model is adjusted according to the user state information.
According to one embodiment of the invention, the user state information includes the age and/or gender of the user; in the data processing step, a dialog model matching the user state information is selected from a preset dialog model set, and corresponding interaction feedback information is generated according to the selected dialog model.
According to one embodiment of the invention, in the data processing step:
a preset chit-chat interaction model is used to generate first interim feedback information;
a preset user-defined knowledge base is used to generate second interim feedback information;
a preset question-answering interaction model is used to generate third interim feedback information;
the interaction feedback information is generated according to the first interim feedback information, the second interim feedback information and the third interim feedback information.
According to one embodiment of the invention, in the data processing step, the first interim feedback information, the second interim feedback information and the third interim feedback information are ranked, valid interim feedback information is determined according to the ranking result, and the interaction feedback information is generated according to the valid interim feedback information.
The invention also provides a human-computer interaction device, the device comprising:
an input information acquisition module for obtaining the multi-modal interaction information input by a user;
a data processing module for generating effective interaction information according to the multi-modal interaction information, and generating and outputting corresponding interaction feedback information according to the effective interaction information.
According to one embodiment of the invention, the effective interaction information includes user state information, and the data processing module is configured to determine the user state information according to the image information, voice information and action information in the multi-modal interaction information, and to adjust a dialog model according to the user state information.
According to one embodiment of the invention, the user state information includes the age and/or gender of the user, and the data processing module is configured to select from a preset dialog model set a dialog model matching the user state information, and to generate corresponding interaction feedback information according to the selected dialog model.
According to one embodiment of the invention, the data processing module is configured to use a preset chit-chat interaction model to generate first interim feedback information, use a preset user-defined knowledge base to generate second interim feedback information, and use a preset question-answering interaction model to generate third interim feedback information, and subsequently to generate the interaction feedback information according to the first interim feedback information, the second interim feedback information and the third interim feedback information.
According to one embodiment of the invention, the data processing module is configured to rank the first interim feedback information, the second interim feedback information and the third interim feedback information, to determine valid interim feedback information according to the ranking result, and to generate the interaction feedback information according to the valid interim feedback information.
The man-machine interaction method and device provided by the invention can adjust the dialog model using the acquired image information, and then use the adjusted dialog model to generate and output interaction feedback information that better suits the current user. This not only makes the output interaction feedback information better match the user's expectations, but also avoids outputting feedback information unsuitable for the current user (for example, outputting voice information containing profanity or image information containing violent content to a child user).
The interaction feedback information output by the man-machine interaction method provided by the invention can combine different dialog models, which also makes the finally output interaction feedback information better match the user's interaction habits and interaction expectations.
Other features and advantages of the invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below:
Fig. 1 is a flowchart of the man-machine interaction method according to an embodiment of the invention;
Fig. 2 is a detailed flowchart of the data processing procedure according to an embodiment of the invention;
Fig. 3 is a detailed flowchart of generating interaction feedback information according to an embodiment of the invention;
Fig. 4 is a structural schematic diagram of the human-computer interaction device according to an embodiment of the invention.
Detailed description of the invention
Embodiments of the invention are described in detail below with reference to the drawings and examples, so that how the invention applies technical means to solve technical problems, and the process by which the technical effects are achieved, can be fully understood and implemented accordingly. It should be noted that, as long as no conflict arises, the embodiments of the invention and the features of the embodiments can be combined with each other, and the resulting technical solutions all fall within the protection scope of the invention.
Meanwhile, in the following description, many specific details are set forth for illustrative purposes to provide a thorough understanding of the embodiments of the invention. It will be apparent to those skilled in the art, however, that the invention can be implemented without the specific details here, or in manners other than the specific ones described.
In addition, the steps shown in the flowcharts of the drawings can be performed in a computer system such as a set of computer-executable instructions and, although a logical order is shown in the flowcharts, in some cases the steps shown or described can be performed in an order different from that described herein.
In a traditional interaction process, the user mainly interacts with devices such as a PC or a mobile phone through a mouse, keyboard or touch screen. For human-robot interaction, however, retaining the traditional interaction mode inevitably results in poor interaction. For this reason, the invention provides a new man-machine interaction method, which can realize multi-modal input of voice, visual and/or tactile information in the interaction process, and can also realize multi-modal interaction output in the form of actions, expressions and/or speech.
Fig. 1 shows the flow chart of man-machine interaction method provided by the present invention.
As shown in Fig. 1, the method provided by the invention first obtains the multi-modal interaction information of the user in step S101. It should be pointed out that the multi-modal interaction information the method can obtain in step S101 may, in different embodiments, comprise different forms of information according to actual needs and the actual functions of the robot; the specific form and quantity of the obtainable information are not further limited here.
For example, in one embodiment of the invention, the multi-modal interaction information the method can obtain in step S101 preferably includes: image information (i.e. visual information), voice information (i.e. auditory information) and tactile information. It should be pointed out that, in actual operation, the multi-modal interaction information obtained by the method in step S101 may contain only one or several of the items listed above (for example, only image information is obtained while the other information is empty); the invention is not limited in this respect.
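As a hypothetical illustration (the patent does not prescribe any data layout), the multi-modal input of step S101 might be held in a container like the following, where absent modalities are simply left empty, as the passage above allows:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the multi-modal input of step S101.
# Field names are illustrative assumptions, not the patent's implementation.
@dataclass
class MultiModalInput:
    image: Optional[bytes] = None  # visual information
    voice: Optional[bytes] = None  # auditory information
    touch: Optional[dict] = None   # tactile event, e.g. {"position": "head"}

    def present_modalities(self):
        """Return the names of the modalities actually supplied."""
        return [name for name, value in
                [("image", self.image), ("voice", self.voice), ("touch", self.touch)]
                if value is not None]

# An input may carry only some modalities, as the text notes:
sample = MultiModalInput(image=b"\x89PNG...")
```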
After the multi-modal interaction information input by the user has been obtained, the method generates effective interaction information from the multi-modal interaction information in step S102, and in step S103 generates and outputs corresponding interaction feedback information according to the effective interaction information generated in step S102.
In one embodiment of the invention, the effective interaction information can comprise user state information. As shown in Fig. 2, the method can determine the user state information from the image information, voice information and action information in the multi-modal interaction information in step S201, subsequently adjust the dialog model according to the user state information in step S202, and then generate and output, based on the adjusted dialog model, interaction feedback information matching the user state.
Specifically, in one embodiment of the invention, the method performs image processing on the image information in the multi-modal interaction information in step S201 and determines that the age information contained in the user state information is below a preset age threshold (for example, 15 years), from which it can be determined that the current user is a child. The method therefore adjusts the dialog model in step S202: by selecting from the preset dialog model set the dialog model matching the user state information, the dialog model is switched to one biased towards child scenarios, so that the interaction feedback information generated and output based on this dialog model better matches a child's conversational habits.
Likewise, if the user state information identified from the image information indicates that the current user is an adult, the method adjusts the dialog model in step S202 to one biased towards adult scenarios, so that the interaction feedback information generated and output based on this dialog model better matches an adult's conversational habits.
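The adjustment of step S202 described above can be sketched as follows. The 15-year threshold and the child/adult model names follow the examples in the text; the selection logic itself is an illustrative assumption, not the patent's implementation:

```python
# Hypothetical sketch of step S202: choose a dialog model from the preset
# set according to the user state determined in step S201.
AGE_THRESHOLD = 15  # preset age threshold from the example above

DIALOG_MODELS = {
    "child": "child-scenario dialog model",
    "adult": "adult-scenario dialog model",
}

def select_dialog_model(user_state: dict) -> str:
    """Pick a dialog model matching the user state information."""
    age = user_state.get("age")
    if age is not None and age < AGE_THRESHOLD:
        return DIALOG_MODELS["child"]
    return DIALOG_MODELS["adult"]
```

The same selector could be fed with an age estimated from either image processing or voice features, since both paths produce the same kind of user state information.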
As for the voice information, the method can determine the user state information (for example the user's age and/or gender) by extracting relevant feature values from the voice information (such as intonation and frequency), and thereby adjust the dialog model according to the user state information. Since the adjustment principle and process of the dialog model are similar to the above adjustment based on image information, they are not repeated here.
It can thus be seen that the method provided by the invention can adjust the dialog model using the acquired image information, and then use the adjusted dialog model to generate and output interaction feedback information that better suits the current user. This not only makes the output feedback better match the user's expectations, but also avoids outputting feedback information unsuitable for the current user (for example, outputting voice information containing profanity or image information containing violent content to a child user).
In other embodiments of the invention, the user state information generated in step S201 can also represent a certain item of event information about the user state.
For example, in one embodiment of the invention, the method can determine from the acquired image information in step S201 that someone is approaching within the robot's field of view; by analyzing the image further, it can also determine that this person is someone the robot "recognizes" (i.e. relevant information about this person is stored in the robot's memory) and has not been met for a long time. The method can then adjust the dialog model in step S202 and, through the adjusted dialog model, generate and output voice feedback information such as "Mr. *, long time no see, how have you been recently?", or perform action feedback such as a welcoming gesture.
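A minimal sketch of this "recognized person, long absence" event, assuming a memory keyed by person identifier and a 30-day absence threshold (both assumptions for illustration; the patent specifies neither):

```python
import datetime

# Hypothetical sketch: look the person up in the robot's memory and decide
# which greeting, if any, the dialog model should produce.
LONG_ABSENCE = datetime.timedelta(days=30)  # assumed threshold

def greeting_for(person_id: str, memory: dict, now: datetime.datetime):
    """Return a greeting if the person is stored in memory; None for strangers."""
    record = memory.get(person_id)
    if record is None:
        return None  # not "recognized": no personalised greeting
    if now - record["last_seen"] >= LONG_ABSENCE:
        return f"Mr. {record['name']}, long time no see, how have you been recently?"
    return f"Hello, {record['name']}."
```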
For example, in yet another embodiment of the invention, if the user pats the robot's head, the method can obtain an event description of this event in step S102 by processing the tactile information (for example expressed as {"description": "beaten", "weight_type": "heavy", "position": "head"}). By processing the event description, the event is subsequently converted into input text that the robot's dialog system can accept (for example, it is processed into "My head was hit again"), the dialog model is then adjusted according to this text, and interaction feedback information matching this event is generated by further processing with the adjusted dialog model (for example, voice information such as "Why did you hit my head?" is generated and output).
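The conversion from event description to dialog-system input text can be sketched as below. The dictionary keys follow the example in the text, while the phrasing rules are illustrative assumptions:

```python
# Hypothetical sketch: turn the tactile event description produced in
# step S102 into input text the dialog system can accept.
def event_to_text(event: dict) -> str:
    """Convert an event description dict into dialog-system input text."""
    verbs = {"heavy": "was hit", "light": "was patted"}  # assumed mapping
    verb = verbs.get(event.get("weight_type"), "was touched")
    position = event.get("position", "body")
    return f"My {position} {verb} again"

# The example event from the text:
event = {"description": "beaten", "weight_type": "heavy", "position": "head"}
```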
It can thus be seen that the method provided by the invention adjusts the dialog model through the acquired multi-modal interaction information, after which the adjusted dialog model can output interaction feedback information that better matches the current user, so that the output interaction feedback information better matches the user's expectations, thereby improving the user experience of the product.
It should be pointed out that, in other embodiments of the invention, data processing step S102 can also generate and output interaction feedback information in other reasonable manners.
Everyday conversation between people contains not only chit-chat but also question-and-answer dialogue; moreover, between people familiar with each other, both parties can also adjust their own conversation content according to the other's habits. Therefore, in one embodiment of the invention, the interaction models used by the method in human-computer interaction preferably include: a chit-chat interaction model, a question-answering interaction model, a user-defined knowledge base interaction model, and the like.
Specifically, as shown in Fig. 3, after obtaining the multi-modal interaction information input by the user, the method uses the preset chit-chat interaction model in step S301 to generate first interim feedback information from this multi-modal interaction information.
For example, when the user inputs a picture of a panda, the first interim feedback information generated by the preset chit-chat interaction model in step S301 can be, for example, "I like giant pandas"; this first interim feedback information characterizes one potential intention.
In this embodiment, the method also uses the preset user-defined knowledge base in step S302 to generate second interim feedback information from the multi-modal interaction information. Specifically, if the user has set "question: panda; answer: so cute" in the user-defined knowledge base, then when the user inputs a panda picture, the method can obtain "so cute" in step S302 as the second interim feedback information; this second interim feedback information likewise characterizes a potential intention.
Similarly to the interaction with the preset chit-chat interaction model, the method can also use the preset question-answering interaction model in step S303 to generate third interim feedback information.
It should be pointed out that the invention does not limit the execution order of steps S301 to S303; in different embodiments of the invention, the execution order of steps S301 to S303 can be set according to actual needs. In addition, where conditions permit, steps S301 to S303 can also be executed in parallel.
After the three corresponding pieces of interim feedback information have been obtained with the three different interaction models, the method generates the final interaction feedback information from them in step S304. Specifically, in this embodiment the method preferably first determines, using a relevant algorithm, the weight data (for example, matching scores) of the three pieces of interim feedback information obtained in steps S301 to S303, ranks the three pieces of interim feedback information according to this weight data, and subsequently determines and outputs the final interaction feedback information according to the top-ranked interim feedback information.
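The ranking of step S304 can be sketched as follows. The patent does not disclose how the weight data are computed, so the scores below are made-up values for illustration only:

```python
# Hypothetical sketch of step S304: rank the three interim feedback
# candidates by weight/matching score and output the top-ranked one.
def pick_final_feedback(candidates):
    """candidates: list of (text, score) pairs; return the top-ranked text."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return ranked[0][0]

# The three candidates from the panda example, with assumed scores:
candidates = [
    ("I like giant pandas", 0.55),        # chit-chat interaction model
    ("So cute", 0.80),                    # user-defined knowledge base
    ("The panda lives in China", 0.40),   # question-answering model
]
```

Here the user-defined knowledge base entry wins, reflecting the idea that an answer the user explicitly configured can outrank generic chit-chat.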
It can thus be seen that the interaction feedback information output by the man-machine interaction method provided by the invention can combine different dialog models, which also makes the finally output interaction feedback information better match the user's interaction habits and interaction expectations.
The invention also provides a human-computer interaction device; Fig. 4 shows a structural schematic diagram of this device in the present embodiment.
As shown in Fig. 4, the human-computer interaction device provided by this embodiment includes: an input information acquisition module 401 and a data processing module 402. The input information acquisition module 401 is used to obtain the multi-modal interaction information input by the user.
In this embodiment, the multi-modal interaction information obtainable by the input information acquisition module 401 preferably includes image information (i.e. visual information), voice information and tactile information. It should be pointed out that, in actual operation, the multi-modal interaction information obtained by the input information acquisition module 401 may contain only one or several of the items listed above (for example, only image information is obtained while the other information is empty); the invention is not limited in this respect.
It should also be noted that, in different embodiments of the invention, the multi-modal interaction information obtainable by the input information acquisition module 401 can comprise different forms of information according to actual needs and the actual functions of the robot; the invention places no specific restriction on the specific form and quantity of the information it can obtain.
After obtaining the multi-modal interaction information input by the user, the input information acquisition module 401 can transfer this multi-modal interaction information to the data processing module 402. The data processing module 402 can generate effective interaction information according to the multi-modal interaction information, and generate and output corresponding interaction feedback information according to the generated effective interaction information.
In one embodiment of the invention, the effective interaction information can comprise user state information. The data processing module 402 can determine the user state information according to the image information, voice information and action information in the multi-modal interaction information, subsequently adjust the dialog model according to the user state information, and then generate and output, based on the adjusted dialog model, interaction feedback information matching the user state.
It should be pointed out that the principle and process by which the data processing module 402 adjusts the dialog model in this embodiment are identical to the content described above in connection with Fig. 2, and are therefore not repeated here.
It should be pointed out that, in other embodiments of the invention, the data processing module 402 can also generate and output interaction feedback information in other reasonable manners.
Everyday conversation between people contains not only chit-chat but also question-and-answer dialogue; moreover, between people familiar with each other, both parties can also adjust their own conversation content according to the other's habits. Therefore, in one embodiment of the invention, the data processing module 402 can also use preset dialog models such as the chit-chat interaction model, the question-answering interaction model and the user-defined knowledge base interaction model to generate corresponding feedback information.
Specifically, after obtaining the multi-modal interaction information input by the user, the data processing module 402 can use the preset chit-chat interaction model to generate first interim feedback information from this multi-modal interaction information.
For example, when the user inputs a picture of a panda, the first interim feedback information generated by the data processing module 402 with the preset chit-chat interaction model can be, for example, "I like giant pandas"; this first interim feedback information characterizes one potential intention.
In this embodiment, the data processing module 402 also uses the preset user-defined knowledge base to generate second interim feedback information from the multi-modal interaction information. Specifically, if the user has set "question: panda; answer: so cute" in the user-defined knowledge base, then when the user inputs a panda picture, the data processing module 402 can obtain "so cute" as the second interim feedback information; this second interim feedback information likewise characterizes a potential intention.
Similarly to the interaction with the preset chit-chat interaction model, the data processing module 402 can also use the preset question-answering interaction model to generate third interim feedback information.
After the three corresponding pieces of interim feedback information have been obtained with the three different interaction models, the data processing module 402 can generate the final interaction feedback information from them. Specifically, in this embodiment the data processing module 402 preferably first determines, using a relevant algorithm, the weight data (for example, matching scores) of the three pieces of interim feedback information obtained in steps S301 to S303, ranks the three pieces of interim feedback information according to this weight data, and subsequently determines and outputs the final interaction feedback information according to the top-ranked interim feedback information.
It can thus be seen that the interaction feedback information output by the man-machine interaction method and device provided by the invention can combine different dialog models, which also makes the finally output interaction feedback information better match the user's interaction habits and interaction expectations.
It should be understood that the disclosed embodiments of the invention are not limited to the specific structures or process steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is used only for the purpose of describing specific embodiments and is not meant to be limiting.
Reference in the specification to "an embodiment" or "one embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the phrases "an embodiment" or "one embodiment" appearing in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the above examples are used to illustrate the principle of the invention in one or more applications, it will be obvious to those skilled in the art that various modifications in form, usage and details of implementation can be made without creative effort and without departing from the principle and spirit of the invention. The invention is therefore defined by the appended claims.
Claims (10)
1. A man-machine interaction method, characterized by comprising:
an input information acquisition step of obtaining the multi-modal interaction information input by a user;
a data processing step of generating effective interaction information according to the multi-modal interaction information, and generating and outputting corresponding interaction feedback information according to the effective interaction information.
2. The method as claimed in claim 1, characterized in that the effective interaction information includes user state information, and in the data processing step, the user state information is determined according to the image information, voice information and action information in the multi-modal interaction information, and a dialog model is adjusted according to the user state information.
3. The method as claimed in claim 2, characterized in that the user state information includes the age and/or gender of the user, and in the data processing step, a dialog model matching the user state information is selected from a preset dialog model set, and corresponding interaction feedback information is generated according to the selected dialog model.
4. The method as claimed in any one of claims 1 to 3, characterized in that, in the data processing step:
a preset chit-chat interaction model is used to generate first interim feedback information;
a preset user-defined knowledge base is used to generate second interim feedback information;
a preset question-answering interaction model is used to generate third interim feedback information;
the interaction feedback information is generated according to the first interim feedback information, the second interim feedback information and the third interim feedback information.
5. The method of claim 4, characterized in that in the data processing step, the first interim feedback information, the second interim feedback information and the third interim feedback information are ranked, the optimal interim feedback information is determined according to the ranking result, and the interaction feedback information is generated according to the optimal interim feedback information.
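Claims 4 and 5 together describe generating three candidate responses and keeping the best-ranked one. The sketch below illustrates this shape; the three generators, the confidence scores, and the ranking criterion are all hypothetical stand-ins, since the patent does not specify how the interim results are scored.

```python
# Sketch of claims 4-5: three interim feedback candidates from a chat
# model, a user-defined knowledge base, and a Q&A model, ranked to
# select the optimal one. Scoring here is a placeholder assumption.

def chat_model(query):
    # Small-talk generator: always has something to say, low priority.
    return ("Nice weather today!", 0.3)  # (text, confidence)

def custom_knowledge_base(query):
    # User-defined facts win when they match the query exactly.
    kb = {"favorite color": "Your favorite color is blue."}
    hit = kb.get(query)
    return (hit, 0.9) if hit else (None, 0.0)

def qa_model(query):
    # General question-answering fallback.
    return (f"Let me look up '{query}' for you.", 0.5)

def best_feedback(query):
    candidates = [chat_model(query), custom_knowledge_base(query), qa_model(query)]
    # Rank the interim results and keep the top-scoring one.
    text, _score = max(candidates, key=lambda c: c[1])
    return text

print(best_feedback("favorite color"))  # → Your favorite color is blue.
```

Running the three generators in parallel and ranking afterwards means a strong knowledge-base hit can override the generic chat and Q&A responses, which is the behavior the ranking step in claim 5 enables.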
6. A man-machine interaction device, characterized in that the device comprises:
an input information acquisition module, configured to acquire multi-modal interaction information input by a user;
a data processing module, configured to generate effective interaction information according to the multi-modal interaction information, and to generate and output corresponding interaction feedback information according to the effective interaction information.
7. The device of claim 6, characterized in that the effective interaction information comprises user state information; the data processing module is configured to determine the user state information according to image information, voice information and action information in the multi-modal interaction information, and to adjust a dialog model according to the user state information.
8. The device of claim 7, characterized in that the user state information comprises the user's age and/or gender; the data processing module is configured to select, from a preset dialog model set, a dialog model matching the user state information, and to generate corresponding interaction feedback information according to the selected dialog model.
9. The device of any one of claims 6 to 8, characterized in that the data processing module is configured to generate first interim feedback information using a preset chat interaction model, generate second interim feedback information using a preset user-defined knowledge base, generate third interim feedback information using a preset question-and-answer interaction model, and then generate the interaction feedback information according to the first interim feedback information, the second interim feedback information and the third interim feedback information.
10. The device of claim 9, characterized in that the data processing module is configured to rank the first interim feedback information, the second interim feedback information and the third interim feedback information, determine the optimal interim feedback information according to the ranking result, and generate the interaction feedback information according to the optimal interim feedback information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610157698.7A CN105843382B (en) | 2016-03-18 | 2016-03-18 | A kind of man-machine interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105843382A true CN105843382A (en) | 2016-08-10 |
CN105843382B CN105843382B (en) | 2018-10-26 |
Family
ID=56587454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610157698.7A Active CN105843382B (en) | 2016-03-18 | 2016-03-18 | A kind of man-machine interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105843382B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1637740A (en) * | 2003-11-20 | 2005-07-13 | 阿鲁策株式会社 | Conversation control apparatus, and conversation control method |
US20080276186A1 (en) * | 2007-03-31 | 2008-11-06 | Sony Deutschland Gmbh | Method and system for adapting a user interface of a device |
CN102262440A (en) * | 2010-06-11 | 2011-11-30 | 微软公司 | Multi-modal gender recognition |
CN102881239A (en) * | 2011-07-15 | 2013-01-16 | 鼎亿数码科技(上海)有限公司 | Advertisement playing system and method based on image identification |
CN103236259A (en) * | 2013-03-22 | 2013-08-07 | 乐金电子研发中心(上海)有限公司 | Voice recognition processing and feedback system, voice response method |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105094315A (en) * | 2015-06-25 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for smart man-machine chat based on artificial intelligence |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106598215B (en) * | 2016-11-02 | 2019-11-08 | Tcl移动通信科技(宁波)有限公司 | The implementation method and virtual reality device of virtual reality system |
WO2018082626A1 (en) * | 2016-11-02 | 2018-05-11 | 惠州Tcl移动通信有限公司 | Virtual reality system implementation method and virtual reality device |
CN106598215A (en) * | 2016-11-02 | 2017-04-26 | 惠州Tcl移动通信有限公司 | Virtual reality system implementation method and virtual reality device |
CN106774832A (en) * | 2016-11-15 | 2017-05-31 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device for intelligent robot |
CN106774837A (en) * | 2016-11-23 | 2017-05-31 | 河池学院 | A kind of man-machine interaction method of intelligent robot |
CN106991123A (en) * | 2017-02-27 | 2017-07-28 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device towards intelligent robot |
CN107728780A (en) * | 2017-09-18 | 2018-02-23 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device based on virtual robot |
CN107728780B (en) * | 2017-09-18 | 2021-04-27 | 北京光年无限科技有限公司 | Human-computer interaction method and device based on virtual robot |
CN108595420A (en) * | 2018-04-13 | 2018-09-28 | 畅敬佩 | A kind of method and system of optimization human-computer interaction |
WO2020051893A1 (en) * | 2018-09-14 | 2020-03-19 | 郑永利 | Interaction system, method and processing device |
CN109459722A (en) * | 2018-10-23 | 2019-03-12 | 同济大学 | Voice interactive method based on face tracking device |
CN112140118A (en) * | 2019-06-28 | 2020-12-29 | 北京百度网讯科技有限公司 | Interaction method, device, robot and medium |
CN114049443A (en) * | 2020-12-31 | 2022-02-15 | 万翼科技有限公司 | Application building information model interaction method and related device |
CN115545960A (en) * | 2022-12-01 | 2022-12-30 | 江苏联弘信科技发展有限公司 | Electronic information data interaction system and method |
Also Published As
Publication number | Publication date |
---|---|
CN105843382B (en) | 2018-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105843382A (en) | Man-machine interaction method and device | |
CN106457563B (en) | Humanoid robot and method for executing dialogue between humanoid robot and user | |
CN105807933B (en) | A kind of man-machine interaction method and device for intelligent robot | |
US8751042B2 (en) | Methods of robot behavior generation and robots utilizing the same | |
CN107340859A (en) | The multi-modal exchange method and system of multi-modal virtual robot | |
CN107765852A (en) | Multi-modal interaction processing method and system based on visual human | |
CN109102809A (en) | A kind of dialogue method and system for intelligent robot | |
CN106020488A (en) | Man-machine interaction method and device for conversation system | |
CN105446491B (en) | A kind of exchange method and device based on intelligent robot | |
CN107797663A (en) | Multi-modal interaction processing method and system based on visual human | |
CN106531162A (en) | Man-machine interaction method and device used for intelligent robot | |
CN107870994A (en) | Man-machine interaction method and system for intelligent robot | |
CN109271018A (en) | Exchange method and system based on visual human's behavioral standard | |
CN107329990A (en) | A kind of mood output intent and dialogue interactive system for virtual robot | |
CN107632706A (en) | The application data processing method and system of multi-modal visual human | |
US20080096533A1 (en) | Virtual Assistant With Real-Time Emotions | |
CN105760362B (en) | A kind of question and answer evaluation method and device towards intelligent robot | |
CN107273477A (en) | A kind of man-machine interaction method and device for robot | |
CN106503786A (en) | Multi-modal exchange method and device for intelligent robot | |
WO2018006374A1 (en) | Function recommending method, system, and robot based on automatic wake-up | |
EP4432236A2 (en) | Device, method, and program for enhancing output content through iterative generation | |
CN108115678B (en) | Robot and motion control method and device thereof | |
CN108052250A (en) | Virtual idol deductive data processing method and system based on multi-modal interaction | |
CN107808191A (en) | The output intent and system of the multi-modal interaction of visual human | |
CN108009573A (en) | A kind of robot emotion model generating method, mood model and exchange method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||