CN109284811B - Intelligent robot-oriented man-machine interaction method and device - Google Patents

Intelligent robot-oriented man-machine interaction method and device

Info

Publication number
CN109284811B
CN109284811B CN201811014450.0A
Authority
CN
China
Prior art keywords
analysis result
information
memory
interaction
modal
Prior art date
Legal status
Active
Application number
CN201811014450.0A
Other languages
Chinese (zh)
Other versions
CN109284811A (en)
Inventor
张珂
魏晨
Current Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201811014450.0A priority Critical patent/CN109284811B/en
Publication of CN109284811A publication Critical patent/CN109284811A/en
Application granted granted Critical
Publication of CN109284811B publication Critical patent/CN109284811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A human-computer interaction method for an intelligent robot comprises the following steps: step one, obtaining multi-modal interaction information input by a user; and step two, performing memory analysis on the multi-modal interaction information, and invoking different interaction models to generate and output corresponding multi-modal feedback information according to whether a memory analysis result is obtained. Compared with existing human-computer interaction methods, the method can effectively identify the user attribute information contained in the multi-modal interaction information input by the user during human-computer interaction and reflect that information in the output feedback information, so that the interaction process better conforms to human conversational habits and the human-computer interaction experience is improved.

Description

Intelligent robot-oriented man-machine interaction method and device
Technical Field
The invention relates to the technical field of robots, and in particular to a human-computer interaction method and device for an intelligent robot; it also provides a human-computer interaction system for an intelligent robot and a child-specific intelligent device.
Background
With the progress of society, robots are not only widely used in industry, medicine, agriculture, and the military, but have also gradually entered everyday human social life. Common social robots are deployed at event venues or in homes; at event venues in particular, interaction with a robot tends to attract public attention and interest.
However, users often reveal information about themselves while interacting with an intelligent robot, and existing intelligent robots cannot analyze and extract the key entity information related to the user, so the human-computer interaction process cannot meet users' personalized needs.
Disclosure of Invention
In order to solve the above problems, the present invention provides a human-computer interaction method for an intelligent robot, the method comprising:
step one, obtaining multi-mode interaction information input by a user;
and step two, performing memory analysis on the multi-modal interaction information, and invoking different interaction models to generate and output corresponding multi-modal feedback information according to whether a memory analysis result is obtained.
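As a rough illustration, the two steps above can be sketched in Python. Everything here — the function names, the "my name is" extraction rule, and the fallback model — is a hypothetical illustration, not the patent's implementation:

```python
def memory_parse(text):
    """Hypothetical memory analysis: extract a user attribute, if any."""
    if "my name is" in text.lower():
        return {"name": text.rstrip(".!").split()[-1]}
    return None  # no memory analysis result could be obtained

def memory_interaction_model(result):
    # Invoked when a memory analysis result is available.
    return f"Nice to meet you, {result['name']}!"

def generic_interaction_model(text):
    # Fallback model when no memory analysis result is obtained.
    return "Tell me more."

def interact(multimodal_input):
    # Step one: the multimodal input has already been obtained.
    # Step two: memory-parse it and dispatch to an interaction model.
    result = memory_parse(multimodal_input)
    if result is not None:
        return memory_interaction_model(result)
    return generic_interaction_model(multimodal_input)
```

The key point is only the dispatch: a different interaction model is invoked depending on whether memory analysis yielded a result.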
According to an embodiment of the present invention, in the second step, if a memory analysis result is available, a memory interaction model is called to generate corresponding multi-modal feedback information, and in the memory interaction model, a user graph corresponding to the user is used to generate corresponding multi-modal feedback information according to the memory analysis result.
According to one embodiment of the present invention, in the second step,
obtaining a user graph corresponding to the user;
and comparing the memory analysis result with the user graph, updating the user graph according to the comparison result, and generating corresponding multi-modal feedback information according to the updated user graph.
According to an embodiment of the present invention, the step of performing memory resolution on the multi-modal interaction information comprises:
determining whether context information exists for the multimodal interaction information;
and if such context information exists, performing rule analysis and context analysis on the multi-modal interaction information to obtain a rule analysis result and a context analysis result respectively, and integrating the rule analysis result and the context analysis result to obtain the memory analysis result.
According to an embodiment of the invention, if the context information does not exist, rule analysis is performed on the multi-modal interaction information, a rule analysis result is obtained correspondingly, and the memory analysis result is obtained according to the rule analysis result.
According to one embodiment of the invention, when the multi-modal interaction information is subjected to memory analysis, the multi-modal interaction information is also subjected to algorithm analysis, wherein,
if the context information aiming at the multi-modal interaction information exists, performing rule analysis, algorithm analysis and context analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result, an algorithm analysis result and a context analysis result, and integrating the rule analysis result, the algorithm analysis result and the context analysis result to obtain a memory analysis result;
and if the context information aiming at the multi-modal interaction information does not exist, performing rule analysis and algorithm analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result and an algorithm analysis result, and integrating the rule analysis result and the algorithm analysis result to obtain the memory analysis result.
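The branching described above can be sketched as follows. The individual parsers are hypothetical stand-ins (a single regex rule, an empty algorithm model, a context pass-through); the patent does not specify their internals:

```python
import re

def rule_parse(info):
    """Hypothetical pattern rule: 'I am N years old' -> age attribute."""
    m = re.search(r"I am (\d+) years old", info)
    return {"age": int(m.group(1))} if m else {}

def algorithm_parse(info):
    """Placeholder for a learned model; extracts nothing in this sketch."""
    return {}

def context_parse(info, context):
    """Carry over attributes already established in the dialogue context."""
    return dict(context)

def integrate(results):
    """Merge partial results; return None when nothing was extracted."""
    merged = {}
    for partial in results:
        merged.update(partial)
    return merged or None

def memory_parse(info, context=None):
    # Rule and algorithm analysis always run; context analysis runs only
    # when context information for the input exists.
    results = [rule_parse(info), algorithm_parse(info)]
    if context is not None:
        results.append(context_parse(info, context))
    return integrate(results)
```

Returning `None` when nothing is extracted models "a memory analysis result cannot be obtained".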
According to one embodiment of the invention, the rule resolution has a higher priority than the algorithm resolution and the context resolution.
The invention also provides a man-machine interaction system facing the intelligent robot, the man-machine interaction system comprises a user and the intelligent robot, and the intelligent robot realizes the man-machine interaction method by executing an executive program.
The invention also provides a man-machine interaction device facing the intelligent robot, which comprises:
the interactive information acquisition module is used for acquiring multi-mode interactive information input by a user;
and the feedback information generation module is used for performing memory analysis on the multi-modal interaction information and, according to whether a memory analysis result is obtained, invoking different interaction models to generate and output corresponding multi-modal feedback information.
According to an embodiment of the present invention, if a memory resolution result is available, the feedback information generating module is configured to invoke a memory interaction model to generate corresponding multi-modal feedback information, and in the memory interaction model, a user graph corresponding to the user is used to generate corresponding multi-modal feedback information according to the memory resolution result.
According to an embodiment of the invention, the feedback information generating module is configured to:
obtaining a user profile corresponding to the user;
and comparing the memory analysis result with the user map, updating the user map according to the comparison result, and generating corresponding multi-mode feedback information according to the updated user map.
According to an embodiment of the present invention, the feedback information generating module is configured to perform memory analysis on the multi-modal interaction information by using the following steps:
determining whether context information exists for the multimodal interaction information;
and if such context information exists, performing rule analysis and context analysis on the multi-modal interaction information to obtain a rule analysis result and a context analysis result respectively, and integrating the rule analysis result and the context analysis result to obtain the memory analysis result.
According to an embodiment of the present invention, if there is no context information, the feedback information generating module is configured to perform rule analysis on the multi-modal interaction information, and obtain a rule analysis result correspondingly, and further obtain the memory analysis result according to the rule analysis result.
According to an embodiment of the present invention, in the memory parsing of the multi-modal interaction information, the feedback information generation module is configured to further perform algorithm parsing of the multi-modal interaction information, wherein,
if the context information aiming at the multi-modal interaction information exists, the feedback information generation module carries out rule analysis, algorithm analysis and context analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result, an algorithm analysis result and a context analysis result, and the memory analysis result is obtained by integrating the rule analysis result, the algorithm analysis result and the context analysis result;
and if the context information aiming at the multi-modal interaction information does not exist, the feedback information generation module carries out rule analysis and algorithm analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result and an algorithm analysis result, and the memory analysis result is obtained by integrating the rule analysis result and the algorithm analysis result.
The invention also provides intelligent equipment special for children, which comprises the human-computer interaction device.
The human-computer interaction method and system for an intelligent robot provided by the invention obtain a user portrait (i.e., information related to the user's own attributes) by performing memory analysis on the multi-modal interaction information input by the user, and combine the user portrait with a preset knowledge graph to generate corresponding feedback information. Compared with existing human-computer interaction methods, the method can effectively identify the user attribute information contained in the multi-modal interaction information input by the user during human-computer interaction and reflect that information in the output feedback information, so that the interaction process better conforms to human conversational habits and the human-computer interaction experience is improved.
Meanwhile, because the method adopts different interaction models to generate multi-modal feedback information according to different memory analysis results, the feedback it generates is more diverse and personalized than that of existing human-computer interaction methods, which further improves the enjoyment of human-computer interaction and the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following briefly introduces the drawings required in the description of the embodiments or the prior art:
FIG. 1 is a schematic flow chart of an implementation of a human-computer interaction method for an intelligent robot according to an embodiment of the invention;
FIG. 2 is a flow chart illustrating an implementation of memory parsing of multimodal interaction information according to an embodiment of the invention;
FIG. 3 is a flow diagram illustrating an implementation of generating multimodal feedback information, according to one embodiment of the invention;
FIG. 4 is a schematic diagram of a human-machine interaction scenario for an intelligent robot, according to one embodiment of the present invention;
fig. 5 is a schematic structural diagram of a man-machine interaction device facing an intelligent robot according to an embodiment of the invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be provided with reference to the drawings and examples, so that how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented. It should be noted that, as long as there is no conflict, the embodiments and the features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are within the scope of the present invention.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details or with other methods described herein.
Additionally, the steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions and, although a logical order is illustrated in the flow charts, in some cases, the steps illustrated or described may be performed in an order different than here.
A user usually reveals information about himself or herself while interacting with an intelligent robot, yet existing human-computer interaction systems cannot analyze and extract the key entity information that is useful for the interaction. Moreover, even when user attribute information has been extracted, existing systems cannot use it effectively in the interaction: at best they give a simple reply when the user explicitly queries the system about that information, and they lack both replies that embody reasoning logic and proactive interaction based on the acquired information.
For the problems in the prior art, the invention provides a man-machine interaction method and a man-machine interaction system for an intelligent robot. The man-machine interaction method and the man-machine interaction system can effectively utilize the user information, so that the interaction experience of the man-machine interaction system is improved.
Fig. 1 shows a schematic implementation flow diagram of the human-computer interaction method for the intelligent robot provided by the embodiment.
As shown in fig. 1, the man-machine interaction method for an intelligent robot provided in this embodiment first obtains multi-modal interaction information input by a user in step S101. In this embodiment, the multi-modal interaction information acquired by the method in step S101 preferably includes voice information input by the user. Of course, in other embodiments of the present invention, the multi-modal interaction information obtained in step S101 by the method may also contain other reasonable information according to practical situations, and the present invention is not limited thereto. For example, in an embodiment of the present invention, the multi-modal interaction information acquired by the method in step S101 may further include text information input by a user through a keyboard or the like.
After obtaining the multi-modal interaction information input by the user, the method performs memory analysis on that information in step S102, and determines in step S103 whether a memory analysis result can be obtained. In this embodiment, the memory analysis result is preferably attribute information related to the user, for example multidimensional information such as the user's name, age, gender, constellation, and birthday.
According to whether a memory analysis result can be obtained or not, the method calls different interaction models to generate corresponding multi-modal feedback information. Wherein, if the memory analysis result is available, the method preferably calls the memory interaction model to generate corresponding multi-modal feedback information in step S104.
Specifically, as shown in fig. 2, in the present embodiment, the method determines in step S201 whether there is context information for the multimodal interaction information acquired in step S101. If the context information of the multi-modal interaction information exists, the method performs rule analysis, algorithm analysis and context analysis on the multi-modal interaction information in step S202, so as to obtain a rule analysis result, an algorithm analysis result and a context analysis result correspondingly. By analyzing the multi-modal interactive information, the method can accurately obtain information such as the structure, semantics and topics of sentences contained in the multi-modal interactive information.
After the rule analysis result, the algorithm analysis result, and the context analysis result are obtained, the method integrates the rule analysis result, the algorithm analysis result, and the context analysis result in step S203, so as to obtain a memory analysis result.
If the context information of the multi-modal interaction information does not exist, in this embodiment, the method performs rule analysis and algorithm analysis on the multi-modal interaction information in step S204, so as to obtain a rule analysis result and an algorithm analysis result. Subsequently, the method integrates the rule analysis result and the algorithm analysis result in step S205, so as to obtain a memory analysis result.
In the present embodiment, the rule resolution, the algorithm resolution, and the context resolution preferably have a specific priority. For example, rule resolution has a higher priority than algorithm resolution, and algorithm resolution has a higher priority than context resolution.
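Under one reading of this priority scheme — a higher-priority source overrides a lower-priority one when both extract the same attribute — the integration step might look like this (the dictionary-merge interpretation is an assumption, not the patent's specification):

```python
def integrate_with_priority(rule_res, algo_res, ctx_res):
    """Merge the three analysis results into one memory analysis result.

    Assumed priority: rule > algorithm > context, so later `update`
    calls (higher priority) override earlier ones for the same key.
    """
    merged = {}
    merged.update(ctx_res)   # lowest priority applied first
    merged.update(algo_res)  # algorithm overrides context
    merged.update(rule_res)  # rule overrides both
    return merged
```

Attributes found by only one source survive unchanged; only conflicting keys are resolved by priority.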
Of course, in other embodiments of the invention, the priorities of the three analysis processes may be configured in other reasonable orders according to actual needs, or the three analyses may be performed simultaneously; the invention is not limited in this respect.
Meanwhile, it should be noted that, in other embodiments of the present invention, according to actual needs, the method may also perform rule parsing and context parsing in the presence of context information of the multi-modal interaction information without performing an algorithm parsing process, and perform rule parsing in the absence of context information of the multi-modal interaction information, which is not limited to this.
In this embodiment, if the memory resolution result is available, the method preferably calls the memory interaction model to generate corresponding multi-modal feedback information. In the memory interaction model, the method utilizes the user map corresponding to the current user to generate corresponding multi-modal feedback information according to the obtained memory analysis result. If the memory analysis result cannot be obtained, in this embodiment, the method may use other logic to generate the multi-modal feedback information.
Fig. 3 is a schematic flow chart illustrating an implementation process of invoking a memory interaction model to generate multimodal feedback information in the present embodiment.
As shown in fig. 3, in this embodiment, after obtaining the memory analysis result, the method obtains the user graph corresponding to the user in step S301. The user graph stores multidimensional information related to the user's own attributes (such as the user's name, age, gender, constellation, and birthday). The user graph is preferably stored in a data storage chip in advance.
Subsequently, the method compares the obtained memory resolution result with the user map in step S302, and updates the user map obtained in step S301 according to the comparison result in step S303. Specifically, in this embodiment, the method determines in step S302 whether the memory parsing result is consistent with the relevant parameters in the user graph.
If the user graph does not yet contain the user attribute information contained in the memory analysis result, then in step S303 the method preferably adds the memory analysis result to the user graph, thereby updating the user graph.
If the user graph already contains the user attribute information contained in the memory analysis result and the stored data are identical, the method performs no additional operation on the user graph. If the user graph already contains that attribute but the stored data differ from the newly parsed data, the method generates and outputs a confirmation prompt asking the user to confirm whether the user graph should be updated.
If the user confirms that the user graph needs to be updated, the method updates the user graph with the memory analysis result. If the user confirms that no update is needed, the method performs no additional operation on the user graph, i.e., it keeps the user graph in its existing state.
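The comparison-and-update rules in the preceding paragraphs can be sketched as a single function; `confirm` stands in for the confirmation prompt issued to the user and is a hypothetical callback, not an interface defined by the patent:

```python
def update_user_graph(graph, memory_result, confirm):
    """Compare a memory analysis result with the stored user graph.

    `confirm(key, old, new)` is a hypothetical callback that asks the
    user whether a conflicting attribute should be overwritten.
    """
    for key, new_value in memory_result.items():
        if key not in graph:
            graph[key] = new_value               # new attribute: add it
        elif graph[key] != new_value:
            if confirm(key, graph[key], new_value):
                graph[key] = new_value           # user confirmed the update
        # identical data: no additional operation on the graph
    return graph
```

For instance, a parsed age of 8 would overwrite a stored age of 7 only if the user confirms the change.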
Of course, in other embodiments of the present invention, according to practical situations, the method may also use other reasonable manners to update the user map according to the memory resolution result, and the present invention is not limited thereto.
After the update of the user graph is completed, as shown in fig. 3, the method generates corresponding multi-modal feedback information according to the updated user graph in step S304. Thus, during interaction between the human-computer interaction system and the user, if the user mentions information related to his or her own attributes, the system can generate and feed back a reply that accords with human logic based on the knowledge graph.
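A minimal sketch of step S304, assuming the reply is produced by weaving stored attributes into a template; the patent does not specify the generation logic, so the templates below are purely illustrative:

```python
def generate_feedback(user_graph):
    """Hypothetical reply generation from the updated user graph:
    weave stored attributes into the reply so it follows human logic."""
    if "name" in user_graph and "birthday" in user_graph:
        return (f"{user_graph['name']}, I'll remember that your "
                f"birthday is {user_graph['birthday']}.")
    if "name" in user_graph:
        return f"Got it, {user_graph['name']}."
    return "Thanks for telling me!"
```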
It can be seen from the above description that the human-computer interaction method for an intelligent robot provided by the invention obtains a user portrait (i.e., information related to the user's own attributes) by performing memory analysis on the multi-modal interaction information input by the user, and combines the user portrait with a preset knowledge graph to generate corresponding feedback information. Compared with existing human-computer interaction methods, it can effectively identify the user attribute information contained in the user's multi-modal input during interaction and reflect that information in the output feedback, so that the interaction process better conforms to human conversational habits and the human-computer interaction experience is improved.
Meanwhile, because the method adopts different interaction models to generate multi-modal feedback information according to different memory analysis results, the feedback it generates is more diverse and personalized than that of existing methods, which further improves the enjoyment of human-computer interaction and the user experience.
As shown in fig. 4, the human-machine interaction method provided by the present invention is preferably configured in an intelligent robot, which can be executed by a robot operating system built in the intelligent robot. When the built-in operating system of the intelligent robot can realize the method provided by the invention, the user 400 can input corresponding voice interaction information to the intelligent robot 401 according to own habit, and the intelligent robot 401 can generate reasonable feedback information based on the knowledge graph according to the dialogue information input by the user, so that a more anthropomorphic man-machine dialogue process is realized.
It is noted that in different embodiments of the present invention, the intelligent robot 401 may be a different form of system with human-machine conversation capability. For example, in one embodiment of the present invention, the intelligent robot 401 may be a humanoid robot equipped with an intelligent operating system, while in another embodiment of the present invention, the intelligent robot 401 may be a specific software or application capable of performing the man-machine interaction method provided by the present invention.
The invention also provides a human-computer interaction system that stores an executable program; when the program is executed, the human-computer interaction method described above is implemented.
Meanwhile, the invention also provides a man-machine interaction device facing the intelligent robot, and fig. 5 shows a structural schematic diagram of the man-machine interaction device in the embodiment. As shown in fig. 5, the human-computer interaction device facing the intelligent robot provided by the present embodiment preferably includes an interaction information obtaining module 501 and a feedback information generating module 502. The interaction information obtaining module 501 is configured to obtain multi-modal interaction information input by a user.
In this embodiment, the interaction information obtaining module 501 preferably includes a voice acquisition device through which the voice interaction information input by the user can be obtained. Of course, in other embodiments of the invention, the interaction information obtaining module 501 may also include, or be implemented by, other reasonable devices; the invention is not limited in this respect. For example, in an embodiment of the invention, the interaction information obtaining module 501 may further include a keyboard (e.g., a virtual or physical keyboard) through which it can obtain text information input by the user.
The interaction information obtaining module 501 transmits the obtained multi-modal input information to the feedback information generating module 502 connected to the interaction information obtaining module, so that the feedback information generating module 502 analyzes the multi-modal interaction information, and invokes different interaction models to generate and output corresponding multi-modal feedback information according to whether a memory analysis result can be obtained.
Specifically, in this embodiment, if the memory analysis result is obtained, the feedback information generating module 502 is configured to invoke the memory interaction model to generate corresponding multi-modal feedback information, and the user graph corresponding to the user is used in the memory interaction model to generate corresponding multi-modal feedback information according to the memory analysis result.
In this embodiment, the principle and process of the feedback information generating module 502 to implement its functions are similar to those described in step S102 to step S104 in fig. 1, and therefore specific contents of the feedback information generating module 502 are not described herein again.
In addition, the invention also provides a child-specific intelligent device that includes the human-computer interaction device described above. For such a device, a knowledge graph corresponding to the child user can be stored in the device's memory; the graph may contain dimensional information such as the child's age, education level, interest classes, and favorite cartoons, so that the device can interact with the child user more effectively and meet the child's personalized interaction needs.
It should be noted that the intelligent device may be, without limitation: a humanoid intelligent robot, a child-specific intelligent robot, a children's story machine, a tablet, a smartphone, a children's picture-book reading device, and the like.
It is to be understood that the disclosed embodiments of the invention are not limited to the particular structures or process steps disclosed herein, but extend to equivalents thereof as would be understood by those skilled in the relevant art. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While the above examples are illustrative of the principles of the present invention in one or more applications, it will be apparent to those of ordinary skill in the art that various changes in form, usage and details of implementation can be made without departing from the principles and concepts of the invention. Accordingly, the invention is defined by the appended claims.

Claims (9)

1. A human-computer interaction method for an intelligent robot, characterized by comprising the following steps:
step one, acquiring multi-modal interaction information input by a user;
step two, judging whether context information for the multi-modal interaction information exists, performing memory analysis on the multi-modal interaction information according to the judgment result, and invoking different interaction models to generate and output corresponding multi-modal feedback information according to whether a memory analysis result can be obtained;
wherein, if a memory analysis result can be obtained, a memory interaction model is invoked to generate the corresponding multi-modal feedback information, and within the memory interaction model a user graph corresponding to the user is used to generate that feedback information according to the memory analysis result;
wherein the process of performing memory analysis on the multi-modal interaction information according to the judgment result comprises: if context information for the multi-modal interaction information exists, performing rule analysis and context analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result and a context analysis result, and integrating the rule analysis result and the context analysis result to obtain the memory analysis result;
and if the context information does not exist, performing rule analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result, and obtaining the memory analysis result from the rule analysis result.
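Claim 1's branching can be sketched as a small dispatch: a rule pass always runs, a context pass runs only when context information exists, and the choice of interaction model depends on whether a memory analysis result was produced. All function names and the toy parsing rules below are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of the claimed method flow (claim 1). Names are hypothetical.

def rule_parse(info):
    # Rule analysis: match the input against a hand-written pattern.
    # Toy rule: recognize a stated hobby ("I like ...").
    if "i like" in info.lower():
        return {"hobby": info.lower().split("i like", 1)[1].strip()}
    return {}

def context_parse(info, context):
    # Context analysis: resolve the input against earlier turns.
    return {"topic": context.get("topic")} if context else {}

def memory_parse(info, context):
    """Step two's analysis: branch on whether context information exists."""
    rule_result = rule_parse(info)
    if context:  # context information for the interaction exists
        context_result = context_parse(info, context)
        return {**rule_result, **context_result}  # integrate both results
    # No context: the memory analysis result comes from rule analysis alone.
    return rule_result or None

def generate_feedback(info, context, user_graph):
    memory_result = memory_parse(info, context)
    if memory_result:
        # Memory interaction model: combine the result with the user graph.
        return f"memory-model reply using {memory_result} and {user_graph}"
    return "general-model reply"  # fall back to an ordinary interaction model
```

An input that matches no rule and has no context yields no memory analysis result, so the ordinary interaction model is used instead of the memory interaction model.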
2. The method of claim 1, wherein step two further comprises:
obtaining the user graph corresponding to the user;
and comparing the memory analysis result with the user graph, updating the user graph according to the comparison result, and generating corresponding multi-modal feedback information according to the updated user graph.
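The compare-update-generate cycle of claim 2 can be illustrated as follows. The dictionary-based "graph" and the text-only feedback are simplifying assumptions; in practice the graph and the multi-modal output (text, voice, motion) would be richer structures.

```python
# Illustrative sketch of claim 2: compare the memory analysis result with the
# user graph, update the graph, and generate feedback from the updated graph.

def update_user_graph(user_graph, memory_result):
    """Merge new or changed facts from the memory analysis result into the graph."""
    updated = dict(user_graph)
    for key, value in memory_result.items():
        if updated.get(key) != value:  # comparison result: new or changed fact
            updated[key] = value
    return updated

def feedback_from_graph(user_graph):
    # Generate feedback from the (updated) user graph; plain text stands in
    # here for multi-modal output.
    hobby = user_graph.get("hobby")
    return f"Shall we talk about {hobby}?" if hobby else "Tell me more about you."

graph = {"age": 6}
graph = update_user_graph(graph, {"hobby": "dinosaurs"})
reply = feedback_from_graph(graph)
```

Because the update runs before feedback generation, a fact learned in the current turn can shape the reply to that same turn.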
3. The method of claim 1, wherein, when performing memory analysis on the multi-modal interaction information, algorithm analysis is also performed on the multi-modal interaction information, wherein,
if context information for the multi-modal interaction information exists, rule analysis, algorithm analysis and context analysis are performed on the multi-modal interaction information to correspondingly obtain a rule analysis result, an algorithm analysis result and a context analysis result, and the rule analysis result, the algorithm analysis result and the context analysis result are integrated to obtain the memory analysis result;
and if no context information for the multi-modal interaction information exists, rule analysis and algorithm analysis are performed on the multi-modal interaction information to correspondingly obtain a rule analysis result and an algorithm analysis result, and the rule analysis result and the algorithm analysis result are integrated to obtain the memory analysis result.
4. The method of claim 3, wherein the rule analysis has a higher priority than the algorithm analysis and the context analysis.
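One way to realize the priority rule of claims 3 and 4 is an ordered merge in which later updates override earlier ones, so the rule analysis result wins any conflict. This is a sketch under assumptions: "algorithm analysis" stands in for, e.g., a statistical intent classifier, and the relative order of the algorithm and context results (which claim 4 leaves open) is chosen arbitrarily here.

```python
# Illustrative integration of rule, algorithm and context analysis results
# with rule analysis taking priority (claims 3-4). Names are hypothetical.

def integrate(rule_result, algorithm_result, context_result=None):
    """Merge analysis results; on conflicting keys the rule result wins."""
    merged = {}
    merged.update(context_result or {})    # lowest priority (order assumed)
    merged.update(algorithm_result or {})  # overrides context on conflict
    merged.update(rule_result or {})       # highest priority: rules override all
    return merged
```

Non-conflicting keys from every analysis survive the merge, so the lower-priority results still enrich the memory analysis result rather than being discarded.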
5. A human-computer interaction system for an intelligent robot, wherein the human-computer interaction system comprises a user and an intelligent robot, and the intelligent robot implements the human-computer interaction method according to any one of claims 1 to 4 by executing a program.
6. A human-computer interaction device for an intelligent robot, the device comprising:
an interaction information acquisition module, configured to acquire multi-modal interaction information input by a user;
a feedback information generation module, configured to judge whether context information for the multi-modal interaction information exists, perform memory analysis on the multi-modal interaction information according to the judgment result, and invoke different interaction models to generate and output corresponding multi-modal feedback information according to whether a memory analysis result can be obtained;
wherein, if a memory analysis result can be obtained, the feedback information generation module is configured to invoke a memory interaction model to generate the corresponding multi-modal feedback information, and within the memory interaction model a user graph corresponding to the user is used to generate that feedback information according to the memory analysis result;
the feedback information generation module performs memory analysis on the multi-modal interaction information according to the judgment result through the following operations:
if context information for the multi-modal interaction information exists, performing rule analysis and context analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result and a context analysis result, and integrating the rule analysis result and the context analysis result to obtain the memory analysis result;
and if the context information does not exist, the feedback information generation module is configured to perform rule analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result, and to obtain the memory analysis result from the rule analysis result.
7. The apparatus of claim 6, wherein the feedback information generation module is further configured to:
obtain the user graph corresponding to the user;
and compare the memory analysis result with the user graph, update the user graph according to the comparison result, and generate corresponding multi-modal feedback information according to the updated user graph.
8. The apparatus of claim 6, wherein, when performing memory analysis on the multi-modal interaction information, the feedback information generation module is configured to also perform algorithm analysis on the multi-modal interaction information, wherein,
if context information for the multi-modal interaction information exists, the feedback information generation module performs rule analysis, algorithm analysis and context analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result, an algorithm analysis result and a context analysis result, and obtains the memory analysis result by integrating the rule analysis result, the algorithm analysis result and the context analysis result;
and if no context information for the multi-modal interaction information exists, the feedback information generation module performs rule analysis and algorithm analysis on the multi-modal interaction information to correspondingly obtain a rule analysis result and an algorithm analysis result, and obtains the memory analysis result by integrating the rule analysis result and the algorithm analysis result.
9. An intelligent device for children, characterized in that it comprises the human-computer interaction device according to any one of claims 6 to 8.
CN201811014450.0A 2018-08-31 2018-08-31 Intelligent robot-oriented man-machine interaction method and device Active CN109284811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811014450.0A CN109284811B (en) 2018-08-31 2018-08-31 Intelligent robot-oriented man-machine interaction method and device


Publications (2)

Publication Number Publication Date
CN109284811A CN109284811A (en) 2019-01-29
CN109284811B true CN109284811B (en) 2021-05-25

Family

ID=65183460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811014450.0A Active CN109284811B (en) 2018-08-31 2018-08-31 Intelligent robot-oriented man-machine interaction method and device

Country Status (1)

Country Link
CN (1) CN109284811B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106292423A (en) * 2016-08-09 2017-01-04 北京光年无限科技有限公司 Music data processing method and device for anthropomorphic robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100893758B1 (en) * 2007-10-16 2009-04-20 한국전자통신연구원 System for expressing emotion of robots and method thereof
CN105824935A (en) * 2016-03-18 2016-08-03 北京光年无限科技有限公司 Method and system for information processing for question and answer robot
WO2018000280A1 (en) * 2016-06-29 2018-01-04 深圳狗尾草智能科技有限公司 Multi-mode based intelligent robot interaction method and intelligent robot
CN106297789B (en) * 2016-08-19 2020-01-14 北京光年无限科技有限公司 Personalized interaction method and system for intelligent robot
CN106446141B (en) * 2016-09-21 2020-05-19 北京光年无限科技有限公司 Interactive data processing method for intelligent robot system and robot system
CN106557464A (en) * 2016-11-18 2017-04-05 北京光年无限科技有限公司 A kind of data processing method and device for talking with interactive system
CN107870994A (en) * 2017-10-31 2018-04-03 北京光年无限科技有限公司 Man-machine interaction method and system for intelligent robot




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant