CN107807734B - Interactive output method and system for intelligent robot


Info

Publication number: CN107807734B
Authority: CN (China)
Prior art keywords: expression image, response, output, interactive, expression
Legal status: Active (granted)
Application number: CN201710891490.2A
Other languages: Chinese (zh)
Other versions: CN107807734A
Inventor: 赵媛媛
Current Assignee: Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee: Beijing Guangnian Wuxian Technology Co Ltd
Priority/filing date: 2017-09-27
Publication of CN107807734A: 2018-03-16
Grant and publication of CN107807734B: 2021-06-15

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 16/00 - Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/903 - Querying
    • G06F 16/9032 - Query formulation
    • G06F 16/90332 - Natural language query formulation or dialogue systems


Abstract

The invention discloses an interactive output method and system for an intelligent robot. The method comprises the following steps: acquiring personality parameters of the intelligent robot; acquiring interactive input data of the current interactive object; performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result; generating a corresponding response text according to the semantic analysis result and the emotion analysis result; and generating and outputting interactive response data containing an expression image and/or the response text, wherein the type of the expression image matches the personality parameters and the meaning of the expression image matches the response text.

Description

Interactive output method and system for intelligent robot
Technical Field
The invention relates to the field of computers, in particular to an interactive output method and system for an intelligent robot.
Background
With the continuous development of robot technology, more and more intelligent robots with autonomous human-computer interaction capability are being applied in human production and daily life.
In the prior art, a common human-computer interaction mode is for an intelligent robot to acquire and analyze input information from a human being and then generate and output a corresponding interactive response; the most common embodiment is the text-based interaction robot.
However, human daily communication mixes many modes of expression, and communication on social software is even more varied. A user who can exchange only plain text with a text interaction robot easily becomes bored, which greatly harms the user experience of the intelligent robot.
Disclosure of Invention
The invention provides an interactive output method for an intelligent robot, which comprises the following steps:
acquiring personality parameters of the intelligent robot;
acquiring interactive input data of a current interactive object;
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result;
generating a corresponding response text according to the semantic analysis result and the emotion analysis result;
and generating and outputting interactive response data containing an expression image and/or the response text, wherein the type of the expression image matches the personality parameters and the meaning of the expression image matches the response text.
In one embodiment, acquiring personality parameters of the intelligent robot comprises:
acquiring identity information of a current interactive object;
and calling up the personality parameters matched with the identity information.
In one embodiment, acquiring personality parameters of the intelligent robot comprises:
acquiring current interaction environment description information and/or user interaction demand information;
and calling up the personality parameters matched with the current interaction environment description information and/or the user interaction demand information.
In one embodiment, generating interactive response data including an expression image and/or the response text includes:
judging whether the expression image needs to be output;
and generating the interactive response data containing the expression image when the expression image needs to be output.
In one embodiment, determining whether the expression image needs to be output includes:
when the interactive input data contains an expression image, judging that the expression image needs to be output.
In one embodiment, determining whether the expression image needs to be output includes:
determining an expression image response strategy according to the personality parameters, wherein the expression image response strategy comprises an expression image response frequency, an expression image response topic range and/or an expression image emotion response trigger strategy;
and judging whether the expression image needs to be output based on the expression image response strategy.
In one embodiment, determining whether the expression image needs to be output includes:
determining a current topic range and/or emotion response parameters according to the semantic analysis result and/or the emotion analysis result;
and judging whether the expression image needs to be output according to the topic range and/or the emotion response parameters.
In one embodiment, generating the interactive response data containing the expression image when the expression image needs to be output includes:
extracting text information corresponding to the expression image;
comparing the text information corresponding to the expression image with the response text;
and outputting only the expression image when the matching degree between the text information corresponding to the expression image and the response text reaches a set threshold.
The invention also proposes a storage medium having stored thereon program code capable of implementing the method described above.
The invention also provides an intelligent robot system, which comprises:
the input acquisition module is configured to acquire interactive input data of a current interactive object;
the output module is configured to output interactive response data to the current interactive object;
an interaction analysis module configured to:
acquiring personality parameters of the intelligent robot;
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result;
generating a corresponding response text according to the semantic analysis result and the emotion analysis result;
and generating the interactive response data comprising an expression image and/or the response text, wherein the type of the expression image matches the personality parameters of the intelligent robot and the meaning of the expression image matches the response text.
With this image-output approach, the method greatly enriches the output diversity of the intelligent robot when it interacts with the user, makes human-computer conversation more engaging, expresses conversational information more precisely, and substantially raises the personification level of the intelligent robot, thereby improving its user experience.
Additional features and advantages of the invention will be set forth in the description that follows, and in part will be apparent from the description or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the processes particularly pointed out in the written description, the claims, and the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram of a method according to an embodiment of the invention;
FIGS. 2-5 are partial flow diagrams of methods according to embodiments of the invention;
FIGS. 6 and 7 are diagrammatic illustrations of robotic system configurations according to various embodiments of the present invention;
fig. 8 is a schematic diagram of a robot application scenario according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the accompanying drawings and examples, so that practitioners can fully understand how the invention applies technical means to solve technical problems and achieve its technical effects, and can implement the invention accordingly. It should be noted that, as long as no conflict arises, the embodiments and the features of the embodiments may be combined with one another, and all resulting technical solutions fall within the scope of the present invention.
In the prior art, a common human-computer interaction mode is for an intelligent robot to acquire and analyze input information from a human being and then generate and output a corresponding interactive response; the most common embodiment is the text-based interaction robot.
However, human daily communication mixes many modes of expression, and communication on social software is even more varied. A user who can exchange only plain text with a text interaction robot easily becomes bored, which greatly harms the user experience of the intelligent robot.
To solve the above problems, the invention provides an interactive output method for an intelligent robot. In this method, the robot's interaction with the user is not limited to text; it also includes expression images.
Further, an expression image often conveys not only specific semantics but also, in many applications, a specific emotion. The method therefore introduces emotion analysis alongside semantic analysis in the interactive process and uses the combined results of the two analyses to determine which specific expression image to output.
Furthermore, in ordinary human interaction, different people often select different types of expression images to express the same meaning, owing to individual preferences and habits. For example, children generally prefer cartoon-character expression images, while elderly people prefer more sedate ones.
Therefore, to further improve the robot's personification level, the robot is endowed with anthropomorphic character features, and personality parameters corresponding to those character features are formulated for it. When selecting an expression image, the selection criteria are divided into two aspects. The first is type selection: expression images of a type matching the intelligent robot's personality parameters are chosen, so that the image type embodies the robot's anthropomorphic character. The second is meaning selection: expression images whose meaning matches the semantic and emotional output needed to respond to the current interactive object are chosen, so that the image's specific meaning embodies the semantics and emotion to be expressed.
With this image-output approach, the method greatly enriches the output diversity of the intelligent robot when it interacts with the user, makes human-computer conversation more engaging, expresses conversational information more precisely, and substantially raises the personification level of the intelligent robot, thereby improving its user experience.
The detailed flow of a method according to an embodiment of the invention is described below with reference to the accompanying drawings. The steps shown in the flowcharts can be executed in a computer system containing a set of computer-executable instructions. Although a logical order of steps is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
As shown in fig. 1, in one embodiment, the interactive output method of the intelligent robot includes:
acquiring personality parameters of the intelligent robot (S110);
acquiring interactive input data of the current interactive object (S120);
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result (S130);
generating a corresponding response text according to the semantic analysis result and the emotion analysis result (S140);
generating interactive response data containing an expression image and/or the response text (S150), wherein the type of the expression image matches the personality parameters of the intelligent robot and the meaning of the expression image matches the response text;
and outputting the interactive response data generated in step S150 (S160).
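For illustration only, the following minimal Python sketch walks through the S110-S160 pipeline. It is not part of the patent disclosure: the analysis functions are toy keyword-based stand-ins, and all names are invented.

def semantic_analysis(text):
    """Toy semantic analysis (S130): derive a rough topic."""
    topic = "greeting" if "hello" in text.lower() else "chitchat"
    return {"topic": topic}

def emotion_analysis(text):
    """Toy emotion calculation (S130): detect a positive marker."""
    positive = any(m in text.lower() for m in ("great", "happy", "!"))
    return {"valence": "positive" if positive else "neutral"}

def generate_response_text(semantics, emotion):
    """Toy response generation (S140) from both analysis results."""
    base = "Hello there" if semantics["topic"] == "greeting" else "I see"
    return base + ("!" if emotion["valence"] == "positive" else ".")

def interactive_output(personality, user_input):
    semantics = semantic_analysis(user_input)              # S130
    emotion = emotion_analysis(user_input)                 # S130
    text = generate_response_text(semantics, emotion)      # S140
    # S150: attach an expression image whose type matches the personality
    # parameters and whose meaning matches the response; a placeholder here.
    image = {"type": personality["image_type"], "meaning": semantics["topic"]}
    return {"text": text, "expression_image": image}       # returned for S160

# S110/S120: personality parameters and interactive input acquired elsewhere
print(interactive_output({"image_type": "cartoon"}, "Hello, robot!"))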
Further, in an embodiment, when generating interactive response data containing an expression image, the type of the expression image is first determined according to the personality parameters of the intelligent robot; expression images whose meaning matches the response text are then screened out from all expression images of that type.
In this process, giving the intelligent robot specific personality parameters makes its output reflect human character features, which improves the robot's degree of personification.
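A minimal sketch of this two-stage selection, assuming a hypothetical image library in which each entry records a type and a meaning:

EXPRESSION_LIBRARY = [
    {"type": "cartoon", "meaning": "happy"},
    {"type": "cartoon", "meaning": "sad"},
    {"type": "steady", "meaning": "happy"},
]

def select_expression_image(personality, response_meaning):
    # Stage 1: keep only images whose type matches the personality parameters
    typed = [img for img in EXPRESSION_LIBRARY
             if img["type"] == personality["image_type"]]
    # Stage 2: among those, pick an image whose meaning matches the response
    for img in typed:
        if img["meaning"] == response_meaning:
            return img
    return None  # fall back to plain text when nothing matches

print(select_expression_image({"image_type": "cartoon"}, "happy"))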
Further, in an embodiment, the human character features to be simulated are preset according to the specific application scenario of the intelligent robot. For example, an intelligent robot deployed at a children's interaction site is set to simulate the character traits of a child.
However, because different people have different interaction habits and preferences, not every human character feature will enhance the interaction experience. Assigning appropriate character features to the intelligent robot, that is, making the simulated human character type suit the current interaction scene and the current interactive object, is one of the keys that determine the final interaction experience. In many application scenarios, however, the robot's interaction scene and interactive object are not constant. If the intelligent robot always keeps a single character feature, that feature may degrade the interaction experience for certain interactive objects or application scenarios.
To solve this problem, in an embodiment the intelligent robot changes the human character features it simulates according to the interaction scene and the interaction requirements, so that the simulated character features suit the current interaction scene.
Specifically, in one embodiment, the robot determines the human character features it will simulate according to the identity of the current interactive object.
Specifically, as shown in fig. 2, identity information of the current interactive object is first obtained (S210), and the personality parameters matched with that identity information are then called up (S220). In this way, the human character features simulated by the intelligent robot can meet the interaction habits and requirements of the current interactive object. For example, when the current interactive object is a child user, the intelligent robot calls up child personality parameters and simulates the character traits of a child, so that during the interaction the child user feels like they are interacting with another child, which increases the child user's interest in the interaction and improves the interaction experience.
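A minimal sketch of S210-S220, assuming a hypothetical lookup table from interactive-object identities to personality parameters:

PERSONALITY_TABLE = {
    "child":  {"image_type": "cartoon", "image_frequency": 0.6},
    "adult":  {"image_type": "neutral", "image_frequency": 0.3},
    "senior": {"image_type": "steady",  "image_frequency": 0.1},
}

def personality_for(identity):
    """S210: identity information already acquired; S220: call up the
    matching personality parameters (default to 'adult' when unknown)."""
    return PERSONALITY_TABLE.get(identity, PERSONALITY_TABLE["adult"])

print(personality_for("child"))  # the robot then simulates a child persona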
Specifically, in one embodiment, the robot determines the human character features it will simulate according to the current interaction environment and/or the interaction requirements of the interactive object.
Specifically, as shown in fig. 3, the current interaction environment description information and/or user interaction requirement information is first obtained (S310), and the personality parameters matched with that information are then called up (S320).
Further, in other embodiments, the personality parameters may also be determined by combining the two approaches above.
Further, in one embodiment, the interactive response data output by the intelligent robot contains an expression image and/or response text. The composition of the interactive response data therefore falls roughly into two cases: response text alone, without an expression image, or data containing an expression image.
For these two cases, in one embodiment, when the interactive response data is generated it is first judged whether an expression image needs to be output; interactive response data containing the expression image is generated when one is needed, and plain response-text data is generated when it is not.
In normal human interaction, when one of the two parties uses an expression image, the other often responds with an expression image as well. Therefore, in an embodiment, whether an expression image needs to be output is judged from the type of the interactive input data; specifically:
when the interactive input data contains an expression image, it is directly judged that an expression image needs to be output.
Furthermore, humans do not normally use an expression image in every interactive utterance. To simulate this, a preset expression image output frequency can be set according to how often humans ordinarily use expression images, so that the probability of the intelligent robot outputting an expression image matches the preset frequency and thus mimics human interactive behavior.
Therefore, in an embodiment, whether an expression image needs to be output is judged from the type of the interactive input data; specifically:
when the interactive input data is plain text, whether an expression image needs to be output is judged according to the preset expression image response frequency.
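A minimal sketch of these two rules together; the 0.3 response frequency is an arbitrary example value, not a figure from the patent:

import random

def should_output_image(input_has_image, response_frequency=0.3):
    if input_has_image:
        return True  # respond in kind when the user sent an expression image
    # plain-text input: output an image with the preset response frequency
    return random.random() < response_frequency

print(should_output_image(False))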
Further, in normal human interaction, people of different characters use expression images with different frequencies because their habits differ. For example, a lively person uses expression images far more often than a dull one. Therefore, in one embodiment, the frequency with which the intelligent robot uses expression images during interaction is determined according to the human character features it is to simulate.
Specifically, in one embodiment, an expression image response strategy is determined according to the personality parameters of the intelligent robot, and the strategy includes an expression image response frequency; whether an expression image needs to be output is then judged based on this strategy.
Further, in normal human interaction, different people care about different topic areas, people of different characters react differently to the same topic, and whether an expression image is used depends on the relationship between the current topic area and the character of the current interactor. Therefore, in one embodiment, the topic range within which the intelligent robot uses expression images is determined according to the human character features it is to simulate.
Specifically, in one embodiment, an expression image response strategy is determined according to the personality parameters of the intelligent robot, and the strategy includes an expression image response topic range; whether an expression image needs to be output is then judged based on this strategy.
Further, in normal human interaction, expression images can convey a person's emotions, and people of different characters express emotion to different degrees. For example, a lively person shows stronger emotions and tends to express excitement with an expression image, while a dull person is less emotionally expressive about the same thing and may still respond in plain text. Therefore, in an embodiment, the emotional application scenarios in which the intelligent robot uses expression images are determined according to the human character features it is to simulate.
Specifically, in an embodiment, an expression image response strategy is determined according to the personality parameters of the intelligent robot, and the strategy includes an expression image emotion response trigger strategy (which emotional application scenarios trigger an expression image response); whether an expression image needs to be output is then judged based on this strategy.
Further, in other embodiments, any two or all three of the above expression image response strategies may be combined into a new strategy configuration. Specifically, in one embodiment, an expression image response strategy is determined according to the personality parameters of the intelligent robot, and the strategy includes an expression image response frequency, an expression image response topic range and/or an expression image emotion response trigger strategy; whether an expression image needs to be output is then judged based on this strategy.
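A minimal sketch of such a combined strategy, using invented field names and values; a real system would derive these from the personality parameters:

import random
from dataclasses import dataclass, field

@dataclass
class ExpressionImagePolicy:
    response_frequency: float = 0.3                 # how often to use images
    topic_range: set = field(default_factory=lambda: {"games", "jokes"})
    emotion_triggers: set = field(default_factory=lambda: {"happy", "excited"})

    def should_respond_with_image(self, topic, emotion):
        if emotion in self.emotion_triggers:   # emotion response trigger
            return True
        if topic not in self.topic_range:      # topic range restriction
            return False
        return random.random() < self.response_frequency  # frequency rule

policy = ExpressionImagePolicy()
print(policy.should_respond_with_image("games", "neutral"))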
Furthermore, because the expression image response strategy constrains the expression image response topic range and/or the expression image emotion response trigger strategy, during actual interaction whether an expression image needs to be output must be determined from the topic range and/or emotion output requirements of the current interaction.
Specifically, in one embodiment, when judging whether an expression image needs to be output, the current topic range and/or emotion response parameters are first determined according to the semantic analysis result and/or the emotion analysis result; whether an expression image needs to be output is then judged according to that topic range and/or those emotion response parameters.
Specifically, as shown in fig. 4, in an embodiment, before interaction starts, the personality parameters of the intelligent robot are determined (S410); an expression image response strategy is then determined according to those parameters, fixing the expression image response frequency, the expression image response topic range and the expression image emotion response trigger strategy (S420).
During interaction, the type of the interactive input data is first judged (S430). If the interactive input data includes an expression image, it is directly determined that the current interactive output also needs to include an expression image (S440). If the interactive input data contains no expression image and is plain text, the current topic range and emotion response parameters are determined from the semantic analysis result and the emotion analysis result (S450); whether the current interactive output needs to contain an expression image is then determined from the topic range and emotion response parameters of step S450, based on the expression image response frequency, topic range and emotion response trigger strategy fixed in step S420 (S460).
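A minimal sketch of the Fig. 4 decision flow (S430-S460). The policy argument plays the role of the strategy fixed in S420; a trivial stand-in is inlined so the snippet runs on its own:

class _Policy:  # stand-in for the ExpressionImagePolicy sketched earlier
    def should_respond_with_image(self, topic, emotion):
        return emotion == "excited" or topic == "games"

def decide_image_output(interactive_input, policy):
    # S430: judge the type of the interactive input data
    if interactive_input.get("has_image"):
        return True                    # S440: the output also needs an image
    # S450: topic range and emotion response parameters from the analyses
    topic = interactive_input["topic"]
    emotion = interactive_input["emotion"]
    return policy.should_respond_with_image(topic, emotion)   # S460

print(decide_image_output(
    {"has_image": False, "topic": "games", "emotion": "excited"}, _Policy()))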
Further, some expression images can themselves express specific text information (or directly display specific text), in which case there is no need to output both the expression image and the text it expresses. Therefore, in one embodiment, when an expression image needs to be output, the output is further subdivided into outputting the expression image alone or outputting a mix of the expression image and the response text. When the expression image to be output can fully replace the response text, the response text need not be output and only the expression image is output; when it cannot, both the response text and the expression image are output.
Specifically, as shown in fig. 5, in an embodiment, the intelligent robot determines whether an expression image needs to be output during the interaction process (S510), and outputs only the response text if the expression image does not need to be output (S520).
If an expression image needs to be output, the required expression image is extracted from the expression image library (S530). Specifically, in one embodiment, the type of the expression image is determined according to the personality parameters of the intelligent robot; expression images whose meaning matches the response text are then screened out from all expression images of that type.
Next, the text information corresponding to the expression image extracted in step S530 is itself extracted (S540), and the matching degree between that text information and the response text is computed (S550). When the matching degree reaches a set threshold, only the expression image is output (S560); when it does not, both the expression image and the response text are output (S570).
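A minimal sketch of S540-S570; the word-overlap similarity and the 0.8 threshold are illustrative assumptions, not the patent's matching method:

def matching_degree(image_text, response_text):
    a = set(image_text.lower().split())
    b = set(response_text.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def compose_output(image_text, response_text, threshold=0.8):
    degree = matching_degree(image_text, response_text)   # S540-S550
    if degree >= threshold:
        return {"image": image_text}                      # S560: image only
    return {"image": image_text, "text": response_text}   # S570: both

print(compose_output("good night", "good night"))   # the image alone suffices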
Further, the invention also provides a storage medium storing program code that can implement the method according to the invention.
Furthermore, the invention also provides an intelligent robot system implementing the method. Specifically, as shown in fig. 6, the system includes:
an input acquisition module 610 configured to acquire interactive input data of a current interactive object;
an output module 620 configured to output the interactive response data to the current interactive object;
an interaction analysis module 630 configured to:
acquiring personality parameters of the intelligent robot;
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result;
generating a corresponding response text according to the semantic analysis result and the emotion analysis result;
and generating interactive response data containing an expression image and/or the response text, wherein the type of the expression image in the interactive response data matches the personality parameters of the intelligent robot and the meaning of the expression image matches the response text.
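A structural sketch of the three modules of Fig. 6; the method bodies are placeholders, and the real analysis logic is the pipeline sketched earlier:

class InputAcquisitionModule:              # 610
    def acquire(self):
        return "Hello, robot!"             # e.g. text captured from the user

class OutputModule:                        # 620
    def output(self, response):
        print(response)                    # e.g. render text and image

class InteractionAnalysisModule:           # 630
    def respond(self, user_input, personality):
        # semantic analysis + emotion calculation + response generation
        return {"text": "Hello there!", "expression_image": None}

acquirer, renderer, analyzer = (InputAcquisitionModule(), OutputModule(),
                                InteractionAnalysisModule())
renderer.output(analyzer.respond(acquirer.acquire(), {"image_type": "cartoon"}))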
Further, in one embodiment, the intelligent robot system is a child story machine interaction system. A child story machine is a smart device styled after a cartoon or animal character, or after an intellectual property (IP) figure: an educational robot that uses the robot's AI capabilities for human-computer interaction driven by storytelling needs.
Further, in an embodiment, the intelligent robot system relies on a cloud server for complex data processing. Specifically, as shown in fig. 7, the interaction analysis module 730 includes a networking interaction unit 731 through which it exchanges data with the robot cloud server 700, delegating complex data processing operations to the cloud server 700.
Specifically, as shown in fig. 8, in an application scenario, the interactive object 202 is a person (the user); the device 201 may be a child story machine or the user's smartphone, tablet, wearable device, etc.; and the robot cloud server 203 provides data processing support services (e.g., cloud storage and cloud computing) to the device 201.
An intelligent robot system is installed on the device 201. During human-computer interaction, the device 201 acquires the user's interactive input and sends it to the server 203; the server 203 performs semantic understanding and emotion calculation on the input, generates interactive response data (containing response text and/or an expression image) responding to it, and returns that data to the device 201. The device 201 then outputs the interactive response data to the user 202.
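A minimal sketch of this round trip, with an in-process object standing in for the real networked cloud server:

class CloudServer:                         # plays the role of server 203
    def process(self, user_input):
        # semantic understanding and emotion calculation happen server-side
        return {"text": "Once upon a time...", "expression_image": "smile"}

class Device:                              # plays the role of device 201
    def __init__(self, server):
        self.server = server

    def interact(self, user_input):
        response = self.server.process(user_input)  # upload input, get reply
        print(response)                             # output to the user (202)

Device(CloudServer()).interact("Tell me a story")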
Although embodiments of the present invention are described above, the description is provided only to aid understanding of the invention and is not intended to limit it. The method of the present invention admits various other embodiments. Those skilled in the art may make corresponding changes or modifications without departing from the spirit of the invention, and all such changes and modifications fall within the scope of the appended claims.

Claims (7)

1. An interactive output method for an intelligent robot, comprising:
obtaining personality parameters of the intelligent robot, wherein the personality parameters are matched with identity information of the interactive object, current interaction environment description information and/or user interaction demand information;
acquiring interactive input data of a current interactive object;
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result;
generating a corresponding response text according to the semantic analysis result and the emotion analysis result;
judging, according to a set strategy, whether an expression image needs to be output, and generating and outputting interactive response data containing the expression image and/or the response text according to the judgment result, wherein judging whether the expression image needs to be output comprises:
determining an expression image response strategy according to the personality parameters, wherein the expression image response strategy comprises an expression image response frequency, an expression image response topic range and/or an expression image emotion response trigger strategy;
and judging whether the expression image needs to be output based on the expression image response strategy;
wherein, when the expression image needs to be output, generating and outputting the interactive response data containing the expression image and/or the response text comprises:
determining the type of the expression image according to the personality parameters of the intelligent robot;
and selecting, from all expression images of that type, an expression image whose meaning matches the response text;
and when the expression image needs to be output, generating and outputting the interactive response data containing the expression image further comprises:
extracting text information corresponding to the expression image;
comparing the text information corresponding to the expression image with the response text;
and outputting only the expression image when the matching degree between the text information corresponding to the expression image and the response text reaches a set threshold;
wherein the type of the expression image is matched with the personality parameters, and the meaning of the expression image is matched with the response text.
2. The method of claim 1, wherein obtaining personality parameters of the intelligent robot comprises:
acquiring identity information of a current interactive object;
and calling up the personality parameters matched with the identity information.
3. The method of claim 1, wherein obtaining personality parameters of the intelligent robot comprises:
acquiring current interaction environment description information and/or user interaction demand information;
and calling up the personality parameters matched with the current interaction environment description information and/or the user interaction demand information.
4. The method of claim 1, wherein determining whether the expression image needs to be output comprises:
and when the interactive input data contains an expression image, judging that the expression image needs to be output.
5. The method of claim 1, wherein determining whether the expression image needs to be output comprises:
determining a current topic range and/or emotion response parameters according to the semantic analysis result and/or the emotion analysis result;
and judging whether the expression image needs to be output according to the topic range and/or the emotion response parameters.
6. A storage medium having stored thereon program code for implementing the method according to any one of claims 1-5.
7. An intelligent robotic system, the system comprising:
the input acquisition module is configured to acquire interactive input data of a current interactive object;
the output module is configured to output interactive response data to the current interactive object;
an interaction analysis module configured to:
obtaining personality parameters of the intelligent robot, wherein the personality parameters are matched with identity information of the interactive object, current interaction environment description information and/or user interaction demand information;
performing semantic analysis and emotion calculation on the interactive input data to obtain a semantic analysis result and an emotion analysis result;
generating a corresponding response text according to the semantic analysis result and the emotion analysis result;
judging, according to a set strategy, whether an expression image needs to be output, and generating the interactive response data containing the expression image and/or the response text according to the judgment result, wherein the interaction analysis module judges whether the expression image needs to be output through the following operations:
determining an expression image response strategy according to the personality parameters, wherein the expression image response strategy comprises an expression image response frequency, an expression image response topic range and/or an expression image emotion response trigger strategy;
judging whether the expression image needs to be output based on the expression image response strategy;
when the expression image needs to be output, performing the following operations to generate and output interactive response data containing the expression image and/or the response text:
determining the type of the expression image according to the personality parameters of the intelligent robot;
selecting, from all expression images of that type, an expression image whose meaning matches the response text;
and, when the expression image needs to be output, further performing the following operations:
extracting text information corresponding to the expression image;
comparing the text information corresponding to the expression image with the response text;
outputting only the expression image when the matching degree between the text information corresponding to the expression image and the response text reaches a set threshold;
wherein the type of the expression image is matched with the personality parameters of the intelligent robot, and the meaning of the expression image is matched with the response text.

Priority Applications (1)

Application Number: CN201710891490.2A; Priority Date: 2017-09-27; Filing Date: 2017-09-27; Title: Interactive output method and system for intelligent robot

Publications (2)

CN107807734A (en), published 2018-03-16
CN107807734B, published 2021-06-15

Family

ID=61592521

Family Applications (1)

Application Number: CN201710891490.2A (Active); Title: Interactive output method and system for intelligent robot

Country Status (1)

Country Link
CN (1) CN107807734B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109065018B (en) * 2018-08-22 2021-09-10 北京光年无限科技有限公司 Intelligent robot-oriented story data processing method and system
CN109460548B (en) * 2018-09-30 2022-03-15 北京光年无限科技有限公司 Intelligent robot-oriented story data processing method and system
CN109815463A (en) * 2018-12-13 2019-05-28 深圳壹账通智能科技有限公司 Control method, device, computer equipment and storage medium are chosen in text editing
CN110209784B (en) * 2019-04-26 2024-03-12 腾讯科技(深圳)有限公司 Message interaction method, computer device and storage medium
CN111984767A (en) * 2019-05-23 2020-11-24 北京搜狗科技发展有限公司 Information recommendation method and device and electronic equipment
CN110633361B (en) * 2019-09-26 2023-05-02 联想(北京)有限公司 Input control method and device and intelligent session server
CN111309862A (en) * 2020-02-10 2020-06-19 贝壳技术有限公司 User interaction method and device with emotion, storage medium and equipment
CN113658467A (en) * 2021-08-11 2021-11-16 岳阳天赋文化旅游有限公司 Interactive system and method for optimizing user behavior

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843381A (en) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 Data processing method for realizing multi-modal interaction and multi-modal interaction system
CN106200959A (en) * 2016-07-08 2016-12-07 北京光年无限科技有限公司 Information processing method and system towards intelligent robot
CN106297789A (en) * 2016-08-19 2017-01-04 北京光年无限科技有限公司 The personalized interaction method of intelligent robot and interactive system
CN106873773A (en) * 2017-01-09 2017-06-20 北京奇虎科技有限公司 Robot interactive control method, server and robot
CN106863300A (en) * 2017-02-20 2017-06-20 北京光年无限科技有限公司 A kind of data processing method and device for intelligent robot
CN106909896A (en) * 2017-02-17 2017-06-30 竹间智能科技(上海)有限公司 Man-machine interactive system and method for work based on character personality and interpersonal relationships identification
CN206311916U (en) * 2016-05-31 2017-07-07 北京光年无限科技有限公司 A kind of intelligent robot of exportable expression




Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant