WO2019165732A1 - Method and apparatus for generating reply information based on a robot's emotional state - Google Patents

Method and apparatus for generating reply information based on a robot's emotional state

Info

Publication number
WO2019165732A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
emotional
information
current
current user
Prior art date
Application number
PCT/CN2018/092877
Other languages
English (en)
French (fr)
Inventor
宋亚楠
邱楠
梁剑华
邓婧文
陈甜
Original Assignee
深圳狗尾草智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Publication of WO2019165732A1 publication Critical patent/WO2019165732A1/zh

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/001: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means, with emotions simulating means

Definitions

  • the present invention relates to the field of intelligent robot technology, and in particular, to a method and an apparatus for generating reply information based on an emotional state of a robot.
  • In view of the defects of the prior art, the present invention provides a method and an apparatus for generating reply information based on the emotional state of the robot, which can take the real interaction scenario into account, guide the generation of reply information, and realize a diversified human-computer interaction process.
  • In a first aspect, a method for generating reply information based on a robot's emotional state includes: acquiring emotion factors of the robot, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state; generating a robot emotion tag according to the emotion factors of the robot; and guiding the generation of reply information according to the robot emotion tag.
  • Further, acquiring the emotion factors of the robot includes: counting the robot's current remaining battery power and duration of use; detecting the robot's current network condition and activity status; and determining the robot's current mood state according to the remaining power, the duration of use, the network condition, the activity status, or pre-received mood-specific information.
  • Further, acquiring the emotion factors of the robot includes: constructing a knowledge graph, where the knowledge graph includes multiple knowledge graph subgraphs and each subgraph includes user data and user history interaction information; acquiring voice information or picture information from the robot's interaction with the current user; determining the current user ID or current user name from that information; extracting from the knowledge graph the subgraph whose user data matches the current user ID or user name; determining the robot's familiarity with the current user according to the completeness of the extracted subgraph; and determining the emotional history between the robot and the current user according to the user history interaction information in the extracted subgraph.
  • Further, the knowledge graph subgraph also includes robot environment data, and acquiring the emotion factors of the robot includes: acquiring multimodal information from the interaction with the current user; determining an environment name or environment ID from the multimodal information; extracting from the knowledge graph the subgraph whose robot environment data matches the environment name or ID; and determining the robot's emotional history with the current environment according to the extracted subgraph.
  • Further, the knowledge graph subgraph also includes user emotion data, which includes one or more of tone data, expression data, action data, and wording data input by the user; acquiring the emotion factors of the robot then includes: acquiring multimodal information from the interaction with the current user; extracting the current user's tone information, expression information, or action information from the multimodal information; extracting from the knowledge graph the subgraph whose user emotion data matches that tone, expression, or action information; and determining the current user's emotional state according to the extracted subgraph.
  • Further, the robot's emotion factors are described by multi-dimensional data, and generating the robot emotion tag according to the emotion factors includes converting the multi-dimensional emotion factors into one-dimensional data to obtain the robot emotion tag.
  • Further, guiding the generation of reply information according to the robot emotion tag includes: using the robot emotion tag as one of the inputs of a training model to guide the generation of the training model, and guiding the generation of reply information according to the generated training model, thereby determining the morphological categories actually used in the reply; or guiding the generation of reply information according to the robot emotion tag and pre-established rules, thereby determining the morphological categories actually used in the reply. The morphological categories include tone, intonation, action, wording, and expression.
  • In a second aspect, a method for generating reply information based on a robot's emotional state includes: acquiring the emotion factors of the robot as above; and directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
  • In a third aspect, a reply information generating apparatus based on a robot's emotional state includes: an emotion factor acquisition unit configured to acquire the emotion factors of the robot, namely the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state; a robot emotion tag determining unit configured to generate a robot emotion tag according to the emotion factors of the robot; and a robot emotion tag application unit configured to guide the generation of reply information according to the robot emotion tag.
  • In a fourth aspect, a reply information generating apparatus based on a robot's emotional state includes: an emotion factor acquisition unit configured to acquire the emotion factors of the robot as above; and an emotion factor application unit configured to directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
  • According to the above technical solutions, the method and apparatus for generating reply information based on the robot's emotional state can analyze multiple emotion factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, perform a comprehensive analysis to determine the robot emotion tag, and guide the generation of reply information through the robot emotion tag or directly through the above emotion factors. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place.
  • FIG. 1 is a flowchart of the reply information generation method provided in Embodiment 1;
  • FIG. 2 is a flowchart of the method for determining the robot's current mood state provided in Embodiment 2;
  • FIG. 3 is a flowchart of the method for determining the robot's familiarity with the current user and the emotional history between the robot and the current user provided in Embodiment 2;
  • FIG. 4 is a flowchart of the method for determining the robot's emotional history with the current environment provided in Embodiment 2;
  • FIG. 5 is a flowchart of the method for determining the current user's emotional state provided in Embodiment 2;
  • FIG. 6 is a flowchart of the reply information generation method provided in Embodiment 4;
  • FIG. 7 is a schematic connection diagram of the reply information generating apparatus provided in Embodiment 5;
  • FIG. 8 is a schematic connection diagram of the reply information generating apparatus provided in Embodiment 6.
  • Embodiment 1:
  • This embodiment provides a method for generating reply information based on the robot's emotional state. Referring to FIG. 1, the method includes:
  • Step S101: acquire the emotion factors of the robot, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state;
  • Specifically, the robot's emotion factors are described by multi-dimensional data, for example a multi-dimensional vector or a multi-dimensional linked list. The robot's current mood state refers to the robot's mood at the moment, such as happy, depressed, or sad. The robot's familiarity with the current user includes levels such as very familiar, generally familiar, and unfamiliar. The emotional history between the robot and the current user refers to the robot's historical emotion toward the user as determined from historical interaction data. The current user's emotional state refers to the user's emotion at the moment.
  • Step S102: generate a robot emotion tag according to the robot's emotion factors;
  • Specifically, this includes converting the emotion factors into one-dimensional data to obtain the robot emotion tag. Converting the multi-dimensional emotion factors into a one-dimensional emotion tag means that subsequent reply generation only needs to consider the one-dimensional tag rather than the multi-dimensional factors, so the reply information can be generated more quickly. The emotion tag can be described as a one-dimensional vector, a one-dimensional linked list, or a similar structure, as sketched below.
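  • For illustration only, the following Python sketch shows one way the multi-dimensional factors might be collapsed into a one-dimensional tag. The field names, weights, and discretization are assumptions for the sketch; the patent does not specify the conversion function.

```python
from dataclasses import dataclass

@dataclass
class EmotionFactors:
    """Multi-dimensional emotion factors (all field names are illustrative)."""
    mood: float            # robot's current mood state, e.g. -1.0 (bad) .. 1.0 (good)
    familiarity: float     # familiarity with the current user, 0.0 .. 1.0
    user_history: float    # emotional history with the current user, -1.0 .. 1.0
    env_history: float     # emotional history with the current environment, -1.0 .. 1.0
    user_state: float      # current user's emotional state, -1.0 .. 1.0

def to_emotion_tag(factors: EmotionFactors, n_tags: int = 5) -> int:
    """Collapse the multi-dimensional factors into a one-dimensional tag.

    The patent only states that multi-dimensional data is converted into
    one-dimensional data; the weighted average and discretization used here
    are assumed, illustrative choices.
    """
    weights = {"mood": 0.35, "familiarity": 0.15, "user_history": 0.2,
               "env_history": 0.1, "user_state": 0.2}
    score = sum(getattr(factors, name) * w for name, w in weights.items())
    # Map the continuous score in roughly [-1, 1] onto integer tags 1..n_tags.
    normalized = (score + 1.0) / 2.0
    return max(1, min(n_tags, int(normalized * n_tags) + 1))

# Example: a fairly positive overall state yields a mid/high emotion tag.
print(to_emotion_tag(EmotionFactors(0.6, 0.8, 0.4, 0.2, 0.5)))
```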
  • Step S103: guide the generation of reply information according to the robot emotion tag.
  • According to the above technical solution, the method for generating reply information based on the robot's emotional state provided in this embodiment can analyze multiple emotion factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, perform a comprehensive analysis to determine the robot emotion tag, and guide the generation of reply information through that tag. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place.
  • Embodiment 2:
  • Embodiment 2 adds, on the basis of Embodiment 1, methods for acquiring the emotion factors.
  • 1. The robot's current mood state. Referring to FIG. 2, the specific acquisition process is as follows:
  • Step S201: count the robot's current remaining battery power and duration of use.
  • Step S202: detect the robot's current network condition and activity status.
  • Step S203: determine the robot's current mood state according to the remaining power, the duration of use, the network condition, the activity status, or pre-received mood-specific information.
  • Specifically, the robot can determine its current mood based on the remaining power, duration of use, network condition, activity status, or mood-specific information. For example, if the robot's remaining power falls below 10%, it enters a hungry state, and its current mood state is to ask the user to charge it; whatever the user inputs, the robot will first ask the user to charge it before replying to the input. Similarly, when the robot's network condition is poor, its current mood state is to ask the user to check the network, so the robot will first make that request before replying to any user input. A minimal sketch of such a rule-based mood update is shown below.
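  • As a purely illustrative sketch, the rule-based mood update described above might look as follows; the thresholds and mood names are assumptions, since the patent only names the input signals.

```python
from typing import Optional

def current_mood_state(battery_pct: float,
                       hours_in_use: float,
                       network_ok: bool,
                       busy: bool,
                       pushed_mood: Optional[str] = None) -> str:
    """Determine the robot's current mood state from simple telemetry.

    The thresholds and mood names below are illustrative assumptions; the
    patent only lists the signals (remaining power, duration of use, network
    condition, activity status, pre-received mood-specific information).
    """
    # Mood-specific information pushed by the developer overrides everything,
    # e.g. a "happy" mood pushed for the duration of the Spring Festival.
    if pushed_mood is not None:
        return pushed_mood
    if battery_pct < 10.0:
        return "hungry: ask the user to charge me"
    if not network_ok:
        return "uneasy: ask the user to check the network"
    if busy:
        return "focused"
    if hours_in_use > 8.0:
        return "tired"
    return "content"

print(current_mood_state(battery_pct=7.5, hours_in_use=2.0,
                         network_ok=True, busy=False))
```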
  • The mood-specific information can be sent to the robot in advance by the developer. For example, it can be "happy" information pushed by the developer during the Spring Festival; in that case the robot no longer updates its mood state according to the remaining power and similar signals, so that it stays happy throughout the Spring Festival and interacts with the user in a more cheerful and lively emotional state.
  • Besides the factors above, which other information also influences the robot's current mood can generally be set according to product requirements, or an expert system can be used in which experts specify the rules for determining the robot's current mood based on findings from psychology.
  • With the reply information generation method based on the robot's emotional state provided in this embodiment, the various states of the robot can be detected and counted, and the robot's current mood state is determined by the remaining power, the network condition, the activity the robot is carrying out, the robot's duration of use, or the mood-specific information pushed by the developer.
  • 2. The robot's familiarity with the current user and the emotional history between the robot and the current user. Referring to FIG. 3, the specific acquisition process is as follows:
  • Step S301: construct a knowledge graph; the knowledge graph includes multiple knowledge graph subgraphs, and each subgraph includes user data and user history interaction information.
  • Specifically, the knowledge graph contains the attributes of the various users and robots, and a knowledge graph subgraph is formed by extracting part of those attributes. The knowledge graph can be stored in two ways: unified storage and block storage. Unified storage means that all robot attributes and user attributes are stored in a single graph store, so that a subgraph only needs to be extracted from that one store. Block storage means that all robot attributes and user attributes are divided into multiple storage blocks, for example all robot attributes grouped together and stored as a robot store and all user attributes grouped together and stored as a user store; in that case the robot's subgraph is extracted from the robot store and the user's subgraph from the user store. A minimal sketch of such a subgraph extraction is given below.
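  • The following minimal Python sketch illustrates the idea of storing attributes and extracting a per-user or per-robot subgraph under block storage. The nested-dictionary data model and the identifiers are assumptions; the patent does not prescribe a storage format.

```python
# A minimal, in-memory stand-in for the knowledge graph described above.
# Storing attributes as nested dicts and extracting a per-user "subgraph"
# by key is an illustrative simplification, not the patent's data model.

knowledge_graph = {
    "users": {
        "user_001": {
            "name": "Alice",
            "grade": "Grade 5",
            "region": "Shenzhen",
            "history": [
                {"text": "Why did you play that aloud at work?!", "tone": "angry"},
            ],
        },
    },
    "robots": {
        "robot_A": {"model": "companion-v1", "environments": {"office": "negative"}},
    },
}

def extract_user_subgraph(graph: dict, user_id: str) -> dict:
    """Extract the knowledge-graph subgraph whose user data matches user_id."""
    return graph["users"].get(user_id, {})

def extract_robot_subgraph(graph: dict, robot_id: str) -> dict:
    """Extract the robot's own subgraph (block storage: separate robot store)."""
    return graph["robots"].get(robot_id, {})

print(extract_user_subgraph(knowledge_graph, "user_001")["grade"])
```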
  • Step S302: acquire the voice information or picture information from the robot's interaction with the current user;
  • Step S303: determine the current user ID or current user name according to the voice information or the picture information;
  • Step S304: extract from the knowledge graph the knowledge graph subgraph whose user data matches the current user ID or current user name;
  • Step S305: determine the robot's familiarity with the current user according to the completeness of the knowledge graph subgraph.
  • Specifically, the completeness refers to the number of attributes contained in the knowledge graph subgraph. For example, in an education scenario, if a robot's purpose is to provide course-related supplementary tutoring to students in compulsory education, the information that can be filled into the knowledge graph subgraph can be enumerated as follows:
  • The first type of information: name (ID), identifying information (voiceprint, fingerprint, facial image, etc., which the robot uses to recognize the user), grade, and region. This information is closely tied to the educational function: knowing the user's grade and region tells the robot which subjects and knowledge the user has studied, is studying, and will study.
  • The second type of information: age, gender, and class. This information plays a supporting role in the educational function; students of different ages and genders have different characteristics, and class information helps the robot understand the user's team of teachers and the specific teaching progress.
  • The third type of information: historical information such as past scores, interaction history, and records of mistakes. This information also supports the educational function; it is obtained by the robot through teaching and interaction, and is used to track the user's learning and to customize teaching and review.
  • In this product scenario, if the first type of information is filled in completely, the familiarity between the robot and the user is at a pass level (60 out of 100); if the second type of information is also complete, the familiarity is at a good level (80 out of 100); and if the third type of information is complete and new information is regularly added to the historical information, the familiarity is at an excellent level (95 out of 100). It can be seen that the more completely the knowledge graph subgraph is filled in, the more content the robot has exchanged with the user and the higher the familiarity between them. A sketch of such a completeness-based scoring rule follows.
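  • A possible reading of this tiered rule is sketched below; the attribute groupings mirror the education example and the scores follow the text, while the checking logic itself is an assumption.

```python
# Illustrative sketch of the tiered familiarity rule described above.
# The attribute groupings mirror the education example; the scores
# (60/80/95) follow the text, but the checking logic is an assumption.

TIER_1 = {"name", "identifying_info", "grade", "region"}
TIER_2 = {"age", "gender", "class"}
TIER_3 = {"scores", "interaction_history", "mistakes"}

def familiarity(subgraph: dict, recently_updated: bool = False) -> int:
    """Return a familiarity score (0-100) from the subgraph's completeness."""
    filled = {k for k, v in subgraph.items() if v}
    score = 0
    if TIER_1 <= filled:
        score = 60                      # pass level
        if TIER_2 <= filled:
            score = 80                  # good level
            if TIER_3 <= filled and recently_updated:
                score = 95              # excellent level
    return score

profile = {"name": "Alice", "identifying_info": "voiceprint-17", "grade": "5",
           "region": "Shenzhen", "age": 11, "gender": "F", "class": "5-2",
           "scores": [92, 88], "interaction_history": ["..."], "mistakes": ["..."]}
print(familiarity(profile, recently_updated=True))   # -> 95
```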
  • Step S306: determine the emotional history between the robot and the current user according to the user history interaction information of the extracted knowledge graph subgraph.
  • Specifically, the user history interaction information includes tone data, expression data, action data, wording data, and so on, input by the user, and the emotional history between the robot and the current user can be determined from the information the user has fed back. For example, suppose the robot's purpose is to act as a mobile phone assistant. Once, during working hours, the robot read out the messages received on the user's phone through its speaker, and this behavior seriously disturbed the user and other people at work, so the user fed an angry emotion back to the robot by voice or by message, for example replying via SMS with "Are you stupid, playing that out loud at a time like this?". The robot therefore records the user's emotion in that scene.
  • With the reply information generation method based on the robot's emotional state provided in this embodiment, once the current user has been identified from the received voice or picture information, the history of the robot's interaction with that user and the corresponding knowledge graph subgraph can be extracted from the robot's knowledge graph, yielding the robot's goodwill toward the current user, its intimacy with the current user, its degree of understanding of and familiarity with the current user, and the emotional history between the robot and the current user. All of this information will influence the robot's emotion.
  • 3. The robot's emotional history with the current environment. Referring to FIG. 4, the specific acquisition process is as follows:
  • Step S401: acquire the multimodal information from the interaction with the current user.
  • Step S402: determine an environment name or environment ID from the multimodal information.
  • Step S403: extract from the knowledge graph the knowledge graph subgraph whose robot environment data matches the environment name or environment ID;
  • Step S404: determine the robot's emotional history with the current environment according to the extracted knowledge graph subgraph.
  • Specifically, the robot derives the influence of the current environment on its emotion from its interaction history with the user. For the scenario above, for example, when the user is at work and the phone receives a new message, the robot knows from the interaction history that new messages must not be played out loud in that scene. In this way the robot can recognize the current scene from the multimodal information, extract the content of its knowledge graph related to the current environment, extract the subgraph related to the current environment, and determine the influence of the current environment on the robot's emotion. A minimal lookup sketch follows.
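  • For illustration, the lookup of the robot's emotional history with a recognized environment might be sketched as follows; the environment IDs and sentiment labels are assumptions.

```python
# Illustrative lookup of the robot's emotional history with the current
# environment. The environment IDs and sentiment values are assumptions.

robot_env_subgraph = {
    "office_daytime": {"sentiment": "negative",
                       "note": "playing messages aloud here upset the user"},
    "home_evening": {"sentiment": "positive"},
}

def env_emotional_history(subgraph: dict, environment_id: str) -> str:
    """Return the recorded emotional history for the recognized environment."""
    entry = subgraph.get(environment_id)
    return entry["sentiment"] if entry else "neutral"

# The environment ID would normally be recognized from multimodal input
# (e.g. camera plus microphone); here it is passed in directly for brevity.
print(env_emotional_history(robot_env_subgraph, "office_daytime"))  # negative
```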
  • 4. The current user's emotional state. For the current user's emotional state, the knowledge graph subgraph also includes user emotion data; the user emotion data includes one or more of the tone data, expression data, action data, and wording data input by the user. Referring to FIG. 5, the specific acquisition process is as follows:
  • Step S501: acquire the multimodal information from the interaction with the current user.
  • Step S502: extract the current user's tone information, expression information, or action information from the multimodal information.
  • Step S503: extract from the knowledge graph the knowledge graph subgraph whose user emotion data matches the tone information, the expression information, or the action information;
  • Step S504: determine the current user's emotional state according to the extracted knowledge graph subgraph.
  • For example, the robot can obtain the tone, expression, actions, and so on of the user's input through the multimodal information, extract the subgraph related to that user from the robot's knowledge graph, and analyze the user emotion represented by the tone, expression, actions, and wording of the input as well as the user's current feeling toward the robot; the robot can then adjust its own emotion based on this information, as in the sketch below.
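  • The following sketch illustrates, under assumed cue vocabularies and a simple overlap score, how tone, expression, and wording cues might be combined into a coarse user emotional state; the patent only names the signal types, not a classification method.

```python
# Illustrative sketch: combine simple multimodal cues (tone, expression,
# wording) into a coarse user emotional state. The cue vocabulary and the
# scoring are assumptions; the patent only names the signal types.

NEGATIVE_CUES = {"angry", "frown", "shout", "stupid", "annoying"}
POSITIVE_CUES = {"calm", "smile", "laugh", "please", "thanks"}

def user_emotional_state(tone: str, expression: str, wording: str) -> str:
    """Classify the current user's emotional state from multimodal cues."""
    cues = {tone, expression, *wording.lower().split()}
    score = len(cues & POSITIVE_CUES) - len(cues & NEGATIVE_CUES)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(user_emotional_state(tone="shout", expression="frown",
                           wording="why are you so annoying"))  # negative
```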
  • Embodiment 3:
  • Embodiment 3 adds, on the basis of the above embodiments, methods for generating the reply information. For guiding and controlling the generation of reply information, the specific implementation is as follows:
  • In the first approach, the robot emotion tag is used as one of the inputs of a training model to guide the generation of the training model; the generation of reply information is then guided by the generated training model, which determines the morphological categories actually used in the reply.
  • Specifically, the training model is an artificial intelligence model, obtained by having the artificial intelligence model learn over all of the robot emotion tags. Starting from how humans learn, this approach uses machine learning and other artificial intelligence methods to model human emotional responses, determines the emotional response a person would have for different values of the above information, and then uses that information as the input of the artificial intelligence model; by training the model, its output gets closer and closer to a person's true emotional response. A minimal training sketch follows.
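  • As a toy stand-in for such a training model, the sketch below fits a one-feature model in which the robot emotion tag predicts how positive a human's wording tends to be; the training data and the least-squares choice are purely illustrative assumptions.

```python
# Minimal sketch of "emotion tag as one input of a training model".
# A single-feature least-squares fit stands in for the artificial
# intelligence model; the training data is synthetic and illustrative.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (tiny stand-in for a model)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x

# Input: robot emotion tag (1 = most formal/negative ... 5 = most cheerful).
# Target: how positive/casual a human's wording tends to be in that state.
tags = [1, 2, 3, 4, 5]
positivity = [0.1, 0.3, 0.5, 0.75, 0.95]

a, b = fit_line(tags, positivity)
predict = lambda tag: a * tag + b
print(round(predict(4), 2))   # predicted wording positivity for tag 4
```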
  • For the selection of the morphological categories, the specific implementation is as follows: according to the state of the robot emotion tag and the context information, the candidates of each morphological category are ranked, the context information being the interaction information between the robot and the current user; according to the ranking result for each morphological category, the final option for that category is determined and used to interact with the current user.
  • For example, if the robot's emotion tag is 1, the robot will choose polite wording such as "you (formal)", "please", and "sorry to trouble you"; if the emotion tag is 2, it will choose casual wording such as "dear", "okay then", "ah", and "mwah". Each robot emotion tag has corresponding candidate tones, intonations, actions, wordings, and expressions, and each morphological category may contain many options. In an actual interaction, the robot ranks the candidates according to the linguistic context and the conversation context, then selects the highest-scoring candidate as the option used to interact with the user. This presents a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place; a sketch of this ranking-and-selection step follows.
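  • The ranking-and-selection step might be sketched as follows; the candidate wording lists and the context-overlap score are assumptions made for the example.

```python
# Illustrative ranking of wording candidates per emotion tag. The candidate
# lists and the context-overlap score are assumptions made for the sketch.

CANDIDATE_WORDINGS = {
    1: ["you (formal)", "please", "sorry to trouble you"],
    2: ["dear", "okay then", "ah", "mwah"],
}

def rank_candidates(emotion_tag: int, context: str) -> list:
    """Rank a morphological category's candidates for the given tag/context."""
    candidates = CANDIDATE_WORDINGS.get(emotion_tag, [])
    context_words = set(context.lower().split())
    # Score by overlap with the recent interaction context; ties keep order.
    return sorted(candidates,
                  key=lambda c: len(set(c.split()) & context_words),
                  reverse=True)

def pick_final_option(emotion_tag: int, context: str) -> str:
    """Select the highest-scoring candidate to use in the reply."""
    ranked = rank_candidates(emotion_tag, context)
    return ranked[0] if ranked else ""

print(pick_final_option(2, "okay then see you tomorrow"))
```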
  • In the second approach, considering that in practice it may not be technically possible to train a model that meets the requirements, and that the original training data for modeling human responses may be insufficient, a second way of guiding and controlling reply generation is provided: the generation of reply information is guided according to the robot emotion tag and pre-established rules, which determine the morphological categories actually used in the reply; the morphological categories include tone, intonation, action, wording, and expression.
  • Specifically, the rules mainly refer to grammar rules. For example, when the robot's emotional state is good and the user inputs the request "Set an alarm for eight o'clock tomorrow morning for me", the robot processes the input, recognizes that the user's intent is to set an alarm, and extracts the time as eight o'clock tomorrow morning. The robot sets the alarm for the user and looks up the grammar rule corresponding to setting an alarm: "[modal particle]", "I have set the alarm for [time point] for you", "[customized part]". According to the robot's emotional state, the customized part and the modal particle should be filled with positive, upbeat words, so the robot may generate a reply such as: "Sure thing, I've set your alarm for eight o'clock tomorrow morning; I'll wake you right at eight."
  • If the robot's emotional state is poor and the user inputs the same request, "Set an alarm for eight o'clock tomorrow morning for me", the robot again recognizes the alarm intent, extracts the time as eight o'clock tomorrow morning, sets the alarm, and looks up the same rule: "[modal particle]", "I have set the alarm for [time point] for you", "[customized part]". According to the robot's emotional state, the customized part and the modal particle should be filled with negative, relatively downbeat words, so the robot may generate a reply such as: "Hmph, I've set your alarm for eight o'clock tomorrow morning. I'm in a bad mood, so stop bothering me." A template-filling sketch of this rule-based approach follows.
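  • A minimal template-filling sketch of this rule-based approach is shown below, assuming the alarm-clock grammar rule from the text; the filler word lists are illustrative.

```python
# Illustrative template filling for the alarm-clock grammar rule above.
# The word lists are assumptions; the template structure follows the text:
# [modal particle] + "I have set the alarm for [time point] for you" + [custom part].

FILLERS = {
    "good": {"particle": "Sure thing,",
             "custom": "I'll wake you right on time."},
    "bad":  {"particle": "Hmph,",
             "custom": "I'm in a bad mood, so stop bothering me."},
}

def alarm_reply(time_point: str, mood: str) -> str:
    """Generate an alarm-confirmation reply according to the robot's mood."""
    f = FILLERS.get(mood, FILLERS["good"])
    return f'{f["particle"]} I have set the alarm for {time_point} for you. {f["custom"]}'

print(alarm_reply("eight o'clock tomorrow morning", "good"))
print(alarm_reply("eight o'clock tomorrow morning", "bad"))
```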
  • Embodiment 4:
  • This embodiment of the present invention provides another method for generating reply information based on the robot's emotional state. Referring to FIG. 6, the method includes:
  • Step S601: acquire the emotion factors of the robot, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state;
  • Step S602: directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
  • In practical application, the method of this embodiment feeds the emotion factors into a pre-built training model to guide the generation of the reply information, or guides the generation of the reply information according to pre-established rules.
  • For example, the robot once reminded the user to charge it at night with the lights off, frightening the user who was resting, which led the user to speak harshly to the robot. Later, under the same conditions and in the same scene, the robot no longer proactively asks the user to charge it, even if its battery is low and that affects its current mood state. A sketch of how the emotion factors could directly gate such a request is shown below.
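  • Purely as a sketch, the direct use of emotion factors to gate a proactive request could look as follows; the specific rule is an assumption based on the night-time charging example.

```python
# Illustrative sketch of emotion factors directly guiding (here: suppressing)
# a proactive reply, per the night-time charging example. The rule itself is
# an assumption; the patent only says the factors guide generation directly.

def should_ask_for_charge(battery_pct: float,
                          env_history: str,
                          user_state: str) -> bool:
    """Decide whether to proactively ask the user for charging."""
    if battery_pct >= 10.0:
        return False
    # If this environment previously produced a negative reaction (e.g. the
    # user was startled at night) or the user currently seems annoyed, stay quiet.
    if env_history == "negative" or user_state == "negative":
        return False
    return True

print(should_ask_for_charge(7.0, env_history="negative", user_state="neutral"))
# -> False: low battery, but the robot stays quiet in this scene.
```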
  • According to the above technical solution, the method for generating reply information based on the robot's emotional state provided in this embodiment can analyze multiple emotion factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, perform a comprehensive analysis, and directly guide the generation of reply information through these emotion factors. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place.
  • Embodiment 5:
  • This embodiment of the present invention provides a reply information generating apparatus based on the robot's emotional state. Referring to FIG. 7, the apparatus includes an emotion factor acquisition unit 101, a robot emotion tag determining unit 102, and a robot emotion tag application unit 103.
  • The emotion factor acquisition unit 101 is configured to acquire the emotion factors of the robot, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state. The robot emotion tag determining unit 102 is configured to determine the robot emotion tag according to these factors. The robot emotion tag application unit 103 is configured to guide the generation of reply information according to the robot emotion tag. A structural sketch of these units is given below.
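  • A structural sketch of the three units, with assumed method names and trivial placeholder logic, might look as follows; only the unit decomposition follows the text.

```python
# Illustrative class structure mirroring the three units of Embodiment 5.
# The method names and the trivial logic inside them are assumptions; only
# the unit decomposition follows the text.

class EmotionFactorAcquisitionUnit:
    def acquire(self) -> dict:
        """Collect the five emotion factors (stubbed with fixed values here)."""
        return {"mood": 0.2, "familiarity": 0.8, "user_history": 0.5,
                "env_history": 0.0, "user_state": 0.4}

class RobotEmotionTagDeterminingUnit:
    def determine(self, factors: dict) -> int:
        """Collapse the factors into a one-dimensional emotion tag (1..5)."""
        avg = sum(factors.values()) / len(factors)
        return max(1, min(5, int((avg + 1.0) / 2.0 * 5) + 1))

class RobotEmotionTagApplicationUnit:
    def reply(self, tag: int, user_input: str) -> str:
        """Guide reply generation according to the emotion tag."""
        opener = "Sure thing," if tag >= 3 else "Alright,"
        return f"{opener} handling: {user_input}"

factors = EmotionFactorAcquisitionUnit().acquire()
tag = RobotEmotionTagDeterminingUnit().determine(factors)
print(RobotEmotionTagApplicationUnit().reply(tag, "set an alarm for 8 am"))
```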
  • According to the above technical solution, the reply information generating apparatus based on the robot's emotional state provided in this embodiment can analyze multiple emotion factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, perform a comprehensive analysis to determine the robot emotion tag, and guide the generation of reply information through that tag. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place.
  • Embodiment 6:
  • This embodiment of the present invention provides another reply information generating apparatus based on the robot's emotional state. Referring to FIG. 8, the apparatus includes an emotion factor acquisition unit 101 and an emotion factor application unit 201. The emotion factor acquisition unit 101 is configured to acquire the emotion factors of the robot; the emotion factors include the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, and the emotion factors are represented as a vector. The emotion factor application unit 201 is configured to directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
  • According to the above technical solution, the reply information generating apparatus based on the robot's emotional state provided in this embodiment can analyze multiple emotion factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state, perform a comprehensive analysis, and directly guide the generation of reply information through these emotion factors. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way adapted to the person, the time, the matter at hand, and the place.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The present invention belongs to the field of intelligent robot technology and provides a method and an apparatus for generating reply information based on a robot's emotional state. The method includes acquiring the emotion factors of the robot; determining a robot emotion tag according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state; and guiding the generation of reply information through the robot emotion tag, or directly guiding the generation of reply information through the above emotion factors. The method and apparatus for generating reply information based on the robot's emotional state according to the present invention can take the real interaction scenario into account, guide the generation of reply information, and realize a diversified human-computer interaction process.

Description

基于机器人情绪状态的回复信息生成方法、装置 技术领域
本发明涉及智能机器人技术领域,具体涉及一种基于机器人情绪状态的回复信息生成方法、装置。
背景技术
目前,涉及到人机交互技术的产品和平台甚多,大多通过对用户语音或多模态输入的处理分析,从中获取各种信息,根据这些信息从目标数据库或知识库中提取或生成回复信息,回复给用户。
但是,现有技术潜在的问题是:不论用户在何时、何种情况下与产品进行交互,在大多数情况下,产品会对用户的相同输入给予相同的回复。这明显不符合人与人交互的情况。
如何结合真实的交互场景,指导回复信息的生成,实现多样化的人机交互过程,是本领域技术人员亟需解决的问题。
发明内容
针对现有技术中的缺陷,本发明提供了一种基于机器人情绪状态的回复信息生成方法、装置,能够结合真实的交互场景,指导回复信息的生成,实现多样化的人机交互过程。
第一方面,一种基于机器人情绪状态的回复信息生成方法,包括:
获取机器人的情绪因子,所述情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
根据机器人的情绪因子生成机器人情绪标签;
根据所述机器人情绪标签,指导生成回复信息。
进一步地,获取机器人的情绪因子,包括:
统计所述机器人当前的剩余电量、使用时长;
检测所述机器人当前的网络状况、活动状况;
根据所述剩余电量、所述使用时长、所述网络状况、所述活动状况或预接收的心情特定信息,确定所述机器人的当前心情状态。
进一步地,获取机器人的情绪因子,包括:
构建知识图谱;所述知识图谱中包括多个知识图谱子图;所述知识图谱子图包括用户数据和用户历史交互信息;
获取机器人与当前用户交互的语音信息或图片信息;
根据所述语音信息或所述图片信息,确定当前用户ID或当前用户名称;
从知识图谱中提取用户数据与当前用户ID或当前用户名称相匹配的知识图谱子图;
根据知识图谱子图的完善程度,确定所述机器人对当前用户的熟悉度;
根据提取得到的知识图谱子图的用户历史交互信息,确定所述机器人与当前用户的情感历史。
进一步地,所述知识图谱子图还包括机器人环境数据;所述获取机器人的情绪因子,包括:
获取机器人与当前用户交互的多模态信息;
从所述多模态信息中确定环境名称或环境ID;
从知识图谱中提取机器人环境数据与环境名称或环境ID相匹配的知识图谱子图;
根据提取到的知识图谱子图,确定所述机器人对当前环境的情感历史。
进一步地,所述知识图谱子图还包括用户情感数据;所述用户情感数据包括用户输入的语气数据、表情数据、动作数据和措辞数据中的一种或 几种;所述获取机器人的情绪因子,包括:
获取机器人与当前用户交互的多模态信息;
从所述多模态信息中提取当前用户的语气信息、表情信息或动作信息;
从知识图谱中提取用户情感数据与所述语气信息、所述表情信息或所述动作信息相匹配的知识图谱子图;
根据提取到的知识图谱子图,确定所述当前用户的情感状态。
进一步地,所述机器人的情绪因子采用多维数据描述;所述根据机器人的情绪因子生成机器人情绪标签,包括:
将情绪因子转换为一维数据,得到所述机器人情绪标签。
进一步地,根据所述机器人情绪标签,指导生成回复信息,包括:
将所述机器人情绪标签作为训练模型的输入信息之一,指导训练模型的生成;根据生成的训练模型指导回复信息的生成,确定回复信息时具体采用的形态类别;
或根据所述机器人情绪标签和预先制定的规则,指导回复信息的生成,确定回复信息时具体采用的形态类别;
所述形态类别包括语气、语调、动作、措辞、表情。
第二方面,一种基于机器人情绪状态的回复信息生成方法,包括:
获取机器人的情绪因子,所述情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
根据机器人的当前心情状态、所述机器人对当前用户的熟悉度、机器人与当前用户的情感历史、所述机器人对当前环境的情感历史及所述当前用户情感状态,直接指导生成回复信息。
第三方面,一种基于机器人情绪状态的回复信息生成装置,包括:
情绪因子获取单元,用于获取机器人的情绪因子,所述情绪因子包括 机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
机器人情绪标签确定单元,用于根据机器人的情绪因子生成机器人情绪标签;
机器人情绪标签应用单元,用于根据所述机器人情绪标签,指导生成回复信息。
第四方面一种基于机器人情绪状态的回复信息生成装置,包括:
情绪因子获取单元,用于获取机器人的情绪因子,所述情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
情绪因子应用单元,用于根据机器人的当前心情状态、所述机器人对当前用户的熟悉度、机器人与当前用户的情感历史、所述机器人对当前环境的情感历史及所述当前用户情感状态,直接指导生成回复信息。
由上述技术方案可知,本实施例提供的基于机器人情绪状态的回复信息生成方法、装置,能够对多种情绪因子进行分析,例如,机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态,以进行综合分析,确定机器人情绪标签,通过机器人情绪标签指导回复信息的生成,或通过上述情绪因子直接指导回复信息的生成,实现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
附图说明
为了更清楚地说明本发明具体实施方式或现有技术中的技术方案,下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍。在所有附图中,类似的元件或部分一般由类似的附图标记标识。附图中,各元件或部分并不一定按照实际的比例绘制。
图1示出了实施例一提供的回复信息生成控制方法的方法流程图;
图2示出了实施例二提供的机器人的当前心情状态确认的方法流程图;
图3示出了实施例二提供的机器人对当前用户的熟悉度和机器人与当前用户的情感历史确认的方法流程图;
图4示出了实施例二提供的机器人对当前环境的情感历史确认的方法流程图;
图5示出了实施例二提供的当前用户的情感状态确认的方法流程图;
图6示出了实施例四提供的回复信息生成控制方法的方法流程图；
图7示出了实施例五提供的回复信息生成装置的连接示意图;
图8示出了实施例六提供的回复信息生成装置的连接示意图。
具体实施方式
下面将结合附图对本发明技术方案的实施例进行详细的描述。以下实施例仅用于更加清楚地说明本发明的技术方案,因此只是作为示例,而不能以此来限制本发明的保护范围。
需要注意的是,除非另有说明,本申请使用的技术术语或者科学术语应当为本发明所属领域技术人员所理解的通常意义。
实施例一:
本实施例提供一种基于机器人情绪状态的回复信息生成方法,参见图1,该方法包括:
步骤S101,获取机器人的情绪因子,所述情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
具体地,机器人的情绪因子采用多维数据描述,例如:可以采用多维向量、多维链表来描述。机器人的当前心情状态是指机器人当下的心情,例如:开心、郁闷、伤心等。机器人对当前用户的熟悉度包括:很熟悉、 一般熟悉、不熟悉等。机器人与当前用户的情感历史是指根据历史交互数据判断得到的机器人对用户的历史情感。当前用户情感状态是指用户当下的情感。
步骤S102,根据机器人的情绪因子生成机器人情绪标签;
具体包括:将情绪因子转换为一维数据,得到所述机器人情绪标签。
具体地,将多维的情绪因子转换为一维的情绪标签,这样在后续回复信息时,只需要考虑一维的情绪标签即可,不需要考虑多维的情绪因子,能更快生成回复信息。情绪标签可以用一维向量、一维链表等形式描述。
步骤S103,根据机器人情绪标签,指导生成回复信息。
由上述技术方案可知,本实施例提供的基于机器人情绪状态的回复信息生成方法,能够对多种情绪因子进行分析,例如,机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、当前环境的情感历史和当前用户情感状态,以进行综合分析,确定机器人情绪标签,通过机器人情绪标签指导回复信息的生成,实现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
实施例二:
实施例二在实施例一的基础上,增加了情绪因子的获取方法。
1、机器人的当前心情状态。
在情绪因子处理方面,针对机器人的当前心情状态,参见图2,具体获取过程如下:
步骤S201,统计机器人当前的剩余电量、使用时长。
步骤S202,检测机器人当前的网络状况、活动状况。
步骤S203,根据剩余电量、使用时长、网络状况、活动状况或预接收的心情特定信息,确定机器人的当前心情状态。
具体地,机器人可以根据剩余电量、使用时长、网络状况、活动状况或心情特定信息确定当下的心情。例如,如果机器人剩余电量低于10%为 饥饿状态,机器人的当前心情状态是请求用户为自己充电。所以不论用户输入什么内容,在对用户输入进行回复之前,机器人都会先请求用户为自己充电。还例如:机器人的网络状态不良时,机器人的当前心情状态是请求用户检查自己的网络,所以不论用户输入什么内容,在对用户输入进行回复之前,机器人都会先请求用户检查自己的网络。
心情特定信息可以由开发人员预先发送给机器人。例如:心情特定信息可以是研发者在春节期间推送的开心信息,机器人将不再根据剩余电量等信息更新心情状态,使得机器人在春节期间都能一直保持开心心情,机器人都会以较为开心活泼的情绪状态与用户交互。
除了上述几种因素,至于其他哪些信息还会影响机器人当前的心情,一般可以根据产品需求设定,也可以采用专家系统,直接由专家根据心理学研究成果,指定机器人当前情绪的确定规则。
本实施例基于机器人情绪状态的回复信息生成方法,能够对机器人的各种状态进行检测与统计,机器人的当前心情状态会受到剩余电量、网络情况、机器人正在进行的活动、机器人使用时长,或研发者推送的心情特定信息所决定。
2、机器人对当前用户的熟悉度和机器人与当前用户的情感历史。
针对机器人对当前用户的熟悉度和机器人与当前用户的情感历史,参见图3,具体获取过程如下:
步骤S301,构建知识图谱;所述知识图谱中包括多个知识图谱子图;所述知识图谱子图包括用户数据和用户历史交互信息;
具体地,知识图谱中包括各种用户或机器人的属性,知识图谱子图就是提取知识图谱中部分属性构成。知识图谱的存储方式可以为两种:统一存储和分块存储。统一存储是指所有的机器人属性和用户属性均存储在一个图库中,这样,在提取知识图谱子图时,只需要从该图库中提取即可。分块存储是指将所有的机器人属性和用户属性分成多个存储块进行存储。 例如:将所有机器人属性分成一组,作为机器人图库进行存储;将所有用户属性分成一组,作为用户图库进行存储。这样,在提取知识图谱子图时,从机器人图库提取机器人的知识图谱子图;从用户图库提取用户的知识图谱子图。
步骤S302,获取机器人与当前用户交互的语音信息或图片信息;
步骤S303,根据所述语音信息或所述图片信息,确定当前用户ID或当前用户名称;
步骤S304,从知识图谱中提取用户数据与当前用户ID或当前用户名称相匹配的知识图谱子图;
步骤S305,根据知识图谱子图的完善程度,确定所述机器人对当前用户的熟悉度;
具体地,完善程度是指知识图谱子图中包括的属性数量。例如在教育的场景下,如果某机器人的用途是为义务教育阶段的学生提供课程相关的辅助教育,在这一场景中,知识图谱子图中可以填充的信息可列举如下:
第一类信息:姓名(ID)、识别性信息(声纹、指纹、脸部图像等,机器人用于识别用户)、就读年级、所属地区;这些信息与教育功能息息相关,知道用户的年级和地区就可以知道学习过、正在学、将要学的科目和知识;
第二类信息:年龄、性别、班级;这些信息对教育功能起辅助作用,不同年龄、不同性别的学生有各自不同的特点,班级信息可以帮助机器人了解用户的教师团队和具体教学进度;
第三类信息:历史成绩、交互历史、错题情况等历史性信息;这些信息对教育功能起辅助作用,是机器人在教学过程中通过教学、交互等方式获得的信息,用于跟踪用户学习情况、指导教学和复习的定制。
在上述产品场景中,如果第一类信息填写完整,则机器人与用户的熟悉度为及格水平(百分制六十分),第二类信息填写完整,则机器人与用户的熟悉度为良好水平(百分制八十分),第三类信息完整、定期有新的信息 被填充到历史信息中,则机器人与用户的熟悉度为优秀水平(百分制九十五分)。由此可以看出知识图谱子图填充得越完整,说明机器人与用户交互的内容越多,机器人与用户的熟悉度越高。
步骤S306,根据提取得到的知识图谱子图的用户历史交互信息,确定所述机器人与当前用户的情感历史。
具体地,用户历史交互信息中包括用户输入的语气数据、表情数据、动作数据和措辞数据等。从用户反馈的信息中可以确定机器人与当前用户的情感历史。例如:如果机器人的用途是手机助理。在一次上班时间,机器人将用户手机接收到的信息通过语音进行外放,而这一行为严重影响了用户或他人上班,所以用户通过语音或信息给机器人反馈生气的情绪,例如:通过短信回复“你笨呀,这个时候外放”等。所以机器人就记录下该场景下用户的情感。
本实施例基于机器人情绪状态的回复信息生成方法,通过接收到的语音信息或图片信息等,分析出当前用户后,可以从机器人的知识图谱中提取到机器人与该用户交互的历史及知识图谱子图,获取到机器人对当前用户的好感度、与当前用户的亲密度、对当前用户的了解度、熟悉度,对当前用户的情感历史,机器人与当前用户情感历史,上述信息都会影响机器人的情绪。
3、机器人对当前环境的情感历史。
针对机器人对当前环境的情感历史,参见图4,具体获取过程如下:
步骤S401,获取与当前用户交互的多模态信息。
步骤S402,从多模态信息中确定环境名称或环境ID。
步骤S403,从知识图谱中提取机器人环境数据与环境名称或环境ID相匹配的知识图谱子图;
步骤S404,根据提取到的知识图谱子图,确定所述机器人对当前环境的情感历史。
具体地,机器人根据与用户的交互历史获取当前环境对机器人情绪的影响,例如:针对上述场景,当用户在上班时候,手机接收到新信息时,机器人根据交互历史,得知,不能在该场景下新信息外放。这样,机器人可以通过多模态信息获取到当前的场景,进而提取到机器人与当前环境相关的知识图谱中的内容,从知识图谱中提取到与当前环境相关的子图,确定当前环境对机器人情绪的影响。
4、当前用户的情感状态。
针对当前用户的情感状态,所述知识图谱子图还包括用户情感数据;所述用户情感数据包括用户输入的语气数据、表情数据、动作数据和措辞数据中的一种或几种;参见图5,具体获取过程如下:
步骤S501,获取与当前用户交互的多模态信息。
步骤S502,从多模态信息中提取当前用户的语气信息、表情信息或动作信息。
步骤S503,从知识图谱中提取用户情感数据与所述语气信息、所述表情信息或所述动作信息相匹配的知识图谱子图;
步骤S504,根据提取到的知识图谱子图,确定所述当前用户的情感状态。
例如,机器人可以通过多模态信息获取到用户输入时的语气、表情、动作等,通过提取机器人知识图谱中与该用户相关的子图,分析用户输入时的语气、表情、动作、措辞代表的用户情绪和用户对机器人的当前情感,进而机器人可以根据上述信息调整自身的情绪。
本实施例所提供的方法,为简要描述,实施例部分未提及之处,可参考前述方法实施例中相应内容。
实施例三:
实施例三在上述实施例的基础上,增加了回复信息的生成方法。
在回复信息生成指导控制方面,具体实现过程如下:
第一种,将所述机器人情绪标签作为训练模型的输入信息之一,指导训练模型的生成;根据生成的训练模型指导回复信息的生成,确定回复信息时具体采用的形态类别;
具体地,所述训练模型为人工智能模型,训练模型由人工智能模型对所有的机器人情绪标签进行学习得到。该方法能够从人类学习出发,使用机器学习等人工智能方法,对人的情绪反应进行建模,在上述信息不同取值的情况下确定人的情绪反应,然后将上述信息作为人工智能模型的输入,通过训练模型,使其输出的值越来越接近人的真实情绪反应值。
在形态类别的选择方面,具体实现过程如下:
根据机器人情绪标签的状态和上下文信息,对每种形态类别的候选项进行排序,上下文信息为机器人与当前用户的交互信息。
根据每种形态类别的排序结果,确定该形态类别的最终选项,与当前用户进行交互。
例如:机器人情绪标签为1的情况下,机器人会选择的措辞为您、请、麻烦等,机器人情绪标签为2的情况下,机器人会选择的措辞为亲、吧、啊、么么哒。机器人的每个机器人情绪标签都会有对应的候选语气、语调、动作、措辞、表情,每一形态类别中可能包含许多可选项,机器人在实际的交互中根据语境及上下文对候选项进行排序,进而从中选择分数最高的候选项作为与用户交互的选项,呈现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
第二种,在实际应用过程中,可能技术上无法训练出符合要求的模型,且人体模型的原始训练数据可能不够充分。为此提供第二种回复信息生成指导控制方法,根据所述机器人情绪标签和预先制定的规则,指导回复信息的生成,确定回复信息时具体采用的形态类别;
所述形态类别包括语气、语调、动作、措辞、表情。
具体地,规则主要指语法规则。例如:当机器人情绪状态为良好时, 用户向机器人输入需求:帮我定一个明天早上八点的闹钟。机器人处理用户输入,识别到用户意图为定闹钟,提取到时间为明天早上八点。机器人为用户设定闹钟,且查找到定闹钟对应的语法规则为:"语气词","已经帮你定好"时间点"的闹钟,"自定义部分"。根据机器人的情绪状态,对自定义部分和语气词应该选择正面、积极的词语填充,最终机器人可能生成如下的回复:嗯呐,已经帮你订好了明天早上八点的闹钟,我明天早上八点会准时叫你的哦。
如果机器人情绪状态为不佳时,用户向机器人输入需求:帮我定一个明天早上八点的闹钟。机器人处理用户输入,识别到用户意图为定闹钟,提取到时间为明天早上八点。机器人为用户设定闹钟,且查找到定闹钟对应的语言规则为:"语气词","已经帮你定好"时间点"的闹钟,"自定义部分"。根据机器人的情绪状态,对自定义部分和语气词应该选择负面、相对消极的词语填充,最终机器人可能生成如下的回复:哼,已经帮你订好了明天早上八点的闹钟,我心情不好你别再烦我了。
本实施例所提供的方法,为简要描述,实施例部分未提及之处,可参考前述方法实施例中相应内容。
实施例四:
本发明实施例提供另一种基于机器人情绪状态的回复信息生成方法,结合图6,该方法包括:
步骤S601,获取机器人的情绪因子,所述情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;
步骤S602,根据机器人的当前心情状态、所述机器人对当前用户的熟悉度、机器人与当前用户的情感历史、所述机器人对当前环境的情感历史及所述当前用户情感状态,直接指导生成回复信息。
在实际应用过程中,本实施例基于机器人情绪状态的回复信息生成方 法将情绪因子输入预先构建的训练模型,以指导回复信息的生成,或根据预先制定的规则,指导回复信息的生成。
例如,机器人曾经在晚上无灯的情况下提醒用户给自己充电,吓到正在休息的用户,导致用户对机器人恶语相向。后来在同样条件/场景下,即使机器人电量不足,影响到机器人当前心情状态,也不会主动发起让用户给自己充电的请求。
由上述技术方案可知,本实施例提供的基于机器人情绪状态的回复信息生成方法,能够对多种情绪因子进行分析,例如,机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、当前环境的情感历史和当前用户情感状态,以进行综合分析,通过上述情绪因子直接指导回复信息的生成,实现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
本实施例所提供的方法,为简要描述,实施例部分未提及之处,可参考前述方法实施例中相应内容。
实施例五:
本发明实施例提供一种基于机器人情绪状态的回复信息生成装置,结合图7,该装置包括情绪因子获取单元101、机器人情绪标签确定单元102和机器人情绪标签应用单元103,情绪因子获取单元101用于获取机器人的情绪因子,情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态;机器人情绪标签确定单元102用于根据机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史及当前用户情感状态,确定机器人情绪标签;机器人情绪标签应用单元103用于根据机器人情绪标签,指导生成回复信息。
由上述技术方案可知,本实施例提供的基于机器人情绪状态的回复信 息生成装置,能够对多种情绪因子进行分析,例如,机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、当前环境的情感历史和当前用户情感状态,以进行综合分析,确定机器人情绪标签,通过机器人情绪标签指导回复信息的生成,实现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
本实施例所提供的系统,为简要描述,实施例部分未提及之处,可参考前述方法实施例中相应内容。
实施例六:
本发明实施例提供另一种基于机器人情绪状态的回复信息生成装置,结合图8,该装置包括情绪因子获取单元101和情绪因子应用单元201,情绪因子获取单元101用于获取机器人的情绪因子,情绪因子包括机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史、当前用户情感状态,情绪因子采用向量表示;情绪因子应用单元201用于根据机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、机器人对当前环境的情感历史及当前用户情感状态,直接指导生成回复信息。
由上述技术方案可知,本实施例提供的基于机器人情绪状态的回复信息生成装置,能够对多种情绪因子进行分析,例如,机器人的当前心情状态、机器人对当前用户的熟悉度、机器人与当前用户的情感历史、当前环境的情感历史和当前用户情感状态,以进行综合分析,通过上述情绪因子直接指导回复信息的生成,实现多样化的人机交互过程,有助于机器人实现因人、因时、因事、因地对当前用户进行回复。
本实施例所提供的系统,为简要描述,实施例部分未提及之处,可参考前述方法实施例中相应内容。
本发明的说明书中,说明了大量具体细节。然而,能够理解,本发明的实施例可以在没有这些具体细节的情况下实践。在一些实例中,并未详 细示出公知的方法、结构和技术,以便不模糊对本说明书的理解。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围,其均应涵盖在本发明的权利要求和说明书的范围当中。

Claims (10)

  1. A method for generating reply information based on a robot's emotional state, characterized by comprising:
    acquiring emotion factors of the robot, the emotion factors comprising the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state;
    generating a robot emotion tag according to the emotion factors of the robot; and
    guiding the generation of reply information according to the robot emotion tag.
  2. The method for generating reply information based on a robot's emotional state according to claim 1, characterized in that acquiring the emotion factors of the robot comprises:
    counting the robot's current remaining battery power and duration of use;
    detecting the robot's current network condition and activity status; and
    determining the robot's current mood state according to the remaining power, the duration of use, the network condition, the activity status, or pre-received mood-specific information.
  3. The method for generating reply information based on a robot's emotional state according to claim 1 or 2, characterized in that acquiring the emotion factors of the robot comprises:
    constructing a knowledge graph, the knowledge graph comprising multiple knowledge graph subgraphs, and the knowledge graph subgraphs comprising user data and user history interaction information;
    acquiring voice information or picture information from the robot's interaction with the current user;
    determining a current user ID or current user name according to the voice information or the picture information;
    extracting, from the knowledge graph, the knowledge graph subgraph whose user data matches the current user ID or current user name;
    determining the robot's familiarity with the current user according to the completeness of the knowledge graph subgraph; and
    determining the emotional history between the robot and the current user according to the user history interaction information of the extracted knowledge graph subgraph.
  4. The method for generating reply information based on a robot's emotional state according to claim 3, characterized in that the knowledge graph subgraph further comprises robot environment data, and acquiring the emotion factors of the robot comprises:
    acquiring multimodal information from the robot's interaction with the current user;
    determining an environment name or environment ID from the multimodal information;
    extracting, from the knowledge graph, the knowledge graph subgraph whose robot environment data matches the environment name or environment ID; and
    determining the robot's emotional history with the current environment according to the extracted knowledge graph subgraph.
  5. The method for generating reply information based on a robot's emotional state according to claim 3, characterized in that the knowledge graph subgraph further comprises user emotion data; the user emotion data comprises one or more of tone data, expression data, action data, and wording data input by the user; and acquiring the emotion factors of the robot comprises:
    acquiring multimodal information from the robot's interaction with the current user;
    extracting the current user's tone information, expression information, or action information from the multimodal information;
    extracting, from the knowledge graph, the knowledge graph subgraph whose user emotion data matches the tone information, the expression information, or the action information; and
    determining the current user's emotional state according to the extracted knowledge graph subgraph.
  6. The method for generating reply information based on a robot's emotional state according to claim 1, characterized in that the robot's emotion factors are described by multi-dimensional data, and generating a robot emotion tag according to the emotion factors of the robot comprises:
    converting the emotion factors into one-dimensional data to obtain the robot emotion tag.
  7. The method for generating reply information based on a robot's emotional state according to claim 1, characterized in that guiding the generation of reply information according to the robot emotion tag comprises:
    using the robot emotion tag as one of the inputs of a training model to guide the generation of the training model, and guiding the generation of reply information according to the generated training model, thereby determining the morphological categories actually used in the reply;
    or guiding the generation of reply information according to the robot emotion tag and pre-established rules, thereby determining the morphological categories actually used in the reply;
    the morphological categories comprising tone, intonation, action, wording, and expression.
  8. A method for generating reply information based on a robot's emotional state, characterized by comprising:
    acquiring emotion factors of the robot, the emotion factors comprising the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state; and
    directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
  9. An apparatus for generating reply information based on a robot's emotional state, characterized by comprising:
    an emotion factor acquisition unit configured to acquire emotion factors of the robot, the emotion factors comprising the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state;
    a robot emotion tag determining unit configured to generate a robot emotion tag according to the emotion factors of the robot; and
    a robot emotion tag application unit configured to guide the generation of reply information according to the robot emotion tag.
  10. An apparatus for generating reply information based on a robot's emotional state, characterized by comprising:
    an emotion factor acquisition unit configured to acquire emotion factors of the robot, the emotion factors comprising the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state; and
    an emotion factor application unit configured to directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotional history between the robot and the current user, the robot's emotional history with the current environment, and the current user's emotional state.
PCT/CN2018/092877 2018-02-27 2018-06-26 基于机器人情绪状态的回复信息生成方法、装置 WO2019165732A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810162745 2018-02-27
CN201810162745.6 2018-02-27

Publications (1)

Publication Number Publication Date
WO2019165732A1 true WO2019165732A1 (zh) 2019-09-06

Family

ID=64610907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/092877 WO2019165732A1 (zh) 2018-02-27 2018-06-26 基于机器人情绪状态的回复信息生成方法、装置

Country Status (2)

Country Link
CN (1) CN109033179B (zh)
WO (1) WO2019165732A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110605724B (zh) * 2019-07-01 2022-09-23 青岛联合创智科技有限公司 一种智能养老陪伴机器人

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003340757A (ja) * 2002-05-24 2003-12-02 Mitsubishi Heavy Ind Ltd ロボット
CN105807933B (zh) * 2016-03-18 2019-02-12 北京光年无限科技有限公司 一种用于智能机器人的人机交互方法及装置
CN105824935A (zh) * 2016-03-18 2016-08-03 北京光年无限科技有限公司 面向问答机器人的信息处理方法及系统
CN106462384B (zh) * 2016-06-29 2019-05-31 深圳狗尾草智能科技有限公司 基于多模态的智能机器人交互方法和智能机器人
CN107491511A (zh) * 2017-08-03 2017-12-19 深圳狗尾草智能科技有限公司 机器人的自我认知方法及装置
CN107563517A (zh) * 2017-08-25 2018-01-09 深圳狗尾草智能科技有限公司 机器人自我认知实时更新方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297789A (zh) * 2016-08-19 2017-01-04 北京光年无限科技有限公司 智能机器人的个性化交互方法及交互系统
CN106773923A (zh) * 2016-11-30 2017-05-31 北京光年无限科技有限公司 面向机器人的多模态情感数据交互方法及装置
CN106695839A (zh) * 2017-03-02 2017-05-24 青岛中公联信息科技有限公司 一种用于幼童教育的仿生智能型机器人
CN106914903A (zh) * 2017-03-02 2017-07-04 深圳汇通智能化科技有限公司 一种面向智能机器人的交互系统
CN107301168A (zh) * 2017-06-01 2017-10-27 深圳市朗空亿科科技有限公司 智能机器人及其情绪交互方法、系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112809694A (zh) * 2020-03-02 2021-05-18 腾讯科技(深圳)有限公司 机器人控制方法、装置、存储介质和计算机设备
CN112809694B (zh) * 2020-03-02 2023-12-29 腾讯科技(深圳)有限公司 机器人控制方法、装置、存储介质和计算机设备
CN112148846A (zh) * 2020-08-25 2020-12-29 北京来也网络科技有限公司 结合rpa和ai的回复语音确定方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN109033179A (zh) 2018-12-18
CN109033179B (zh) 2022-07-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18908212

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18908212

Country of ref document: EP

Kind code of ref document: A1