WO2018006372A1 - Method, system and robot for controlling home appliances based on intention recognition - Google Patents

Method, system and robot for controlling home appliances based on intention recognition

Info

Publication number
WO2018006372A1
WO2018006372A1 (PCT Application No. PCT/CN2016/089216)
Authority
WO
WIPO (PCT)
Prior art keywords
user
time axis
robot
life time
life
Prior art date
Application number
PCT/CN2016/089216
Other languages
English (en)
French (fr)
Inventor
邱楠
杨新宇
王昊奋
Original Assignee
深圳狗尾草智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Priority to CN201680001724.5A priority Critical patent/CN106662932A/zh
Priority to PCT/CN2016/089216 priority patent/WO2018006372A1/zh
Publication of WO2018006372A1 publication Critical patent/WO2018006372A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B19/4185Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by the network communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house

Definitions

  • the present invention relates to the field of robot interaction technologies, and in particular, to a method, system and robot for controlling home appliances based on intention recognition.
  • As tools for interacting with humans, robots are used in more and more settings. For example, elderly people and children who are somewhat lonely can interact with robots through dialogue, entertainment, and the like.
  • A smart home takes the residence as its platform and uses integrated wiring technology, network communication technology, security technology, automatic control technology, and audio-video technology to integrate the facilities related to home life, building an efficient management system for residential facilities and family schedules.
  • a method for controlling home appliances based on intention recognition includes: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information; and controlling the home appliance according to the user's multimodal information and intention, in combination with the life time axis.
  • after the controlling step, the method comprises: actively asking the user whether the home appliance needs further control, and controlling the home appliance accordingly per the user's instruction.
  • the home appliance comprises a light fixture, and the step of controlling the home appliance in combination with the life time axis comprises: controlling the brightness or the on/off state of the light fixture in combination with the life time axis.
  • the method for generating the parameters of the robot's life time axis includes: expanding the robot's self-cognition; obtaining the parameters of the life time axis; and fitting the robot's self-cognition parameters to the parameters in the life time axis to generate the robot's life time axis.
  • the step of expanding the robot's self-cognition specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • the step of fitting the robot's self-cognition parameters to the parameters in the life time axis comprises: using a probability algorithm, making a probability estimate of the parameters between robots with a network, and calculating the probability that each parameter changes after the scene parameters on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  • the life time axis refers to a time axis covering the 24 hours of a day; the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values that represent those behaviors.
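  • As a concrete illustration of such a parameter set, a minimal sketch in Python follows; every name in it (TimeSlot, weekday_axis, the field names) is an assumption made for illustration, not something defined in the patent:

```python
from dataclasses import dataclass

@dataclass
class TimeSlot:
    """One entry on the 24-hour life time axis: a daily-life behavior
    plus the parameter values that represent it."""
    hour: int          # 0-23, position on the life time axis
    behavior: str      # daily-life behavior, e.g. "sleep", "work", "gym"
    params: dict       # values representing the behavior

# A hypothetical weekday axis for the lamp example in the text:
# the user gets up at 8, works during the day, and sleeps after 21:00.
weekday_axis = [
    TimeSlot(8,  "get_up", {"lamp_brightness": 0.5}),
    TimeSlot(9,  "work",   {"lamp_brightness": 0.0}),   # daylight, lamp off
    TimeSlot(21, "sleep",  {"lamp_brightness": 0.1}),   # dim before lights-out
]

def behavior_at(axis, hour):
    """Return the most recent behavior scheduled at or before `hour`."""
    current = None
    for slot in sorted(axis, key=lambda s: s.hour):
        if slot.hour <= hour:
            current = slot
    return current

print(behavior_at(weekday_axis, 22).behavior)  # -> "sleep"
```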
  • a system for controlling home appliances based on intent recognition comprising:
  • an acquisition module, configured to acquire multimodal information of the user;
  • an artificial intelligence module, configured to generate interaction content according to the user's multimodal information and the life time axis, the interaction content including at least voice information and action information;
  • a control module, configured to control the duration of the voice information and the duration of the action information to be the same.
  • the system further comprises an active inquiry module for actively inquiring whether the user needs further control of the home appliance, and correspondingly controlling the home appliance according to the instruction of the user.
  • the home appliance comprises a light fixture
  • the control module is specifically configured to: control the brightness or the on/off state of the light fixture in combination with the life time axis.
  • the system comprises a processing module configured to: expand the robot's self-cognition; obtain the parameters of the life time axis; and fit the robot's self-cognition parameters to the parameters in the life time axis to generate the robot's life time axis.
  • the processing module is specifically configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • the processing module is specifically configured to: use a probability algorithm, make a probability estimate of the parameters between robots with a network, and calculate the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  • the life time axis refers to a time axis covering the 24 hours of a day; the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values that represent those behaviors.
  • the present invention discloses a robot comprising a system for controlling home appliances based on intention recognition as described in any of the above.
  • the method for controlling home appliances based on intention recognition of the present invention comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information; and controlling the home appliances according to the user's multimodal information and intention, in combination with the life time axis.
  • the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions; the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.
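  • The patent publishes no source code, so the following is only a rough sketch of the three steps just summarized (acquire multimodal information, identify the intention, control the appliance in combination with the life time axis); the keyword-matching recognizer and all names in it are illustrative assumptions:

```python
from datetime import datetime

def identify_intent(speech: str, expression: str = "") -> str:
    """Toy stand-in for the intent recognizer: a real system would use a
    trained model over voice, expression, and action features."""
    if "sleepy" in speech or expression == "yawning":
        return "rest"
    if "watch TV" in speech:
        return "watch_tv"
    return "unknown"

def control_appliances(intent: str, now: datetime) -> str:
    """Combine the recognized intent with the life time axis
    (here reduced to weekday/hour checks)."""
    workday = now.weekday() < 5   # Monday..Friday
    if intent == "rest" and workday and now.hour == 8:
        return "turn lamp on at moderate brightness"   # owner just got up
    if intent == "rest" and now.hour >= 21:
        return "dim lamp, then switch it off"          # owner going to sleep
    if intent == "watch_tv":
        return "turn TV on"
    return "no action"

# Monday, 8 am, user says "so sleepy" -> lamp on at moderate brightness.
print(control_appliances(identify_intent("so sleepy"), datetime(2016, 7, 4, 8)))
```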
  • FIG. 1 is a flow chart of a method for controlling home appliances based on intention recognition according to a first embodiment of the present invention
  • FIG. 2 is a schematic diagram of a system for controlling home appliances based on intention recognition according to a second embodiment of the present invention.
  • Computer devices include user devices and network devices.
  • the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
  • the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
  • the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • the terms "first," "second," and the like may be used herein to describe various units, but the units should not be limited by these terms; the terms are used only to distinguish one unit from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • a method for controlling home appliances based on intention recognition is disclosed in the embodiment, including: acquiring multimodal information of the user (S101); identifying the user's intention according to the multimodal information (S102); and controlling the home appliances according to the user's multimodal information and intention, in combination with the life time axis 300 (S103).
  • the method for controlling home appliances based on intention recognition of this embodiment comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information; and controlling the home appliances according to the user's multimodal information and intention, in combination with the life time axis.
  • the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions; the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.
  • the present invention adds the life time axis on which the robot lives into the generation of the robot's interaction content, making the robot more humanized when interacting with humans, so that the robot has a human lifestyle within the life time axis; the method can enhance the anthropomorphism of the robot's interaction content generation, improve the human-computer interaction experience, and improve intelligence.
  • the interaction content may be one or a combination of expressions, text, voice, and actions.
  • the robot's life time axis 300 is fitted and set in advance; specifically, it is a collection of parameters that is passed to the system to generate interaction content.
  • the multimodal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil-iris information, light-sensing information, fingerprint information, and the like.
  • being based on the life time axis specifically means: following the time axis of human daily life, the values of the robot's own self-cognition on the daily-life time axis are fitted in a human-like way, and the robot acts according to this fitted result; that is, the robot's own behavior over a day is obtained, so that the robot carries out its own behavior based on the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all the time, it acts according to the behaviors on this time axis, and the robot's self-cognition is changed accordingly.
  • the life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors automatically add to the robot's self-cognition according to scenes that previously simulated human self-cognition.
  • the life time axis includes not only voice information, but also information such as actions.
  • the home appliance may be a household appliance used in daily life, such as a lamp, a refrigerator, an air conditioner, a television, a washing machine, a microwave oven, or the like.
  • a light fixture is taken as an example: for a light fixture, what the user adjusts is the brightness of the light or whether the light is on or off.
  • the user says to the robot, "I'm so sleepy." Hearing this, the robot recognizes the intention that the user is sleepy and combines it with the robot's life time axis. If, for example, the current time is 8 am on a Monday, the robot knows that the owner has just gotten up, so it should turn on the light and adjust the brightness to a moderate level, neither so bright as to irritate the eyes nor so dark that the user goes on sleeping in. If the current time is 8 am on a Sunday, the robot determines from the life time axis that the user does not need to go to work today, so it chooses not to turn the light on for now; when, say, 9:30 am arrives and, according to the life time axis, the user should be getting ready to go to the gym, the robot reminds the user to get up and turns the light on. And if the user says "I'm so sleepy" and the life time axis shows that the current time is 9 pm, the robot knows that the owner needs to sleep, so it lowers the brightness of the light, or lowers it first and turns the light off after a while. This approach is more anthropomorphic and improves the user experience.
  • this embodiment is described with a light fixture as an example only; other home appliances can also be applied to this embodiment.
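  • The lamp example above amounts to a small decision table over the life time axis; a sketch of that reading follows, with the schedule details assumed from the example rather than specified by the patent:

```python
from datetime import datetime

def lamp_action(intent: str, now: datetime) -> str:
    """Decision table for the light-fixture example: the same utterance
    ("I'm so sleepy") leads to different actions at different points
    on the life time axis."""
    workday = now.weekday() < 5
    if not workday and (now.hour, now.minute) == (9, 30):
        # Life time axis says the owner should head to the gym:
        # remind them to get up and switch the light on.
        return "remind user, then on"
    if intent != "rest":
        return "no change"
    if now.hour == 8:
        if workday:
            # Monday 8 am: owner just got up; moderate brightness,
            # not too bright and not too dark.
            return "on, moderate brightness"
        # Sunday 8 am: no work today, leave the light off for now.
        return "off for now"
    if now.hour >= 21:
        # 9 pm: owner needs to sleep; dim first, switch off later.
        return "dim, then off"
    return "no change"

# 2016-07-04 was a Monday, 2016-07-03 a Sunday.
for t in (datetime(2016, 7, 4, 8), datetime(2016, 7, 3, 8), datetime(2016, 7, 3, 9, 30)):
    print(t.strftime("%a %H:%M"), "->", lamp_action("rest", t))
```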
  • after the controlling step, the method includes: actively asking the user whether the home appliance needs further control, and controlling the home appliance accordingly per the user's instruction.
  • in the morning case, the robot can go on to ask the user whether to raise the brightness: if the user answers no, the robot keeps the brightness just set; if the user answers yes, the robot raises the brightness; if the user says to turn the light off, the robot turns the light off. The robot may also not ask, with the user actively telling the robot what to do.
  • and if the user says to the robot, "I'm so sleepy," and the robot's life time axis shows that the current time is, for example, 9 pm, the robot knows that the owner needs to sleep, so it lowers the brightness of the light; the robot can then go on to ask the user whether to turn the light off. If the user answers yes, the robot turns the light off; if the user answers no, the light is kept at low brightness; the user can of course also ask for more light, and the robot will raise the brightness.
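  • The follow-up confirmation described above can be sketched as below, with scripted replies standing in for the robot's speech interface (an assumption, since the patent does not describe that interface):

```python
def confirm_and_adjust(proposed: str, ask) -> str:
    """After the basic judgment (e.g. 'moderate brightness'), actively
    ask whether further control is wanted and apply the reply."""
    reply = ask(f"Lamp set to {proposed}. Raise the brightness?")
    if reply == "yes":
        return "brightness raised"
    if reply == "turn it off":
        return "lamp off"
    # "no" or anything else: keep the brightness chosen just now.
    return f"kept at {proposed}"

# Scripted replies stand in for the user's spoken answers.
for answer in ("yes", "no", "turn it off"):
    print(confirm_and_adjust("moderate brightness", lambda _q, a=answer: a))
```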
  • the method for generating the parameters of the robot's life time axis includes: expanding the robot's self-cognition; obtaining the parameters of the life time axis; and fitting the robot's self-cognition parameters to the parameters in the life time axis to generate the robot's life time axis.
  • in this way the life time axis is added into the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.
  • the step of expanding the robot's self-cognition specifically includes: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • the life time axis can be specifically added to the parameters of the robot itself.
  • the step of fitting the robot's self-cognition parameters to the parameters in the life time axis specifically includes: using a probability algorithm, making a probability estimate of the parameters between robots with a network, and calculating the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  • the probability algorithm may be a Bayesian probability algorithm.
  • over the 24 hours of a day, the robot has actions such as sleeping, exercising, eating, dancing, reading, and putting on make-up. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
  • after fitting, the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene values, game object values, location scene values, location object values, and so on, so that the robot can identify the scene of its own location, such as a cafe or a bedroom.
  • within the time axis of a day the robot performs different actions, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis affect its self-cognition. The changes in these values are modeled by dynamically fitting a probability model, fitting the probabilities with which all of these actions occur on the time axis.
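  • The text gives no formulas for this fitting; one way to read it, sketched below under that assumption, is to estimate from logged observations the conditional probability that a self-cognition parameter changes when a given scene occurs on the time axis, in the Bayesian spirit the text mentions:

```python
from collections import defaultdict

def fit_change_probabilities(transitions):
    """Estimate P(parameter changes | scene on the life time axis)
    by counting over logged (scene, parameter, changed) observations."""
    seen = defaultdict(int)
    changed = defaultdict(int)
    for scene, param, did_change in transitions:
        seen[(scene, param)] += 1
        changed[(scene, param)] += int(did_change)
    # Laplace smoothing keeps unseen combinations away from 0 and 1.
    return {k: (changed[k] + 1) / (seen[k] + 2) for k in seen}

# Hypothetical log of scene transitions and parameter changes.
log = [
    ("night_sleep", "fatigue", True),
    ("night_sleep", "fatigue", True),
    ("night_sleep", "mood", False),
    ("noon_meal",   "mood", True),
]
curve = fit_change_probabilities(log)
print(curve[("night_sleep", "fatigue")])  # (2+1)/(2+2) = 0.75
```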
  • scene recognition: this kind of location scene recognition changes the geographic scene values in the self-cognition.
  • a system for controlling home appliances based on intent identification including:
  • the obtaining module 201 is configured to acquire multi-modal information of the user
  • the intent identification module 202 is configured to identify a user intent according to the multimodal information
  • the control module 203 is configured to control the home appliance according to the multimodal information of the user and the user intention according to the life time axis, wherein the life time axis is generated by the life time axis module 301.
  • the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions; the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.
  • the present invention adds the life time axis on which the robot lives into the generation of the robot's interaction content, making the robot more humanized when interacting with humans, so that the robot has a human lifestyle within the life time axis; the method can enhance the anthropomorphism of the robot's interaction content generation, improve the human-computer interaction experience, and improve intelligence.
  • the interaction content may be one or a combination of expressions, text, voice, and actions.
  • the robot's life time axis 300 is fitted and set in advance; specifically, it is a collection of parameters that is passed to the system to generate interaction content.
  • the multimodal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil-iris information, light-sensing information, fingerprint information, and the like.
  • being based on the life time axis specifically means: following the time axis of human daily life, the values of the robot's own self-cognition on the daily-life time axis are fitted in a human-like way, and the robot acts according to this fitted result; that is, the robot's own behavior over a day is obtained, so that the robot carries out its own behavior based on the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all the time, it acts according to the behaviors on this time axis, and the robot's self-cognition is changed accordingly.
  • the life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors automatically add to the robot's self-cognition according to scenes that previously simulated human self-cognition.
  • the life time axis includes not only voice information, but also information such as actions.
  • the home appliance may be a household appliance used in daily life, such as a lamp, a refrigerator, an air conditioner, a television, a washing machine, a microwave oven, or the like.
  • a light fixture is taken as an example: for a light fixture, what the user adjusts is the brightness of the light or whether the light is on or off.
  • the user says to the robot, "I'm so sleepy." Hearing this, the robot recognizes the intention that the user is sleepy and combines it with the robot's life time axis. If, for example, the current time is 8 am on a Monday, the robot knows that the owner has just gotten up, so it should turn on the light and adjust the brightness to a moderate level, neither so bright as to irritate the eyes nor so dark that the user goes on sleeping in. If the current time is 8 am on a Sunday, the robot determines from the life time axis that the user does not need to go to work today, so it chooses not to turn the light on for now; when, say, 9:30 am arrives and, according to the life time axis, the user should be getting ready to go to the gym, the robot reminds the user to get up and turns the light on. And if the user says "I'm so sleepy" and the life time axis shows that the current time is 9 pm, the robot knows that the owner needs to sleep, so it lowers the brightness of the light, or lowers it first and turns the light off after a while. This approach is more anthropomorphic and improves the user experience.
  • this embodiment is described with a light fixture as an example only; other home appliances can also be applied to this embodiment.
  • the system further includes an active inquiry module for actively inquiring whether the user needs to further control the home appliance, and correspondingly controlling the home appliance according to the user's instruction.
  • after the robot makes its basic judgment, it further confirms the operation with the user. In the morning case, the robot can go on to ask whether to raise the brightness: if the user answers no, the robot keeps the brightness just set; if the user answers yes, it raises the brightness; if the user says to turn the light off, the robot turns it off; the robot may also not ask, with the user actively telling it what to do. And if the user says "I'm so sleepy" at, for example, 9 pm, the robot knows that the owner needs to sleep and lowers the brightness; it can then go on to ask whether to turn the light off. If the user answers yes, the robot turns the light off; if the user answers no, the light is kept at low brightness; the user can of course also ask for more light, and the robot will raise the brightness.
  • the system includes a time-axis-and-artificial-intelligence cloud processing module configured to: expand the robot's self-cognition; obtain the parameters of the life time axis; and fit the robot's self-cognition parameters to the parameters in the life time axis to generate the robot's life time axis.
  • in this way the life time axis is added into the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.
  • the time-axis-and-artificial-intelligence cloud processing module is specifically configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • the life time axis can be specifically added to the parameters of the robot itself.
  • the time-axis-and-artificial-intelligence cloud processing module is specifically configured to: use a probability algorithm, make a probability estimate of the parameters between robots with a network, and calculate the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  • the probability algorithm may be a Bayesian probability algorithm.
  • over the 24 hours of a day, the robot has actions such as sleeping, exercising, eating, dancing, reading, and putting on make-up. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
  • after fitting, the robot's self-cognition includes mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene values, game object values, location scene values, location object values, and so on, so that the robot can identify the scene of its own location, such as a cafe or a bedroom.
  • within the time axis of a day the robot performs different actions, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis affect its self-cognition. The changes in these values are modeled by dynamically fitting a probability model, fitting the probabilities with which all of these actions occur on the time axis.
  • scene recognition: this kind of location scene recognition changes the geographic scene values in the self-cognition.
  • the present invention discloses a robot comprising a system for controlling home appliances based on intention recognition as described in any of the above.

Abstract

A method for controlling home appliances based on intention recognition, comprising: acquiring multimodal information of a user (S101); identifying the user's intention according to the multimodal information (S102); and controlling the home appliances according to the user's multimodal information and intention, in combination with a life time axis (300) (S103). In this way the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions, and the home appliances are then controlled (S103) according to the multimodal information and the intention in combination with the life time axis (300), so that appliances are adjusted automatically and more intelligently. Applying artificial intelligence to the smart home controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.

Description

Method, system and robot for controlling home appliances based on intention recognition

Technical Field

The present invention relates to the field of robot interaction technologies, and in particular to a method, system and robot for controlling home appliances based on intention recognition.

Background

As tools for interacting with humans, robots are used in more and more settings; for example, elderly people and children who are somewhat lonely can interact with robots through dialogue, entertainment, and the like.

A smart home takes the residence as its platform and uses integrated wiring technology, network communication technology, security technology, automatic control technology, and audio-video technology to integrate the facilities related to home life, building an efficient management system for residential facilities and family schedules, improving the safety, convenience, comfort, and artistry of home life, and achieving an environmentally friendly, energy-saving living environment.

In the smart home, however, robots are still rarely used. The inventors therefore studied how a robot could both interact with humans and be used in the smart home, applying artificial intelligence to the smart home in order to propose a better solution and improve the user experience.
Summary of the Invention

The object of the present invention is to provide a method, system and robot for controlling home appliances based on intention recognition, improving the user experience of artificial intelligence in the smart home.

This object is achieved through the following technical solutions:

A method for controlling home appliances based on intention recognition, comprising:

acquiring multimodal information of a user;

identifying the user's intention according to the multimodal information;

controlling the home appliances according to the user's multimodal information and intention, in combination with a life time axis.

Preferably, after the step of controlling the home appliances according to the user's multimodal information and intention in combination with the life time axis, the method comprises:

actively asking the user whether the home appliances need further control, and controlling the home appliances accordingly per the user's instruction.

Preferably, the home appliances include a light fixture, and the step of controlling the home appliances in combination with the life time axis comprises: controlling the brightness or the on/off state of the light fixture in combination with the life time axis.

Preferably, the method for generating the parameters of the robot's life time axis comprises:

expanding the robot's self-cognition;

obtaining the parameters of the life time axis;

fitting the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.

Preferably, the step of expanding the robot's self-cognition specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.

Preferably, the step of fitting the parameters of the robot's self-cognition to the parameters in the life time axis specifically comprises: using a probability algorithm, making a probability estimate of the parameters between robots with a network, and calculating the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.

Preferably, the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values representing those behaviors.
A system for controlling home appliances based on intention recognition, comprising:

an acquisition module, configured to acquire multimodal information of a user;

an artificial intelligence module, configured to generate interaction content according to the user's multimodal information and the life time axis, the interaction content including at least voice information and action information;

a control module, configured to control the duration of the voice information and the duration of the action information to be the same.

Preferably, the system further comprises an active inquiry module, configured to actively ask the user whether the home appliances need further control and to control the home appliances accordingly per the user's instruction.

Preferably, the home appliances include a light fixture, and the control module is specifically configured to control the brightness or the on/off state of the light fixture in combination with the life time axis.

Preferably, the system comprises a processing module configured to:

expand the robot's self-cognition;

obtain the parameters of the life time axis;

fit the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.

Preferably, the processing module is specifically configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.

Preferably, the processing module is specifically configured to: use a probability algorithm, make a probability estimate of the parameters between robots with a network, and calculate the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.

Preferably, the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values representing those behaviors.

The present invention discloses a robot comprising the system for controlling home appliances based on intention recognition according to any of the above.

Compared with the prior art, the present invention has the following advantages. The method for controlling home appliances based on intention recognition comprises: acquiring multimodal information of a user; identifying the user's intention according to the multimodal information; and controlling the home appliances according to the user's multimodal information and intention, in combination with a life time axis. In this way the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions, and the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The present invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.
Brief Description of the Drawings

FIG. 1 is a flowchart of a method for controlling home appliances based on intention recognition according to Embodiment 1 of the present invention;

FIG. 2 is a schematic diagram of a system for controlling home appliances based on intention recognition according to Embodiment 2 of the present invention.

Detailed Description

Although the flowcharts describe the operations as sequential processing, many of the operations can be carried out in parallel, concurrently, or simultaneously. The order of the operations can be rearranged. Processing can be terminated when its operations are completed, but there can also be additional steps not included in the drawings. Processing can correspond to methods, functions, procedures, subroutines, subprograms, and so on.

Computer devices include user devices and network devices. User devices or clients include, but are not limited to, computers, smartphones, PDAs, and the like; network devices include, but are not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. A computer device can implement the invention on its own, or it can access a network and implement the invention through interoperation with other computer devices in the network. The network in which a computer device is located includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, and the like.

The terms "first", "second", and the like may be used here to describe various units, but the units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" used here includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units can be present.

The terminology used here is only for describing specific embodiments and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used here are also intended to include the plural. It should also be understood that the terms "comprise" and/or "include" used here specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.

The present invention is further described below with reference to the drawings and preferred embodiments.
Embodiment 1

As shown in FIG. 1, this embodiment discloses a method for controlling home appliances based on intention recognition, comprising:

S101: acquiring multimodal information of a user;

S102: identifying the user's intention according to the multimodal information;

S103: controlling the home appliances according to the user's multimodal information and intention, in combination with a life time axis 300.

The method of this embodiment comprises: acquiring multimodal information of a user; identifying the user's intention according to the multimodal information; and controlling the home appliances according to the user's multimodal information and intention, in combination with the life time axis. In this way the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions, and the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The present invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.

A person's daily life has a certain regularity. To make the robot more anthropomorphic when communicating with people, over the 24 hours of a day the robot is also made to sleep, exercise, eat, dance, read, put on make-up, and so on. The present invention therefore adds the life time axis on which the robot lives into the generation of the robot's interaction content, making the robot more anthropomorphic when interacting with humans, so that the robot has a human lifestyle within the life time axis; the method can enhance the anthropomorphism of the robot's interaction content generation, improve the human-computer interaction experience, and improve intelligence. The interaction content may be one or a combination of expressions, text, voice, and actions. The robot's life time axis 300 is fitted and set in advance; specifically, it is a collection of parameters that is passed to the system to generate interaction content.

The multimodal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil-iris information, light-sensing information, fingerprint information, and the like.

In this embodiment, being based on the life time axis specifically means: following the time axis of human daily life, the values of the robot's own self-cognition on the daily-life time axis are fitted in a human-like way, and the robot acts according to this fitted result; that is, the robot's own behavior over a day is obtained, so that the robot carries out its own behavior based on the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all the time, it acts according to the behaviors on this time axis, and the robot's self-cognition is changed accordingly. The life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors automatically add to the robot's self-cognition according to scenes that previously simulated human self-cognition. The life time axis includes not only voice information but also information such as actions.

In this embodiment, the home appliances may be household appliances used in daily life, such as light fixtures, refrigerators, air conditioners, televisions, washing machines, and microwave ovens. A light fixture is taken as an example below; for a light fixture, what the user adjusts is the brightness of the light or whether the light is on or off.

For example, the user says to the robot, "I'm so sleepy." Hearing this, the robot recognizes the intention that the user is sleepy and then combines it with the robot's life time axis. If, for example, the current time is 8 am on a Monday, the robot knows that the owner has just gotten up, so it should turn on the light and adjust the brightness to a moderate level, neither so bright as to irritate the eyes nor so dark that the user goes on sleeping in. If the current time is 8 am on a Sunday, the robot determines from the life time axis that the user does not need to go to work today, so it chooses not to turn the light on for now; when, say, 9:30 am arrives and, according to the life time axis, the user should be getting ready to go to the gym, the robot reminds the user to get up and turns the light on. And if the user says to the robot, "I'm so sleepy," and the robot's life time axis shows, for example, that the current time is 9 pm, the robot knows that the owner needs to sleep, so it lowers the brightness of the light, or lowers it first and turns the light off after a while. This approach is more anthropomorphic and improves the user experience.

Of course, this embodiment is described with a light fixture as an example only; other home appliances can also be applied to this embodiment.

In this embodiment, after the step of controlling the home appliances according to the user's multimodal information and intention in combination with the life time axis, the method comprises:

actively asking the user whether the home appliances need further control, and controlling the home appliances accordingly per the user's instruction.

In this way, after the robot makes its basic judgment, it further confirms the operation with the user, which not only reduces the chance of misjudgment but also makes follow-up operations easier, because after the basic judgment the robot usually does not act immediately, or there is a short transition after the action. In the example above, the user says to the robot, "I'm so sleepy"; the robot recognizes the intention that the user is sleepy and combines it with the life time axis; if the current time is 8 am on a Monday, the robot knows the owner has just gotten up, so it should turn the light on and set the brightness to a moderate level. The robot can then go on to ask the user whether to raise the brightness: if the user answers no, the robot keeps the brightness just set; if the user answers yes, the robot raises the brightness; if the user says to turn the light off, the robot turns it off. The robot may also not ask, with the user actively telling the robot what to do. And if the user says to the robot, "I'm so sleepy," and the life time axis shows that the current time is 9 pm, the robot knows the owner needs to sleep and lowers the brightness of the light; the robot can then go on to ask the user whether to turn the light off. If the user answers yes, the robot turns the light off; if the user answers no, the light is kept at low brightness; the user can of course also ask for more light, and the robot will raise the brightness.
According to one example, the method for generating the parameters of the robot's life time axis comprises:

expanding the robot's self-cognition;

obtaining the parameters of the life time axis;

fitting the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.

In this way the life time axis is added into the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.

According to another example, the step of expanding the robot's self-cognition specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis. In this way the life time axis can be concretely added into the robot's own parameters.

According to another example, the step of fitting the parameters of the robot's self-cognition to the parameters in the life time axis specifically comprises: using a probability algorithm, making a probability estimate of the parameters between robots with a network, and calculating the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters in the life time axis. The probability algorithm may be a Bayesian probability algorithm.

For example, over the 24 hours of a day, the robot is made to sleep, exercise, eat, dance, read, put on make-up, and so on. Each action affects the robot's own self-cognition; combining the parameters on the life time axis with the robot's own self-cognition and fitting them means that the robot's self-cognition comes to include mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene values, game object values, location scene values, location object values, and so on, so that the robot can identify the scene of its own location, such as a cafe or a bedroom.

Within the time axis of a day the robot performs different actions, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis affect its self-cognition. The changes in these values are modeled by dynamically fitting a probability model, fitting the probabilities with which all of these actions occur on the time axis. Scene recognition: this kind of location scene recognition changes the geographic scene values in the self-cognition.
Embodiment 2

As shown in FIG. 2, this embodiment discloses a system for controlling home appliances based on intention recognition, comprising:

an acquisition module 201, configured to acquire multimodal information of a user;

an intention recognition module 202, configured to identify the user's intention according to the multimodal information;

a control module 203, configured to control the home appliances according to the user's multimodal information and intention, in combination with a life time axis, the life time axis being generated by a life time axis module 301.

In this way the user's intention, for example whether the user wants to rest, work, or watch TV, can be identified from one or more kinds of multimodal information such as the user's voice, expressions, and actions, and the home appliances are then controlled according to the multimodal information and the intention in combination with the life time axis, so that appliances are adjusted automatically and more intelligently. The present invention applies artificial intelligence to the smart home, controls home appliances more conveniently and accurately, makes people's daily life more convenient, adds interest and interactivity to life, makes the robot more anthropomorphic, and improves the user experience of artificial intelligence in the smart home.

A person's daily life has a certain regularity. To make the robot more anthropomorphic when communicating with people, over the 24 hours of a day the robot is also made to sleep, exercise, eat, dance, read, put on make-up, and so on. The present invention therefore adds the life time axis on which the robot lives into the generation of the robot's interaction content, making the robot more anthropomorphic when interacting with humans, so that the robot has a human lifestyle within the life time axis; the method can enhance the anthropomorphism of the robot's interaction content generation, improve the human-computer interaction experience, and improve intelligence. The interaction content may be one or a combination of expressions, text, voice, and actions. The robot's life time axis 300 is fitted and set in advance; specifically, it is a collection of parameters that is passed to the system to generate interaction content.

The multimodal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil-iris information, light-sensing information, fingerprint information, and the like.

In this embodiment, being based on the life time axis specifically means: following the time axis of human daily life, the values of the robot's own self-cognition on the daily-life time axis are fitted in a human-like way, and the robot acts according to this fitted result; that is, the robot's own behavior over a day is obtained, so that the robot carries out its own behavior based on the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all the time, it acts according to the behaviors on this time axis, and the robot's self-cognition is changed accordingly. The life time axis and the variable parameters can change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information; for example, if there was previously no anger value, scenes based on the life time axis and the variable factors automatically add to the robot's self-cognition according to scenes that previously simulated human self-cognition. The life time axis includes not only voice information but also information such as actions.

In this embodiment, the home appliances may be household appliances used in daily life, such as light fixtures, refrigerators, air conditioners, televisions, washing machines, and microwave ovens. A light fixture is taken as an example below; for a light fixture, what the user adjusts is the brightness of the light or whether the light is on or off.

For example, the user says to the robot, "I'm so sleepy." Hearing this, the robot recognizes the intention that the user is sleepy and then combines it with the robot's life time axis. If, for example, the current time is 8 am on a Monday, the robot knows that the owner has just gotten up, so it should turn on the light and adjust the brightness to a moderate level, neither so bright as to irritate the eyes nor so dark that the user goes on sleeping in. If the current time is 8 am on a Sunday, the robot determines from the life time axis that the user does not need to go to work today, so it chooses not to turn the light on for now; when, say, 9:30 am arrives and, according to the life time axis, the user should be getting ready to go to the gym, the robot reminds the user to get up and turns the light on. And if the user says to the robot, "I'm so sleepy," and the robot's life time axis shows, for example, that the current time is 9 pm, the robot knows that the owner needs to sleep, so it lowers the brightness of the light, or lowers it first and turns the light off after a while. This approach is more anthropomorphic and improves the user experience.

Of course, this embodiment is described with a light fixture as an example only; other home appliances can also be applied to this embodiment.

In this embodiment, the system further comprises an active inquiry module, configured to actively ask the user whether the home appliances need further control and to control the home appliances accordingly per the user's instruction.

In this way, after the robot makes its basic judgment, it further confirms the operation with the user, which not only reduces the chance of misjudgment but also makes follow-up operations easier, because after the basic judgment the robot usually does not act immediately, or there is a short transition after the action. In the example above, the user says to the robot, "I'm so sleepy"; the robot recognizes the intention that the user is sleepy and combines it with the life time axis; if the current time is 8 am on a Monday, the robot knows the owner has just gotten up, so it should turn the appliance on and set its brightness to a moderate level. The robot can then go on to ask the user whether to raise the brightness: if the user answers no, the robot keeps the brightness just set; if the user answers yes, the robot raises the brightness; if the user says to turn the light off, the robot turns it off. The robot may also not ask, with the user actively telling the robot what to do. And if the user says to the robot, "I'm so sleepy," and the life time axis shows that the current time is 9 pm, the robot knows the owner needs to sleep and lowers the brightness of the appliance; the robot can then go on to ask the user whether to turn the light off. If the user answers yes, the robot turns the light off; if the user answers no, the light is kept at low brightness; the user can of course also ask for more light, and the robot will raise the brightness.
According to one example, the system comprises a time-axis-and-artificial-intelligence cloud processing module configured to:

expand the robot's self-cognition;

obtain the parameters of the life time axis;

fit the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.

In this way the life time axis is added into the robot's own self-cognition, giving the robot an anthropomorphic life; for example, the cognition of eating lunch at noon is added to the robot.

According to another example, the time-axis-and-artificial-intelligence cloud processing module is specifically configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis. In this way the life time axis can be concretely added into the robot's own parameters.

According to another example, the time-axis-and-artificial-intelligence cloud processing module is specifically configured to: use a probability algorithm, make a probability estimate of the parameters between robots with a network, and calculate the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities. In this way the parameters of the robot's self-cognition can be concretely fitted to the parameters in the life time axis. The probability algorithm may be a Bayesian probability algorithm.

For example, over the 24 hours of a day, the robot is made to sleep, exercise, eat, dance, read, put on make-up, and so on. Each action affects the robot's own self-cognition; combining the parameters on the life time axis with the robot's own self-cognition and fitting them means that the robot's self-cognition comes to include mood, fatigue value, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene values, game object values, location scene values, location object values, and so on, so that the robot can identify the scene of its own location, such as a cafe or a bedroom.

Within the time axis of a day the robot performs different actions, such as sleeping at night, eating at noon, and exercising during the day; all of these scenes in the life time axis affect its self-cognition. The changes in these values are modeled by dynamically fitting a probability model, fitting the probabilities with which all of these actions occur on the time axis. Scene recognition: this kind of location scene recognition changes the geographic scene values in the self-cognition.

The present invention discloses a robot comprising the system for controlling home appliances based on intention recognition according to any of the above.

The foregoing further describes the present invention in detail with reference to specific preferred embodiments, but the specific implementation of the present invention should not be considered limited to these descriptions. Those of ordinary skill in the technical field of the present invention can make several simple deductions or substitutions without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (15)

  1. A method for controlling home appliances based on intention recognition, characterized by comprising:
    acquiring multimodal information of a user;
    identifying the user's intention according to the multimodal information;
    controlling the home appliances according to the user's multimodal information and intention, in combination with a life time axis.
  2. The method according to claim 1, characterized in that after the step of controlling the home appliances according to the user's multimodal information and intention in combination with the life time axis, the method comprises:
    actively asking the user whether the home appliances need further control, and controlling the home appliances accordingly per the user's instruction.
  3. The method according to claim 1, characterized in that the home appliances include a light fixture, and the step of controlling the home appliances in combination with the life time axis comprises: controlling the brightness or the on/off state of the light fixture in combination with the life time axis.
  4. The method according to claim 1, characterized in that the method for generating the parameters of the robot's life time axis comprises:
    expanding the robot's self-cognition;
    obtaining the parameters of the life time axis;
    fitting the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.
  5. The method according to claim 4, characterized in that the step of expanding the robot's self-cognition specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  6. The method according to claim 4, characterized in that the step of fitting the parameters of the robot's self-cognition to the parameters in the life time axis specifically comprises: using a probability algorithm, making a probability estimate of the parameters between robots with a network, and calculating the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  7. The method according to claim 1, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values representing those behaviors.
  8. A system for controlling home appliances based on intention recognition, characterized by comprising:
    an acquisition module, configured to acquire multimodal information of a user;
    an intention recognition module, configured to identify the user's intention according to the multimodal information;
    a control module, configured to control the home appliances according to the user's multimodal information and intention, in combination with a life time axis.
  9. The system according to claim 8, characterized in that the system further comprises an active inquiry module, configured to actively ask the user whether the home appliances need further control and to control the home appliances accordingly per the user's instruction.
  10. The system according to claim 8, characterized in that the home appliances include a light fixture, and the control module is specifically configured to control the brightness or the on/off state of the light fixture in combination with the life time axis.
  11. The system according to claim 8, characterized in that the system comprises a processing module configured to:
    expand the robot's self-cognition;
    obtain the parameters of the life time axis;
    fit the parameters of the robot's self-cognition to the parameters in the life time axis to generate the robot's life time axis.
  12. The system according to claim 11, characterized in that the processing module is specifically configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  13. The system according to claim 11, characterized in that the processing module is specifically configured to: use a probability algorithm, make a probability estimate of the parameters between robots with a network, and calculate the probability that each parameter changes after the scene parameters of the robot on the life time axis change, forming a fitting curve of the parameter-change probabilities.
  14. The system according to claim 8, characterized in that the life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily-life behaviors the user performs on the life time axis and the parameter values representing those behaviors.
  15. A robot, characterized by comprising the system for controlling home appliances based on intention recognition according to any one of claims 8 to 14.
PCT/CN2016/089216 2016-07-07 2016-07-07 Method, system and robot for controlling home appliances based on intention recognition WO2018006372A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001724.5A 2016-07-07 2016-07-07 Method, system and robot for controlling home appliances based on intention recognition
PCT/CN2016/089216 2016-07-07 2016-07-07 Method, system and robot for controlling home appliances based on intention recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/089216 WO2018006372A1 (zh) 2016-07-07 2016-07-07 一种基于意图识别控制家电的方法、系统及机器人

Publications (1)

Publication Number Publication Date
WO2018006372A1 true WO2018006372A1 (zh) 2018-01-11

Family

ID=58838105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089216 WO2018006372A1 (zh) 2016-07-07 2016-07-07 一种基于意图识别控制家电的方法、系统及机器人

Country Status (2)

Country Link
CN (1) CN106662932A (zh)
WO (1) WO2018006372A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109870923A (zh) * 2019-04-02 2019-06-11 浙江宝业建筑智能科技有限公司 Smart home control system and method

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10504511B2 (en) * 2017-07-24 2019-12-10 Midea Group Co., Ltd. Customizable wake-up voice commands
CN107390539A (zh) * 2017-08-23 2017-11-24 合肥龙图腾信息技术有限公司 Smart home control method based on brainwave acquisition
CN108563321A (zh) * 2018-01-02 2018-09-21 联想(北京)有限公司 Information processing method and electronic device
CN108415262A (zh) * 2018-03-06 2018-08-17 西北工业大学 Method for controlling home appliances via an intelligent gateway
CN108536304A (zh) * 2018-06-25 2018-09-14 广州市锐尚展柜制作有限公司 Multimodal interaction device for a smart home
CN110197171A (zh) * 2019-06-06 2019-09-03 深圳市汇顶科技股份有限公司 Interaction method and apparatus based on user motion information, and electronic device
CN110888335A (zh) * 2019-11-28 2020-03-17 星络智能科技有限公司 Smart home controller, interaction method therefor, and storage medium
CN111124110A (zh) * 2019-11-28 2020-05-08 星络智能科技有限公司 Smart home controller, interaction method therefor, and storage medium
CN112415908A (zh) * 2020-11-26 2021-02-26 珠海格力电器股份有限公司 Smart device control method and apparatus, readable storage medium, and computer device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7685518B2 (en) * 1998-01-23 2010-03-23 Sony Corporation Information processing apparatus, method and medium using a virtual reality space
CN102103707A (zh) * 2009-12-16 2011-06-22 群联电子股份有限公司 Emotion engine, emotion engine system, and control method for an electronic device
CN104951077A (zh) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Artificial-intelligence-based human-computer interaction method, apparatus, and terminal device
CN105490918A (zh) * 2015-11-20 2016-04-13 深圳狗尾草智能科技有限公司 System and method for a robot to actively interact with its owner
CN105511608A (zh) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Interaction method and apparatus based on an intelligent robot, and intelligent robot

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102088380A (zh) * 2009-12-04 2011-06-08 上海电气集团股份有限公司 Intelligent multi-robot network system centered on a home service robot
KR20160023089A (ko) * 2014-08-21 2016-03-03 엘지전자 주식회사 Digital device and control method therefor
CN104238369B (zh) * 2014-09-02 2017-08-18 百度在线网络技术(北京)有限公司 Smart home appliance control method and apparatus
CN104503378B (zh) * 2014-11-05 2018-01-30 广州艾若博机器人科技有限公司 Robot and method for controlling home appliances based on the robot
CN104965552B (zh) * 2015-07-03 2017-03-08 北京科技大学 Method and system for collaborative control of a smart home environment based on an emotional robot
CN105005204B (zh) * 2015-07-31 2018-02-23 深圳广田智能科技有限公司 Intelligent engine system and method capable of automatically triggering smart home and smart life scenarios
CN105291093A (zh) * 2015-11-27 2016-02-03 深圳市神州云海智能科技有限公司 Home robot system
CN105425602A (zh) * 2015-11-30 2016-03-23 青岛海尔智能家电科技有限公司 Automatic control method and apparatus for home appliances


Also Published As

Publication number Publication date
CN106662932A (zh) 2017-05-10

Similar Documents

Publication Publication Date Title
WO2018006372A1 (zh) Method, system and robot for controlling home appliances based on intention recognition
WO2018006373A1 (zh) Method, system and robot for controlling home appliances based on intention recognition
US10367652B2 (en) Smart home automation systems and methods
DE102017129939B4 Conversation-aware proactive notifications for a voice interface device
JP7351745B2 Social robot with environmental control function
US20180229372A1 (en) Maintaining attention and conveying believability via expression and goal-directed behavior with a social robot
CN112051743A Device control method, conflict processing method, corresponding apparatuses, and electronic device
WO2018000268A1 (zh) Method and system for generating robot interaction content, and robot
WO2018049430A2 (en) An intelligent interactive and augmented reality based user interface platform
WO2018006370A1 (zh) Interaction method and system for a virtual 3D robot, and robot
WO2018000259A1 (zh) Method and system for generating robot interaction content, and robot
CN108279573B Control method and apparatus based on human attribute detection, smart home appliance, and medium
CN107330418B Robot system
WO2018000267A1 (zh) Method and system for generating robot interaction content, and robot
CN107229262A Smart home system
WO2019082630A1 (ja) Information processing device and information processing method
Ramadan et al. The intelligent classroom: towards an educational ambient intelligence testbed
CN109357366B Regulation control method and apparatus, storage medium, and air conditioning system
WO2018006371A1 (zh) Method and system for synchronizing voice and virtual actions, and robot
WO2018006369A1 (zh) Method and system for synchronizing voice and virtual actions, and robot
CN111338227B Electronic appliance control method based on reinforcement learning, control device, and storage medium
CN110958750A Lighting device control method and apparatus
WO2018000258A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000261A1 (zh) Method and system for generating robot interaction content, and robot
WO2018000266A1 (zh) Method and system for generating robot interaction content, and robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16907877

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16907877

Country of ref document: EP

Kind code of ref document: A1