WO2018006370A1 - Interaction method and system for a virtual 3D robot, and robot - Google Patents
Interaction method and system for a virtual 3D robot, and robot
- Publication number
- WO2018006370A1 (PCT/CN2016/089214)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- robot
- interaction
- information
- parameter
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Definitions
- the invention relates to the field of robot interaction technology, and in particular to an interaction method and system for a virtual 3D robot, and to a robot.
- robots are used in more and more situations; for example, lonely elderly people and children can interact with robots through dialogue, entertainment, and the like.
- the inventor developed a virtual robot display device and imaging system that can form a 3D animated image; the virtual robot's host accepts human commands, such as voice, and the virtual 3D animated image then responds with sounds and actions according to the host's instructions. The robot can thus interact with humans not only through sound and expression but also through action, improving the interaction experience.
- the object of the present invention is to provide an interaction method, system, and robot for a virtual 3D robot that is more convenient to control, thereby improving the human-computer interaction experience.
- An interaction method for a virtual 3D robot includes: acquiring multimodal information of a user; generating interaction content according to the multimodal information and variable parameters; converting the interaction content into machine code recognizable by the robot;
- the robot produces output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- the encounter interaction specifically includes: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- the couple interaction specifically includes: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- the pet interaction specifically includes: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- the method for generating the robot's variable parameters includes: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot's variable parameters.
- the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- the step of generating interaction content according to the multimodal information and the variable parameters specifically includes: generating the interaction content according to the multimodal information, the variable parameters, and a fitted curve of parameter change probabilities.
- the method for generating the fitted curve of parameter change probabilities includes: using a probability algorithm, estimating the parameters between robots with a network and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- An interaction system for a virtual 3D robot, comprising:
- an acquisition module configured to acquire multimodal information of the user;
- an artificial intelligence module configured to generate interaction content according to the multimodal information and variable parameters;
- a conversion module configured to convert the interaction content into machine code recognizable by the robot;
- a control module configured to have the robot produce output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- the encounter interaction specifically includes: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- the couple interaction specifically includes: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- the pet interaction specifically includes: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- the system further comprises a processing module for fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the variable parameters.
- the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- the artificial intelligence module is specifically configured to generate interaction content according to the multimodal information, the variable parameters, and the fitted curve of parameter change probabilities.
- the system includes a fitted-curve generating module for using a probability algorithm, estimating the parameters between robots with a network and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- the present invention discloses a robot comprising an interaction system for a virtual 3D robot as described in any of the above.
- the interaction method of the virtual 3D robot of the present invention includes: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- after acquiring the user's multimodal information, the interaction content is generated in combination with the robot's variable parameters, so the robot can identify the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user; when interacting, the robot thus has not only speech but also diverse forms of expression such as actions, making its expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more kinds of interaction modes and improving its range of application and the user experience.
- FIG. 1 is a flowchart of an interaction method for a virtual 3D robot according to Embodiment 1 of the present invention;
- FIG. 2 is a schematic diagram of an interaction system for a virtual 3D robot according to Embodiment 2 of the present invention.
- Computer devices include user devices and network devices.
- the user equipment or client includes but is not limited to a computer, a smartphone, a PDA, etc.;
- the network device includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud-computing-based cloud composed of a large number of computers or network servers.
- the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
- the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
- terms such as "first" and "second" may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
- the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
- an interaction method for a virtual 3D robot is disclosed.
- the method in this embodiment is mainly used in virtual 3D robots, for example in VR (Virtual Reality).
- the method includes:
- the robot produces output according to the interaction content, where the output modes include at least couple interaction, encounter interaction, and pet interaction.
- the interaction method of the virtual 3D robot of the present invention comprises: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- after acquiring the user's multimodal information, the interaction content is generated in combination with the robot's variable parameters, so the robot can identify the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user; when interacting, the robot thus has not only speech but also diverse forms of expression such as actions, making its expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more kinds of interaction modes and improving its range of application and the user experience.
- the interaction content may include voice information, motion information, and the like, so that multimodal output can be performed, increasing the forms of expression of the robot's feedback.
- the interaction content may include voice information and action information; to match them, the voice information and the action information may be adjusted when the interaction content is generated, for example by adjusting the duration of the voice information and the duration of the action information to be the same.
- specifically, the adjustment preferably means compressing or stretching the duration of the voice information and/or the duration of the action information, or speeding up or slowing down playback, for example multiplying the playback speed of the voice information by 2, or multiplying the playback time of the action information by 0.8.
- for example, if the duration of the voice information is 1 minute and the duration of the action information is 2 minutes, the playback speed of the action information can be doubled, so that its adjusted playback time becomes 1 minute and is synchronized with the voice information; alternatively, the playback speed of the voice information can be slowed to 0.5 times the original speed, stretching it to 2 minutes and synchronizing it with the action information; or both can be adjusted, for example slowing the voice while speeding up the action so that both reach 1 minute 30 seconds, which also synchronizes voice and action.
- the multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information.
- the variable parameters are, specifically, sudden changes occurring between the person and the machine; for example, one day on the timeline consists of eating, sleeping, interacting, running, eating, and sleeping. If the robot's scene is suddenly changed, for example being taken to the beach during the running period, these parameters that humans actively impose on the robot act as variable parameters, and such changes alter the robot's self-cognition.
- the life timeline and the variable parameters can modify attributes in the self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information; for example, if there was previously no anger value, a scene based on the life timeline and the variable factors is automatically added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
- for example, according to the life timeline, 12 noon should be meal time; if this scene is changed, for example the user goes out shopping at 12 noon, the robot writes this in as one of its variable parameters. When the user interacts with the robot during this period, the robot generates interaction content in combination with going out shopping at 12 noon rather than with the previous eating at 12 noon; when specifically generating the interaction content, the robot combines the acquired multimodal information of the user, such as voice information, video information, and picture information, with the variable parameters. In this way, unexpected events from human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic.
- the encounter interaction specifically includes: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, a user records a piece of speech, which is stored in the database; after another, stranger user randomly acquires that speech, they can establish an interaction with the user and communicate.
- the couple interaction specifically includes: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, if the user records a voice message, "Wife, go to bed early," the robot analyzes and recognizes the voice, converts it, and, after sending it to the user's couple robot, it is delivered as "Dear so-and-so, your husband asks you to go to bed early." This makes communication between users more convenient and makes communication between couples more intimate.
- of course, the couple robots are bound and set up with each other in advance.
- in addition, after the robot receives the voice information, it can also present it multimodally together with action information to improve the user experience.
- the pet interaction specifically includes: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, the user says "How is the weather today?"; after acquiring this, the robot queries today's weather, sends the result to a display unit such as a mobile phone, tablet, or other mobile terminal for display, and informs the user of today's weather, for example that it is sunny; at the same time, the feedback can be accompanied by actions, expressions, and the like.
- the method for generating the robot's variable parameters includes: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot's variable parameters; by placing the robot in scenes that incorporate the variable parameters, the robot's own self-cognition is extended and an anthropomorphic effect is produced.
- the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- the variable parameters capture the situation in which, according to the original plan, the user would be in one state, but a sudden change puts the user in another state; the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 p.m. and something else suddenly came up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
- the step of generating interaction content according to the multimodal information and the variable parameters specifically includes: generating the interaction content according to the multimodal information, the variable parameters, and a fitted curve of parameter change probabilities.
- the fitted curve can be generated by probability training on the variable parameters, from which the robot's interaction content is then generated.
- the method for generating the fitted curve of parameter change probabilities includes: using a probability algorithm, estimating the parameters between robots with a network and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- the probability algorithm may be a Bayesian probability algorithm.
- the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect.
- with recognition of the location scene, the robot knows its geographical position and changes the way interaction content is generated according to the geographical environment it is in.
- a Bayesian probability algorithm is used to estimate the parameters between robots with a Bayesian network and to compute, after the robot's own timeline scene parameters on the life timeline change, the probability of each parameter changing; the resulting fitted curve dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle, and its expressions can be changed according to the location scene.
- an interaction system for a virtual 3D robot, including:
- the acquisition module 201, configured to acquire multimodal information of the user;
- the artificial intelligence module 202, configured to generate interaction content according to the multimodal information and variable parameters, where the variable parameters are generated by the variable parameter module 301;
- the conversion module 203, configured to convert the interaction content into machine code recognizable by the robot;
- the control module 204, configured to have the robot produce output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- the robot can thus recognize the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user, so that when interacting the robot has not only speech but also diverse forms of expression such as actions, making its expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more ways to interact and improving its range of application and the user experience.
- the interaction content may include voice information, motion information, and the like, so that multimodal output can be performed, increasing the forms of expression of the robot's feedback.
- the interaction content may also include voice information and action information; to match them, the voice information and the action information may be adjusted when the interaction content is generated, for example by adjusting the duration of the voice information and the duration of the action information to be the same.
- specifically, the adjustment preferably means compressing or stretching the duration of the voice information and/or the duration of the action information, or speeding up or slowing down playback, for example multiplying the playback speed of the voice information by 2, or multiplying the playback time of the action information by 0.8.
- for example, if the duration of the voice information is 1 minute and the duration of the action information is 2 minutes, the playback speed of the action information can be doubled, so that its adjusted playback time becomes 1 minute and is synchronized with the voice information; alternatively, the playback speed of the voice information can be slowed to 0.5 times the original speed, stretching it to 2 minutes and synchronizing it with the action information; or both can be adjusted, for example slowing the voice while speeding up the action so that both reach 1 minute 30 seconds, which also synchronizes voice and action.
- the multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information.
- the variable parameters are, specifically, sudden changes occurring between the person and the machine; for example, one day on the timeline consists of eating, sleeping, interacting, running, eating, and sleeping. If the robot's scene is suddenly changed, for example being taken to the beach during the running period, these parameters that humans actively impose on the robot act as variable parameters, and such changes alter the robot's self-cognition.
- the life timeline and the variable parameters can modify attributes in the self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information; for example, if there was previously no anger value, a scene based on the life timeline and the variable factors is automatically added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
- for example, according to the life timeline, 12 noon should be meal time; if this scene is changed, for example the user goes out shopping at 12 noon, the robot writes this in as one of its variable parameters. When the user interacts with the robot during this period, the robot generates interaction content in combination with going out shopping at 12 noon rather than with the previous eating at 12 noon; when specifically generating the interaction content, the robot combines the acquired multimodal information of the user, such as voice information, video information, and picture information, with the variable parameters. In this way, unexpected events from human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic.
- the encounter interaction specifically includes: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, a user records a piece of speech, which is stored in the database; after another, stranger user randomly acquires that speech, they can establish an interaction with the user and communicate.
- the couple interaction specifically includes: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, if the user records a voice message, "Wife, go to bed early," the robot analyzes and recognizes the voice, converts it, and, after sending it to the user's couple robot, it is delivered as "Dear so-and-so, your husband asks you to go to bed early." This makes communication between users more convenient and makes communication between couples more intimate.
- of course, the couple robots are bound and set up with each other in advance.
- in addition, after the robot receives the voice information, it can also present it multimodally together with action information to improve the user experience.
- the pet interaction specifically includes: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- the multimodal information may be voice information, and may of course be other information, such as video information or action information.
- for example, the user says "How is the weather today?"; after acquiring this, the robot queries today's weather, sends the result to a display unit such as a mobile phone, tablet, or other mobile terminal for display, and informs the user of today's weather, for example that it is sunny; at the same time, the feedback can be accompanied by actions, expressions, and the like.
- the system further includes a processing module for fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the variable parameters.
- the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- the variable parameters capture the situation in which, according to the original plan, the user would be in one state, but a sudden change puts the user in another state; the variable parameter represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 p.m. and something else suddenly came up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
- the artificial intelligence module is specifically configured to generate interaction content according to the multimodal information, the variable parameters, and the fitted curve of parameter change probabilities.
- the fitted curve can be generated by probability training on the variable parameters, from which the robot's interaction content is then generated.
- the system includes a fitted-curve generating module for using a probability algorithm, estimating the parameters between robots with a network and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- the probability algorithm may be a Bayesian probability algorithm.
- the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect.
- with recognition of the location scene, the robot knows its geographical position and changes the way interaction content is generated according to the geographical environment it is in.
- a Bayesian probability algorithm is used to estimate the parameters between robots with a Bayesian network and to compute, after the robot's own timeline scene parameters on the life timeline change, the probability of each parameter changing; the resulting fitted curve dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle, and its expressions can be changed according to the location scene.
- the present invention discloses a robot comprising an interaction system for a virtual 3D robot as described in any of the above.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Manipulator (AREA)
Abstract
An interaction method for a virtual 3D robot, comprising: acquiring multimodal information of a user (S101); generating interaction content according to the multimodal information and variable parameters (300) (S102); converting the interaction content into machine code recognizable by the robot (S103); and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction (S104). In this way, the robot can identify the specific information in the interaction content, so that it can perform output and control and interact with the user, making the robot's forms of expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more ways to interact and broadening its range of application and improving the user experience.
Description
The present invention relates to the field of robot interaction technology, and in particular to an interaction method and system for a virtual 3D robot, and a robot.
As tools for interacting with humans, robots are used in more and more situations; for example, elderly people or children who feel somewhat lonely can interact with a robot through dialogue, entertainment, and the like. To make robots more anthropomorphic when interacting with humans, the inventor developed a display device and imaging system for a virtual robot that can form a 3D animated image. The virtual robot's host accepts human instructions, such as voice, to interact with humans, and the virtual 3D animated image then responds with sound and motion according to the host's instructions. This makes the robot more anthropomorphic: it can interact with humans not only through sound and expression but also through motion, greatly improving the interaction experience.
However, how to control the virtual robot is a key and rather complex problem. How to provide an interaction method, system, and robot for a virtual 3D robot that is more convenient to control, thereby improving the human-computer interaction experience, has therefore become an urgent technical problem.
Summary of the Invention
The object of the present invention is to provide an interaction method, system, and robot for a virtual 3D robot that is more convenient to control, thereby improving the human-computer interaction experience.
The object of the present invention is achieved through the following technical solutions:
An interaction method for a virtual 3D robot, comprising:
acquiring multimodal information of a user;
generating interaction content according to the multimodal information and variable parameters;
converting the interaction content into machine code recognizable by the robot;
the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
Preferably, the encounter interaction specifically comprises: acquiring multimodal information of the user;
storing the multimodal information in a database;
if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
Preferably, the couple interaction specifically comprises: acquiring multimodal information of the user;
identifying the user's intention according to the multimodal information and scene information;
sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
Preferably, the pet interaction specifically comprises: acquiring multimodal information of the user;
generating interaction content according to the multimodal information and variable parameters;
sending the interaction content to a display unit to establish an interaction with the user.
Preferably, the method for generating the robot's variable parameters comprises: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot's variable parameters.
Preferably, the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
Preferably, the step of generating interaction content according to the multimodal information and the variable parameters specifically comprises: generating the interaction content according to the multimodal information, the variable parameters, and a fitted curve of parameter change probabilities.
Preferably, the method for generating the fitted curve of parameter change probabilities comprises: using a probability algorithm, estimating the parameters between robots with a network, and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
An interaction system for a virtual 3D robot, comprising:
an acquisition module, configured to acquire multimodal information of a user;
an artificial intelligence module, configured to generate interaction content according to the multimodal information and variable parameters;
a conversion module, configured to convert the interaction content into machine code recognizable by the robot;
a control module, configured to have the robot produce output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
Preferably, the encounter interaction specifically comprises: acquiring multimodal information of the user;
storing the multimodal information in a database;
if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
Preferably, the couple interaction specifically comprises: acquiring multimodal information of the user;
identifying the user's intention according to the multimodal information and scene information;
sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
Preferably, the pet interaction specifically comprises: acquiring multimodal information of the user;
generating interaction content according to the multimodal information and variable parameters;
sending the interaction content to a display unit to establish an interaction with the user.
Preferably, the system further comprises a processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the variable parameters.
Preferably, the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
Preferably, the artificial intelligence module is specifically configured to generate interaction content according to the multimodal information, the variable parameters, and the fitted curve of parameter change probabilities.
Preferably, the system comprises a fitted-curve generating module, configured to use a probability algorithm, estimate the parameters between robots with a network, and compute, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
The present invention discloses a robot comprising an interaction system for a virtual 3D robot as described in any of the above.
Compared with the prior art, the present invention has the following advantages. The interaction method of the virtual 3D robot of the present invention comprises: acquiring multimodal information of a user; generating interaction content according to the multimodal information and variable parameters; and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction. In this way, after the user's multimodal information is acquired, the interaction content is generated in combination with the robot's variable parameters, so that the robot can identify the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user. When interacting, the robot therefore has not only speech but also diverse forms of expression such as motion, making its forms of expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more ways to interact and broadening its range of application and improving the user experience.
FIG. 1 is a flowchart of an interaction method for a virtual 3D robot according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of an interaction system for a virtual 3D robot according to Embodiment 2 of the present invention.
Although the flowchart describes the operations as sequential processing, many of the operations can be performed in parallel, concurrently, or simultaneously. The order of the operations can be rearranged. Processing may be terminated when its operations are completed, but it may also have additional steps not included in the drawings. Processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Computer devices include user devices and network devices. User devices or clients include but are not limited to computers, smartphones, PDAs, and the like; network devices include but are not limited to a single network server, a server group composed of multiple network servers, or a cloud-computing-based cloud composed of a large number of computers or network servers. A computer device may operate alone to implement the present invention, or it may access a network and implement the present invention through interoperation with other computer devices in the network. The network in which the computer device is located includes but is not limited to the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, and the like.
The terms "first", "second", and so on may be used herein to describe various units, but these units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being "connected" or "coupled" to another unit, it may be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" are intended to include the plural as well. It should also be understood that the terms "comprising" and/or "including" as used herein specify the presence of the stated features, integers, steps, operations, units, and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
The present invention is further described below with reference to the drawings and preferred embodiments.
Embodiment 1
As shown in FIG. 1, this embodiment discloses an interaction method for a virtual 3D robot. The method in this embodiment is mainly used in virtual 3D robots, for example in VR (Virtual Reality). The method comprises:
S101: acquiring multimodal information of a user;
S102: generating interaction content according to the multimodal information and variable parameters 300;
S103: the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
The interaction method of the virtual 3D robot of the present invention comprises: acquiring multimodal information of a user; generating interaction content according to the multimodal information and variable parameters; and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction. In this way, after the user's multimodal information is acquired, the interaction content is generated in combination with the robot's variable parameters, so that the robot can identify the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user. When interacting, the robot therefore has not only speech but also diverse forms of expression such as motion, making its forms of expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more ways to interact and broadening its range of application and improving the user experience.
In this embodiment, the interaction content may include voice information, motion information, and the like, so that multimodal output can be performed, increasing the forms of expression of the robot's feedback.
In addition, in this embodiment the interaction content may include voice information and motion information. To match the motion information to the voice information, the voice information and the motion information may be adjusted and matched when the interaction content is generated; for example, the duration of the voice information and the duration of the motion information may be adjusted to be the same. Specifically, the adjustment preferably means compressing or stretching the duration of the voice information and/or the duration of the motion information; it may also mean speeding up or slowing down playback, for example multiplying the playback speed of the voice information by 2, or multiplying the playback time of the motion information by 0.8, and so on.
For example, if in the interaction content generated by the robot from the user's multimodal information the duration of the voice information is 1 minute and the duration of the motion information is 2 minutes, the playback speed of the motion information can be doubled, so that its adjusted playback time becomes 1 minute and is synchronized with the voice information. Alternatively, the playback speed of the voice information can be slowed to 0.5 times the original speed, stretching it to 2 minutes and synchronizing it with the motion information. Both can also be adjusted, for example slowing the voice information while speeding up the motion information so that both reach 1 minute 30 seconds, which likewise synchronizes voice and motion.
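To make the synchronization rule concrete, here is a minimal Python sketch that computes per-track playback-speed factors for the example above. The function and strategy names are illustrative assumptions, not part of the disclosure; it presumes a player that accepts a speed multiplier per track.

```python
def synchronize_durations(voice_seconds: float, action_seconds: float,
                          strategy: str = "stretch_action") -> tuple[float, float]:
    """Return (voice_factor, action_factor): playback-speed multipliers
    that make both tracks end together (2.0 means 'play twice as fast')."""
    if strategy == "stretch_action":      # fit the motion track to the voice
        return 1.0, action_seconds / voice_seconds
    if strategy == "stretch_voice":       # fit the voice track to the motion
        return voice_seconds / action_seconds, 1.0
    # "meet_in_middle": move both tracks to the average duration
    target = (voice_seconds + action_seconds) / 2.0
    return voice_seconds / target, action_seconds / target

# The example above: 60 s of voice, 120 s of motion.
print(synchronize_durations(60, 120))                    # (1.0, 2.0): motion plays 2x
print(synchronize_durations(60, 120, "stretch_voice"))   # (0.5, 1.0): voice plays 0.5x
print(synchronize_durations(60, 120, "meet_in_middle"))  # both tracks meet at 90 s
```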
The multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information.
In this embodiment, the variable parameters are, specifically, sudden changes occurring between the person and the machine. For example, one day on the timeline consists of eating, sleeping, interacting, running, eating, and sleeping. If the robot's scene is suddenly changed, for example being taken to the beach during the running period, these parameters that humans actively impose on the robot act as variable parameters, and such changes alter the robot's self-cognition. The life timeline and the variable parameters can modify attributes in the self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information; for example, if there was previously no anger value, a scene based on the life timeline and the variable factors is automatically added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
For example, according to the life timeline, 12 noon should be meal time. If this scene is changed, for example the user goes out shopping at 12 noon, the robot writes this in as one of its variable parameters. When the user interacts with the robot during this period, the robot generates interaction content in combination with going out shopping at 12 noon, rather than with the previous eating at 12 noon. When specifically generating the interaction content, the robot combines the acquired multimodal information of the user, such as voice information, video information, and picture information, with the variable parameters. In this way, unexpected events from human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic.
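As a minimal sketch of this mechanism, the code below writes a sudden change onto a planned life timeline as a variable parameter; the class names, the hour-keyed plan, and the attached value are illustrative assumptions rather than the patent's data model.

```python
from dataclasses import dataclass, field

@dataclass
class VariableParameter:
    """A sudden deviation from the plan: the original behavior, the behavior
    after the change, and a value (e.g. an observed probability) for it."""
    original: str
    changed: str
    value: float = 1.0

@dataclass
class LifeTimeline:
    plan: dict = field(default_factory=lambda: {      # hour of day -> planned activity
        8: "eating", 10: "interacting", 12: "eating", 17: "running", 22: "sleeping"})
    overrides: dict = field(default_factory=dict)     # hour -> VariableParameter

    def observe_change(self, hour: int, new_activity: str, value: float = 1.0):
        """Write a sudden change in as a variable parameter for that slot."""
        self.overrides[hour] = VariableParameter(
            self.plan.get(hour, "idle"), new_activity, value)

    def current_activity(self, hour: int) -> str:
        override = self.overrides.get(hour)
        return override.changed if override else self.plan.get(hour, "idle")

timeline = LifeTimeline()
timeline.observe_change(12, "shopping")   # the user went out instead of eating
print(timeline.current_activity(12))      # -> "shopping", used when generating content
```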
According to one example, the encounter interaction specifically comprises: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, a user records a piece of speech, which is then stored in the database; after another, stranger user randomly acquires this speech, they can establish an interaction with the user and communicate.
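A minimal sketch of this store-and-random-draw flow, assuming an in-memory store; the class and method names are illustrative.

```python
import random

class EncounterService:
    """Hold users' recorded multimodal messages; a stranger who randomly
    draws one can then open an interaction with its author."""
    def __init__(self):
        self._db = []                     # list of (author_id, message) records

    def deposit(self, author_id: str, message: str) -> None:
        self._db.append((author_id, message))

    def random_encounter(self, stranger_id: str):
        """Draw a random stored message and model the new interaction as a
        channel between the stranger and the message's author."""
        if not self._db:
            return None
        author_id, message = random.choice(self._db)
        return {"between": (stranger_id, author_id), "opening_message": message}

service = EncounterService()
service.deposit("user_a", "<voice clip: hello there>")
print(service.random_encounter("stranger_b"))
```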
According to one example, the couple interaction specifically comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, if the user records a voice message, "Wife, go to bed early," the robot analyzes and recognizes the voice, converts it, and, after it is sent to the user's couple robot, it is delivered as "Dear so-and-so, your husband asks you to go to bed early." This makes communication between users more convenient and makes communication between couples more intimate. Of course, the couple robots are bound and set up with each other in advance. In addition, after the robot receives the voice information, it can also present it multimodally together with motion information to improve the user experience.
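The relay step could look like the following sketch; the toy keyword-based intent recognizer and the binding table stand in for the patent's multimodal intent recognition and pre-binding setup.

```python
class IntentRecognizer:
    """Toy stand-in for multimodal intent recognition."""
    def recognize(self, utterance: str, scene: str = "home") -> str:
        if "go to bed early" in utterance.lower():
            return "asks you to go to bed early"
        return f"says: {utterance!r}"

def couple_relay(recognizer: IntentRecognizer, bindings: dict,
                 sender: str, utterance: str) -> tuple[str, str]:
    """Rewrite the sender's message and address it to the pre-bound partner."""
    partner = bindings[sender]            # couple robots are bound in advance
    intent = recognizer.recognize(utterance)
    return partner, f"Dear {partner}, your husband {intent}."

bindings = {"husband_bot": "wife_bot"}
print(couple_relay(IntentRecognizer(), bindings,
                   "husband_bot", "Wife, go to bed early"))
# -> ('wife_bot', 'Dear wife_bot, your husband asks you to go to bed early.')
```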
According to one example, the pet interaction specifically comprises: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, the user says "How is the weather today?"; after acquiring this, the robot queries today's weather, sends the result to a display unit such as a mobile phone, tablet, or other mobile terminal for display, and informs the user of today's weather, for example that it is sunny. At the same time, the feedback can be accompanied by motions, expressions, and other forms of presentation.
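A minimal sketch of the query-and-display step; the weather lookup and the display unit are injected stand-ins for a real weather service and a real mobile-terminal screen.

```python
class ConsoleDisplay:
    """Stand-in for a phone or tablet display unit."""
    def show(self, text: str, expression: str, action: str) -> None:
        print(text, f"[expression={expression}, action={action}]")

def pet_interaction(utterance: str, weather_lookup, display_unit) -> None:
    """Generate interaction content from the request and push it to the
    display unit, together with an expression/action cue."""
    if "weather" in utterance.lower():
        report = weather_lookup()                         # e.g. "sunny"
        display_unit.show(text=f"Today's weather: {report}",
                          expression="smile", action="wave")

pet_interaction("How is the weather today?", lambda: "sunny", ConsoleDisplay())
```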
According to one example, the method for generating the robot's variable parameters comprises: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot's variable parameters. In this way, by placing the robot in scenes that incorporate the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect.
According to one example, the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
The variable parameters capture the situation in which, according to the original plan, the user would be in one state, but a sudden change puts the user in another state. The variable parameter thus represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 p.m. and something else suddenly came up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to one example, the step of generating interaction content according to the multimodal information and the variable parameters specifically comprises: generating the interaction content according to the multimodal information, the variable parameters, and a fitted curve of parameter change probabilities.
In this way, the fitted curve can be generated by probability training on the variable parameters, from which the robot's interaction content is then generated.
According to one example, the method for generating the fitted curve of parameter change probabilities comprises: using a probability algorithm, estimating the parameters between robots with a network, and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities. The probability algorithm may be a Bayesian probability algorithm.
By placing the robot in scenes that incorporate the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect. At the same time, with recognition of the location scene, the robot knows its geographical position and changes the way interaction content is generated according to the geographical environment it is in. In addition, we use a Bayesian probability algorithm, estimating the parameters between robots with a Bayesian network and computing, after the robot's own timeline scene parameters on the life timeline change, the probability of each parameter changing; the resulting fitted curve dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
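The following toy sketch shows one way such probabilities could be estimated and fitted; the event log, the Laplace-smoothed conditional estimate (standing in for a full Bayesian-network estimation), and the polynomial fit are all illustrative assumptions.

```python
import numpy as np

# Toy observations pooled across robots: (hour, scene_changed, mood_changed).
log = [(9, 1, 1), (9, 1, 0), (12, 1, 1), (12, 1, 0),
       (17, 1, 1), (17, 1, 1), (21, 1, 0), (21, 1, 0)]

def change_probability(hour: int, alpha: float = 1.0) -> float:
    """P(mood changes | scene changed at this hour), Laplace-smoothed."""
    hits = sum(1 for h, sc, mc in log if h == hour and sc and mc)
    total = sum(1 for h, sc, mc in log if h == hour and sc)
    return (hits + alpha) / (total + 2 * alpha)

hours = sorted({h for h, _, _ in log})
probs = [change_probability(h) for h in hours]
coeffs = np.polyfit(hours, probs, deg=2)      # the fitted curve over the timeline
fitted = np.poly1d(coeffs)
print(dict(zip(hours, probs)))                # pointwise estimates
print(fitted([10, 15, 20]))                   # interpolated change probabilities
```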
Embodiment 2
As shown in FIG. 2, this embodiment discloses an interaction system for a virtual 3D robot, comprising:
an acquisition module 201, configured to acquire multimodal information of a user;
an artificial intelligence module 202, configured to generate interaction content according to the multimodal information and variable parameters, where the variable parameters are generated by a variable parameter module 301;
a conversion module 203, configured to convert the interaction content into machine code recognizable by the robot;
a control module 204, configured to have the robot produce output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
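A structural sketch of how these four modules could be wired into a pipeline; each module here is any object exposing the single method used below, and all names are illustrative.

```python
class InteractionSystem:
    """Wiring of the modules in FIG. 2: acquisition (201), artificial
    intelligence (202, fed by the variable parameter module 301),
    conversion (203), and control (204)."""
    def __init__(self, acquirer, ai, converter, controller):
        self.acquirer, self.ai = acquirer, ai
        self.converter, self.controller = converter, controller

    def step(self, raw_input, variable_params):
        info = self.acquirer.acquire(raw_input)              # module 201
        content = self.ai.generate(info, variable_params)    # module 202 (+301)
        code = self.converter.to_machine_code(content)       # module 203
        # module 204: dispatch to couple / encounter / pet interaction output
        return self.controller.output(code)
```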
In this way, the robot can identify the specific information in the interaction content, perform output and control, and have the 3D image presented correspondingly to interact with the user, so that when interacting the robot has not only speech but also diverse forms of expression such as motion, making its forms of expression more varied and anthropomorphic and improving the user's experience of interacting with the robot. Because the output modes of the present invention include at least couple interaction, encounter interaction, and pet interaction, the robot can present different functions according to different needs, giving it more ways to interact and broadening its range of application and improving the user experience.
In this embodiment, the interaction content may include voice information, motion information, and the like, so that multimodal output can be performed, increasing the forms of expression of the robot's feedback.
In addition, in this embodiment the interaction content may also include voice information and motion information. To match the motion information to the voice information, the voice information and the motion information may be adjusted and matched when the interaction content is generated; for example, the duration of the voice information and the duration of the motion information may be adjusted to be the same. Specifically, the adjustment preferably means compressing or stretching the duration of the voice information and/or the duration of the motion information; it may also mean speeding up or slowing down playback, for example multiplying the playback speed of the voice information by 2, or multiplying the playback time of the motion information by 0.8, and so on.
For example, if in the interaction content generated by the robot from the user's multimodal information the duration of the voice information is 1 minute and the duration of the motion information is 2 minutes, the playback speed of the motion information can be doubled, so that its adjusted playback time becomes 1 minute and is synchronized with the voice information. Alternatively, the playback speed of the voice information can be slowed to 0.5 times the original speed, stretching it to 2 minutes and synchronizing it with the motion information. Both can also be adjusted, for example slowing the voice information while speeding up the motion information so that both reach 1 minute 30 seconds, which likewise synchronizes voice and motion.
The multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information.
In this embodiment, the variable parameters are, specifically, sudden changes occurring between the person and the machine. For example, one day on the timeline consists of eating, sleeping, interacting, running, eating, and sleeping. If the robot's scene is suddenly changed, for example being taken to the beach during the running period, these parameters that humans actively impose on the robot act as variable parameters, and such changes alter the robot's self-cognition. The life timeline and the variable parameters can modify attributes in the self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information; for example, if there was previously no anger value, a scene based on the life timeline and the variable factors is automatically added to the robot's self-cognition according to scenes that previously simulated human self-cognition.
For example, according to the life timeline, 12 noon should be meal time. If this scene is changed, for example the user goes out shopping at 12 noon, the robot writes this in as one of its variable parameters. When the user interacts with the robot during this period, the robot generates interaction content in combination with going out shopping at 12 noon, rather than with the previous eating at 12 noon. When specifically generating the interaction content, the robot combines the acquired multimodal information of the user, such as voice information, video information, and picture information, with the variable parameters. In this way, unexpected events from human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic.
According to one example, the encounter interaction specifically comprises: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, a user records a piece of speech, which is then stored in the database; after another, stranger user randomly acquires this speech, they can establish an interaction with the user and communicate.
According to one example, the couple interaction specifically comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, if the user records a voice message, "Wife, go to bed early," the robot analyzes and recognizes the voice, converts it, and, after it is sent to the user's couple robot, it is delivered as "Dear so-and-so, your husband asks you to go to bed early." This makes communication between users more convenient and makes communication between couples more intimate. Of course, the couple robots are bound and set up with each other in advance. In addition, after the robot receives the voice information, it can also present it multimodally together with motion information to improve the user experience.
According to one example, the pet interaction specifically comprises: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
In this embodiment, the multimodal information may be voice information, and may of course be other information, such as video information or motion information. For example, the user says "How is the weather today?"; after acquiring this, the robot queries today's weather, sends the result to a display unit such as a mobile phone, tablet, or other mobile terminal for display, and informs the user of today's weather, for example that it is sunny. At the same time, the feedback can be accompanied by motions, expressions, and other forms of presentation.
According to one example, the system further comprises a processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the variable parameters.
In this way, by placing the robot in scenes that incorporate the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect.
According to one example, the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
The variable parameters capture the situation in which, according to the original plan, the user would be in one state, but a sudden change puts the user in another state. The variable parameter thus represents this change of behavior or state, as well as the user's state or behavior after the change. For example, the user was originally running at 5 p.m. and something else suddenly came up, such as going to play ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to one example, the artificial intelligence module is specifically configured to generate interaction content according to the multimodal information, the variable parameters, and the fitted curve of parameter change probabilities.
In this way, the fitted curve can be generated by probability training on the variable parameters, from which the robot's interaction content is then generated.
According to one example, the system comprises a fitted-curve generating module, configured to use a probability algorithm, estimate the parameters between robots with a network, and compute, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities. The probability algorithm may be a Bayesian probability algorithm.
By placing the robot in scenes that incorporate the variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes used on the variable-parameter axis, producing an anthropomorphic effect. At the same time, with recognition of the location scene, the robot knows its geographical position and changes the way interaction content is generated according to the geographical environment it is in. In addition, we use a Bayesian probability algorithm, estimating the parameters between robots with a Bayesian network and computing, after the robot's own timeline scene parameters on the life timeline change, the probability of each parameter changing; the resulting fitted curve dynamically influences the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
The present invention discloses a robot comprising an interaction system for a virtual 3D robot as described in any of the above.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of these shall be regarded as falling within the protection scope of the present invention.
Claims (17)
- An interaction method for a virtual 3D robot, characterized by comprising: acquiring multimodal information of a user; generating interaction content according to the multimodal information and variable parameters; and the robot producing output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- The interaction method according to claim 1, characterized in that the encounter interaction specifically comprises: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- The interaction method according to claim 1, characterized in that the couple interaction specifically comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- The interaction method according to claim 1, characterized in that the pet interaction specifically comprises: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- The interaction method according to claim 1, characterized in that the method for generating the robot's variable parameters comprises: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot's variable parameters.
- The interaction method according to claim 5, characterized in that the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- The interaction method according to claim 1, characterized in that the step of generating interaction content according to the multimodal information and the variable parameters specifically comprises: generating the interaction content according to the multimodal information, the variable parameters, and a fitted curve of parameter change probabilities.
- The interaction method according to claim 7, characterized in that the method for generating the fitted curve of parameter change probabilities comprises: using a probability algorithm, estimating the parameters between robots with a network, and computing, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- An interaction system for a virtual 3D robot, characterized by comprising: an acquisition module, configured to acquire multimodal information of a user; an artificial intelligence module, configured to generate interaction content according to the multimodal information and variable parameters; a conversion module, configured to convert the interaction content into machine code recognizable by the robot; and a control module, configured to have the robot produce output according to the interaction content, the output modes including at least couple interaction, encounter interaction, and pet interaction.
- The interaction system according to claim 9, characterized in that the encounter interaction specifically comprises: acquiring multimodal information of the user; storing the multimodal information in a database; and, if a stranger user retrieves the multimodal information from the database, establishing an interaction with that stranger user.
- The interaction system according to claim 9, characterized in that the couple interaction specifically comprises: acquiring multimodal information of the user; identifying the user's intention according to the multimodal information and scene information; and sending multimodal information processed by the robot to the couple user associated with the user, according to the user's multimodal information and intention.
- The interaction system according to claim 9, characterized in that the pet interaction specifically comprises: acquiring multimodal information of the user; generating interaction content according to the multimodal information and variable parameters; and sending the interaction content to a display unit to establish an interaction with the user.
- The interaction system according to claim 9, characterized in that the system further comprises a processing module, configured to fit the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the variable parameters.
- The interaction system according to claim 13, characterized in that the variable parameters include at least a change to the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the behavior after the change.
- The interaction system according to claim 9, characterized in that the artificial intelligence module is specifically configured to generate interaction content according to the multimodal information, the variable parameters, and the fitted curve of parameter change probabilities.
- The interaction system according to claim 15, characterized in that the system comprises a fitted-curve generating module, configured to use a probability algorithm, estimate the parameters between robots with a network, and compute, after the scene parameters of a robot on the life timeline change, the probability of each parameter changing, so as to form the fitted curve of parameter change probabilities.
- A robot, characterized by comprising an interaction system for a virtual 3D robot according to any one of claims 9 to 16.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201680001725.XA CN106471444A (zh) | 2016-07-07 | 2016-07-07 | Interaction method and system for a virtual 3D robot, and robot |
PCT/CN2016/089214 WO2018006370A1 (zh) | 2016-07-07 | 2016-07-07 | Interaction method and system for a virtual 3D robot, and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/089214 WO2018006370A1 (zh) | 2016-07-07 | 2016-07-07 | Interaction method and system for a virtual 3D robot, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018006370A1 true WO2018006370A1 (zh) | 2018-01-11 |
Family
ID=58230938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/089214 WO2018006370A1 (zh) | 2016-07-07 | 2016-07-07 | Interaction method and system for a virtual 3D robot, and robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106471444A (zh) |
WO (1) | WO2018006370A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678617A (zh) * | 2017-09-14 | 2018-02-09 | 北京光年无限科技有限公司 | Data interaction method and system for virtual robots |
CN111045582A (zh) * | 2019-11-28 | 2020-04-21 | 深圳市木愚科技有限公司 | Personalized virtual portrait activation and interaction system and method |
CN111063346A (zh) * | 2019-12-12 | 2020-04-24 | 第五维度(天津)智能科技有限公司 | Machine-learning-based cross-media celebrity emotional companionship interaction system |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106471444A (zh) * | 2016-07-07 | 2017-03-01 | 深圳狗尾草智能科技有限公司 | Interaction method and system for a virtual 3D robot, and robot |
CN107632706B (zh) * | 2017-09-08 | 2021-01-22 | 北京光年无限科技有限公司 | Application data processing method and system for a multimodal virtual human |
CN107765852A (zh) * | 2017-10-11 | 2018-03-06 | 北京光年无限科技有限公司 | Virtual-human-based multimodal interaction processing method and system |
CN109202925A (zh) * | 2018-09-03 | 2019-01-15 | 深圳狗尾草智能科技有限公司 | Method, system and device for synchronizing robot motion and speech |
US10606345B1 (en) * | 2018-09-25 | 2020-03-31 | XRSpace CO., LTD. | Reality interactive responding system and reality interactive responding method |
CN114747505A (zh) * | 2022-04-07 | 2022-07-15 | 神马人工智能科技(深圳)有限公司 | Artificial-intelligence-based smart pet training assistant system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11143849A (ja) * | 1997-11-11 | 1999-05-28 | Omron Corp | Behavior generation device, behavior generation method, and behavior generation program recording medium |
US5963663A (en) * | 1996-07-08 | 1999-10-05 | Sony Corporation | Land mark recognition method for mobile robot navigation |
CN1380846A (zh) * | 2000-03-31 | 2002-11-20 | 索尼公司 | Robot device, method of controlling robot device motion, and external force detection device and method |
CN105427865A (zh) * | 2015-11-04 | 2016-03-23 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based voice control system and method for an intelligent robot |
CN105446953A (zh) * | 2015-11-10 | 2016-03-30 | 深圳狗尾草智能科技有限公司 | Interaction system and method between an intelligent robot and virtual 3D |
CN106471444A (zh) * | 2016-07-07 | 2017-03-01 | 深圳狗尾草智能科技有限公司 | Interaction method and system for a virtual 3D robot, and robot |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1392826A (zh) * | 2000-10-05 | 2003-01-22 | 索尼公司 | Robot apparatus and control method therefor |
CN102103707B (zh) * | 2009-12-16 | 2014-06-11 | 群联电子股份有限公司 | Emotion engine, emotion engine system, and control method for an electronic device |
CN104951077A (zh) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based human-computer interaction method, apparatus, and terminal device |
CN105094315B (zh) * | 2015-06-25 | 2018-03-06 | 百度在线网络技术(北京)有限公司 | Method and apparatus for artificial-intelligence-based intelligent human-machine chat |
CN105005614A (zh) * | 2015-07-17 | 2015-10-28 | 深圳狗尾草智能科技有限公司 | Robot couple social system and interaction method therefor |
CN105739688A (zh) * | 2016-01-21 | 2016-07-06 | 北京光年无限科技有限公司 | Emotion-system-based human-computer interaction method, apparatus, and interaction system |
CN105740948B (zh) * | 2016-02-04 | 2019-05-21 | 北京光年无限科技有限公司 | Interaction method and apparatus for intelligent robots |
-
2016
- 2016-07-07 CN CN201680001725.XA patent/CN106471444A/zh active Pending
- 2016-07-07 WO PCT/CN2016/089214 patent/WO2018006370A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5963663A (en) * | 1996-07-08 | 1999-10-05 | Sony Corporation | Land mark recognition method for mobile robot navigation |
JPH11143849A (ja) * | 1997-11-11 | 1999-05-28 | Omron Corp | Behavior generation device, behavior generation method, and behavior generation program recording medium |
CN1380846A (zh) * | 2000-03-31 | 2002-11-20 | 索尼公司 | Robot device, method of controlling robot device motion, and external force detection device and method |
CN105427865A (zh) * | 2015-11-04 | 2016-03-23 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based voice control system and method for an intelligent robot |
CN105446953A (zh) * | 2015-11-10 | 2016-03-30 | 深圳狗尾草智能科技有限公司 | Interaction system and method between an intelligent robot and virtual 3D |
CN106471444A (zh) * | 2016-07-07 | 2017-03-01 | 深圳狗尾草智能科技有限公司 | Interaction method and system for a virtual 3D robot, and robot |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107678617A (zh) * | 2017-09-14 | 2018-02-09 | 北京光年无限科技有限公司 | Data interaction method and system for virtual robots |
CN111045582A (zh) * | 2019-11-28 | 2020-04-21 | 深圳市木愚科技有限公司 | Personalized virtual portrait activation and interaction system and method |
CN111045582B (zh) * | 2019-11-28 | 2023-05-23 | 深圳市木愚科技有限公司 | Personalized virtual portrait activation and interaction system and method |
CN111063346A (zh) * | 2019-12-12 | 2020-04-24 | 第五维度(天津)智能科技有限公司 | Machine-learning-based cross-media celebrity emotional companionship interaction system |
Also Published As
Publication number | Publication date |
---|---|
CN106471444A (zh) | 2017-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018006370A1 (zh) | Interaction method and system for a virtual 3D robot, and robot | |
TWI778477B (zh) | Interaction method and apparatus, electronic device, and storage medium | |
WO2018006369A1 (zh) | Method and system for synchronizing speech and virtual actions, and robot | |
US20220284896A1 | Electronic personal interactive device | |
US9723265B2 | Using an avatar in a videoconferencing system | |
CN107632706B (zh) | Application data processing method and system for a multimodal virtual human | |
JP6448971B2 (ja) | Dialogue device | |
KR20220024557A (ko) | Detection and/or registration of hot commands to trigger responsive actions by an automated assistant | |
JP7408792B2 (ja) | Scene interaction method and apparatus, electronic device, and computer program | |
JP6889281B2 (ja) | Analyzing electronic conversations for presentation in an alternative interface | |
WO2018006374A1 (zh) | Active-wakeup-based function recommendation method and system, and robot | |
WO2018006375A1 (zh) | Interaction method and system for a virtual robot, and robot | |
KR102423712B1 (ko) | Transferring an automated assistant routine between client devices during routine execution | |
WO2018000259A1 (zh) | Method and system for generating robot interaction content, and robot | |
WO2018006371A1 (zh) | Method and system for synchronizing speech and virtual actions, and robot | |
WO2018000268A1 (zh) | Method and system for generating robot interaction content, and robot | |
WO2018000267A1 (zh) | Method and system for generating robot interaction content, and robot | |
WO2018006373A1 (zh) | Method and system for controlling a household appliance based on intent recognition, and robot | |
WO2018006372A1 (zh) | Method and system for controlling a household appliance based on intent recognition, and robot | |
US20230047858A1 | Method, apparatus, electronic device, computer-readable storage medium, and computer program product for video communication | |
CN111869185A (zh) | Generating IoT-based notifications and providing commands causing an automated assistant client of a client device to automatically render them | |
WO2018000266A1 (zh) | Method and system for generating robot interaction content, and robot | |
EP4053792A1 (en) | Information processing device, information processing method, and artificial intelligence model manufacturing method | |
JP2023120130A (ja) | Conversational AI platform using extractive question answering | |
WO2018000258A1 (zh) | Method and system for generating robot interaction content, and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16907875 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16907875 Country of ref document: EP Kind code of ref document: A1 |