WO2018006378A1 - Intelligent robot control system and method, and intelligent robot - Google Patents

Intelligent robot control system and method, and intelligent robot

Info

Publication number
WO2018006378A1
Authority
WO
WIPO (PCT)
Prior art keywords
action
current
robot
determining
previous
Prior art date
Application number
PCT/CN2016/089222
Other languages
English (en)
French (fr)
Inventor
杨新宇
王昊奋
邱楠
Original Assignee
深圳狗尾草智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Priority to CN201680001761.6A priority Critical patent/CN106660209B/zh
Priority to PCT/CN2016/089222 priority patent/WO2018006378A1/zh
Publication of WO2018006378A1 publication Critical patent/WO2018006378A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture

Definitions

  • The invention relates to the field of artificial intelligence, and in particular to an intelligent robot control system, an intelligent robot control method, and an intelligent robot.
  • A robot is a machine that simulates human behavior. Robot research has gone through three generations of development:
  • First-generation (program-controlled) robots generally "learn" to work in one of two ways. In the first, the designer writes a program in advance according to the workflow and stores it in the robot's internal memory; the robot then works under program control. The other is the "teach-and-reproduce" method: before the robot performs a task for the first time, a technician guides it through the operation, and the robot records the whole process step by step, expressing each step as an instruction. After teaching, the robot completes the work by executing the instructions in order (reproduction). If the task or environment changes, the program must be rewritten.
  • Such robots can work diligently on machine tools, in furnaces, at welders, and on production lines.
  • Second-generation (adaptive) robots are equipped with sensory sensors (such as visual, auditory, and tactile sensors) that can obtain simple information about the working environment and the objects being manipulated; a computer inside the robot analyzes and processes this information and controls the robot's movement.
  • Although second-generation robots have some rudimentary intelligence, they still require technicians to coordinate their work. Some commercial products are already available.
  • Third-generation (intelligent) robots have intelligence similar to that of humans. Equipped with highly sensitive sensors, they have visual, auditory, olfactory, and tactile abilities exceeding those of ordinary people. They can analyze perceived information, control their own behavior, cope with changes in the environment, and complete the complex and difficult tasks assigned to them. Moreover, they can learn on their own, generalize, summarize, and improve the knowledge they have mastered.
  • To solve these problems, the present invention provides an intelligent robot control system, an intelligent robot control method, and an intelligent robot.
  • In one embodiment, an intelligent robot control system is provided, including: a receiving module, configured to receive a multimodal input instruction from a user; an artificial intelligence processing module, which stores at least the robot's previous action information and determines, at least according to that previous action information, whether to currently execute the action corresponding to the instruction; an action generation module, configured to select and generate a current action from a pre-stored action library according to the determination result; and an output module, configured to output and display the current action.
  • In another embodiment, an intelligent robot control method is provided, comprising the steps of: storing the intelligent robot's previous action information; receiving a multimodal input instruction from a user; determining, at least according to the multimodal input instruction and the previous action information, whether to currently execute the action corresponding to the instruction; selecting and generating a current action from a pre-stored action library according to the determination result; and outputting and displaying the current action.
  • In yet another embodiment, an intelligent robot is provided that includes at least the intelligent robot control system described above.
  • With the intelligent robot control system and method of the present invention, whether the robot is currently suited to executing the action corresponding to an input instruction can be determined from the robot's previous actions, ensuring that the robot's behavior does not change abruptly and improving the user experience.
  • FIG. 1 is a functional block diagram of an intelligent robot control system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a robot control method according to an embodiment of the present invention.
  • an embodiment of the present invention provides an intelligent robot control system 100 , including a receiving module 10 , an artificial intelligence processing module 20 , an action generating module 30 , and an output module 40 .
  • the intelligent robot control system 100 is installed in an intelligent robot.
  • the intelligent robot outputs an action in the manner of a virtual character.
  • the receiving module 10 is configured to receive a multimodal input instruction of a user.
  • The multimodal input instruction may be one or more of: user expression, voice information, gesture information, scene information, image information, video information, face information, pupil/iris information, light-sensing information, and fingerprint information.
  • the artificial intelligence processing module 20 stores at least the previous motion information of the robot, and determines whether to perform an action corresponding to the instruction at least according to the previous motion information.
  • the artificial intelligence processing module 20 includes at least a storage unit 21, a self-cognition unit 22, a first judging unit 23, and a second judging unit 24.
  • The storage unit 21 is configured to store the robot's previous action information. It can be understood that the previous action information may be the information of the most recent action, or the information of multiple previously executed actions.
  • The action information is, for example, information representing various living states such as exercising, eating, sleeping, being sick, and resting. In this embodiment, this information is represented by different codes or encodings.
  • the self-cognition unit 22 is configured to determine the current state of the robot based on the previous motion information.
  • the self-cognition unit 22 includes at least a mutation factor determination sub-unit 221 and a status confirmation sub-unit 222.
  • the mutation factor determining sub-unit 221 is configured to calculate the previous motion information according to a preset probability operation rule, and determine whether there is a mutation factor in the previous action of the smart robot.
  • A mutation factor is an unexpected event, such as spraining a foot while exercising, or the weather suddenly turning bad so that plans cannot be carried out.
  • the status confirmation sub-unit 222 is configured to confirm the mutation factor and determine the current state of the robot according to the mutation factor.
  • the previous motion information may include a fatigue parameter value of the robot, and the self-cognition unit 22 confirms the current state of the robot according to the fatigue parameter value.
  • the action information may also include other types of parameter values, and the present invention is not limited to this embodiment.
  • The first determining unit 23 is configured to determine, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction. If there is no conflict, it decides to execute the action corresponding to the instruction; if there is a conflict, the action corresponding to the input instruction is not executed. For example, the user inputs the instruction "dance for me" by voice. If the self-cognition unit 22 determines that the robot currently has a sprained foot, the first determining unit 23 determines that the robot's current state conflicts with the action corresponding to the input instruction and confirms that the dance cannot be performed.
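The conflict check described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the state names, action names, and the rule table are hypothetical examples standing in for the "preset rule".

```python
# Preset rules (hypothetical): map a current state to the set of
# requested actions it conflicts with.
CONFLICT_RULES = {
    "sprained_foot": {"dance", "run", "play_ball"},
    "sleeping": {"dance", "sing"},
}

def should_execute(current_state: str, requested_action: str) -> bool:
    """First judging unit: return True if the requested action does not
    conflict with the robot's current state."""
    return requested_action not in CONFLICT_RULES.get(current_state, set())

print(should_execute("sprained_foot", "dance"))  # False: conflict, do not execute
print(should_execute("rested", "dance"))         # True: no conflict
```

A state not listed in the rule table conflicts with nothing, so unlisted states default to executing the instruction.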
  • the second determining unit 24 is configured to further determine at least one type of the current action of the robot when the current state conflicts with the action corresponding to the input instruction.
  • the actions of the robot can be divided into different types, such as sports, casual, and the like.
  • the second determining unit 24 includes a time axis determining sub-unit 241 and an action type determining sub-unit 242.
  • The time axis determining sub-unit 241 is configured to determine which range of the life time axis the current time falls in, wherein the life time axis includes a plurality of time ranges, and each time range maps to different action types.
  • The action type determining sub-unit 242 is configured to confirm at least one type for the current action according to the multimodal input instruction, the previous action information, and the range in which the current time falls.
  • For example, when the time axis determining sub-unit 241 determines that the current time is 7:00 a.m., which falls in range A of the life time axis, and range A maps to action types such as eating, exercising, and resting, the action type determining sub-unit 242 determines, given that the robot currently has a sprained foot, that the current action should be eating or resting rather than exercising.
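A minimal sketch of the life time axis lookup, under stated assumptions: the ranges, action types, and state-based exclusions below are hypothetical examples, not values from the patent.

```python
from datetime import time

# Hypothetical life time axis: each range maps to permitted action types.
LIFE_TIME_AXIS = [
    (time(6, 0), time(9, 0), {"eat", "exercise", "rest"}),    # range A: morning
    (time(9, 0), time(18, 0), {"work", "exercise"}),          # range B: daytime
    (time(18, 0), time(23, 0), {"eat", "leisure", "rest"}),   # range C: evening
]

# Hypothetical exclusions: action types ruled out by the current state.
STATE_EXCLUDES = {"sprained_foot": {"exercise"}}

def current_action_types(now: time, state: str) -> set:
    """Find the time range containing `now`, then drop the action types
    that the robot's current state rules out."""
    for start, end, types in LIFE_TIME_AXIS:
        if start <= now < end:
            return types - STATE_EXCLUDES.get(state, set())
    return set()

print(sorted(current_action_types(time(7, 0), "sprained_foot")))  # ['eat', 'rest']
```

At 7:00 a.m. with a sprained foot, "exercise" is removed from range A's types, leaving eating or resting, matching the example above.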
  • the action generation module 30 is configured to select and generate a current action from the pre-stored action library according to the determination result.
  • In this embodiment, the action information includes a plurality of weight values, where a weight value represents the influence of a previous action on the current action. The action generation module 30 includes a weight determining unit 31, configured to determine whether the weight value in the action information of the previous action exceeds a preset value; if so, it confirms that the current action's weight value should be low and selects a low-weight action from the corresponding action type; otherwise, it randomly selects an action from the corresponding action type.
  • For example, playing ball is assigned a high weight value and resting a low weight value. If the action generation module 30 determines that the previous actions have all been exercise, so that the high weight value has persisted beyond the preset value, it decides that the current action should be a low-weight one, namely resting.
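The weight-based selection rule can be sketched as below. The action library, weight values, and threshold are hypothetical; the patent only states the general rule (sustained high weight forces a low-weight action, otherwise pick randomly).

```python
import random

# Hypothetical pre-stored action library: action -> weight value.
ACTION_LIBRARY = {
    "play_ball": 0.9,  # high weight: strenuous
    "exercise": 0.8,
    "rest": 0.1,       # low weight: restful
    "eat": 0.2,
}

PRESET_VALUE = 0.7  # hypothetical preset threshold

def generate_current_action(previous_weights, candidates=ACTION_LIBRARY):
    """Weight judging unit: if the previous actions all kept a weight above
    the preset value, pick the lowest-weight candidate; otherwise pick one
    at random from the corresponding action type."""
    if previous_weights and min(previous_weights) > PRESET_VALUE:
        return min(candidates, key=candidates.get)
    return random.choice(list(candidates))

print(generate_current_action([0.9, 0.8, 0.9]))  # 'rest'
```

With a run of high-weight previous actions (all above 0.7), the lowest-weight action, resting, is forced; otherwise any action in the type may be chosen.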
  • the action generating module 30 may include a determining unit of other parameters, and is not limited to the above weight determining unit.
  • the output module 40 is configured to output the current action and display.
  • In this embodiment, the output module 40 is coupled to a holographic imaging device and displays the current action by holographic imaging. It can be understood that, in other embodiments, the output module 40 may display the current action by other means.
  • When the multimodal input includes audio data, the system further includes a synchronization module 50, configured to time-synchronize the current action with the input audio data, so that the robot's voice and actions are synchronized and the robot appears more anthropomorphic.
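The patent does not specify a synchronization algorithm; one minimal sketch, assuming the action is stored as keyframe timestamps, is to rescale those timestamps onto the audio clip's duration so that motion and sound start and end together.

```python
def synchronize(keyframe_times, audio_duration):
    """Rescale action keyframe timestamps (seconds) to span exactly
    audio_duration, so the action and the audio play in lockstep.
    This is an illustrative approach, not the patent's method."""
    if not keyframe_times:
        return []
    span = keyframe_times[-1] - keyframe_times[0]
    if span == 0:
        return [0.0] * len(keyframe_times)  # single instant: pin to start
    t0 = keyframe_times[0]
    return [(t - t0) * audio_duration / span for t in keyframe_times]

# A 2-second action stretched to match 4 seconds of audio.
print(synchronize([0.0, 1.0, 2.0], 4.0))  # [0.0, 2.0, 4.0]
```

Uniform rescaling preserves the relative timing of the motion while matching the audio's total length; a real system might instead warp only pauses or align phoneme-level events.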
  • The intelligent robot control system 100 of the present invention can determine, based on the robot's previous actions, whether the robot is currently suited to executing the action corresponding to an input instruction, and can ensure that the robot's behavior does not change abruptly, thereby improving the user experience.
  • the present invention further provides an intelligent robot control method, including:
  • Step S301: Store the intelligent robot's previous action information.
  • Step S302: Receive a multimodal input instruction from the user.
  • Step S303: Determine, at least according to the previous action information, whether to currently execute the action corresponding to the instruction.
  • step S303 includes the following sub-steps:
  • Step S303a: Determine the robot's current state from the previous action information. Specifically, the previous action information is processed according to a preset probability operation rule to determine whether a mutation factor exists in the intelligent robot's previous actions; if so, the mutation factor is confirmed, and the current state of the robot is determined based on the mutation factor.
  • Step S303b: Determine, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction. If there is no conflict, step S303c is executed and the action corresponding to the instruction is performed; if the current state conflicts with the action corresponding to the input instruction, step S303d is executed to further determine at least one type for the robot's current action.
  • the step of determining at least one type of current action of the robot includes the sub-steps:
  • S303d1: Determine which range of the life time axis the current time falls in, wherein the life time axis includes multiple time ranges, and each time range maps to different action types; and
  • S303d2: Confirm at least one type for the current action according to the current state and the range in which the current time falls.
  • Step S304: Select and generate a current action from the pre-stored action library according to the determination result.
  • In this embodiment, the action information includes a plurality of weight values, where a weight value represents the influence of a previous action on the current action. The step of selecting and generating the current action from the pre-stored action library specifically includes: determining whether the weight value in the action information of the previous action exceeds a preset value; if so, confirming that the current action's weight value is low and selecting a low-weight action from the corresponding action type; otherwise, randomly selecting an action from the corresponding action type.
  • Step S305: Output and display the current action.
  • In this embodiment, the method further includes step S306: performing time synchronization processing on the current action and the input audio data.
  • With the intelligent robot control method of the present invention, it is possible to determine, based on the robot's previous actions, whether the robot is currently suited to executing the action corresponding to the input instruction, and to ensure that the robot's behavior does not change abruptly, thereby improving the user experience.
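Steps S301 to S305 can be tied together in one end-to-end sketch. Everything here is a hypothetical illustration (class name, state names, conflict table, fallback action), not the patent's implementation; the audio synchronization of S306 is omitted.

```python
class RobotController:
    """Minimal sketch of the control loop: store previous actions (S301),
    receive an instruction (S302), judge conflicts (S303), pick an
    action (S304), and output it (S305)."""

    # Hypothetical preset rule: states and the actions they conflict with.
    CONFLICTS = {"sprained_foot": {"dance", "run"}}

    def __init__(self):
        self.previous_actions = []  # S301: stored previous action info

    def handle(self, instruction, current_state):
        # S302: receive the instruction (text stands in for multimodal input).
        requested = instruction
        # S303: judge whether the current state conflicts with the request.
        if requested in self.CONFLICTS.get(current_state, set()):
            current = "rest"  # S303d: fall back to a permitted action type
        else:
            current = requested  # S303c: execute the requested action
        # S304/S305: record the chosen action and output it.
        self.previous_actions.append(current)
        return current

bot = RobotController()
print(bot.handle("dance", "sprained_foot"))  # 'rest'
print(bot.handle("dance", "healthy"))        # 'dance'
```

The controller keeps its own action history, so later calls could feed `previous_actions` back into the state judgment, as the self-cognition unit does in the system embodiment.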

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

An intelligent robot control system includes a receiving module (10), an artificial intelligence processing module (20), an action generation module, and an output module. The artificial intelligence processing module (20) stores the robot's previous action information and determines from it whether to currently execute the action corresponding to an instruction. The intelligent robot can ensure that the robot's actions do not change abruptly.

Description

Intelligent robot control system and method, and intelligent robot
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to an intelligent robot control system, an intelligent robot control method, and an intelligent robot.
Background Art
A robot is a machine that can simulate human behavior. Research on robots has gone through three generations of development:
First-generation (program-controlled) robots: these robots generally "learn" to work in one of two ways. In the first, the designer writes a program in advance according to the workflow and stores it in the robot's internal memory; the robot then works under program control. The other is the "teach-and-reproduce" method: before the robot performs a task for the first time, a technician guides it through the operation, and the robot records the whole process step by step, expressing each step as an instruction. After teaching, the robot completes the work by executing the instructions in order (reproduction). If the task or environment changes, the program must be redesigned. Such robots can work diligently on machine tools, in furnaces, at welders, and on production lines. Most of the robots commercialized and put into practical use to date belong to this category. Their greatest drawback is that they can only complete work rigidly according to the program; even a slight change in the environment (such as a workpiece being slightly tilted) causes problems or even danger.
Second-generation (adaptive) robots: these robots are equipped with sensory sensors (such as visual, auditory, and tactile sensors) that can obtain simple information about the working environment and the objects being manipulated; a computer inside the robot analyzes and processes this information and controls the robot's actions. Although second-generation robots have some rudimentary intelligence, they still need technicians to coordinate their work. Some commercialized products are already available.
Third-generation (intelligent) robots: an intelligent robot has intelligence similar to that of a human. It is equipped with highly sensitive sensors and therefore has visual, auditory, olfactory, and tactile abilities exceeding those of an ordinary person. It can analyze perceived information, control its own behavior, cope with changes in the environment, and complete the various complex and difficult tasks assigned to it. Moreover, it has the ability to learn on its own, to generalize and summarize, and to improve the knowledge it has mastered.
However, most of the intelligent robots developed to date possess only partial intelligence. Making intelligent robots more anthropomorphic is therefore one direction of development for the robot industry.
Summary of the Invention
To solve the above problems, the present invention provides an intelligent robot control system, an intelligent robot control method, and an intelligent robot.
In one embodiment, an intelligent robot control system is provided, including: a receiving module, configured to receive a multimodal input instruction from a user; an artificial intelligence processing module, which stores at least the robot's previous action information and is configured to determine, at least according to the previous action information, whether to currently execute the action corresponding to the instruction; an action generation module, configured to select and generate a current action from a pre-stored action library according to the determination result; and an output module, configured to output and display the current action.
In another embodiment, an intelligent robot control method is provided, comprising the steps of: storing the intelligent robot's previous action information; receiving a multimodal input instruction from a user; determining, at least according to the multimodal input instruction and the previous action information, whether to currently execute the action corresponding to the instruction; selecting and generating a current action from a pre-stored action library according to the determination result; and outputting and displaying the current action.
In yet another embodiment, an intelligent robot is provided that includes at least the intelligent robot control system described above.
With the intelligent robot control system and method of the present invention, whether the robot is currently suited to executing the action corresponding to an input instruction can be determined from the robot's previous actions, ensuring that the robot's behavior does not change abruptly and improving the user experience.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a functional block diagram of an intelligent robot control system according to an embodiment of the present invention.
FIG. 2 is a flowchart of a robot control method according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described below in further detail with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides an intelligent robot control system 100, including a receiving module 10, an artificial intelligence processing module 20, an action generation module 30, and an output module 40. In this embodiment, the intelligent robot control system 100 is installed in an intelligent robot, and the intelligent robot outputs actions in the form of a virtual character.
The receiving module 10 is configured to receive a multimodal input instruction from a user. In this embodiment, the multimodal input instruction may be one or more of: user expression, voice information, gesture information, scene information, image information, video information, face information, pupil/iris information, light-sensing information, and fingerprint information.
The artificial intelligence processing module 20 stores at least the robot's previous action information and is configured to determine, at least according to the previous action information, whether to currently execute the action corresponding to the instruction. In this embodiment, the artificial intelligence processing module 20 includes at least a storage unit 21, a self-cognition unit 22, a first judging unit 23, and a second judging unit 24.
The storage unit 21 is configured to store the robot's previous action information. It can be understood that the previous action information may be the information of the most recent action, or the information of multiple previously executed actions. The action information is, for example, information representing various living states such as exercising, eating, sleeping, being sick, and resting. In this embodiment, this information is represented by different codes or encodings.
The self-cognition unit 22 is configured to determine the robot's current state from the previous action information. In this embodiment, the self-cognition unit 22 includes at least a mutation-factor judging subunit 221 and a state confirmation subunit 222. The mutation-factor judging subunit 221 is configured to process the previous action information according to a preset probability operation rule and determine whether a mutation factor exists in the intelligent robot's previous actions. A mutation factor is an unexpected event, such as spraining a foot while exercising, or the weather suddenly turning bad so that plans cannot be carried out. The state confirmation subunit 222 is configured to confirm the mutation factor and determine the robot's current state from it. In another embodiment, the previous action information may contain a fatigue parameter value of the robot, and the self-cognition unit 22 confirms the robot's current state from the fatigue parameter value. It can be understood that in other embodiments the action information may also contain other types of parameter values, and the present invention is not limited to this embodiment.
The first judging unit 23 is configured to determine, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction; if there is no conflict, it decides to execute the action corresponding to the instruction, and if there is a conflict, the action corresponding to the input instruction is not executed. For example, the user inputs the instruction "dance for me" by voice. If the self-cognition unit 22 determines that the robot currently has a sprained foot, the first judging unit 23 determines that the robot's current state conflicts with the action corresponding to the input instruction and confirms that the dance cannot be performed.
The second judging unit 24 is configured to further determine at least one type for the robot's current action when the current state conflicts with the action corresponding to the input instruction. In this embodiment, the robot's actions can be divided into different types, such as sports, leisure, and so on.
Further, in this embodiment, the second judging unit 24 includes a time-axis judging subunit 241 and an action-type judging subunit 242. The time-axis judging subunit 241 is configured to determine which range of the life time axis the current time falls in, where the life time axis includes multiple time ranges and each time range maps to different action types. The action-type judging subunit 242 is configured to confirm at least one type for the current action according to the multimodal input instruction, the previous action information, and the range in which the current time falls. For example, when the time-axis judging subunit 241 determines that the current time is 7:00 a.m., which falls in range A of the life time axis, and range A maps to action types such as eating, exercising, and resting, the action-type judging subunit 242 determines, given that the robot currently has a sprained foot, that the robot's current action should be eating or resting rather than exercising.
The action generation module 30 is configured to select and generate a current action from a pre-stored action library according to the determination result. In this embodiment, the action information includes a plurality of weight values, where a weight value represents the influence of a previous action on the current action. The action generation module 30 includes a weight judging unit 31, configured to determine whether the weight value in the action information of the previous action exceeds a preset value; if so, it confirms that the current action's weight value should be low and selects a low-weight action from the corresponding action type; otherwise, it randomly selects an action from the corresponding action type. For example, playing ball is assigned a high weight value and resting a low weight value; if the action generation module 30 determines that the previous actions have all been exercise and the high weight value has persisted beyond the preset value, it decides that the current action should be a low-weight one, namely resting. It can be understood that in other embodiments the action generation module 30 may include judging units for other parameters and is not limited to the above weight judging unit.
The output module 40 is configured to output and display the current action. In this embodiment, the output module 40 is connected to a holographic imaging device and displays the current action by holographic imaging. It can be understood that in other embodiments the output module 40 may also display the current action by other means.
In this embodiment, when the multimodal input includes audio data, the system further includes a synchronization module 50 configured to time-synchronize the current action with the input audio data, so that the robot's voice and actions are synchronized and the robot is more anthropomorphic.
With the intelligent robot control system 100 of the present invention, whether the robot is currently suited to executing the action corresponding to an input instruction can be determined from the robot's previous actions, ensuring that the robot's behavior does not change abruptly and improving the user experience.
Referring to FIG. 2, the present invention further provides an intelligent robot control method, including:
Step S301: store the intelligent robot's previous action information.
Step S302: receive a multimodal input instruction from a user.
Step S303: determine, at least according to the previous action information, whether to currently execute the action corresponding to the instruction. In this embodiment, step S303 includes the following sub-steps:
S303a: determine the robot's current state from the previous action information. Specifically, in this embodiment, the previous action information is processed according to a preset probability operation rule to determine whether a mutation factor exists in the intelligent robot's previous actions; if so, the mutation factor is confirmed and the robot's current state is determined from it.
S303b: determine, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction. If there is no conflict, step S303c is executed and the action corresponding to the instruction is performed; if the current state conflicts with the action corresponding to the input instruction, step S303d is executed to further determine at least one type for the robot's current action.
More specifically, the step of determining at least one type for the robot's current action includes the sub-steps:
S303d1: determine which range of the life time axis the current time falls in, where the life time axis includes multiple time ranges and each time range maps to different action types; and
S303d2: confirm at least one type for the current action according to the current state and the range in which the current time falls.
Step S304: select and generate a current action from the pre-stored action library according to the determination result. In this embodiment, the action information includes a plurality of weight values, where a weight value represents the influence of a previous action on the current action. The step of selecting and generating the current action from the pre-stored action library specifically includes: determining whether the weight value in the action information of the previous action exceeds a preset value; if so, confirming that the current action's weight value is low and selecting a low-weight action from the corresponding action type; otherwise, randomly selecting an action from the corresponding action type.
Step S305: output and display the current action.
In this embodiment, the method further includes step S306: time-synchronize the current action with the input audio data.
With the intelligent robot control method of the present invention, whether the robot is currently suited to executing the action corresponding to an input instruction can be determined from the robot's previous actions, ensuring that the robot's behavior does not change abruptly and improving the user experience.
It should be noted that, as is clear from the above description of the embodiments, the present invention can be implemented by means of software plus a necessary hardware platform, or of course entirely in hardware. Based on this understanding, all or part of the contribution of the technical solution of the present invention over the background art can be embodied in the form of a software product. The computer software product can be stored on a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention or in certain parts of the embodiments.
What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot be taken as limiting the scope of the rights of the present invention; equivalent changes made in accordance with the claims of the present invention therefore remain within the scope covered by the present invention.

Claims (14)

  1. An intelligent robot control system, comprising:
    a receiving module, configured to receive a multimodal input instruction from a user;
    an artificial intelligence processing module, which stores at least the robot's previous action information and is configured to determine, at least according to the previous action information, whether to currently execute the action corresponding to the instruction;
    an action generation module, configured to select and generate a current action from a pre-stored action library according to the determination result; and
    an output module, configured to output and display the current action.
  2. The system according to claim 1, wherein the artificial intelligence processing module comprises at least:
    a storage unit, configured to store the robot's previous action information;
    a self-cognition unit, configured to determine the robot's current state from the previous action information; and a first judging unit, configured to determine, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction; if there is no conflict, the action corresponding to the instruction is executed, and if there is a conflict, the action corresponding to the input instruction is not executed.
  3. The system according to claim 2, wherein the artificial intelligence processing module further comprises:
    a second judging unit, configured to further determine at least one type for the robot's current action when the current state conflicts with the action corresponding to the input instruction.
  4. The system according to claim 3, wherein the second judging unit comprises:
    a time-axis judging subunit, configured to determine which range of the life time axis the current time falls in, wherein the life time axis includes multiple time ranges and each time range maps to different action types; and
    an action-type judging subunit, configured to confirm at least one type for the current action according to the current state and the range of the life time axis in which the current time falls.
  5. The system according to claim 2, wherein the self-cognition unit comprises at least:
    a mutation-factor judging subunit, configured to process the previous action information according to a preset probability operation rule and determine whether a mutation factor exists in the intelligent robot's previous actions; and
    a state confirmation subunit, configured to confirm the mutation factor and determine the robot's current state from it.
  6. The system according to claim 1, wherein the action information includes a plurality of weight values, a weight value represents the influence of a previous action on the current action, and the action generation module comprises:
    a weight judging unit, configured to determine whether the weight value in the action information of the previous action exceeds a preset value; if so, the current action's weight value is confirmed as low and a low-weight action is selected from the corresponding action type; otherwise, an action is randomly selected from the corresponding action type.
  7. The system according to claim 1, wherein the multimodal input includes audio data, and the system further comprises: a synchronization module, configured to time-synchronize the current action with the input audio data.
  8. An intelligent robot control method, comprising the steps of:
    storing the intelligent robot's previous action information;
    receiving a multimodal input instruction from a user;
    determining, at least according to the multimodal input instruction and the previous action information, whether to currently execute the action corresponding to the instruction;
    selecting and generating a current action from a pre-stored action library according to the determination result; and
    outputting and displaying the current action.
  9. The method according to claim 8, wherein the step of determining whether to currently execute the action corresponding to the instruction further comprises:
    determining the robot's current state from the previous action information; and
    determining, according to a preset rule, whether the current state conflicts with the action corresponding to the input instruction; if there is no conflict, executing the action corresponding to the instruction, and if the current state conflicts with the action corresponding to the input instruction, further determining at least one type for the robot's current action.
  10. The method according to claim 9, wherein the step of determining at least one type for the robot's current action comprises:
    determining which range of the life time axis the current time falls in, wherein the life time axis includes multiple time ranges and each time range maps to different action types; and
    confirming at least one type for the current action according to the current state and the range in which the current time falls.
  11. The method according to claim 9, wherein the step of determining the robot's current state from the previous action information comprises:
    processing the previous action information according to a preset probability operation rule and determining whether a mutation factor exists in the intelligent robot's previous actions; if so, confirming the mutation factor and determining the robot's current state from it.
  12. The method according to claim 8, wherein the action information includes a plurality of weight values, a weight value represents the influence of a previous action on the current action, and the step of selecting and generating the current action from the pre-stored action library comprises:
    determining whether the weight value in the action information of the previous action exceeds a preset value; if so, confirming that the current action's weight value is low and selecting a low-weight action from the corresponding action type; otherwise, randomly selecting an action from the corresponding action type.
  13. The method according to claim 8, further comprising: time-synchronizing the current action with the input audio data.
  14. An intelligent robot, comprising at least the intelligent robot control system according to any one of claims 1 to 7.
PCT/CN2016/089222 2016-07-07 2016-07-07 Intelligent robot control system and method, and intelligent robot WO2018006378A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001761.6A CN106660209B (zh) 2016-07-07 2016-07-07 Intelligent robot control system and method, and intelligent robot
PCT/CN2016/089222 WO2018006378A1 (zh) 2016-07-07 2016-07-07 Intelligent robot control system and method, and intelligent robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/089222 WO2018006378A1 (zh) 2016-07-07 2016-07-07 Intelligent robot control system and method, and intelligent robot

Publications (1)

Publication Number Publication Date
WO2018006378A1 true WO2018006378A1 (zh) 2018-01-11

Family

ID=58838969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089222 WO2018006378A1 (zh) 2016-07-07 2016-07-07 Intelligent robot control system and method, and intelligent robot

Country Status (2)

Country Link
CN (1) CN106660209B (zh)
WO (1) WO2018006378A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019029061A1 * 2017-08-11 2019-02-14 深圳市得道健康管理有限公司 Artificial intelligence device and system, and behavior control method therefor
CN107496110A (zh) * 2017-08-14 2017-12-22 北京迪克希玛有限责任公司 Home nursing bed and nursing control method
CN108133259A (zh) * 2017-12-14 2018-06-08 深圳狗尾草智能科技有限公司 System and method for interaction between an artificial virtual life and the outside world
CN107992935A (zh) * 2017-12-14 2018-05-04 深圳狗尾草智能科技有限公司 Method, device, and medium for setting a life cycle for a robot
CN110764723A (zh) * 2018-07-27 2020-02-07 苏州狗尾草智能科技有限公司 Vehicle-mounted holographic display method and system
CN109159126A (zh) * 2018-10-11 2019-01-08 上海思依暄机器人科技股份有限公司 Robot behavior control method, control system, and robot
CN109670416B (zh) * 2018-12-03 2023-04-28 深圳市越疆科技有限公司 Learning method, learning system, and storage medium based on prior posture judgment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1518489A * 2002-03-15 2004-08-04 索尼公司 Behavior control system and behavior control method for a robot, and robot apparatus
US20080215183A1 * 2007-03-01 2008-09-04 Ying-Tsai Chen Interactive Entertainment Robot and Method of Controlling the Same
CN101362334A * 2008-09-25 2009-02-11 塔米智能科技(北京)有限公司 Intelligent robot and operation method thereof
CN105426436A * 2015-11-05 2016-03-23 百度在线网络技术(北京)有限公司 Information providing method and device based on an artificial intelligence robot


Also Published As

Publication number Publication date
CN106660209B (zh) 2019-11-22
CN106660209A (zh) 2017-05-10

Similar Documents

Publication Publication Date Title
WO2018006378A1 (zh) 2018-01-11 Intelligent robot control system and method, and intelligent robot
AU2019384515B2 (en) Adapting a virtual reality experience for a user based on a mood improvement score
Ciardo et al. Attribution of intentional agency towards robots reduces one’s own sense of agency
CN109789550B (zh) 基于小说或表演中的先前角色描绘的社交机器人的控制
US20180330549A1 (en) Editing interactive motion capture data for creating the interaction characteristics of non player characters
WO2019097676A1 (ja) 3次元空間監視装置、3次元空間監視方法、及び3次元空間監視プログラム
WO2019204777A1 (en) Surgical simulator providing labeled data
WO2015158881A1 (en) Methods and systems for managing dialogs of a robot
EP2933066A1 (en) Activity monitoring of a robot
CN106030457A (zh) 在过程期间跟踪对象
JP2022553617A (ja) 途絶の間のアプリケーションへの自動ユーザ入力の提供
Rodríguez et al. Training of procedural tasks through the use of virtual reality and direct aids
WO2018000267A1 (zh) 一种机器人交互内容的生成方法、系统及机器人
EP3637228A2 (en) Real-time motion feedback for extended reality
JP2016087402A (ja) ユーザーとの相互作用が可能な玩具およびその玩具のユーザーとの相互作用方法
EP3872607A1 (en) Systems and methods for automated control of human inhabited characters
JP5927797B2 (ja) ロボット制御装置、ロボットシステム、ロボット装置の行動制御方法、及びプログラム
US20140288704A1 (en) System and Method for Controlling Behavior of a Robotic Character
JP7414735B2 (ja) 複数のロボットエフェクターを制御するための方法
JP2016053606A (ja) 対話型問診訓練システム、対話型処理装置及びそのプログラム
Higgins et al. Head pose as a proxy for gaze in virtual reality
JPWO2020075368A1 (ja) 情報処理装置、情報処理方法及びプログラム
US20240171782A1 (en) Live streaming method and system based on virtual image
US20200372717A1 (en) Extended reality based positive affect implementation for product development
US20240221270A1 (en) Computer-implemented method for controlling a virtual avatar

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16907883

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16907883

Country of ref document: EP

Kind code of ref document: A1