WO2018000207A1 - Single-intent-based skill package parallel execution management method, system and robot - Google Patents
- Publication number
- WO2018000207A1 (PCT/CN2016/087525)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- skill
- parameter
- user
- package
- parameters
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
Definitions
- The present invention relates to the field of robot interaction technology, and in particular to a single-intent-based skill package parallel execution management method, system and robot.
- Robots are used in more and more situations; for example, elderly people or children who feel lonely can interact with a robot through dialogue, entertainment and so on.
- Robots generally have many functions. These functions are called skill packages in robot systems.
- A skill package corresponds to one function of the robot. For example, singing is one skill package, and playing music is another.
- A robot usually learns which skill package the human wants it to execute by recognizing the human's voice or the like, searches for the corresponding skill package according to the voice, and then executes it.
- The object of the present invention is to provide a skill package parallel execution management method, system and robot with a faster response speed, and to improve the human-computer interaction experience.
- A single-intent-based skill package parallel execution management method, including:
- acquiring a user's voice information and multi-modal parameters;
- matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters;
- obtaining the skill package with the highest weight value according to a fusion sorting algorithm.
- After the step of acquiring the user's voice information and multi-modal parameters, the method further includes: identifying the user's intent according to the acquired information.
- The step of assigning weight values further includes:
- assigning weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
- The step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm further comprises: judging whether the skill packages are mutually exclusive, removing the mutually exclusive packages, and assigning weight values according to the user's historical data.
- After that step, the method further includes: sending the skill package with the highest weight value to an interaction module and executing its function.
- the multimodal parameter includes at least one of an expression parameter, a scene parameter, an image parameter, a video parameter, a face parameter, a pupil iris parameter, a light sensitivity parameter, and a fingerprint parameter.
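As a concrete illustration, the claimed pipeline (acquire voice and multi-modal inputs, match weight values to all skill packages at once, then fusion-sort) might be sketched as below. The skill names, scoring heuristic, and data shapes are all illustrative assumptions; the patent does not specify them:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical skill package registry (not from the patent).
SKILLS = ["play_music", "quiet_mode", "play_cartoon"]

def score(skill, voice, modal):
    # Placeholder weight calculation: count how many recognized tokens
    # from the voice and multi-modal inputs appear in the skill name.
    tokens = set(voice.split()) | set(modal.values())
    return sum(1 for t in tokens if t in skill)

def manage(voice, modal):
    # S102: match weight values to all skill packages simultaneously.
    with ThreadPoolExecutor() as pool:
        weights = dict(zip(SKILLS, pool.map(
            lambda s: score(s, voice, modal), SKILLS)))
    # S103: fusion sorting picks the package with the highest weight.
    return max(weights, key=weights.get)
```

Under this toy scoring, `manage("play some music", {"expression": "happy"})` would select `play_music`.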
- a single-intention-based skill package parallel execution management system including:
- An obtaining module configured to acquire voice information and multi-modal parameters of the user
- a search module configured to simultaneously match weight values to at least two skill packages by calculating according to the voice information and the multi-modal parameters
- the processing module is configured to obtain the skill package with the highest weight value according to the algorithm of the fusion sorting.
- the system further comprises:
- An intent identification module configured to identify a user intent according to the acquired voice information and multimodal parameters
- the matching module is further configured to separately assign weight values to at least two skill packages by calculation according to the identified user intention.
- The processing module is further configured to: judge whether the skill packages are mutually exclusive, remove the mutually exclusive packages, and assign weight values according to the user's historical data.
- The system further comprises:
- a sending and execution module, which sends the skill package with the highest weight value to the interaction module, where the function of the skill package is executed.
- the multimodal parameter includes at least one of an expression parameter, a scene parameter, an image parameter, a video parameter, a face parameter, a pupil iris parameter, a light sensitivity parameter, and a fingerprint parameter.
- a robot comprising a single intention based skill package parallel execution management system as described above.
- the present invention discloses a robot comprising a single intention based skill package parallel execution management system as described above.
- The management method of the present invention includes: acquiring a user's voice information and multi-modal parameters; matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and obtaining the skill package with the highest weight value according to a fusion sorting algorithm.
- The invention is the first to propose managing a robot and its function modules in the form of skill packages. Under this parallel management framework, the robot's processing speed and efficiency can be further improved, and the robot can start its functions faster and more conveniently.
- Moreover, the parallel execution of multiple skill packages used by the management method and system of the present invention makes fuzzy intentions easier to interpret.
- If the selected skill package turns out to be wrong, the system can switch to another package faster, because multiple skill packages execute in parallel.
- The management scheme of the present invention therefore has a higher fault tolerance; overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill toward the robot.
- FIG. 1 is a flowchart of a single-intent-based skill package parallel execution management method according to Embodiment 1 of the present invention;
- FIG. 2 is a schematic diagram of a single-intention-based skill package parallel execution management system according to a second embodiment of the present invention.
- Computer devices include user devices and network devices.
- User devices or clients include but are not limited to computers, smartphones, PDAs, etc.;
- network devices include but are not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
- the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
- the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
- The terms "first," "second," and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
- the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
- a method for parallel execution management of a skill package based on a single intent is disclosed in the embodiment, including:
- S101 Acquire user's voice information 300 and multimodal parameters 400;
- The management method of the present invention includes: S101, acquiring the user's voice information and multi-modal parameters; S102, matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and S103, obtaining the skill package with the highest weight value according to the fusion sorting algorithm. In this way, according to the user's voice information and multi-modal parameters, all skill packages are assigned weight values by the search module's calculation, and the package with the highest weight value is then obtained by fusion sorting, so that all skill packages are assigned weight values at the same time.
- Through this parallel triggering of skill packages, the corresponding package can be found faster, so the system response time is greatly reduced; the parallel mode also lets the product be customized for different users, widens the system's scope of application, and makes it easy to manage different skill packages.
- Parallel use makes resource management more convenient, so that product maintenance such as updating, adjusting, removing, or modifying packages does not affect the user, and the efficiency of product program development improves.
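To make the latency argument concrete: if each skill package's weight calculation takes roughly the same time, triggering all packages at once costs about one scoring interval instead of N of them. A toy sketch with assumed delays and a dummy weight function, purely for demonstration:

```python
import asyncio
import time

async def score(skill):
    await asyncio.sleep(0.1)   # stand-in for a real scoring call
    return skill, len(skill)   # dummy weight: name length

async def parallel_match(skills):
    # All packages are triggered at once, so total latency is close to
    # a single 0.1 s call rather than 0.1 s multiplied by len(skills).
    results = await asyncio.gather(*(score(s) for s in skills))
    return max(results, key=lambda r: r[1])[0]

start = time.perf_counter()
best = asyncio.run(parallel_match(["play_music", "quiet_mode", "play_cartoon"]))
elapsed = time.perf_counter() - start
```

With three packages, a sequential search of the same form would take roughly 0.3 s; the parallel version stays near 0.1 s.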
- The invention is the first to propose managing a robot and its function modules in the form of skill packages. Under this parallel management framework, the robot's processing speed and efficiency can be further improved, and the robot can start its functions faster and more conveniently.
- Moreover, the parallel execution of multiple skill packages used by the management method and system of the present invention makes fuzzy intentions easier to interpret, and when the selected skill package turns out to be wrong, the system can switch to another package faster because multiple packages execute in parallel.
- The management scheme of the present invention therefore has a higher fault tolerance; overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill toward the robot.
- the voice information 300 and the multimodal parameters 400 can be obtained by a voice module and a multimodal module, respectively.
- After the step of acquiring the user's voice information and multi-modal parameters, the method further includes: identifying the user's intent according to the acquired voice information and multi-modal parameters.
- The step of assigning weight values to the at least two skill packages according to the voice information and multi-modal parameters further includes:
- assigning weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
- the single intent in the present invention generally refers to identifying one of the intentions of the user in order to accurately analyze the user's intention. For example, if the user says "I want to listen to music" and the music skill package plays music directly, this is a single intention.
- In this embodiment, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
- The multi-modal parameters can be acquired through a camera, or through other devices such as a light sensor or a fingerprint recognition module.
- Multi-modal parameters in this embodiment generally refer to parameters other than speech.
- In general, the user communicates with the robot by voice; in the present invention, besides the voice information, the user's multi-modal parameters are also acquired, such as one or a combination of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters, so that the user's intent can be identified more accurately and the corresponding skill package found more precisely.
- The step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm further comprises: judging whether the skill packages are mutually exclusive, removing the mutually exclusive packages, and assigning weight values according to the user's historical data.
- For example, suppose the user's voice input is "噼里啪啦" (a crackling onomatopoeia) and the face parameter among the multi-modal parameters shows the user wearing a serious expression.
- The robot's search module then assigns weight values to all skill packages according to the acquired voice information and multi-modal parameters.
- For example, the skill package for playing music is assigned a weight value of 75, the package that puts the robot in quiet mode is assigned 75, and the package for playing a cartoon is assigned 70.
- If the user's historical data shows that the commonly used package is the cartoon package, the system removes the mutually exclusive packages, such as the music package and the quiet-mode package, and selects the cartoon package.
- If instead the commonly used package obtained from the user's historical data is playing music,
- then the mutually exclusive packages, such as the quiet-mode package and the cartoon package, are removed, and the music package is selected as the skill package with the highest weight value.
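The tie-breaking logic in this example, equal top weights, removal of mutually exclusive packages, and a preference drawn from the user's history, might look like the sketch below. The conflict table and the history bonus are assumptions for illustration; the patent does not define them:

```python
# Hypothetical mutual-exclusion map: packages that cannot run
# alongside the given package (not specified by the patent).
CONFLICTS = {
    "play_music": {"quiet_mode"},
    "quiet_mode": {"play_music", "play_cartoon"},
    "play_cartoon": {"quiet_mode"},
}

def fusion_sort(weights, history_favorite, bonus=10):
    # Remove packages mutually exclusive with the historically
    # preferred one, boost the favorite, and take the highest weight.
    scores = {s: w for s, w in weights.items()
              if s not in CONFLICTS.get(history_favorite, set())}
    if history_favorite in scores:
        scores[history_favorite] += bonus
    return max(scores, key=scores.get)

# Weights from the example: music and quiet mode tie at 75.
weights = {"play_music": 75, "quiet_mode": 75, "play_cartoon": 70}
```

With `history_favorite="play_cartoon"` the quiet-mode package is dropped and the cartoon package wins; with `"play_music"` the music package wins, matching the two outcomes described above.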
- After the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm, the method further comprises: sending that skill package to the interaction module and executing its function.
- In this way, once the corresponding skill package is found, it is executed, so that the robot exhibits the function corresponding to that package.
- the present embodiment discloses a skill package parallel execution management system based on a single intent, including:
- the obtaining module 201 is configured to acquire voice information and multi-modal parameters of the user, where the voice Information and multimodal parameters may be obtained by the speech module 301 and the multimodal parameter module 401, respectively;
- the searching module 202 is configured to separately match the weight values to the at least two skill packages by calculating according to the voice information and the multi-modal parameters;
- the processing module 203 is configured to obtain the skill packet with the highest weight value according to the algorithm of the fusion sorting.
- In this way, all skill packages are assigned weight values by the search module's calculation, and the package with the highest weight value is then obtained by fusion sorting, so that all skill packages are assigned weight values at the same time.
- Through the parallel triggering of skill packages, the corresponding package can be found faster, so the system response time is greatly reduced; the parallel mode also lets the product be customized for different users, widens the system's scope of application, and makes it easy to manage different skill packages.
- Parallel use makes resource management more convenient, so that product maintenance such as updating, adjusting, removing, or modifying packages does not affect the user, and the efficiency of product program development improves.
- The invention is the first to propose managing a robot and its function modules in the form of skill packages. Under this parallel management framework, the robot's processing speed and efficiency can be further improved, and the robot can start its functions faster and more conveniently.
- Moreover, the parallel execution of multiple skill packages used by the management method and system of the present invention makes fuzzy intentions easier to interpret, and when the selected skill package turns out to be wrong, the system can switch to another package faster because multiple packages execute in parallel.
- The management scheme of the present invention therefore has a higher fault tolerance; overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill toward the robot.
- system further comprises:
- An intent identification module configured to identify a user intent according to the acquired voice information and multimodal parameters
- the matching module is further configured to separately assign weight values to at least two skill packages by calculation according to the identified user intention.
- In this way, after the user's voice information and multi-modal parameters are acquired, the obtained information is further analyzed to identify the user's intent and obtain the user's true meaning, so that the search module can compute more accurately and assign a weight value to each skill package.
- In this embodiment, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
- The multi-modal parameters can be acquired through a camera, or through other devices such as a light sensor or a fingerprint recognition module.
- Multi-modal parameters in this embodiment generally refer to parameters other than speech.
- In general, the user communicates with the robot by voice; in the present invention, besides the voice information, the user's multi-modal parameters are also acquired, such as one or a combination of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters, so that the user's intent can be identified more accurately and the corresponding skill package found more precisely.
- The processing module is further configured to: judge whether the skill packages are mutually exclusive, remove the mutually exclusive packages, and assign weight values according to the user's historical data to obtain the package with the highest weight value.
- The system further comprises:
- a sending and execution module, which sends the skill package with the highest weight value to the interaction module, where the function of the skill package is executed.
- In this way, once the corresponding skill package is found, it is executed, so that the robot exhibits the corresponding function.
- a robot including a single intention based skill package parallel execution management system according to any of the above.
Abstract
A single-intent-based skill package parallel execution management method, comprising: acquiring a user's voice information and multi-modal parameters (S101); matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters (S102); and obtaining the skill package with the highest weight value according to a fusion sorting algorithm (S103). Executing multiple skill packages in parallel makes fuzzy intentions easier to interpret; when the selected skill package turns out to be wrong, the system can switch to another package faster because multiple packages run in parallel. The method therefore has a higher fault tolerance and, overall, better robustness, improving the efficiency of robot-user interaction and the user's goodwill.
Description
The present invention relates to the field of robot interaction technology, and in particular to a single-intent-based skill package parallel execution management method, system and robot.
Robots are used as interaction tools with humans in more and more situations; for example, elderly people or children who feel lonely can interact with a robot through dialogue, entertainment and so on. A robot generally has many functions, which are called skill packages in the robot system; typically one skill package corresponds to one function of the robot. For example, singing is one skill package and playing music is another. A robot usually learns which skill package a human wants it to execute by recognizing the human's voice or the like, searches for the corresponding skill package according to the voice, and then executes it.
However, skill package management and search in existing robots is inefficient, so the robot responds slowly during interaction; often the robot reacts only several seconds after receiving a command, which greatly reduces the user's comfort and goodwill toward the robot.
Therefore, how to provide a faster-responding skill package parallel execution management method, system and robot, and thereby improve the human-computer interaction experience, has become an urgent technical problem.
Summary of the Invention
The object of the present invention is to provide a skill package parallel execution management method, system and robot with a faster response speed, improving the human-computer interaction experience.
The object of the present invention is achieved through the following technical solutions:
A single-intent-based skill package parallel execution management method, comprising:
acquiring a user's voice information and multi-modal parameters;
matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and
obtaining the skill package with the highest weight value according to a fusion sorting algorithm.
Preferably, after the step of acquiring the user's voice information and multi-modal parameters, the method further comprises:
identifying the user's intent according to the acquired voice information and multi-modal parameters;
and the step of simultaneously assigning weight values to at least two skill packages according to the voice information and multi-modal parameters further comprises:
assigning weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
Preferably, the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm further comprises:
judging whether the skill packages are mutually exclusive, removing the mutually exclusive skill packages, assigning weight values according to the user's historical data, and obtaining the skill package with the highest weight value.
Preferably, after the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm, the method further comprises:
sending the skill package with the highest weight value to an interaction module and executing the function of that skill package.
Preferably, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
A single-intent-based skill package parallel execution management system, comprising:
an acquisition module configured to acquire a user's voice information and multi-modal parameters;
a search module configured to match weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and
a processing module configured to obtain the skill package with the highest weight value according to a fusion sorting algorithm.
Preferably, the system further comprises:
an intent recognition module configured to identify the user's intent according to the acquired voice information and multi-modal parameters;
and the matching module is further configured to assign weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
Preferably, the processing module is further configured to:
judge whether the skill packages are mutually exclusive, remove the mutually exclusive skill packages, assign weight values according to the user's historical data, and obtain the skill package with the highest weight value.
Preferably, the system further comprises:
a sending and execution module, which sends the skill package with the highest weight value to the interaction module and executes the function of that skill package.
Preferably, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
A robot comprising any one of the single-intent-based skill package parallel execution management systems described above.
The present invention discloses a robot comprising any one of the single-intent-based skill package parallel execution management systems described above.
Compared with the prior art, the present invention has the following advantages. Because the management method comprises acquiring a user's voice information and multi-modal parameters, matching weight values to at least two skill packages simultaneously by calculation according to those inputs, and obtaining the skill package with the highest weight value according to a fusion sorting algorithm, the search module can assign weight values to all skill packages at once and the fusion sorting step then selects the package with the highest weight. Through this parallel triggering of skill packages, the corresponding package is found faster, greatly reducing system response time. The parallel mode also lets the product be customized for different users, widens the system's scope of application, and makes it easier to manage different skill packages; resources become easier to manage, so product maintenance such as updating, adjusting, removing or modifying packages does not affect the user, and program development efficiency improves. The invention is the first to propose managing a robot and its function modules in the form of skill packages; under this parallel management framework, the robot's processing speed and efficiency are further improved and functions are started faster and more conveniently. In addition, executing multiple skill packages in parallel makes fuzzy intentions easier to interpret, and when the selected skill package turns out to be wrong the system can switch to another package faster, giving a higher fault tolerance. Overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill.
FIG. 1 is a flowchart of a single-intent-based skill package parallel execution management method according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a single-intent-based skill package parallel execution management system according to Embodiment 2 of the present invention.
Although the flowchart describes the operations as sequential processing, many of the operations can be carried out in parallel, concurrently, or simultaneously. The order of the operations can be rearranged. Processing can be terminated when its operations are completed, but there can also be additional steps not included in the drawings. Processing can correspond to a method, function, procedure, subroutine, subprogram, and so on.
Computer devices include user devices and network devices. User devices or clients include but are not limited to computers, smartphones, PDAs, etc.; network devices include but are not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. A computer device can carry out the invention on its own, or access a network and carry out the invention by interoperating with other computer devices in the network. The network in which the computer device is located includes but is not limited to the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, and so on.
The terms "first", "second" and so on may be used herein to describe various units, but the units should not be limited by these terms; these terms are used only to distinguish one unit from another. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items. When a unit is said to be "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit may be present.
The terminology used herein is for describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used herein are intended to include the plural as well. It should also be understood that the terms "comprises" and/or "comprising" specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
The present invention is further described below with reference to the drawings and preferred embodiments.
Embodiment 1
As shown in FIG. 1, this embodiment discloses a single-intent-based skill package parallel execution management method, comprising:
S101: acquiring a user's voice information 300 and multi-modal parameters 400;
S102: matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters;
S103: obtaining the skill package with the highest weight value according to a fusion sorting algorithm.
Because the management method of the present invention comprises S101, acquiring the user's voice information and multi-modal parameters; S102, matching weight values to at least two skill packages simultaneously by calculation according to those inputs; and S103, obtaining the skill package with the highest weight value according to the fusion sorting algorithm, all skill packages can be assigned weight values by the search module's calculation according to the user's voice information and multi-modal parameters, and the package with the highest weight value is then obtained by fusion sorting, so that all packages are assigned weight values at the same time. Through this parallel triggering of skill packages, the corresponding package is found faster, greatly reducing system response time; the parallel mode also lets the product be customized for different users, widens the system's scope of application, and makes it easy to manage different skill packages. Parallel use makes resource management more convenient, so product maintenance such as updating, adjusting, removing or modifying packages does not affect the user, and program development efficiency improves.
The invention is the first to propose managing a robot and its function modules in the form of skill packages. Under this parallel management framework, the robot's processing speed and efficiency can be further improved, and the robot can start its functions faster and more conveniently. Moreover, the parallel execution of multiple skill packages makes fuzzy intentions easier to interpret, and when the selected skill package turns out to be wrong the system can switch to another package faster because multiple packages execute in parallel, giving a higher fault tolerance. Overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill.
The voice information 300 and multi-modal parameters 400 can be obtained by a voice module and a multi-modal module, respectively.
According to one example, after the step of acquiring the user's voice information and multi-modal parameters, the method further comprises:
identifying the user's intent according to the acquired voice information and multi-modal parameters;
and the step of simultaneously assigning weight values to at least two skill packages according to the voice information and multi-modal parameters further comprises:
assigning weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
In this way, after acquiring the user's voice information and multi-modal parameters, the acquired information is further analyzed to identify the user's intent and obtain the user's true meaning, so that the search module can compute more accurately and assign a weight value to each skill package. "Single intent" in the present invention generally means identifying one particular intent of the user so as to analyze the user's intention precisely. For example, if the user says "I want to listen to music" and the music skill package plays music directly, that is a single intent.
In this embodiment, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters. They can be acquired through a camera, or through other devices such as a light sensor or a fingerprint recognition module. Multi-modal parameters in this embodiment generally refer to parameters other than speech. In general the user communicates with the robot by voice; in the present invention, besides the voice information, the user's multi-modal parameters are also acquired, such as one or a combination of the parameters listed above, so that the user's intent can be identified more accurately, the user's true meaning understood, and the corresponding skill package found more precisely.
According to one example, the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm further comprises:
judging whether the skill packages are mutually exclusive, removing the mutually exclusive packages, and assigning weight values according to the user's historical data to obtain the package with the highest weight value.
For example, suppose the user's voice input is "噼里啪啦" (a crackling onomatopoeia) and the face parameter among the multi-modal parameters shows a serious expression. The robot's search module then assigns weight values to all skill packages according to the acquired voice information and multi-modal parameters: for example, 75 to the package for playing music, 75 to the package that puts the robot in quiet mode, and 70 to the package for playing a cartoon. If the user's historical data shows that the commonly used package is the cartoon package, the system removes the mutually exclusive packages, such as the music package and the quiet-mode package, and selects the cartoon package. Of course, if the historical data shows that the commonly used package is playing music, the mutually exclusive packages, such as the quiet-mode package and the cartoon package, are removed and the music package is selected as the skill package with the highest weight value.
According to one example, after the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm, the method further comprises:
sending the skill package with the highest weight value to the interaction module and executing the function of that skill package.
In this way, once the corresponding skill package is found, it is executed, so that the robot exhibits the function corresponding to that package.
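The sending-and-execution step described above could be sketched as a dispatch table that forwards the winning package to a handler standing in for the interaction module; the handler names and return strings are assumptions for illustration:

```python
# Hypothetical mapping from skill packages to executable handlers.
HANDLERS = {
    "play_music": lambda: "playing music",
    "play_cartoon": lambda: "playing a cartoon",
    "quiet_mode": lambda: "entering quiet mode",
}

def send_and_execute(best_skill):
    # Forward the highest-weight package to the interaction module
    # (modeled here as a handler lookup) and run its function.
    handler = HANDLERS.get(best_skill)
    if handler is None:
        raise KeyError(f"no handler registered for {best_skill!r}")
    return handler()
```

Decoupling selection from execution this way also makes it easy to add, update, or remove skill packages without touching the selection logic, which matches the maintenance benefit claimed above.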
Embodiment 2
As shown in FIG. 2, this embodiment discloses a single-intent-based skill package parallel execution management system, comprising:
an acquisition module 201 configured to acquire a user's voice information and multi-modal parameters, where the voice information and multi-modal parameters can be obtained by a voice module 301 and a multi-modal parameter module 401, respectively;
a search module 202 configured to match weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and
a processing module 203 configured to obtain the skill package with the highest weight value according to a fusion sorting algorithm.
In this way, according to the user's voice information and multi-modal parameters, all skill packages are assigned weight values by the search module's calculation, and the package with the highest weight value is then obtained by fusion sorting, so that all packages are assigned weight values at the same time. Through this parallel triggering of skill packages, the corresponding package is found faster, greatly reducing system response time; the parallel mode also lets the product be customized for different users, widens the system's scope of application, and makes it easy to manage different skill packages. Parallel use makes resource management more convenient, so product maintenance such as updating, adjusting, removing or modifying packages does not affect the user, and program development efficiency improves.
The invention is the first to propose managing a robot and its function modules in the form of skill packages. Under this parallel management framework, the robot's processing speed and efficiency can be further improved, and the robot can start its functions faster and more conveniently. Moreover, the parallel execution of multiple skill packages makes fuzzy intentions easier to interpret, and when the selected skill package turns out to be wrong the system can switch to another package faster because multiple packages execute in parallel, giving a higher fault tolerance. Overall, the management method and system are more robust, improving the efficiency of robot-user interaction and the user's goodwill.
According to one example, the system further comprises:
an intent recognition module configured to identify the user's intent according to the acquired voice information and multi-modal parameters;
and the matching module is further configured to assign weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
In this way, after the user's voice information and multi-modal parameters are acquired, the obtained information is further analyzed to identify the user's intent and obtain the user's true meaning, so that the search module can compute more accurately and assign a weight value to each skill package.
In this embodiment, the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters. They can be acquired through a camera, or through other devices such as a light sensor or a fingerprint recognition module. Multi-modal parameters in this embodiment generally refer to parameters other than speech. In general the user communicates with the robot by voice; in the present invention, besides the voice information, the user's multi-modal parameters are also acquired, such as one or a combination of the parameters listed above, so that the user's intent can be identified more accurately, the user's true meaning understood, and the corresponding skill package found more precisely.
According to one example, the processing module is further configured to:
judge whether the skill packages are mutually exclusive, remove the mutually exclusive packages, and assign weight values according to the user's historical data to obtain the package with the highest weight value.
According to one example, the system further comprises:
a sending and execution module, which sends the skill package with the highest weight value to the interaction module and executes the function of that skill package.
In this way, once the corresponding skill package is found, it is executed, so that the robot exhibits the corresponding function.
In addition, this embodiment also discloses a robot comprising any one of the single-intent-based skill package parallel execution management systems described above.
The foregoing is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the invention should not be considered limited to these descriptions. A person of ordinary skill in the art to which the invention belongs can make a number of simple deductions or substitutions without departing from the concept of the invention, and all of these should be regarded as falling within the protection scope of the invention.
Claims (11)
- A single-intent-based skill package parallel execution management method, characterized by comprising: acquiring a user's voice information and multi-modal parameters; matching weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and obtaining the skill package with the highest weight value according to a fusion sorting algorithm.
- The management method according to claim 1, characterized in that after the step of acquiring the user's voice information and multi-modal parameters, the method further comprises: identifying the user's intent according to the acquired voice information and multi-modal parameters; and the step of simultaneously assigning weight values to at least two skill packages according to the voice information and multi-modal parameters further comprises: assigning weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
- The management method according to claim 2, characterized in that the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm further comprises: judging whether the skill packages are mutually exclusive, removing the mutually exclusive skill packages, assigning weight values according to the user's historical data, and obtaining the skill package with the highest weight value.
- The management method according to claim 1, characterized in that after the step of obtaining the skill package with the highest weight value according to the fusion sorting algorithm, the method further comprises: sending the skill package with the highest weight value to an interaction module and executing the function of that skill package.
- The management method according to claim 1, characterized in that the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
- A single-intent-based skill package parallel execution management system, characterized by comprising: an acquisition module configured to acquire a user's voice information and multi-modal parameters; a search module configured to match weight values to at least two skill packages simultaneously by calculation according to the voice information and multi-modal parameters; and a processing module configured to obtain the skill package with the highest weight value according to a fusion sorting algorithm.
- The management system according to claim 6, characterized in that the system further comprises: an intent recognition module configured to identify the user's intent according to the acquired voice information and multi-modal parameters; and the matching module is further configured to assign weight values to at least two skill packages simultaneously by calculation according to the identified user intent.
- The management system according to claim 7, characterized in that the processing module is further configured to: judge whether the skill packages are mutually exclusive, remove the mutually exclusive skill packages, assign weight values according to the user's historical data, and obtain the skill package with the highest weight value.
- The management system according to claim 6, characterized in that the system further comprises: a sending and execution module, which sends the skill package with the highest weight value to the interaction module and executes the function of that skill package.
- The management system according to claim 6, characterized in that the multi-modal parameters include one or more of expression parameters, scene parameters, image parameters, video parameters, face parameters, pupil/iris parameters, light-sensitivity parameters and fingerprint parameters.
- A robot, characterized by comprising a single-intent-based skill package parallel execution management system according to any one of claims 6 to 10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201680001773.9A CN106663001A (zh) | 2016-06-28 | 2016-06-28 | 基于单意图的技能包并行执行管理方法、系统及机器人 |
PCT/CN2016/087525 WO2018000207A1 (zh) | 2016-06-28 | 2016-06-28 | 基于单意图的技能包并行执行管理方法、系统及机器人 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/087525 WO2018000207A1 (zh) | 2016-06-28 | 2016-06-28 | 基于单意图的技能包并行执行管理方法、系统及机器人 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018000207A1 true WO2018000207A1 (zh) | 2018-01-04 |
Family
ID=58838059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/087525 WO2018000207A1 (zh) | 2016-06-28 | 2016-06-28 | 基于单意图的技能包并行执行管理方法、系统及机器人 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106663001A (zh) |
WO (1) | WO2018000207A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408800A (zh) * | 2018-08-23 | 2019-03-01 | 优视科技(中国)有限公司 | 对话机器人系统及相关技能配置方法 |
CN112099630A (zh) * | 2020-09-11 | 2020-12-18 | 济南大学 | 一种多模态意图逆向主动融合的人机交互方法 |
EP3758898A4 (en) * | 2018-02-28 | 2021-11-24 | Misty Robotics, Inc. | ROBOT SKILL MANAGEMENT |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334353B (zh) * | 2017-08-31 | 2021-04-02 | 科大讯飞股份有限公司 | 技能开发系统及方法 |
CN109658928B (zh) * | 2018-12-06 | 2020-06-23 | 山东大学 | 一种家庭服务机器人云端多模态对话方法、装置及系统 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080010070A1 (en) * | 2006-07-10 | 2008-01-10 | Sanghun Kim | Spoken dialog system for human-computer interaction and response method therefor |
CN101187990A (zh) * | 2007-12-14 | 2008-05-28 | 华南理工大学 | 一种会话机器人系统 |
CN104985599A (zh) * | 2015-07-20 | 2015-10-21 | 百度在线网络技术(北京)有限公司 | 基于人工智能的智能机器人控制方法、系统及智能机器人 |
CN105550105A (zh) * | 2015-12-08 | 2016-05-04 | 成都中科创达软件有限公司 | 一种移动终端中功能相同的应用程序的选择方法及系统 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10256982A (ja) * | 1997-03-12 | 1998-09-25 | Nippon Telegr & Teleph Corp <Ntt> | Phsを用いたユーザー行動支援システム |
EP2924539B1 (en) * | 2014-03-27 | 2019-04-17 | Lg Electronics Inc. | Display device and operating method thereof using gestures |
CN104360808A (zh) * | 2014-12-04 | 2015-02-18 | 李方 | 一种利用符号手势指令进行文档编辑的方法及装置 |
2016
- 2016-06-28 CN CN201680001773.9A patent/CN106663001A/zh active Pending
- 2016-06-28 WO PCT/CN2016/087525 patent/WO2018000207A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080010070A1 (en) * | 2006-07-10 | 2008-01-10 | Sanghun Kim | Spoken dialog system for human-computer interaction and response method therefor |
CN101187990A (zh) * | 2007-12-14 | 2008-05-28 | 华南理工大学 | 一种会话机器人系统 |
CN104985599A (zh) * | 2015-07-20 | 2015-10-21 | 百度在线网络技术(北京)有限公司 | 基于人工智能的智能机器人控制方法、系统及智能机器人 |
CN105550105A (zh) * | 2015-12-08 | 2016-05-04 | 成都中科创达软件有限公司 | 一种移动终端中功能相同的应用程序的选择方法及系统 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3758898A4 (en) * | 2018-02-28 | 2021-11-24 | Misty Robotics, Inc. | ROBOT SKILL MANAGEMENT |
CN109408800A (zh) * | 2018-08-23 | 2019-03-01 | 优视科技(中国)有限公司 | 对话机器人系统及相关技能配置方法 |
CN109408800B (zh) * | 2018-08-23 | 2024-03-01 | 阿里巴巴(中国)有限公司 | 对话机器人系统及相关技能配置方法 |
CN112099630A (zh) * | 2020-09-11 | 2020-12-18 | 济南大学 | 一种多模态意图逆向主动融合的人机交互方法 |
CN112099630B (zh) * | 2020-09-11 | 2024-04-05 | 济南大学 | 一种多模态意图逆向主动融合的人机交互方法 |
Also Published As
Publication number | Publication date |
---|---|
CN106663001A (zh) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018000207A1 (zh) | 基于单意图的技能包并行执行管理方法、系统及机器人 | |
US20230132020A1 (en) | Streaming real-time dialog management | |
WO2017129149A1 (zh) | 基于多模态输入进行交互的方法和设备 | |
CN104992709B (zh) | 一种语音指令的执行方法及语音识别设备 | |
CN105690385B (zh) | 基于智能机器人的应用调用方法与装置 | |
WO2018006374A1 (zh) | 一种基于主动唤醒的功能推荐方法、系统及机器人 | |
WO2018006375A1 (zh) | 一种虚拟机器人的交互方法、系统及机器人 | |
JP6986187B2 (ja) | 人物識別方法、装置、電子デバイス、記憶媒体、及びプログラム | |
WO2015163068A1 (ja) | 情報処理装置、情報処理方法及びコンピュータプログラム | |
RU2016116893A (ru) | Способ диалога между машиной, такой как гуманоидный робот, и собеседником-человеком, компьютерный программный продукт и гуманоидный робот для осуществления такого способа | |
CN108096833B (zh) | 基于级联神经网络的体感游戏控制方法及装置、计算设备 | |
CN111968631A (zh) | 智能设备的交互方法、装置、设备及存储介质 | |
CN109955257A (zh) | 一种机器人的唤醒方法、装置、终端设备和存储介质 | |
WO2023124026A1 (zh) | 机器人控制方法、系统、计算机设备、存储介质及计算机程序产品 | |
WO2019209528A1 (en) | Developer and runtime environments supporting multi-input modalities | |
US20210166685A1 (en) | Speech processing apparatus and speech processing method | |
CN107293295B (zh) | 一种执行自然语言命令所对应的任务的方法、设备和系统 | |
CN117743542A (zh) | 基于人工智能的信息处理方法、装置、电子设备及智能体 | |
WO2018000208A1 (zh) | 一种技能包的搜索与定位方法、系统及机器人 | |
CN112363861A (zh) | 用于地铁购票的语音交互方法及装置 | |
EP4064031A1 (en) | Method and system for tracking in extended reality using voice commmand | |
WO2018006366A1 (zh) | 基于交互信息的评分方法及系统 | |
CN112656309A (zh) | 扫地机的功能执行方法、装置、可读存储介质及电子设备 | |
CN106125911B (zh) | 用于机器的人机交互学习方法及机器 | |
US20240176650A1 (en) | Self-healing bot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16906609; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16906609; Country of ref document: EP; Kind code of ref document: A1 |