CN106023989A - Robot capable of speech interaction - Google Patents
- Publication number
- CN106023989A CN106023989A CN201610328136.4A CN201610328136A CN106023989A CN 106023989 A CN106023989 A CN 106023989A CN 201610328136 A CN201610328136 A CN 201610328136A CN 106023989 A CN106023989 A CN 106023989A
- Authority
- CN
- China
- Prior art keywords
- module
- robot
- user
- verbal instructions
- language
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot capable of speech interaction. The robot comprises a robot body and further comprises a voice recognition module, a filtering and matching module, a customized voice recognition module, an association module, an action-set module, and an execution module, connected in sequence. The voice recognition module receives a user's voice command; the filtering and matching module filters the command and matches it against the commands stored in the customized voice recognition module; the customized voice recognition module stores voice commands defined by the user; the association module manually associates each user-defined voice command with a corresponding action instruction; the action-set module stores the set of action instructions the robot can recognize directly; and the execution module makes the robot body carry out the corresponding action. Because the user can customize, according to personal preferences and habits, the voice that triggers each action, the robot is highly flexible and meets different users' demands for personalized customization.
Description
Technical field
The present invention relates to a robot capable of speech interaction.
Background technology
A robot is a machine that automatically performs work and can assist or replace human labor. A robot capable of speech interaction can receive voice commands and perform the corresponding actions. In existing designs, however, the input voice commands must conform to the phonetic rules defined by the robot before the corresponding action can be carried out; flexibility is therefore low, and the personalized needs of different users cannot be met.
Summary of the invention
To address the above problems, the present invention provides a robot capable of speech interaction with which the user can define, according to personal preferences and habits, the voice that starts each action. The robot is highly flexible and meets different users' demands for personalized customization.
To achieve the above technical purpose and technical effect, the present invention is realized through the following technical solution:
A robot capable of speech interaction comprises a robot body and is characterized in that it further comprises a voice recognition module, a filtering and matching module, a customized voice recognition module, an association module, an action-set module, and an execution module, connected in sequence, wherein:
Voice recognition module: receives the user's voice commands;
Filtering and matching module: filters a voice command and matches it against the voice commands stored in the customized voice recognition module;
Customized voice recognition module: stores the voice commands defined by the user;
Association module: manually associates a user-defined voice command with the corresponding action instruction;
Action-set module: stores the set of action instructions the robot can recognize directly;
Execution module: makes the robot body perform the corresponding action.
Preferably, the robot further comprises an alarm module connected to the filtering and matching module; when the user's voice command cannot be matched to any voice command in the customized voice recognition module, an alarm is issued.
Preferably, the robot further comprises a log module for recording the action log of the execution module.
Preferably, the robot further comprises a pause module for manually pausing the action of the execution module.
The beneficial effects of the invention are as follows: the user can define, according to personal preferences and habits, the voice that starts each action and store it in the customized voice recognition module. Easily recognized voices, such as single digits, can be chosen, which reduces the probability of matching errors. Flexibility is high, and the personalized needs of different users can be met.
Accompanying drawing explanation
Fig. 1 is a structural block diagram of a robot capable of speech interaction according to the present invention.
Detailed description of the invention
The technical solution of the present invention is described in further detail below with reference to the accompanying drawing and specific embodiments, so that those skilled in the art can better understand and practice the present invention; the illustrated embodiments, however, do not limit the invention.
A robot capable of speech interaction comprises a robot body and, as shown in Fig. 1, further comprises a voice recognition module, a filtering and matching module, a customized voice recognition module, an association module, an action-set module, and an execution module, connected in sequence, wherein:
Voice recognition module: receives the user's voice commands;
Filtering and matching module: filters a voice command and matches it against the voice commands stored in the customized voice recognition module;
Customized voice recognition module: stores the voice commands defined by the user;
Association module: manually associates a user-defined voice command with the corresponding action instruction;
Action-set module: stores the set of action instructions the robot can recognize directly, i.e. the actions the robot is able to perform;
Execution module: makes the robot body perform the corresponding action.
Preferably, the robot further comprises an alarm module connected to the filtering and matching module; when the user's voice command cannot be matched to any voice command in the customized voice recognition module, an alarm is issued.
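The patent describes these modules only in functional terms. As a rough illustration, the following Python sketch (all class, method, and action names are hypothetical, not taken from the patent) shows how a user-defined command table, the filtering/matching step, the association with an action set, execution, and the alarm fallback could fit together:

```python
# Minimal sketch, assuming a text transcript of each voice command is
# already available. Names are illustrative, not from the patent.
class SpeechRobot:
    def __init__(self):
        # customized voice recognition module: user-defined phrases
        self.custom_commands = {}  # phrase -> action instruction
        # action-set module: instructions the robot can execute directly
        self.action_set = {"forward", "stop", "turn_left"}

    def associate(self, phrase, action):
        """Association module: manually link a custom phrase to an action."""
        if action not in self.action_set:
            raise ValueError(f"unknown action: {action}")
        self.custom_commands[phrase.strip().lower()] = action

    def handle(self, phrase):
        """Filtering/matching module followed by the execution module."""
        phrase = phrase.strip().lower()  # simple filtering step
        action = self.custom_commands.get(phrase)
        if action is None:
            return "ALARM: no matching voice command"  # alarm module
        return f"executing {action}"                   # execution module

robot = SpeechRobot()
robot.associate("one", "forward")  # e.g. a single-digit trigger phrase
print(robot.handle("one"))  # executing forward
print(robot.handle("two"))  # ALARM: no matching voice command
```

The dictionary lookup mirrors the patent's claim that matching is restricted to commands the user has stored, with the alarm issued only when no stored phrase matches.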
Preferably, the robot further comprises a log module for recording the action log of the execution module.
Preferably, the robot further comprises a pause module for manually pausing the action of the execution module; for example, when the user finds that a matching result is incorrect, or simply wishes to pause an action, the action can be paused manually, or a pause voice command recorded in advance can trigger an automatic pause.
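A hypothetical sketch of this pause behavior (names are illustrative, not from the patent): the module supports both a manual pause and an automatic pause triggered by a pre-recorded pause phrase.

```python
# Illustrative pause module: manual pause, plus automatic pause when a
# pre-recorded pause phrase is heard. Names are hypothetical.
class PauseModule:
    def __init__(self, pause_phrases=("stop", "pause")):
        self.pause_phrases = set(pause_phrases)  # recorded in advance
        self.paused = False

    def manual_pause(self):
        """Manual pause, e.g. when the user sees an incorrect match."""
        self.paused = True

    def on_speech(self, phrase):
        """Automatic pause when a stored pause phrase is matched."""
        if phrase.strip().lower() in self.pause_phrases:
            self.paused = True
        return self.paused

pm = PauseModule()
pm.on_speech("hello")  # not a pause phrase, still running
pm.on_speech("pause")  # matched, execution is now paused
```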
Preferably, the number and percentage of each of the user's voice commands are counted.
Preferably, the number and percentage of executed manual-pause commands are counted.
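The patent does not specify how these statistics are kept; one minimal way to sketch the count-and-percentage bookkeeping (all names hypothetical) is:

```python
from collections import Counter

# Illustrative statistics feature: count each received voice command
# and report per-command percentages.
class CommandStats:
    def __init__(self):
        self.counts = Counter()

    def record(self, phrase):
        self.counts[phrase] += 1

    def percentages(self):
        total = sum(self.counts.values())
        return {p: 100.0 * n / total for p, n in self.counts.items()}

stats = CommandStats()
for p in ["forward", "forward", "stop", "forward"]:
    stats.record(p)
# counts: forward=3, stop=1 -> forward 75%, stop 25%
```

The same structure could track manual-pause commands separately, as the following claim suggests.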
The user can define, according to personal preferences and habits, the voice that starts each action and store it in the customized voice recognition module. Easily recognized voices can be chosen, for example a single digit associated with a frequently used action instruction, which reduces the probability of matching errors. Flexibility is high, and the personalized needs of different users can be met.
The above are only preferred embodiments of the present invention and do not limit its scope of protection. Any equivalent structure or equivalent process transformation made using the description and drawing of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.
Claims (6)
1. A robot capable of speech interaction, comprising a robot body, characterized by further comprising a voice recognition module, a filtering and matching module, a customized voice recognition module, an association module, an action-set module, and an execution module, connected in sequence, wherein:
Voice recognition module: receives the user's voice commands;
Filtering and matching module: filters a voice command and matches it against the voice commands stored in the customized voice recognition module;
Customized voice recognition module: stores the voice commands defined by the user;
Association module: manually associates a user-defined voice command with the corresponding action instruction;
Action-set module: stores the set of action instructions the robot can recognize directly;
Execution module: makes the robot body perform the corresponding action.
2. The robot capable of speech interaction according to claim 1, characterized by further comprising an alarm module connected to the filtering and matching module; when the user's voice command cannot be matched to any voice command in the customized voice recognition module, an alarm is issued.
3. The robot capable of speech interaction according to claim 1, characterized by further comprising a log module for recording the action log of the execution module.
4. The robot capable of speech interaction according to claim 3, characterized by further comprising a pause module for manually pausing the action of the execution module.
5. The robot capable of speech interaction according to claim 3, characterized in that the number and percentage of the user's voice commands are counted.
6. The robot capable of speech interaction according to claim 4, characterized in that the number and percentage of executed manual-pause commands are counted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610328136.4A CN106023989A (en) | 2016-05-18 | 2016-05-18 | Robot capable of speech interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106023989A (en) | 2016-10-12 |
Family
ID=57098319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610328136.4A Pending CN106023989A (en) | 2016-05-18 | 2016-05-18 | Robot capable of speech interaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106023989A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101501762A (en) * | 2005-01-07 | 2009-08-05 | 通用汽车环球科技运作公司 | Voice activated lighting of control interfaces |
CN104091596A (en) * | 2014-01-20 | 2014-10-08 | 腾讯科技(深圳)有限公司 | Music identifying method, system and device |
CN105185377A (en) * | 2015-09-24 | 2015-12-23 | 百度在线网络技术(北京)有限公司 | Voice-based file generation method and device |
CN105490890A (en) * | 2014-09-16 | 2016-04-13 | 中兴通讯股份有限公司 | Intelligent household terminal and control method therefor |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106773817A (en) * | 2016-12-01 | 2017-05-31 | 北京光年无限科技有限公司 | A kind of command analysis method and robot for intelligent robot |
CN106773817B (en) * | 2016-12-01 | 2020-11-17 | 北京光年无限科技有限公司 | Command analysis method for intelligent robot and robot |
WO2021097822A1 (en) * | 2019-11-22 | 2021-05-27 | 苏州铭冠软件科技有限公司 | Robot capable of speech interaction |
CN111405129A (en) * | 2020-03-12 | 2020-07-10 | 中国建设银行股份有限公司 | Intelligent outbound risk monitoring method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108829235B (en) | Voice data processing method and electronic device supporting the same | |
EP3392877B1 (en) | Device for performing task corresponding to user utterance | |
KR102398649B1 (en) | Electronic device for processing user utterance and method for operation thereof | |
EP3396666B1 (en) | Electronic device for providing speech recognition service and method thereof | |
CN110476150B (en) | Method for operating voice recognition service and electronic device supporting the same | |
EP3625667A1 (en) | Optimizing display engagement in action automation | |
KR102007478B1 (en) | Device and method for controlling application using speech recognition under predetermined condition | |
CN106023989A (en) | Robot capable of speech interaction | |
CN105549841A (en) | Voice interaction method, device and equipment | |
WO2018113096A1 (en) | Recipe program code generation method and recipe compilation cloud platform and system | |
CN105096951A (en) | Voice control realizing method and system based on intelligent wearable equipment | |
CN102945120B (en) | A kind of based on the human-computer interaction auxiliary system in children's application and exchange method | |
EP3610479B1 (en) | Electronic apparatus for processing user utterance | |
CN102830915A (en) | Semanteme input control system and method | |
CN104252287A (en) | Interactive device and method for improving expressive ability on basis of same | |
CN106782547B (en) | Robot semantic recognition system based on voice recognition | |
US20240154920A1 (en) | Method and System for Chatbot-Enabled Web Forms and Workflows | |
KR20180116726A (en) | Voice data processing method and electronic device supporting the same | |
CN104007836A (en) | Handwriting input processing method and terminal device | |
CN103902193A (en) | System and method for operating computers to change slides by aid of voice | |
US10908763B2 (en) | Electronic apparatus for processing user utterance and controlling method thereof | |
WO2017000785A1 (en) | System and method for training robot | |
KR102426411B1 (en) | Electronic apparatus for processing user utterance and server | |
TWI594136B (en) | A system and method for training robots through voice | |
CN113782023A (en) | Voice control method and system based on program control instruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161012 |