CN106326208A - System and method for training robot via voice - Google Patents

System and method for training a robot via voice

Info

Publication number
CN106326208A
CN106326208A (application CN201510383547.9A)
Authority
CN
China
Prior art keywords
statement
robot
conditional statement
voice
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510383547.9A
Other languages
Chinese (zh)
Other versions
CN106326208B (en)
Inventor
蔡明峻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yutou Technology Hangzhou Co Ltd
Original Assignee
Yutou Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yutou Technology Hangzhou Co Ltd
Priority to CN201510383547.9A (patent CN106326208B)
Priority to PCT/CN2016/085911 (publication WO2017000786A1)
Priority to TW105120437A (patent TWI594136B)
Publication of CN106326208A
Priority to HK17105090.9A (publication HK1231592A1)
Application granted
Publication of CN106326208B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/02 — Feature extraction for speech recognition; Selection of recognition unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Toys (AREA)

Abstract

The invention discloses a system and method for training a robot via voice. The system comprises: a receiving unit for receiving a voice signal; an analysis unit, connected to the receiving unit, for analyzing the voice signal, matching it against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, together with an execution statement corresponding to the voice signal; a processing unit, connected to the analysis unit, for combining the conditional statement with the execution statement to generate a target entry; and a storage unit, connected to the processing unit, for storing preset entries according to which the robot is trained. The processing unit carries out a weight calculation according to the target entry and performs corresponding processing according to the result of the weight calculation.

Description

A system and method for training a robot by voice
Technical field
The present invention relates to the field of robotics, and in particular to a system and method for training a robot by voice.
Background technology
At present, methods for training robot behavior are limited to modifying the robot's logic by means of programming: a developer modifies the robot's program logic so that a certain action is performed when a certain condition is met. This kind of training is necessary for low-level robot development, but when it is used for upper-layer logic development it suffers from low development efficiency and a high error rate. It is also unsuitable for ordinary users who have no professional programming skills; if an ordinary user wants to make a few modifications to the robot's behavior, a great deal of time must be spent learning to program.
In summary, the above training method has a narrow range of application, low efficiency, and a high error rate.
Summary of the invention
In view of the above problems with existing methods for training robots, a system and a method are now provided which are intended to allow users without any programming background to train a robot by voice.
The specific technical solution is as follows:
A system for training a robot by voice, comprising:
a receiving unit for receiving a voice signal;
a parsing unit, connected to the receiving unit, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit, connected to the parsing unit, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit, connected to the processing unit, for storing preset entries, the robot being trained according to the preset entries;
wherein the processing unit performs a weight calculation according to the target entry and performs corresponding processing according to the result of the weight calculation.
Preferably, the parsing unit comprises:
a first conversion module for converting the voice signal into text information;
a semantic analysis module, connected to the first conversion module, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight calculation is performed and the robot is made to perform the operation of the previous task.
Preferably, the parsing unit further comprises:
a second conversion module, connected to the semantic analysis module, for converting the execution statement into a corresponding audio signal and outputting it.
Preferably, each preset entry includes a preset conditional statement and a preset execution statement.
Preferably, the processing unit traverses, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit to determine whether the conditional statement duplicates a preset conditional statement; if not, the weight calculation is performed and the target entry is stored in the storage unit to form a new preset entry, and the robot is trained according to the preset entries; if so, the weight calculation is performed and corresponding processing is carried out according to the result of the weight calculation.
A method for training a robot by voice, comprising the following steps:
S1. collecting a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation according to the target entry, and performing corresponding processing according to the result of the weight calculation.
Preferably, step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, obtaining an execution statement corresponding to the text information;
if the conditional statement is a feedback conditional statement, performing a weight calculation and making the robot perform the operation of the previous task.
Preferably, step S2 further includes:
S23. converting the execution statement into a corresponding audio signal, and outputting it.
Preferably, each preset entry includes a preset conditional statement and a preset execution statement.
Preferably, step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit;
S32. obtaining the traversal result, and judging whether the conditional statement duplicates a preset conditional statement;
if the conditional statement does not duplicate a preset conditional statement, performing step S33;
if the conditional statement duplicates a preset conditional statement, performing step S34;
S33. performing the weight calculation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight calculation, and performing corresponding processing according to the result of the weight calculation.
Beneficial effects of the above technical solution:
In this technical solution, in the system for training a robot by voice, the parsing unit parses the voice signal to obtain the corresponding conditional statement and execution statement, and the processing unit combines the conditional statement with the execution statement to generate an entry; the robot is then trained according to the entry, with high efficiency and a low error rate. In the method for training a robot by voice, the user only needs to input a voice signal to train the robot, which is simple to operate, widely applicable, and efficient.
Brief description of the drawings
Fig. 1 is a block diagram of an embodiment of the system for training a robot by voice according to the present invention;
Fig. 2 is a flowchart of an embodiment of the method for training a robot by voice according to the present invention;
Fig. 3 is a flowchart of the method for parsing the voice signal;
Fig. 4 is a flowchart of the method for processing the target entry according to the traversal result.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another.
The present invention is further described below with reference to the accompanying drawings and specific embodiments, which are not intended to limit the present invention.
As shown in Fig. 1, a system for training a robot by voice comprises:
a receiving unit 1 for receiving a voice signal;
a parsing unit 2, connected to the receiving unit 1, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit 3, connected to the parsing unit 2, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit 4, connected to the processing unit 3, for storing preset entries, the robot being trained according to the preset entries;
wherein the processing unit 3 performs a weight calculation according to the target entry and performs corresponding processing according to the result of the weight calculation.
In this embodiment, the system for training a robot by voice can be applied to a children's toy. Although a child has no professional programming skills, the child can communicate with the robot in natural language and train the robot to perform corresponding actions.
In this embodiment, to streamline the development of robot behavior logic, an interaction mode suitable for ordinary users is chosen, so that during training the user concentrates on the training logic itself rather than on a development language, which improves efficiency and reduces the error rate. The parsing unit 2 parses the voice signal to obtain the corresponding conditional statement and execution statement, and the processing unit 3 combines the conditional statement with the execution statement to generate an entry; the robot is then trained according to the entry, with high efficiency and a low error rate.
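For illustration only, the division of labour among the four units can be expressed as a minimal, self-contained Python sketch. The class names, the single hard-coded template and the data structures below are assumptions made for the example, not the patent's implementation.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Entry:
    condition: str          # conditional statement ("part A")
    action: str             # execution statement ("part B")
    weight: float = 1.0

class ParsingUnit:
    """Matches an utterance against a preset statement and splits it into
    a conditional statement and an execution statement."""
    def parse(self, text: str) -> Optional[Entry]:
        lowered = text.lower()
        # Single illustrative preset statement: "if <condition>, then <action>".
        if lowered.startswith("if ") and ", then " in lowered:
            condition, action = lowered[3:].split(", then ", 1)
            return Entry(condition.strip(), action.strip())
        return None         # no preset statement matched

class StorageUnit:
    """Stores the preset entries according to which the robot is trained."""
    def __init__(self) -> None:
        self.entries: List[Entry] = []

class ProcessingUnit:
    """Combines condition and action into a target entry and stores it."""
    def __init__(self, storage: StorageUnit) -> None:
        self.storage = storage
    def handle(self, target: Entry) -> None:
        # A full system would run the weight calculation here (see the later
        # sketches); this sketch simply appends the new target entry.
        self.storage.entries.append(target)

# The receiving unit is represented by text that ASR has already produced.
storage = StorageUnit()
parser, processor = ParsingUnit(), ProcessingUnit(storage)
target = parser.parse("If someone says hello, then wave")
if target is not None:
    processor.handle(target)
print(storage.entries)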
In a preferred embodiment, the parsing unit 2 comprises:
a first conversion module 21 for converting the voice signal into text information;
a semantic analysis module 22, connected to the first conversion module 21, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight calculation is performed and the robot is made to perform the operation of the previous task.
In this embodiment, the clause patterns corresponding to a target entry may be:
Whenever A, do B;
If A, then B;
Next time when A, do B;
This time it should be B;
This is wrong;
It is not right to do it this way.
Among them, "Whenever A", "If A", "Next time when A", and "This time" are normal conditional statements; "This is wrong" and "It is not right to do it this way" are feedback conditional statements.
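As a rough illustration of how such clause templates might be recognized (the regular expressions below are assumptions standing in for the preset statements; the patent does not disclose a concrete matching mechanism), a normal conditional clause yields a part A and a part B, whereas a feedback clause carries no new condition and only triggers the weight calculation:

import re

# Illustrative patterns for the normal conditional clause templates above.
NORMAL_PATTERNS = [
    re.compile(r"^whenever (?P<a>.+?), (?:do )?(?P<b>.+)$"),
    re.compile(r"^if (?P<a>.+?), then (?P<b>.+)$"),
    re.compile(r"^next time when (?P<a>.+?), (?:do )?(?P<b>.+)$"),
    re.compile(r"^this time it should be (?P<b>.+)$"),
]
# Feedback clauses carry only positive or negative feedback.
FEEDBACK_PHRASES = {"this is wrong", "it is not right to do it this way"}

def classify(utterance: str):
    """Return (kind, part_a, part_b) for one user utterance."""
    text = utterance.strip().lower().rstrip(".!")
    if text in FEEDBACK_PHRASES:
        return "feedback", None, None          # triggers the weight calculation
    for pattern in NORMAL_PATTERNS:
        match = pattern.match(text)
        if match:
            parts = match.groupdict()
            return "normal", parts.get("a"), parts["b"]
    return "other", None, None                 # ordinary conversation: leave training mode

print(classify("If someone says hello, then say good afternoon"))
print(classify("This is wrong!"))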
The overall training process of the system is as follows. When a training key clause is recognized, the robot enters training mode. When the user talks to the robot using clauses similar to those above, the semantic analysis module 22 of the parsing unit 2 splits the user's utterance into a part A and a part B; through semantic conversion, part A is converted into a condition development statement and part B into an execution-action development statement. The association between part A and part B is appended to the local training knowledge base (the storage unit 4), and part A and part B are combined to form a new entry. If part A is identical to a condition development statement already in the training knowledge base while part B differs from the corresponding execution-action development statement, there are two knowledge entries with the same condition but different actions, and a weight calculation is required; the weight calculation takes the user's positive and negative feedback into account as well as the time of addition. The new knowledge entry is then appended to the local knowledge base, and the training knowledge base is updated. When ordinary natural-language conversation is recognized, training mode ends: the robot stops training and returns to the polling judgment mode, in which it traverses all entries in the training knowledge base and, when a knowledge entry is hit, executes the execution-action development statement contained in that entry.
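The patent leaves the weight calculation itself open beyond stating that it takes the user's positive and negative feedback and the time of addition into account. The sketch below therefore uses an assumed scoring rule (a feedback count plus a small recency bonus) purely to make concrete how two knowledge entries sharing the same condition could be arbitrated; none of the names or numbers are taken from the patent.

import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class KnowledgeEntry:
    condition: str                      # condition development statement (part A)
    action: str                         # execution-action development statement (part B)
    feedback: int = 0                   # +1 per positive, -1 per negative user feedback
    added_at: float = field(default_factory=time.time)

    def weight(self) -> float:
        # Assumed rule: user feedback dominates, newer entries get a small bonus.
        age_hours = (time.time() - self.added_at) / 3600.0
        return self.feedback + 1.0 / (1.0 + age_hours)

class TrainingKnowledgeBase:
    def __init__(self) -> None:
        self.entries: List[KnowledgeEntry] = []

    def add(self, entry: KnowledgeEntry) -> None:
        self.entries.append(entry)

    def best_action(self, condition: str) -> Optional[str]:
        # Polling judgment mode: among the entries whose condition is hit,
        # the entry with the highest weight supplies the action to execute.
        hits = [e for e in self.entries if e.condition == condition]
        return max(hits, key=lambda e: e.weight()).action if hits else None

kb = TrainingKnowledgeBase()
kb.add(KnowledgeEntry("user says hello", "say hi"))
kb.add(KnowledgeEntry("user says hello", "say XXX, good afternoon"))
kb.entries[1].feedback += 1             # the user praised the newer reply
print(kb.best_action("user says hello"))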
In this embodiment, the first conversion module 21 may use automatic speech recognition (ASR) technology, which converts the vocabulary content of human speech into computer-readable input so that the user can interact with the computer.
The semantic analysis module 22 uses natural language processing (NLP) technology from the field of artificial intelligence to obtain the conditional statement and the execution statement contained in the text information.
In a preferred embodiment, the parsing unit 2 further comprises:
a second conversion module 23, connected to the semantic analysis module 22, for converting the execution statement into a corresponding audio signal and outputting it.
In this embodiment, the second conversion module 23 uses text-to-speech (TTS) technology, which converts text into speech; TTS is part of the human-computer interaction and enables the robot to speak.
In a preferred embodiment, each preset entry includes a preset conditional statement and a preset execution statement.
In a preferred embodiment, the processing unit 3 traverses, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit to determine whether the conditional statement duplicates a preset conditional statement; if not, the weight calculation is performed and the target entry is stored in the storage unit 4 to form a new preset entry, and the robot is trained according to the preset entries; if so, the weight calculation is performed and corresponding processing is carried out according to the result of the weight calculation.
In this embodiment, after a new knowledge entry is appended or an original knowledge entry is updated, a weight calculation is performed whenever the user's positive or negative feedback is received, and the training knowledge base is reorganized and compressed so as to keep the robot efficient when polling and judging conditions.
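One possible reading of this traversal-and-compression step, again as an assumption-laden sketch rather than the patented implementation, is an insert routine that first scans the stored conditions for a duplicate, together with a housekeeping routine that prunes low-weight candidates so that the later condition polling stays fast:

from typing import Dict, List, Tuple

# Each stored condition maps to a list of (action, weight) candidates.
KnowledgeBase = Dict[str, List[Tuple[str, float]]]

def insert_entry(kb: KnowledgeBase, condition: str, action: str, weight: float = 1.0) -> None:
    """Traverse the preset conditional statements; a non-duplicate pair becomes
    a new preset entry, a duplicate only has its weight adjusted."""
    candidates = kb.setdefault(condition, [])
    for i, (stored_action, stored_weight) in enumerate(candidates):
        if stored_action == action:
            candidates[i] = (stored_action, stored_weight + weight)   # duplicate: reinforce
            return
    candidates.append((action, weight))                               # new preset entry

def compress(kb: KnowledgeBase, threshold: float = 0.0) -> None:
    """Drop candidates whose weight has fallen to or below the threshold
    (for example after repeated negative feedback), keeping polling efficient."""
    for condition in list(kb):
        kb[condition] = [c for c in kb[condition] if c[1] > threshold]
        if not kb[condition]:
            del kb[condition]

kb: KnowledgeBase = {}
insert_entry(kb, "user says hello", "say hi")
insert_entry(kb, "user says hello", "say XXX, good afternoon")
insert_entry(kb, "user says hello", "say hi")     # duplicate reinforces the first action
compress(kb)
print(kb)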
As shown in Fig. 2, a method for training a robot by voice comprises the following steps:
S1. collecting a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation according to the target entry, and performing corresponding processing according to the result of the weight calculation.
In this embodiment, the user only needs to input a voice signal to train the robot, which is simple to operate, widely applicable, and efficient.
As shown in Fig. 3, in a preferred embodiment, step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, obtaining an execution statement corresponding to the text information;
if the conditional statement is a feedback conditional statement, performing a weight calculation and making the robot perform the operation of the previous task.
In this embodiment, converting the voice signal into text information may use automatic speech recognition (ASR) technology, which converts the vocabulary content of human speech into computer-readable input so that the user can interact with the computer.
Parsing the text information may use natural language processing (NLP) technology from the field of artificial intelligence to obtain the conditional statement and the execution statement contained in the text information.
In a preferred embodiment, step S2 further includes:
S23. converting the execution statement into a corresponding audio signal, and outputting it.
In this embodiment, text-to-speech (TTS) technology is used to convert the execution statement into the corresponding audio signal; TTS is part of the human-computer interaction and enables the robot to speak.
In a preferred embodiment, each preset entry includes a preset conditional statement and a preset execution statement.
As shown in Fig. 4, in a preferred embodiment, step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit;
S32. obtaining the traversal result, performing the weight calculation, and judging whether the conditional statement duplicates a preset conditional statement;
if the conditional statement does not duplicate a preset conditional statement, performing step S33;
if the conditional statement duplicates a preset conditional statement, performing step S34;
S33. performing the weight calculation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight calculation, and performing corresponding processing according to the result of the weight calculation.
In this embodiment, when the robot hears the user say "hello" in the afternoon, the steps by which the user trains the robot to reply "Good afternoon, XXX" (where XXX is a name) are as follows:
A1. The user says to the robot: "Hello. This time you should say: XXX, good afternoon."
A2. The content spoken by the user is semantically parsed and the execution statement in the spoken content is extracted, namely "say: XXX, good afternoon"; "say" corresponds to the robot's TTS service, "XXX" hits the name of the currently interacting user, "afternoon" hits the current time, and "XXX, good afternoon" is the content for the corresponding TTS service;
A3. A new knowledge base entry is generated from the semantic analysis result and, after the weight judgment, is appended to the local knowledge base;
A4. The robot executes the new knowledge base entry, and the process ends.
After the current interactive training is completed, when the user says "hello" to the robot, the robot will reply "XXX, good afternoon", thus achieving the expected training goal.
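To make the worked example concrete, the toy sketch below (the slot names and the greeting logic are invented for the illustration; the patent fills such slots through its NLP module) binds the user's name and the time of day at execution time, so the same learned entry greets whoever is currently interacting with the robot:

import datetime

def time_of_day(now: datetime.datetime) -> str:
    # Illustrative mapping from the current hour to a greeting period.
    if now.hour < 12:
        return "morning"
    return "afternoon" if now.hour < 18 else "evening"

def speak(text: str) -> None:
    # Stand-in for the robot's TTS service.
    print(f"[TTS] {text}")

# Entry learned from: "Hello. This time you should say: XXX, good afternoon."
# {name} and {period} are slots filled with the current user and the current time.
learned_entry = {"condition": "hello", "action": "{name}, good {period}"}

def respond(utterance: str, user_name: str) -> None:
    if utterance.strip().lower() == learned_entry["condition"]:
        reply = learned_entry["action"].format(
            name=user_name, period=time_of_day(datetime.datetime.now()))
        speak(reply)

respond("Hello", "XXX")    # in the afternoon this prints: [TTS] XXX, good afternoon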
The present invention frees the user's hands when training a robot: without writing any code, the user can modify the robot's behavior, and can focus during training on the training content itself rather than on underlying issues such as how to write code.
The above is only a description of preferred embodiments of the present invention and does not limit the embodiments or the protection scope of the present invention. Those skilled in the art should appreciate that all equivalent solutions obtained by using the description and drawings of the present invention and making obvious changes shall be included within the protection scope of the present invention.

Claims (10)

1. A system for training a robot by voice, characterized in that it comprises:
a receiving unit for receiving a voice signal;
a parsing unit, connected to the receiving unit, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit, connected to the parsing unit, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit, connected to the processing unit, for storing preset entries, the robot being trained according to the preset entries;
wherein the processing unit performs a weight calculation according to the target entry and performs corresponding processing according to the result of the weight calculation.
2. The system for training a robot by voice as claimed in claim 1, characterized in that the parsing unit comprises:
a first conversion module for converting the voice signal into text information;
a semantic analysis module, connected to the first conversion module, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
wherein, if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
and if the conditional statement is a feedback conditional statement, a weight calculation is performed and the robot is made to perform the operation of the previous task.
3. The system for training a robot by voice as claimed in claim 2, characterized in that the parsing unit further comprises:
a second conversion module, connected to the semantic analysis module, for converting the execution statement into a corresponding audio signal and outputting it.
4. The system for training a robot by voice as claimed in claim 1, characterized in that each preset entry includes a preset conditional statement and a preset execution statement.
5. The system for training a robot by voice as claimed in claim 4, characterized in that the processing unit traverses, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit to determine whether the conditional statement duplicates a preset conditional statement; if not, the weight calculation is performed and the target entry is stored in the storage unit to form a new preset entry, and the robot is trained according to the preset entries; if so, the weight calculation is performed and corresponding processing is carried out according to the result of the weight calculation.
6. A method for training a robot by voice, characterized in that it comprises the following steps:
S1. collecting a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation according to the target entry, and performing corresponding processing according to the result of the weight calculation.
7. The method for training a robot by voice as claimed in claim 6, characterized in that step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
wherein, if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
and if the conditional statement is a feedback conditional statement, a weight calculation is performed and the robot is made to perform the operation of the previous task.
8. The method for training a robot by voice as claimed in claim 7, characterized in that step S2 further includes:
S23. converting the execution statement into a corresponding audio signal, and outputting it.
9. The method for training a robot by voice as claimed in claim 6, characterized in that each preset entry includes a preset conditional statement and a preset execution statement.
10. The method for training a robot by voice as claimed in claim 9, characterized in that step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements of all preset entries in the storage unit;
S32. obtaining the traversal result and judging whether the conditional statement duplicates a preset conditional statement;
if the conditional statement does not duplicate a preset conditional statement, performing step S33;
if the conditional statement duplicates a preset conditional statement, performing step S34;
S33. performing the weight calculation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight calculation, and performing corresponding processing according to the result of the weight calculation.
CN201510383547.9A 2015-06-30 2015-06-30 System and method for training a robot by voice Active CN106326208B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201510383547.9A CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot by voice
PCT/CN2016/085911 WO2017000786A1 (en) 2015-06-30 2016-06-15 System and method for training robot via voice
TW105120437A TWI594136B (en) 2015-06-30 2016-06-29 A system and method for training robots through voice
HK17105090.9A HK1231592A1 (en) 2015-06-30 2017-05-19 A system and method for training robots through voice

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510383547.9A CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot by voice

Publications (2)

Publication Number Publication Date
CN106326208A true CN106326208A (en) 2017-01-11
CN106326208B CN106326208B (en) 2019-06-07

Family

ID=57607864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510383547.9A Active CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot by voice

Country Status (4)

Country Link
CN (1) CN106326208B (en)
HK (1) HK1231592A1 (en)
TW (1) TWI594136B (en)
WO (1) WO2017000786A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021110870A1 (en) 2019-12-05 2021-06-10 Acib Gmbh Method for producing a fermentation product


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101060183B1 (en) * 2009-12-11 2011-08-30 한국과학기술연구원 Embedded auditory system and voice signal processing method
US8719023B2 (en) * 2010-05-21 2014-05-06 Sony Computer Entertainment Inc. Robustness to environmental changes of a context dependent speech recognizer
CN103065629A (en) * 2012-11-20 2013-04-24 广东工业大学 Speech recognition system of humanoid robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060100855A1 (en) * 2004-10-27 2006-05-11 Rozen Suan D Disambiguation method for complex sentences
US20060262912A1 (en) * 2005-05-19 2006-11-23 Mci, Inc. Systems and methods for providing voicemail services including caller identification
CN101075435A (en) * 2007-04-19 2007-11-21 深圳先进技术研究院 Intelligent chatting system and its realizing method
CN202736475U (en) * 2011-12-08 2013-02-13 华南理工大学 Chat robot
TW201423587A (en) * 2012-12-04 2014-06-16 Hongfujin Prec Ind Wuhan System and method for providing prompts for callee

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHONGGUO LI ET AL.: "Chatting Robots", 《PROCEEDINGS OF THE 2007 IEEE INTERNATIONAL CONFERENCE ON INTEGRATION TECHNOLOGY》 *
李新德 等 (LI Xinde et al.): "一种面向室内智能机器人导航的路径自然语言处理方法" (A Path Natural Language Processing Method for Indoor Intelligent Robot Navigation), 《自动化学报》 (Acta Automatica Sinica) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108172226A (en) * 2018-01-27 2018-06-15 上海萌王智能科技有限公司 A kind of voice control robot for learning response voice and action

Also Published As

Publication number Publication date
TW201719452A (en) 2017-06-01
CN106326208B (en) 2019-06-07
WO2017000786A1 (en) 2017-01-05
HK1231592A1 (en) 2017-12-22
TWI594136B (en) 2017-08-01

Similar Documents

Publication Publication Date Title
WO2021104102A1 (en) Speech recognition error correction method, related devices, and readable storage medium
CN103745722B (en) Voice interaction smart home system and voice interaction method
CN106104674B (en) Mixing voice identification
CN110415686A (en) Method of speech processing, device, medium, electronic equipment
CN109388700A (en) A kind of intension recognizing method and system
CN107767861A (en) voice awakening method, system and intelligent terminal
CN110517664A (en) Multi-party speech recognition methods, device, equipment and readable storage medium storing program for executing
KR101666930B1 (en) Target speaker adaptive voice conversion method using deep learning model and voice conversion device implementing the same
CN103514879A (en) Local voice recognition method based on BP neural network
CN106205615A (en) A kind of control method based on interactive voice and system
CN105261358A (en) N-gram grammar model constructing method for voice identification and voice identification system
CN109671434A (en) A kind of speech ciphering equipment and self study audio recognition method
CN106557165B (en) The action simulation exchange method and device and smart machine of smart machine
CN111179928A (en) Intelligent control method for power transformation and distribution station based on voice interaction
CN107679225A (en) A kind of reply generation method based on keyword
CN111428867A (en) Model training method and device based on reversible separation convolution and computer equipment
CN108446321A (en) A kind of automatic question-answering method based on deep learning
CN108304424A (en) Text key word extracting method and text key word extraction element
CN106127526A (en) Intelligent robot system and method for work thereof
CN106708950B (en) Data processing method and device for intelligent robot self-learning system
CN111429923A (en) Training method and device of speaker information extraction model and computer equipment
CN109977401A (en) A kind of method for recognizing semantics neural network based
CN106326208A (en) System and method for training robot via voice
CN110516240B (en) Semantic similarity calculation model DSSM (direct sequence spread spectrum) technology based on Transformer
CN104679733B (en) A kind of voice dialogue interpretation method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1231592

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190618

Address after: 100085 Beijing Haidian District Shangdi Information Industry Base Pioneer Road 1 B Block 2 Floor 2037

Patentee after: Beijing or Technology Co., Ltd.

Address before: 310023 Room 101, No. 10, Lianggongdang Road, Xixi Art Collection Village, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee before: Yutou Technology (Hangzhou) Co., Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20191209

Address after: 310000, room 10, No. 101, Gong Hui Road, Xixi art gathering village, Wuchang Street, Yuhang District, Zhejiang, Hangzhou

Patentee after: Yutou Technology (Hangzhou) Co., Ltd.

Address before: 100085 Beijing Haidian District Shangdi Information Industry Base Pioneer Road 1 B Block 2 Floor 2037

Patentee before: Beijing or Technology Co., Ltd.