CN106326208B - System and method for training a robot via voice - Google Patents

System and method for training a robot via voice

Info

Publication number
CN106326208B
CN106326208B CN201510383547.9A CN201510383547A
Authority
CN
China
Prior art keywords
sentence
robot
conditional statement
default
entry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510383547.9A
Other languages
Chinese (zh)
Other versions
CN106326208A (en)
Inventor
蔡明峻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yutou Technology Hangzhou Co Ltd
Original Assignee
Yutou Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yutou Technology Hangzhou Co Ltd filed Critical Yutou Technology Hangzhou Co Ltd
Priority to CN201510383547.9A priority Critical patent/CN106326208B/en
Priority to PCT/CN2016/085911 priority patent/WO2017000786A1/en
Priority to TW105120437A priority patent/TWI594136B/en
Publication of CN106326208A publication Critical patent/CN106326208A/en
Priority to HK17105090.9A priority patent/HK1231592A1/en
Application granted granted Critical
Publication of CN106326208B publication Critical patent/CN106326208B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Toys (AREA)

Abstract

The invention discloses a system and method for training a robot via voice. The system comprises: a receiving unit for receiving a voice signal; a parsing unit, connected to the receiving unit, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal; a processing unit, connected to the parsing unit, for combining the conditional statement with the execution statement to generate a target entry; and a storage unit, connected to the processing unit, for storing preset entries, the robot being trained according to the preset entries. The processing unit performs a weight calculation based on the target entry and carries out corresponding processing according to the result of the weight calculation.

Description

System and method for training a robot via voice
Technical field
The present invention relates to the field of robotics, and in particular to a system and method for training a robot via voice.
Background art
At present, methods for training robot behavior are limited to modifying the robot's logic by means of programming: a developer changes the robot's programmed logic so that a certain action is executed when a certain condition is met. This approach is necessary for low-level robot development, but for upper-layer logic development it suffers from low development efficiency and a high error rate. It is also unsuitable for ordinary users who lack professional programming skills; if an ordinary user wants to make even a few modifications to the robot's behavior, a substantial amount of time must first be spent learning to program.
In summary, the above training method has a narrow application range, low efficiency and a high error rate.
Summary of the invention
In view of the above problems with existing methods for training robots, a system and method are now provided that allow users without a programming background to train a robot via voice.
The specific technical solution is as follows:
A system for training a robot via voice, comprising:
a receiving unit, for receiving a voice signal;
a parsing unit, connected to the receiving unit, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit, connected to the parsing unit, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit, connected to the processing unit, for storing preset entries, the robot being trained according to the preset entries;
wherein the processing unit performs a weight calculation based on the target entry and carries out corresponding processing according to the result of the weight calculation.
Preferably, the parsing unit includes:
a first conversion module, for converting the voice signal into text information;
a semantic analysis module, connected to the first conversion module, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight operation is performed and the robot executes the operation of the previous task.
Preferably, the parsing unit further includes:
a second conversion module, connected to the semantic analysis module, for converting the execution statement into a corresponding audio signal and outputting it.
Preferably, each preset entry includes a preset conditional statement and a preset execution statement.
Preferably, the processing unit traverses, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit, to determine whether the conditional statement duplicates any preset conditional statement; if it does not, the weight operation is performed and the target entry is stored in the storage unit to form a new preset entry, and the robot is trained according to the preset entries; if it does, the weight operation is performed and corresponding processing is carried out according to the result of the weight calculation.
A method for training a robot via voice, comprising the following steps:
S1. acquiring a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation based on the target entry, and carrying out corresponding processing according to the result of the weight calculation.
Preferably, step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight operation is performed and the robot executes the operation of the previous task.
Preferably, step S2 further includes:
S23. converting the execution statement into a corresponding audio signal and outputting it.
Preferably, each preset entry includes a preset conditional statement and a preset execution statement.
Preferably, step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit;
S32. obtaining the traversal result and judging whether the conditional statement duplicates any preset conditional statement:
if the conditional statement does not duplicate any preset conditional statement, executing step S33;
if the conditional statement duplicates a preset conditional statement, executing step S34;
S33. performing the weight operation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight operation, and carrying out corresponding processing according to the result of the weight calculation.
The beneficial effects of the above technical solution are as follows:
In this technical solution, in the system for training a robot via voice, the parsing unit parses the voice signal to obtain the corresponding conditional statement and execution statement, the processing unit combines the conditional statement with the execution statement to generate an entry, and the robot is trained according to the entry; efficiency is high and the error rate is low. In the method for training a robot via voice, the user only needs to input a voice signal to train the robot, so the method is easy to operate, widely applicable and efficient.
Brief description of the drawings
Fig. 1 is a block diagram of an embodiment of the system for training a robot via voice according to the present invention;
Fig. 2 is a flowchart of an embodiment of the method for training a robot via voice according to the present invention;
Fig. 3 is a flowchart of the method for parsing the voice signal;
Fig. 4 is a flowchart of the method for processing the target entry according to the traversal result.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another.
The present invention is further explained below with reference to the drawings and specific embodiments, which are not to be taken as limiting the invention.
As shown in Fig. 1, a system for training a robot via voice comprises:
a receiving unit 1, for receiving a voice signal;
a parsing unit 2, connected to the receiving unit 1, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit 3, connected to the parsing unit 2, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit 4, connected to the processing unit 3, for storing preset entries, the robot being trained according to the preset entries;
the processing unit 3 performs a weight calculation based on the target entry and carries out corresponding processing according to the result of the weight calculation.
In this embodiment, the system for training a robot via voice can be applied to a children's toy: although children do not have professional programming skills, they can communicate with the robot in natural language and train it to perform corresponding actions.
In this embodiment, the development of the robot's behavior logic is optimized by choosing an interaction mode suitable for ordinary users, so that the user focuses on the training logic itself rather than on a development language, which improves efficiency and reduces the error rate. The parsing unit 2 parses the voice signal to obtain the corresponding conditional statement and execution statement, the processing unit 3 combines them to generate an entry, and the robot is trained according to the entry; efficiency is high and the error rate is low.
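To make the data flow between the four units concrete, the following is a minimal Python sketch of the pipeline described above. All class names, method names and the naive condition/action splitter are illustrative assumptions; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One knowledge entry: a conditional statement paired with an execution statement."""
    condition: str      # conditional statement ("part A")
    action: str         # execution statement ("part B")
    weight: float = 1.0

class StorageUnit:
    """Stores the preset entries (the local training knowledge base)."""
    def __init__(self) -> None:
        self.entries: list[Entry] = []

class ParsingUnit:
    """Turns a voice signal into a (condition, action) pair."""
    def parse(self, voice_signal: bytes) -> tuple[str, str]:
        text = self.speech_to_text(voice_signal)     # first conversion module (ASR)
        return self.split_condition_action(text)     # semantic analysis module (NLP)

    def speech_to_text(self, voice_signal: bytes) -> str:
        raise NotImplementedError("plug an ASR engine in here")

    @staticmethod
    def split_condition_action(text: str) -> tuple[str, str]:
        # Naive placeholder: "when <A>, <B>" -> ("<A>", "<B>")
        head, _, tail = text.partition(",")
        return head.removeprefix("when ").strip(), tail.strip()

class ProcessingUnit:
    """Combines condition and action into a target entry and stores it."""
    def __init__(self, storage: StorageUnit) -> None:
        self.storage = storage

    def train(self, condition: str, action: str) -> Entry:
        target = Entry(condition, action)
        # Duplicate checking and the weight operation are shown in later sketches.
        self.storage.entries.append(target)
        return target
```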
In a preferred embodiment, the parsing unit 2 includes:
a first conversion module 21, for converting the voice signal into text information;
a semantic analysis module 22, connected to the first conversion module 21, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight operation is performed and the robot executes the operation of the previous task.
In this embodiment, the clauses corresponding to a target entry may take forms such as:
as soon as A, do B;
if A, then B;
do not do B again when A;
at this time, you should do B;
this is wrong;
it is not right to do it this way.
Here, "as soon as A", "if A", "do not ... again when A" and "at this time" are normal conditional statements; "this is wrong" and "it is not right to do it this way" are feedback conditional statements.
The overall training process of the system for training a robot via voice is as follows: when a training key clause is recognized, the robot enters training mode. The user can then talk to the robot using clauses similar to those above; the semantic analysis module 22 of the parsing unit 2 splits the user's utterance into a part A and a part B and, through semantic conversion, converts part A into a condition development statement and part B into an execution-action development statement. The association between part A and part B is appended to the local training knowledge base (storage unit 4), and part A and part B are combined to form a new entry. If part A is identical to a condition development statement already in the training knowledge base but part B differs from the corresponding execution-action development statement, then two knowledge entries share the same condition but execute different actions, and a weight operation is required; the weight operation takes into account the user's positive and negative feedback and the time at which the entry was appended. The new knowledge entry is appended to the local knowledge base and the training knowledge base is updated. When ordinary natural conversation is recognized, training mode ends: the robot stops training and reverts to a polling/judgment mode, traversing all entries in the training knowledge base, and when a knowledge entry is hit it executes the execution-action development statement contained in that entry.
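One simple way to recognize such clauses and split an utterance into part A and part B is pattern matching on the clause templates. The sketch below uses illustrative English regular expressions as stand-ins for the original (Chinese) templates; the actual template set, and whether regular expressions are used at all, is an assumption rather than something the patent specifies.

```python
import re

# Illustrative English equivalents of the clause templates above.
NORMAL_PATTERNS = [
    re.compile(r"^as soon as (?P<a>.+?),\s*(?P<b>.+)$", re.I),    # "as soon as A, B"
    re.compile(r"^if (?P<a>.+?),\s*then (?P<b>.+)$", re.I),       # "if A, then B"
    re.compile(r"^at this time,?\s*you should (?P<b>.+)$", re.I), # "at this time, do B"
]
FEEDBACK_PATTERNS = [
    re.compile(r"^(this is wrong|it is not right to do it this way)\.?$", re.I),
]

def classify(utterance: str):
    """Return ('normal', part_a, part_b), ('feedback', None, None), or None
    for ordinary conversation (which ends training mode)."""
    text = utterance.strip()
    for pattern in NORMAL_PATTERNS:
        match = pattern.match(text)
        if match:
            groups = match.groupdict()
            return "normal", groups.get("a"), groups["b"]
    for pattern in FEEDBACK_PATTERNS:
        if pattern.match(text):
            return "feedback", None, None
    return None

print(classify("If someone says hello, then say good afternoon"))
# -> ('normal', 'someone says hello', 'say good afternoon')
print(classify("This is wrong"))
# -> ('feedback', None, None)
```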
In this embodiment, the first conversion module 21 may use automatic speech recognition (ASR) technology, which converts the vocabulary content of human speech into computer-readable content that can be input to a computer, allowing interaction with the computer.
The semantic analysis module 22 uses natural language processing (NLP) technology from the field of artificial intelligence; the conditional statement and the execution statement in the text information are obtained through NLP.
In a preferred embodiment, the parsing unit 2 further includes:
a second conversion module 23, connected to the semantic analysis module 22, for converting the execution statement into a corresponding audio signal and outputting it.
In this embodiment, the second conversion module 23 uses text-to-speech (TTS) technology, which converts text into speech; TTS is part of the interaction and enables the robot to speak.
In a preferred embodiment, each preset entry includes a preset conditional statement and a preset execution statement.
In a preferred embodiment, the processing unit 3 traverses, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit, to determine whether the conditional statement duplicates any preset conditional statement; if it does not, the weight operation is performed and the target entry is stored in the storage unit 4 to form a new preset entry, and the robot is trained according to the preset entries; if it does, the weight operation is performed and corresponding processing is carried out according to the result of the weight calculation.
In this embodiment, after a new knowledge entry is appended or an existing knowledge entry is updated, the weight operation is performed whenever positive or negative feedback is received from the user, and the entire training knowledge base is sorted and compressed so as to keep the robot efficient when polling conditions.
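The patent gives no formula for the weight operation, so the sketch below is one hypothetical scheme: positive or negative feedback nudges an entry's weight up or down, and low-weight or surplus entries are pruned so the polling loop over the knowledge base stays fast. The step size, threshold and cap are arbitrary illustrative values.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Entry:
    condition: str
    action: str
    weight: float = 1.0
    added_at: float = field(default_factory=time.time)  # append time, considered by the weight operation

def apply_feedback(entry: Entry, positive: bool, step: float = 0.5) -> None:
    """Positive feedback raises the entry's weight, negative feedback lowers it."""
    entry.weight += step if positive else -step

def prune(knowledge_base: list[Entry], min_weight: float = 0.0,
          max_entries: int = 1000) -> list[Entry]:
    """Drop entries whose weight has fallen too low and cap the size of the
    knowledge base, preferring higher-weight and more recently added entries."""
    kept = [e for e in knowledge_base if e.weight > min_weight]
    kept.sort(key=lambda e: (e.weight, e.added_at), reverse=True)
    return kept[:max_entries]
```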
As shown in Fig. 2, a method for training a robot via voice comprises the following steps:
S1. acquiring a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation based on the target entry, and carrying out corresponding processing according to the result of the weight calculation.
In this embodiment, the user only needs to input a voice signal to train the robot; the method is easy to operate, widely applicable and efficient.
As shown in Fig. 3, in a preferred embodiment, step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight operation is performed and the robot executes the operation of the previous task.
In this embodiment, the voice signal may be converted into text information using automatic speech recognition (ASR) technology, which converts the vocabulary content of human speech into computer-readable input and allows interaction with the computer.
The text information may be parsed using natural language processing (NLP) technology from the field of artificial intelligence; the conditional statement and the execution statement in the text information are obtained through NLP.
In a preferred embodiment, step S2 further includes:
S23. converting the execution statement into a corresponding audio signal and outputting it.
In this embodiment, the execution statement is converted into the corresponding audio signal using text-to-speech (TTS) technology; TTS is part of the interaction and enables the robot to speak.
In a preferred embodiment, each preset entry includes a preset conditional statement and a preset execution statement.
As shown in Fig. 4, in a preferred embodiment, step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit;
S32. obtaining the traversal result, performing the weight calculation, and judging whether the conditional statement duplicates any preset conditional statement:
if the conditional statement does not duplicate any preset conditional statement, executing step S33;
if the conditional statement duplicates a preset conditional statement, executing step S34;
S33. performing the weight operation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight operation, and carrying out corresponding processing according to the result of the weight calculation.
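Read as code, steps S31 to S34 amount to the decision procedure sketched below. Treating the "corresponding processing" for a duplicated condition as letting the higher-weight action win is an assumption; the patent leaves the exact handling open.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    condition: str
    action: str
    weight: float = 1.0

def integrate(target: Entry, knowledge_base: list[Entry]) -> None:
    """S31-S34: traverse the preset entries, check for a duplicate condition,
    then perform the weight operation and store or update accordingly."""
    for preset in knowledge_base:                 # S31: traverse preset conditional statements
        if preset.condition == target.condition:  # S32: duplicate condition found
            # S34: weight operation; assume the higher-weight action replaces the other.
            if target.weight >= preset.weight:
                preset.action = target.action
                preset.weight = target.weight
            return
    # S33: no duplicate -> weight operation, then append as a new preset entry.
    knowledge_base.append(target)
```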
In this embodiment, when the robot hears the user say "hello" in the afternoon and the user wants to train it to reply "XXX (name), good afternoon", the training steps are as follows:
A1. the user says to the robot: "Hello" and "At this time you should say: XXX, good afternoon";
A2. the content spoken by the user is semantically parsed; the execution statement isolated from the speech content is "say XXX, good afternoon", where "say" corresponds to the robot's TTS service, "XXX" hits the name of the user currently interacting, "afternoon" hits the current time, and "XXX, good afternoon" is the content of the corresponding TTS service;
A3. a new knowledge-base entry is generated from the semantic parsing result and, after the weight is judged, appended to the local knowledge base;
A4. the robot executes the new knowledge-base entry, and the process ends.
After this interactive training is completed, when the user says "hello" to the robot, the robot will answer "XXX, good afternoon", thereby achieving the expected training goal.
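Putting the pieces together, the greeting example above might run roughly as follows. The polling loop, the name and time slots, and the speak() stand-in for the TTS service are all illustrative assumptions, not details taken from the patent.

```python
import datetime

knowledge_base: list[dict] = []

def speak(text: str) -> None:
    """Stand-in for the robot's TTS service."""
    print(f"[TTS] {text}")

def train(condition: str, action_template: str) -> None:
    """A1-A3: append the parsed condition/action pair as a new knowledge entry."""
    knowledge_base.append({"condition": condition, "action": action_template})

def poll(event: str, user_name: str) -> None:
    """Polling/judgment mode: execute the action of the first entry whose condition is hit."""
    for entry in knowledge_base:
        if entry["condition"] in event:
            hour = datetime.datetime.now().hour
            part_of_day = "afternoon" if 12 <= hour < 18 else "morning"
            speak(entry["action"].format(name=user_name, time=part_of_day))
            return

# The user teaches the robot the new reply...
train("hello", "{name}, good {time}")
# ...and the next "hello" triggers it (prints "[TTS] XXX, good afternoon" when run in the afternoon).
poll("user says hello", user_name="XXX")
```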
The present invention frees the user's hands when training a robot: without writing any code, the user can modify the robot's behavior, and during training the user can focus on the training content itself rather than on low-level issues such as how to write code.
The above are only preferred embodiments of the present invention and are not intended to limit its embodiments or protection scope. Those skilled in the art will appreciate that all schemes obtained by equivalent substitution or obvious variation on the basis of the description and drawings of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A system for training a robot via voice, characterized by comprising:
a receiving unit, for receiving a voice signal;
a parsing unit, connected to the receiving unit, for parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
a processing unit, connected to the parsing unit, for combining the conditional statement with the execution statement to generate a target entry;
a storage unit, connected to the processing unit, for storing preset entries, the robot being trained according to the preset entries;
wherein the processing unit performs a weight calculation based on the target entry and carries out corresponding processing according to the result of the weight calculation.
2. The system for training a robot via voice according to claim 1, characterized in that the parsing unit includes:
a first conversion module, for converting the voice signal into text information;
a semantic analysis module, connected to the first conversion module, for parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained;
if the conditional statement is a feedback conditional statement, a weight operation is performed and the robot executes the operation of the previous task.
3. The system for training a robot via voice according to claim 2, characterized in that the parsing unit further includes:
a second conversion module, connected to the semantic analysis module, for converting the execution statement into a corresponding audio signal and outputting it.
4. The system for training a robot via voice according to claim 1, characterized in that each preset entry includes a preset conditional statement and a preset execution statement.
5. The system for training a robot via voice according to claim 4, characterized in that the processing unit traverses, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit, to determine whether the conditional statement duplicates any preset conditional statement; if it does not, the weight calculation is performed and the target entry is stored in the storage unit to form a new preset entry, and the robot is trained according to the preset entries; if it does, the weight calculation is performed and corresponding processing is carried out according to the result of the weight calculation.
6. A method for training a robot via voice, characterized by comprising the following steps:
S1. acquiring a voice signal;
S2. parsing the voice signal, matching the voice signal against preset statements, and obtaining a conditional statement that matches a preset statement and corresponds to the voice signal, as well as an execution statement corresponding to the voice signal;
S3. combining the conditional statement with the execution statement to generate a target entry;
S4. performing a weight calculation based on the target entry, and carrying out corresponding processing according to the result of the weight calculation.
7. The method for training a robot via voice according to claim 6, characterized in that step S2 specifically includes:
S21. converting the voice signal into text information;
S22. parsing the text information, matching the text information against the preset statements, obtaining a conditional statement that matches a preset statement and corresponds to the text information, and identifying whether the conditional statement is a normal conditional statement or a feedback conditional statement;
if the conditional statement is a normal conditional statement, an execution statement corresponding to the text information is obtained.
8. The method for training a robot via voice according to claim 7, characterized in that step S2 further includes:
S23. converting the execution statement into a corresponding audio signal and outputting it.
9. The method for training a robot via voice according to claim 6, characterized in that a storage unit is used to store preset entries, the robot being trained according to the preset entries;
each preset entry includes a preset conditional statement and a preset execution statement.
10. The method for training a robot via voice according to claim 9, characterized in that step S3 specifically includes:
S31. traversing, according to the conditional statement in the target entry, the preset conditional statements in all the preset entries in the storage unit;
S32. obtaining the traversal result and judging whether the conditional statement duplicates any preset conditional statement:
if the conditional statement does not duplicate any preset conditional statement, executing step S33;
if the conditional statement duplicates a preset conditional statement, executing step S34;
S33. performing the weight calculation, storing the target entry in the storage unit to form a new preset entry, and training the robot according to the preset entries;
S34. performing the weight calculation, and carrying out corresponding processing according to the result of the weight calculation.
CN201510383547.9A 2015-06-30 2015-06-30 System and method for training a robot via voice Active CN106326208B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201510383547.9A CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot via voice
PCT/CN2016/085911 WO2017000786A1 (en) 2015-06-30 2016-06-15 System and method for training robot via voice
TW105120437A TWI594136B (en) 2015-06-30 2016-06-29 A system and method for training robots through voice
HK17105090.9A HK1231592A1 (en) 2015-06-30 2017-05-19 A system and method for training robots through voice

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510383547.9A CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot via voice

Publications (2)

Publication Number Publication Date
CN106326208A CN106326208A (en) 2017-01-11
CN106326208B true CN106326208B (en) 2019-06-07

Family

ID=57607864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510383547.9A Active CN106326208B (en) 2015-06-30 2015-06-30 System and method for training a robot via voice

Country Status (4)

Country Link
CN (1) CN106326208B (en)
HK (1) HK1231592A1 (en)
TW (1) TWI594136B (en)
WO (1) WO2017000786A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108172226A (en) * 2018-01-27 2018-06-15 上海萌王智能科技有限公司 A kind of voice control robot for learning response voice and action
WO2021110870A1 (en) 2019-12-05 2021-06-10 Acib Gmbh Method for producing a fermentation product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075435A (en) * 2007-04-19 2007-11-21 深圳先进技术研究院 Intelligent chatting system and its realizing method
CN202736475U (en) * 2011-12-08 2013-02-13 华南理工大学 Chat robot
TW201423587A (en) * 2012-12-04 2014-06-16 Hongfujin Prec Ind Wuhan System and method for providing prompts for callee

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060100855A1 (en) * 2004-10-27 2006-05-11 Rozen Suan D Disambiguation method for complex sentences
US7646857B2 (en) * 2005-05-19 2010-01-12 Verizon Business Global Llc Systems and methods for providing voicemail services including caller identification
KR101060183B1 (en) * 2009-12-11 2011-08-30 한국과학기술연구원 Embedded auditory system and voice signal processing method
US8719023B2 (en) * 2010-05-21 2014-05-06 Sony Computer Entertainment Inc. Robustness to environmental changes of a context dependent speech recognizer
CN103065629A (en) * 2012-11-20 2013-04-24 广东工业大学 Speech recognition system of humanoid robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075435A (en) * 2007-04-19 2007-11-21 深圳先进技术研究院 Intelligent chatting system and its realizing method
CN202736475U (en) * 2011-12-08 2013-02-13 华南理工大学 Chat robot
TW201423587A (en) * 2012-12-04 2014-06-16 Hongfujin Prec Ind Wuhan System and method for providing prompts for callee

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chongguo Li et al., "Chatting Robots", Proceedings of the 2007 IEEE International Conference on Integration Technology, 2007-03-20, pp. 679-684
Li Xinde et al., "A natural language processing method for path descriptions oriented to indoor intelligent robot navigation" (in Chinese), Acta Automatica Sinica, 2014-02-28, Vol. 40, No. 2, pp. 289-305

Also Published As

Publication number Publication date
TW201719452A (en) 2017-06-01
WO2017000786A1 (en) 2017-01-05
HK1231592A1 (en) 2017-12-22
TWI594136B (en) 2017-08-01
CN106326208A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN103745722B (en) Voice interaction smart home system and voice interaction method
CN105531758B (en) Use the speech recognition of foreign words grammer
CN104036774A (en) Method and system for recognizing Tibetan dialects
CN106294854A (en) A kind of man-machine interaction method for intelligent robot and device
CN107767861A (en) voice awakening method, system and intelligent terminal
CN105631468A (en) RNN-based automatic picture description generation method
CN107369439A (en) A kind of voice awakening method and device
CN107291701B (en) Machine language generation method and device
CN106803422A (en) A kind of language model re-evaluation method based on memory network in short-term long
CN109460459A (en) A kind of conversational system automatic optimization method based on log study
CN109671434A (en) A kind of speech ciphering equipment and self study audio recognition method
CN104751227A (en) Method and system for constructing deep neural network
CN103546790A (en) Language interaction method and language interaction system on basis of mobile terminal and interactive television
CN115495568B (en) Training method and device for dialogue model, dialogue response method and device
CN106205622A (en) Information processing method and electronic equipment
CN105788596A (en) Speech recognition television control method and system
CN106782502A (en) A kind of speech recognition equipment of children robot
CN106326208B (en) A kind of system and method that robot is trained by voice
CN110021293A (en) Audio recognition method and device, readable storage medium storing program for executing
CN106297765A (en) Phoneme synthesizing method and system
CN106708950B (en) Data processing method and device for intelligent robot self-learning system
CN104679733B (en) A kind of voice dialogue interpretation method, apparatus and system
CN102446309A (en) Process pattern based dynamic workflow planning system and method
CN108053826A (en) For the method, apparatus of human-computer interaction, electronic equipment and storage medium
CN106297766A (en) Phoneme synthesizing method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1231592

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190618

Address after: 100085 Beijing Haidian District Shangdi Information Industry Base Pioneer Road 1 B Block 2 Floor 2037

Patentee after: Beijing or Technology Co., Ltd.

Address before: 310023 Room 101, No. 10, Lianggongdang Road, Xixi Art Collection Village, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee before: Yutou Technology (Hangzhou) Co., Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20191209

Address after: 310000, room 10, No. 101, Gong Hui Road, Xixi art gathering village, Wuchang Street, Yuhang District, Zhejiang, Hangzhou

Patentee after: Yutou Technology (Hangzhou) Co., Ltd.

Address before: 100085 Beijing Haidian District Shangdi Information Industry Base Pioneer Road 1 B Block 2 Floor 2037

Patentee before: Beijing or Technology Co., Ltd.

TR01 Transfer of patent right