CN108172226A - A kind of voice control robot for learning response voice and action - Google Patents

A kind of voice control robot for learning response voice and action Download PDF

Info

Publication number
CN108172226A
CN108172226A (Application CN201810079661.6A)
Authority
CN
China
Prior art keywords
voice
unit
language
training
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810079661.6A
Other languages
Chinese (zh)
Inventor
李博 (Li Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Meng Wang Intelligent Technology Co Ltd
Original Assignee
Shanghai Meng Wang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Meng Wang Intelligent Technology Co Ltd filed Critical Shanghai Meng Wang Intelligent Technology Co Ltd
Priority to CN201810079661.6A
Publication of CN108172226A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L2015/0638 Interactive procedures
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G10L2015/225 Feedback of the input speech

Abstract

One embodiment of the present specification discloses a voice-controlled robot that can learn response speech, including a speech recognition unit, a question-utterance recording unit, a response-utterance recording unit, a speech mapping unit, a speech matching unit and a speech calling unit. The speech recognition unit recognizes a trainer's training speech and, according to a preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance; the preceding utterance is written into the question-utterance recording unit as a question utterance, the following utterance is written into the response-utterance recording unit as a response utterance, and the speech mapping unit records the mapping relation between the question utterance and the response utterance. The speech recognition unit recognizes a controller's control speech, the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech, the speech calling unit calls the response utterance that has a mapping relation with the question utterance according to the mapping recorded by the speech mapping unit, and the robot's voice output unit plays the response utterance.

Description

A kind of voice control robot for learning response voice and action
Technical field
The present specification belongs to the field of robotics and relates in particular to a voice-controlled robot capable of learning response speech and actions.
Technical background
At present, many types of household service robots have some speech recognition capability, and developers use this capability to add functions such as voice control and question answering to the robot. However, such functions are generally customized by the developer and cannot meet the diverse needs of ordinary users. For example, an ordinary user may want the robot to give one particular answer to a particular question, whereas the robot usually can only search online for a generic answer. As another example, an ordinary user may want to use his or her own voice to make the robot perform a series of actions, but at present only basic, pre-customized control phrases such as "go forward", "turn left" and "stretch out the arm" can be used. The technical solution described herein can partly solve the problems in these two usage scenarios.
Summary of the invention
【One】In order to allow an ordinary user, according to his or her actual situation, to make the robot give a specific answer to certain questions or a specific response to ordinary utterances, the technical solution of the present specification discloses a voice-controlled robot that can learn response speech.
The voice-controlled robot that can learn response speech includes a speech recognition unit, a question-utterance recording unit, a response-utterance recording unit and a speech mapping unit. The speech recognition unit recognizes a trainer's training speech and, according to a preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance; the preceding utterance is written into the question-utterance recording unit as a question utterance, the following utterance is written into the response-utterance recording unit as a response utterance, and the speech mapping unit records the mapping relation between the question utterance and the response utterance.
The voice-controlled robot that can learn response speech further includes a speech matching unit and a speech calling unit. The speech recognition unit recognizes a controller's control speech; the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech; the speech calling unit, according to the mapping relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping relation with the question utterance; and the robot's voice output unit plays the response utterance.
The technical solution of the present specification also discloses a voice control method applied to the above robot, comprising the following steps:
S101, the speech recognition unit recognizes the trainer's training speech and, according to the preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance; the preceding utterance is written into the question-utterance recording unit as a question utterance, and the following utterance is written into the response-utterance recording unit as a response utterance;
S102, the speech mapping unit records the mapping relation between the question utterance and the response utterance;
S201, the speech recognition unit recognizes the controller's control speech, and the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech;
S202, the speech calling unit, according to the mapping relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping relation with the question utterance;
S203, the robot's voice output unit plays the response utterance.
【Two】In order to allow an ordinary user, according to his or her actual situation, to make the robot respond to certain control speech by performing a specific action or action sequence, the technical solution of the present specification discloses a voice-controlled robot that can learn actions.
The voice-controlled robot that can learn actions includes a speech recognition unit, a speech recording unit, an action recognition unit, an action recording unit and an information mapping unit. The speech recognition unit recognizes a trainer's training speech, and the speech recording unit records the training speech; the action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, which the action recording unit records; the information mapping unit records the mapping relation between the training speech and the training-action characteristic parameter table.
The voice-controlled robot that can learn actions further includes a speech matching unit, an action calling unit and an action simulation unit. The speech recognition unit recognizes a controller's control speech; the speech matching unit finds, in the speech recording unit, the training speech that best matches the control speech; the action calling unit, according to the mapping relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping relation with the training speech; and the action simulation unit performs a simulated action according to the training-action characteristic parameter table.
In the voice-controlled robot that can learn actions, the speech recognition unit, speech matching unit, action calling unit and action simulation unit can process a coherent sequence of control speech and perform a coherent sequence of simulated actions.
The technical solution of the present specification also discloses a voice control method applied to the above robot, comprising the following steps:
S301, the speech recognition unit recognizes the trainer's training speech, and the speech recording unit records the training speech;
S302, the action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, which the action recording unit records;
S303, the information mapping unit records the mapping relation between the training speech and the training-action characteristic parameter table;
S401, the speech recognition unit recognizes the controller's control speech, and the speech matching unit finds, in the speech recording unit, the training speech that best matches the control speech;
S402, the action calling unit, according to the mapping relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping relation with the training speech;
S403, the action simulation unit performs a simulated action according to the training-action characteristic parameter table.
In the above steps, S301, S302 and S303 may first be carried out repeatedly, so that the information mapping unit records the mapping relations between multiple training speeches and their training-action characteristic parameter tables.
In step S401, if multiple training speeches are matched for one control speech, a sequence of training speeches is formed, and steps S402 and S403 may then be carried out repeatedly. Therefore, the speech recognition unit, speech matching unit, action calling unit and action simulation unit can process a coherent or non-coherent sequence of control speech and perform a coherent or non-coherent sequence of simulated actions.
Description of the drawings
Fig. 1 is the robot schematic diagram of embodiment 1;
Fig. 2 is the robot schematic diagram of embodiment 2.
Specific embodiment
In order to make the technical features of the present invention clearer and more intuitive, embodiments are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are exemplary in nature, do not limit the technical solution of the present invention, and that some embodiments may be combined with each other or with other known solutions.
【Embodiment 1】
In order to allow an ordinary user, according to his or her actual situation, to make the robot give a specific answer to certain questions or a specific response to ordinary utterances, this embodiment discloses a voice-controlled robot that can learn response speech.
The voice-controlled robot that can learn response speech includes a speech recognition unit, a question-utterance recording unit, a response-utterance recording unit and a speech mapping unit. This division into units is only a division of logical functions; other divisions are possible in actual implementation. Multiple units may be combined or integrated in software and hardware, and some features may be omitted or not performed. Those skilled in the art will understand that the units may be realized in the form of hardware, software, or a combination of software and hardware.
The speech recognition unit includes a voice sensing device together with speech recognition software or a speech recognition chip. When the trainer utters speech, the speech recognition unit recognizes the trainer's training speech and, according to a preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance. Many segmentation algorithms are possible.
For example, the trainer says "If someone asks you 'What is your name', you answer 'My name is Da Meng Wang'". The speech recognition unit identifies speech segment A "if someone asks you" and speech segment C "you answer"; the segmentation algorithm takes speech segment B "What is your name", located between segments A and C, as the preceding utterance, and speech segment D "My name is Da Meng Wang", located after segment C, as the following utterance.
As another example, the trainer may be two people: one asks "Has the child gone to school today?" and the other answers "He has gone to play at his aunt's house". The preset segmentation algorithm can then use speech feature information such as the voiceprints of the questioner and the answerer, or the distances of their sound sources, to take "Has the child gone to school today?" as the preceding utterance and "He has gone to play at his aunt's house" as the following utterance.
As another example, the trainer may press the "preceding utterance" button on the robot body while saying "I like you very much", and press the "following utterance" button while saying "I like you too". The preset segmentation algorithm then only needs to take "I like you very much" as the preceding utterance and "I like you too" as the following utterance according to which button was pressed.
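As a minimal illustration of the first, marker-phrase segmentation strategy above, the following sketch splits a text transcript of the training speech into a preceding and a following utterance. It assumes the speech recognition unit already yields a transcript; the marker phrases, function name and Python implementation are illustrative assumptions, not part of the original disclosure:

    # Illustrative sketch: split a transcribed training sentence into a
    # preceding utterance (segment B) and a following utterance (segment D)
    # using marker phrases, as in the first example above.

    QUESTION_MARKERS = ["if someone asks you"]                  # segment A
    ANSWER_MARKERS = ["you answer", "you just answer"]          # segment C

    def split_training_utterance(transcript: str) -> tuple[str, str] | None:
        """Return (preceding utterance, following utterance), or None if no markers are found."""
        text = transcript.strip()
        for qm in QUESTION_MARKERS:
            q_pos = text.lower().find(qm)
            if q_pos < 0:
                continue
            rest = text[q_pos + len(qm):]
            for am in ANSWER_MARKERS:
                a_pos = rest.lower().find(am)
                if a_pos < 0:
                    continue
                preceding = rest[:a_pos].strip(" ,'\"")          # segment B
                following = rest[a_pos + len(am):].strip(" ,'\"")  # segment D
                if preceding and following:
                    return preceding, following
        return None

    # split_training_utterance(
    #     "If someone asks you 'What is your name', you answer 'My name is Da Meng Wang'")
    # -> ("What is your name", "My name is Da Meng Wang")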
Optionally, the above "preceding utterance" and "following utterance" are recorded speech; optionally, they are electronic text, i.e. characters, words or sentences converted from the raw speech.
The speech recognition unit writes the preceding utterance into the question-utterance recording unit as a question utterance and writes the following utterance into the response-utterance recording unit as a response utterance, and the speech mapping unit records the mapping relation between the question utterance and the response utterance.
Preferably, the question-utterance recording unit is a database composed of labelled voice files, one for each question utterance;
Preferably, the response-utterance recording unit is a database composed of labelled voice files, one for each response utterance;
Preferably, the question-utterance recording unit is an electronic text document recording each question utterance obtained by training;
Preferably, the response-utterance recording unit is an electronic text document recording each response utterance obtained by training.
Preferably, the speech mapping unit is a table document recording an index between each pair of question utterance and response utterance that have a mapping relation in the question-utterance recording unit and the response-utterance recording unit.
Preferably, the question utterance and the response utterance form their mapping or index relation through their file names or index numbers.
Preferably, when a question utterance and a response utterance are generated and the question-utterance recording unit already contains an identical question utterance while the response-utterance recording unit contains a different response utterance, the old response utterance in the response-utterance recording unit is updated with the newly generated one.
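As an illustration of the file-name/index convention described in the preferred options above, the following sketch shows one possible way to back the speech mapping unit with a small index file; the directory layout, file names and JSON format are assumptions made only for illustration:

    import json
    from pathlib import Path

    # Assumed storage layout (illustrative only):
    #   questions/q_<n>.wav  - question-utterance recording unit
    #   answers/a_<n>.wav    - response-utterance recording unit
    #   mapping.json         - speech mapping unit: question file name -> answer file name

    class SpeechMappingUnit:
        def __init__(self, root: Path):
            self.path = root / "mapping.json"
            self.table = json.loads(self.path.read_text()) if self.path.exists() else {}

        def record(self, question_file: str, answer_file: str) -> None:
            # If the same question utterance is trained again with a different
            # response, the old entry is overwritten, matching the update rule above.
            self.table[question_file] = answer_file
            self.path.write_text(json.dumps(self.table, ensure_ascii=False, indent=2))

        def lookup(self, question_file: str) -> str | None:
            return self.table.get(question_file)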
The voice-controlled robot that can learn response speech further includes a speech matching unit and a speech calling unit.
After training is complete, the controller can interact with the robot by voice. The aforementioned speech recognition unit recognizes the controller's control speech; the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech; the speech calling unit, according to the mapping or index relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping or index relation with the question utterance; and the robot's voice output unit plays the response utterance.
For example:
The controller says "What is your name" or "Your name is...". The speech recognition unit recognizes the control speech "What is your name" or "Your name is...", and the speech matching unit finds the closest question utterance "What is your name" in the question-utterance recording unit. The speech calling unit is speech- or text-processing software together with the hardware resources it runs on; it reads the file name or index number of the matched question utterance and, according to the mapping or index relation between question utterance and response utterance in the speech mapping unit, obtains the response utterance "My name is Da Meng Wang". The robot's voice output unit plays the response utterance "My name is Da Meng Wang".
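The best-match step in this example can be pictured with the following sketch, which works on text transcripts and uses a simple string-similarity score as a stand-in for whatever acoustic or text matching the speech matching unit actually performs; the similarity measure and threshold are assumptions:

    from difflib import SequenceMatcher

    # Illustrative stand-in for the speech matching unit: pick the recorded
    # question utterance whose text is closest to the controller's control speech.

    def best_match(control_text: str, question_utterances: list[str],
                   threshold: float = 0.6) -> str | None:
        scored = [(SequenceMatcher(None, control_text.lower(), q.lower()).ratio(), q)
                  for q in question_utterances]
        score, question = max(scored, default=(0.0, None))
        return question if score >= threshold else None

    # best_match("your name is", ["What is your name", "Has the child gone to school today"])
    # -> "What is your name"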
This embodiment also discloses a voice control method applied to the above robot, comprising the following steps:
S101, the speech recognition unit recognizes the trainer's training speech and, according to the preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance; the preceding utterance is written into the question-utterance recording unit as a question utterance, and the following utterance is written into the response-utterance recording unit as a response utterance;
S102, the speech mapping unit records the mapping or index relation between the question utterance and the response utterance;
S201, the speech recognition unit recognizes the controller's control speech, and the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech;
S202, the speech calling unit, according to the mapping or index relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping or index relation with the question utterance;
S203, the robot's voice output unit plays the response utterance.
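Steps S101 to S203 can be pictured end to end with the following in-memory walk-through, which reuses split_training_utterance and best_match from the sketches above; the dictionary standing in for the recording and mapping units is an illustrative assumption, and playing the response (S203) is left to the caller:

    # Illustrative in-memory walk-through of S101-S203 using the sketches above.

    question_to_answer: dict[str, str] = {}   # stands in for the recording and mapping units

    def train(transcript: str) -> None:
        pair = split_training_utterance(transcript)        # S101: split the training speech
        if pair:
            question, answer = pair
            question_to_answer[question] = answer          # S102: record the mapping

    def respond(control_text: str) -> str | None:
        question = best_match(control_text, list(question_to_answer))   # S201
        return question_to_answer.get(question) if question else None   # S202; S203 plays the result

    train("If someone asks you 'What is your name', you answer 'My name is Da Meng Wang'")
    # respond("What is your name") -> "My name is Da Meng Wang"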
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific properties and working processes of the functional units of the robot in the method described above can refer to the corresponding parts of the foregoing system embodiment and are not repeated here.
【Embodiment 2】
In order to allow an ordinary user, according to his or her actual situation, to make the robot respond to certain control speech by performing a specific action or action sequence, this embodiment discloses a voice-controlled robot that can learn actions.
The voice-controlled robot that can learn actions includes a speech recognition unit, a speech recording unit, an action recognition unit, an action recording unit, an information mapping unit, a speech matching unit, an action calling unit and an action simulation unit. This division into units is only a division of logical functions; other divisions are possible in actual implementation. Multiple units may be combined or integrated in software and hardware, and some features may be omitted or not performed. Those skilled in the art will understand that the units may be realized in the form of hardware, software, or a combination of software and hardware.
The speech recognition unit includes a voice sensing device together with speech recognition software and/or a speech recognition chip. When the trainer utters speech within the sensing range of the speech recognition unit, the speech recognition unit recognizes the trainer's training speech, and the speech recording unit records the training speech.
The action recognition unit includes an image sensing device together with image recognition software and/or an image recognition chip. When the trainer performs an action within the sensing range of the action recognition unit, the action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, which the action recording unit records.
The information mapping unit records the mapping or index relation between the training speech and the training-action characteristic parameter table.
The trainer can open the robot's training mode by voice, button or other means and then input the training speech and the training action.
For example:
The trainer says the starting speech "start training"; the robot's speech recognition module recognizes "start training", and the robot control system starts the robot's training mode;
When the robot enters training mode, the trainer first says a training speech, for example "dance the dance I practised on January 25". The robot's speech recognition unit recognizes the training speech "dance the dance I practised on January 25", the speech recording unit records the training speech, and the robot then prompts the trainer by voice, screen, action or indicator light to begin the training action;
The trainer performs a training action, that is, dances the dance practised on January 25. The action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table: the image recognition software and/or image recognition chip processes the images of the trainer captured by the image sensing device, obtains the spatial state of each part of the trainer's limbs at a sequence of moments, and records references such as the spatial coordinates and angles of each limb part at each moment, forming a time-series table of training-action parameters. The action recording unit records the training-action characteristic parameter table; in practice, the action recording unit manages multiple training-action characteristic parameter table files;
After the training action, the trainer says the ending speech "end training"; the robot's speech recognition module recognizes "end training", the information mapping unit records the mapping or index relation between the newly obtained training speech and the newly obtained training-action characteristic parameter table, and the robot control system closes the training mode.
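One way to picture the training-action characteristic parameter table described above is as a time series of pose frames; the field names, joint labels and Python types below are illustrative assumptions about how such a table could be held in memory:

    from dataclasses import dataclass, field

    # Illustrative structure for a training-action characteristic parameter table:
    # one PoseFrame per sampled moment, holding the spatial coordinates and joint
    # angles of each tracked limb part. Joint names are examples only.

    @dataclass
    class PoseFrame:
        t: float                                            # seconds since the action started
        positions: dict[str, tuple[float, float, float]]    # limb part -> (x, y, z)
        angles: dict[str, float]                            # joint -> angle in degrees

    @dataclass
    class ActionParameterTable:
        label: str                        # e.g. the associated training speech
        frames: list[PoseFrame] = field(default_factory=list)

    # table = ActionParameterTable("dance the dance I practised on January 25")
    # table.frames.append(PoseFrame(0.0, {"left_hand": (0.1, 0.9, 0.2)}, {"left_elbow": 45.0}))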
After training is complete, the controller can use voice control to make the robot perform the action that has a mapping or index relation with a speech.
The controller utters a control speech, for example "dance the dance I practised on January 25" or "dance the dance trained in January". The above speech recognition unit recognizes the controller's control speech "dance the dance I practised on January 25" or "dance the dance trained in January", the speech matching unit finds, in the speech recording unit, the training speech "dance the dance I practised on January 25" that best matches the control speech, and the action calling unit, according to the mapping or index relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping or index relation with that training speech.
The action simulation unit includes motion control software, a processor, a humanoid mechanism and the like. The parts of the humanoid mechanism correspond one-to-one to the limb parts of the trainer and thus to the coordinates, angles and other parameters of each limb part at each moment in the training-action characteristic parameter table. With the assistance of hardware resources such as the processor, the motion control software makes each part of the humanoid mechanism imitate the corresponding limb part, so that the whole body together performs the imitating motion and reproduces the trainer's earlier "dance I practised on January 25".
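A playback loop for the action simulation unit could then look like the sketch below, which steps through the frames of the parameter table sketched earlier and drives the humanoid mechanism; the robot.set_joint_angle interface is an assumed low-level motion API, not one defined in this specification:

    import time

    # Illustrative playback: replay an ActionParameterTable (see the earlier sketch)
    # on a humanoid mechanism exposing an assumed set_joint_angle(joint, angle) call.

    def simulate(table: "ActionParameterTable", robot) -> None:
        start = time.monotonic()
        for frame in table.frames:
            delay = frame.t - (time.monotonic() - start)   # wait until this frame's timestamp
            if delay > 0:
                time.sleep(delay)
            for joint, angle in frame.angles.items():
                robot.set_joint_angle(joint, angle)        # assumed motion-control interface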
In the voice-controlled robot that can learn actions, the speech recognition unit, speech matching unit, action calling unit and action simulation unit can process a coherent sequence of control speech and perform a coherent sequence of simulated actions.
For example, by uttering training speeches and performing training actions, the trainer lets the robot obtain the training speeches "turn left", "go forward five steps" and "turn right", their corresponding training-action characteristic parameter tables and the mapping or index relations between them. The controller can then utter the control speech "turn left, go forward five steps, then turn right". The speech recognition unit recognizes the controller's control speech; the speech matching unit finds, in the speech recording unit, the training speeches that best match the control speech and, according to the three known training speeches "turn left", "go forward five steps" and "turn right", divides the control speech into three control speeches in order; finally, the three training-action characteristic parameter tables that have mapping relations with the three training speeches are called in sequence, and the action simulation unit performs coherent simulated actions according to the order of the three training-action characteristic parameter tables.
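The division of a compound control speech into an ordered sequence of trained commands can be sketched as below, again working on text: the control speech is split on pauses or punctuation and each piece is matched against the recorded training speeches with the best_match stand-in from Embodiment 1; the splitting rule is an illustrative assumption:

    import re

    # Illustrative sketch: turn one compound control speech into an ordered list
    # of trained commands by splitting on punctuation/connectives and matching
    # each piece against the recorded training speeches.

    def split_control_sequence(control_text: str, trained_speeches: list[str]) -> list[str]:
        pieces = [p.strip() for p in re.split(r"[,;]| then | and ", control_text) if p.strip()]
        matched = [best_match(piece, trained_speeches) for piece in pieces]
        return [m for m in matched if m is not None]

    # split_control_sequence("turn left, go forward five steps, then turn right",
    #                        ["turn left", "go forward five steps", "turn right"])
    # -> ["turn left", "go forward five steps", "turn right"]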
This embodiment also discloses a voice control method applied to the above robot, comprising the following steps:
S301, the speech recognition unit recognizes the trainer's training speech, and the speech recording unit records the training speech;
S302, the action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, which the action recording unit records;
S303, the information mapping unit records the mapping relation between the training speech and the training-action characteristic parameter table;
S401, the speech recognition unit recognizes the controller's control speech, and the speech matching unit finds, in the speech recording unit, the training speech that best matches the control speech;
S402, the action calling unit, according to the mapping relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping relation with the training speech;
S403, the action simulation unit performs a simulated action according to the training-action characteristic parameter table.
In the above steps, S301, S302 and S303 may first be carried out repeatedly, so that the information mapping unit records the mapping relations between multiple training speeches and their training-action characteristic parameter tables.
In step S401, if multiple training speeches are matched for one control speech, a sequence of training speeches is formed, and steps S402 and S403 may then be carried out repeatedly.
The speech recognition unit, speech matching unit, action calling unit and action simulation unit can process a coherent or non-coherent sequence of control speech and perform a coherent or non-coherent sequence of simulated actions.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific properties and working processes of the functional units of the robot in the method described above can refer to the corresponding parts of the foregoing system embodiment and are not repeated here.
The above embodiments only illustrate the technical solution of the present specification and do not limit it. Those skilled in the art should understand that technical solutions obtained by making non-creative modifications to, or equivalent replacements of some technical features of, the technical solutions recorded in the foregoing embodiments do not in essence depart from the scope of the technical solutions described in this application document.

Claims (10)

1. A voice-controlled robot that can learn response speech, characterized by comprising a speech recognition unit, a question-utterance recording unit, a response-utterance recording unit and a speech mapping unit, wherein the speech recognition unit recognizes a trainer's training speech and, according to a preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance, writes the preceding utterance into the question-utterance recording unit as a question utterance and the following utterance into the response-utterance recording unit as a response utterance, and the speech mapping unit records the mapping relation between the question utterance and the response utterance.
2. The robot as claimed in claim 1, characterized by further comprising a speech matching unit and a speech calling unit, wherein the speech recognition unit recognizes a controller's control speech, the speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech, the speech calling unit, according to the mapping relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping relation with the question utterance, and the robot's voice output unit plays the response utterance.
3. A voice control method, characterized by comprising the following steps:
S101, a speech recognition unit recognizes a trainer's training speech and, according to a preset segmentation algorithm, divides the training speech into a preceding utterance and a following utterance; the preceding utterance is written into a question-utterance recording unit as a question utterance, and the following utterance is written into a response-utterance recording unit as a response utterance;
S102, a speech mapping unit records the mapping relation between the question utterance and the response utterance.
4. The voice control method as claimed in claim 3, characterized by further comprising the following steps:
S201, the speech recognition unit recognizes a controller's control speech, and a speech matching unit finds, in the question-utterance recording unit, the question utterance that best matches the control speech;
S202, a speech calling unit, according to the mapping relation between question utterance and response utterance recorded by the speech mapping unit, calls the response utterance that has a mapping relation with the question utterance;
S203, the robot's voice output unit plays the response utterance.
5. A voice-controlled robot that can learn actions, characterized by comprising a speech recognition unit, a speech recording unit, an action recognition unit, an action recording unit and an information mapping unit, wherein the speech recognition unit recognizes a trainer's training speech, the speech recording unit records the training speech, the action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, the action recording unit records the training-action characteristic parameter table, and the information mapping unit records the mapping relation between the training speech and the training-action characteristic parameter table.
6. The robot as claimed in claim 5, characterized by further comprising a speech matching unit, an action calling unit and an action simulation unit, wherein the speech recognition unit recognizes a controller's control speech, the speech matching unit finds, in the speech recording unit, the training speech that best matches the control speech, the action calling unit, according to the mapping relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping relation with the training speech, and the action simulation unit performs a simulated action according to the training-action characteristic parameter table.
7. The robot as claimed in claim 6, characterized in that the speech recognition unit, speech matching unit, action calling unit and action simulation unit can process a sequence of control speech and perform a sequence of simulated actions.
8. A voice control method, characterized by comprising the following steps:
S301, a speech recognition unit recognizes a trainer's training speech, and a speech recording unit records the training speech;
S302, an action recognition unit recognizes the trainer's training action and obtains a training-action characteristic parameter table, which an action recording unit records;
S303, an information mapping unit records the mapping relation between the training speech and the training-action characteristic parameter table.
9. The voice control method as claimed in claim 8, characterized by further comprising the following steps:
S401, the speech recognition unit recognizes a controller's control speech, and a speech matching unit finds, in the speech recording unit, the training speech that best matches the control speech;
S402, an action calling unit, according to the mapping relation between training speech and training-action characteristic parameter table recorded by the information mapping unit, calls the training-action characteristic parameter table that has a mapping relation with the training speech;
S403, an action simulation unit performs a simulated action according to the training-action characteristic parameter table.
10. The voice control method as claimed in claim 9, characterized in that:
steps S301, S302 and S303 may first be carried out repeatedly, so that the information mapping unit records the mapping relations between multiple training speeches and their training-action characteristic parameter tables;
in step S401, if multiple training speeches are matched for one control speech, a sequence of training speeches is formed, and steps S402 and S403 may then be carried out repeatedly.
CN201810079661.6A 2018-01-27 2018-01-27 A kind of voice control robot for learning response voice and action Pending CN108172226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810079661.6A CN108172226A (en) 2018-01-27 2018-01-27 A kind of voice control robot for learning response voice and action

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810079661.6A CN108172226A (en) 2018-01-27 2018-01-27 A kind of voice control robot for learning response voice and action

Publications (1)

Publication Number Publication Date
CN108172226A (en) 2018-06-15

Family

ID=62516088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810079661.6A Pending CN108172226A (en) 2018-01-27 2018-01-27 A kind of voice control robot for learning response voice and action

Country Status (1)

Country Link
CN (1) CN108172226A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438564A (en) * 2021-06-22 2021-09-24 武汉领普科技有限公司 Control system, terminal processing method, wireless switch and processing method thereof

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266635B1 (en) * 1999-07-08 2001-07-24 Contec Medical Ltd. Multitasking interactive voice user interface
CN1380846A (en) * 2000-03-31 2002-11-20 索尼公司 Robot device, robot device action control method, external force detecting device and method
CN1507617A (en) * 2002-03-06 2004-06-23 索尼公司 Learning apparatus, learning method, and robot apparatus
CN104392720A (en) * 2014-12-01 2015-03-04 江西洪都航空工业集团有限责任公司 Voice interaction method of intelligent service robot
CN104965426A (en) * 2015-06-24 2015-10-07 百度在线网络技术(北京)有限公司 Intelligent robot control system, method and device based on artificial intelligence
CN105528349A (en) * 2014-09-29 2016-04-27 华为技术有限公司 Method and apparatus for analyzing question based on knowledge base
CN105825268A (en) * 2016-03-18 2016-08-03 北京光年无限科技有限公司 Method and system for data processing for robot action expression learning
CN106292424A (en) * 2016-08-09 2017-01-04 北京光年无限科技有限公司 Music data processing method and device for anthropomorphic robot
CN106326208A (en) * 2015-06-30 2017-01-11 芋头科技(杭州)有限公司 System and method for training robot via voice
CN106547884A (en) * 2016-11-03 2017-03-29 深圳量旌科技有限公司 A kind of behavior pattern learning system of augmentor
CN106601237A (en) * 2016-12-29 2017-04-26 上海智臻智能网络科技股份有限公司 Interactive voice response system and voice recognition method thereof
CN106649825A (en) * 2016-12-29 2017-05-10 上海智臻智能网络科技股份有限公司 Voice interaction system, establishment method and device thereof
CN106782539A (en) * 2017-01-16 2017-05-31 上海智臻智能网络科技股份有限公司 A kind of intelligent sound exchange method, apparatus and system
CN106847279A (en) * 2017-01-10 2017-06-13 西安电子科技大学 Man-machine interaction method based on robot operating system ROS
CN107450367A (en) * 2017-08-11 2017-12-08 上海思依暄机器人科技股份有限公司 A kind of voice transparent transmission method, apparatus and robot
CN107443396A (en) * 2017-08-25 2017-12-08 魔咖智能科技(常州)有限公司 A kind of intelligence for imitating human action in real time accompanies robot
CN107463636A (en) * 2017-07-17 2017-12-12 北京小米移动软件有限公司 Data configuration method, device and the computer-readable recording medium of interactive voice


Similar Documents

Publication Publication Date Title
JP6816925B2 (en) Data processing method and equipment for childcare robots
CN108000526B (en) Dialogue interaction method and system for intelligent robot
US20200210901A1 (en) Dynamic learning method and system for robot, robot and cloud server
CN112204564A (en) System and method for speech understanding via integrated audio and visual based speech recognition
US11017551B2 (en) System and method for identifying a point of interest based on intersecting visual trajectories
CN111801730A (en) System and method for artificial intelligence driven automated companion
CN105723360A (en) Improving natural language interactions using emotional modulation
US11308312B2 (en) System and method for reconstructing unoccupied 3D space
US20190251350A1 (en) System and method for inferring scenes based on visual context-free grammar model
US20210043106A1 (en) Technology based learning platform for persons having autism
CN106774845B (en) intelligent interaction method, device and terminal equipment
US20190304451A1 (en) Dialogue method, dialogue system, dialogue apparatus and program
US10607504B1 (en) Computer-implemented systems and methods for a crowd source-bootstrapped spoken dialog system
Strauss et al. Proactive spoken dialogue interaction in multi-party environments
WO2021003471A1 (en) System and method for adaptive dialogue management across real and augmented reality
US20190253724A1 (en) System and method for visual rendering based on sparse samples with predicted motion
Lison et al. Spoken dialogue systems: the new frontier in human-computer interaction
CN112204563A (en) System and method for visual scene construction based on user communication
CN110134863A (en) The method and device that application program is recommended
WO2020256993A1 (en) System and method for personalized and multimodal context aware human machine dialogue
KR20160051020A (en) User-interaction toy and interaction method of the toy
JP2023055910A (en) Robot, dialogue system, information processing method, and program
CN109741744B (en) AI robot conversation control method and system based on big data search
CN117541444B (en) Interactive virtual reality talent expression training method, device, equipment and medium
CN108172226A (en) A kind of voice control robot for learning response voice and action

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2018-06-15)