CN101551998B - A group of voice interaction devices and method of voice interaction with human - Google Patents

A group of voice interaction devices and method of voice interaction with human

Info

Publication number
CN101551998B
CN101551998B CN2009100510319A CN200910051031A
Authority
CN
China
Prior art keywords
data
voice
speech recognition
database
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009100510319A
Other languages
Chinese (zh)
Other versions
CN101551998A (en)
Inventor
潘竞
程青云
马果
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jinxin Electronic Technology Co Ltd
Original Assignee
Shanghai Jinxin Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jinxin Electronic Technology Co Ltd filed Critical Shanghai Jinxin Electronic Technology Co Ltd
Priority to CN2009100510319A priority Critical patent/CN101551998B/en
Publication of CN101551998A publication Critical patent/CN101551998A/en
Application granted granted Critical
Publication of CN101551998B publication Critical patent/CN101551998B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Telephonic Communication Services (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a voice interaction system and a method of voice interaction between the system and a human. The voice interaction system comprises two or more devices capable of voice interaction. Each device is provided with a speech recognition system comprising a voice input module, a database, a speech recognition and control module, and a voice output module for outputting speech. The data stored in the databases of the devices' speech recognition systems are logically correlated with one another, so that voice interaction between the devices can be realized. Because the data in each database are divided into groups, the input speech is compared only with the corresponding data group during recognition, which increases recognition speed and makes it easy to enrich the recognizable content.

Description

Voice interaction system and method of voice interaction between the system and a human
Technical field
The present invention relates to the field of speech recognition and to devices with speech recognition capability, and in particular to a voice interaction system and a method of voice interaction between a human and such a system.
Background technology
Communicating with machines by speech, so that a machine understands what you say, is something people have long dreamed of. Speech recognition technology enables a machine to convert a speech signal into the corresponding text or command through a process of identification and understanding. Speech recognition is an interdisciplinary field. Over the past two decades it has made marked progress and has begun to move from the laboratory to the market. It was expected that within the following ten years speech recognition would enter fields such as industry, household appliances, communications, automotive electronics, medical care, home services and consumer electronics; it was regarded as one of the ten major applied scientific and technological achievements in the electronics and information field for the period 2000-2010, with considerable potential to drive product upgrades in household appliances, communications and industrial control worldwide. At present, many companies around the world apply speech recognition technology in telecommunications, the service sector and on industrial production lines, and have created a batch of novel voice products (such as voice memo pads, voice-controlled toys, voice remote controls and home servers). In today's speech recognition field, however, communication with a speech recognition device is one-to-one between the user and the device, the scenarios of such communication are very limited, and the entries the device can recognize are also very limited. In view of these shortcomings, it is necessary to propose a group of devices that can interact by speech among multiple devices and thereby enrich the dialogue scenarios.
Summary of the invention
The technical problem to be solved by the present invention is to provide a voice interaction system and a method of voice interaction between the system and a human. Voice interaction between the devices is realized through logically correlated data stored in the database of the speech recognition system of each device. By grouping the data in each database, the input speech is compared only with the corresponding data group during recognition, which increases recognition speed and greatly reduces the demand on system memory. At the same time, adding data to the database neither slows down recognition nor requires changing the capacity of the random access memory, so the recognizable content can be enriched conveniently and freely.
To solve the above technical problem, the invention provides a voice interaction system comprising two or more devices capable of voice interaction. Each device is provided with a speech recognition system comprising: a voice input module for inputting speech into the speech recognition system; a database storing the content to be recognized and the speech data with which a response is made according to the recognized content; a speech recognition and control module for matching the speech data input through the voice input module against the entries stored in the database; and a voice output module for outputting speech. The system is characterized in that the data stored in the databases of the devices' speech recognition systems are logically correlated with one another, so that voice interaction among the two or more devices can be realized.
A further improvement of the present invention is that the data stored in the database are divided into several groups according to the dialogue scenes in which they are used, each scene corresponding to one data group, and each data group having a head node that contains the scene information of that group. At least one data group in the database of each device's speech recognition system is logically correlated with at least one data group in the database of another device's speech recognition system.
A further improvement of the present invention is that each data group is divided into two or more sub-groups, and the content of each sub-group can be combined with the content of sub-groups from other groups to form a new group, in other words a new scene.
A further improvement of the present invention is that the speech recognition system includes a data input interface for entering new data into the database.
In another aspect, the invention provides a method by which a human interacts by voice with a voice interaction system, the system comprising two or more devices capable of voice interaction. Each device is provided with a speech recognition system comprising: a voice input module for inputting speech into the speech recognition system; a database storing the content to be recognized and the speech data with which a response is made according to the recognized content; a speech recognition and control module for matching the speech data input through the voice input module against the entries stored in the database; and a voice output module for outputting speech. The data stored in the devices' databases are logically correlated so that voice interaction among the two or more devices can be realized. The method comprises: a) a person first speaks an instruction;
b) when each of the two or more devices hears the instruction, it recognizes the instruction through the speech recognition and control module of its speech recognition system and, through that module, finds in its database the data group of the scene corresponding to the instruction;
c) after the relevant devices have found the data group of the corresponding scene, a first device outputs speech according to the instruction through its voice output module;
The method is characterized in that: d) after the first of the scene-related devices has spoken, the other devices receive the speech through their speech recognition systems and match it against the data stored in their databases; according to the matching result, a second scene-related device outputs, through its voice output module, speech that matches the speech produced by the first device;
The above steps are repeated until a complete scene dialogue is finished.
A further improvement of this aspect of the invention is that the data stored in the database are divided into several groups according to the scenes in which they are used, each scene corresponding to one data group, and each data group having a head node containing the scene information of that group; at least one data group in the database of each device is logically correlated with at least one data group in the database of another device.
A further improvement of this aspect of the invention is that step c) further comprises: c1) after the user speaks the instruction, the speech recognition system in each device matches the instruction, through its speech recognition and control module, against the scene information in the head node of each data group and thereby finds the corresponding data group; c2) the first device outputs, through the voice output module of its speech recognition system, speech correlated with the user's instruction;
and step d) further comprises: d1) after the first device speaks, the other devices load the first device's speech, through the voice input modules of their speech recognition systems, into the speech recognition and control modules, and match it against the data in the data group of the corresponding scene;
d2) when a second device finds speech data matching the speech produced by the first device, it outputs the corresponding speech through its voice output module.
Through the above technical scheme, the voice interaction system and the method of voice interaction with a human provided by the invention realize voice interaction between devices through the logically correlated data stored in the database of each device's speech recognition system. By grouping the data in each database, the input speech is compared only with the corresponding data group during recognition, which increases recognition speed and greatly reduces the demand on system memory. Adding data to the database neither requires changing the capacity of the random access memory nor slows down recognition, so the recognizable content can be enriched conveniently and freely.
Description of the drawings
Fig. 1 is a block diagram of the speech recognition system provided in each device of a voice interaction system according to a preferred embodiment of the present invention;
Fig. 2 is a flow chart of the recognition process of the speech recognition system of each device in a voice interaction system according to a preferred embodiment of the present invention;
Fig. 3 is a diagram of the grouping of the data in the database of the speech recognition system in each device of a voice interaction system according to a preferred embodiment of the present invention; and
Fig. 4 is a flow chart of a voice interaction between a human and a voice interaction system according to a preferred embodiment of the present invention.
Detailed description of the embodiments
The present invention involves multiple devices, but the hardware configuration and workflow of every device are identical. Realizing the invention mainly involves three techniques: first, speech recognition; second, a data structure that facilitates switching between scenes; and third, effective methods to improve the correctness with which the devices recognize one another and judge the user's speech. In this embodiment, a voice interaction system is described in detail taking two devices as an example. The invention is described below in detail with reference to the drawings.
Fig. 1 is a block diagram of the speech recognition system provided in each device of a voice interaction system according to a preferred embodiment of the present invention. The speech recognition system comprises a speech recognition and control module 10 and, each communicatively connected to it, a voice input module 20, a database 30, a data input interface 40, a voice output module 50 and an action output module 60. The speech recognition and control module 10 comprises a processor running a speech recognition algorithm; alternatively, it may be a processor together with a separate speech recognition chip. The voice input module 20 comprises a microphone, an amplifier for the input speech, and an analog-to-digital (A/D) conversion circuit that converts the input speech from an analog signal into a digital signal and feeds it to the speech recognition and control module 10. The database 30 stores the content to be recognized and the speech data with which a response is made according to the recognized content. The data input interface 40 is used to enter new data into the database 30, so that the device can change its functions and content according to the user's needs. The voice output module 50 comprises a digital-to-analog (D/A) conversion circuit and a loudspeaker, converting the digital speech data to be output into analog speech that is amplified and played through the loudspeaker. The output is not limited to speech; it may also be a mechanical or electronic action performed after the speech is recognized.
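The module layout described above can be sketched in code. The Python classes below are purely illustrative stand-ins for the hardware modules 10-50; all class and method names, and the string returned by the loudspeaker stand-in, are invented for this sketch and do not come from the patent.

```python
from __future__ import annotations


class VoiceInputModule:
    """Stand-in for the microphone, amplifier and A/D converter (module 20)."""

    def capture(self) -> list[int]:
        # A real device would sample and digitize the microphone signal;
        # here we return an empty digital signal as a placeholder.
        return []


class VoiceOutputModule:
    """Stand-in for the D/A converter, amplifier and loudspeaker (module 50)."""

    def play(self, phrase: str) -> str:
        # Return what would be spoken, instead of driving real hardware.
        return f"[speaker] {phrase}"


class Database:
    """Stand-in for database 30: phrases grouped by dialogue scene."""

    def __init__(self) -> None:
        self.scenes: dict[str, list[str]] = {}

    def load(self, scene: str, phrases: list[str]) -> None:
        # Corresponds to feeding new data in through data interface 40,
        # so the user can change the recognizable content at will.
        self.scenes[scene] = list(phrases)


class RecognitionControlModule:
    """Stand-in for module 10: a processor wired to the other modules."""

    def __init__(self, db: Database) -> None:
        self.mic = VoiceInputModule()
        self.speaker = VoiceOutputModule()
        self.db = db
```

A device would then be assembled as `RecognitionControlModule(Database())`, with the open database letting the user add, remove or change recognition entries before each use.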
The above introduces the speech recognition system contained in each device used in the present invention. In this system the data stored in the database 30 are open: the user can change the content according to his or her own needs, that is, recognition entries can be added, removed or changed before each use, so that the user's own requirements can be satisfied. Through the data input interface 40, the user can feed pre-recorded data into the speech recognition and control module 10, which then places the data received through the interface 40 into the database 30.
In addition, with reference to Fig. 3, the data stored in the database 30 are divided into several data groups 31, 32, 33... according to the various scenes, each group representing a different scene. Each data group 31, 32, 33... can in turn be divided into several sub-groups 311, 312, 313..., 321, 322, 323..., and the content of each sub-group can be combined with the content of sub-groups from other groups to form a new group, in other words a new scene. When the data are grouped, each data group has a head node containing the scene information of that group, including the scene title and the addresses of all possible recognition entries; depending on the concrete scene, each data group also has several partial nodes corresponding to its sub-groups, and these partial nodes likewise contain the information of the sub-groups, including name information and the addresses of all possible recognition entries. When the speech recognition and control module 10 matches the speech data input through the input module 20 against the data stored in the database 30, it does not, as in traditional speech recognition methods, compare the input speech against all the data in the database 30; instead it first compares the input speech against the head node, i.e. the scene title, of each data group, thereby selecting the corresponding data group, and only then compares the input speech against the data of that group. Such a comparison scheme speeds up recognition, and increasing the data stored in the database 30 does not slow recognition down. Furthermore, by means of the grouping method, the invention can add auxiliary nodes for recognition entries that are easily confused or that have several synonymous expressions, effectively improving the recognition rate and the recognition effect. For example, for the recognition entry "hello" ("ni hao"), auxiliary nodes for its synonymous expressions can be added; when a device performs scene recognition on the speech content, the auxiliary nodes are compared together with the node itself, which improves recognition efficiency and effect and allows the device to match the user's speaking habits better.
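The two-stage, head-node-first lookup and the auxiliary synonym nodes can be sketched as follows. The data layout and the simple substring matching are assumptions made for illustration; a real device compares acoustic features, not text.

```python
from __future__ import annotations

# Each scene group carries a head node (scene keywords), auxiliary
# synonym nodes for easily confused entries, and its phrase list.
scenes = {
    "greeting": {
        "head": ["greeting", "hello"],
        "aux": {"hello": ["hi", "hey"]},   # auxiliary nodes for "hello"
        "phrases": ["hello", "how are you"],
    },
    "weather": {
        "head": ["weather"],
        "aux": {},
        "phrases": ["it is sunny", "it may rain"],
    },
}


def select_scene(utterance: str) -> str | None:
    """Stage 1: compare only against head (and auxiliary) nodes,
    not against every entry in the whole database."""
    for name, group in scenes.items():
        keys = list(group["head"])
        for synonyms in group["aux"].values():
            keys.extend(synonyms)          # widen the match via synonyms
        if any(k in utterance for k in keys):
            return name
    return None


def recognize(utterance: str) -> str | None:
    """Stage 2: match only inside the selected scene's own group."""
    name = select_scene(utterance)
    if name is None:
        return None
    for phrase in scenes[name]["phrases"]:
        if phrase in utterance:
            return phrase
    return None
```

Adding a new scene only adds one head node to stage 1, which is why growing the database need not slow recognition in proportion to its total size.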
Fig. 2 is a flow chart of the speech recognition process of the speech recognition system in each device according to a preferred embodiment of the present invention. 201: the user, or another device, speaks an instruction or a phrase; the speech signal is amplified by the input module 20, converted from an analog signal into a digital signal, and fed into the speech recognition and control module 10. 202: the scene content to be recognized is determined according to the speech content. 203: the speech recognition and control module 10 adds the digital speech content to the recognition list. 204: the speech recognition and control module 10 compares the content of the recognition list with the speech data input by the user or by the other device. 205: if recognition succeeds, the recognition result is output and the new scene is determined according to the result; if recognition fails, the process returns to step 204 and the comparison is performed again.
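Steps 201-205 can be rendered as a small lookup over a "recognition list". How the list is populated (the current scene's phrases plus the opening phrase of every other scene) and the exact-string comparison are assumptions made for illustration, based on the description of step 404' further below.

```python
from __future__ import annotations


def build_recognition_list(scenes: dict[str, list[str]],
                           current: str) -> list[tuple[str, str]]:
    """Step 203: expected phrases of the current scene, plus the first
    phrase of every other scene so an unmatched input can switch scene."""
    entries = [(current, p) for p in scenes[current]]
    for name, phrases in scenes.items():
        if name != current and phrases:
            entries.append((name, phrases[0]))
    return entries


def compare(scenes: dict[str, list[str]], current: str,
            utterance: str) -> tuple[str, str] | None:
    """Step 204: compare the input against the recognition list.
    Step 205: on success return (new scene, matched phrase); on
    failure return None so the caller can retry the comparison."""
    for scene, phrase in build_recognition_list(scenes, current):
        if phrase == utterance:   # a real device matches acoustically
            return scene, phrase
    return None
```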
Fig. 4 is a flow chart of a voice interaction between a user and two devices according to a preferred embodiment of the present invention. When two voice interaction devices interact by voice, the process comprises: step 401: the user speaks a sentence to issue an instruction and start the two devices. Steps 402, 402': the first and second devices receive the user's sentence through their voice input modules 20 and recognize it with their speech recognition and control modules 10, comparing the sentence against the head nodes of the data groups stored in their databases 30. Steps 403, 403': through the recognition of step 402, the first device finds the data group N of the scene corresponding to the user's sentence, and the second device finds the data group N' of the same scene. Step 404: having found the scene's data group, the first device speaks the first sentence of the scene through its voice output module 50. Step 404': having found the scene's data group, the second device sets the first device's sentence as the content to be recognized, writes the first sentences of the other scenes into the recognition list as well, and recognizes the sentence with its speech recognition and control module 10. Step 405': if the scene matches, the second device speaks the second sentence; if not, it switches scene according to the first sentences of the other scenes in the recognition list and, having found the matching scene, speaks the second sentence. Step 405: the first device loads the second device's second sentence, through its voice input module 20, into the recognition list of its speech recognition and control module 10, recognizes it, and then speaks the third sentence. The above steps are repeated until the scene dialogue is finished.
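The alternating exchange of Fig. 4 can be played through in miniature. The scene script and device names below are invented for illustration; each device holds the same script and answers the line it last "heard" with the next one, as steps 404-405' describe.

```python
from __future__ import annotations

# One shared scene script; in the patent each device finds this group
# in its own database via the head-node comparison of steps 402/403.
SCRIPT = {
    "greeting": ["hello", "hello, how are you", "fine, thank you"],
}


class Device:
    def __init__(self, name: str, starts: bool) -> None:
        self.name = name
        self.starts = starts          # whether this device speaks line 1

    def reply(self, scene: str, heard: str | None) -> str | None:
        lines = SCRIPT[scene]
        if heard is None:             # the user's instruction opened the scene
            return lines[0] if self.starts else None
        if heard in lines:            # recognized the other device's line
            i = lines.index(heard)
            if i + 1 < len(lines):
                return lines[i + 1]   # answer with the next line
        return None                   # scene finished, dialogue ends


def run_dialogue(scene: str) -> list[tuple[str, str]]:
    first, second = Device("A", starts=True), Device("B", starts=False)
    log: list[tuple[str, str]] = []
    speaker, listener, heard = first, second, None
    while True:
        line = speaker.reply(scene, heard)
        if line is None:              # the repetition of steps stops here
            return log
        log.append((speaker.name, line))
        speaker, listener, heard = listener, speaker, line
```

`run_dialogue("greeting")` walks the two devices through the three-line scene, alternating speakers, and stops when the script is exhausted.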
The voice interaction described above takes place between two devices and between the devices and a human. When more than two devices are involved, the invention works in the same way as with two devices: the user first speaks an instruction, each device finds the corresponding scene, and thereafter each device takes the speech of the other devices as the content to be recognized and, according to the recognition result, speaks content that matches what the other devices have said.
It should be understood that the detailed description of the above embodiments serves to illustrate and explain the principle of the invention, not to limit its scope of protection. Without departing from the spirit of the invention, a person of ordinary skill in the art may, on the basis of these embodiments and an understanding of the principle taught by the above technical scheme, make modifications, variations and changes. The scope of protection of the invention is therefore defined by the appended claims and their equivalents.

Claims (7)

1. A voice interaction system comprising two or more devices capable of voice interaction, each device being provided with a speech recognition system comprising: a voice input module for inputting speech into the speech recognition system; a database storing the content to be recognized and the speech data with which a response is made according to the recognized content; a speech recognition and control module for matching the speech data input through the voice input module against the entries stored in the database; and a voice output module for outputting speech; characterized in that the data stored in the databases of the devices' speech recognition systems are logically correlated with one another, so that voice interaction among the two or more devices can be realized.
2. The voice interaction system of claim 1, characterized in that the data stored in the database are divided into several groups according to the scenes in which they are used, each scene corresponding to one data group, and each data group having a head node containing the scene information of that group; wherein at least one data group in the database of each device is logically correlated with at least one data group in the database of another device.
3. The voice interaction system of claim 2, characterized in that each data group is divided into two or more sub-groups, and the content of each sub-group can be combined with the content of sub-groups from other groups to form a new group, in other words a new scene.
4. The voice interaction system of any one of claims 1-3, characterized in that the speech recognition system includes a data input interface for entering new data into the database.
5. A method by which a human interacts by voice with a voice interaction system, the system comprising two or more devices capable of voice interaction, each device being provided with a speech recognition system comprising: a voice input module for inputting speech into the speech recognition system; a database storing the content to be recognized and the speech data with which a response is made according to the recognized content; a speech recognition and control module for matching the speech data input through the voice input module against the entries stored in the database; and a voice output module for outputting speech; the data stored in the devices' databases being logically correlated so that voice interaction among the two or more devices can be realized; the method comprising: a) a person first speaks an instruction;
b) when each of the two or more devices hears the instruction, it recognizes the instruction through the recognition module of its speech recognition system and, through that module, finds in its database the data group of the scene corresponding to the instruction;
c) after the relevant devices have found the data group of the corresponding scene, a first device outputs speech according to the instruction through its voice output module;
characterized in that: d) after the first of the scene-related devices has spoken, the other devices receive the speech through their speech recognition systems and match it against the data stored in their databases; according to the matching result, a second scene-related device outputs, through its voice output module, speech that matches the speech produced by the first device;
the above steps being repeated until a complete scene dialogue is finished.
6. The method of claim 5, characterized in that the data stored in the database are divided into several groups according to the scenes in which they are used, each scene corresponding to one data group, and each data group having a head node containing the scene information of that group; wherein at least one data group in the database of each device is logically correlated with at least one data group in the database of another device.
7. The method of claim 6, characterized in that step c) further comprises: c1) after the user speaks the instruction, the speech recognition system in each device matches the instruction, through its speech recognition and control module, against the scene information in the head node of each data group and thereby finds the corresponding data group; c2) the first device outputs, through the voice output module of its speech recognition system, speech correlated with the user's instruction;
and step d) further comprises: d1) after the first device speaks, the other devices load the first device's speech, through the voice input modules of their speech recognition systems, into the speech recognition and control modules, and match it against the data in the data group of the corresponding scene;
d2) when a second device finds speech data matching the speech produced by the first device, it outputs the corresponding speech through its voice output module.
CN2009100510319A 2009-05-12 2009-05-12 A group of voice interaction devices and method of voice interaction with human Expired - Fee Related CN101551998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100510319A CN101551998B (en) 2009-05-12 2009-05-12 A group of voice interaction devices and method of voice interaction with human

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100510319A CN101551998B (en) 2009-05-12 2009-05-12 A group of voice interaction devices and method of voice interaction with human

Publications (2)

Publication Number Publication Date
CN101551998A CN101551998A (en) 2009-10-07
CN101551998B true CN101551998B (en) 2011-07-27

Family

ID=41156202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100510319A Expired - Fee Related CN101551998B (en) 2009-05-12 2009-05-12 A group of voice interaction devices and method of voice interaction with human

Country Status (1)

Country Link
CN (1) CN101551998B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103135751A (en) * 2011-11-30 2013-06-05 北京德信互动网络技术有限公司 Intelligent electronic device and voice control method based on voice control
CN102723080B (en) * 2012-06-25 2014-06-11 惠州市德赛西威汽车电子有限公司 Voice recognition test system and voice recognition test method
CN102855873A (en) * 2012-08-03 2013-01-02 海信集团有限公司 Electronic equipment and method used for controlling same
CN103632664B (en) * 2012-08-20 2017-07-25 联想(北京)有限公司 The method and electronic equipment of a kind of speech recognition
KR102155482B1 (en) * 2013-10-15 2020-09-14 삼성전자 주식회사 Display apparatus and control method thereof
CN107644641B (en) * 2017-07-28 2021-04-13 深圳前海微众银行股份有限公司 Dialog scene recognition method, terminal and computer-readable storage medium
CN110021299B (en) * 2018-01-08 2021-07-20 佛山市顺德区美的电热电器制造有限公司 Voice interaction method, device, system and storage medium
CN108648749B (en) * 2018-05-08 2020-08-18 上海嘉奥信息科技发展有限公司 Medical voice recognition construction method and system based on voice control system and VR
WO2020181407A1 (en) * 2019-03-08 2020-09-17 发条橘子云端行销股份有限公司 Voice recognition control method and device
CN110086945B (en) * 2019-04-24 2021-07-20 北京百度网讯科技有限公司 Communication method, server, intelligent device, server, and storage medium
CN113494798B (en) * 2020-04-02 2023-11-03 青岛海尔电冰箱有限公司 Refrigerator, method of controlling sound transmission unit, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1591569A * 2003-07-03 2005-03-09 索尼株式会社 Speech communication system and method, and robot apparatus
CN1734445A (en) * 2004-07-26 2006-02-15 索尼株式会社 Method, apparatus, and program for dialogue, and storage medium including a program stored therein
CN101017428A (en) * 2006-12-22 2007-08-15 广东电子工业研究院有限公司 Embedded voice interaction device and interaction method thereof
CN101075435A (en) * 2007-04-19 2007-11-21 深圳先进技术研究院 Intelligent chatting system and its realizing method

Also Published As

Publication number Publication date
CN101551998A (en) 2009-10-07

Similar Documents

Publication Publication Date Title
CN101551998B (en) A group of voice interaction devices and method of voice interaction with human
JP6783339B2 (en) Methods and devices for processing audio
CN107423364B (en) Method, device and storage medium for answering operation broadcasting based on artificial intelligence
CN102543071B (en) Voice recognition system and method used for mobile equipment
US8909525B2 (en) Interactive voice recognition electronic device and method
CN107018228B (en) Voice control system, voice processing method and terminal equipment
CN102842306A (en) Voice control method and device as well as voice response method and device
CN102237087B (en) Voice control method and voice control device
CN103078995A (en) Customizable individualized response method and system used in mobile terminal
CN104123938A (en) Voice control system, electronic device and voice control method
CN104766608A (en) Voice control method and voice control device
CN113436609B (en) Voice conversion model, training method thereof, voice conversion method and system
CN108806688A (en) Sound control method, smart television, system and the storage medium of smart television
CN103514882A (en) Voice identification method and system
CN101354886A (en) Apparatus for recognizing speech
CN203386472U (en) Character voice changer
WO2016027909A1 (en) Data structure, interactive voice response device, and electronic device
CN201532764U (en) Vehicle-mounted sound-control wireless broadband network audio player
CN105427856B (en) Appointment data processing method and system for intelligent robot
CN105227765A (en) Interactive approach in communication process and system
CN105727572B (en) A kind of self-learning method and self study device based on speech recognition of toy
CN1416560A Method for voice-controlled initiation of actions by means of a limited circle of users, whereby said actions can be carried out in the appliance
CN201075286Y (en) Apparatus for speech voice identification
CN114596840A (en) Speech recognition method, device, equipment and computer readable storage medium
CN106339454A (en) Inquiry-command conversion method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110727

Termination date: 20140512