CN103187057A - System and method for cartoon voice control - Google Patents
- Publication number
- CN103187057A CN2011104499960A CN201110449996A
- Authority
- CN
- China
- Prior art keywords
- cartoon
- voice
- command
- execution command
- receives
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a system for cartoon voice control. The system comprises: a voice receiving unit for receiving a voice command; a voice analysis unit for analyzing the voice command received by the voice receiving unit; and an execution unit for performing a corresponding operation on a cartoon according to the analysis result of the voice analysis unit. Accordingly, the invention further provides a method for cartoon voice control. With this technical scheme, a cartoon can be controlled directly by voice: when a user wants to operate on the cartoon, no manual operation is needed, and the operation can be achieved by issuing a specific voice instruction, which makes reading more convenient for the user and improves the user experience.
Description
Technical field
The present invention relates to the field of voice control technology, and in particular to a cartoon voice control system and method.
Background technology
With the maturation of the mobile Internet, electronic reading has become increasingly popular, and cartoons are part of this trend. Electronic cartoons are more and more favored by people because of advantages such as their abundant resources and strong timeliness. However, in related schemes, operations on a cartoon, such as page turning and zooming, can only be performed through a mouse, touch screen, keyboard, or the like; that is, the cartoon can only be controlled manually. When a user wishes to do other things with both hands while reading a cartoon, or when space constraints make manual operation inconvenient, such schemes cannot satisfy the user's needs well and are very inconvenient.
Therefore, a new cartoon voice control technique is needed, with which a cartoon can be controlled directly by voice: when the user wants to operate on the cartoon, no manual operation is needed, and it suffices to issue a specific voice instruction, which makes reading more convenient for the user and improves the user experience.
Summary of the invention
The technical problem to be solved by the present invention is to provide a new cartoon voice control technique, with which a cartoon can be controlled directly by voice: when the user wants to operate on the cartoon, no manual operation is needed, and it suffices to issue a specific voice instruction, which makes reading more convenient for the user and improves the user experience.
In view of this, the invention provides a cartoon voice control system, comprising: a voice receiving unit, which receives a voice command; a voice analysis unit, which analyzes the voice command received by the voice receiving unit; and an execution unit, which performs a corresponding operation on a cartoon according to the analysis result of the voice analysis unit. In this technical scheme, voice control of a cartoon is realized: when the user wants to operate on the cartoon (for example, to turn a page), no manual operation is needed, and it suffices to issue a specific voice instruction. In this way, the user can do other things with both hands while reading the cartoon, and is no longer prevented from reading by space constraints that make manual operation impossible, which makes reading more convenient and improves the user experience.
In the above technical scheme, preferably, the voice analysis unit specifically comprises: a semantic parsing subunit, which performs semantic parsing on the voice command and identifies the execution command corresponding to the voice command; and the execution unit is configured to perform the corresponding operation according to the execution command. In this technical scheme, when reading a cartoon on a terminal, the user can issue a voice control command such as "next page"; the terminal then analyzes the received speech data, for example by parsing the speech data containing "next page" and understanding that its real meaning is a wish to turn to the next cartoon page, thereby intelligently determining the user's voice control command.
In the above technical scheme, preferably, the voice analysis unit further comprises: a communication subunit, which sends the voice command to a server, has the server perform semantic parsing on the voice command and identify the execution command corresponding to the voice command, and receives the returned execution command; and the execution unit is configured to perform the corresponding operation according to the execution command. In this technical scheme, the user's speech data is parsed by communicating with the server, which recognizes the corresponding meaning; since the server clearly has stronger parsing capability than the terminal, the user's needs can be satisfied better.
In the above technical scheme, preferably, the system further comprises: a setting unit, which associates speech data with execution commands according to a received setting command; a storage unit, which stores the setting results of the setting unit; and a lookup unit, which searches the storage unit for speech data matching the voice command received by the voice analysis unit and, if such speech data exists, obtains the corresponding execution command; the execution unit is configured to perform the corresponding operation according to the execution command. In this technical scheme, the user can assign operation functions to specific utterances according to his or her own preferences and operate with these self-defined functions, which is convenient and personalized, improves the user experience, and also reduces the performance requirements on the terminal or the server.
In the above technical scheme, preferably, the system further comprises: an editing unit, which edits the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command. In this technical scheme, when the user wishes to add a new voice setting, or to modify or delete an existing one, the settings can be edited according to the user's own preferences, which is convenient and personalized and improves the user experience.
The present invention also provides a cartoon voice control method, comprising: step 202, receiving a voice command; and step 204, analyzing the received voice command and performing a corresponding operation on a cartoon. In this technical scheme, voice control of a cartoon is realized: when the user wants to operate on the cartoon (for example, to turn a page), no manual operation is needed, and it suffices to issue a specific voice instruction. In this way, the user can do other things with both hands while reading the cartoon, and is no longer prevented from reading by space constraints, which makes reading more convenient and improves the user experience.
In the above technical scheme, preferably, in step 204, the process of analyzing the received voice command specifically comprises: performing semantic parsing on the voice command, identifying the execution command corresponding to the voice command, and executing the execution command. In this technical scheme, when reading a cartoon on a terminal, the user can issue a voice control command such as "next page"; the terminal then analyzes the received speech data, for example by parsing the speech data containing "next page" and understanding that its real meaning is a wish to turn to the next cartoon page, thereby intelligently determining the user's voice control command.
In the above technical scheme, preferably, in step 204, the process of analyzing the received voice command further comprises: sending the voice command to a server, having the server perform semantic parsing on the voice command, identify the execution command corresponding to the voice command, and return the execution command; and executing the execution command. In this technical scheme, the user's speech data is parsed by communicating with the server, which recognizes the corresponding meaning; since the server clearly has stronger parsing capability than the terminal, the user's needs can be satisfied better.
In the above technical scheme, preferably, before step 202, the method further comprises: associating speech data with execution commands according to a received setting command, and storing the associations accordingly; and in step 204, the process of analyzing the received voice command specifically comprises: searching for speech data matching the voice command and, if such speech data exists, obtaining the corresponding execution command and executing it. In this technical scheme, the user can assign operation functions to specific utterances according to his or her own preferences and operate with these self-defined functions, which is convenient and personalized, improves the user experience, and also reduces the performance requirements on the terminal or the server.
In the above technical scheme, preferably, the method further comprises: editing the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command. In this technical scheme, when the user wishes to add a new voice setting, or to modify or delete an existing one, the settings can be edited according to the user's own preferences, which is convenient and personalized and improves the user experience.
Through the above technical scheme, a cartoon can be controlled directly by voice: when the user wants to operate on the cartoon, no manual operation is needed, and it suffices to issue a specific voice instruction, which makes reading more convenient for the user and improves the user experience.
Description of drawings
Fig. 1 is a block diagram of a cartoon voice control system according to an embodiment of the present invention;
Fig. 2 is a flowchart of a cartoon voice control method according to an embodiment of the present invention;
Fig. 3 is a flowchart of voice control of a cartoon according to an embodiment of the present invention;
Fig. 4 is a flowchart of voice control of a cartoon according to an embodiment of the present invention;
Fig. 5 is a flowchart of voice control of a cartoon according to an embodiment of the present invention;
Fig. 6 is a flowchart of voice control of a cartoon according to an embodiment of the present invention.
Embodiment
In order to understand the above objects, features, and advantages of the present invention more clearly, the present invention is further described in detail below with reference to the drawings and specific embodiments.
Many details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention can also be implemented in ways other than those described here. Therefore, the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 is a block diagram of a cartoon voice control system according to an embodiment of the present invention.
The invention provides a cartoon voice control system 100, as shown in Fig. 1, comprising: a voice receiving unit 102, which receives a voice command; a voice analysis unit 104, which analyzes the voice command received by the voice receiving unit 102; and an execution unit 106, which performs a corresponding operation on a cartoon according to the analysis result of the voice analysis unit 104. In this technical scheme, voice control of a cartoon is realized: when the user wants to operate on the cartoon (for example, to turn a page), no manual operation is needed, and it suffices to issue a specific voice instruction. In this way, the user can do other things with both hands while reading the cartoon, and is no longer prevented from reading by space constraints that make manual operation impossible, which makes reading more convenient and improves the user experience.
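The division of labor among the three units can be sketched as a minimal pipeline. The class names, the command vocabulary, and the page model below are illustrative assumptions for this sketch, not details taken from the embodiment:

```python
# Minimal sketch of the three-unit cartoon voice control system 100.
# Class names, the command vocabulary, and the page model are
# illustrative assumptions, not taken from the patent.

class VoiceReceivingUnit:
    def receive(self, audio):
        # A real terminal would capture and transcribe microphone input;
        # here "audio" is assumed to be already transcribed text.
        return audio

class VoiceAnalysisUnit:
    COMMANDS = {"next page": "PAGE_FORWARD", "previous page": "PAGE_BACK"}

    def analyze(self, voice_command):
        # Map the recognized utterance to an execution command (or None).
        return self.COMMANDS.get(voice_command.strip().lower())

class ExecutionUnit:
    def __init__(self):
        self.page = 1

    def execute(self, execution_command):
        # Perform the corresponding operation on the cartoon.
        if execution_command == "PAGE_FORWARD":
            self.page += 1
        elif execution_command == "PAGE_BACK":
            self.page = max(1, self.page - 1)
        return self.page

receiver, analyzer, executor = VoiceReceivingUnit(), VoiceAnalysisUnit(), ExecutionUnit()
command = analyzer.analyze(receiver.receive("next page"))
executor.execute(command)  # page 1 -> 2
```

In a real terminal the receiving unit would wrap a speech recognizer; the point of the sketch is only the receive-analyze-execute chain.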
In the above technical scheme, the voice analysis unit 104 specifically comprises: a semantic parsing subunit 104A, which performs semantic parsing on the voice command and identifies the execution command corresponding to the voice command; and the execution unit 106 is configured to perform the corresponding operation according to the execution command. In this technical scheme, when reading a cartoon on a terminal, the user can issue a voice control command such as "next page"; the terminal then analyzes the received speech data, for example by parsing the speech data containing "next page" and understanding that its real meaning is a wish to turn to the next cartoon page, thereby intelligently determining the user's voice control command.
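One simple way to realize the semantic parsing subunit 104A is keyword matching over the recognized text. The keyword table below is an assumed example, not a vocabulary prescribed by the embodiment:

```python
# Keyword-based semantic parsing, a minimal stand-in for the semantic
# parsing subunit 104A.  The keyword table is an assumption.

def semantic_parse(utterance):
    """Return the execution command recognized in the utterance, or None."""
    keyword_map = {
        "next page": "PAGE_FORWARD",
        "previous page": "PAGE_BACK",
        "zoom in": "ZOOM_IN",
        "zoom out": "ZOOM_OUT",
    }
    text = utterance.strip().lower()
    for keyword, command in keyword_map.items():
        # A contained keyword reveals the user's real intent even when
        # the utterance carries extra words around it.
        if keyword in text:
            return command
    return None

semantic_parse("please turn to the next page")  # -> "PAGE_FORWARD"
```

A production system would use a proper speech recognizer and language model; keyword matching merely illustrates mapping an utterance to its intended operation.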
In the above technical scheme, the voice analysis unit 104 further comprises: a communication subunit 104B, which sends the voice command to a server, has the server perform semantic parsing on the voice command and identify the corresponding execution command, and receives the returned execution command; and the execution unit 106 is configured to perform the corresponding operation according to the execution command. In this technical scheme, the user's speech data is parsed by communicating with the server, which recognizes the corresponding meaning; since the server clearly has stronger parsing capability than the terminal, the user's needs can be satisfied better.
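The communication subunit 104B can be sketched as follows, with an in-process function standing in for the remote server; the transport and all names are assumptions (a real system would use a network protocol):

```python
# Sketch of the communication subunit 104B: the terminal forwards the
# voice command to a server, which performs semantic parsing and
# returns the execution command.  The in-process "server" function
# stands in for a real network service.

def server_parse(voice_command):
    # Runs on the server side: stronger parsing capability lives here.
    table = {"next page": "PAGE_FORWARD", "previous page": "PAGE_BACK"}
    return table.get(voice_command.strip().lower())

class CommunicationSubunit:
    # Runs on the terminal side.
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server

    def resolve(self, voice_command):
        # Send the voice command out and receive the execution command back.
        return self.send_to_server(voice_command)

comm = CommunicationSubunit(server_parse)
comm.resolve("next page")  # -> "PAGE_FORWARD"
```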
In the above technical scheme, the system further comprises: a setting unit 108, which associates speech data with execution commands according to a received setting command; a storage unit 110, which stores the setting results of the setting unit 108; and a lookup unit 112, which searches the storage unit 110 for speech data matching the voice command received by the voice analysis unit 104 and, if such speech data exists, obtains the corresponding execution command; the execution unit 106 is configured to perform the corresponding operation according to the execution command. In this technical scheme, the user can assign operation functions to specific utterances according to his or her own preferences and operate with these self-defined functions, which is convenient and personalized, improves the user experience, and also reduces the performance requirements on the terminal or the server.
In the above technical scheme, the system further comprises: an editing unit 114, which edits the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command. In this technical scheme, when the user wishes to add a new voice setting, or to modify or delete an existing one, the settings can be edited according to the user's own preferences, which is convenient and personalized and improves the user experience.
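A minimal sketch of the editing unit 114, assuming the speech-data-to-execution-command correspondence is kept as a simple table (the class and method names are illustrative assumptions):

```python
# Sketch of the editing unit 114: add, modify, or delete entries in a
# speech-data -> execution-command table.  All names are assumptions.

class EditUnit:
    def __init__(self, mapping):
        self.mapping = mapping

    def add(self, speech, command):
        # Add a new voice setting.
        self.mapping[speech] = command

    def modify(self, speech, command):
        # Change the command an existing utterance maps to.
        if speech in self.mapping:
            self.mapping[speech] = command

    def delete(self, speech):
        # Remove a voice setting; no error if it is absent.
        self.mapping.pop(speech, None)

table = {"ha": "PAGE_BACK"}
editor = EditUnit(table)
editor.add("hu", "PAGE_FORWARD")
editor.delete("ha")
# table is now {"hu": "PAGE_FORWARD"}
```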
Fig. 2 is a flowchart of a cartoon voice control method according to an embodiment of the present invention.
As shown in Fig. 2, the cartoon voice control method according to an embodiment of the invention comprises: step 202, receiving a voice command; and step 204, analyzing the received voice command and performing a corresponding operation on a cartoon. In this technical scheme, voice control of a cartoon is realized: when the user wants to operate on the cartoon (for example, to turn a page), no manual operation is needed, and it suffices to issue a specific voice instruction. In this way, the user can do other things with both hands while reading the cartoon, and is no longer prevented from reading by space constraints, which makes reading more convenient and improves the user experience.
In the above technical scheme, in step 204, the process of analyzing the received voice command specifically comprises: performing semantic parsing on the voice command, identifying the execution command corresponding to the voice command, and executing the execution command. In this technical scheme, when reading a cartoon on a terminal, the user can issue a voice control command such as "next page"; the terminal then analyzes the received speech data, for example by parsing the speech data containing "next page" and understanding that its real meaning is a wish to turn to the next cartoon page, thereby intelligently determining the user's voice control command.
In the above technical scheme, in step 204, the process of analyzing the received voice command further comprises: sending the voice command to a server, having the server perform semantic parsing on the voice command, identify the corresponding execution command, and return the execution command; and executing the execution command. In this technical scheme, the user's speech data is parsed by communicating with the server, which recognizes the corresponding meaning; since the server clearly has stronger parsing capability than the terminal, the user's needs can be satisfied better.
In the above technical scheme, before step 202, the method further comprises: associating speech data with execution commands according to a received setting command, and storing the associations accordingly; and in step 204, the process of analyzing the received voice command specifically comprises: searching for speech data matching the voice command and, if such speech data exists, obtaining the corresponding execution command and executing it. In this technical scheme, the user can assign operation functions to specific utterances according to his or her own preferences and operate with these self-defined functions, which is convenient and personalized, improves the user experience, and also reduces the performance requirements on the terminal or the server.
In the above technical scheme, the method further comprises: editing the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command. In this technical scheme, when the user wishes to add a new voice setting, or to modify or delete an existing one, the settings can be edited according to the user's own preferences, which is convenient and personalized and improves the user experience.
The technical scheme of the present invention is described in detail below in connection with practical operation.
Through the cartoon voice control system in the embodiments of the present invention, various operations on a cartoon can be controlled by voice. The specific implementation process is as follows:
Fig. 3 is a flowchart of voice control of a cartoon according to an embodiment of the present invention.
As shown in Fig. 3, the flow of voice control of a cartoon according to an embodiment of the present invention is as follows:
Fig. 4 is a flowchart of voice control of a cartoon according to an embodiment of the present invention.
As shown in Fig. 4, the flow of voice control of a cartoon according to an embodiment of the present invention is as follows:
In the terminal 402:
In the server 404:
In the terminal 402:
Fig. 5 is a flowchart of voice control of a cartoon according to an embodiment of the present invention.
As shown in Fig. 5, the flow of voice control of a cartoon according to an embodiment of the present invention is as follows:
Step 502: associate speech data with operation commands and store them accordingly. Here, the user can customize the desired operation modes according to his or her own preferences or usage habits, for example defining "Ha" as "previous page" and "Hu" as "next page", thereby making cartoon reading more interesting.
Step 504: receive the user's voice command.
Step 506: judge whether corresponding speech data exists. Using the received voice command as a key, search the data stored in step 502 and judge whether speech data corresponding to this voice command exists. If it exists, proceed to step 508; otherwise, end.
Step 508: obtain the corresponding command. If speech data corresponding to the voice command obtained in real time exists, further obtain the operation command corresponding to this speech data.
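Steps 502 through 508 can be sketched as one small program, using the "Ha"/"Hu" customizations from the description; the function names and the data layout are assumptions for illustration:

```python
# The four steps of Fig. 5: store the user-defined mapping (502),
# receive a voice command (504), check whether matching speech data
# exists (506), and fetch the operation command (508).  The sounds
# "ha"/"hu" follow the description; everything else is an assumption.

stored = {}

def step_502_store(speech, operation):
    # 502: associate speech data with an operation command and store it.
    stored[speech] = operation

def voice_control(voice_command):
    # 504: the voice command has been received and is passed in here.
    # 506: use it as a key to check whether matching speech data exists.
    if voice_command in stored:
        # 508: obtain the operation command corresponding to the match.
        return stored[voice_command]
    return None  # no match: the flow ends

step_502_store("ha", "previous page")
step_502_store("hu", "next page")
voice_control("hu")  # -> "next page"
```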
Fig. 6 is a flowchart of voice control of a cartoon according to an embodiment of the present invention.
As shown in Fig. 6, the flow of voice control of a cartoon according to an embodiment of the present invention is as follows:
In the terminal 600:
Step 602: associate speech data with operation commands and generate an association table. The association table records the correspondence between speech data and operation commands, for example "Ha" corresponding to "previous page" and "Hu" corresponding to "next page", thereby making cartoon reading more interesting.
Step 604: send the association table to the server 601.
In the server 601:
Step 606: receive and store the association table.
In the terminal 600:
Step 608: receive the user's voice command.
Step 610: send the voice command to the server.
In the server 601:
Step 612: according to the received voice command, judge whether speech data corresponding to this voice command exists. If it exists, proceed to step 614.
Step 614: obtain the corresponding command. If speech data corresponding to the voice command obtained in real time exists, further obtain the operation command corresponding to this speech data.
In the terminal 600:
Step 618: receive the operation command.
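The terminal/server division of Fig. 6 can be sketched as two cooperating roles. The class and method names are assumptions, and an in-process call stands in for the network transport of steps 604, 610, and 618:

```python
# Fig. 6 sketched as terminal/server roles: the terminal builds an
# association table (602) and uploads it (604); the server stores it
# (606); later voice commands are sent up (608-610), matched on the
# server (612-614), and the operation command comes back (618).

class Server:
    def __init__(self):
        self.table = {}

    def store_table(self, table):
        # Step 606: receive and store the association table.
        self.table = dict(table)

    def match(self, voice_command):
        # Steps 612-614: look for matching speech data and, if it
        # exists, obtain the corresponding operation command.
        return self.table.get(voice_command)

class Terminal:
    def __init__(self, server):
        self.server = server

    def upload_associations(self, table):
        # Steps 602-604: generate the association table and send it up.
        self.server.store_table(table)

    def on_voice_command(self, cmd):
        # Steps 608-610 and 618: forward the voice command and receive
        # the operation command in return.
        return self.server.match(cmd)

server = Server()
terminal = Terminal(server)
terminal.upload_associations({"ha": "previous page", "hu": "next page"})
terminal.on_voice_command("ha")  # -> "previous page"
```

Keeping the table on the server, as this flow does, lets a weak terminal delegate both storage and matching.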
The technical scheme of the present invention has been described above with reference to the drawings. Considering that in the related art the modes of operating a cartoon are too limited to be applicable in a variety of usage environments, the present invention provides a cartoon voice control system and a cartoon voice control method, with which a cartoon can be controlled directly by voice: when the user wants to operate on the cartoon, no manual operation is needed, and it suffices to issue a specific voice instruction, which makes reading more convenient for the user and improves the user experience.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various changes and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A cartoon voice control system, characterized by comprising:
a voice receiving unit, which receives a voice command;
a voice analysis unit, which analyzes the voice command received by the voice receiving unit; and
an execution unit, which performs a corresponding operation on a cartoon according to the analysis result of the voice analysis unit.
2. The cartoon voice control system according to claim 1, characterized in that the voice analysis unit specifically comprises:
a semantic parsing subunit, which performs semantic parsing on the voice command and identifies the execution command corresponding to the voice command; and
the execution unit is configured to perform the corresponding operation according to the execution command.
3. The cartoon voice control system according to claim 1 or 2, characterized in that the voice analysis unit further comprises:
a communication subunit, which sends the voice command to a server, has the server perform semantic parsing on the voice command and identify the execution command corresponding to the voice command, and receives the returned execution command; and
the execution unit is configured to perform the corresponding operation according to the execution command.
4. The cartoon voice control system according to claim 1, characterized by further comprising:
a setting unit, which associates speech data with execution commands according to a received setting command;
a storage unit, which stores the setting results of the setting unit;
a lookup unit, which searches the storage unit for speech data matching the voice command received by the voice analysis unit and, if such speech data exists, obtains the corresponding execution command; and
the execution unit is configured to perform the corresponding operation according to the execution command.
5. The cartoon voice control system according to claim 4, characterized by further comprising:
an editing unit, which edits the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command.
6. A cartoon voice control method, characterized by comprising:
step 202: receiving a voice command; and
step 204: analyzing the received voice command and performing a corresponding operation on a cartoon.
7. The cartoon voice control method according to claim 6, characterized in that in step 204 the process of analyzing the received voice command specifically comprises:
performing semantic parsing on the voice command, identifying the execution command corresponding to the voice command, and executing the execution command.
8. The cartoon voice control method according to claim 6 or 7, characterized in that in step 204 the process of analyzing the received voice command further comprises:
sending the voice command to a server, having the server perform semantic parsing on the voice command, identify the execution command corresponding to the voice command, and return the execution command; and
executing the execution command.
9. The cartoon voice control method according to claim 6, characterized in that before step 202 the method further comprises:
associating speech data with execution commands according to a received setting command, and storing the associations accordingly; and
in step 204, the process of analyzing the received voice command specifically comprises:
searching for speech data matching the voice command; if such speech data exists, obtaining the corresponding execution command and executing the execution command.
10. The cartoon voice control method according to claim 9, characterized by further comprising:
editing the speech data, the execution commands, and/or the correspondence between the speech data and the execution commands according to a received editing command.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011104499960A CN103187057A (en) | 2011-12-29 | 2011-12-29 | System and method for cartoon voice control |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103187057A true CN103187057A (en) | 2013-07-03 |
Family
ID=48678191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011104499960A Pending CN103187057A (en) | 2011-12-29 | 2011-12-29 | System and method for cartoon voice control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103187057A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123941A (en) * | 2014-07-17 | 2014-10-29 | 常州蓝城信息科技有限公司 | Method for using voice recognition to control device |
CN106104528A (en) * | 2014-03-03 | 2016-11-09 | 微软技术许可有限责任公司 | Begin a project for screen and select and the method based on model of disambiguation |
CN106205614A (en) * | 2016-07-27 | 2016-12-07 | 太仓世源金属制品有限公司 | The remote control unit of a kind of speech-controlled electric bed and remote control system |
CN108648749A (en) * | 2018-05-08 | 2018-10-12 | 上海嘉奥信息科技发展有限公司 | Medical speech recognition construction method and system based on voice activated control and VR |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101916266A (en) * | 2010-07-30 | 2010-12-15 | 优视科技有限公司 | Voice control web page browsing method and device based on mobile terminal |
US20110050592A1 (en) * | 2009-09-02 | 2011-03-03 | Kim John T | Touch-Screen User Interface |
CN102152312A (en) * | 2010-11-16 | 2011-08-17 | 深圳中科智酷机器人科技有限公司 | Robot system and task execution method of robot system |
CN102253710A (en) * | 2010-05-21 | 2011-11-23 | 台达电子工业股份有限公司 | Multi-mode interactively operated electronic device and multi-mode interactively operated method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130703 |