CN105529025A - Voice operation input method and electronic device - Google Patents


Info

Publication number
CN105529025A
Authority
CN
China
Prior art keywords
order
processing unit
syllable
information
template
Prior art date
Legal status
Granted
Application number
CN201410509616.1A
Other languages
Chinese (zh)
Other versions
CN105529025B (en)
Inventor
章丹峰
靳玉茹
钟荣标
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date 2014-09-28
Filing date 2014-09-28
Publication date 2016-04-27
Application filed by Lenovo Beijing Ltd
Priority to CN201410509616.1A
Publication of CN105529025A
Application granted
Publication of CN105529025B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a voice operation input method and an electronic device. The method is applied to an electronic device provided with a sound collection unit, a first processing unit and a second processing unit, and comprises the steps of: obtaining sound information through the sound collection unit; recognizing the sound information by the first processing unit; extracting features of the sound information when the sound information meets a predetermined condition; generating an information set according to the features of the sound information; sending the information set to the second processing unit; and executing, by the second processing unit, a command corresponding to the information set. With the voice operation input method and electronic device of the invention, a voice operation can be input directly, without triggering an application program that supports voice operation, even when the electronic device is in a standby state.

Description

Voice operation input method and electronic device
Technical field
The present invention relates to the field of control, and in particular to a voice operation input method and an electronic device.
Background art
At present, electronic devices offer more and more functions, and an increasing number of electronic devices support voice operation.
For example, application programs such as the voice assistant on a smartphone allow a user to operate the smartphone by voice.
In the prior art, a voice operation is input mainly as follows: the application program supporting voice operation is first triggered so that it enters an activated state, the voice input by the user is then received and recognized, and the voice is finally converted into an operation on the electronic device.
As can be seen from the above, the voice operation input method in the prior art requires the application program supporting voice operation to be triggered and brought into the activated state before the user's voice can be received. This makes the operation process cumbersome, and no voice operation can be input when the electronic device is in a standby state.
Summary of the invention
The object of the present invention is to provide a voice operation input method and an electronic device with which a voice operation can be input directly, without triggering an application program supporting voice operation, even when the electronic device is in a standby state.
To achieve the above object, the present invention provides the following solutions:
A voice operation input method, applied to an electronic device having a sound collection unit, a first processing unit and a second processing unit, the method comprising:
obtaining sound information through the sound collection unit;
recognizing, by the first processing unit, the sound information;
extracting a feature of the sound information when the sound information meets a predetermined condition;
generating an information set according to the feature of the sound information;
sending the information set to the second processing unit; and
executing, by the second processing unit, a command corresponding to the information set.
Optionally, the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
Optionally, the executing, by the second processing unit, of the command corresponding to the information set specifically comprises:
switching the second processing unit from a first state to a second state, wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
searching, by the second processing unit in the second state, for the command corresponding to the information set; and
executing the command.
Optionally, before the sound information is obtained through the sound collection unit, the method further comprises:
detecting a syllable template recording operation of a user;
obtaining syllable template information input by the user after the syllable template recording operation; and
saving a syllable template represented by the syllable template information.
Optionally, after the syllable template represented by the syllable template information is saved, the method further comprises:
assigning a corresponding identifier to the saved syllable template;
displaying the correspondence between the syllable template and the identifier;
obtaining an arrangement order of identifiers input by the user;
obtaining an operation command option selected by the user, the operation command option representing a command to be executed; and
establishing a correspondence between the arrangement order and the operation command option.
An electronic device, comprising:
a sound collection unit configured to obtain sound information;
a first processing unit configured to recognize the sound information,
extract a feature of the sound information when the sound information meets a predetermined condition, generate an information set according to the feature of the sound information, and send the information set to a second processing unit; and
the second processing unit, configured to execute a command corresponding to the information set after receiving the information set.
Optionally, the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
Optionally, the second processing unit specifically comprises:
a state switching subunit configured to switch the second processing unit from a first state to a second state, wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
a command search subunit configured to control the second processing unit in the second state to search for the command corresponding to the information set; and
a command execution subunit configured to execute the command.
Optionally, the first processing unit specifically comprises:
a syllable recognition subunit configured to recognize a plurality of syllables contained in the sound information;
a matching subunit configured to match the syllables respectively with a plurality of preset syllable templates;
an arrangement order determining subunit configured to determine, when each of the plurality of syllables is successfully matched with one of the preset syllable templates, an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables;
an information set generating subunit configured to generate an information set containing the arrangement order; and
an information sending subunit configured to send the information set containing the arrangement order to the second processing unit;
and the second processing unit specifically comprises:
a command determining subunit configured to determine the command corresponding to the arrangement order according to a set mapping relationship between arrangement orders and commands; and
a command execution subunit configured to execute the command.
Optionally, the second processing unit further comprises:
a recording operation acquiring unit configured to detect a syllable template recording operation of the user before the sound information is obtained;
a syllable template information acquiring unit configured to obtain syllable template information input by the user after the syllable template recording operation; and
a syllable template storage unit configured to save a syllable template represented by the syllable template information.
Optionally, the second processing unit further comprises:
an identifier assigning unit configured to assign, after the syllable template represented by the syllable template information is saved, a corresponding identifier to the saved syllable template;
a correspondence display unit configured to display the correspondence between the syllable template and the identifier;
an arrangement order acquiring unit configured to obtain an arrangement order of identifiers input by the user;
an operation command option acquiring unit configured to obtain an operation command option selected by the user, the operation command option representing a command to be executed; and
a correspondence establishing unit configured to establish a correspondence between the arrangement order and the operation command option.
According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects:
In the voice operation input method and electronic device of the present invention, the first processing unit recognizes the sound information, extracts a feature of the sound information when the sound information meets a predetermined condition, generates an information set according to the feature of the sound information, and sends the information set to the second processing unit, and the second processing unit then executes the command corresponding to the information set. A voice operation can therefore be input directly, without triggering an application program supporting voice operation, even when the electronic device is in a standby state.
Brief description of the drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of embodiment 1 of the voice operation input method of the present invention;
Fig. 2 is a flowchart of embodiment 2 of the voice operation input method of the present invention;
Fig. 3 is a flowchart of embodiment 3 of the voice operation input method of the present invention;
Fig. 4 is a flowchart of setting syllable templates in an embodiment of the voice operation input method of the present invention;
Fig. 5 is a structural diagram of an embodiment of the electronic device of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To make the above object, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
The voice operation input method of the embodiments of the present invention is applied to an electronic device having a sound collection unit, a first processing unit and a second processing unit.
The electronic device may be a device such as a mobile phone or a tablet computer. The sound collection unit may be a microphone. The first processing unit may be an application-specific integrated circuit (ASIC), and the second processing unit may be an application processor (AP).
Fig. 1 is a flowchart of embodiment 1 of the voice operation input method of the present invention. As shown in Fig. 1, the method may comprise:
Step 101: obtaining sound information through the sound collection unit.
The sound collection unit may obtain sound information from the environment in real time. The first processing unit may be set to respond only to certain specific sounds made by the user.
Step 102: recognizing, by the first processing unit, the sound information.
The first processing unit may store some voice information in advance as syllable templates. After the sound collection unit obtains sound information, the sound information can be matched against the syllable templates. The sound information may contain a plurality of sounds that match the syllable templates. For each syllable unit in the sound information, the first processing unit may match the syllable unit against each syllable template; if the syllable unit is successfully matched with any one of the syllable templates, it can be determined that the syllable unit matches a syllable template.
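Purely as an illustration of this matching step, and not as part of the disclosure, the per-syllable comparison could be sketched as follows; the feature vectors, the cosine similarity measure and the 0.8 threshold are assumptions chosen only for the example.

```python
# Illustrative sketch only: matching each syllable unit of the captured sound
# against the preset syllable templates. The feature representation, similarity
# measure and threshold are assumptions, not details given in the patent.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_template(unit_features, templates, threshold=0.8):
    """Return the name of the best-matching syllable template, or None."""
    best_name, best_score = None, threshold
    for name, template_features in templates.items():
        score = cosine(unit_features, template_features)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

def meets_predetermined_condition(syllable_units, templates):
    """True only if every syllable unit matches some preset syllable template."""
    return all(match_template(u, templates) is not None for u in syllable_units)

# Tiny usage example with made-up two-dimensional "features":
templates = {"A": [1.0, 0.0], "B": [0.0, 1.0]}
captured = [[0.9, 0.1], [0.1, 0.9]]          # resembles "A" then "B"
print(meets_predetermined_condition(captured, templates))   # True
```

In this sketch the predetermined condition of step 103 is simply that every syllable unit finds some matching template, which is how the condition is described below.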
Step 103: extracting a feature of the sound information when the sound information meets the predetermined condition.
If a matching syllable template can be identified for every syllable unit in the sound information, it can be judged that the sound information meets the predetermined condition.
The feature of the sound information may refer to the arrangement order of the syllable templates corresponding to the syllable units contained in the sound information.
Step 104: generating an information set according to the feature of the sound information.
Different information sets can be generated according to the features of different sound information, and the information set contains the feature of the sound information.
Step 105: sending the information set to the second processing unit.
The second processing unit may have different working states; for example, it may have a dormant state and a wake-up state. After receiving the information set, the second processing unit may change state, for example switching from the dormant state to the wake-up state.
Step 106: executing, by the second processing unit, the command corresponding to the information set.
After receiving the information set, the second processing unit can analyze the information contained in the information set, find the corresponding command according to the information set, and execute the command.
In summary, in this embodiment, the first processing unit recognizes the sound information, extracts a feature of the sound information when the sound information meets the predetermined condition, generates an information set according to the feature, and sends the information set to the second processing unit, which then executes the command corresponding to the information set. A voice operation can therefore be input directly, without triggering an application program supporting voice operation, even when the electronic device is in a standby state.
It should be noted that, in the embodiments of the present invention, the power consumption of the first processing unit may be lower than that of the second processing unit. The first processing unit may be an ASIC chip and may remain in the working state at all times after the electronic device is powered on. The second processing unit may be in the dormant state when it does not need to work in the wake-up state. Because the power consumption of the first processing unit is lower than that of the second processing unit, even if the first processing unit is always in the working state, its power consumption is still lower than the power consumption of the second processing unit remaining continuously in the working state.
Fig. 2 is a flowchart of embodiment 2 of the voice operation input method of the present invention. As shown in Fig. 2, the method may comprise:
Step 201: obtaining sound information through a microphone.
Step 202: recognizing the sound information with an ASIC chip.
Step 203: extracting a feature of the sound information when the sound information meets the predetermined condition.
Step 204: generating an information set according to the feature of the sound information.
Step 205: sending the information set to an application processor.
Step 206: switching the application processor from a first state to a second state, wherein the power consumption of the application processor in the first state is lower than the power consumption of the application processor in the second state.
For example, the first state may be a dormant state and the second state may be a wake-up state.
Step 207: searching, by the application processor in the second state, for the command corresponding to the information set.
Step 208: executing the command.
For example, when the operation corresponding to the command is calling user A, the electronic device automatically performs the operation of calling user A; when the operation corresponding to the command is opening a navigation application, the electronic device automatically performs the operation of opening the navigation application.
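As a rough, non-authoritative sketch of steps 206 to 208, the application processor's reaction to an incoming information set might look like the following; the state names, the command table and the two example actions are assumptions used only for illustration.

```python
# Illustrative sketch only: a possible reaction of the application processor
# (second processing unit) to an information set. State names, the command
# table and the example actions are assumptions, not details from the patent.
DORMANT, AWAKE = "dormant", "awake"

def call_user_a():
    print("calling user A")

def open_navigation():
    print("opening the navigation application")

class ApplicationProcessor:
    def __init__(self, command_table):
        self.state = DORMANT
        self.command_table = command_table            # arrangement order -> action

    def on_information_set(self, info_set):
        if self.state == DORMANT:                     # step 206: first state -> second state
            self.state = AWAKE
        command = self.command_table.get(info_set["arrangement"])  # step 207: look up
        if command is not None:
            command()                                  # step 208: execute

ap = ApplicationProcessor({"123": call_user_a, "321": open_navigation})
ap.on_information_set({"arrangement": "123"})          # prints "calling user A"
```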
In this embodiment, using an ASIC chip as the first processing unit reduces the power consumption of the first processing unit, which has to recognize sound information in real time; and switching the second processing unit from the first state to the second state when it receives the information set, the power consumption in the first state being lower than in the second state, further reduces the power consumption of the voice operation input method of the embodiment of the present invention.
Fig. 3 is a flowchart of embodiment 3 of the voice operation input method of the present invention. As shown in Fig. 3, the method may comprise:
Step 301: obtaining sound information through the sound collection unit.
Step 302: recognizing a plurality of syllables contained in the sound information.
A syllable may refer to a pronunciation unit into which the sound information can be divided. Assuming the sound information is "ABC", the syllables are "A", "B" and "C"; assuming the sound information is "123", the syllables are "1", "2" and "3"; assuming the sound information is the Chinese phrase "打电话" ("make a phone call"), the syllables are "打", "电" and "话".
Step 303: matching the syllables respectively with a plurality of preset syllable templates.
The syllable templates can be set in advance: the user can input his or her own voice into the electronic device, and the voice is saved in the electronic device as a syllable template.
For example, the user can utter the sound "A", input the sound "A" into the electronic device, and save it as a syllable template. On this basis, the user can also save the sounds "B" and "C" as syllable templates.
When the saved syllable templates are "A", "B" and "C", the sound information obtained by the sound collection unit can be matched with "A", "B" and "C" respectively, to determine whether the syllables contained in the sound information match the syllable templates.
Step 304: when each of the plurality of syllables is successfully matched with one of the preset syllable templates, determining the arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables.
As long as every syllable contained in the sound information can be successfully matched with a syllable template, it can be judged that the sound information meets the predetermined condition.
In the example above, when the saved syllable templates are "A", "B" and "C", then regardless of whether the sound information is "ABC", "BCA", "CBA", "CAB", "ACB" or "BAC", every syllable it contains can be successfully matched with a syllable template, so in every case the sound information is determined to meet the predetermined condition.
When each of the plurality of syllables is successfully matched with one of the preset syllable templates, the arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables can be determined.
When the syllable templates are set, a corresponding identifier can be set for each syllable template. For example, the identifier 1 can be set for the syllable template "A", the identifier 2 for the syllable template "B", and the identifier 3 for the syllable template "C". In practice, the specific identifiers used can be chosen according to actual requirements.
In the above example, assuming the sound information is "ABC", the arrangement order of the identifiers of the corresponding syllable templates is "123"; assuming the sound information is "CBA", the arrangement order of the identifiers of the corresponding syllable templates is "321".
Step 305: generating an information set containing the arrangement order.
The information set may contain the arrangement order and may also contain an interrupt instruction. The interrupt instruction may be used to switch the second processing unit from the dormant state to the wake-up state.
Step 306: sending the information set to the second processing unit.
After receiving the information set, the second processing unit can switch from the dormant state to the wake-up state.
Step 307: determining, by the second processing unit, the command corresponding to the arrangement order according to the set mapping relationship between arrangement orders and commands.
When the syllable templates are preset, the correspondence between arrangement orders of the identifiers of the syllable templates and commands can also be set. For example, the command corresponding to the arrangement order "123" can be set as the command controlling the electronic device to call user A, and the command corresponding to the arrangement order "321" can be set as the command controlling the electronic device to open the navigation application.
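As a small worked illustration of steps 304 and 307 (a sketch under the assumptions of the example above, not a definitive implementation), the identifier arrangement order and the command lookup can be expressed with two plain dictionaries:

```python
# Illustrative sketch only: deriving the identifier arrangement order (step 304)
# and looking up the corresponding command (step 307). The identifiers and the
# two example commands follow the text above; the data structures are assumed.
template_identifiers = {"A": "1", "B": "2", "C": "3"}

arrangement_to_command = {
    "123": "call user A",
    "321": "open the navigation application",
}

def arrangement_of(matched_syllables):
    """Matched syllables in spoken order -> identifier arrangement, e.g. C, B, A -> "321"."""
    return "".join(template_identifiers[s] for s in matched_syllables)

def command_for(matched_syllables):
    return arrangement_to_command.get(arrangement_of(matched_syllables))

print(command_for(["A", "B", "C"]))   # call user A
print(command_for(["C", "B", "A"]))   # open the navigation application
```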
Step 308: executing the command.
In summary, in this embodiment, a plurality of syllables contained in the sound information are recognized and matched respectively with a plurality of preset syllable templates; when each of the plurality of syllables is successfully matched with one of the preset syllable templates, the arrangement order of the identifiers of the corresponding syllable templates is determined, and the command corresponding to that arrangement order is determined according to the set mapping relationship between arrangement orders and commands. A finite number of syllable templates can thus correspond to different commands through different arrangement orders. Because the number of syllable templates used is small, the complexity of recognizing the sound information by the first processing unit can be reduced, so the first processing unit can be an ASIC chip with a relatively simple structure, lower cost and lower power consumption, thereby reducing the cost and power consumption of the voice operation input method of the embodiment of the present invention. In addition, because a small number of syllable templates can be arranged in many different orders, fewer syllable templates can correspond to more commands, which enriches the number of commands that the voice operation input method of the embodiment of the present invention can support.
Fig. 4 is a flowchart of setting syllable templates in an embodiment of the voice operation input method of the present invention. As shown in Fig. 4, the flow may comprise:
Step 401: detecting a syllable template recording operation of the user.
The syllable template recording operation indicates that the user is going to input a sound that is to be used as a syllable template.
The syllable template recording operation can be implemented in several ways. For example, a program for setting syllable templates can be opened, and the syllable template recording operation can be input by clicking the button corresponding to syllable template recording in the program interface.
Step 402: obtaining syllable template information input by the user after the syllable template recording operation.
After the syllable template recording operation is input, the sound collection unit of the electronic device can be in the working state and capture, in real time, the sounds made by the user. In this flow, the sounds input by the user after the syllable template recording operation are called syllable template information.
Step 403: saving the syllable template represented by the syllable template information.
Specifically, each time the user makes a sound, the electronic device can save that sound as a syllable template. In this way, the user can make a number of sounds in succession, and the electronic device saves them as syllable templates one by one.
Step 404: assigning a corresponding identifier to the saved syllable template.
After the syllable template is saved, a corresponding identifier can be assigned to it. The specific identifiers assigned are not limited here, as long as different syllable templates correspond to different identifiers. Assuming there are four syllable templates, they can be identified as A, B, C and D, or as 1, 2, 3 and 4, or of course by identifiers in other forms.
Step 405: displaying the correspondence between the syllable templates and the identifiers.
After the identifiers are assigned, the electronic device can also display, through a display unit, the correspondence between the syllable templates and the identifiers. For example, "1:A" can be displayed, indicating that the first syllable template is identified as A.
Step 406: obtaining an arrangement order of identifiers input by the user.
After learning the correspondence between the syllable templates and the identifiers, the user can refer to the corresponding syllable templates by their identifiers. For example, the user can input "ABC", indicating an arrangement order in which the first recorded syllable template comes first, the second recorded syllable template comes in the middle, and the third recorded syllable template comes last.
Step 407: obtaining an operation command option selected by the user, the operation command option representing a command to be executed.
The electronic device can display a plurality of operation command options through the display unit. For example, it can display the option of calling user A, the option of opening the navigation application, or other operation command options. The user can select one of the operation command options as the operation command option corresponding to the arrangement order of identifiers.
Step 408: establishing the correspondence between the arrangement order and the operation command option.
After the correspondence is established, the user can, when subsequently using the electronic device, trigger the electronic device to execute the corresponding command by uttering sounds that conform to the arrangement order.
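Reduced to plain data handling, the setting flow of Fig. 4 might be sketched as below; recording, display and user input are replaced by function parameters, and every name and value in the sketch is an assumption made only for illustration.

```python
# Illustrative sketch only: the setting flow of Fig. 4 (steps 401-408) reduced
# to plain data handling. Recording, display and user input are replaced by
# function parameters; all names and values are assumptions for illustration.
def enroll_syllable_templates(recorded_voices):
    """Steps 403-405: save each recorded voice as a syllable template and
    assign it an identifier; return the identifier -> template correspondence."""
    return {str(index): voice for index, voice in enumerate(recorded_voices, start=1)}

def bind_arrangement(arrangement_to_option, arrangement, command_option):
    """Steps 406-408: bind an identifier arrangement order to a command option."""
    arrangement_to_option[arrangement] = command_option
    return arrangement_to_option

correspondence = enroll_syllable_templates(["voice A", "voice B", "voice C"])
print(correspondence)                      # {'1': 'voice A', '2': 'voice B', '3': 'voice C'}

bindings = {}
bind_arrangement(bindings, "123", "call user A")
bind_arrangement(bindings, "321", "open the navigation application")
print(bindings["321"])                     # open the navigation application
```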
In summary, in this flow, syllable templates are saved, corresponding identifiers are assigned to the saved syllable templates, the arrangement order of identifiers input by the user and the operation command option selected by the user are obtained, and the correspondence between the arrangement order and the operation command option is established. The user can therefore record the syllable templates once and make different arrangement orders correspond to different operation command options, and then, in subsequent use, trigger the electronic device to execute the corresponding command simply by uttering syllables in the corresponding arrangement order, without having to input every arrangement of syllables into the electronic device as training speech. This improves the efficiency of setting up voices and their corresponding operation commands.
The present invention also discloses an electronic device. The electronic device has a sound collection unit, a first processing unit and a second processing unit, and may be a device such as a mobile phone or a tablet computer. The sound collection unit may be a microphone. The first processing unit may be an application-specific integrated circuit (ASIC), and the second processing unit may be an application processor (AP).
Fig. 5 is a structural diagram of an embodiment of the electronic device of the present invention. As shown in Fig. 5, the electronic device may comprise:
a sound collection unit 501 configured to obtain sound information;
The sound collection unit may obtain sound information from the environment in real time. The first processing unit may be set to respond only to certain specific sounds made by the user.
a first processing unit 502 configured to recognize the sound information, to extract a feature of the sound information when the sound information meets a predetermined condition, to generate an information set according to the feature of the sound information, and to send the information set to a second processing unit 503;
The first processing unit may store some voice information in advance as syllable templates. After the sound collection unit obtains sound information, the sound information can be matched against the syllable templates. The sound information may contain a plurality of sounds that match the syllable templates. For each syllable unit in the sound information, the first processing unit may match the syllable unit against each syllable template; if the syllable unit is successfully matched with any one of the syllable templates, it can be determined that the syllable unit matches a syllable template.
If a matching syllable template can be identified for every syllable unit in the sound information, it can be judged that the sound information meets the predetermined condition.
The feature of the sound information may refer to the arrangement order of the syllable templates corresponding to the syllable units contained in the sound information.
Different information sets can be generated according to the features of different sound information, and the information set contains the feature of the sound information.
The second processing unit 503 may have different working states; for example, it may have a dormant state and a wake-up state. After receiving the information set, the second processing unit may change state, for example switching from the dormant state to the wake-up state.
The second processing unit 503 is configured to execute the command corresponding to the information set after receiving the information set.
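To show how the units of Fig. 5 hand data to one another, here is a minimal end-to-end sketch; the stub classes stand in for the real hardware units and are assumptions made only for illustration.

```python
# Illustrative sketch only: how the units of Fig. 5 might pass data along.
# The stub classes stand in for the real hardware units and are assumptions.
class SoundCollectionUnit:                       # unit 501
    def capture(self):
        return ["A", "B", "C"]                   # pretend syllable units

class FirstProcessingUnit:                       # unit 502 (low-power, always listening)
    identifiers = {"A": "1", "B": "2", "C": "3"}
    def recognize(self, syllables):
        if all(s in self.identifiers for s in syllables):      # predetermined condition
            return {"arrangement": "".join(self.identifiers[s] for s in syllables)}
        return None                              # condition not met: nothing is sent

class SecondProcessingUnit:                      # unit 503 (application processor)
    commands = {"123": "call user A", "321": "open the navigation application"}
    def on_information_set(self, info_set):
        print("executing:", self.commands.get(info_set["arrangement"]))

microphone, asic, ap = SoundCollectionUnit(), FirstProcessingUnit(), SecondProcessingUnit()
information_set = asic.recognize(microphone.capture())
if information_set is not None:
    ap.on_information_set(information_set)       # prints: executing: call user A
```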
In summary, in this embodiment, the first processing unit recognizes the sound information, extracts a feature of the sound information when the sound information meets the predetermined condition, generates an information set according to the feature, and sends the information set to the second processing unit, which then executes the command corresponding to the information set. A voice operation can therefore be input directly, without triggering an application program supporting voice operation, even when the electronic device is in a standby state.
In practical applications, the power consumption of the first processing unit may be lower than the power consumption of the second processing unit.
In practical applications, the second processing unit 503 may specifically comprise:
a state switching subunit configured to switch the second processing unit from a first state to a second state, wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
a command search subunit configured to control the second processing unit in the second state to search for the command corresponding to the information set; and
a command execution subunit configured to execute the command.
In practical applications, the first processing unit 502 may specifically comprise:
a syllable recognition subunit configured to recognize a plurality of syllables contained in the sound information;
a matching subunit configured to match the syllables respectively with a plurality of preset syllable templates;
an arrangement order determining subunit configured to determine, when each of the plurality of syllables is successfully matched with one of the preset syllable templates, an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables;
an information set generating subunit configured to generate an information set containing the arrangement order; and
an information sending subunit configured to send the information set containing the arrangement order to the second processing unit;
and the second processing unit 503 may specifically comprise:
a command determining subunit configured to determine the command corresponding to the arrangement order according to a set mapping relationship between arrangement orders and commands; and
a command execution subunit configured to execute the command.
In practical applications, the second processing unit 503 may further comprise:
a recording operation acquiring unit configured to detect a syllable template recording operation of the user before the sound information is obtained;
a syllable template information acquiring unit configured to obtain syllable template information input by the user after the syllable template recording operation; and
a syllable template storage unit configured to save a syllable template represented by the syllable template information.
In practical applications, the second processing unit 503 may further comprise:
an identifier assigning unit configured to assign, after the syllable template represented by the syllable template information is saved, a corresponding identifier to the saved syllable template;
a correspondence display unit configured to display the correspondence between the syllable template and the identifier;
an arrangement order acquiring unit configured to obtain an arrangement order of identifiers input by the user;
an operation command option acquiring unit configured to obtain an operation command option selected by the user, the operation command option representing a command to be executed; and
a correspondence establishing unit configured to establish a correspondence between the arrangement order and the operation command option.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary hardware platform, and of course can also be implemented entirely in hardware, although in many cases the former is the better implementation. Based on such an understanding, all or part of the contribution of the technical solution of the present invention to the background art can be embodied in the form of a software product. This computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the method described in some parts of the embodiments of the present invention.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments can be referred to one another. Since the electronic device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively simple, and the relevant parts can be found in the description of the method.
Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A voice operation input method, characterized in that the method is applied to an electronic device having a sound collection unit, a first processing unit and a second processing unit, and the method comprises:
obtaining sound information through the sound collection unit;
recognizing, by the first processing unit, the sound information;
extracting a feature of the sound information when the sound information meets a predetermined condition;
generating an information set according to the feature of the sound information;
sending the information set to the second processing unit; and
executing, by the second processing unit, a command corresponding to the information set.
2. The method according to claim 1, characterized in that the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
3. The method according to claim 1, characterized in that the executing, by the second processing unit, of the command corresponding to the information set specifically comprises:
switching the second processing unit from a first state to a second state, wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
searching, by the second processing unit in the second state, for the command corresponding to the information set; and
executing the command.
4. The method according to claim 1, characterized in that the recognizing, by the first processing unit, of the sound information specifically comprises:
recognizing a plurality of syllables contained in the sound information;
matching the syllables respectively with a plurality of preset syllable templates;
the extracting of the feature of the sound information when the sound information meets the predetermined condition specifically comprises:
when each of the plurality of syllables is successfully matched with one of the preset syllable templates, determining an arrangement order of identifiers of the syllable templates corresponding to the plurality of syllables;
the generating of the information set according to the feature of the sound information specifically comprises:
generating an information set containing the arrangement order;
and the executing, by the second processing unit, of the command corresponding to the information set specifically comprises:
determining the command corresponding to the arrangement order according to a set mapping relationship between arrangement orders and commands; and
executing the command.
5. The method according to claim 1, characterized in that, before the sound information is obtained through the sound collection unit, the method further comprises:
detecting a syllable template recording operation of a user;
obtaining syllable template information input by the user after the syllable template recording operation; and
saving a syllable template represented by the syllable template information.
6. The method according to claim 5, characterized in that, after the syllable template represented by the syllable template information is saved, the method further comprises:
assigning a corresponding identifier to the saved syllable template;
displaying the correspondence between the syllable template and the identifier;
obtaining an arrangement order of identifiers input by the user;
obtaining an operation command option selected by the user, the operation command option representing a command to be executed; and
establishing a correspondence between the arrangement order and the operation command option.
7. An electronic device, characterized in that the electronic device comprises:
a sound collection unit configured to obtain sound information;
a first processing unit configured to recognize the sound information,
extract a feature of the sound information when the sound information meets a predetermined condition, generate an information set according to the feature of the sound information, and send the information set to a second processing unit; and
the second processing unit, configured to execute a command corresponding to the information set after receiving the information set.
8. The electronic device according to claim 7, characterized in that the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
9. The electronic device according to claim 7, characterized in that the second processing unit specifically comprises:
a state switching subunit configured to switch the second processing unit from a first state to a second state, wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
a command search subunit configured to control the second processing unit in the second state to search for the command corresponding to the information set; and
a command execution subunit configured to execute the command.
10. The electronic device according to claim 7, characterized in that the first processing unit specifically comprises:
a syllable recognition subunit configured to recognize a plurality of syllables contained in the sound information;
a matching subunit configured to match the syllables respectively with a plurality of preset syllable templates;
an arrangement order determining subunit configured to determine, when each of the plurality of syllables is successfully matched with one of the preset syllable templates, an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables;
an information set generating subunit configured to generate an information set containing the arrangement order; and
an information sending subunit configured to send the information set containing the arrangement order to the second processing unit;
and the second processing unit specifically comprises:
a command determining subunit configured to determine the command corresponding to the arrangement order according to a set mapping relationship between arrangement orders and commands; and
a command execution subunit configured to execute the command.
11. The electronic device according to claim 7, characterized in that the second processing unit further comprises:
a recording operation acquiring unit configured to detect a syllable template recording operation of the user before the sound information is obtained;
a syllable template information acquiring unit configured to obtain syllable template information input by the user after the syllable template recording operation; and
a syllable template storage unit configured to save a syllable template represented by the syllable template information.
12. The electronic device according to claim 11, characterized in that the second processing unit further comprises:
an identifier assigning unit configured to assign, after the syllable template represented by the syllable template information is saved, a corresponding identifier to the saved syllable template;
a correspondence display unit configured to display the correspondence between the syllable template and the identifier;
an arrangement order acquiring unit configured to obtain an arrangement order of identifiers input by the user;
an operation command option acquiring unit configured to obtain an operation command option selected by the user, the operation command option representing a command to be executed; and
a correspondence establishing unit configured to establish a correspondence between the arrangement order and the operation command option.
CN201410509616.1A 2014-09-28 2014-09-28 Voice operation input method and electronic equipment Active CN105529025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410509616.1A CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410509616.1A CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105529025A true CN105529025A (en) 2016-04-27
CN105529025B CN105529025B (en) 2019-12-24

Family

ID=55771203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410509616.1A Active CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105529025B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0109140A1 (en) * 1982-10-19 1984-05-23 Computer Basic Technology Research Association Recognition of continuous speech
CN1337670A (en) * 2001-09-28 2002-02-27 北京安可尔通讯技术有限公司 Fast voice identifying method for Chinese phrase of specific person
CN1991976A (en) * 2005-12-31 2007-07-04 潘建强 Phoneme based voice recognition method and system
CN102341843A (en) * 2009-03-03 2012-02-01 三菱电机株式会社 Voice recognition device
US20130253937A1 (en) * 2012-02-17 2013-09-26 Lg Electronics Inc. Method and apparatus for smart voice recognition
CN103594089A (en) * 2013-11-18 2014-02-19 联想(北京)有限公司 Voice recognition method and electronic device
CN103730120A (en) * 2013-12-27 2014-04-16 深圳市亚略特生物识别科技有限公司 Voice control method and system for electronic device
CN103811003A (en) * 2012-11-13 2014-05-21 联想(北京)有限公司 Voice recognition method and electronic equipment
CN103827963A (en) * 2011-09-27 2014-05-28 感官公司 Background speech recognition assistant using speaker verification
CN103841248A (en) * 2012-11-20 2014-06-04 联想(北京)有限公司 Method and electronic equipment for information processing
CN103885596A (en) * 2014-03-24 2014-06-25 联想(北京)有限公司 Information processing method and electronic device
CN103943105A (en) * 2014-04-18 2014-07-23 安徽科大讯飞信息科技股份有限公司 Voice interaction method and system
CN104036778A (en) * 2014-05-20 2014-09-10 安徽科大讯飞信息科技股份有限公司 Equipment control method, device and system


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106098066A (en) * 2016-06-02 2016-11-09 深圳市智物联网络有限公司 Audio recognition method and device
CN106098066B (en) * 2016-06-02 2020-01-17 深圳市智物联网络有限公司 Voice recognition method and device
CN108806673A (en) * 2017-05-04 2018-11-13 北京猎户星空科技有限公司 A kind of smart machine control method, device and smart machine
CN109658922A (en) * 2017-10-12 2019-04-19 现代自动车株式会社 The device and method for handling user's input of vehicle
CN109658922B (en) * 2017-10-12 2023-10-10 现代自动车株式会社 Apparatus and method for processing user input for vehicle
CN112262584A (en) * 2018-06-15 2021-01-22 三菱电机株式会社 Device control apparatus, device control system, device control method, and device control program
CN112262584B (en) * 2018-06-15 2023-03-21 三菱电机株式会社 Device control apparatus, device control system, device control method, and device control program
CN110265011A (en) * 2019-06-10 2019-09-20 龙马智芯(珠海横琴)科技有限公司 The exchange method and its electronic equipment of a kind of electronic equipment

Also Published As

Publication number Publication date
CN105529025B (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN105529025A (en) Voice operation input method and electronic device
CN104978957A (en) Voice control method and system based on voiceprint identification
CN107277672B (en) Method and device for supporting automatic switching of wake-up mode
CN104536711A (en) Control method of terminal display
CN105791931A (en) Smart television and voice control method of the smart television
CN104360736A (en) Gesture-based terminal control method and system
CN105391730A (en) Information feedback method, device and system
CN105988581A (en) Voice input method and apparatus
CN108665889B (en) Voice signal endpoint detection method, device, equipment and storage medium
CN111490927B (en) Method, device and equipment for displaying message
CN103197756A (en) Method and device for inputting operating information of electronic equipment
CN108039173B (en) Voice information input method, mobile terminal, system and readable storage medium
CN103870356A (en) Information processing method and electronic equipment
CN104866226A (en) Terminal device and method for controlling same
CN105138250A (en) Human-computer interaction operation guide method, human-computer interaction operation guide system, human-computer interaction device and server
CN105227557A (en) A kind of account number processing method and device
CN104992715A (en) Interface switching method and system of intelligent device
CN104580705A (en) Terminal
CN103971683A (en) Voice control method and system and handheld device
CN106228047B (en) A kind of application icon processing method and terminal device
CN105471641A (en) Information management method and correlation equipment
CN108763350A (en) Text data processing method, device, storage medium and terminal
CN105374357A (en) Voice recognition method, device and voice control system
CN108600559B (en) Control method and device of mute mode, storage medium and electronic equipment
CN103294368B (en) Information processing method, browser and the mobile terminal of browser

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant