CN105741841A - Voice control method and electronic equipment - Google Patents

Voice control method and electronic equipment Download PDF

Info

Publication number
CN105741841A
CN105741841A CN201410768009.7A CN105741841B
Authority
CN
China
Prior art keywords
voice data
frequency
data
electronic equipment
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410768009.7A
Other languages
Chinese (zh)
Other versions
CN105741841B (en)
Inventor
赵侠
王云华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd
Priority to CN201410768009.7A priority Critical patent/CN105741841B/en
Publication of CN105741841A publication Critical patent/CN105741841A/en
Application granted
Publication of CN105741841B publication Critical patent/CN105741841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a control method comprising the following steps: A, when voice control is triggered, collecting sound data and obtaining the digital audio frequency of the collected sound data; B, judging, according to the digital audio frequency, whether the sound data is speech data; and C, if so, setting an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extending sound acquisition by the acquisition extension time. The invention also discloses an electronic device. By means of the method, the completeness of the collected voice information is improved.

Description

Voice control method and electronic device
Technical field
The present invention relates to the field of voice control, and in particular to a voice control method and an electronic device.
Background art
With the development of communication technology, electronic devices are becoming increasingly digital and intelligent. At present, electronic devices that can collect sound are generally used only for recording and offer no further intelligent functions. For example, a user presets an acquisition time on the device, but the required duration of voice acquisition is hard to estimate, so the user's voice information may not be fully captured within that time. If the user speaks relatively slowly, the device may finish acquisition before the voice data has been fully recorded, and the collected voice information is then incomplete.
Summary of the invention
The main object of the present invention is to prevent the situation where an electronic device collects incomplete voice information.
To achieve the above object, the present invention provides a voice control method, comprising:
A, when the voice control key is triggered, collecting sound data and obtaining the digital audio frequency of the collected sound data;
B, judging, according to the digital audio frequency, whether the sound data is speech data;
C, if it is speech data, setting an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extending sound acquisition by the acquisition extension time.
Preferably, the method further comprises, after step B:
D, if the sound data is not speech data, stopping the collection of the sound data when a preset sound acquisition time is reached.
Preferably, step B comprises:
collecting the digital audio frequency of the sound data and comparing the collected digital audio frequency with a preset sound frequency range;
if the collected digital audio frequency falls within the preset sound frequency range, judging that the sound data is speech data.
Preferably, the voice control method further comprises steps E and F:
E, performing speech recognition on the collected sound data to obtain a recognition result;
F, sending the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
Preferably, step E comprises:
matching the words of the sound data against a local dictionary;
if the match succeeds, taking the matched words as the recognition result;
if the match fails, sending the sound data to the cloud and obtaining the recognition result returned by the cloud.
To achieve the above object, the present invention also provides an electronic device, comprising:
an acquisition module, configured to collect sound data when the voice control key is triggered and obtain the digital audio frequency of the collected sound data;
a judging module, configured to judge, according to the digital audio frequency, whether the sound data is speech data;
a setting module, configured to, if the sound data is speech data, set an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extend sound acquisition by the acquisition extension time.
Preferably, the electronic device further comprises:
a stopping module, configured to, if the sound data is not speech data, stop the collection of the sound data when a preset sound acquisition time is reached.
Preferably, the judging module comprises:
a comparing unit, configured to collect the digital audio frequency of the sound data and compare the collected digital audio frequency with a preset sound frequency range;
a judging unit, configured to judge that the sound data is speech data if the collected digital audio frequency falls within the preset sound frequency range.
Preferably, the electronic device further comprises:
a recognition module, configured to perform speech recognition on the collected sound data to obtain a recognition result;
a sending module, configured to send the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
Preferably, the recognition module comprises:
a matching unit, configured to match the words of the sound data against a local dictionary;
a first recognition unit, configured to take the matched words as the recognition result if the match succeeds;
a second recognition unit, configured to send the sound data to the cloud and obtain the recognition result returned by the cloud if the match fails.
With the voice control method and electronic device of the present invention, the electronic device collects sound data and determines from its digital audio frequency whether it is speech data input by a human voice. If it is speech data, an acquisition extension time is set according to the digital audio frequency, that is, according to how quickly the user speaks. This avoids the situation where the recorded voice data is incomplete because the user speaks slowly and the voice control switch is released too early, and thus improves the completeness of the recorded voice data.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first embodiment of the voice control method of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the voice control method of the present invention;
Fig. 3 is a detailed schematic flowchart of step B in Fig. 1;
Fig. 4 is a schematic flowchart of a third embodiment of the voice control method of the present invention;
Fig. 5 is a functional block diagram of a first embodiment of the electronic device of the present invention;
Fig. 6 is a functional block diagram of a second embodiment of the electronic device of the present invention;
Fig. 7 is a functional block diagram of the judging module in Fig. 5;
Fig. 8 is a functional block diagram of a third embodiment of the electronic device of the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the invention
It should be appreciated that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The present invention provides a voice control method. Referring to Fig. 1, in one embodiment the voice control method comprises:
Step A, when the voice control key is triggered, collect sound data and obtain the digital audio frequency of the collected sound data;
In this embodiment, a voice control key is provided on the electronic device, and sound data collection starts when the voice control key is pressed.
Preferably, this embodiment is more suitable for recording short segments of sound data, for example sound data of about 5 seconds or less.
In this embodiment, the electronic device converts the sound into a digital signal while collecting it. The digital audio frequency of the sound data is obtained as soon as a small initial portion has been collected; the speaking rate can be estimated from this frequency, and the next step is then carried out according to that rate.
For example, the electronic device of this embodiment may be a remote controller or another device capable of collecting sound data (such as a mobile phone or a voice recorder). When the user presses the voice control button on the remote controller, the voice control switch is turned on and the surrounding sound data is collected; the remote controller can, for instance, record the user's voice and use it as voice input to control a television. In this embodiment, the digital audio frequency of the collected sound data is obtained while the sound data is being collected (one possible way of estimating this frequency is sketched below).
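The patent does not specify how the digital audio frequency is obtained. As an illustrative assumption only, a simple way is to estimate the dominant frequency of the digitised buffer from its zero-crossing rate; the function name and buffer handling below are not part of the patent:

import math

def estimate_frequency(samples, sample_rate=40000):
    # Estimate the dominant frequency (Hz) of a short buffer of digitised sound
    # from its zero-crossing count: each full cycle crosses zero about twice.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Example: a 1 kHz test tone sampled at 40 kHz gives an estimate close to 1000 Hz.
tone = [math.sin(2 * math.pi * 1000 * n / 40000) for n in range(4000)]
print(round(estimate_frequency(tone)))  # -> approximately 1000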
Step B, judge, according to the digital audio frequency, whether the sound data is speech data;
The frequency band to which the collected sound data belongs can be determined from its digital audio frequency. In this embodiment, f(x) denotes the digital audio frequency in hertz:
246.9 < f(x) < 987.8: frequency range of a soprano speaking;
164.8 < f(x) < 659.2: frequency range of an alto speaking;
110 < f(x) < 440: frequency range of a tenor speaking;
73.4 < f(x) < 293.7: frequency range of a bass speaking;
100 < f(x) < 300: frequency range of an ordinary person speaking.
In this embodiment, a simple algorithm is applied to the digital audio frequency of the sound data, based on the characteristics of speech and the frequency ranges defined above. For example, the sound frequency is sampled and 10 consecutive digital audio frequency values of the sound data are collected; if at least 5 of them fall within one of the above frequency ranges, for example 246.9 < f(x) < 987.8, the data is judged to be a soprano speaking. In other words, the frequency range to which the digital audio frequency belongs shows whether the data is speech from a soprano, alto, tenor, bass or ordinary speaker, as the sketch below illustrates.
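A minimal sketch of this majority check, assuming the ranges above and the 5-out-of-10 rule; the function and variable names are illustrative assumptions rather than part of the patent:

VOICE_RANGES = {
    "soprano": (246.9, 987.8),
    "alto": (164.8, 659.2),
    "tenor": (110.0, 440.0),
    "bass": (73.4, 293.7),
    "ordinary": (100.0, 300.0),
}

def classify_speech(freq_samples, required=5):
    # freq_samples: 10 consecutive digital audio frequency values in Hz.
    # Returns the first matching voice type, or None if the data is not speech.
    for voice, (lo, hi) in VOICE_RANGES.items():
        hits = sum(1 for f in freq_samples if lo < f < hi)
        if hits >= required:
            return voice
    return None

# Example: 6 of the 10 samples fall in the soprano band, so this counts as speech.
print(classify_speech([300, 480, 520, 610, 700, 950, 50, 1500, 2000, 60]))  # -> "soprano"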
In general, the sampling frequency currently used for speech is 40 kHz, i.e. the original signal is sampled 40,000 times per second.
Preferably, once the frequency range to which the digital audio frequency of the currently collected sound data belongs has been determined, a further algorithm can be applied to the sound data to reconfirm whether it is human speech. In one embodiment, if the confirmed range is the soprano range of 246.9-987.8 Hz, n groups of frequencies are taken from the sound data; if at least n/2 of these groups have an occurrence count between 40000/987.8 and 40000/246.9, the data is confirmed as speech; otherwise it is not speech and no further processing is performed.
Step C, if the sound data is speech data, set an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extend sound acquisition by the acquisition extension time;
In this embodiment, once the data has been determined to be speech, the speaking rate is judged from the digital audio frequency: a higher digital audio frequency indicates faster speech, a lower one slower speech. A shorter acquisition extension time is set when the user speaks quickly, and a longer one when the user speaks slowly.
In this embodiment, the acquisition extension time is set according to the digital audio frequency; for instance, if the user speaks slowly, the voice acquisition extension time of the electronic device can be set to 300 ms or 500 ms, thereby extending the acquisition time of the device. When the user misjudges the acquisition window, that is, the voice control switch of the device is released before the user has finished speaking, the voice data can still be recorded because the acquisition time has been extended, avoiding incomplete recordings caused by releasing the voice control switch too early when the user speaks slowly.
In this embodiment, once the collected data is confirmed to be speech, an acquisition extension time is set according to the digital audio frequency regardless of the speaking rate, preventing the recorded voice data from being cut off when the voice control switch is released before acquisition is complete; a sketch of this extension logic follows.
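A minimal sketch of step C, assuming a simple two-level mapping from digital audio frequency to extension time; the 300 ms and 500 ms values come from the text, while the threshold, function names and callback are illustrative assumptions:

import time

def extension_ms(digital_audio_freq_hz, speech_range=(246.9, 987.8)):
    # Lower frequency ~ slower speech -> longer extension; higher -> shorter.
    lo, hi = speech_range
    return 500 if digital_audio_freq_hz < (lo + hi) / 2 else 300

def on_voice_key_released(digital_audio_freq_hz, keep_recording):
    # keep_recording(duration_s): callback that continues sound acquisition.
    keep_recording(extension_ms(digital_audio_freq_hz) / 1000.0)

# Example with a dummy recorder callback: slow speech keeps recording ~0.5 s longer.
on_voice_key_released(300.0, lambda seconds: time.sleep(seconds))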
Compared with the prior art, the electronic device of this embodiment collects sound data, determines from its digital audio frequency whether it is speech data input by a human voice and, if so, sets an acquisition extension time according to that frequency, that is, according to how fast the user speaks. This avoids incomplete recordings caused by the user speaking slowly and the voice control switch being released too early, and improves the completeness of the recorded voice data.
In a preferred embodiment, as shown in Fig. 2, on the basis of the embodiment of Fig. 1, the method further comprises, after step C:
Step D, if the sound data is not speech data, stop collecting the sound data when the preset sound acquisition time is reached.
In this embodiment, a timer can be preset in the electronic device. When the data is confirmed not to be speech, the timer starts; when the preset sound acquisition time is reached, the timer triggers the voice control switch to turn off automatically and the collection of sound data stops.
Preferably, in this embodiment, if speech data is detected within the preset sound acquisition time, the timer is reset, and the acquisition extension time is then set according to the digital audio frequency of the speech data.
In this embodiment, if no speech data is detected, the electronic device continues collecting sound data and stops the collection when the preset sound acquisition time is reached; a sketch combining this timer with the extension logic above follows.
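A minimal sketch of the preset-timer behaviour of step D, assuming a threaded timer; the class, method names and threading model are illustrative assumptions rather than part of the patent:

import threading

class SoundCapture:
    def __init__(self, preset_seconds=5.0):
        self.preset_seconds = preset_seconds
        self._timer = None

    def start(self):
        # ... start microphone acquisition here ...
        self._arm_timer()

    def _arm_timer(self):
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.preset_seconds, self.stop)
        self._timer.start()

    def on_speech_detected(self):
        # Speech detected within the preset window: reset the timer; the
        # acquisition extension time is then set from the digital audio frequency.
        self._arm_timer()

    def stop(self):
        if self._timer:
            self._timer.cancel()
        # ... turn off the voice control switch / stop the microphone here ...
        print("sound collection stopped")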
In a preferred embodiment, as shown in Fig. 3, on the basis of the embodiment of Fig. 1, step B further comprises:
Step B1, collect the digital audio frequency of the sound data and compare the collected digital audio frequency with the preset sound frequency range;
Step B2, if the collected digital audio frequency falls within the preset sound frequency range, judge that the sound data is speech data.
In this embodiment, the preset sound frequency range includes the frequency ranges of the soprano, alto, tenor, bass and ordinary voices defined above.
In this embodiment, a simple algorithm is applied to the digital audio frequency of the sound data, based on the characteristics of speech and the frequency ranges defined above. For example, the sound frequency is sampled and 10 consecutive digital audio frequency values of the sound data are collected; if at least 5 of them fall within one of the above frequency ranges, for example 246.9 < f(x) < 987.8, the data is judged to be a soprano speaking. In other words, the frequency range to which the digital audio frequency belongs shows whether the data is speech from a soprano, alto, tenor, bass or ordinary speaker.
In this embodiment, the sound frequency range to which the digital audio frequency of the sound data belongs is derived as follows:
Table 1
Table 2
As shown in Tables 1 and 2, the values in Tables 1 and 2 are empirical values obtained from repeated tests. The input low-band or high-band frequency corresponds to the digital audio frequency of this embodiment, and the lowest frequency Zstar and the highest frequency Zend delimit the frequency range of the sound type to which that digital audio frequency belongs. From these values a multiple ratio is obtained, for example P(x) = (Zend - Zstar) / F(x) = 2.5, and so on.
Therefore, once the digital audio frequency has been acquired in this embodiment, the sound frequency range to which it belongs can be calculated, as shown in Table 3 below:
Table 3
As shown in Table 3, the multiple ratio P(x) in Table 3 is the average of the multiple ratios in Tables 1 and 2, i.e. P(x) = (2.5 + 1.25) / 2 = 1.875.
In addition, in this embodiment, if the input frequency F(x) is greater than or equal to 500 Hz, the lowest frequency Zstar can be taken as half of F(x); if F(x) is less than 500 Hz, Zstar can be taken as one third of F(x). The highest frequency is then Zend = F(x) * P(x) + Zstar = 500 * 1.875 + 250 = 1187.5 Hz.
Thus the sound frequency range to which the digital audio frequency of the sound data belongs is derived to be 250-1187.5 Hz.
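The derivation above can be expressed as a short sketch; the figures follow the worked example in the text, while the function name is an illustrative assumption:

def derive_sound_range(f_x, p_x=1.875):
    # Returns (Zstar, Zend) in Hz for an input digital audio frequency f_x:
    # Zstar is half of f_x at or above 500 Hz, one third of f_x below 500 Hz.
    zstar = f_x / 2 if f_x >= 500 else f_x / 3
    zend = f_x * p_x + zstar
    return zstar, zend

print(derive_sound_range(500))  # -> (250.0, 1187.5), matching the example above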
In addition, it is judged whether the number of times the digital audio frequency occurs per second falls within the count range corresponding to the preset sound frequency range:
For example, for a soprano: if the lowest frequency Zstar = 260, the number of occurrences per second is 40000 / 260, i.e. about 153, and so on.
If 10 groups of lowest frequencies Zstar are taken and the calculated numbers of occurrences per second are 153, 4000, 153, 153, 150, 150, 150, 150, 150 and 150, the count range corresponding to the preset sound frequency range is 40000/1187.5 to 40000/250, i.e. 33.68 to 160. Apart from the 4000 of the second group, all the other counts lie within the range 33.68 to 160; 9 of the 10 groups therefore satisfy the condition, which is more than half of the 10 groups, so it can be reconfirmed that the recorded sound data is speech data. The sketch below illustrates this check.
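A minimal sketch of this reconfirmation step, assuming the 40 kHz sampling rate and the 250-1187.5 Hz range derived above; the function and variable names are illustrative:

SAMPLE_RATE = 40000  # 40 kHz sampling frequency, as stated in the text

def reconfirm_speech(occurrence_counts, zstar=250.0, zend=1187.5):
    # Counts must fall between 40000/Zend and 40000/Zstar; require at least half.
    lo, hi = SAMPLE_RATE / zend, SAMPLE_RATE / zstar  # about 33.68 to 160
    in_range = sum(1 for c in occurrence_counts if lo <= c <= hi)
    return in_range >= len(occurrence_counts) / 2

counts = [153, 4000, 153, 153, 150, 150, 150, 150, 150, 150]
print(reconfirm_speech(counts))  # -> True: 9 of the 10 groups satisfy the condition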
In a preferred embodiment, as shown in Fig. 4, on the basis of the embodiment of Fig. 1, the voice control method further comprises:
Step E, perform speech recognition on the collected sound data to obtain a recognition result;
Step F, send the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
In this embodiment, after acquisition ends, the electronic device performs speech recognition on the recorded sound data and sends the result to a smart device, and the smart device outputs and displays the recognition result on its screen.
In this embodiment, the electronic device can be a remote controller: the remote controller collects or records the sound data, recognizes it and sends the result to a smart television for display; the recognition result can also be used as input information to control the television.
Preferably, step E comprises: matching the words of the sound data against a local dictionary; if the match succeeds, taking the matched words as the recognition result; if the match fails, sending the sound data to the cloud and obtaining the recognition result returned by the cloud.
The electronic device can first perform recognition locally, i.e. match the words of the sound data against the local dictionary; if no local match is found, the sound data can be sent to the cloud for recognition, as sketched below.
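A minimal sketch of this local-first, cloud-fallback flow; the dictionary contents, function names and cloud call are illustrative assumptions, and no specific speech recognition library or cloud API is implied:

LOCAL_DICTIONARY = {"turn on", "turn off", "volume up", "volume down"}

def match_locally(words):
    # Return the words if they match the local dictionary, otherwise None.
    return words if words in LOCAL_DICTIONARY else None

def recognize_in_cloud(sound_data):
    # Placeholder for sending the raw sound data to a cloud recognizer.
    raise NotImplementedError("send sound_data to the cloud service here")

def recognize(words, sound_data):
    result = match_locally(words)
    if result is None:                            # local match failed
        result = recognize_in_cloud(sound_data)   # fall back to the cloud
    return result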
The present invention further provides an electronic device. Referring to Fig. 5, in one embodiment the electronic device comprises:
an acquisition module 101, configured to collect sound data when the voice control key is triggered and obtain the digital audio frequency of the collected sound data;
In this embodiment, a voice control key is provided on the electronic device, and sound data collection starts when the voice control key is pressed.
Preferably, this embodiment is more suitable for recording short segments of sound data, for example sound data of about 5 seconds or less.
In this embodiment, the electronic device converts the sound into a digital signal while collecting it. The digital audio frequency of the sound data is obtained as soon as a small initial portion has been collected; the speaking rate can be estimated from this frequency, and the next step is then carried out according to that rate.
For example, the electronic device of this embodiment may be a remote controller or another device capable of collecting sound data (such as a mobile phone or a voice recorder). When the user presses the voice control button on the remote controller, the voice control switch is turned on and the surrounding sound data is collected; the remote controller can, for instance, record the user's voice and use it as voice input to control a television. In this embodiment, the digital audio frequency of the collected sound data is obtained while the sound data is being collected.
a judging module 102, configured to judge, according to the digital audio frequency, whether the sound data is speech data;
The frequency band to which the collected sound data belongs can be determined from its digital audio frequency. In this embodiment, f(x) denotes the digital audio frequency in hertz:
246.9 < f(x) < 987.8: frequency range of a soprano speaking;
164.8 < f(x) < 659.2: frequency range of an alto speaking;
110 < f(x) < 440: frequency range of a tenor speaking;
73.4 < f(x) < 293.7: frequency range of a bass speaking;
100 < f(x) < 300: frequency range of an ordinary person speaking.
In this embodiment, a simple algorithm is applied to the digital audio frequency of the sound data, based on the characteristics of speech and the frequency ranges defined above. For example, the sound frequency is sampled and 10 consecutive digital audio frequency values of the sound data are collected; if at least 5 of them fall within one of the above frequency ranges, for example 246.9 < f(x) < 987.8, the data is judged to be a soprano speaking. In other words, the frequency range to which the digital audio frequency belongs shows whether the data is speech from a soprano, alto, tenor, bass or ordinary speaker.
In general, the sampling frequency currently used for speech is 40 kHz, i.e. the original signal is sampled 40,000 times per second.
Preferably, once the frequency range to which the digital audio frequency of the currently collected sound data belongs has been determined, a further algorithm can be applied to the sound data to reconfirm whether it is human speech. In one embodiment, if the confirmed range is the soprano range of 246.9-987.8 Hz, n groups of frequencies are taken from the sound data; if at least n/2 of these groups have an occurrence count between 40000/987.8 and 40000/246.9, the data is confirmed as speech; otherwise it is not speech and no further processing is performed.
a setting module 103, configured to, if the sound data is speech data, set an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extend sound acquisition by the acquisition extension time.
In this embodiment, once the data has been determined to be speech, the speaking rate is judged from the digital audio frequency: a higher digital audio frequency indicates faster speech, a lower one slower speech. A shorter acquisition extension time is set when the user speaks quickly, and a longer one when the user speaks slowly.
In this embodiment, the acquisition extension time is set according to the digital audio frequency; for instance, if the user speaks slowly, the voice acquisition extension time of the electronic device can be set to 300 ms or 500 ms, thereby extending the acquisition time of the device. When the user misjudges the acquisition window, that is, the voice control switch of the device is released before the user has finished speaking, the voice data can still be recorded because the acquisition time has been extended, avoiding incomplete recordings caused by releasing the voice control switch too early when the user speaks slowly.
In this embodiment, once the collected data is confirmed to be speech, an acquisition extension time is set according to the digital audio frequency regardless of the speaking rate, preventing the recorded voice data from being cut off when the voice control switch is released before acquisition is complete.
In a preferred embodiment, as shown in Fig. 6, on the basis of the embodiment of Fig. 5, the electronic device further comprises:
a stopping module 104, configured to, if the sound data is not speech data, stop the collection of the sound data when the preset sound acquisition time is reached.
In this embodiment, a timer can be preset in the electronic device. When the data is confirmed not to be speech, the timer starts; when the preset sound acquisition time is reached, the timer triggers the voice control switch to turn off automatically and the collection of sound data stops.
Preferably, in this embodiment, if speech data is detected within the preset sound acquisition time, the timer is reset, and the acquisition extension time is then set according to the digital audio frequency of the speech data.
In this embodiment, if no speech data is detected, the electronic device continues collecting sound data and stops the collection when the preset sound acquisition time is reached.
In a preferred embodiment, as shown in Fig. 7, on the basis of the embodiment of Fig. 5, the judging module 102 comprises:
a comparing unit 1021, configured to collect the digital audio frequency of the sound data and compare the collected digital audio frequency with the preset sound frequency range;
a judging unit 1022, configured to judge that the sound data is speech data if the collected digital audio frequency falls within the preset sound frequency range.
In this embodiment, the preset sound frequency range includes the frequency ranges of the soprano, alto, tenor, bass and ordinary voices defined above.
In this embodiment, a simple algorithm is applied to the digital audio frequency of the sound data, based on the characteristics of speech and the frequency ranges defined above. For example, the sound frequency is sampled and 10 consecutive digital audio frequency values of the sound data are collected; if at least 5 of them fall within one of the above frequency ranges, for example 246.9 < f(x) < 987.8, the data is judged to be a soprano speaking. In other words, the frequency range to which the digital audio frequency belongs shows whether the data is speech from a soprano, alto, tenor, bass or ordinary speaker.
In this embodiment, the derivation of the sound frequency range from the digital audio frequency of the sound data follows the data in Tables 1, 2 and 3 above and the related derivation, and is not repeated here.
In addition, it is judged whether the number of times the digital audio frequency occurs per second falls within the count range corresponding to the preset sound frequency range:
For example, for a soprano: if the lowest frequency Zstar = 260, the number of occurrences per second is 40000 / 260, i.e. about 153, and so on.
If 10 groups of lowest frequencies Zstar are taken and the calculated numbers of occurrences per second are 153, 4000, 153, 153, 150, 150, 150, 150, 150 and 150, the count range corresponding to the preset sound frequency range is 40000/1187.5 to 40000/250, i.e. 33.68 to 160. Apart from the 4000 of the second group, all the other counts lie within the range 33.68 to 160; 9 of the 10 groups therefore satisfy the condition, which is more than half of the 10 groups, so it can be reconfirmed that the recorded sound data is speech data.
In a preferred embodiment, as shown in Fig. 8, on the basis of the embodiment of Fig. 5, the electronic device further comprises:
a recognition module 105, configured to perform speech recognition on the collected sound data to obtain a recognition result;
a sending module 106, configured to send the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
In this embodiment, after acquisition ends, the electronic device performs speech recognition on the recorded sound data and sends the result to a smart device, and the smart device outputs and displays the recognition result on its screen.
In this embodiment, the electronic device can be a remote controller: the remote controller collects or records the sound data, recognizes it and sends the result to a smart television for display; the recognition result can also be used as input information to control the television.
Preferably, the recognition module 105 comprises: a matching unit, configured to match the words of the sound data against a local dictionary; a first recognition unit, configured to take the matched words as the recognition result if the match succeeds; a second recognition unit, configured to send the sound data to the cloud and obtain the recognition result returned by the cloud if the match fails.
The electronic device can first perform recognition locally, i.e. match the words of the sound data against the local dictionary; if no local match is found, the sound data can be sent to the cloud for recognition.
It should be noted that all the modules in the above remote controller embodiment can be controlled by the CPU processing module of the remote controller to perform the corresponding functions.
The above are only preferred embodiments of the present invention and do not limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A voice control method, characterized by comprising the following steps:
A, when the voice control key is triggered, collecting sound data and obtaining the digital audio frequency of the collected sound data;
B, judging, according to the digital audio frequency, whether the sound data is speech data;
C, if it is speech data, setting an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extending sound acquisition by the acquisition extension time.
2. The voice control method according to claim 1, characterized in that the method further comprises, after step B:
D, if the sound data is not speech data, stopping the collection of the sound data when a preset sound acquisition time is reached.
3. The voice control method according to claim 1, characterized in that step B comprises:
collecting the digital audio frequency of the sound data and comparing the collected digital audio frequency with a preset sound frequency range;
if the collected digital audio frequency falls within the preset sound frequency range, judging that the sound data is speech data.
4. The voice control method according to claim 1 or 3, characterized in that the voice control method further comprises steps E and F:
E, performing speech recognition on the collected sound data to obtain a recognition result;
F, sending the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
5. The voice control method according to claim 4, characterized in that step E comprises:
matching the words of the sound data against a local dictionary;
if the match succeeds, taking the matched words as the recognition result;
if the match fails, sending the sound data to the cloud and obtaining the recognition result returned by the cloud.
6. An electronic device, characterized in that the electronic device comprises:
an acquisition module, configured to collect sound data when the voice control key is triggered and obtain the digital audio frequency of the collected sound data;
a judging module, configured to judge, according to the digital audio frequency, whether the sound data is speech data;
a setting module, configured to, if the sound data is speech data, set an acquisition extension time according to the digital audio frequency and, when the voice control key is released, extend sound acquisition by the acquisition extension time.
7. The electronic device according to claim 6, characterized in that the electronic device further comprises:
a stopping module, configured to, if the sound data is not speech data, stop the collection of the sound data when a preset sound acquisition time is reached.
8. The electronic device according to claim 6, characterized in that the judging module comprises:
a comparing unit, configured to collect the digital audio frequency of the sound data and compare the collected digital audio frequency with a preset sound frequency range;
a judging unit, configured to judge that the sound data is speech data if the collected digital audio frequency falls within the preset sound frequency range.
9. The electronic device according to claim 6 or 8, characterized in that the electronic device further comprises:
a recognition module, configured to perform speech recognition on the collected sound data to obtain a recognition result;
a sending module, configured to send the recognition result to a smart device as an input instruction, so that the smart device performs the corresponding operation.
10. The electronic device according to claim 9, characterized in that the recognition module comprises:
a matching unit, configured to match the words of the sound data against a local dictionary;
a first recognition unit, configured to take the matched words as the recognition result if the match succeeds;
a second recognition unit, configured to send the sound data to the cloud and obtain the recognition result returned by the cloud if the match fails.
CN201410768009.7A 2014-12-12 2014-12-12 Sound control method and electronic equipment Active CN105741841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410768009.7A CN105741841B (en) 2014-12-12 2014-12-12 Sound control method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410768009.7A CN105741841B (en) 2014-12-12 2014-12-12 Sound control method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105741841A true CN105741841A (en) 2016-07-06
CN105741841B CN105741841B (en) 2019-12-03

Family

ID=56241450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410768009.7A Active CN105741841B (en) 2014-12-12 2014-12-12 Sound control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105741841B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106782552A (en) * 2016-12-06 2017-05-31 深圳Tcl数字技术有限公司 Last or end syllable recognition methods and voice remote controller
CN107895579A (en) * 2018-01-02 2018-04-10 联想(北京)有限公司 A kind of audio recognition method and system
CN109243447A (en) * 2018-10-12 2019-01-18 西安蜂语信息科技有限公司 Voice sends triggering method and device
CN110970054A (en) * 2019-11-06 2020-04-07 广州视源电子科技股份有限公司 Method and device for automatically stopping voice acquisition, terminal equipment and storage medium
CN111627441A (en) * 2020-05-26 2020-09-04 北京百度网讯科技有限公司 Control method, device, equipment and storage medium of electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101601088A (en) * 2007-09-11 2009-12-09 松下电器产业株式会社 Sound judgment means, sound detection device and sound determination methods
CN102324241A (en) * 2011-05-04 2012-01-18 鸿富锦精密工业(深圳)有限公司 Electronic device with voice-controlling function and voice-controlling method
CN102541505A (en) * 2011-01-04 2012-07-04 中国移动通信集团公司 Voice input method and system thereof
CN103713876A (en) * 2014-01-16 2014-04-09 联想(北京)有限公司 Data processing method and electronic equipment
CN103886860A (en) * 2014-02-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic device
CN104038804A (en) * 2013-03-05 2014-09-10 三星电子(中国)研发中心 Subtitle synchronization device and subtitle synchronization method based on speech recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101601088A (en) * 2007-09-11 2009-12-09 松下电器产业株式会社 Sound judgment means, sound detection device and sound determination methods
CN102541505A (en) * 2011-01-04 2012-07-04 中国移动通信集团公司 Voice input method and system thereof
CN102324241A (en) * 2011-05-04 2012-01-18 鸿富锦精密工业(深圳)有限公司 Electronic device with voice-controlling function and voice-controlling method
CN104038804A (en) * 2013-03-05 2014-09-10 三星电子(中国)研发中心 Subtitle synchronization device and subtitle synchronization method based on speech recognition
CN103713876A (en) * 2014-01-16 2014-04-09 联想(北京)有限公司 Data processing method and electronic equipment
CN103886860A (en) * 2014-02-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106782552A (en) * 2016-12-06 2017-05-31 深圳Tcl数字技术有限公司 Last or end syllable recognition methods and voice remote controller
CN106782552B (en) * 2016-12-06 2020-05-22 深圳Tcl数字技术有限公司 Tail sound identification method and voice remote controller
CN107895579A (en) * 2018-01-02 2018-04-10 联想(北京)有限公司 A kind of audio recognition method and system
CN107895579B (en) * 2018-01-02 2021-08-17 联想(北京)有限公司 Voice recognition method and system
CN109243447A (en) * 2018-10-12 2019-01-18 西安蜂语信息科技有限公司 Voice sends triggering method and device
CN110970054A (en) * 2019-11-06 2020-04-07 广州视源电子科技股份有限公司 Method and device for automatically stopping voice acquisition, terminal equipment and storage medium
CN111627441A (en) * 2020-05-26 2020-09-04 北京百度网讯科技有限公司 Control method, device, equipment and storage medium of electronic equipment
CN111627441B (en) * 2020-05-26 2021-10-08 北京百度网讯科技有限公司 Control method, device, equipment and storage medium of electronic equipment

Also Published As

Publication number Publication date
CN105741841B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN107454508B (en) TV set and TV system of microphone array
CN105741841A (en) Voice control method and electronic equipment
CN104168353B (en) Bluetooth headset and its interactive voice control method
CN101599270A (en) Voice server and voice control method
CN101576901B (en) Method for generating search request and mobile communication equipment
CN105357006A (en) Method and equipment for performing security authentication based on voiceprint feature
CN106847281A (en) Intelligent household voice control system and method based on voice fuzzy identification technology
EP2747077A1 (en) Voice recognition system, recognition dictionary logging system, and audio model identifier series generation device
CN102723078A (en) Emotion speech recognition method based on natural language comprehension
CN103491411A (en) Method and device based on language recommending channels
CN104360736A (en) Gesture-based terminal control method and system
CN107729433B (en) Audio processing method and device
KR20140058127A (en) Voice recognition apparatus and voice recogniton method
CN110428806A (en) Interactive voice based on microphone signal wakes up electronic equipment, method and medium
CN110097875A (en) Interactive voice based on microphone signal wakes up electronic equipment, method and medium
CN111326143A (en) Voice processing method, device, equipment and storage medium
CN110992955A (en) Voice operation method, device, equipment and storage medium of intelligent equipment
CN110956965A (en) Personalized intelligent home safety control system and method based on voiceprint recognition
JP2008287210A5 (en)
CN104301522A (en) Information input method in communication and communication terminal
CN2814830Y (en) Sound control TV set and remote controller
CN105091208B (en) Air conditioner wind speed control method and system
CN103248930A (en) Voice television and household appliance system
WO2019101099A1 (en) Video program identification method and device, terminal, system, and storage medium
CN1885930A (en) Acoustic control TV set, remote controller and TV set remote controlling method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant