CN103871419B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN103871419B
CN103871419B (application CN201210534978.7A)
Authority
CN
China
Prior art keywords
voiceprint
sound
vibration
information
electronic equipment
Prior art date
Legal status
Active
Application number
CN201210534978.7A
Other languages
Chinese (zh)
Other versions
CN103871419A (en)
Inventor
李凡智
刘旭国
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201210534978.7A
Publication of CN103871419A
Application granted
Publication of CN103871419B


Landscapes

  • Telephone Function (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The invention provides an information processing method and electronic equipment. The method comprises the following steps: obtaining first vibration information, corresponding to a first sound, detected by a vibration detection unit, and obtaining a second sound, containing the first sound, collected by a sound collection unit; parsing the first vibration information to obtain first voiceprint information corresponding to the first sound; parsing the second sound to obtain second voiceprint information; obtaining third voiceprint information based on the first voiceprint information and the second voiceprint information; and converting the third voiceprint information into third sound information corresponding to the first sound. In the prior art, noise removal either filters out the user's own voice or fails to filter out the sound of the external environment completely, so that the sound processing of the electronic equipment is inaccurate, and only a single noise-removal approach is available. The method provided by the invention solves these technical problems of the prior art.

Description

Information processing method and electronic equipment
Technical field
The present invention relates to the field of electronic technology, and in particular to an information processing method and electronic equipment.
Background technology
With the continuing development of electronic technology, electronic equipment in the prior art has become increasingly versatile. For example, a mobile phone in the prior art is typically provided with two microphones, arranged at the two ends of the phone: during a call, a first microphone receives the sound made by the user, while a second microphone receives the sound of the external environment. While the user is on a call, the phone processes the sound information received by both microphones simultaneously and removes the sound information of the second microphone from that received by the first, so that the phone can still transmit the user's sound information accurately in a noisy environment.
However, in the practice of the present invention, the inventors have found that the prior art has the following technical problem or defect:
In the prior art, external noise is mainly removed by arranging a plurality of sound collection devices in the electronic equipment and by processing and filtering the sound information collected by all of these devices, so that the noise in the external environment is removed and only the sound of the user of the electronic equipment is retained. However, because this approach filters the environmental noise out of the collected sound information, the user's sound may be filtered out together with the noise, or the noise in the external environment may be filtered out incompletely, which leads to the technical problem that the electronic equipment processes sound inaccurately.
Summary of the invention
The invention provides an information processing method and electronic equipment, for solving the technical problem in the prior art that electronic equipment processes sound inaccurately. The specific technical solution is as follows:
An information processing method, applied to an electronic equipment, the method comprising:
while a vibration detecting unit for detecting the vibration produced when a user makes a sound is worn by the user at a preset position, and while the user is making a first sound, obtaining first vibration information, corresponding to the first sound, detected by the vibration detecting unit, and obtaining a second sound, containing the first sound, collected by a sound collection unit;
parsing the first vibration information to obtain first voiceprint information corresponding to the first sound, and parsing the second sound to obtain second voiceprint information corresponding to the second sound;
obtaining third voiceprint information based on the first voiceprint information and the second voiceprint information;
converting the third voiceprint information into third sound information corresponding to the first sound.
Wherein obtaining the third voiceprint information based on the first voiceprint information and the second voiceprint information specifically comprises:
performing voiceprint matching between the first voiceprint information and the second voiceprint information;
obtaining, as the third voiceprint information, the voiceprint information contained in both the second voiceprint information and the first voiceprint information.
Optionally, parsing the first vibration information to obtain the first voiceprint information corresponding to the first sound specifically comprises:
parsing the first vibration information to obtain a first vibration frequency corresponding to the first vibration information;
obtaining, according to the first vibration frequency, the first voiceprint information corresponding to the first vibration frequency.
Optionally, obtaining the first vibration information, corresponding to the first sound, detected by the vibration detecting unit, and obtaining the second sound, containing the first sound, collected by the sound collection unit, is specifically:
obtaining, within a first preset time, the first vibration information corresponding to the first sound detected by the vibration detecting unit; and
obtaining, within the first preset time, the second sound, containing the first sound, collected by the sound collection unit.
Optionally, after converting the third voiceprint information into the third sound information corresponding to the first sound, the method further comprises:
outputting the third sound information through a sound output unit in the electronic equipment.
An electronic equipment, comprising:
a first obtaining unit, configured to obtain, while a user is making a first sound, first vibration information corresponding to the first sound;
a second obtaining unit, configured to obtain a second sound containing the first sound;
a resolution unit, configured to parse the first vibration information to obtain first voiceprint information corresponding to the first sound, and to parse the second sound to obtain second voiceprint information corresponding to the second sound;
a third obtaining unit, configured to obtain third voiceprint information based on the first voiceprint information and the second voiceprint information;
a converting unit, configured to convert the third voiceprint information into third sound information corresponding to the first sound.
Wherein the third obtaining unit specifically comprises:
a matching module, configured to perform voiceprint matching between the first voiceprint information and the second voiceprint information;
an obtaining module, configured to obtain, as the third voiceprint information, the voiceprint information contained in both the second voiceprint information and the first voiceprint information.
Optionally, the resolution unit specifically comprises:
a parsing module, configured to parse the first vibration information to obtain a first vibration frequency corresponding to the first vibration information;
a voiceprint obtaining module, configured to obtain, according to the first vibration frequency, the first voiceprint information corresponding to the first vibration frequency.
Optionally, the first obtaining unit is specifically configured to obtain, within a first preset time, the first vibration information corresponding to the first sound, and to obtain, within the first preset time, the second sound containing the first sound.
Optionally, the electronic equipment further comprises:
an output unit, configured to output the third sound information.
The one or more embodiments provided by the present invention have at least the following technical effects or advantages:
In the embodiments of the present invention, the vibration information of the user's sound-producing part is converted into first voiceprint information while the collected sound is converted into second voiceprint information, so that third voiceprint information corresponding to the user's voiceprint is obtained from the first voiceprint information and the second voiceprint information and converted into corresponding third sound information. In other words, the electronic equipment takes the first voiceprint information as a reference and extracts from the second voiceprint information the voiceprint information that matches it. Because the first voiceprint information is obtained from the vibration of the user's sound-producing part, it is an accurate voiceprint of the user's speech, so the user's sound information can be obtained clearly and the noise in the external environment can be filtered out directly. This solves the technical problem in the prior art that electronic equipment processes sound inaccurately, enables the electronic equipment to accurately remove the noise from the collected sound and obtain the user's sound accurately, improves the accuracy of the sound processing of the electronic equipment, and greatly improves the user experience.
In the embodiments of the present invention, the first vibration information corresponding to the first sound and the second sound containing the first sound are collected simultaneously within one preset time; that is, the electronic equipment marks the collected information with a clock signal and reads that clock signal when processing the collected information, which ensures that the information is collected synchronously, thereby improving the accuracy of the sound processing and of the noise removal of the electronic equipment.
Brief description of the drawings
Fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an electronic equipment according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the third obtaining unit according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the resolution unit according to an embodiment of the present invention.
Detailed description of the embodiments
The invention provides an information processing method and electronic equipment. The method comprises: while a vibration detecting unit for detecting the vibration produced when a user makes a sound is worn by the user at a preset position, and while the user is making a first sound, obtaining first vibration information, corresponding to the first sound, detected by the vibration detecting unit, and obtaining a second sound, containing the first sound, collected by a sound collection unit; then parsing the first vibration information to obtain first voiceprint information corresponding to the first sound, and parsing the second sound to obtain second voiceprint information; obtaining third voiceprint information based on the first voiceprint information and the second voiceprint information; and converting the third voiceprint information into third sound information corresponding to the first sound. The method is used to solve the technical problem in the prior art that electronic equipment processes sound inaccurately.
Put simply, when a user makes a sound, the sound-producing part of the body vibrates correspondingly. Therefore, in the embodiments of the present invention, the user wears at a preset position a vibration detecting unit for detecting the vibration produced when the user makes a sound. The vibration detecting unit can detect the vibration information of the sound-producing part, so the vibration corresponding to the user's sound can be obtained from the vibration information, and the user's sound can then be extracted, according to that vibration, from the sound collected by the electronic equipment, which enables the electronic equipment to identify the user's sound more accurately.
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the embodiments of the present invention and the specific technical features in the embodiments are intended to explain, rather than to limit, the technical solution of the present invention, and that, where no conflict arises, the embodiments of the present invention and the specific technical features in the embodiments may be combined with one another.
First, in order to collect the vibration information of the sound-producing part while the user is speaking, the electronic equipment in the embodiments of the present invention is connected with a vibration detecting unit. The vibration detecting unit needs to be worn at a preset position on the user, the preset position being the user's sound-producing part, so that the vibration detecting unit can promptly detect the vibration information of the user's sound-producing part when the user makes a sound. The information processing method in the embodiments of the present invention specifically comprises the following steps.
Fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention. The method comprises:
Step 101: obtaining first vibration information corresponding to a first sound, and a second sound containing the first sound.
In the embodiments of the present invention, besides being connected with the vibration detecting unit, the electronic equipment is also provided with a sound collection unit. When the user makes a first sound, the vibration detecting unit connected with the electronic equipment detects the vibration information corresponding to the first sound, and at the same time the sound collection unit in the electronic equipment collects the second sound containing the first sound.
It should be noted here that, in the embodiments of the present invention, what the sound collection unit collects is the sound of the external environment; besides the first sound made by the user it also contains the noise of the external environment, so what the electronic equipment collects is not the user's pure voice. In the prior art, two sound collection units are provided in the electronic equipment: one is mainly used to collect the user's sound together with the sound of the external environment, and the other is mainly used to collect the noise of the external environment.
For example, the electronic equipment is a mobile phone connected with a vibration detector, and the vibration detector is arranged at the user's sound-producing part. When the user speaks, the sound-producing part vibrates correspondingly, and the vibration detector collects the vibration information of that part. The vibration lasts for a certain time, and the vibration detector keeps detecting the vibration information during that time. Meanwhile, the microphone in the electronic equipment detects the sound in a preset area, which therefore contains both the sound made by the user and other sounds in the external environment. After detecting the vibration information, the vibration detector sends it to the mobile phone, so the mobile phone receives the vibration information transmitted by the vibration sensor while it is collecting the sound.
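As a minimal sketch of such simultaneous acquisition: the patent does not specify an interface, so the two device callbacks below (read_vibration_sensor and read_microphone) are hypothetical placeholders standing in for the vibration detecting unit and the sound collection unit; the sketch only illustrates collecting both streams over one shared preset time window so that they stay aligned.

import numpy as np

# Hypothetical device callbacks (not part of the patent): each returns the
# samples produced by its sensor during roughly `duration` seconds.
def read_vibration_sensor(duration: float, fs: int) -> np.ndarray:
    return np.zeros(int(duration * fs))  # placeholder

def read_microphone(duration: float, fs: int) -> np.ndarray:
    return np.zeros(int(duration * fs))  # placeholder

def acquire_synchronized(preset_time: float, fs: int = 16000):
    """Collect the first vibration information and the second sound over the
    same first preset time, so the two streams share one time reference."""
    vibration = read_vibration_sensor(preset_time, fs)
    sound = read_microphone(preset_time, fs)
    # Trim both to the same length in case the devices deliver slightly
    # different sample counts.
    n = min(len(vibration), len(sound))
    return vibration[:n], sound[:n], fs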
After receiving the vibration information and collecting the sound information, the electronic equipment performs step 102.
Step 102: parsing the first vibration information to obtain first voiceprint information corresponding to the first sound, and parsing the second sound to obtain second voiceprint information corresponding to the second sound.
In the embodiments of the present invention, after receiving the first vibration information the electronic equipment parses it. Since what is collected is the vibration information over a period of time, a first vibration frequency corresponding to the first vibration information can be obtained from the vibration information of that period, and the first voiceprint information corresponding to the first vibration frequency can then be obtained from that first vibration frequency.
Specifically, in the embodiments of the present invention, after the electronic equipment receives the first vibration information it first parses the vibration frequency in the first vibration information. Because the vibration frequency covers the whole vibration process over a period of time, it can be regarded as the combination of the vibration frequencies at all time points. The electronic equipment therefore obtains, from the combination of all vibration frequencies within that time, a vibration frequency distribution map of the first vibration information, obtains from that distribution map the corresponding voiceprint image, and finally obtains from the voiceprint image the first voiceprint information corresponding to the first vibration information.
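The patent does not fix a particular signal representation. One common way to realize a "vibration frequency distribution map" is a short-time Fourier transform, whose magnitude can serve as the voiceprint image; the sketch below makes that assumption and is illustrative only. The same function can be applied to the vibration signal (giving the first voiceprint information) and to the second sound (giving the second voiceprint information).

import numpy as np
from scipy.signal import stft

def voiceprint_from_signal(signal: np.ndarray, fs: int,
                           frame_len: int = 512) -> np.ndarray:
    """Turn a time-domain signal (vibration or microphone) into a
    time-frequency magnitude map, used here as 'voiceprint information'.
    The STFT is only one possible realization of the patent's
    'vibration frequency distribution map'."""
    freqs, times, Z = stft(signal, fs=fs, nperseg=frame_len)
    return np.abs(Z)  # rows: frequency bins, columns: time frames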
It should be noted here that the first vibration information and the second sound need to be collected simultaneously; that is, the first vibration information corresponding to the first sound detected by the vibration detecting unit is obtained within a first preset time, and the second sound, containing the first sound, collected by the sound collection unit is obtained within the same first preset time. This ensures that the sound information and the vibration information are synchronized, which improves the accuracy of the subsequent denoising of the sound.
After parsing out the first voiceprint information corresponding to the first vibration information, the electronic equipment also parses the second sound collected by the sound collection unit; specifically, the collected second sound is parsed by a voiceprint parsing device in the electronic equipment, so that the second voiceprint information corresponding to the second sound is obtained.
After obtaining the first voiceprint information and the second voiceprint information, the electronic equipment performs step 103.
Step 103: obtaining third voiceprint information based on the first voiceprint information and the second voiceprint information.
After obtaining the first voiceprint information and the second voiceprint information in step 102, the electronic equipment matches the first voiceprint information with the second voiceprint information, and takes the voiceprint information contained in both of them as the third voiceprint information.
Specifically, in the embodiments of the present invention, the first voiceprint information and the second voiceprint information obtained by the electronic equipment are two corresponding voiceprint images, namely a first voiceprint image and a second voiceprint image. The electronic equipment matches the first voiceprint image with the second voiceprint image, mainly by recognizing the two voiceprint images; based on this recognition, the electronic equipment determines the voiceprint image content that is identical in the first voiceprint image and the second voiceprint image, and takes the voiceprint information corresponding to that content as the third voiceprint information.
Further, in the embodiments of the present invention, in order to obtain more accurate third voiceprint information and to avoid noise being matched into the third voiceprint information during the matching, the identical voiceprint information in the first voiceprint image and the second voiceprint image must last for a preset duration during the matching; that is, identical voiceprint information must persist for a certain time before it can become the final third voiceprint information. In this way external noise can be filtered out effectively.
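The patent does not prescribe a matching algorithm. One simple reading, sketched below, treats the two voiceprints as time-frequency magnitude maps (as in the earlier sketch), keeps the bins where both maps carry energy, and discards matches that do not persist for a minimum number of frames; the thresholds and the frame-persistence rule are assumptions for illustration only.

import numpy as np

def match_voiceprints(vp_vibration: np.ndarray, vp_sound: np.ndarray,
                      rel_threshold: float = 0.1,
                      min_frames: int = 5) -> np.ndarray:
    """Return a mask selecting the 'third voiceprint': time-frequency bins
    present in both the vibration-derived and the microphone-derived
    voiceprints, kept only if they persist for at least `min_frames`
    consecutive frames (the patent's preset-duration condition)."""
    thr_vib = rel_threshold * vp_vibration.max()
    thr_snd = rel_threshold * vp_sound.max()
    both = (vp_vibration > thr_vib) & (vp_sound > thr_snd)

    mask = np.zeros_like(both)
    for f in range(both.shape[0]):          # per frequency bin
        run_start = None
        for t in range(both.shape[1] + 1):
            active = t < both.shape[1] and both[f, t]
            if active and run_start is None:
                run_start = t                # a matching run begins
            elif not active and run_start is not None:
                if t - run_start >= min_frames:
                    mask[f, run_start:t] = True  # keep only persistent runs
                run_start = None
    return mask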
After obtaining the third voiceprint information, the electronic equipment performs step 104.
Step 104: converting the third voiceprint information into third sound information corresponding to the first sound.
After obtaining the third voiceprint information, the electronic equipment sends it to a voiceprint conversion device in the electronic equipment, which converts between voiceprint information and sound information; this conversion process converts the matched third voiceprint information into the third sound information.
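If the voiceprints are realized as STFT magnitude maps as in the earlier sketches, the conversion back to sound information can be realized by applying the match mask to the complex spectrum of the collected sound and inverting the transform. This is one possible realization, not the patent's prescribed one, and reusing the microphone signal's phase is an assumption.

import numpy as np
from scipy.signal import stft, istft

def third_sound_from_mask(sound: np.ndarray, mask: np.ndarray,
                          fs: int, frame_len: int = 512) -> np.ndarray:
    """Reconstruct the 'third sound information': keep only the
    time-frequency bins selected by the matching mask and invert the
    transform back to a waveform."""
    _, _, Z = stft(sound, fs=fs, nperseg=frame_len)
    Z_clean = Z * mask   # zero out bins not confirmed by the vibration voiceprint
    _, third_sound = istft(Z_clean, fs=fs, nperseg=frame_len)
    return third_sound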
In this way, the embodiments of the present invention convert the vibration information of the user's sound-producing part into the first voiceprint information and, at the same time, convert the collected sound into the second voiceprint information, obtain from the first voiceprint information and the second voiceprint information the third voiceprint information corresponding to the user's voiceprint, and convert the third voiceprint information into the corresponding third sound information. The noise in the external environment can therefore be filtered out directly, which solves the technical problems in the prior art that the noise removal may filter out the user's sound or filter the sound of the external environment incompletely, causing the sound processing of the electronic equipment to be inaccurate, and that only a single noise-removal approach is available. The electronic equipment can thus accurately remove the noise from the collected sound and obtain the user's sound accurately, which improves the accuracy of the sound processing of the electronic equipment and greatly improves the user experience.
The embodiments of the present invention are further described below with a specific application scenario.
In this scenario the electronic equipment is described as a mobile phone; of course, besides a mobile phone the electronic equipment may also be an earphone, a notebook computer, a palmtop computer or other electronic equipment, and the mobile phone is used here only for simplicity of illustration.
First, the mobile phone is connected with a vibration detector, which is placed at the user's sound-producing part. When the user speaks, for example says "going to the park tomorrow", the mobile phone first collects sound through its microphone. The sound collected by the microphone contains both the user's voice and other noise in the external environment, for example the phrase "the Internet function in a smart phone" coming from a television together with the user's phrase "going to the park tomorrow", and the collected sound is sent to the sound processing chip in the mobile phone. Meanwhile, while the user is speaking, the user's sound-producing part vibrates correspondingly, so the vibration detector collects the user's vibration information during this time and sends it to the processing chip in the mobile phone. The processing chip parses the vibration information, obtains the vibration frequency in it, and finally obtains the first voiceprint information corresponding to the vibration frequency; that is, the mobile phone obtains the first voiceprint information corresponding to the user's phrase "going to the park tomorrow".
At this point, the sound processing chip in the mobile phone also processes the collected sound to obtain the second voiceprint information corresponding to that sound; that is, the mobile phone parses out the voiceprint information corresponding to both "the Internet function in a smart phone" and "going to the park tomorrow". The mobile phone then matches the first voiceprint information with the second voiceprint information to obtain the third voiceprint information that is identical in the first voiceprint information and the second voiceprint information; in effect, it uses the first voiceprint information to find, in the second voiceprint information, the voiceprint information identical to the first voiceprint information, and once found, that voiceprint information is determined to be the third voiceprint information. For example, the mobile phone uses the voiceprint information corresponding to "going to the park tomorrow" to match against the voiceprint information corresponding to "the Internet function in a smart phone" and "going to the park tomorrow", so the matched voiceprint information is the voiceprint information corresponding to "going to the park tomorrow".
After obtaining the third voiceprint information, the mobile phone converts the third voiceprint information into the third sound for output. The third sound contains only the user's sound, namely "going to the park tomorrow", and the noise in the external environment has been filtered out accurately, which improves the noise-removal accuracy of the mobile phone and improves its sound processing capability.
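Putting the earlier sketches together, a self-contained toy run of this scenario might look as follows; the synthetic "vibration" and "microphone" signals, the sample rate and the thresholds are invented for illustration and are not taken from the patent.

import numpy as np
from scipy.signal import stft, istft

fs, frame_len = 16000, 512
t = np.arange(0, 2.0, 1.0 / fs)

# Toy signals: the user's speech is stood in for by a 220 Hz tone picked up
# by the vibration detector; the microphone hears it plus 1.3 kHz "TV" noise.
user_speech = np.sin(2 * np.pi * 220 * t)
vibration = user_speech                                          # first vibration information
microphone = user_speech + 0.8 * np.sin(2 * np.pi * 1300 * t)    # second sound

# First and second voiceprint information as STFT magnitude maps.
_, _, Z_vib = stft(vibration, fs=fs, nperseg=frame_len)
_, _, Z_mic = stft(microphone, fs=fs, nperseg=frame_len)
vp1, vp2 = np.abs(Z_vib), np.abs(Z_mic)

# Third voiceprint information: bins present in both maps
# (no duration check in this toy run).
mask = (vp1 > 0.1 * vp1.max()) & (vp2 > 0.1 * vp2.max())

# Third sound information: keep only confirmed bins and invert.
_, third_sound = istft(Z_mic * mask, fs=fs, nperseg=frame_len)
print("signal energy before masking:", float(np.sum(microphone ** 2)))
print("signal energy after masking: ", float(np.sum(third_sound ** 2)))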
Corresponding to the information processing method of the present invention, an embodiment of the present invention further provides an electronic equipment. Fig. 2 is a schematic structural diagram of the electronic equipment of the present invention, and the electronic equipment comprises:
a first obtaining unit 201, configured to obtain, while a user is making a first sound, first vibration information corresponding to the first sound;
a second obtaining unit 202, configured to obtain a second sound containing the first sound;
a resolution unit 203, configured to parse the first vibration information to obtain first voiceprint information corresponding to the first sound, and to parse the second sound to obtain second voiceprint information corresponding to the second sound;
a third obtaining unit 204, configured to obtain third voiceprint information based on the first voiceprint information and the second voiceprint information;
a converting unit 205, configured to convert the third voiceprint information into third sound information corresponding to the first sound; and
an output unit 206, configured to output the third sound information.
The first obtaining unit 201 and the second obtaining unit 202 collect information simultaneously: the first obtaining unit 201 obtains the first vibration information corresponding to the first sound, while the second obtaining unit 202 obtains the second sound containing the first sound. The resolution unit 203 in the electronic equipment then receives the first vibration information and the second sound and parses both, obtaining the first voiceprint information corresponding to the first vibration information and the second voiceprint information corresponding to the second sound. The third obtaining unit 204 in the electronic equipment then obtains the third voiceprint information based on the first voiceprint information and the second voiceprint information and sends it to the converting unit 205, which converts the third voiceprint information into the third sound information corresponding to the first sound. Finally, the output unit 206 in the electronic equipment outputs the third sound information.
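As a rough illustration of how these units could be composed in software, the class below mirrors the unit names of Fig. 2; the internal signal processing reuses the STFT assumptions of the earlier sketches and is not dictated by the patent.

import numpy as np
from scipy.signal import stft, istft

class ElectronicEquipment:
    """Toy composition of the units of Fig. 2: obtain both signals,
    parse them into voiceprints, match, convert and output."""

    def __init__(self, fs: int = 16000, frame_len: int = 512):
        self.fs, self.frame_len = fs, frame_len

    def resolution_unit(self, signal: np.ndarray) -> np.ndarray:
        # Parse a signal into its time-frequency representation.
        _, _, Z = stft(signal, fs=self.fs, nperseg=self.frame_len)
        return Z

    def third_obtaining_unit(self, Z_vib, Z_mic, rel_threshold: float = 0.1):
        # Keep only the bins present in both voiceprints.
        vp1, vp2 = np.abs(Z_vib), np.abs(Z_mic)
        return (vp1 > rel_threshold * vp1.max()) & (vp2 > rel_threshold * vp2.max())

    def converting_unit(self, Z_mic, mask) -> np.ndarray:
        # Convert the matched voiceprint back into sound information.
        _, third_sound = istft(Z_mic * mask, fs=self.fs, nperseg=self.frame_len)
        return third_sound

    def process(self, vibration: np.ndarray, sound: np.ndarray) -> np.ndarray:
        Z_vib = self.resolution_unit(vibration)   # first voiceprint (complex form)
        Z_mic = self.resolution_unit(sound)       # second voiceprint (complex form)
        mask = self.third_obtaining_unit(Z_vib, Z_mic)
        return self.converting_unit(Z_mic, mask)  # third sound information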
The third obtaining unit 204 of the electronic equipment has the specific structure shown in Fig. 3 and comprises:
a matching module 301, configured to perform voiceprint matching between the first voiceprint information and the second voiceprint information;
an obtaining module 302, configured to obtain, as the third voiceprint information, the voiceprint information contained in both the second voiceprint information and the first voiceprint information.
The resolution unit 203 of the electronic equipment has the specific structure shown in Fig. 4 and comprises:
a parsing module 401, configured to parse the first vibration information to obtain a first vibration frequency corresponding to the first vibration information;
a voiceprint obtaining module 402, configured to obtain, according to the first vibration frequency, the first voiceprint information corresponding to the first vibration frequency.
It should be noted that the first obtaining unit 201 in the electronic equipment is specifically configured to obtain, within a first preset time, the first vibration information corresponding to the first sound, and to obtain, within the first preset time, the second sound containing the first sound.
The one or more embodiments provided by the present invention have at least the following technical effects or advantages:
In the embodiments of the present invention, the vibration information of the user's sound-producing part is converted into first voiceprint information while the collected sound is converted into second voiceprint information, so that third voiceprint information corresponding to the user's voiceprint is obtained from the first voiceprint information and the second voiceprint information and converted into corresponding third sound information. In other words, the electronic equipment takes the first voiceprint information as a reference and extracts from the second voiceprint information the voiceprint information that matches it. Because the first voiceprint information is obtained from the vibration of the user's sound-producing part, it is an accurate voiceprint of the user's speech, so the user's sound information can be obtained clearly and the noise in the external environment can be filtered out directly. This solves the technical problem in the prior art that electronic equipment processes sound inaccurately, enables the electronic equipment to accurately remove the noise from the collected sound and obtain the user's sound accurately, improves the accuracy of the sound processing of the electronic equipment, and greatly improves the user experience.
In the embodiments of the present invention, the first vibration information corresponding to the first sound and the second sound containing the first sound are collected simultaneously within one preset time; that is, the electronic equipment marks the collected information with a clock signal and reads that clock signal when processing the collected information, which ensures that the information is collected synchronously, thereby improving the accuracy of the sound processing and of the noise removal of the electronic equipment.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or in one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device implementing the functions specified in one or more flows of the flowcharts and/or in one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or in one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and modifications.

Claims (8)

1. An information processing method, applied to an electronic equipment, characterized in that the method comprises:
while a vibration detecting unit for detecting the vibration produced when a user makes a sound is worn by the user at a preset position, and while the user is making a first sound, obtaining first vibration information, corresponding to the first sound, detected by the vibration detecting unit, and obtaining a second sound, containing the first sound, collected by a sound collection unit;
parsing the first vibration information to obtain first voiceprint information corresponding to the first sound, and parsing the second sound to obtain second voiceprint information corresponding to the second sound;
obtaining third voiceprint information based on the first voiceprint information and the second voiceprint information;
converting the third voiceprint information into third sound information corresponding to the first sound;
wherein obtaining the third voiceprint information based on the first voiceprint information and the second voiceprint information specifically comprises:
performing voiceprint matching between the first voiceprint information and the second voiceprint information;
obtaining, as the third voiceprint information, the voiceprint information contained in both the second voiceprint information and the first voiceprint information.
2. The method according to claim 1, characterized in that parsing the first vibration information to obtain the first voiceprint information corresponding to the first sound specifically comprises:
parsing the first vibration information to obtain a first vibration frequency corresponding to the first vibration information;
obtaining, according to the first vibration frequency, the first voiceprint information corresponding to the first vibration frequency.
3. The method according to claim 1, characterized in that obtaining the first vibration information, corresponding to the first sound, detected by the vibration detecting unit, and obtaining the second sound, containing the first sound, collected by the sound collection unit, is specifically:
obtaining, within a first preset time, the first vibration information corresponding to the first sound detected by the vibration detecting unit; and
obtaining, within the first preset time, the second sound, containing the first sound, collected by the sound collection unit.
4. The method according to any one of claims 1 to 3, characterized in that, after converting the third voiceprint information into the third sound information corresponding to the first sound, the method further comprises:
outputting the third sound information through a sound output unit in the electronic equipment.
5. An electronic equipment, characterized in that the electronic equipment comprises:
a first obtaining unit, configured to obtain, while a user is making a first sound, first vibration information corresponding to the first sound;
a second obtaining unit, configured to obtain a second sound containing the first sound;
a resolution unit, configured to parse the first vibration information to obtain first voiceprint information corresponding to the first sound, and to parse the second sound to obtain second voiceprint information corresponding to the second sound;
a third obtaining unit, configured to obtain third voiceprint information based on the first voiceprint information and the second voiceprint information;
a converting unit, configured to convert the third voiceprint information into third sound information corresponding to the first sound;
wherein the third obtaining unit specifically comprises:
a matching module, configured to perform voiceprint matching between the first voiceprint information and the second voiceprint information;
an obtaining module, configured to obtain, as the third voiceprint information, the voiceprint information contained in both the second voiceprint information and the first voiceprint information.
6. The electronic equipment according to claim 5, characterized in that the resolution unit specifically comprises:
a parsing module, configured to parse the first vibration information to obtain a first vibration frequency corresponding to the first vibration information;
a voiceprint obtaining module, configured to obtain, according to the first vibration frequency, the first voiceprint information corresponding to the first vibration frequency.
7. The electronic equipment according to claim 5, characterized in that the first obtaining unit is specifically configured to obtain, within a first preset time, the first vibration information corresponding to the first sound, and to obtain, within the first preset time, the second sound containing the first sound.
8. The electronic equipment according to claim 5, characterized in that the electronic equipment further comprises:
an output unit, configured to output the third sound information.
CN201210534978.7A 2012-12-11 2012-12-11 Information processing method and electronic equipment Active CN103871419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210534978.7A CN103871419B (en) 2012-12-11 2012-12-11 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210534978.7A CN103871419B (en) 2012-12-11 2012-12-11 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN103871419A CN103871419A (en) 2014-06-18
CN103871419B true CN103871419B (en) 2017-05-24

Family

ID=50909882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210534978.7A Active CN103871419B (en) 2012-12-11 2012-12-11 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103871419B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469806B (en) * 2014-09-12 2020-02-21 联想(北京)有限公司 Sound processing method, device and system
CN104601825A (en) * 2015-02-16 2015-05-06 联想(北京)有限公司 Control method and control device
CN105611061A (en) * 2015-12-31 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Voice transmission method and device and mobile terminal
CN106326698A (en) * 2016-08-11 2017-01-11 上海青橙实业有限公司 Working state setting method for terminal and terminal
CN107293293A (en) * 2017-05-22 2017-10-24 深圳市搜果科技发展有限公司 A kind of voice instruction recognition method, system and robot
CN107331399A (en) * 2017-07-05 2017-11-07 广东小天才科技有限公司 Learning effect detection method and system and terminal equipment
CN108449507B (en) * 2018-03-12 2020-04-17 Oppo广东移动通信有限公司 Voice call data processing method and device, storage medium and mobile terminal
EP3790006A4 (en) * 2018-06-29 2021-06-09 Huawei Technologies Co., Ltd. Voice control method, wearable apparatus, and terminal
CN110265007B (en) * 2019-05-11 2020-07-24 出门问问信息科技有限公司 Control method and control device of voice assistant system and Bluetooth headset
CN111009253B (en) * 2019-11-29 2022-10-21 联想(北京)有限公司 Data processing method and device
WO2021237740A1 (en) * 2020-05-29 2021-12-02 华为技术有限公司 Voice signal processing method and related device therefor


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9549252B2 (en) * 2010-08-27 2017-01-17 Nokia Technologies Oy Microphone apparatus and method for removing unwanted sounds
FR2976111B1 (en) * 2011-06-01 2013-07-05 Parrot AUDIO EQUIPMENT COMPRISING MEANS FOR DEBRISING A SPEECH SIGNAL BY FRACTIONAL TIME FILTERING, IN PARTICULAR FOR A HANDS-FREE TELEPHONY SYSTEM

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887728A (en) * 2003-11-26 2010-11-17 微软公司 Many sensings sound enhancement method and device
CN1750123A (en) * 2004-09-17 2006-03-22 微软公司 Method and apparatus for multi-sensory speech enhancement
CN101199006A (en) * 2005-06-20 2008-06-11 微软公司 Multi-sensory speech enhancement using a clean speech prior
CN102084668A (en) * 2008-05-22 2011-06-01 伯恩同通信有限公司 A method and a system for processing signals
CN102411936A (en) * 2010-11-25 2012-04-11 歌尔声学股份有限公司 Speech enhancement method and device as well as head de-noising communication earphone

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yanli Zheng et al., "Air- and Bone-Conductive Integrated Microphones for Robust Speech Detection and Enhancement", ASRU 2003, 2003-12-31, pp. 249-254 *

Also Published As

Publication number Publication date
CN103871419A (en) 2014-06-18

Similar Documents

Publication Publication Date Title
CN103871419B (en) Information processing method and electronic equipment
CN109256146B (en) Audio detection method, device and storage medium
CN105259459B (en) Automation quality detecting method, device and the equipment of a kind of electronic equipment
CN106469555B (en) Voice recognition method and terminal
CN111768760B (en) Multi-mode voice endpoint detection method and device
CN111307274A (en) Method and device for diagnosing problem noise source based on big data information
CN106210219A (en) Noise-reduction method and device
CN110335590B (en) Voice recognition test method, device and system
CN110992963A (en) Network communication method, device, computer equipment and storage medium
CN103903597A (en) Piano electronic tuning method based on machine vision and device
CN104092809A (en) Communication sound recording method and recorded communication sound playing method and device
Amerini et al. Robust smartphone fingerprint by mixing device sensors features for mobile strong authentication
CN114187922A (en) Audio detection method and device and terminal equipment
CN111028838A (en) Voice wake-up method, device and computer readable storage medium
CN208223542U (en) A kind of construction site environmental monitoring system
CN112750426B (en) Voice analysis system of mobile terminal
CN110808062B (en) Mixed voice separation method and device
CN104064190A (en) Human body audio digital collection and recognition system and implementation method thereof
CN106197803A (en) Fall acquisition method and the terminal unit of data
CN111667837A (en) Conference record acquisition method, intelligent terminal and device with storage function
CN115312036A (en) Model training data screening method and device, electronic equipment and storage medium
CN114882912A (en) Method and device for testing transient defects of time domain of acoustic signal
CN108766127A (en) Sign language exchange method, unit and storage medium
CN110875043B (en) Voiceprint recognition method and device, mobile terminal and computer readable storage medium
CN102824164A (en) ECG (electroglottograph) measuring method and device for vocal cords

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant