CN109729454A - Acoustic microphone processing device for a neck-worn voice-interaction headset - Google Patents

Acoustic microphone processing device for a neck-worn voice-interaction headset Download PDF

Info

Publication number
CN109729454A
CN109729454A CN201711024164.8A
Authority
CN
China
Prior art keywords
control device
processing unit
neck
sound
acoustic microphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711024164.8A
Other languages
Chinese (zh)
Inventor
朱华明
武巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jinruidelu Technology Co Ltd
Original Assignee
Beijing Jinruidelu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jinruidelu Technology Co Ltd filed Critical Beijing Jinruidelu Technology Co Ltd
Priority to CN201711024164.8A priority Critical patent/CN109729454A/en
Publication of CN109729454A publication Critical patent/CN109729454A/en
Pending legal-status Critical Current

Links

Abstract

The present invention provides an acoustic microphone processing device for a neck-worn voice-interaction headset, comprising: a microphone pickup device, a speech recognition module, a semantic understanding module, a speech synthesis module, a control device and an execution device. By mixing and processing multiple channels of audio data, the present invention effectively removes noise and interference, obtains clear control instructions, outputs valid and correct audio data, and gives the user the best possible listening experience.

Description

Acoustic microphone processing device for a neck-worn voice-interaction headset
Technical field
The present invention relates to the technical field of intelligent wearable devices, and in particular to an acoustic microphone processing device for a neck-worn voice-interaction headset.
Background art
With the development of intelligent wearable devices and the continuous improvement of people's living standards, various intelligent wearable devices such as smart watches are becoming more and more widespread, and intelligent wearable devices have become an indispensable means of communication in daily life.
People can hear sound because vibrations in the air are transmitted through the outer-ear canal to the eardrum, and the vibrations formed at the eardrum drive the auditory nerve. When a person's outer or middle ear is damaged, or the ear canal is blocked by the hand, acoustic vibrations can still be transmitted through the person's skin and bones and thereby stimulate the auditory nerve.
Bone conduction is a mode of sound conduction in which sound waves are transmitted through the skull, temporal bone, bony labyrinth, inner-ear lymph, cochlea, auditory nerve and auditory center; this is bone conduction technology. Bone conduction vibrates the skull or temporal bone and transmits sound directly to the inner ear, without passing through the outer and middle ear. Compared with the traditional air-conduction approach, in which sound waves are produced by a loudspeaker diaphragm, bone conduction omits many sound-wave transmission steps, can reproduce sound clearly in noisy environments, and does not disturb other people because the sound waves do not spread through the air.
However, bone conduction technology also has the following disadvantages. (1) The quality of bone-conducted sound depends on where the device contacts the bone and on the characteristics of the tissue. For example, differences in users' age, gender or body build mean that different users have different experiences with the same bone-conduction headset, and this difference usually appears as degraded performance. (2) When bone conduction is used for making or taking calls, the bone-conduction device must be held close to the bone so that the sound waves pass directly through the bone to the auditory nerve; the wearing style therefore requires the device to press firmly against the bone, since any looseness affects the quality of sound transmission. This tight pressure on the bone reduces wearing comfort and affects skin health to varying degrees. (3) Bone and soft tissue attenuate and delay the signal amplitude in a frequency-selective way, so high-fidelity or wide-band audio is difficult to deliver to the auditory nerve through bone conduction; most users of existing bone-conduction headsets therefore complain of poor "sound quality" and "timbre". (4) Sound leakage. Because of the characteristics of vibration conducted through solids, most existing bone-conduction technologies cannot truly solve the leakage problem: the prior art compensates for the frequency-dependent attenuation of bone and tissue with high volume and large vibration signals, which is like drinking poison to quench thirst — users complain of severe sound leakage, or the larger drive power greatly increases the size and weight of the bone-conduction transducer and makes the whole device too heavy. (5) A bone-conduction headset leaves the ears open, so when the user is in a noisy environment the open design means the sound transmitted by the headset cannot be heard at all.
The patent application with application No. 102084668A discloses a method and system for processing signals, the system comprising: (a) a processor configured to process a first input signal detected by a first microphone at a detection instant, a second input signal detected by a second microphone at the detection instant, and a third input signal detected by the first microphone at the detection instant, so as to generate a corrected signal in response to the first, second and third input signals; and (b) a communication interface configured to provide the corrected signal to an external system. The method reduces noise by applying a convolution function and obtains a more accurate speech signal. However, because several channels of sound are mixed, some sounds are easily mistaken for the correct sound and recorded in the track, so the output sound is not entirely accurate and clear.
The patent application with application No. 105721973A discloses a bone-conduction headset, an audio processing method for it, and an audio playback apparatus based on the headset. The bone-conduction headset comprises a bone-and-tissue modelling module and a digital pre-corrector, a delay calculation unit, a digital-to-analog converter, an analog-to-digital converter, a first low-pass filter, a second low-pass filter, an audio amplifier, an audio drive amplifier, at least one first microphone and at least one bone-conduction vibrator. It monitors in real time the attenuation characteristics of the bones and tissue of different users, generates a compensation transfer function based on this attenuation information, applies the compensation transfer function to digitally pre-correct the input audio signal, and then conducts the signal through bone and tissue. That application pre-corrects the input audio by compensation, but the method mainly addresses the attenuation of the audio signal and cannot distinguish the correct audio data from noise.
Summary of the invention
To solve the above technical problems, the present invention proposes combining two conventional acoustic microphones (a first microphone and a second microphone), performing a single-frame decision on each of the two audio inputs, determining that the frame with the higher speech probability is the speech frame, and combining the selected speech frames into the output audio data.
The present invention provides an acoustic microphone processing device for a neck-worn voice-interaction headset, comprising: a microphone pickup device, a speech recognition module, a semantic understanding module, a speech synthesis module, a control device and an execution device;
The microphone pickup device is arranged on the earbud and/or on the host of the neck-worn voice-interaction headset;
The speech recognition module is integrated on the control device;
The semantic understanding module is integrated on the control device and is connected with the speech recognition module;
The speech synthesis module is integrated on the control device and is connected with the semantic understanding module;
The control device is arranged on the host;
The execution device is arranged on the earbud and/or on the host;
The microphone pickup device is connected with the control device by a wired or wireless connection;
The control device is connected with the execution device by a flexible cable;
The microphone pickup device captures the user's voice, converts the voice into a digital signal and sends it to the control device;
The control device receives the digital signal, converts it into a control signal through computation and sends it to the execution device;
The execution device receives the control signal issued by the control device and gives the user a wake-up prompt or executes the corresponding control.
Preferably, the microphone pickup device is any one or more of a dynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon microphone and a semiconductor microphone.
In any of the above preferred embodiments, the control device integrates a processing unit.
In any of the above preferred embodiments, the processing unit is an MTK 6580.
In any of the above preferred embodiments, the execution device includes a loudspeaker and/or a screen.
In any of the above preferred embodiments, a filter is further provided between the microphone pickup device and the control device.
In any of the above preferred embodiments, a signal conversion device is further provided between the control device and the execution device.
In any of the above preferred embodiments, the device further includes at least one battery pack, and the battery pack is connected with the control device by a flexible cable.
In any of the above preferred embodiments, the control device also integrates a working memory (RAM).
In any of the above preferred embodiments, the control device also integrates on-board storage (ROM) or an on-board storage (ROM) slot.
In any of the above preferred embodiments, the device further includes a vibration sensor, and the vibration sensor is electrically connected with the control device by a flexible cable.
In any of the above preferred embodiments, the digital signal of the microphone pickup device includes a first audio signal and a second audio signal.
In any of the above preferred embodiments, the first audio signal refers to the user's voice information captured by the first microphone.
In any of the above preferred embodiments, the second audio signal refers to the ambient sound captured by the second microphone during the period in which the first audio signal occurs.
In any of the above preferred embodiments, the control device further includes the following sub-modules: an acoustic-feature detection sub-module for performing acoustic-feature detection on the captured audio signals; a primary-source decision sub-module for deciding the primary sound source; a primary-source compensation sub-module for compensating the primary source; and a noise-reduction sub-module for eliminating noise.
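For orientation, a minimal Python sketch of how these four sub-modules could be chained per 20 ms frame on the control device is given below; the function names and the simplified stand-in bodies are illustrative assumptions and do not reproduce the patent's own implementation. The detailed decision rules, compensation and noise-suppression steps used by the invention are described in Embodiment 1 below.

```python
import numpy as np

SAMPLE_RATE = 16000                      # assumed sampling rate
FRAME_LEN = SAMPLE_RATE * 20 // 1000     # 20 ms frames, as in the detection step of Embodiment 1


def detect_acoustic_features(frame: np.ndarray) -> dict:
    """Acoustic-feature detection sub-module (stand-in): energy and zero-crossing rate only."""
    energy = float(np.mean(frame ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
    return {"energy": energy, "zcr": zcr}


def decide_primary_source(frame_a, feats_a, frame_b, feats_b):
    """Primary-source decision sub-module (stand-in): pick the higher-energy channel as main."""
    return (frame_a, frame_b) if feats_a["energy"] >= feats_b["energy"] else (frame_b, frame_a)


def compensate_primary_source(main: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Primary-source compensation sub-module (stand-in): weighted superposition of the two spectra."""
    spec = 0.8 * np.fft.rfft(main) + 0.2 * np.fft.rfft(other)
    return np.fft.irfft(spec, n=len(main))


def reduce_noise(frame: np.ndarray, noise_floor: float = 1e-4) -> np.ndarray:
    """Noise-reduction sub-module (stand-in): suppress low-magnitude spectral bins."""
    spec = np.fft.rfft(frame)
    spec[np.abs(spec) < noise_floor] = 0.0
    return np.fft.irfft(spec, n=len(frame))


def process_frame_pair(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Run one frame from each microphone through detection, decision, compensation and noise reduction."""
    main, other = decide_primary_source(
        frame_a, detect_acoustic_features(frame_a),
        frame_b, detect_acoustic_features(frame_b),
    )
    return reduce_noise(compensate_primary_source(main, other))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal(FRAME_LEN) * 0.01
    b = rng.standard_normal(FRAME_LEN) * 0.01
    print(process_frame_pair(a, b).shape)
```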
By processing the two audio channels, the present invention effectively removes noise and interference, obtains clear and valid audio data, improves the speech recognition accuracy of the neck-worn voice-interaction headset, further improves the accuracy of the resulting service commands, and improves user satisfaction.
Description of the drawings
Fig. 1 is a structural schematic diagram of a preferred embodiment of the invention;
Fig. 2 is a module diagram of a preferred embodiment of the invention.
Reference numerals in the drawings:
100 acoustic microphone processing device for a neck-worn voice-interaction headset;
10 microphone pickup device;
101 first microphone; 102 second microphone;
20 control device;
201 execution device; 202 vibration sensor; 203 battery pack;
231 acoustic-feature detection sub-module; 232 primary-source decision sub-module; 233 primary-source compensation sub-module; 234 noise-reduction sub-module.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
Embodiment one
As shown in Fig. 1, an acoustic microphone processing device 100 for a neck-worn voice-interaction headset comprises: a microphone pickup device 10, a speech recognition module, a semantic understanding module, a speech synthesis module, a control device 20 and an execution device 201;
The microphone pickup device 10 is arranged on the earbud and/or on the host of the neck-worn voice-interaction headset;
The speech recognition module is integrated on the control device 20;
The semantic understanding module is integrated on the control device 20 and is connected with the speech recognition module;
The speech synthesis module is integrated on the control device 20 and is connected with the semantic understanding module;
The control device 20 is arranged on the host;
The execution device 201 is arranged on the earbud and/or on the host;
The microphone pickup device 10 is connected with the control device 20 by a wired or wireless connection;
The control device 20 is connected with the execution device 201 by a flexible cable;
The microphone pickup device 10 captures the user's voice, converts the voice into a digital signal and sends it to the control device 20;
The control device 20 receives the digital signal, converts it into a control signal through computation and sends it to the execution device 201;
The execution device 201 receives the control signal issued by the control device 20 and gives the user a wake-up prompt or executes the corresponding control.
The microphone pickup device 10 is any one or more of a dynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon microphone and a semiconductor microphone.
The control device 20 integrates a processing unit.
The processing unit is an MTK 6580.
The execution device 201 includes a loudspeaker and/or a screen.
A filter is further provided between the microphone pickup device 10 and the control device 20.
A signal conversion device is further provided between the control device 20 and the execution device 201.
The device further includes at least one battery pack 203, and the battery pack 203 is connected with the control device 20 by a flexible cable.
The control device 20 also integrates a working memory (RAM).
The control device 20 also integrates on-board storage (ROM) or an on-board storage (ROM) slot.
The device further includes a vibration sensor 202, and the vibration sensor 202 is electrically connected with the control device 20 by a flexible cable.
The digital signal of the microphone pickup device 10 includes a first audio signal and a second audio signal.
The first audio signal refers to the user's voice information captured by the first microphone 101.
The second audio signal refers to the ambient sound captured by the second microphone 102 during the period in which the first audio signal occurs.
The control device 20 further includes the following sub-modules: an acoustic-feature detection sub-module 231 for performing acoustic-feature detection on the captured audio signals; a primary-source decision sub-module 232 for deciding the primary sound source; a primary-source compensation sub-module 233 for compensating the primary source; and a noise-reduction sub-module 234 for eliminating noise.
The acoustic-feature detection proceeds as follows: 1) extract audio frames x_i(n) with a frame length of 20 ms, and compute the average energy E_i, the zero-crossing rate ZCR_i, the short-term autocorrelation R_i(k) and the short-term cross-correlation C_ij(k); 2) from the average energy E_i, the zero-crossing rate ZCR_i, the short-term autocorrelation R_i(k) and the short-term cross-correlation C_ij(k), compute the non-silence probability and the speech probability of the current frame,
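The publication reproduces the underlying formulas only as images, so they are not restated here; the sketch below instead uses standard textbook definitions of the four per-frame quantities and normalizes the two probabilities against empirical maxima, as noted in the next step. The constants and function names are illustrative assumptions, not the patent's formulas.

```python
import numpy as np


def frame_features(x_i: np.ndarray, x_j: np.ndarray, max_lag: int = 64):
    """Per-frame features for channel i (x_i) against channel j (x_j), 20 ms frames assumed.

    Standard definitions are used here: average energy E_i, zero-crossing rate ZCR_i,
    short-term autocorrelation R_i(k) and short-term cross-correlation C_ij(k).
    """
    n = len(x_i)
    energy = float(np.mean(x_i ** 2))                                  # E_i
    zcr = float(np.mean(np.abs(np.diff(np.sign(x_i)))) / 2)            # ZCR_i
    lags = range(1, max_lag + 1)
    r_i = np.array([np.sum(x_i[:-k] * x_i[k:]) / n for k in lags])     # R_i(k)
    c_ij = np.array([np.sum(x_i[:-k] * x_j[k:]) / n for k in lags])    # C_ij(k)
    return energy, zcr, r_i, c_ij


def frame_probabilities(energy, zcr, r_i, c_ij,
                        emp_max_energy_zcr=1e-3, emp_max_corr=1e-3):
    """Non-silence and speech probabilities, normalized by assumed empirical maxima."""
    p_nonsilent = min(1.0, (energy * zcr) / emp_max_energy_zcr)
    p_speech = min(1.0, (float(np.max(r_i)) * float(np.max(c_ij))) / emp_max_corr)
    return p_nonsilent, p_speech
```

At a 16 kHz sampling rate, a 20 ms frame contains 320 samples.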
where the normalization constants are, for channel i, the empirical value of max(E_i·ZCR_i) and the empirical value of max{max[R_i(k)]·max[C_ij(k)]}. 3) The acoustic-feature detection then uses the non-silence probability and the speech probability of the current frame of channel i, together with empirical thresholds for the correlation decision, to determine the type of the current frame, i.e. whether it is a noise frame, a speech frame, or a quiet ambient-sound frame; here Ambient denotes a quiet ambient-sound frame, Noise a noise frame and Speech a speech frame. Primary-source decision: the primary-source decision sub-module 232 selects, according to the probability values and frame-type results of the current frame in each channel, the current frame of one channel as the primary source of the current frame position. The decision rules are as follows: 1) if one channel carries a Speech frame and the other channel carries an Ambient frame or a Noise frame, the former channel is chosen as the main data path for the current frame position; 2) if one channel carries an Ambient frame and the other channel carries a Noise frame, the former channel is chosen as the main data path for the current frame position; 3) if both channels carry the same frame type, the channel with the larger probability value is chosen as the main data path for the current frame position. Primary-source compensation: after the primary source of the current frame position has been decided, the compensation sub-module extracts valid data from the other channel and compensates the speech components of the primary source. The speech-component compensation methods are: 1) sub-band weighted superposition over the whole spectrum using the valid audio data of the different channels in the frequency domain; 2) spectrum replication based on the correlation characteristics of the valid low-frequency sub-band data, to compensate the high-frequency sub-band data. Noise elimination: the compensated audio data still contains a small amount of noise, so the noise-reduction sub-module 234 obtains the noise spectral characteristics from the noise frames adjacent to the speech frames on the main data path, and effectively suppresses the noise spectral components of the speech frames in the frequency domain, thereby obtaining cleaner, valid speech data. Output: the finally generated valid speech data is pushed to the terminal device.
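As one possible reading of the frame-type decision and the three primary-source rules above, the following sketch classifies each channel's frame and then selects the main data path; the thresholds are hypothetical empirical values, not taken from the patent.

```python
from enum import Enum


class FrameType(Enum):
    AMBIENT = 0   # quiet ambient-sound frame
    NOISE = 1     # noise frame
    SPEECH = 2    # speech frame


def classify_frame(p_nonsilent: float, p_speech: float,
                   silence_thr: float = 0.3, speech_thr: float = 0.6) -> FrameType:
    """Frame-type decision with hypothetical empirical thresholds."""
    if p_nonsilent < silence_thr:
        return FrameType.AMBIENT
    return FrameType.SPEECH if p_speech >= speech_thr else FrameType.NOISE


def choose_main_channel(type_a: FrameType, p_a: float,
                        type_b: FrameType, p_b: float) -> int:
    """Primary-source decision rules 1)-3): return 0 for channel A, 1 for channel B."""
    rank = {FrameType.SPEECH: 2, FrameType.AMBIENT: 1, FrameType.NOISE: 0}
    if rank[type_a] != rank[type_b]:
        # Rules 1) and 2): a Speech frame beats Ambient and Noise; Ambient beats Noise.
        return 0 if rank[type_a] > rank[type_b] else 1
    # Rule 3): same frame type on both channels -> the larger probability value wins.
    return 0 if p_a >= p_b else 1
```

For example, with these assumed thresholds, choose_main_channel(classify_frame(0.9, 0.8), 0.8, classify_frame(0.4, 0.2), 0.2) returns 0: channel A, classified as a Speech frame, becomes the main data path against a Noise frame on channel B.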
The main audio data is imported, the environment-decision data stored in memory is retrieved, and the main audio data is compared with the environment-decision data to determine the ambient noise conditions at the time the main audio was captured. The environmental noise data is then retrieved from memory and compared frame by frame with the main audio data, the components of each main-audio frame that match the environmental noise data are removed, and valid, noise-free audio data is generated.
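One plausible way to realize this frame-by-frame comparison is to subtract a stored environment-noise spectral profile from each main-audio frame, as sketched below; the profile format, the over-subtraction factor and the function name are assumptions made for illustration.

```python
import numpy as np


def remove_environment_noise(main_frames: np.ndarray, noise_profile: np.ndarray,
                             over_subtraction: float = 1.0) -> np.ndarray:
    """Subtract a stored environment-noise magnitude profile from each main-audio frame.

    main_frames: array of shape (num_frames, frame_len) with the main audio data.
    noise_profile: magnitude spectrum of the stored environmental noise,
                   length frame_len // 2 + 1 (one value per rfft bin).
    """
    cleaned = np.empty(main_frames.shape, dtype=float)
    for idx, frame in enumerate(main_frames):
        spec = np.fft.rfft(frame)
        magnitude = np.abs(spec)
        phase = np.angle(spec)
        # Remove the spectral components that match the stored environmental noise.
        new_mag = np.maximum(magnitude - over_subtraction * noise_profile, 0.0)
        cleaned[idx] = np.fft.irfft(new_mag * np.exp(1j * phase), n=frame.shape[0])
    return cleaned
```

In practice the noise profile could be averaged over the Noise frames identified by the acoustic-feature detection step.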
To aid understanding, the present invention has been described in detail above with reference to specific embodiments, but these embodiments do not limit the invention. Any simple modification made to the above embodiments in accordance with the technical essence of the invention still falls within the scope of the technical solution of the invention. The description of each embodiment focuses on its differences from the other embodiments; for the same or similar parts, the embodiments may be cross-referenced. Since the system embodiments substantially correspond to the method embodiments, they are described relatively briefly, and reference may be made to the corresponding parts of the description of the method embodiments.
The methods and apparatus of the present invention may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the steps of the method is given only for illustration; the steps of the method of the invention are not limited to the order described above unless otherwise specifically stated. Furthermore, in some embodiments the invention may be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing the method according to the invention. Thus the invention also covers a recording medium storing a program for executing the method according to the invention.
The description of the invention has been presented for purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be obvious to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles of the invention and its practical application, and to enable those skilled in the art to understand the invention and to design various embodiments with various modifications suited to particular uses.

Claims (10)

1. An acoustic microphone processing device for a neck-worn voice-interaction headset, comprising: a microphone pickup device, a speech recognition module, a semantic understanding module, a speech synthesis module, a control device and an execution device; characterized in that:
The microphone pickup device is arranged on the earbud and/or on the host of the neck-worn voice-interaction headset;
The speech recognition module is integrated on the control device;
The semantic understanding module is integrated on the control device and is connected with the speech recognition module;
The speech synthesis module is integrated on the control device and is connected with the semantic understanding module;
The control device is arranged on the host;
The execution device is arranged on the earbud and/or on the host;
The microphone pickup device is connected with the control device by a wired or wireless connection;
The control device is connected with the execution device by a flexible cable;
The microphone pickup device captures the user's voice, converts the voice into a digital signal and sends it to the control device;
The control device receives the digital signal, converts it into a control signal through computation and sends it to the execution device;
The execution device receives the control signal issued by the control device and gives the user a wake-up prompt or executes the corresponding control.
2. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 1, characterized in that the microphone pickup device is any one or more of a dynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon microphone and a semiconductor microphone.
3. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 2, characterized in that the control device integrates a processing unit.
4. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 3, characterized in that the processing unit is an MTK 6580.
5. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 4, characterized in that the execution device includes a loudspeaker and/or a screen.
6. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 5, characterized in that a filter is further provided between the microphone pickup device and the control device.
7. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 6, characterized in that a signal conversion device is further provided between the control device and the execution device.
8. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 7, characterized in that the device further comprises at least one battery pack, and the battery pack is connected with the control device by a flexible cable.
9. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 8, characterized in that the control device also integrates a working memory (RAM).
10. The acoustic microphone processing device for a neck-worn voice-interaction headset according to claim 9, characterized in that the control device also integrates on-board storage (ROM) or an on-board storage (ROM) slot.
CN201711024164.8A 2017-10-27 2017-10-27 Acoustic microphone processing device for a neck-worn voice-interaction headset Pending CN109729454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711024164.8A CN109729454A (en) 2017-10-27 2017-10-27 Acoustic microphone processing device for a neck-worn voice-interaction headset

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711024164.8A CN109729454A (en) 2017-10-27 2017-10-27 Acoustic microphone processing device for a neck-worn voice-interaction headset

Publications (1)

Publication Number Publication Date
CN109729454A true CN109729454A (en) 2019-05-07

Family

ID=66292144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711024164.8A Pending CN109729454A (en) Acoustic microphone processing device for a neck-worn voice-interaction headset

Country Status (1)

Country Link
CN (1) CN109729454A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464918A (en) * 2020-01-31 2020-07-28 美律电子(深圳)有限公司 Earphone and earphone set

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001306199A (en) * 2000-04-17 2001-11-02 Sharp Corp Network equipment controller
CN102138337A (en) * 2008-08-13 2011-07-27 W·W·格雷林 Wearable headset with self-contained vocal feedback and vocal command

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001306199A (en) * 2000-04-17 2001-11-02 Sharp Corp Network equipment controller
CN102138337A (en) * 2008-08-13 2011-07-27 W·W·格雷林 Wearable headset with self-contained vocal feedback and vocal command

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464918A (en) * 2020-01-31 2020-07-28 美律电子(深圳)有限公司 Earphone and earphone set

Similar Documents

Publication Publication Date Title
CN107071647B (en) A kind of sound collection method, system and device
US9380374B2 (en) Hearing assistance systems configured to detect and provide protection to the user from harmful conditions
US20160316304A1 (en) Hearing assistance system
EP2882203A1 (en) Hearing aid device for hands free communication
WO2016167878A1 (en) Hearing assistance systems configured to enhance wearer's ability to communicate with other individuals
US11700493B2 (en) Hearing aid comprising a left-right location detector
WO2016167877A1 (en) Hearing assistance systems configured to detect and provide protection to the user harmful conditions
US11589173B2 (en) Hearing aid comprising a record and replay function
EP3873110A1 (en) Hearing aid determining turn-taking
CN207518800U (en) Neck wears formula interactive voice earphone
CN207995324U (en) Neck wears formula interactive voice earphone
CN207518802U (en) Neck wears formula interactive voice earphone
CN109729454A (en) The sound wheat processing unit of formula interactive voice earphone is worn for neck
CN109729471A (en) The ANC denoising device of formula interactive voice earphone is worn for neck
CN109729463A (en) The compound audio signal reception device of sound wheat bone wheat of formula interactive voice earphone is worn for neck
CN207518792U (en) Neck wears formula interactive voice earphone
US20220210581A1 (en) Hearing aid comprising a feedback control system
CN207518791U (en) Neck wears formula interactive voice earphone
CN109729462A (en) The bone wheat processing unit of formula interactive voice earphone is worn for neck
CN207518804U (en) The telecommunication devices of formula interactive voice earphone are worn for neck
CN109729470A (en) The sound wheat harvest sound processor of formula interactive voice earphone is worn for neck
CN109729457A (en) The bone wheat harvest sound processor of formula interactive voice earphone is worn for neck
CN207518801U (en) The remote music playing device of formula interactive voice earphone is worn for neck
CN109729472A (en) Neck wears exchange method, system and the device of formula interactive voice earphone
CN114630223B (en) Method for optimizing functions of hearing-wearing device and hearing-wearing device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination