WO2023160275A9 - Sound signal processing method and headphone device - Google Patents

Sound signal processing method and headphone device

Info

Publication number
WO2023160275A9
Authority
WO
WIPO (PCT)
Prior art keywords
signal
sound signal
external
filter
sound
Prior art date
Application number
PCT/CN2023/071087
Other languages
English (en)
French (fr)
Other versions
WO2023160275A1 (zh)
Inventor
郭露
王君
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Priority to US 18/562,609 (published as US20240251197A1)
Priority to EP 23758900.7A (published as EP4322553A4)
Publication of WO2023160275A1
Publication of WO2023160275A9

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/02Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1091Details not provided for in groups H04R1/1008 - H04R1/1083
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1781Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
    • G10K11/17821Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the input signals only
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1783Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions
    • G10K11/17837Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions by retaining part of the ambient acoustic environment, e.g. speech or alarm signals that the user needs to hear
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1785Methods, e.g. algorithms; Devices
    • G10K11/17853Methods, e.g. algorithms; Devices of the filter
    • G10K11/17854Methods, e.g. algorithms; Devices of the filter the filter being an adaptive filter
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17879General system configurations using both a reference signal and an error signal
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17879General system configurations using both a reference signal and an error signal
    • G10K11/17881General system configurations using both a reference signal and an error signal the reference signal being an acoustic signal, e.g. recorded with a microphone
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17885General system configurations additionally using a desired external signal, e.g. pass-through audio such as music or speech
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1016Earpieces of the intra-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/10Applications
    • G10K2210/108Communication systems, e.g. where useful sound is kept and noise is cancelled
    • G10K2210/1081Earphones, e.g. for telephones, ear protectors or headsets
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/10Applications
    • G10K2210/129Vibration, e.g. instead of, or in addition to, acoustic noise
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/30Means
    • G10K2210/301Computational
    • G10K2210/3023Estimation of noise, e.g. on error signals
    • G10K2210/30231Sources, e.g. identifying noisy processes or components
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/30Means
    • G10K2210/301Computational
    • G10K2210/3025Determination of spectrum characteristics, e.g. FFT
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K2210/00Details of active noise control [ANC] covered by G10K11/178 but not provided for in any of its subgroups
    • G10K2210/30Means
    • G10K2210/321Physical
    • G10K2210/3224Passive absorbers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2410/00Microphones
    • H04R2410/05Noise reduction with a separate noise microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01Hearing devices using active noise cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/05Electronic compensation of the occlusion effect
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505Customised settings for obtaining desired overall acoustical characteristics using digital signal processing

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a sound signal processing method and a headphone device.
  • In-ear headphones, over-ear headphones, hearing aids, and other headphone devices are favored by more and more consumers.
  • After the user wears the headset device, the sealing of the ear cap and earmuff reduces the external sound heard by the user; and when the user speaks while wearing the headset, the perceived intensity of the low-frequency component of the user's own voice increases, producing an occlusion effect that makes the user's speech sound dull and unclear.
  • Embodiments of the present application provide a sound signal processing method and a headphone device, which can improve the restoration of external sound signals while suppressing the occlusion effect.
  • an embodiment of the present application proposes a headphone device, including: an external microphone, an error microphone, a speaker, a feedforward filter, a feedback filter, a target filter, a first audio processing unit and a second audio processing unit; the external microphone is used to collect external sound signals, which include a first external environment sound signal and a first speech signal; the error microphone is used to collect the in-ear sound signal, which includes a second external environment sound signal and a second speech signal.
  • the signal strength of the second external environment sound signal is lower than the signal strength of the first external environment sound signal, the signal strength of the second voice signal is lower than the signal strength of the first voice signal;
  • the feedforward filter is used to process the external sound signal to obtain the sound signal to be compensated;
  • the target filter is used to process the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal;
  • the first audio processing unit is used to remove, according to the environmental sound attenuation signal and the speech attenuation signal, the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain the occlusion signal;
  • the feedback filter is used to process the occlusion signal to obtain the inverse noise signal;
  • the second audio processing unit is used to mix the sound signal to be compensated and the inverted noise signal to obtain a mixed audio signal;
  • the speaker is used to play the mixed audio signal.
  • the external sound signal collected by the external microphone is processed through the target filter to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the first audio processing unit removes, according to the environmental sound attenuation signal and the speech attenuation signal, the second external environment sound signal and the second speech signal from the in-ear sound signal collected by the error microphone to obtain the occlusion signal caused by the occlusion effect, and then the feedback filter generates an inverse noise signal corresponding to the occlusion signal and plays it through the speaker.
  • In this way, the feedback filter does not attenuate the passively attenuated environmental sound signal and the passively attenuated speech signal in the in-ear sound signal, thereby suppressing the occlusion effect while improving the degree of restoration of the external environmental sound signal and the user's voice signal.
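  • To make this signal flow concrete, the following is a minimal NumPy sketch of the processing chain described above. The FIR taps standing in for the feedforward, target and feedback filters (ff_taps, target_env_taps, target_speech_taps, fb_taps) and the single-block processing are illustrative assumptions, not the filters disclosed in this application.

```python
import numpy as np

def process_block(external_sig, in_ear_sig,
                  ff_taps, fb_taps, target_env_taps, target_speech_taps):
    """One block of the occlusion-suppression chain (illustrative sketch).

    external_sig: samples from the external microphone
    in_ear_sig:   samples from the error (in-ear) microphone
    *_taps:       assumed FIR coefficients standing in for the filters
    """
    # Feedforward filter: compensate the external sound lost to the passive seal.
    to_compensate = np.convolve(external_sig, ff_taps, mode="same")

    # Target filter: estimate how the external ambient sound and the user's
    # speech appear inside the ear canal after passive attenuation.
    env_atten = np.convolve(external_sig, target_env_taps, mode="same")
    speech_atten = np.convolve(external_sig, target_speech_taps, mode="same")

    # First audio processing unit: subtract the passively attenuated components
    # so only the occlusion (low-frequency boost) component remains.
    occlusion = in_ear_sig - env_atten - speech_atten

    # Feedback filter: build an anti-phase signal for the occlusion component.
    anti_noise = -np.convolve(occlusion, fb_taps, mode="same")

    # Second audio processing unit: mix; the result is sent to the speaker.
    return to_compensate + anti_noise
```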
  • the headphone device further includes a vibration sensor and a first control unit; the vibration sensor is used to collect the vibration signal when the user speaks; and the first control unit is used to determine the target volume when the user speaks according to one or more of the vibration signal, the external sound signal and the in-ear sound signal, and to obtain corresponding feedback filter parameters according to the target volume; the feedback filter is specifically used to process the occlusion signal according to the feedback filter parameters determined by the first control unit to obtain the inverted noise signal.
  • In this way, the feedback filter parameters of the feedback filter are adaptively adjusted according to the volume of the user's speech while wearing the headphones, that is, the de-occlusion effect of the feedback filter is adjusted, so that when the user speaks at different volumes while wearing the headphones, the consistency of the de-occlusion effect can be improved, thereby improving the transparent transmission of the external environmental sound signal and of the voice signal produced by the user as finally heard in the ear canal.
  • the first control unit is specifically configured to: determine the first volume according to the amplitude of the vibration signal; determine the second volume according to the signal strength of the external sound signal; determine the third volume according to the signal strength of the in-ear sound signal; and determine the target volume when the user speaks based on the first volume, the second volume and the third volume. In this way, the target volume when the user speaks is jointly determined based on the vibration signal, the external sound signal and the in-ear sound signal, making the final feedback filter parameters more accurate.
  • the first control unit is specifically configured to calculate a weighted average of the first volume, the second volume, and the third volume to obtain the target volume.
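  • A minimal sketch of how the target volume and a matching feedback-filter setting might be derived from the three sources; the RMS-based loudness proxy, the weights and the parameter lookup are assumptions chosen for illustration, not values from this application.

```python
import numpy as np

def level_db(x):
    """Illustrative loudness proxy: RMS level in dB (full scale assumed at 1.0)."""
    rms = np.sqrt(np.mean(np.square(x)) + 1e-12)
    return 20.0 * np.log10(rms + 1e-12)

def estimate_target_volume(vibration, external, in_ear, weights=(0.5, 0.3, 0.2)):
    """Weighted average of the first, second and third volume estimates."""
    vols = np.array([level_db(vibration), level_db(external), level_db(in_ear)])
    w = np.array(weights, dtype=float)
    return float(np.dot(w, vols) / w.sum())

def feedback_params_for_volume(target_volume_db):
    """Hypothetical mapping from target volume to feedback-filter strength."""
    if target_volume_db < -30.0:
        return {"gain": 0.4}   # quiet speech: mild de-occlusion
    if target_volume_db < -10.0:
        return {"gain": 0.7}
    return {"gain": 1.0}       # loud speech: strongest de-occlusion
```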
  • the headphone device further includes a first control unit, and the first control unit is configured to: obtain the first intensity of the low-frequency component in the external sound signal and the second intensity of the low-frequency component in the in-ear sound signal; and obtain the corresponding feedback filter parameters according to the first intensity, the second intensity and an intensity threshold; the feedback filter is specifically used to process the occlusion signal according to the feedback filter parameters determined by the first control unit to obtain the inverted noise signal.
  • In this way, since the occlusion signal is mainly a low-frequency raised signal generated by the occlusion effect when the user speaks, the feedback filter parameters can be accurately determined based on the low-frequency component in the external sound signal and the low-frequency component in the in-ear sound signal; and the hardware added to the earphones (for example, only the first control unit and the target filter) is minimal, thereby simplifying the hardware structure of the headset.
  • the first control unit is specifically configured to: calculate the absolute value of the difference between the first intensity and the second intensity to obtain the third intensity; calculate the difference between the third intensity and the intensity threshold to obtain the intensity difference; and obtain the corresponding feedback filter parameters based on the intensity difference.
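  • A sketch of the intensity-difference computation described above; the 500 Hz band edge used to define the low-frequency component and the linear mapping from intensity difference to a filter gain are assumptions for illustration.

```python
import numpy as np

def low_freq_intensity(signal, fs, cutoff_hz=500.0):
    """Energy of the components below cutoff_hz, via an FFT band sum (assumed method)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = np.abs(spectrum[freqs < cutoff_hz]) ** 2
    return 10.0 * np.log10(band.sum() + 1e-12)

def feedback_params_from_intensities(first, second, threshold):
    third = abs(first - second)          # third intensity
    intensity_diff = third - threshold   # distance from the intensity threshold
    # Hypothetical mapping: the larger the occlusion-related low-frequency rise,
    # the stronger the feedback (de-occlusion) filtering.
    gain = float(np.clip(0.5 + 0.05 * intensity_diff, 0.0, 1.0))
    return {"gain": gain}
```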
  • the headphone device further includes an audio analysis unit and a third audio processing unit
  • the external microphone includes a reference microphone and a call microphone
  • the feedforward filter includes a first feedforward filter and a second feedforward filter.
  • the reference microphone is used to collect the first external sound signal
  • the call microphone is used to collect the second external sound signal
  • the audio analysis unit is used to process the first external sound signal and the second external sound signal to obtain the first external environment sound signal and the first speech signal
  • the first feedforward filter is used to process the first external environment sound signal to obtain the environment signal to be compensated
  • the second feedforward filter is used to process the first speech signal to obtain the speech signal to be compensated
  • the sound signal to be compensated includes the environment signal to be compensated and the speech signal to be compensated
  • the third audio processing unit is used to mix the first external environment sound signal and the first speech signal to obtain the external sound signal.
  • In this way, the first external environment sound signal and the first speech signal in the external sound signal can be accurately separated, so that the first feedforward filter can accurately obtain the environment signal to be compensated, improving the accuracy of the restoration of the first external environment sound signal, and the second feedforward filter can accurately obtain the speech signal to be compensated, improving the accuracy of the restoration of the first speech signal.
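  • The application does not spell out how the audio analysis unit separates the two components; one plausible, heavily simplified approach is a frame-wise energy comparison between the two external microphones, sketched below. The frame length and the ~3 dB margin are arbitrary illustrative assumptions.

```python
import numpy as np

def split_ambient_and_speech(ref_sig, call_sig, frame=512):
    """Very rough two-microphone split (the separation method is assumed,
    not taken from the application).

    Assumption: the call microphone sits closer to the mouth, so frames where
    its energy clearly exceeds the reference microphone's are treated as
    speech-dominated; the remaining frames are treated as ambient-dominated.
    """
    speech = np.zeros_like(call_sig)
    ambient = np.zeros_like(ref_sig)
    for start in range(0, len(ref_sig) - frame + 1, frame):
        r = ref_sig[start:start + frame]
        c = call_sig[start:start + frame]
        if (c ** 2).mean() > 2.0 * (r ** 2).mean():  # ~3 dB margin (assumed)
            speech[start:start + frame] = c
        else:
            ambient[start:start + frame] = r
    return ambient, speech
```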
  • the headphone device further includes a first control unit; the first control unit is used to obtain the signal strength of the first external environment sound signal and the signal strength of the first voice signal, and to adjust, according to the signal strength of the first external environment sound signal and the signal strength of the first voice signal, the environmental sound filter parameters of the first feedforward filter and/or the voice filter parameters of the second feedforward filter; the first feedforward filter is specifically used to process the first external environment sound signal according to the environmental sound filter parameters determined by the first control unit to obtain the environment signal to be compensated; the second feedforward filter is specifically used to process the first speech signal according to the speech filter parameters determined by the first control unit to obtain the speech signal to be compensated.
  • the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter are reasonably adjusted to meet different scene requirements.
  • the first control unit is specifically configured to reduce the environmental sound filter parameters of the first feedforward filter when the difference between the signal strength of the first external environment sound signal and the signal strength of the first speech signal is less than a first set threshold, and to increase the speech filter parameters of the second feedforward filter when the difference between the signal strength of the first external environment sound signal and the signal strength of the first speech signal is greater than a second set threshold.
  • In this way, the first control unit can reduce the environmental sound filter parameters so that the environmental sound signal finally heard in the ear canal is reduced, thereby reducing the unpleasant hearing sensation caused by the noise floor of the circuit and microphone hardware; and the first control unit can also increase the voice filter parameters so that the voice signal finally heard in the ear canal is greater than the first voice signal in the external environment, thereby improving the user's ability to hear their own voice clearly in a high-noise environment.
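  • A sketch of this threshold logic, with a single gain standing in for each filter's parameter set; the dB thresholds and step sizes are assumptions chosen for illustration.

```python
def adjust_feedforward_params(env_db, speech_db, env_gain, speech_gain,
                              low_thresh_db=3.0, high_thresh_db=10.0,
                              step=0.1):
    """Adjust ambient and speech feedforward gains from relative signal strengths.

    env_db / speech_db stand for the signal strengths of the first external
    environment sound signal and the first speech signal; env_gain /
    speech_gain stand in for the full filter parameter sets.
    """
    diff = env_db - speech_db
    if diff < low_thresh_db:
        # Ambient level close to (or below) the speech level: lower the ambient
        # pass-through so the circuit/microphone noise floor is not amplified.
        env_gain = max(0.0, env_gain - step)
    if diff > high_thresh_db:
        # High-noise environment: raise the user's own voice so it stays audible.
        speech_gain = min(2.0, speech_gain + step)
    return env_gain, speech_gain
```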
  • the headset device further includes a wireless communication module and a first control unit; the wireless communication module is used to receive filter parameters sent by the terminal device, and the filter parameters include one or more of environmental sound filter parameters, voice filter parameters and feedback filter parameters; the first control unit is configured to receive the filter parameters transmitted by the wireless communication module.
  • In this way, the reference microphone, call microphone and error microphone do not need to be connected to the first control unit, thereby simplifying the circuit connections in the headset; and the de-occlusion effect and transparent transmission effect of the headset can be manually controlled on the terminal device, improving the diversity of the de-occlusion and transparent transmission effects of the headset.
  • the headset device further includes a wireless communication module and a first control unit; the wireless communication module is used to receive the gear information sent by the terminal device; and the first control unit is used to obtain the corresponding filter parameters according to the gear information.
  • the filter parameters include one or more of environmental sound filter parameters, speech filter parameters, and feedback filter parameters. In this way, another way to control the ambient sound filter parameters, voice filter parameters and feedback filter parameters in the headset through the terminal device is provided.
  • In this way, the reference microphone, call microphone, error microphone, etc. do not need to be connected to the first control unit, thereby simplifying the circuit connections in the headset; and the de-occlusion effect and transparent transmission effect of the headset can be manually controlled on the terminal device, improving the diversity of the de-occlusion and transparent transmission effects of the headset.
  • the headphone device further includes a wind noise analysis unit and a second control unit;
  • the wind noise analysis unit is used to calculate the correlation between the first external sound signal and the second external sound signal to determine the intensity of the wind in the external environment;
  • the second control unit is used to determine the target filter parameters of the target filter according to the intensity of the wind in the external environment;
  • the target filter is also used to process the external sound signal according to the target filter parameters determined by the second control unit , obtaining an environmental sound attenuation signal, where the external sound signal includes a first external sound signal and a second external sound signal;
  • the first audio processing unit is also used to remove part of the in-ear sound signal according to the environmental sound attenuation signal to obtain the occlusion signal and environmental noise signal;
  • the feedback filter is also used to process the blocking signal and environmental noise signal to obtain the inverted noise signal. In this way, the wind noise ultimately heard in the ear canal in the wind noise scenario is reduced by adjusting the target filter parameters of the target filter.
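  • Wind noise is largely uncorrelated between two spatially separated microphones, while acoustic sound is strongly correlated, so a normalized cross-correlation is a natural proxy for wind intensity. The sketch below follows that idea; the correlation measure and the mapping to a target-filter gain are assumptions, not the application's method.

```python
import numpy as np

def wind_intensity(ref_mic, call_mic):
    """Estimate wind strength from the (in)coherence of two external mics."""
    ref = ref_mic - ref_mic.mean()
    call = call_mic - call_mic.mean()
    denom = np.sqrt((ref ** 2).sum() * (call ** 2).sum()) + 1e-12
    corr = float(np.dot(ref, call) / denom)     # ~1: acoustic sound, ~0: wind
    return 1.0 - max(0.0, corr)                 # higher value = more wind

def target_filter_gain(wind):
    # Hypothetical mapping: stronger wind -> attenuate more of the external
    # signal before it is compared against the in-ear signal.
    return float(np.clip(1.0 - 0.8 * wind, 0.2, 1.0))
```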
  • Embodiments of the present application further propose a sound signal processing method, which is applied to a headphone device.
  • the headphone device includes an external microphone, an error microphone, a speaker, a feedforward filter, a feedback filter, a target filter, and a first audio processing unit and a second audio processing unit.
  • the method includes: the external microphone collects an external sound signal, and the external sound signal includes a first external environment sound signal and a first speech signal; the error microphone collects an in-ear sound signal, and the in-ear sound signal includes a second external environment sound signal, a second speech signal and an occlusion signal, the signal intensity of the second external environment sound signal is lower than the signal intensity of the first external environment sound signal, and the signal intensity of the second speech signal is lower than the signal intensity of the first speech signal.
  • the feedforward filter processes the external sound signal to obtain the sound signal to be compensated;
  • the target filter processes the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal;
  • the first audio processing unit removes, according to the environmental sound attenuation signal and the speech attenuation signal, the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain the occlusion signal;
  • the feedback filter processes the occlusion signal to obtain the inverse noise signal;
  • the second audio processing unit mixes the sound signal to be compensated and the inverted noise signal to obtain a mixed audio signal; the speaker plays the mixed audio signal.
  • the headphone device further includes a vibration sensor and a first control unit; before the feedback filter processes the occlusion signal to obtain the inverted noise signal, the method further includes: the vibration sensor collects the vibration signal when the user speaks; the first control unit determines the target volume when the user speaks according to one or more of the vibration signal, the external sound signal and the in-ear sound signal; and the first control unit obtains the corresponding feedback filter parameters according to the target volume; the feedback filter processing the occlusion signal to obtain the inverse noise signal includes: the feedback filter processes the occlusion signal according to the feedback filter parameters determined by the first control unit to obtain the inverse noise signal.
  • the first control unit determining the target volume when the user speaks based on one or more of the vibration signal, the external sound signal and the in-ear sound signal includes: the first control unit determines the first volume according to the amplitude of the vibration signal; the first control unit determines the second volume according to the signal strength of the external sound signal; the first control unit determines the third volume according to the signal strength of the in-ear sound signal; and the first control unit determines the target volume when the user speaks according to the first volume, the second volume and the third volume.
  • the first control unit determining the target volume when the user speaks according to the first volume, the second volume and the third volume includes: the first control unit calculates the weighted average of the first volume, the second volume and the third volume to obtain the target volume.
  • the headphone device further includes a first control unit; before the feedback filter processes the occlusion signal to obtain the inverted noise signal, the method further includes: the first control unit obtains the first intensity of the low-frequency component in the external sound signal and the second intensity of the low-frequency component in the in-ear sound signal; the first control unit obtains the corresponding feedback filter parameters according to the first intensity, the second intensity and the intensity threshold; the feedback filter processing the occlusion signal to obtain the inverse noise signal includes: the feedback filter processes the occlusion signal according to the feedback filter parameters determined by the first control unit to obtain the inverse noise signal.
  • the first control unit obtaining the corresponding feedback filter parameters according to the first intensity, the second intensity and the intensity threshold includes: the first control unit calculates the absolute value of the difference between the first intensity and the second intensity to obtain the third intensity; the first control unit calculates the difference between the third intensity and the intensity threshold to obtain the intensity difference; and the first control unit obtains the corresponding feedback filter parameters according to the intensity difference.
  • the headphone device further includes an audio analysis unit and a third audio processing unit
  • the external microphone includes a reference microphone and a call microphone
  • the feedforward filter includes a first feedforward filter and a second feedforward filter.
  • the external microphone collects external sound signals, including: collecting the first external sound signal through the reference microphone, and collecting the second external sound signal through the call microphone
  • the feedforward filter processes the external sound signal to obtain the sound signal to be compensated, including:
  • the audio analysis unit processes the first external sound signal and the second external sound signal to obtain the first external environment sound signal and the first speech signal
  • the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated
  • the second feedforward filter processes the first speech signal to obtain the speech signal to be compensated; the sound signal to be compensated includes the environment signal to be compensated and the speech signal to be compensated
  • before the target filter processes the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal, the method further includes: the third audio processing unit mixes the first external environment sound signal and the first speech signal to obtain the external sound signal.
  • the headphone device further includes a first control unit; before the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated, the method further includes: the first control unit obtains the signal strength of the first external environment sound signal and the signal strength of the first voice signal; the first control unit adjusts, according to the signal strength of the first external environment sound signal and the signal strength of the first voice signal, the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter.
  • the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated, including: the first feedforward filter according to The environmental sound filter parameters determined by the first control unit process the first external environment sound signal to obtain the environmental signal to be compensated;
  • the second feedforward filter processes the first speech signal to obtain the speech signal to be compensated, including: The second feedforward filter processes the first speech signal according to the speech filter parameters determined by the first control unit to obtain the speech signal to be compensated.
  • the first control unit adjusting, according to the signal strength of the first external environment sound signal and the signal strength of the first voice signal, the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter includes: when the difference between the signal strength of the first external environment sound signal and the signal strength of the first speech signal is less than the first set threshold, the first control unit reduces the environmental sound filter parameters of the first feedforward filter; when the difference between the signal strength of the first external environment sound signal and the signal strength of the first voice signal is greater than the second set threshold, the first control unit increases the speech filter parameters of the second feedforward filter.
  • the headphone device further includes a wireless communication module and a first control unit; before the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated, it also includes:
  • the wireless communication module receives filter parameters sent by the terminal device.
  • the filter parameters include one or more of environmental sound filter parameters, voice filter parameters and feedback filter parameters; the first control unit receives the filter parameters sent by the wireless communication module.
  • the headphone device further includes a wireless communication module and a first control unit; before the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated, it also includes:
  • the wireless communication module receives gear information sent by the terminal device; the first control unit obtains corresponding filter parameters according to the gear information, and the filter parameters include one or more of environmental sound filter parameters, voice filter parameters, and feedback filter parameters.
  • the headphone device further includes a wind noise analysis unit and a second control unit; the method further includes: the wind noise analysis unit calculates the correlation between the first external sound signal and the second external sound signal to determine the intensity of the wind in the external environment; the second control unit determines the target filter parameters of the target filter according to the intensity of the wind in the external environment; the target filter processes the external sound signal according to the target filter parameters determined by the second control unit to obtain the environmental sound attenuation signal, where the external sound signal includes the first external sound signal and the second external sound signal; the first audio processing unit removes part of the in-ear sound signal according to the environmental sound attenuation signal to obtain the occlusion signal and the environmental noise signal; and the feedback filter processes the occlusion signal and the environmental noise signal to obtain the inverted noise signal.
  • the wind noise analysis unit calculates the correlation between the first external sound signal and the second external sound signal to determine the intensity of the wind in the external environment
  • the second control unit determines the target filter parameters of the target filter according to the intensity of the wind in the external environment
  • Figure 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a scene in which a user wears headphones according to an embodiment of the present application
  • Figure 3 is a schematic diagram illustrating the low-frequency rise and high-frequency attenuation of the sound signal in the ear caused by the user wearing headphones to speak according to an embodiment of the present application;
  • Figure 4 is a schematic structural diagram of an earphone provided by related technologies
  • Figure 5 is a schematic structural diagram of the first earphone provided by the embodiment of the present application.
  • Figure 6 is a schematic flow chart of the first sound signal processing method provided by the embodiment of the present application.
  • Figure 7 is a schematic diagram of the test flow of the feedforward filter parameters obtained by testing the feedforward filter provided by the embodiment of the present application;
  • Figure 8 is a schematic diagram of the testing process for obtaining the target filter parameters of the target filter provided by the embodiment of the present application.
  • Figure 9 is a schematic diagram of the first test signal collected by the external microphone and the second test signal collected by the error microphone obtained from the test provided by the embodiment of the present application;
  • Figure 10 is a schematic structural diagram of a second earphone provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart of the second sound signal processing method provided by the embodiment of the present application.
  • Figure 12 is a schematic diagram illustrating the low-frequency rise and high-frequency attenuation of the sound signal in the ear caused by different volume levels of the voice signal when the user wears headphones and speaks according to an embodiment of the present application;
  • Figure 13 is a schematic structural diagram of a third earphone provided by an embodiment of the present application.
  • Figure 14 is a schematic flow chart of the third sound signal processing method provided by the embodiment of the present application.
  • Figure 15 is a schematic structural diagram of a fourth type of earphone provided by an embodiment of the present application.
  • Figure 16 is a schematic flow chart of the fourth sound signal processing method provided by the embodiment of the present application.
  • Figure 17 is a schematic diagram of a control interface of a terminal device provided by an embodiment of the present application.
  • Figure 18 is a schematic diagram of the frequency response noise caused by the wind speed affecting the eardrum reference point after the user wears the earphones in a wind noise scene according to the embodiment of the present application;
  • Figure 19 is a schematic diagram of the frequency response noise of the eardrum reference point in a wind noise scenario and a wind noise-free scenario provided by the embodiment of the present application;
  • Figure 20 is a schematic structural diagram of a fifth earphone provided by an embodiment of the present application.
  • Figure 21 is a schematic flow chart of the fifth sound signal processing method provided by the embodiment of the present application.
  • Figure 22 is a schematic structural diagram of a sixth type of earphone provided by an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same functions and effects.
  • the first chip and the second chip are only used to distinguish different chips, and their sequence is not limited.
  • words such as “first” and “second” do not limit the number or the execution order.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • “And/or” describes the association of associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the related objects are in an “or” relationship.
  • “At least one of the following” or similar expressions thereof refers to any combination of these items, including any combination of a single item (items) or a plurality of items (items).
  • At least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, c can be single or multiple .
  • the earphone device in the embodiment of the present application may be an earphone, or may be a hearing aid, a stethoscope, or other equipment that needs to be inserted into the ear.
  • the embodiments of the present application mainly use an earphone as the earphone device as an example for explanation. Headphones may also be called earbuds, headsets, Walkmans, audio players, media players, earpieces, or some other appropriate term.
  • Figure 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • the system architecture includes a terminal device and a headset, and a communication connection can be established between the headset and the terminal device.
  • the earphone may be a wireless in-ear earphone. That is to say, from the perspective of the communication method between the earphones and the terminal device, wireless in-ear earphones are wireless earphones.
  • Wireless headsets are headsets that can connect wirelessly to terminal devices. According to the electromagnetic wave frequency used by wireless headsets, they can be further divided into: infrared wireless headsets, meter wave wireless headsets (such as FM frequency modulation headsets), decimeter wave wireless headsets ( For example, Bluetooth headphones), etc.; and from the perspective of the way the headphones are worn, wireless in-ear headphones are in-ear headphones.
  • the earphones in the embodiments of the present application can also be other types of earphones.
  • the earphone in the embodiment of the present application may also be a wired earphone.
  • Wired earphones are earphones that can be connected to terminal devices through wires (such as cables). According to the shape of the cable, they can also be divided into cylindrical cable earphones, noodle wire earphones, etc.
  • the headphones can also be semi-in-ear headphones, over-ear headphones (also called over-ear headphones), ear-hook headphones, neck-hook headphones, etc.
  • Figure 2 is a schematic diagram of a scene in which a user wears headphones according to an embodiment of the present application.
  • the earphone may include a reference microphone 21 , a call microphone 22 and an error microphone 23 .
  • the reference microphone 21 and the talking microphone 22 are usually disposed on the side of the headset away from the ear canal, that is, the outside of the headset. Therefore, the reference microphone 21 and the talking microphone 22 can be collectively referred to as external microphones.
  • the reference microphone 21 and the call microphone 22 are used to collect external sound signals.
  • the reference microphone 21 is mainly used to collect the external environment sound signals
  • the call microphone 22 is mainly used to collect the voice signal transmitted through the air when the user speaks, for example, the user's voice in a call scenario.
  • When the user wears the earphone normally, the error microphone 23 is usually disposed on the side of the earphone close to the ear canal, that is, the inside of the earphone, and is used to collect the in-ear sound signal in the user's ear canal. Therefore, the error microphone 23 may be called an in-ear microphone.
  • the microphone in the headset may include one or more of the reference microphone 21 , the call microphone 22 and the error microphone 23 .
  • the microphones in the headset may include only the call microphone 22 and the error microphone 23 .
  • the number of reference microphones 21 may be one or more
  • the number of call microphones 22 may be one or more
  • the number of error microphones 23 may be one or more.
  • the earphones and the ear canal do not fit perfectly, so there will be a certain gap between the earphones and the ear canal. After the user wears the earphones, external sound signals will enter the ear canal through these gaps; however, because there is a certain sealing between the ear caps and ear cups of the earphones, it can isolate the user's eardrums from external sound signals.
  • the high-frequency component of the external sound signal entering the ear canal will be attenuated due to wearing the earphone; that is, the external sound signal entering the ear canal suffers a loss, which means that the user hears less external sound.
  • the external sound signals include environmental sound signals and the voice signal when the user speaks.
  • the acoustic cavity in the ear canal will change from an open field to a pressure field. Therefore, when the user wears headphones and speaks, the user will feel that the intensity of the low-frequency component in the voice signal he or she emits is enhanced, resulting in an occlusion effect that causes the emitted voice to sound dull and unclear, hindering the fluency of the user's communication with others.
  • FIG. 3 is a schematic diagram illustrating the low-frequency rise and high-frequency attenuation of the sound signal in the ear caused by the user wearing headphones to speak according to an embodiment of the present application.
  • the abscissa represents the frequency of the sound signal in the ear, in Hz
  • the ordinate represents the intensity difference between the sound signal in the ear and the external sound signal, in dB (decibel).
  • a speaker in an earphone divides the inner cavity of the housing into a front cavity and a rear cavity.
  • the front cavity is the part of the inner cavity with the sound outlet
  • the rear cavity is the part of the inner cavity away from the sound outlet.
  • A leakage hole can be provided in the shell of the front cavity or the rear cavity of the earphone, and the amount of leakage of the front cavity or the rear cavity can be adjusted through the leakage hole, so that when the user wears the earphone, a certain amount of low-frequency leakage is produced, thereby suppressing the occlusion effect.
  • the leakage hole will occupy part of the space of the earphones, and this method will also produce a certain low-frequency loss. For example, when using headphones to play music, the output performance of low-frequency music will be lost, and the improvement effect will be poor.
  • the headset may be a headset with active noise reduction, which includes an external microphone, a feedforward filter, an error microphone, a feedback filter, a mixing processing module and a speaker.
  • the external microphone may be a reference microphone or a call microphone.
  • the external sound signal entering the ear canal will attenuate the high-frequency component due to wearing the earphones.
  • the high-frequency component is a high-frequency component greater than or equal to 800Hz.
  • the feedforward filter is used to compensate for the attenuation of the high-frequency component caused by wearing the earphones.
  • the external sound signal entering the ear canal will be attenuated less by the low-frequency component generated by wearing headphones. Therefore, the feedforward filter cannot compensate for the loss of low-frequency component.
  • the error microphone collects intra-aural sound signals in the user's ear canal.
  • the sound signal in the ear includes the passively attenuated ambient sound signal H1, the passively attenuated speech signal H2, and the additional low-frequency signal H3 generated in the coupling cavity between the front port of the headset and the ear canal due to skull vibration.
  • H3 refers to the low-frequency raised signal of the speech signal generated by the occlusion effect.
  • the low-frequency raised signal of the speech signal generated by the occlusion effect can be called an occlusion signal.
  • the in-ear sound signal collected by the error microphone can be processed by a feedback filter to obtain an inverse noise signal, and the inverse noise signal can be played through the speaker to suppress the occlusion effect.
  • the mixing processing module performs mixing processing on the sound signal to be compensated and the inverted noise signal to obtain the mixed audio signal, and sends the mixed audio signal to the speaker for playback.
  • the passively attenuated ambient sound signal H1 refers to the ambient sound signal that enters the ear canal after being attenuated by wearing the earphones, that is, the external environment sound signal after the passive noise reduction of the earphones.
  • the passively attenuated speech signal H2 refers to the speech signal that enters the ear canal after being attenuated by wearing the earphones, that is, the user's own voice after the passive attenuation of the earphones.
  • the in-ear sound signal includes the passively attenuated environmental sound signal H1, the passively attenuated speech signal H2, and the occlusion signal H3. Therefore, when the feedback filter processes the in-ear sound signal, in addition to weakening or even eliminating the occlusion signal H3, it also weakens the passively attenuated environmental sound signal H1 and the passively attenuated speech signal H2 to a certain extent.
  • the feedforward filter can compensate for the external environment sound signal and the voice signal emitted by the user, and the sound signal to be compensated is played through the speaker to restore the external sound signal; however, since the feedback filter additionally weakens part of the passively attenuated environmental sound signal H1 and part of the passively attenuated speech signal H2 when processing the in-ear sound signal, the environmental sound signal and speech signal finally present in the ear canal are weakened, that is, the external environment sound signal and the voice signal emitted by the user cannot be well restored.
  • embodiments of the present application provide a sound signal processing method and an earphone device.
  • the external sound signal collected by the external microphone is processed through the target filter.
  • the environmental sound attenuation signal and the speech attenuation signal are obtained.
  • the first audio processing unit processes the environmental sound attenuation signal and the speech attenuation signal obtained by the target filter, and removes the passively attenuated environmental sound signal and the passively attenuated speech signal from the in-ear sound signal collected by the error microphone to obtain the occlusion signal caused by the occlusion effect, and then sends the occlusion signal to the feedback filter.
  • the feedback filter can generate an inverse noise signal corresponding to the occlusion signal and play it through the speaker, so that the feedback filter does not additionally attenuate the passively attenuated environmental sound signal and the passively attenuated speech signal in the in-ear sound signal, thereby suppressing the occlusion effect while improving the degree of restoration of the external environment sound signal and the user's speech signal.
  • FIG. 5 is a schematic structural diagram of a first earphone provided by an embodiment of the present application.
  • the headset includes an external microphone, an error microphone, a feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit and a speaker.
  • the external microphone is connected to the feedforward filter and the target filter respectively, the error microphone and the target filter are both connected to the first audio processing unit, and the first audio processing unit is also connected to the feedback filter.
  • the feedback filter and the feedforward filter are both connected to the second audio processing unit, and the second audio processing unit is also connected to the speaker.
  • the external microphone can be a reference microphone or a call microphone, which is used to collect external sound signals.
  • the external sound signals collected by the external microphone include the first external environment sound signal and the first voice signal sent by the user.
  • the feedforward filter is used to compensate for the loss of the external sound signal caused by wearing the headphones. After the external sound signal collected by the external microphone is processed by the feedforward filter, the sound signal to be compensated is obtained. The sound signal to be compensated is combined with the external sound signal leaked into the ear canal through the gap between the earphone and the ear canal, so that the external sound signal can be restored.
  • the external sound signal that leaks into the ear canal through the gap between the earphone and the ear canal can be called a passively attenuated external sound signal, which includes a passively attenuated environmental sound signal and a passively attenuated speech signal.
  • Error microphones are used to collect sound signals in the ear.
  • the sound signal in the ear includes the passively attenuated ambient sound signal H1, the passively attenuated speech signal H2, and the occlusion signal H3 generated in the coupling cavity between the front mouth of the headset and the ear canal due to skull vibration.
  • the passively attenuated environmental sound signal H 1 can be called the second external environmental sound signal, which refers to the environmental sound signal leaked into the ear canal through the gap between the earphones and the ear canal;
  • the passively attenuated speech signal H2 can be called the second voice signal, which refers to the voice signal leaked into the ear canal through the gap between the earphone and the ear canal.
  • the external sound signal entering the ear canal will have high-frequency components attenuated due to wearing the earphones. Therefore, the signal intensity of the second external environment sound signal in the in-ear sound signal will be lower than the signal intensity of the first external environment sound signal in the external sound signal; and, the signal intensity of the second speech signal in the in-ear sound signal The intensity will also be lower than the signal intensity of the first voice signal in the external sound signal.
  • the target filter is used to process external sound signals to obtain environmental sound attenuation signals and speech attenuation signals.
  • the ambient sound attenuation signal refers to the signal after active noise reduction of the first external environment sound signal in the external sound signal through the target filter;
  • the speech attenuation signal refers to the signal after active noise reduction of the first voice signal in the external sound signal through the target filter.
  • the environmental sound attenuation signal and the second external environmental sound signal in the in-ear sound signal are signals with similar amplitudes and the same phase; the speech attenuation signal and the second speech signal in the in-ear sound signal are also signals with similar amplitudes and the same phase.
  • the ambient sound attenuation signal has the same amplitude and the same phase as the second external environment sound signal, and the speech attenuation signal has the same amplitude and the same phase as the second voice signal.
  • the first audio processing unit is used to remove the second external environment sound signal and the second speech signal from the in-ear sound signal collected by the error microphone according to the ambient sound attenuation signal and the speech attenuation signal processed by the target filter, so as to obtain the occlusion signal.
  • the feedback filter is used to process the blocking signal to obtain the inverted noise signal.
  • the inverted noise signal is a signal with a similar amplitude and an opposite phase to the blocking signal.
  • the inverted noise signal and the occlusion signal are equal in magnitude and opposite in phase.
  • the second audio processing unit is used to perform mixing processing on the sound signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • the mixed audio signal includes the sound signal to be compensated and the inverted noise signal.
  • Speakers are used to play the mixed audio signal.
  • the mixed audio signal played by the speaker includes the sound signal to be compensated and the reverse phase noise signal.
  • the sound signal to be compensated can be combined with the environmental sound signal and speech signal leaked into the ear canal through the gap between the earphones and the ear canal to restore the external sound signal; and the inverted noise signal can weaken or offset the low-frequency boost signal caused by the occlusion effect in the ear canal, so as to suppress the occlusion effect caused by wearing headphones while speaking.
  • therefore, the earphones of the embodiments of the present application can improve the restoration of the first external environment sound signal and the first voice signal sent by the user while suppressing the occlusion effect.
  • the microphone in the embodiment of the present application is a device for collecting sound signals
  • the speaker is a device for playing sound signals.
  • a microphone may also be called a mic, a pickup, a receiver, a sound sensor, a sound-sensitive sensor, an audio collection device or some other suitable term.
  • the embodiments of this application mainly take a microphone as an example to describe the technical solutions. A speaker, also called a "horn", is used to convert an audio electrical signal into a sound signal. The embodiments of this application mainly take a speaker as an example to describe the technical solution.
  • the earphone shown in FIG. 5 is only an example provided by the embodiment of the present application.
  • the headset may have more or fewer components than shown, may combine two or more components, or may be implemented with different configurations of components. It should be noted that, in an optional situation, the above-mentioned components of the earphone can also be coupled together.
  • Figure 6 is a schematic flowchart of the first sound signal processing method provided by the embodiment of the present application. This method can be applied to the headset shown in Figure 5, and the headset is in a state of being worn by the user. The method may specifically include the following steps:
  • an external microphone collects external sound signals.
  • the external sound signals collected by the external microphone include the first external environment sound signal and the first voice signal sent by the user.
  • the external microphone may be a reference microphone or a call microphone, and the external sound signal collected by the external microphone may be an analog signal.
  • the feedforward filter processes the external sound signal to obtain the sound signal to be compensated.
  • a first analog-to-digital conversion unit (not shown) may be disposed between the external microphone and the feedforward filter. The input end of the first analog-to-digital conversion unit is connected to the external microphone, and the output end of the first analog-to-digital conversion unit is connected to the feedforward filter.
  • the external microphone transmits the external sound signal to the first analog-to-digital conversion unit, and the first analog-to-digital conversion unit performs analog-to-digital conversion on the external sound signal, converting the analog signal into a digital signal, and transmits the analog-to-digital converted external sound signal to the feedforward filter for processing.
  • Feedforward filter parameters are preset in the feedforward filter, and the feedforward filter parameters may be called FF parameters.
  • the feedforward filter filters the analog-to-digital converted external sound signal based on the set feedforward filter parameters to obtain the sound signal to be compensated. After obtaining the sound signal to be compensated, the feedforward filter can transmit the sound signal to be compensated to the second audio processing unit.
  • the target filter processes the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the output end of the first analog-to-digital conversion unit can also be connected to the target filter. After the first analog-to-digital conversion unit performs analog-to-digital conversion on the external sound signal, it can also pass the analog-to-digital converted external sound signal to the target filter for processing.
  • Target filter parameters are preset in the target filter. Based on the set target filter parameters, the target filter filters the external sound signal after analog-to-digital conversion to obtain an environmental sound attenuation signal and a speech attenuation signal.
  • the target filter can map the external sound signal into the passively attenuated environmental sound signal H1 and the passively attenuated speech signal H2, and the passively attenuated environmental sound signal H1 and the passively attenuated speech signal H2 can be collectively called the passively attenuated signal HE_pnc.
  • the target filter parameter may be a proportional coefficient, which is a positive number greater than 0 and less than 1.
  • the target filter calculates the product of the external sound signal and the proportional coefficient to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the target filter parameter can be an attenuation parameter, which is a positive number.
  • the target filter calculates the difference between the external sound signal and the attenuation parameter to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the target filter may transmit the environmental sound attenuation signal and the speech attenuation signal to the first audio processing unit for processing.
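  • Illustrative sketch (not taken from the patent text itself): the two parameterizations of the target filter described above can be modeled either as scaling the external-microphone samples by a proportional coefficient between 0 and 1, or as lowering their level by a fixed attenuation; the function names, the dB interpretation of the attenuation parameter, and the example values are assumptions for illustration only.

        import numpy as np

        def target_filter_scale(external, coeff=0.3):
            """Proportional-coefficient variant: 0 < coeff < 1 scales the external
            signal down to approximate the passively attenuated signal HE_pnc
            leaking into the ear canal."""
            return coeff * np.asarray(external, dtype=float)

        def target_filter_attenuate_db(external, attenuation_db=10.0):
            """Attenuation-parameter variant, interpreted here as a level
            reduction in dB (one hypothetical reading of 'difference')."""
            gain = 10.0 ** (-attenuation_db / 20.0)
            return gain * np.asarray(external, dtype=float)

        # Example: a 1 kHz tone picked up by the external microphone.
        fs = 16000
        t = np.arange(fs) / fs
        external = np.sin(2 * np.pi * 1000 * t)
        he_pnc = target_filter_scale(external, coeff=0.3)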
  • the error microphone collects the sound signal in the ear.
  • the in-ear sound signals collected by the error microphone include: the second external environment sound signal, the second speech signal and the occlusion signal.
  • the second external environment sound signal is the passively attenuated environmental sound signal H 1
  • the second speech signal is the passively attenuated speech signal H 2 .
  • the first audio processing unit removes the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain an occlusion signal.
  • a second analog-to-digital conversion unit may be provided between the error microphone and the first audio processing unit.
  • the input end of the second analog-to-digital conversion unit is connected to the error microphone.
  • the output end of the second analog-to-digital conversion unit is connected to the first audio processing unit.
  • since the in-ear sound signal collected by the error microphone is an analog signal, after the error microphone collects the in-ear sound signal, it transmits the in-ear sound signal to the second analog-to-digital conversion unit.
  • the second analog-to-digital conversion unit performs analog-to-digital conversion on the in-ear sound signal, converting the analog signal into a digital signal, and transmits the analog-to-digital converted in-ear sound signal to the first audio processing unit for processing.
  • the first audio processing unit can receive the ambient sound attenuation signal and the speech attenuation signal transmitted by the target filter, and it can also receive the in-ear sound signal. The first audio processing unit processes the ambient sound attenuation signal and the speech attenuation signal to obtain an inverse attenuation signal whose amplitude is similar to, and whose phase is opposite to, the ambient sound attenuation signal and the speech attenuation signal; then, the first audio processing unit mixes the inverse attenuation signal with the in-ear sound signal, so that the second external environment sound signal and the second speech signal in the in-ear sound signal are removed and the occlusion signal is obtained.
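  • A minimal sketch of this removal step, assuming all signals are already time-aligned digital sample arrays (in practice the target filter must also match delay and frequency response); the helper name and the toy signals are assumptions for illustration.

        import numpy as np

        def extract_occlusion(in_ear, env_attenuated, speech_attenuated):
            """Invert the target-filter outputs and mix them with the in-ear
            signal so that the second external environment sound signal and the
            second speech signal cancel, leaving the occlusion signal."""
            inverse_attenuation = -(np.asarray(env_attenuated) + np.asarray(speech_attenuated))
            return np.asarray(in_ear) + inverse_attenuation

        # Toy example: in-ear signal = leaked ambient + leaked speech + occlusion boost.
        rng = np.random.default_rng(0)
        leaked_ambient = 0.2 * rng.standard_normal(1000)
        leaked_speech = 0.3 * rng.standard_normal(1000)
        occlusion = 0.5 * np.sin(2 * np.pi * 0.02 * np.arange(1000))   # low-frequency boost
        in_ear = leaked_ambient + leaked_speech + occlusion

        residual = extract_occlusion(in_ear, leaked_ambient, leaked_speech)
        assert np.allclose(residual, occlusion)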
  • the feedback filter processes the blocking signal to obtain an inverted noise signal.
  • after obtaining the occlusion signal, the first audio processing unit transmits the occlusion signal to the feedback filter.
  • Feedback filter parameters are preset in the feedback filter, and the feedback filter parameters may be called FB parameters.
  • the feedback filter processes the blocking signal based on the set feedback filter parameters to obtain an inverted noise signal, and transmits the inverted noise signal to the second audio processing unit.
  • the inverted noise signal and the blocking signal have similar amplitudes and opposite phases.
  • the second audio processing unit performs mixing processing on the sound signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • after receiving the sound signal to be compensated transmitted by the feedforward filter and the inverted noise signal transmitted by the feedback filter, the second audio processing unit mixes the sound signal to be compensated and the inverted noise signal to obtain the mixed audio signal.
  • the mixed audio signal includes the sound signal to be compensated and the inverted noise signal.
  • the speaker plays the mixed audio signal.
  • a digital-to-analog conversion unit may be provided between the second audio processing unit and the speaker; the input end of the digital-to-analog conversion unit is connected to the second audio processing unit, and the output end of the digital-to-analog conversion unit is connected to the speaker.
  • the second audio processing unit transmits the mixed audio signal to the digital-to-analog conversion unit, and the digital-to-analog conversion unit performs digital-to-analog conversion on the mixed audio signal, converting the digital signal into an analog signal, and transmits the digital-to-analog converted mixed audio signal to the speaker.
  • the speaker plays according to the mixed audio signal after digital-to-analog conversion, while achieving noise reduction on the occlusion signal (that is, suppressing the occlusion effect), and improving the restoration of the first external environment sound signal and the first voice signal sent by the user.
  • the external sound signal can be transmitted transparently into the user's ear canal without adjusting the parameters of the feedforward filter, allowing the user to experience external sounds as if they were not wearing headphones.
  • the feedback filter parameters, feedforward filter parameters and target filter parameters can be obtained through pre-testing.
  • FIG. 7 is a schematic diagram of the test flow of the feedforward filter parameters obtained by testing the feedforward filter provided by the embodiment of the present application. Referring to Figure 7, it may include the following steps:
  • S701: test the first frequency response at the eardrum of a standard human ear in a free field, with the earphones not worn.
  • Frequency response, also called frequency characteristic, refers to how a system responds to signals at different frequencies.
  • S702: test the second frequency response at the eardrum of the standard human ear after the earphones are worn.
  • S703: use the difference between the first frequency response and the second frequency response as the feedforward filter parameter of the feedforward filter.
  • when the earphones are not worn, the first frequency response FR1 at the eardrum is tested; after the tester wears the earphones, the second frequency response FR2 at the eardrum is tested.
  • the difference between the first frequency response FR1 and the second frequency response FR2 can be determined as the feedforward filter parameter of the feedforward filter.
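  • A sketch of the pre-test computation in S701–S703, assuming the two frequency responses are measured in dB on a common frequency grid; the frequency points and values below are invented for illustration only.

        import numpy as np

        # Hypothetical measured responses at the eardrum, in dB.
        freqs_hz = np.array([500, 1000, 2000, 4000, 8000])
        fr1_open_ear_db = np.array([0.0, 0.0, 0.0, 0.0, 0.0])     # free field, earphones not worn
        fr2_worn_db = np.array([-1.0, -3.0, -8.0, -12.0, -15.0])   # earphones worn

        # S703: the difference FR1 - FR2 is the compensation the feedforward
        # filter must apply at each frequency (its feedforward filter parameters).
        ff_gain_db = fr1_open_ear_db - fr2_worn_db
        for f, g in zip(freqs_hz, ff_gain_db):
            print(f"{f} Hz: boost by {g:.1f} dB")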
  • the tester can wear an earphone in one ear (such as the left ear), while the other ear (such as the right ear) does not need to wear the earphone.
  • the tester reads a paragraph of text at a fixed and stable volume, and continuously adjusts the filter parameters of the feedback filter until the sounds heard by the left and right ears are consistent, then the filter parameters are determined to be feedback filter parameters.
  • because adjusting the feedback filter parameters of the feedback filter so that the sounds heard by the left and right ears are consistent also offsets the additional low-frequency lift caused by the occlusion effect, once that lift is offset the sounds heard by the two ears tend to be consistent.
  • the feedback filter parameters of the feedback filters corresponding to different volumes can be tested, such as measuring the feedback filter parameters corresponding to the feedback filters at volumes of 60dB, 70dB, and 80dB.
  • the volume of the sound emitted by the tester can be measured with a sound level meter at a distance of 20cm from the mouth.
  • Figure 8 is a schematic diagram of the testing process for obtaining the target filter parameters of the target filter provided by the embodiment of the present application. Referring to Figure 8, it may include the following steps:
  • S802: use the absolute value of the difference between the first signal intensity and the second signal intensity as the target filter parameter of the target filter.
  • in this way, the target filter can calculate the difference between the external sound signal collected by the external microphone and the target filter parameter, thereby obtaining the ambient sound attenuation signal and the speech attenuation signal, so that the signal finally processed by the first audio processing unit only includes the occlusion signal, which prevents the feedback filter from additionally attenuating the external sound signal.
  • FIG. 9 shows schematic diagrams of the first test signal and the second test signal obtained by the test.
  • the abscissa represents the frequency of the first test signal and the second test signal, in Hz
  • the ordinate represents the signal strength of the first test signal and the second test signal, in dB (decibels).
  • the difference between the two signal strengths can be understood as the target filter parameter of the target filter.
  • for example, the first signal strength of the first test signal collected by the external microphone is S1, and the second signal strength of the second test signal is S2; the absolute value of the difference between S1 and S2 is taken as the target filter parameter.
  • alternatively, the target filter parameter can be a proportional coefficient, which is a positive number greater than 0 and less than 1.
  • in this case, the target filter can calculate the product of the external sound signal collected by the external microphone and the target filter parameter, thereby obtaining the ambient sound attenuation signal and the speech attenuation signal, so that the signal finally processed by the first audio processing unit only includes the occlusion signal, which prevents the feedback filter from additionally attenuating the external sound signal.
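  • A sketch of how the two parameterizations could be derived from the test described above, assuming S1 and S2 are broadband signal strengths in dB; the numeric values and the dB-to-linear conversion for the proportional coefficient are assumptions, not values from the patent.

        s1_external_db = 70.0   # first test signal strength at the external microphone
        s2_in_ear_db = 58.0     # second test signal strength (assumed value for illustration)

        # Attenuation-parameter variant: |S1 - S2| in dB (S802).
        attenuation_db = abs(s1_external_db - s2_in_ear_db)

        # Proportional-coefficient variant: the same attenuation expressed as a
        # linear gain between 0 and 1.
        coeff = 10.0 ** (-attenuation_db / 20.0)

        print(f"attenuation parameter: {attenuation_db:.1f} dB, "
              f"proportional coefficient: {coeff:.3f}")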
  • FIG. 10 is a schematic structural diagram of a second earphone provided by an embodiment of the present application.
  • the headset includes a reference microphone, a call microphone, an error microphone, an audio analysis unit, a first feedforward filter, a second feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit, a third audio processing unit and a speaker.
  • different from the headset shown in Figure 5, which is provided with only one external microphone and one feedforward filter, the headset shown in Figure 10 is provided with two external microphones and two feedforward filters.
  • the two external microphones are the reference microphone and the call microphone respectively.
  • the two feedforward filters are the first feedforward filter and the second feedforward filter respectively.
  • An audio analysis unit and a third audio processing unit are also added to the headphones.
  • the reference microphone and the call microphone are both connected to the audio analysis unit.
  • the audio analysis unit is also connected to the first feedforward filter, the second feedforward filter and the third audio processing unit respectively.
  • the third audio processing unit is connected to the target filter.
  • the error microphone and the target filter are both connected to the first audio processing unit
  • the first audio processing unit is also connected to the feedback filter
  • the feedback filter, the first feedforward filter and the second feedforward filter are all connected to the second audio processing unit, and the second audio processing unit is also connected to the speaker.
  • the first external sound signal collected by the reference microphone includes the external environment sound signal and the voice signal sent by the user
  • the second external sound signal collected by the call microphone also includes the external environment sound signal and the voice signal sent by the user.
  • the first external sound signal and the second external sound signal may be different.
  • the second external sound signal collected by the call microphone may include more speech signals than the first external sound signal collected by the reference microphone.
  • the audio analysis unit is used to separate the first external sound signal collected by the reference microphone and the second external sound signal collected by the call microphone to obtain the first external environment sound signal and the first voice signal sent by the user.
  • the first feedforward filter can be used to compensate for the loss of external environmental sound signals caused by wearing headphones.
  • the first external environment sound signal is processed by the first feedforward filter to obtain the environment signal to be compensated.
  • the environmental signal to be compensated is combined with the external environmental sound signal leaked into the ear canal through the gap between the earphones and the ear canal (that is, the passively attenuated environmental sound signal), so that the first external environmental sound signal can be restored.
  • the second feed-forward filter may be used to compensate for the loss of the voice signal emitted by the user due to earphone wearing.
  • the first voice signal is processed by the second feedforward filter to obtain the voice signal to be compensated.
  • the voice signal to be compensated is combined with the voice signal leaked into the ear canal through the gap between the earphone and the ear canal (that is, the passively attenuated voice signal), so that the first voice signal sent by the user can be restored.
  • Error microphones are used to collect sound signals in the ear.
  • the in-ear sound signal includes a second external environment sound signal, a second speech signal and an occlusion signal.
  • the third audio processing unit is used to mix the first external environment sound signal and the first voice signal sent by the user, which are separated by the audio analysis unit, to obtain the external sound signal.
  • the external sound signal includes a first external environment sound signal and a first voice signal sent by the user.
  • the target filter is used to process external sound signals to obtain environmental sound attenuation signals and speech attenuation signals.
  • the first audio processing unit is used to remove the second external environment sound signal and the second speech signal from the in-ear sound signal collected by the error microphone according to the ambient sound attenuation signal and the speech attenuation signal processed by the target filter to obtain the occlusion Signal.
  • the feedback filter is used to process the blocking signal to obtain the inverted noise signal.
  • the inverted noise signal is a signal with a similar amplitude and an opposite phase to the blocking signal.
  • the second audio processing unit is used for mixing the environment signal to be compensated, the speech signal to be compensated and the inverse noise signal to obtain a mixed audio signal.
  • the mixed audio signal includes the environment signal to be compensated, the speech signal to be compensated and the inverted noise signal.
  • Speakers are used to play the mixed audio signal.
  • the mixed audio signal played by the speaker includes the environment signal to be compensated, the speech signal to be compensated and the inverted noise signal.
  • the environment signal to be compensated is combined with the environmental sound signal leaked into the ear canal through the gap between the earphone and the ear canal, realizing the restoration of the first external environment sound signal.
  • the speech signal to be compensated is combined with the voice signal leaked into the ear canal through the gap between the earphone and the ear canal, realizing the restoration of the first voice signal sent by the user and thereby the restoration of the external sound signal; and the inverted noise signal can weaken or offset the low-frequency rising signal caused by the occlusion effect in the ear canal, so as to suppress the occlusion effect caused by wearing headphones when speaking. Therefore, the earphones of the embodiments of the present application can improve the restoration of the first external environment sound signal and the first voice signal sent by the user while suppressing the occlusion effect.
  • the earphone shown in FIG. 10 is only an example provided by the embodiment of the present application.
  • the headset may have more or fewer components than shown, may combine two or more components, or may be implemented with different configurations of components. It should be noted that, in an optional situation, the above-mentioned components of the earphone can also be coupled together.
  • Figure 11 is a schematic flow chart of the second sound signal processing method provided by the embodiment of the present application. This method can be applied to the headset shown in Figure 10, and the headset is in a state of being worn by the user.
  • the method may specifically include the following steps:
  • the reference microphone collects the first external sound signal.
  • the call microphone collects the second external sound signal.
  • the headset is provided with a reference microphone and a call microphone. Both the reference microphone and the call microphone are used to collect external sound signals.
  • the external sound signal collected by the reference microphone is called the first external sound signal
  • the external sound signal collected by the call microphone is called the second external sound signal.
  • the audio analysis unit separates the first external environment sound signal and the first speech signal based on the first external sound signal and the second external sound signal.
  • the audio analysis unit can analyze the first external sound signal and the second external sound signal, and separate the first external environment sound signal and the first speech signal from them.
  • the first feedforward filter processes the first external environment sound signal to obtain the environment signal to be compensated.
  • a third analog-to-digital conversion unit may be provided between the audio analysis unit and the first feedforward filter, and the input end of the third analog-to-digital conversion unit is connected to the audio analysis unit.
  • the output end of the third analog-to-digital conversion unit is connected to the first feedforward filter.
  • the first external environment sound signal separated by the audio analysis unit from the first external sound signal and the second external sound signal is also an analog signal.
  • after separating out the first external environment sound signal, the audio analysis unit transmits it to the third analog-to-digital conversion unit.
  • the third analog-to-digital conversion unit performs analog-to-digital conversion on the first external environment sound signal, converting the analog signal into a digital signal, and sends the analog-to-digital converted first external environment sound signal to the first feedforward filter for processing.
  • Environmental sound filter parameters are preset in the first feedforward filter. Based on the set environmental sound filter parameters, the first feedforward filter filters the analog-to-digital converted first external environment sound signal to obtain the environment signal to be compensated, and transmits the environment signal to be compensated to the second audio processing unit.
  • the second feedforward filter processes the first speech signal to obtain the speech signal to be compensated.
  • a fourth analog-to-digital conversion unit may be provided between the audio analysis unit and the second feedforward filter, and the input end of the fourth analog-to-digital conversion unit is connected to the audio analysis unit.
  • the output end of the fourth analog-to-digital conversion unit is connected to the second feedforward filter.
  • the first speech signal separated by the audio analysis unit from the first external sound signal and the second external sound signal is also an analog signal.
  • after separating out the first voice signal, the audio analysis unit transmits it to the fourth analog-to-digital conversion unit.
  • the fourth analog-to-digital conversion unit performs analog-to-digital conversion on the first voice signal, converting the analog signal into a digital signal, and transmits the analog-to-digital converted first speech signal to the second feedforward filter for processing.
  • Speech filter parameters are preset in the second feedforward filter. Based on the set speech filter parameters, the second feedforward filter filters the analog-to-digital converted first speech signal to obtain the speech signal to be compensated, and transmits the speech signal to be compensated to the second audio processing unit.
  • the third audio processing unit performs mixing processing on the first external environment sound signal and the first voice signal to obtain an external sound signal.
  • the output ends of the third analog-to-digital conversion unit and the fourth analog-to-digital conversion unit can also be connected to the third audio processing unit; the third analog-to-digital conversion unit can transmit the analog-to-digital converted first external environment sound signal to the third audio processing unit, and the fourth analog-to-digital conversion unit can transmit the analog-to-digital converted first voice signal to the third audio processing unit.
  • the third audio processing unit can mix the analog-to-digital converted first external environment sound signal and the analog-to-digital converted first speech signal to obtain the external sound signal, and transmit the external sound signal to the target filter for processing.
  • the external sound signal includes a first external environment sound signal and a first voice signal sent by the user.
  • the target filter is used to process the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the error microphone collects sound signals in the ear.
  • the first audio processing unit removes the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain an occlusion signal.
  • the feedback filter processes the blocking signal to obtain an inverted noise signal.
  • the second audio processing unit performs mixing processing on the environment signal to be compensated, the speech signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • after receiving the environment signal to be compensated transmitted by the first feedforward filter, the speech signal to be compensated transmitted by the second feedforward filter, and the inverted noise signal transmitted by the feedback filter, the second audio processing unit mixes the environment signal to be compensated, the speech signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • the mixed audio signal includes an environment signal to be compensated, a speech signal to be compensated and an inverted noise signal.
  • the speaker plays the mixed audio signal.
  • when the speaker plays the mixed audio signal, it can reduce the occlusion signal (that is, suppress the occlusion effect) and improve the degree of restoration of the first external environment sound signal and the first voice signal sent by the user.
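  • A block-level sketch of the Figure 11 flow, with each processing unit reduced to a simple gain so the data path stays readable; the gains, the placeholder audio analysis (treating the extra energy in the call microphone as voice), the helper names, and the use of plain addition for "mixing" are all assumptions, not the patent's actual filter designs.

        import numpy as np

        def process_block(ref_mic, call_mic, error_mic,
                          env_ff_gain=2.0, voice_ff_gain=2.0,
                          target_coeff=0.3, fb_gain=1.0):
            # Audio analysis unit (placeholder): treat the reference microphone as
            # mostly ambient sound and the remaining call-microphone energy as voice.
            env1 = np.asarray(ref_mic, dtype=float)
            voice1 = np.asarray(call_mic, dtype=float) - env1

            # First / second feedforward filters: compensation signals.
            env_to_compensate = env_ff_gain * env1
            voice_to_compensate = voice_ff_gain * voice1

            # Third audio processing unit + target filter: model the passive leak.
            external = env1 + voice1
            he_pnc = target_coeff * external

            # First audio processing unit: remove the leaked components, keep occlusion.
            occlusion = np.asarray(error_mic, dtype=float) - he_pnc

            # Feedback filter: anti-phase noise for the occlusion signal.
            anti_noise = -fb_gain * occlusion

            # Second audio processing unit: mix everything for the speaker.
            return env_to_compensate + voice_to_compensate + anti_noise

        # Toy usage with random microphone blocks.
        rng = np.random.default_rng(1)
        out = process_block(rng.standard_normal(256),
                            rng.standard_normal(256),
                            rng.standard_normal(256))
        print(out.shape)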
  • different users may have different vocal intensities when speaking while wearing earphones, the same user may wear the earphones in different positions across multiple uses, and the same user may speak at different vocal intensities across multiple uses. Because of these differences, the low-frequency component of the in-ear sound signal is raised to different extents when the user speaks while wearing the earphones, that is, the intensity of the occlusion signal caused by the occlusion effect differs.
  • FIG. 12 is a schematic diagram illustrating the low-frequency rise and high-frequency attenuation of the sound signal in the ear caused by the different volume of the voice signal when the user wears the earphone and speaks according to an embodiment of the present application.
  • the abscissa represents the frequency of the sound signal in the ear, in Hz
  • the ordinate represents the intensity difference between the sound signal in the ear and the external sound signal, in dB (decibel);
  • the arrows respectively indicate the rising intensity of the low-frequency components corresponding to different volumes.
  • the volumes corresponding to the first line segment 121, the second line segment 122 and the third line segment 123 decrease in turn.
  • the volume corresponding to the first line segment 121 is greater than the volume corresponding to the second line segment 122
  • the volume corresponding to the second line segment 122 is greater than the volume corresponding to the third line segment 123 .
  • the lifting intensity of the low-frequency component corresponding to the first line segment 121 is about 20dB
  • the lifting intensity of the low-frequency component corresponding to the second line segment 122 is about 15dB
  • the lifting intensity of the low-frequency component corresponding to the third line segment 123 is about 12dB.
  • the lifting intensity of the low-frequency component corresponding to the first line segment 121 is greater than that of the low-frequency component corresponding to the second line segment 122
  • the lifting intensity of the low-frequency component corresponding to the second line segment 122 is greater than the lifting intensity of the low-frequency component corresponding to the third line segment 123.
  • when the user speaks while wearing the earphones, the low-frequency component of the in-ear sound signal rises; when the user speaks at different volumes, the degree of rise of the low-frequency component caused by the occlusion effect differs, and the volume is positively correlated with the degree of elevation of the low-frequency component. That is, the louder the volume, the higher the elevation of the low-frequency component; the softer the volume, the lower the elevation of the low-frequency component.
  • when the intensity of the occlusion signal generated at the volume of the first voice signal sent by the user is less than the occlusion signal intensity for which the feedback filter parameters achieve the de-occlusion effect, excessive de-occlusion occurs, resulting in loss of the low-frequency component of the speech signal finally heard in the ear canal; and when the intensity of the occlusion signal generated at the volume of the first voice signal sent by the user is greater than the occlusion signal intensity for which the feedback filter parameters achieve the de-occlusion effect, insufficient de-occlusion occurs, resulting in excessive low-frequency components in the speech signal finally heard in the ear canal.
  • embodiments of the present application can also adaptively adjust the feedback filter parameters of the feedback filter according to the volume of the user's voice while wearing the earphones, that is, adjust the de-occlusion effect of the feedback filter, so that when the user speaks at different volumes while wearing the earphones, the consistency of the de-occlusion effect is improved, thereby improving the transparency effect for the external environment sound signal and the user's voice signal finally heard in the ear canal.
  • FIG. 13 is a schematic structural diagram of a third earphone provided by an embodiment of the present application.
  • the headset includes an external microphone, an error microphone, a feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit, a vibration sensor, a first control unit and a speaker.
  • the difference between the earphone shown in FIG. 13 and the earphone shown in FIG. 5 is that the earphone shown in FIG. 13 adds a vibration sensor and a first control unit based on the earphone shown in FIG. 5 .
  • the external microphones are connected to the feedforward filter, the target filter and the first control unit respectively
  • the error microphones are connected to the first audio processing unit and the first control unit respectively
  • the target filter is connected to the first audio processing unit
  • the first audio processing unit is also connected to the feedback filter.
  • the vibration sensor is connected to the first control unit, and the first control unit is connected to the feedback filter.
  • the feedback filter and the feedforward filter are both connected to the second audio processing unit, and the second audio processing unit is also connected to the speaker.
  • the external microphone can be a reference microphone or a call microphone, which is used to collect external sound signals. Error microphones are used to collect sound signals in the ear.
  • the vibration sensor is used to collect the vibration signals caused by the user wearing headphones and speaking.
  • the first control unit is used to determine the target volume at which the user speaks while wearing the earphones based on the vibration signal collected by the vibration sensor, the external sound signal collected by the external microphone, and the in-ear sound signal collected by the error microphone. Moreover, the first control unit can search for the feedback filter parameters that match the target volume according to the pre-stored relationship between volume and the feedback filter parameters of the feedback filter, and transmit those feedback filter parameters to the feedback filter, so that the feedback filter processes the occlusion signal transmitted by the first audio processing unit according to the feedback filter parameters transmitted by the first control unit to obtain an inverted noise signal.
  • the earphone shown in FIG. 13 is only an example provided by the embodiment of the present application.
  • the headset may have more or fewer components than shown, may combine two or more components, or may be implemented with different configurations of components. It should be noted that, in an optional situation, the above-mentioned components of the earphone can also be coupled together.
  • Figure 14 is a schematic flowchart of the third sound signal processing method provided by the embodiment of the present application. This method can be applied to the headset shown in Figure 13, and the headset is in a state of being worn by the user. The method may specifically include the following steps:
  • an external microphone collects external sound signals.
  • the feedforward filter processes the external sound signal to obtain the sound signal to be compensated.
  • the target filter processes the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the error microphone collects the sound signal in the ear.
  • the first audio processing unit removes the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain an occlusion signal.
  • the vibration sensor collects vibration signals.
  • Vibrations will be caused when the user wears earphones and speaks. Therefore, the vibration signal caused by the user wearing earphones and speaking is collected through a vibration sensor, that is, the vibration signal when the user wears earphones and speaks. The vibration signal is related to the volume when the user speaks.
  • the first control unit determines the target volume based on the vibration signal, external sound signal and in-ear sound signal, and searches for feedback filter parameters based on the target volume.
  • the first control unit can receive the vibration signal transmitted by the vibration sensor, the external sound signal transmitted by the external microphone, and the in-ear sound signal transmitted by the error microphone.
  • the external sound signal includes the first voice signal when the user speaks. Therefore, the volume when the user speaks can be determined based on the external sound signal collected by the external microphone; and the in-ear sound signal collected by the error microphone includes the second voice signal.
  • the second voice signal can also reflect the first voice signal when the user speaks to a certain extent, that is, when the intensity of the first voice signal is stronger, the intensity of the second voice signal is also stronger. Therefore, the in-ear sound signal collected by the error microphone can also be used to determine the volume at which the user speaks.
  • the first control unit may receive the vibration signal transmitted by the vibration sensor, obtain the amplitude of the vibration signal, and search for the corresponding volume from the relationship between amplitude and volume, and call the found volume the first volume.
  • the first control unit can determine the second volume when the user speaks based on the external sound signal, and determine the third volume when the user speaks based on the in-ear sound signal.
  • the first control unit determines the target volume when the user speaks based on the first volume, the second volume and the third volume.
  • the target volume may be a weighted average of the first volume, the second volume and the third volume, and the corresponding weights of the first volume, the second volume and the third volume may be equal or unequal.
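  • A minimal sketch of this weighted-average step, assuming the three per-source volume estimates are already available in dB and that equal weights are used; both the units and the weights are assumptions, since the patent also allows unequal weights.

        def target_volume(vol_vibration_db, vol_external_db, vol_in_ear_db,
                          weights=(1.0, 1.0, 1.0)):
            """Weighted average of the first, second and third volume estimates."""
            vols = (vol_vibration_db, vol_external_db, vol_in_ear_db)
            total_weight = sum(weights)
            return sum(w * v for w, v in zip(weights, vols)) / total_weight

        # Example: vibration sensor, external microphone and error microphone
        # each suggest a slightly different speaking level.
        print(target_volume(72.0, 68.0, 70.0))   # -> 70.0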
  • the target volume when the user speaks can also be determined based on any one or both of the vibration signal, the external sound signal, and the in-ear sound signal.
  • the target volume when the user speaks can be determined through the external sound signal collected by the external microphone and the vibration signal collected by the vibration sensor.
  • a call microphone can be used as an external microphone.
  • the first control unit determines the target volume for the user to speak while wearing the earphones based on the vibration signal and the external sound signal.
  • the error microphone may not be connected to the first control unit.
  • the target volume when the user speaks can be determined only through the in-ear sound signal collected by the error microphone. If the user is in a wind-noisy scene, such as a user wearing headphones while riding a bicycle or running in a wind-noisy environment, the external microphone will be greatly affected by the wind noise, making it difficult to determine the volume of the user's speech from the external sound signals collected by the external microphone. The internal microphone is less affected by wind noise. Therefore, the target volume when the user speaks can be determined through the in-ear sound signal collected by the internal microphone. In this scenario, there is no need to install a vibration sensor in the headset, and the external microphone does not need to be connected to the first control unit.
  • the target volume when the user speaks can also be determined only through the external sound signals collected by the external microphone.
  • when the external microphone receives less interference, the external sound signal collected by the external microphone can be used to determine the target volume of the user's speech.
  • after determining the target volume when the user speaks, the first control unit can search for the feedback filter parameters that match the target volume based on the pre-stored relationship between volume and the feedback filter parameters of the feedback filter, and transfer the feedback filter parameters to the feedback filter.
  • in the comparison table of the relationship between volume and the feedback filter parameters of the feedback filter, there is a positive correlation between volume and the feedback filter parameters: when the volume is larger, the feedback filter parameters are larger; when the volume is smaller, the feedback filter parameters are smaller.
  • the volume of the user's speech is positively correlated with the elevation of the low-frequency component caused by the occlusion effect. Therefore, when the target volume is determined to be larger, the intensity of the occlusion signal caused by the occlusion effect is correspondingly greater, and the feedback filter parameters of the feedback filter can be increased to better suppress the occlusion effect and to improve the phenomenon, caused by insufficient de-occlusion, of excessive low-frequency components in the speech signal ultimately heard in the ear canal. When the target volume is determined to be smaller, the intensity of the occlusion signal caused by the occlusion effect is correspondingly smaller, and the feedback filter parameters of the feedback filter can be reduced to better suppress the occlusion effect and to improve the phenomenon of excessive de-occlusion.
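  • A sketch of the pre-stored volume-to-parameter lookup, assuming the table stores one scalar feedback filter parameter per tested volume (for example the 60/70/80 dB test points mentioned earlier) and that the nearest table entry is selected; the parameter values and the matching rule are placeholders, not values from the patent.

        # Hypothetical comparison table: louder speech -> larger FB parameter.
        VOLUME_TO_FB_PARAM = {60.0: 0.6, 70.0: 0.8, 80.0: 1.0}

        def lookup_fb_param(target_volume_db, table=VOLUME_TO_FB_PARAM):
            """Pick the feedback filter parameter whose volume entry is closest
            to the estimated target volume (one simple matching rule)."""
            nearest_volume = min(table, key=lambda v: abs(v - target_volume_db))
            return table[nearest_volume]

        print(lookup_fb_param(73.0))   # -> 0.8 (nearest entry is 70 dB)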
  • the feedback filter processes the blocking signal based on the feedback filter parameters to obtain an inverted noise signal.
  • after receiving the feedback filter parameters transmitted by the first control unit, the feedback filter processes the occlusion signal according to the transmitted feedback filter parameters to obtain an inverted noise signal.
  • the inverted noise signal and the blocking signal have similar amplitudes and opposite phases.
  • the second audio processing unit performs mixing processing on the sound signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • the speaker plays the mixed audio signal.
  • the first control unit can also determine, based on the external sound signal collected by the external microphone and the in-ear sound signal collected by the error microphone, the first intensity of the low-frequency component in the external sound signal and the second intensity of the low-frequency component in the in-ear sound signal.
  • if the absolute value of the difference between the first intensity and the second intensity is greater than the intensity threshold, it is determined that the elevation of the low-frequency component caused by the occlusion effect is larger, that is, the intensity of the occlusion signal is larger, and the first control unit can select higher feedback filter parameters and transmit the selected feedback filter parameters to the feedback filter to adjust the occlusion signal. If the absolute value of the difference between the first intensity and the second intensity is less than or equal to the intensity threshold, it is determined that the elevation of the low-frequency component caused by the occlusion effect is smaller, that is, the intensity of the occlusion signal is smaller, and the first control unit can select lower feedback filter parameters and transmit the selected feedback filter parameters to the feedback filter to adjust the occlusion signal.
  • a comparison table of the relationship between the intensity difference and the feedback filter parameters is preset in the headset.
  • the intensity difference refers to the difference between the third intensity and the intensity threshold
  • the third intensity is the difference between the first intensity and the second intensity.
  • the first control unit can calculate the absolute value of the difference between the first intensity and the second intensity to obtain the third intensity; then, the first control unit calculates the difference between the third intensity and the intensity threshold to obtain the intensity difference; then, according to For the calculated intensity difference, find the corresponding feedback filter parameters from the comparison table of the relationship between the intensity difference and the feedback filter parameters.
  • the intensity difference is positively correlated with the feedback filter parameters.
  • when the intensity difference is larger, the feedback filter parameters are larger; when the intensity difference is smaller, the feedback filter parameters are smaller.
  • the first control unit directly searches for the corresponding feedback filter parameters based on the external sound signal and the in-ear sound signal.
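  • A sketch of this alternative rule based on the low-frequency intensities, assuming intensities in dB, a single intensity threshold, and a small intensity-difference table; all numeric values and helper names are illustrative assumptions.

        INTENSITY_DIFF_TO_FB_PARAM = [      # (minimum intensity difference in dB, parameter)
            (0.0, 0.5),
            (5.0, 0.7),
            (10.0, 0.9),
        ]

        def fb_param_from_intensities(low_freq_external_db, low_freq_in_ear_db,
                                      intensity_threshold_db=6.0,
                                      table=INTENSITY_DIFF_TO_FB_PARAM):
            """Third intensity = |first - second|; intensity difference = third - threshold;
            larger differences map to larger feedback filter parameters."""
            third_intensity = abs(low_freq_external_db - low_freq_in_ear_db)
            intensity_diff = third_intensity - intensity_threshold_db
            chosen = table[0][1]
            for min_diff, param in table:
                if intensity_diff >= min_diff:
                    chosen = param
            return chosen

        print(fb_param_from_intensities(60.0, 72.0))   # third = 12, diff = 6 -> 0.7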
  • embodiments of the present application may also, while adjusting the feedback filter parameters of the feedback filter according to the actual use conditions, adjust the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter. For the specific implementation, please refer to the following description.
  • FIG. 15 is a schematic structural diagram of a fourth type of earphone provided by an embodiment of the present application.
  • the headset includes: a reference microphone, a call microphone, an error microphone, an audio analysis unit, a first feedforward filter, a second feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit, a third audio processing unit, a vibration sensor, a first control unit and a speaker.
  • different from the earphones described above that are provided with only one external microphone and one feedforward filter, the headset shown in Figure 15 is provided with two external microphones and two feedforward filters.
  • the two external microphones are the reference microphone and the call microphone respectively.
  • the two feedforward filters are the first feedforward filter and the second feedforward filter respectively.
  • An audio analysis unit, a third audio processing unit, a vibration sensor and a first control unit are also added to the headphones.
  • the reference microphone and the call microphone are both connected to the audio analysis unit.
  • the audio analysis unit is also connected to the first feedforward filter, the second feedforward filter, the third audio processing unit and the first control unit respectively.
  • the third audio processing unit is connected to the target filter, the error microphone is connected to the first audio processing unit and the first control unit respectively, the target filter is connected to the first audio processing unit, and the first audio processing unit is also connected to the feedback filter.
  • the vibration sensor is connected to the first control unit, and the first control unit is connected to the feedback filter, the first feedforward filter and the second feedforward filter respectively.
  • the feedback filter, the first feedforward filter and the second feedforward filter are all connected to the second audio processing unit, and the second audio processing unit is also connected to the speaker.
  • the vibration sensor is used to collect vibration signals caused by the user wearing headphones and speaking.
  • the first control unit is used to determine the current scene information based on the vibration signal collected by the vibration sensor, the first external environment sound signal separated by the audio analysis unit and the first voice signal sent by the user, and to adjust, according to the scene information, the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter.
  • the earphone shown in FIG. 15 is only an example provided by the embodiment of the present application.
  • the headset may have more or fewer components than shown, may combine two or more components, or may be implemented with different configurations of components. It should be noted that, in an optional situation, the above-mentioned components of the earphone can also be coupled together.
  • Figure 16 is a schematic flowchart of the fourth sound signal processing method provided by an embodiment of the present application. This method can be applied to the headset shown in Figure 15 while the headset is being worn by the user. The method may specifically include the following steps:
  • the reference microphone collects the first external sound signal.
  • the call microphone collects the second external sound signal.
  • the audio analysis unit separates the first external environment sound signal and the first speech signal based on the first external sound signal and the second external sound signal.
  • the third audio processing unit performs mixing processing on the first external environment sound signal and the first voice signal to obtain an external sound signal.
  • the target filter is used to process the external sound signal to obtain the environmental sound attenuation signal and the speech attenuation signal.
  • the error microphone collects the sound signal in the ear.
  • the first audio processing unit removes the second external environment sound signal and the second speech signal from the in-ear sound signal to obtain an occlusion signal.
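  • The core of this removal step can be illustrated with a minimal time-domain sketch; here the target filter is reduced to a single proportional coefficient (the document also allows a subtractive attenuation parameter), which is a simplification of a real, frequency-dependent filter.

```python
import numpy as np

def extract_occlusion(in_ear_frame, external_frame, target_coeff=0.3):
    """Remove the passively attenuated external components from the in-ear signal.

    target_coeff stands in for the target filter parameter (a proportional
    coefficient between 0 and 1 obtained from fitting tests); a real target
    filter would be frequency dependent rather than a single gain.
    """
    in_ear = np.asarray(in_ear_frame, dtype=float)
    external = np.asarray(external_frame, dtype=float)
    attenuated = target_coeff * external      # environmental sound + speech attenuation signals
    occlusion = in_ear - attenuated           # what the first audio processing unit passes on
    return occlusion
```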
  • the vibration sensor collects vibration signals.
  • the first control unit determines the environmental sound filter parameters of the first feedforward filter based on the first external environment sound signal and the first speech signal.
  • the first feedforward filter processes the first external environment sound signal based on the determined environmental sound filter parameters to obtain the environment signal to be compensated.
  • the first control unit can receive the first external environment sound signal and the first speech signal split out by the audio analysis unit, and obtain the signal strength of the first external environment sound signal and the signal strength of the first speech signal. When the difference between the signal strength of the first external environment sound signal and the signal strength of the first speech signal is less than the first set threshold, it is determined that the user is in a relatively quiet external environment.
  • In this scenario, the first control unit can reduce the environmental sound filter parameters of the first feedforward filter, so that when the first feedforward filter processes the first external environment sound signal according to the determined environmental sound filter parameters to obtain the environment signal to be compensated, the environmental sound signal ultimately heard in the ear canal is reduced, thereby reducing the negative hearing sensation caused by the noise floor of the circuit and the microphone hardware.
  • the first control unit determines the speech filter parameters of the second feedforward filter based on the first external environment sound signal and the first speech signal.
  • the second feedforward filter processes the first speech signal based on the determined speech filter parameters to obtain the speech signal to be compensated.
  • Correspondingly, when the difference between the signal strength of the first external environment sound signal and the signal strength of the first speech signal is greater than the second set threshold, it is determined that the user is in a noisy external environment. The second set threshold may be greater than or equal to the first set threshold.
  • In this scenario, the first control unit can increase the speech filter parameters of the second feedforward filter, so that the second feedforward filter processes the first speech signal based on the determined speech filter parameters to obtain the speech signal to be compensated. The speech signal to be compensated combines with the speech signal leaking into the ear canal through the gap between the earphone and the ear canal, so that the final speech signal in the ear canal is greater than the first speech signal in the external environment; the speech signal ultimately heard in the ear canal is therefore increased, improving the user's ability to hear their own voice clearly in a high-noise environment.
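  • The threshold logic of the two adjustments above can be sketched as follows; the threshold values, parameter ranges, and step size are illustrative placeholders, not values from the document.

```python
def adjust_feedforward_params(env_level_db, speech_level_db,
                              env_param, speech_param,
                              first_threshold_db=6.0, second_threshold_db=10.0,
                              step=0.1):
    """Adjust the two feedforward parameters from the relative levels of ambient sound and speech."""
    diff = env_level_db - speech_level_db
    if diff < first_threshold_db:
        # Relatively quiet environment: lower the ambient-sound path so the
        # circuit/microphone noise floor is less audible in the ear canal.
        env_param = max(0.0, env_param - step)
    if diff > second_threshold_db:
        # Noisy environment: raise the speech path so the user can hear
        # their own voice clearly.
        speech_param = min(1.0, speech_param + step)
    return env_param, speech_param
```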
  • the first control unit determines the target volume based on the vibration signal, the external sound signal and the in-ear sound signal, and searches for the feedback filter parameters of the feedback filter based on the target volume.
  • the feedback filter processes the blocking signal based on the determined feedback filter parameters to obtain an inverted noise signal.
  • the second audio processing unit performs mixing processing on the environment signal to be compensated, the speech signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • the sound signal processing method corresponding to Figure 15 and Figure 16 can be applied to the de-occlusion scenario in which the user wears the headset and speaks at different volumes, improving the consistency of the de-occlusion effect across different speaking volumes. Moreover, it can also be used to reasonably adjust the environmental sound filter parameters of the first feedforward filter and/or the speech filter parameters of the second feedforward filter in different external environments to meet different scene requirements.
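  • For the target-volume determination used in the flow above, a minimal sketch is given below; the vibration-amplitude-to-dB calibration, the weights, and the volume table are assumptions, since the document only states that a weighted average of the three volume estimates is used and then looked up in a table.

```python
import numpy as np

# Illustrative volume -> feedback filter parameter table (louder speech -> stronger de-occlusion).
VOLUME_TO_FB_PARAM = [(60.0, 0.3), (70.0, 0.5), (80.0, 0.7)]

def estimate_target_volume(vibration_amplitude, external_level_db, in_ear_level_db,
                           weights=(0.4, 0.3, 0.3)):
    """Fuse the three per-frame volume estimates into one target volume (dB)."""
    vibration_db = 20.0 * np.log10(max(vibration_amplitude, 1e-9)) + 94.0  # hypothetical calibration
    volumes = np.array([vibration_db, external_level_db, in_ear_level_db])
    return float(np.dot(np.asarray(weights), volumes))

def lookup_fb_param(target_volume_db):
    """Pick the feedback filter parameter for the loudest table entry not exceeding the volume."""
    fb_param = VOLUME_TO_FB_PARAM[0][1]
    for volume, param in VOLUME_TO_FB_PARAM:
        if target_volume_db >= volume:
            fb_param = param
    return fb_param
```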
  • The above describes how to adjust the environmental sound filter parameters of the first feedforward filter, the speech filter parameters of the second feedforward filter, and the feedback filter parameters of the feedback filter by using one or more of the external microphone, the internal microphone, and the vibration sensor.
  • other methods may be used to set the environmental sound filter parameters of the first feedforward filter, the speech filter parameters of the second feedforward filter, and the feedback filter parameters of the feedback filter.
  • FIG. 17 is an exemplary control interface of a terminal device provided by an embodiment of the present application.
  • the control interface can be considered as a user-oriented input interface, which provides controls with multiple functions so that the user can control the headset by controlling relevant controls.
  • the interface shown in (a) in Figure 17 is the first interface 170a displayed on the terminal device.
  • Two mode selection controls are displayed on the first interface 170a, which are the automatic mode control and the custom mode control.
  • the user can perform corresponding operations on the first interface 170a to control, in different ways, how the filter parameters in the headset are determined.
  • When the user inputs a first operation on the custom mode control on the first interface 170a, the first operation may be a selection operation on the custom mode control, such as a single-click operation, a double-click operation, or a long-press operation.
  • the terminal device jumps to the interface shown in (b) in Figure 17 .
  • the interface shown in (b) in Figure 17 is the second interface 170b displayed on the terminal device.
  • the second interface 170b displays an environmental sound filter parameter setting option, a speech filter parameter setting option, and a feedback filter parameter setting option.
  • When the user inputs a second operation on the feedback filter parameter setting option on the second interface 170b, the terminal device jumps to the interface shown in (c) of Figure 17.
  • the interface shown in (c) in Figure 17 is the third interface 170c displayed on the terminal device.
  • a gear wheel is displayed on the third interface 170c.
  • the gear wheel includes multiple gears, such as gear 1 to gear 8.
  • each gear corresponds to a feedback filter parameter.
  • The gear adjustment button 171 indicates the currently selected gear, and the feedback filter parameters corresponding to each gear are stored in the terminal device. The terminal device therefore searches for the corresponding feedback filter parameters according to the gear selected by the user with the gear adjustment button 171, and sends the feedback filter parameters to the headset through a wireless link such as Bluetooth.
  • the headset can be equipped with a wireless communication module such as Bluetooth.
  • the wireless communication module can also be connected to the first control unit in the headset.
  • the wireless communication module in the headset receives the feedback filter parameters sent by the terminal device and transmits them to the first control unit, which then transmits them to the feedback filter, so that the feedback filter processes the occlusion signal based on the feedback filter parameters.
  • the feedback filter parameters corresponding to each gear can also be set in the headset.
  • After the user selects a gear with the gear adjustment button 171, the terminal device sends the gear information to the headset through a wireless link.
  • the wireless communication module in the headset receives the gear information sent by the terminal device, searches for the corresponding feedback filter parameters based on the gear information, and transmits the found feedback filter parameters to the feedback filter, so that the feedback filter processes the occlusion signal based on these feedback filter parameters.
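  • The two gear-handling variants just described can be sketched as follows; the gear table, the message format, and send_to_headset are hypothetical placeholders rather than any real Bluetooth API.

```python
# Both the gear table and the message format are assumptions; send_to_headset is a
# placeholder for whatever Bluetooth (or other wireless) transport the product uses.
GEAR_TO_FB_PARAM = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4, 5: 0.5, 6: 0.6, 7: 0.7, 8: 0.8}

def on_gear_selected(gear: int, params_stored_on_terminal: bool, send_to_headset) -> None:
    if params_stored_on_terminal:
        # Variant 1: the terminal device looks up the parameter and sends it directly.
        send_to_headset({"type": "fb_param", "value": GEAR_TO_FB_PARAM[gear]})
    else:
        # Variant 2: the terminal device only sends the gear index; the headset's
        # wireless communication module looks up the matching parameter locally.
        send_to_headset({"type": "fb_gear", "value": gear})
```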
  • It should be noted that when the user selects the environmental sound filter parameter setting option or the speech filter parameter setting option on the second interface 170b, the interface displayed on the terminal device is similar to the third interface 170c shown in (c) of Figure 17; correspondingly, similar operations can be used to select the environmental sound filter parameters or the speech filter parameters.
  • When the user inputs a third operation on the automatic mode control on the first interface 170a, the terminal device enters an automatic detection mode: the terminal device automatically detects the external environment in which the user is located, such as a noisy external environment or a relatively quiet external environment, and determines one or more of the environmental sound filter parameters, the speech filter parameters, and the feedback filter parameters according to the detected external environment. After the terminal device determines the corresponding filter parameters, it can send them to the headset through the wireless link.
  • In practical applications, the control interface on the terminal device may also include more or fewer controls, elements, symbols, functions, text, patterns, or colors, or the controls, elements, symbols, functions, text, patterns, or colors on the control interface may take other forms.
  • the gear corresponding to each filter parameter can also be designed in the form of an adjustment bar for user touch control, which is not limited in the embodiments of the present application.
  • Wind noise refers to the whirring sound produced when there is wind in the external environment, which affects the normal use of headphones.
  • FIG. 18 is a schematic diagram, provided by an embodiment of the present application, of how wind speed affects the frequency response noise at the eardrum reference point after the user wears the earphones in a wind noise scenario.
  • the abscissa represents the frequency of the external environmental noise, in Hz
  • the ordinate is the frequency response value of the eardrum reference point, in dB
  • along the direction shown by the arrow, the curves represent the frequency response noise at the eardrum reference point corresponding to successively higher wind speeds.
  • the frequency response value of the eardrum reference point will be affected by the wind speed, and as the wind speed increases, the bandwidth corresponding to the frequency response value of the eardrum reference point will also increase.
  • FIG. 19 is a schematic diagram of the frequency response noise of the eardrum reference point in a wind noise scenario and a wind noise-free scenario provided by an embodiment of the present application.
  • the curve corresponding to the first external environment sound refers to: the relationship curve between the frequency response value of the eardrum reference point and the frequency when not in a wind noise scene
  • the curve corresponding to the second external environment sound refers to: the relationship curve between the frequency response value of the eardrum reference point and the frequency in a wind noise scenario.
  • If the user wears the headset in a wind noise scenario and the target filter still attenuates the external sound signal with the target filter parameters obtained during the pre-use tests, the low-frequency component of the audio signal played by the speaker will be higher than the low-frequency component played in a stable environment, resulting in higher wind noise ultimately being heard in the ear canal in the wind noise scenario.
  • In some related technologies, headphones with a transparency (hear-through) function generally turn off the external microphone in wind noise scenarios, but this approach is not effective at suppressing wind noise while maintaining the transparency function of the headphones.
  • embodiments of the present application can also adjust the target filter parameters of the target filter to reduce the wind noise ultimately heard in the ear canal in a wind noise scenario.
  • FIG. 20 is a schematic structural diagram of the fifth earphone provided by an embodiment of the present application.
  • the headset includes a reference microphone, a call microphone, an error microphone, a wind noise analysis unit, a first feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit, a second control unit, and a speaker.
  • the difference between the headset shown in Figure 20 and the headset shown in Figure 5 is that the headset shown in Figure 5 is provided with only one external microphone, while the headset shown in Figure 20 is provided with two external microphones.
  • the two external microphones are the reference microphone and the call microphone respectively; in addition, a wind noise analysis unit and a second control unit are added to the headset shown in Figure 20.
  • the reference microphone and the call microphone are both connected to the wind noise analysis unit, the wind noise analysis unit is also connected to the first feedforward filter, the second control unit and the target filter respectively, and the second control unit is also connected to the target filter;
  • the error microphone and the target filter are both connected to the first audio processing unit, and the first audio processing unit is also connected to the feedback filter; the feedback filter and the first feedforward filter are both connected to the second audio processing unit, and the second audio processing unit is also connected to the speaker.
  • the reference microphone collects the first external sound signal
  • the call microphone collects the second external sound signal.
  • the wind noise analysis unit is used to calculate the correlation between the first external sound signal and the second external sound signal to analyze the intensity of the wind in the external environment.
  • The second control unit is used to adjust the target filter parameters of the target filter according to the intensity of the wind in the external environment calculated by the wind noise analysis unit. When the intensity of the wind in the external environment is high, the target filter parameters of the target filter are reduced, so that when the target filter processes the first external environment sound signal in the external sound signal, less of the first external environment sound signal is removed. As a result, the signal processed by the first audio processing unit includes the occlusion signal and part of the environmental noise signal, and the feedback filter can remove this part of the environmental noise signal when processing the signal transmitted by the first audio processing unit, thereby reducing the wind noise ultimately heard in the ear canal in a wind noise scenario.
  • Since the user generally does not speak in a wind noise scenario, the second feedforward filter is not shown in the headset of Figure 20.
  • Of course, in an actual product, a second feedforward filter can also be provided in the earphones, as well as an audio analysis unit used to distinguish the external environment sound signal from the speech signal uttered by the user.
  • the earphone shown in FIG. 20 is only an example provided by the embodiment of the present application.
  • the headset may have more or fewer components than shown, may combine two or more components, or may be implemented with different configurations of components. It should be noted that, in an optional situation, the above-mentioned components of the earphone can also be coupled together.
  • Figure 21 is a schematic flowchart of the fifth sound signal processing method provided by an embodiment of the present application. This method can be applied to the headset shown in Figure 20 while the headset is being worn by the user; at this time, the user is in a wind noise scenario and is not uttering a speech signal. The method may specifically include the following steps:
  • the reference microphone collects the first external sound signal.
  • the call microphone collects the second external sound signal.
  • the wind noise analysis unit calculates the intensity of the wind in the external environment based on the first external sound signal and the second external sound signal.
  • both the first external sound signal and the second external sound signal only include external environmental sound signals.
  • When the user wears the headset normally, the reference microphone and the call microphone are located at different positions. The greater the intensity of the wind in the external environment where the user is located, the weaker the correlation between the first external sound signal collected by the reference microphone and the second external sound signal collected by the call microphone; the smaller the intensity of the wind, the stronger the correlation between the two signals. That is, the correlation between the first external sound signal and the second external sound signal is negatively correlated with the intensity of the wind in the external environment.
  • the wind noise analysis unit calculates the correlation between the first external sound signal and the second external sound signal to analyze the intensity of the external environment wind, and transmits the determined intensity of the external environment wind to the second control unit.
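  • A minimal per-frame sketch of this correlation analysis is shown below; the mapping from correlation to a wind-intensity score is an illustrative assumption rather than the patent's formula.

```python
import numpy as np

def mic_correlation(ref_frame, call_frame):
    """Normalized correlation between reference-mic and call-mic frames (1 ~ identical, 0 ~ unrelated)."""
    ref = np.asarray(ref_frame, dtype=float)
    call = np.asarray(call_frame, dtype=float)
    ref -= ref.mean()
    call -= call.mean()
    denom = np.linalg.norm(ref) * np.linalg.norm(call) + 1e-12
    return float(np.dot(ref, call) / denom)

def wind_intensity_score(ref_frame, call_frame):
    """Coarse wind-intensity score: lower inter-microphone correlation -> stronger wind."""
    return 1.0 - max(0.0, mic_correlation(ref_frame, call_frame))
```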
  • the second control unit is used to adjust the target filter parameters of the target filter according to the intensity of the wind in the external environment.
  • the second control unit adjusts the target filter parameters of the target filter according to the intensity of the wind in the external environment calculated by the wind noise analysis unit.
  • When the intensity of the wind in the external environment is higher, the target filter parameters of the target filter are reduced; that is, the intensity of the external environment wind is negatively correlated with the target filter parameters of the target filter.
  • One possible implementation method is that the earphones are preset with a comparison table of the relationship between the intensity of the ambient wind and the target filter parameters. After determining the intensity of the external ambient wind, the second control unit searches for the corresponding target filter parameters.
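  • Such a preset table might be sketched as follows; the breakpoints and parameter values are illustrative assumptions, chosen only to show the negative correlation described above.

```python
# Illustrative preset table: stronger wind -> smaller target filter parameter, so that
# less of the external signal is removed and the feedback path can cancel the remainder.
WIND_TO_TARGET_PARAM = [(0.0, 0.9), (0.3, 0.6), (0.6, 0.4), (0.8, 0.2)]  # (wind score, parameter)

def lookup_target_param(wind_score: float) -> float:
    target_param = WIND_TO_TARGET_PARAM[0][1]
    for min_score, param in WIND_TO_TARGET_PARAM:
        if wind_score >= min_score:
            target_param = param
    return target_param
```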
  • the target filter processes the external sound signal to obtain the environmental sound attenuation signal.
  • After receiving the target filter parameters transmitted by the second control unit, the target filter processes the external sound signal based on these target filter parameters to obtain an environmental sound attenuation signal.
  • It can be understood that the smaller the target filter parameters, the less of the external environment sound signal collected by the external microphones is removed in the environmental sound attenuation signal obtained after the target filter processes the external sound signal; the larger the target filter parameters, the more of the external environment sound signal is removed.
  • the error microphone collects the sound signal in the ear.
  • the first audio processing unit removes part of the in-ear sound signal according to the environmental sound attenuation signal to obtain an occlusion signal and an environmental noise signal.
  • If the environmental sound attenuation signal obtained by the target filter after processing the external sound signal is smaller, then after the first audio processing unit removes part of the in-ear sound signal according to the environmental sound attenuation signal, the remaining signal includes not only the occlusion signal but also part of the environmental noise signal.
  • The smaller the environmental sound attenuation signal produced by the target filter, the more environmental noise signal remains after processing by the first audio processing unit; the larger the environmental sound attenuation signal produced by the target filter, the less environmental noise signal remains.
  • the feedback filter processes the blocking signal and the environmental noise signal to obtain an inverted noise signal.
  • the inverted noise signal obtained by processing the occlusion signal and the environmental noise signal by the feedback filter has a similar amplitude and an opposite phase to the mixed signal (a mixed signal of the occlusion signal and the environmental noise signal).
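  • Putting the wind-noise path together, a one-frame, gain-only sketch (real filters are frequency dependent) could look like this; fb_gain is an illustrative stand-in for the feedback filter.

```python
import numpy as np

def wind_mode_frame(in_ear_frame, external_frame, target_param, fb_gain=1.0):
    """One-frame sketch of the wind-noise path (time domain, gain-only filters)."""
    in_ear = np.asarray(in_ear_frame, dtype=float)
    external = np.asarray(external_frame, dtype=float)
    ambient_attenuation = target_param * external   # target filter output (reduced parameter in wind)
    residual = in_ear - ambient_attenuation         # occlusion signal + leftover ambient/wind noise
    inverted_noise = -fb_gain * residual            # feedback filter output (anti-phase)
    return inverted_noise                           # later mixed with the environment signal to be compensated
```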
  • the first feedforward filter processes the external sound signal to obtain the environment signal to be compensated.
  • the external sound signal may only include the external environment sound signal collected by the reference microphone and the call microphone.
  • the second audio processing unit mixes the environment signal to be compensated and the inverted noise signal to obtain a mixed audio signal.
  • the speaker plays the mixed audio signal.
  • the embodiments of the present application can reduce the wind noise ultimately heard in the ear canal in a wind noise scenario by reducing the target filter parameters of the target filter without changing the feedforward filter parameters of the feedforward filter.
  • In one possible implementation, the earphones of the embodiments of the present application can be applied to the following two scenarios: in one scenario, when the user wears the earphones and speaks, the occlusion effect is suppressed while the fidelity of the first external environment sound signal and of the first speech signal uttered by the user is improved; in the other scenario, the earphones are worn in a wind noise scenario to reduce the wind noise ultimately heard in the ear canal.
  • Figure 22 is a schematic structural diagram of a sixth type of headset provided by an embodiment of the present application.
  • the headset includes a reference microphone, a call microphone, an error microphone, an audio analysis unit, a first feedforward filter, a second feedforward filter, a feedback filter, a target filter, a first audio processing unit, a second audio processing unit, a third audio processing unit, a speaker, a wind noise analysis unit, and a second control unit.
  • the schematic structural diagram of the earphone shown in Figure 22 can be understood as the structure obtained by combining the earphone shown in Figure 10 and the earphone shown in Figure 20.
  • the same hardware structures in Figure 10 and Figure 20, such as the target filter, the reference microphone, and the error microphone, can be shared.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It will be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing apparatus to produce a machine, such that the instructions executed by the processing unit of the computer or other programmable data processing apparatus produce a device for realizing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Otolaryngology (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Headphones And Earphones (AREA)

Abstract

Embodiments of the present application provide a sound signal processing method and an earphone device, applied to the field of electronic technology. A target filter and a first audio processing unit are added: the target filter processes the external sound signal collected by the external microphone to obtain an environmental sound attenuation signal and a speech attenuation signal; based on the environmental sound attenuation signal and the speech attenuation signal, the first audio processing unit removes the second external environment sound signal and the second speech signal from the in-ear sound signal collected by the error microphone to obtain an occlusion signal, and transmits the occlusion signal to the feedback filter; the feedback filter can then generate an inverted noise signal corresponding to the occlusion signal, which is played through the speaker. In this way, the feedback filter does not need to attenuate the second external environment sound signal and the second speech signal in the in-ear sound signal, so that the occlusion effect is suppressed while the fidelity of the first external environment sound signal and of the first speech signal uttered by the user is improved.

Description

声音信号的处理方法及耳机设备
本申请要求于2022年02月28日提交中国国家知识产权局、申请号为202210193354.7、申请名称为“声音信号的处理方法及耳机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种声音信号的处理方法及耳机设备。
背景技术
随着电子技术的不断发展,助听器,入耳式耳机、耳罩式耳机等耳机设备受到了越来越多消费者的青睐。
用户在佩戴耳机设备后,由于耳帽与耳罩的密封性,会使得用户听到的外界声音减小;并且,用户在佩戴耳机说话时,会感受到自己发出的语音信号中的低频分量的强度增强,出现闭塞效应,从而导致自己发出的语音听起来沉闷以及不清晰等。
但是,目前的耳机设备在抑制闭塞效应的同时,无法很好地还原外界的声音信号。
发明内容
本申请实施例提供一种声音信号的处理方法及耳机设备,其可以在抑制闭塞效应的同时,提高对外界声音信号的还原度。
第一方面,本申请实施例提出一种耳机设备,包括:外部麦克风、误差麦克风、扬声器、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元和第二音频处理单元;外部麦克风用于采集外界声音信号,外界声音信号包括第一外界环境声音信号和第一语音信号;误差麦克风用于采集耳内声音信号,耳内声音信号包括第二外界环境声音信号、第二语音信号和闭塞信号,第二外界环境声音信号的信号强度低于第一外界环境声音信号的信号强度,第二语音信号的信号强度低于第一语音信号的信号强度;前馈滤波器用于对外界声音信号进行处理,得到待补偿声音信号;目标滤波器,用于对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号;第一音频处理单元用于根据环境声音衰减信号和语音衰减信号,对耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号;反馈滤波器用于对闭塞信号进行处理,得到反相噪声信号;第二音频处理单元用于对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号;扬声器用于播放混音音频信号。
这样,通过目标滤波器对外部麦克风采集的外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号,第一音频处理单元根据环境声音衰减信号和语音衰减信号,将误差麦克风采集的耳内声音信号中的第二外界环境声音信号和第二语音信号去除,得到因闭塞效应带来的闭塞信号,则反馈滤波器生成闭塞信号对应的反相噪声信号,并通过扬声器进行播放。从而使得反馈滤波器可以不对耳内声音信号中的被动衰减后的环境声音信号和被动衰减后的语音信号进行减弱,从而实现在抑制闭塞效应的同时,提高对外界环境声音信号和用户发出的语音信号的还原度。
在一种可能的实现方式中,耳机设备还包括振动传感器和第一控制单元;振动传感器用于采集用户发声时的振动信号;第一控制单元用于根据振动信号、外界声音信 号和耳内声音信号中的一者或多者,确定用户发声时的目标音量;以及根据目标音量获取对应的反馈滤波器参数;反馈滤波器具体用于根据第一控制单元确定的反馈滤波器参数,对闭塞信号进行处理,得到反相噪声信号。这样,根据用户佩戴耳机时说话的音量,自适应调整反馈滤波器的反馈滤波器参数,即调整反馈滤波器的去闭塞效果,使得用户佩戴耳机时以不同的音量说话时,可以提高去闭塞效果的一致性,从而使得耳道内最终听到的外界环境声音信号和用户发出的语音信号的透传效果得到提高。
在一种可能的实现方式中,第一控制单元具体用于:根据振动信号的振幅确定第一音量;根据外界声音信号的信号强度,确定第二音量;根据耳内声音信号的信号强度,确定第三音量;根据第一音量、第二音量和第三音量,确定用户发声时的目标音量。这样,根据振动信号、外界声音信号和耳内声音信号,共同确定用户发声时的目标音量,从而使得最终确定的反馈滤波器参数更加准确。
在一种可能的实现方式中,第一控制单元具体用于计算第一音量、第二音量和第三音量的加权平均值,得到目标音量。
在一种可能的实现方式中,耳机设备还包括第一控制单元,第一控制单元用于:获取外界声音信号中的低频分量的第一强度,以及耳内声音信号中的低频分量的第二强度;根据第一强度、第二强度和强度阈值,获取对应的反馈滤波器参数;反馈滤波器具体用于根据第一控制单元确定的反馈滤波器参数,对闭塞信号进行处理,得到反相噪声信号。由于闭塞信号主要是由于用户说话时的闭塞效应产生的低频抬升信号,因此,可根据外界声音信号中的低频分量和耳内声音信号中的低频分量,准确确定出反馈滤波器参数;并且,耳机中增加的硬件结构(如只增加了第一控制单元和目标滤波器)较少,从而简化耳机中的硬件结构。
在一种可能的实现方式中,第一控制单元具体用于:计算第一强度与第二强度的差值的绝对值,得到第三强度;计算第三强度与强度阈值的差值,得到强度差值;根据强度差值获取对应的反馈滤波器参数。这样,通过将外界声音信号中的低频分量的第一强度与耳内声音信号中的低频分量的第二强度的差值的绝对值,与强度阈值进行比较,可方便确定出闭塞效应带来的低频分量的抬升强度,从而便于确定反馈滤波器参数。
在一种可能的实现方式中,耳机设备还包括音频分析单元和第三音频处理单元,外部麦克风包括参考麦克风和通话麦克风,前馈滤波器包括第一前馈滤波器和第二前馈滤波器;参考麦克风用于采集第一外界声音信号;通话麦克风用于采集第二外界声音信号;音频分析单元用于对第一外界声音信号和第二外界声音信号进行处理,得到第一外界环境声音信号和第一语音信号;第一前馈滤波器用于对第一外界环境声音信号进行处理,得到待补偿环境信号;第二前馈滤波器用于对第一语音信号进行处理,得到待补偿语音信号,待补偿声音信号包括待补偿环境信号和待补偿语音信号;第三音频处理单元用于对第一外界环境声音信号和第一语音信号进行混音处理,得到外界声音信号。这样,基于音频分析单元,可准确拆分出外界声音信号中的第一外界环境声音信号和第一语音信号,从而使得第一前馈滤波器可准确得到待补偿环境信号,以提高对第一外界环境声音信号的还原的准确度,并且使得第二前馈滤波器可准确得到待补偿语音信号,以提高对第一语音信号的还原的准确度。
在一种可能的实现方式中,耳机设备还包括第一控制单元;第一控制单元用于获取第一外界环境声音信号的信号强度以及第一语音信号的信号强度,并根据第一外界环境声音信号的信号强度以及第一语音信号的信号强度,调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数;第一前馈滤波器具体用于根据第一控制单元确定的环境音滤波器参数,对第一外界环境声音信号进行处理,得到待补偿环境信号;第二前馈滤波器具体用于根据第一控制单元确定的语音滤波器参数,对第一语音信号进行处理,得到待补偿语音信号。这样,通过合理调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数,以满足不同的场景需求。
在一种可能的实现方式中,第一控制单元具体用于当第一外界环境声音信号的信号强度与第一语音信号的信号强度的差值小于第一设定阈值时,降低第一前馈滤波器的环境音滤波器参数;以及当第一外界环境声音信号的信号强度与第一语音信号的信号强度的差值大于第二设定阈值时,提高第二前馈滤波器的语音滤波器参数。这样,第一控制单元可降低环境音滤波器参数,使得耳道内最终听到的环境声音信号减小,从而降低由于电路以及麦克风硬件底噪带来的负面听感;并且,第一控制单元还可提高语音滤波器参数,使得耳道内最终的语音信号大于外界环境中的第一语音信号,提高用户在高噪声环境下可以听清自己发出的声音。
在一种可能的实现方式中,耳机设备还包括无线通信模块和第一控制单元;无线通信模块用于接收终端设备发送的滤波器参数,滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者;第一控制单元用于接收无线通信模块传送的滤波器参数。这样,提供了一种通过终端设备控制耳机中的环境音滤波器参数、语音滤波器参数以及反馈滤波器参数的方式,此时,参考麦克风、通话麦克风和误差麦克风等可以无需与第一控制单元连接,从而简化耳机中的电路连接方式;并且,可在终端设备上人为控制耳机的去闭塞效果和透传效果,提高了耳机的去闭塞效果和透传效果的多样性。
在一种可能的实现方式中,耳机设备还包括无线通信模块和第一控制单元;无线通信模块用于接收终端设备发送的档位信息;第一控制单元用于根据档位信息获取对应的滤波器参数,滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者。这样,提供了另一种通过终端设备控制耳机中的环境音滤波器参数、语音滤波器参数以及反馈滤波器参数的方式,此时,参考麦克风、通话麦克风和误差麦克风等可以无需与第一控制单元连接,从而简化耳机中的电路连接方式;并且,可在终端设备上人为控制耳机的去闭塞效果和透传效果,提高了耳机的去闭塞效果和透传效果的多样性。
在一种可能的实现方式中,耳机设备还包括风噪分析单元和第二控制单元;风噪分析单元用于计算第一外界声音信号与第二外界声音信号的相关度,以确定外界环境风的强度;第二控制单元用于根据外界环境风的强度,确定目标滤波器的目标滤波器参数;目标滤波器还用于根据第二控制单元确定的目标滤波器参数,对外界声音信号进行处理,得到环境声音衰减信号,外界声音信号包括第一外界声音信号和第二外界声音信号;第一音频处理单元还用于根据环境声音衰减信号对耳内声音信号中的部分 信号进行去除,得到闭塞信号和环境噪声信号;反馈滤波器,还用于对闭塞信号和环境噪声信号进行处理,得到反相噪声信号。这样,通过对目标滤波器的目标滤波器参数进行调整,以降低风噪场景下耳道内最终听到的风噪声。
第二方面,本申请实施例提出一种声音信号的处理方法,应用于耳机设备,耳机设备包括外部麦克风、误差麦克风、扬声器、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元和第二音频处理单元,该方法包括:外部麦克风采集外界声音信号,外界声音信号包括第一外界环境声音信号和第一语音信号;误差麦克风采集耳内声音信号,耳内声音信号包括第二外界环境声音信号、第二语音信号和闭塞信号,第二外界环境声音信号的信号强度低于第一外界环境声音信号的信号强度,第二语音信号的信号强度低于第一语音信号的信号强度;前馈滤波器对外界声音信号进行处理,得到待补偿声音信号;目标滤波器对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号;第一音频处理单元根据环境声音衰减信号和语音衰减信号,对耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号;反馈滤波器对闭塞信号进行处理,得到反相噪声信号;第二音频处理单元对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号;扬声器播放混音音频信号。
在一种可能的实现方式中,耳机设备还包括振动传感器和第一控制单元;在反馈滤波器对闭塞信号进行处理,得到反相噪声信号之前,还包括:振动传感器采集用户发声时的振动信号;第一控制单元根据振动信号、外界声音信号和耳内声音信号中的一者或多者,确定用户发声时的目标音量;第一控制单元根据目标音量获取对应的反馈滤波器参数;反馈滤波器对闭塞信号进行处理,得到反相噪声信号,包括:反馈滤波器根据第一控制单元确定的反馈滤波器参数,对闭塞信号进行处理,得到反相噪声信号。
在一种可能的实现方式中,第一控制单元根据振动信号、外界声音信号和耳内声音信号中的一者或多者,确定用户发声时的目标音量,包括:第一控制单元根据振动信号的振幅确定第一音量;第一控制单元根据外界声音信号的信号强度,确定第二音量;第一控制单元根据耳内声音信号的信号强度,确定第三音量;第一控制单元根据第一音量、第二音量和第三音量,确定用户发声时的目标音量。
在一种可能的实现方式中,第一控制单元根据第一音量、第二音量和第三音量,确定用户发声时的目标音量,包括:第一控制单元计算第一音量、第二音量和第三音量的加权平均值,得到目标音量。
在一种可能的实现方式中,耳机设备还包括第一控制单元;在反馈滤波器对闭塞信号进行处理,得到反相噪声信号之前,还包括:第一控制单元获取外界声音信号中的低频分量的第一强度,以及耳内声音信号中的低频分量的第二强度;第一控制单元根据第一强度、第二强度和强度阈值,获取对应的反馈滤波器参数;反馈滤波器对闭塞信号进行处理,得到反相噪声信号,包括:反馈滤波器根据第一控制单元确定的反馈滤波器参数,对闭塞信号进行处理,得到反相噪声信号。
在一种可能的实现方式中,第一控制单元根据第一强度、第二强度和强度阈值,获取对应的反馈滤波器参数,包括:第一控制单元计算第一强度与第二强度的差值的绝对值,得到第三强度;第一控制单元计算第三强度与强度阈值的差值,得到强度差 值;第一控制单元根据强度差值获取对应的反馈滤波器参数。
在一种可能的实现方式中,耳机设备还包括音频分析单元和第三音频处理单元,外部麦克风包括参考麦克风和通话麦克风,前馈滤波器包括第一前馈滤波器和第二前馈滤波器;外部麦克风采集外界声音信号,包括:通过参考麦克风采集第一外界声音信号,以及通过通话麦克风采集第二外界声音信号;前馈滤波器对外界声音信号进行处理,得到待补偿声音信号,包括:音频分析单元对第一外界声音信号和第二外界声音信号进行处理,得到第一外界环境声音信号和第一语音信号;第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号;第二前馈滤波器对第一语音信号进行处理,得到待补偿语音信号,待补偿声音信号包括待补偿环境信号和待补偿语音信号;在目标滤波器对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号之前,还包括:第三音频处理单元对第一外界环境声音信号和第一语音信号进行混音处理,得到外界声音信号。
在一种可能的实现方式中,耳机设备还包括第一控制单元;在第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:第一控制单元获取第一外界环境声音信号的信号强度以及第一语音信号的信号强度;第一控制单元根据第一外界环境声音信号的信号强度以及第一语音信号的信号强度,调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数;第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号,包括:第一前馈滤波器根据第一控制单元确定的环境音滤波器参数,对第一外界环境声音信号进行处理,得到待补偿环境信号;第二前馈滤波器对第一语音信号进行处理,得到待补偿语音信号,包括:第二前馈滤波器根据第一控制单元确定的语音滤波器参数,对第一语音信号进行处理,得到待补偿语音信号。
在一种可能的实现方式中,第一控制单元根据第一外界环境声音信号的信号强度以及第一语音信号的信号强度,调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数,包括:当第一外界环境声音信号的信号强度与第一语音信号的信号强度的差值小于第一设定阈值时,第一控制单元降低第一前馈滤波器的环境音滤波器参数;当第一外界环境声音信号的信号强度与第一语音信号的信号强度的差值大于第二设定阈值时,第一控制单元提高第二前馈滤波器的语音滤波器参数。
在一种可能的实现方式中,耳机设备还包括无线通信模块和第一控制单元;在第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:无线通信模块接收终端设备发送的滤波器参数,滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者;第一控制单元接收无线通信模块传送的滤波器参数。
在一种可能的实现方式中,耳机设备还包括无线通信模块和第一控制单元;在第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:无线通信模块接收终端设备发送的档位信息;第一控制单元根据档位信息获取对应的滤波器参数,滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者。
在一种可能的实现方式中,耳机设备还包括风噪分析单元和第二控制单元;该方 法还包括:风噪分析单元计算第一外界声音信号与第二外界声音信号的相关度,以确定外界环境风的强度;第二控制单元根据外界环境风的强度,确定目标滤波器的目标滤波器参数;目标滤波器根据第二控制单元确定的目标滤波器参数,对外界声音信号进行处理,得到环境声音衰减信号,外界声音信号包括第一外界声音信号和第二外界声音信号;第一音频处理单元根据环境声音衰减信号对耳内声音信号中的部分信号进行去除,得到闭塞信号和环境噪声信号;反馈滤波器对闭塞信号和环境噪声信号进行处理,得到反相噪声信号。
第二方面各可能的实现方式,效果与第一方面以及第一方面的可能的设计中的效果类似,在此不再赘述。
附图说明
图1为本申请实施例提供的一种系统架构示意图;
图2为本申请实施例提供的用户佩戴耳机的场景示意图;
图3为本申请实施例提供的用户因佩戴耳机说话,导致耳内声音信号的低频抬升与高频衰减的示意图;
图4为相关技术提供的一种耳机的结构示意图;
图5为本申请实施例提供的第一种耳机的结构示意图;
图6为本申请实施例提供的第一种声音信号的处理方法的流程示意图;
图7为本申请实施例提供的测试得到前馈滤波器的前馈滤波器参数的测试流程示意图;
图8为本申请实施例提供的测试得到目标滤波器的目标滤波器参数的测试流程示意图;
图9为本申请实施例提供的测试得到的外部麦克风采集的第一测试信号与误差麦克风采集的第二测试信号的示意图;
图10为本申请实施例提供的第二种耳机的结构示意图;
图11为本申请实施例提供的第二种声音信号的处理方法的流程示意图;
图12为本申请实施例提供的用户佩戴耳机说话时的语音信号的音量不同,而导致的耳内声音信号的低频抬升与高频衰减的示意图;
图13为本申请实施例提供的第三种耳机的结构示意图;
图14为本申请实施例提供的第三种声音信号的处理方法的流程示意图;
图15为本申请实施例提供的第四种耳机的结构示意图;
图16为本申请实施例提供的第四种声音信号的处理方法的流程示意图;
图17为本申请实施例提供的一种终端设备的控制界面示意图;
图18为本申请实施例提供的用户在处于风噪场景下佩戴耳机后,风速影响耳膜参考点的频响噪声的示意图;
图19为本申请实施例提供的风噪场景下与无风噪场景下,耳膜参考点的频响噪声的示意图;
图20为本申请实施例提供的第五种耳机的结构示意图;
图21为本申请实施例提供的第五种声音信号的处理方法的流程示意图;
图22为本申请实施例提供的第六种耳机的结构示意图。
具体实施方式
为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。例如,第一芯片和第二芯片仅仅是为了区分不同的芯片,并不对其先后顺序进行限定。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
需要说明的是,本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
本申请实施例中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
随着电子技术的不断发展,耳机设备受到了越来越多消费者的青睐。本申请实施例中的耳机设备可以是耳机,也可以是助听器或者诊听器等需要入耳的设备,本申请实施例主要是以耳机作为耳机设备为例进行说明。耳机也可能被称为耳塞、耳麦、随身听、音讯播放器、媒体播放器、头戴式受话器、听筒设备或其它某个合适的术语。
参照图1所示,图1为本申请实施例提供的一种系统架构示意图,该系统架构包括终端设备和耳机,耳机与终端设备之间可建立通信连接。
其中,该耳机可以为无线入耳式耳机。即从耳机与终端设备之间的通信方式上看,无线入耳式耳机属于无线耳机。无线耳机是可以与终端设备进行无线连接的耳机,根据无线耳机使用的电磁波频率,还可以将它们进一步区分为:红外线无线耳机、米波无线耳机(例如FM调频耳机)、分米波无线耳机(例如蓝牙(Bluetooth)耳机)等;而从耳机的佩戴方式上看,无线入耳式耳机属于入耳式耳机。
可以理解的是,本申请实施例中的耳机还可以是其它类型的耳机。示例性的,从耳机与终端设备之间的通信方式上看,本申请实施例中的耳机还可以是有线耳机。有线耳机是可以与终端设备通过导线(例如线缆)连接的耳机,根据线缆形状还可以区分为圆柱形线缆耳机、面条线耳机等。从耳机的佩戴方式上看,该耳机还可以是半入耳式耳机、包耳式耳机(也可以称作耳罩式耳机)、耳挂式耳机、颈挂式耳机等。
参照图2,图2为本申请实施例提供的用户佩戴耳机的场景示意图。其中,耳机可以包括参考麦克风21、通话麦克风22和误差麦克风23。
在用户正常佩戴该耳机的情况下,参考麦克风21和通话麦克风22通常设置在耳机远离耳道的一侧,即耳机的外侧,因此,可将参考麦克风21和通话麦克风22统称为外部麦克风。参考麦克风21和通话麦克风22用于采集外界声音信号,其中,参考麦克风21主要用于采集外界环境声音信号,通话麦克风22主要用于采集用户说话时 通过空气传播的语音信号,例如,针对通话场景下的说话声。
在用户正常佩戴该耳机的情况下,误差麦克风23通常设置在耳机靠近耳道的一侧,即耳机的内侧,其用于采集用户耳道内的耳内声音信号。因此,可将误差麦克风23称为耳内麦克风。
可以理解的是,在一些产品中,耳机中的麦克风可以包括参考麦克风21、通话麦克风22和误差麦克风23中的一种或多种。例如,耳机中的麦克风可以仅包括通话麦克风22和误差麦克风23。并且,参考麦克风21的数量可以是一个或多个,通话麦克风22的数量可以是一个或多个,误差麦克风23的数量可以是一个或多个。
通常来说,耳机与耳道并不是完全贴合的,因此,耳机与耳道之间会存在一定的缝隙。用户在佩戴耳机后,外界声音信号会通过这些缝隙进入耳道内;但是,由于耳机的耳帽与耳罩之间存在一定的密封性,其可以使得用户的耳膜处与外界声音信号产生隔离,因此,即使外界声音信号通过耳机与耳道之间的缝隙进入耳道内,进入到耳道内部的外界声音信号也会因为佩戴耳机而产生高频分量的衰减,即进入到耳道内部的外界声音信号会发生损失,也就使得用户听到的外界声音减少。例如,用户在佩戴耳机说话时,外界声音信号包括环境声音信号和用户说话时的语音信号。
并且,用户在佩戴耳机后,耳道内的声学腔体会从开放场变为压力场。因此,用户在佩戴耳机说话时,用户会感受到自己发出的语音信号中的低频分量的强度增强,出现闭塞效应,从而导致自己发出的语音听起来沉闷以及不清晰等,阻碍用户与他人交流的流畅度。
也就是说,用户在佩戴耳机说话时,会使得耳内声音信号的低频分量抬升,而耳内声音信号的高频分量衰减,其低频分量抬升的程度与高频分量衰减的程度可参照图3所示。
图3为本申请实施例提供的用户因佩戴耳机说话,导致耳内声音信号的低频抬升与高频衰减的示意图。其中,横坐标表示耳内声音信号的频率,单位为Hz,纵坐标表示耳内声音信号与外界声音信号之间的强度差异,单位为dB(分贝)。
可以看出,由于闭塞效应的存在,会使得耳内声音信号的低频分量发生抬升,抬升的强度在15dB左右;而由于耳机的阻挡,使得进入到耳道内部的外界声音信号会因为佩戴耳机而产生高频分量的衰减,衰减的强度在-15dB左右。
需要说明的是,耳机在传声过程中,骨传导能量会导致下颌骨和靠近外耳道的软组织振动,这进而引起耳道软骨壁振动,产生的能量随后转移到管内的空气体积中。当耳道被堵塞时,大部分能量被困住,导致传递到鼓膜并最终传递到耳蜗的声压水平上升,从而产生闭塞效应。
在一种相关技术中,耳机内的扬声器将壳体的内腔分为前腔体和后腔体,其中,前腔体为内腔中具有出音嘴的部分,后腔体为内腔背离出音嘴的部分。可通过在耳机中的前腔体或后腔体的壳体上设置泄露孔,通过泄露孔调整前腔体或后腔体的泄漏量的大小,使得用户在佩戴耳机的情况下,可在一定程度上产生低频分量的泄露,从而抑制闭塞效应。
但是,通过设置泄露孔的方式,泄露孔会占据耳机的一部分空间,并且这种方式也会产生一定的低频损失。例如,在采用耳机播放音乐时,会损失低频音乐的输出性 能,其改善效果不佳。
在另一种相关技术中,可通过误差麦克风采用主动降噪(active noise cancellation,ANC)的方式,来抑制闭塞效应。参照图4所示,该耳机可以是具有主动降噪的耳机,其包括外部麦克风、前馈滤波器、误差麦克风、反馈滤波器、混音处理模块和扬声器,外部麦克风可以是参考麦克风或通话麦克风。
通过外部麦克风采集外界声音信号,通过前馈滤波器补偿因耳机佩戴导致的外界声音信号的损失。即外部麦克风采集的外界声音信号经过前馈滤波器处理后,得到待补偿声音信号,并通过扬声器播放该待补偿声音信号。该待补偿声音信号结合耳机与耳道之间的缝隙泄露到耳道内的外界声音信号,就能够实现对外界声音信号的还原,即实现外界声音信号透传(hearthrough,HT)至用户的耳道内,实现与不佩戴耳机一样感受外界声音。
用户在佩戴耳机后,进入到耳道内部的外界声音信号会因为佩戴耳机而产生高频分量的衰减,如高频分量为大于或等于800Hz的高频分量,通过前馈滤波器补偿因佩戴耳机带来的800Hz以上的高频分量的损失。而进入到耳道内部的外界声音信号会因为佩戴耳机产生的低频分量的衰减较小,因此,前馈滤波器可以不对低频分量的损失进行补偿。
误差麦克风采集用户耳道内的耳内声音信号。在用户说话的场景下,耳内声音信号包括被动衰减后的环境声音信号H 1,被动衰减后的语音信号H 2,以及因颅骨震动使得耳机前嘴与耳道耦合腔体内产生的额外低频H 3,即H 3指的是闭塞效应产生的语音信号的低频抬升信号,可将闭塞效应产生的语音信号的低频抬升信号称为闭塞信号。可将误差麦克风采集的耳内声音信号经过反馈滤波器处理后,得到反相噪声信号,并通过扬声器播放该反相噪声信号,以抑制闭塞效应。
需要说明的是,前馈滤波器在得到待补偿声音信号,以及反馈滤波器在得到反相噪声信号之后,是混音处理模块对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号,并将混音音频信号传送给扬声器进行播放。
其中,被动衰减后的环境声音信号H 1,指的是因为佩戴耳机而使得进入到耳道内部的环境声音信号发生衰减后的信号,即通过佩戴耳机而对外界的环境声音信号进行被动降噪后的环境声音信号;被动衰减后的语音信号H 2,指的是因为佩戴耳机而使得进入到耳道内部的语音信号发生衰减后的信号,即通过佩戴耳机而对用户发出的信号进行被动降噪后的语音信号。
但是,耳内声音信号包括被动衰减后的环境声音信号H 1、被动衰减后的语音信号H 2,以及闭塞信号H 3,因此,反馈滤波器在对耳内声音信号进行处理时,除了对闭塞信号H 3进行减弱甚至消除,还会对被动衰减后的环境声音信号H 1和被动衰减后的语音信号H 2进行减弱,使得被动衰减后的环境声音信号H 1和被动衰减后的语音信号H 2也会受到一定程度的减弱。
虽然前馈滤波器可以对外界的环境声音信号和用户发出的语音信号进行补偿,并通过扬声器播放待补偿声音信号,以实现对外界声音信号的还原;但是,由于反馈滤波器在对耳内声音信号进行处理时,又额外减弱了一部分被动衰减后的环境声音信号H 1和一部分被动衰减后的语音信号H 2。因此,会导致耳道内最终的环境声音信号和语 音信号减弱,即无法很好地还原外界环境声音信号和用户发出的语音信号。
基于此,本申请实施例提供了一种声音信号的处理方法及耳机设备,通过在耳机内增加目标滤波器和第一音频处理单元,通过目标滤波器对外部麦克风采集的外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号,第一音频处理单元根据目标滤波器处理得到的环境声音衰减信号和语音衰减信号,将误差麦克风采集的耳内声音信号中的被动衰减后的环境声音信号和被动衰减后的语音信号去除,得到因闭塞效应带来的闭塞信号,并将闭塞信号传送给反馈滤波器,则反馈滤波器可以生成闭塞信号对应的反相噪声信号,并通过扬声器进行播放,即使得反馈滤波器可以不对耳内声音信号中的被动衰减后的环境声音信号和被动衰减后的语音信号进行减弱,从而实现在抑制闭塞效应的同时,提高对外界环境声音信号和用户发出的语音信号的还原度。
示例性的,图5为本申请实施例提供的第一种耳机的结构示意图。参照图5所示,耳机包括外部麦克风、误差麦克风、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元和扬声器。
其中,外部麦克风分别与前馈滤波器和目标滤波器连接,误差麦克风和目标滤波器均与第一音频处理单元连接,第一音频处理单元还与反馈滤波器连接,而反馈滤波器和前馈滤波器均与第二音频处理单元连接,第二音频处理单元还与扬声器连接。
外部麦克风可以为参考麦克风或通话麦克风,其用于采集外界声音信号。用户在佩戴耳机说话时,外部麦克风采集的外界声音信号包括第一外界环境声音信号和用户发出的第一语音信号。
前馈滤波器用于补偿因耳机佩戴导致的外界声音信号的损失。外部麦克风采集的外界声音信号经过前馈滤波器处理后,得到待补偿声音信号,该待补偿声音信号结合耳机与耳道之间的缝隙泄露到耳道内的外界声音信号,就能够实现对外界声音信号的还原。可将通过耳机与耳道之间的缝隙泄露到耳道内的外界声音信号称为被动衰减的外界声音信号,其包括被动衰减后的环境声音信号和被动衰减后的语音信号。
误差麦克风用于采集耳内声音信号。在用户说话的场景下,耳内声音信号包括被动衰减后的环境声音信号H 1,被动衰减后的语音信号H 2,以及因颅骨震动使得耳机前嘴与耳道耦合腔体内产生的闭塞信号H 3。可将被动衰减后的环境声音信号H 1称为第二外界环境声音信号,其指的是通过耳机与耳道之间的缝隙泄露到耳道内的环境声音信号;可将被动衰减后的语音信号H 2称为第二语音信号,其指的是通过耳机与耳道之间的缝隙泄露到耳道内的语音信号。
由于用户在佩戴耳机后,进入到耳道内部的外界声音信号会因为佩戴耳机而产生高频分量的衰减。因此,耳内声音信号中的第二外界环境声音信号的信号强度,会低于外界声音信号中的第一外界环境声音信号的信号强度;并且,耳内声音信号中的第二语音信号的信号强度,也会低于外界声音信号中的第一语音信号的信号强度。
目标滤波器用于对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号。环境声音衰减信号指的是通过目标滤波器对外界声音信号中的第一外界环境声音信号进行主动降噪后的信号;语音衰减信号指的是通过目标滤波器对外界声音信号中的第一语音信号进行主动降噪后的信号。
在一些实施例中,环境声音衰减信号与耳内声音信号中的第二外界环境声音信号 为幅值相近且相位相同的信号;语音衰减信号与耳内声音信号中的第二语音信号为幅值相近且相位相同的信号。可选的,环境声音衰减信号与第二外界环境声音信号的幅值相等且相位相同,语音衰减信号与第二语音信号的幅值相等且相位相同。
第一音频处理单元用于根据目标滤波器处理得到的环境声音衰减信号和语音衰减信号,将误差麦克风采集的耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
反馈滤波器用于对闭塞信号进行处理,得到反相噪声信号。该反相噪声信号是与闭塞信号的幅值相近且相位相反的信号。例如,在一些实施例中,反相噪声信号与闭塞信号的幅值相等且相位相反。
第二音频处理单元用于对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号。该混音音频信号包括待补偿声音信号和反相噪声信号。
扬声器用于播放该混音音频信号。
由于扬声器播放的混音音频信号包括待补偿声音信号和反相噪声信号。待补偿声音信号可以结合耳机与耳道之间的缝隙泄露到耳道内的环境声音信号和语音信号,实现对外界声音信号的还原;而该反相噪声信号可以减弱或抵消耳道内因闭塞效应带来的低频抬升信号,以抑制佩戴耳机说话时带来的闭塞效应。因此,本申请实施例的耳机可以在抑制闭塞效应的同时,提高对第一外界环境声音信号和用户发出的第一语音信号的还原度。
可以理解的是,本申请实施例中的麦克风是一种用于采集声音信号的装置,扬声器为用于播放声音信号的装置。
麦克风也可能被称为话筒、耳麦、拾音器、收音器、传音器、声音传感器、声敏传感器、音频采集装置或其它某个合适的术语,本申请实施例主要以麦克风为例进行技术方案的描述。扬声器也称“喇叭”,用于将音频电信号转换为声音信号。本申请实施例主要以扬声器为例进行技术方案的描述。
可以理解的是,图5所示的耳机仅为本申请实施例提供的一种示例。在本申请的具体实现中,耳机可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。需要说明的是,在一种可选的情况中,耳机的上述各个部件也可以耦合在一起设置。
基于图5所示的耳机的结构示意图,下面描述本申请实施例提供的声音信号的处理方法。图6为本申请实施例提供的第一种声音信号的处理方法的流程示意图,该方法可以应用于图5所示的耳机中,且该耳机处于被用户佩戴的状态,该方法具体可以包括如下步骤:
S601,外部麦克风采集外界声音信号。
在用户佩戴耳机说话时,外部麦克风采集的外界声音信号包括第一外界环境声音信号和用户发出的第一语音信号。其中,该外部麦克风可以为参考麦克风或通话麦克风,该外部麦克风为模拟信号。
S602,前馈滤波器对外界声音信号进行处理,得到待补偿声音信号。
在一些实施例中,在外部麦克风与前馈滤波器之间可设置有第一模数转换单元(未示出),第一模数转换单元的输入端与外部麦克风连接,第一模数转换单元的输出端 与前馈滤波器连接。
由于外部麦克风采集的外界声音信号为模拟信号,外部麦克风在采集到外界声音信号之后,将外界声音信号传送给第一模数转换单元,第一模数转换单元对外界声音信号进行模数转换,将模拟信号转换为数字信号,并将模数转换后的外界声音信号传送给前馈滤波器进行处理。
在前馈滤波器中预先设置有前馈滤波器参数,该前馈滤波器参数可称为FF参数。前馈滤波器基于设置的前馈滤波器参数,对模数转换后的外界声音信号进行滤波处理,待补偿声音信号。前馈滤波器在得到待补偿声音信号之后,可将待补偿声音信号传送给第二音频处理单元。
S603,目标滤波器对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号。
在一些实施例中,第一模数转换单元的输出端还可以与目标滤波器连接,第一模数转换单元对外界声音信号进行模数转换之后,还可以将模数转换后的外界声音信号传送给目标滤波器进行处理。
在目标滤波器中预先设置有目标滤波器参数,目标滤波器基于设置的目标滤波器参数,对模数转换后的外界声音信号进行滤波处理,得到环境声音衰减信号和语音衰减信号。
在一种可能的实现方式中,目标滤波器可将外界声音信号映射成被动衰减后的环境声音信号H 1和被动衰减后的语音信号H 2,被动衰减后的环境声音信号H 1和被动衰减后的语音信号H 2可一起称为被动衰减信号HE_pnc。
一种情况,该目标滤波器参数可以为一个比例系数,该比例系数为大于0且小于1的正数。目标滤波器计算外界声音信号与比例系数的乘积,得到环境声音衰减信号和语音衰减信号。
另一种情况,该目标滤波器参数可以为一个衰减参数,该衰减参数为正数。目标滤波器计算外界声音信号与衰减参数的差值,得到环境声音衰减信号和语音衰减信号。
目标滤波器在得到环境声音衰减信号和语音衰减信号之后,可以将环境声音衰减信号和语音衰减信号传送给第一音频处理单元进行处理。
S604,误差麦克风采集耳内声音信号。
在用户佩戴耳机说话时,误差麦克风采集的耳内声音信号包括:第二外界环境声音信号、第二语音信号和闭塞信号。第二外界环境声音信号为被动衰减后的环境声音信号H 1,第二语音信号为被动衰减后的语音信号H 2
S605,第一音频处理单元将耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
在一些实施例中,在误差麦克风与第一音频处理单元之间可设置有第二模数转换单元(未示出),第二模数转换单元的输入端与误差麦克风连接,第二模数转换单元的输出端与第一音频处理单元连接。
由于误差麦克风采集的耳内声音信号为模拟信号,误差麦克风在采集到耳内声音信号之后,将耳内声音信号传送给第二模数转换单元,第二模数转换单元对耳内声音信号进行模数转换,将模拟信号转换为数字信号,并将模数转换后的耳内声音信号传 送给第一音频处理单元进行处理。
因此,第一音频处理单元可接收到目标滤波器传送的环境声音衰减信号和语音衰减信号,第一音频处理单元还可接收到耳内声音信号。然后,第一音频处理单元对目标滤波器处理得到的环境声音衰减信号和语音衰减信号进行处理,得到反相衰减信号,该反相衰减信号与环境声音衰减信号和语音衰减信号混合后的信号的幅值相近且相位相反;接着,第一音频处理单元将反相衰减信号与耳内声音信号进行混音处理,即实现对耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
S606,反馈滤波器对闭塞信号进行处理,得到反相噪声信号。
第一音频处理单元在得到闭塞信号之后,将闭塞信号传送给反馈滤波器。在反馈滤波器中预先设置有反馈滤波器参数,反馈滤波器参数可称为FB参数。反馈滤波器基于设置的反馈滤波器参数,对闭塞信号进行处理,得到反相噪声信号,并将该反相噪声信号传送给第二音频处理单元。该反相噪声信号与闭塞信号的幅值相近且相位相反。
S607,第二音频处理单元对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号。
第二音频处理单元在接收到前馈滤波器传送的待补偿声音信号,以及反馈滤波器传送的反相噪声信号之后,将待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号。该混音音频信号包括待补偿声音信号和反相噪声信号。
S608,扬声器播放混音音频信号。
在一些实施例中,在第二音频处理单元与扬声器之间可设置有数模转换单元(未示出),数模转换单元的输入端与第二音频处理单元连接,数模转换单元的输出端与扬声器连接。
由于第二音频处理单元处理得到的混音音频信号为数字信号,第二音频处理单元在处理得到混音音频信号之后,将混音音频信号传送给数模转换单元,数模转换单元对混音音频信号进行数模转换,将数字信号转换为模拟信号,并将数模转换后的混音音频信号传送给扬声器。扬声器根据该数模转换后的混音音频信号进行播放,在实现对闭塞信号进行降噪的同时(即抑制闭塞效应),提高对第一外界环境声音信号和用户发出的第一语音信号的还原度,即可以无需调节前馈滤波器的前馈滤波器参数,就可以使得外界声音信号透传至用户的耳道内,实现与不佩戴耳机一样感受外界声音。
在一些实施例中,可通过预先测试,得到反馈滤波器参数、前馈滤波器参数和目标滤波器参数。
图7为本申请实施例提供的测试得到前馈滤波器的前馈滤波器参数的测试流程示意图,参照图7所示,其可以包括如下步骤:
S701,测试空场下,标准人耳的耳膜处的第一频响。
可以理解的是,空场指的是测试人员没有佩戴耳机时的场景,标准人耳可以理解为听力正常的测试人员的耳朵。频响也可称为频率响应,指的是系统对不同频率的响应程度。
S702,测试佩戴耳机后,标准人耳的耳膜处的第二频响
S703,将第一频响与第二频响的差值,作为前馈滤波器的前馈滤波器参数。
测试人员在没有佩戴耳机时,测试其耳膜处的第一频响FR1;测试人员在佩戴耳机后,测试其耳膜处的第二频响FR2。由于佩戴耳机后,由于耳机的阻挡,外界声音信号从耳机与耳道之间的缝隙进入耳道内,会发生高频分量的衰减,因此,可将第一频响FR1与第二频响FR2的差值,确定为前馈滤波器的前馈滤波器参数。
而在测试反馈滤波器的反馈滤波器参数时,测试人员的一只耳朵(如左耳)可以佩戴耳机,而另一只耳朵(如右耳)可以不佩戴耳机。测试人员以某一固定的平稳音量朗读一段文字,并不断调试反馈滤波器的滤波器参数,直至左耳和右耳听到的声音一致时,则确定该滤波器参数为反馈滤波器参数。当调节的反馈滤波器的反馈滤波器参数使得左耳和右耳听到的声音一致时,其也就能够抵消闭塞效应带来的额外低频抬升。
通常来说,在没有调试反馈滤波器的反馈滤波器参数之前,左耳与右耳听到的声音的差别越大,随着反馈滤波器的反馈滤波器参数的不断调节,使得左耳和右耳听到的声音趋于一致。
在实际测试过程中,可测试不同音量对应的反馈滤波器的反馈滤波器参数,如分别测量60dB、70dB和80dB等音量下,反馈滤波器对应的反馈滤波器参数。在测试时,可通过声级计在距离嘴部20cm处测量测试人员发出的声音的音量。
图8为本申请实施例提供的测试得到目标滤波器的目标滤波器参数的测试流程示意图,参照图8所示,其可以包括如下步骤:
S801,耳机佩戴在标准人头的情况下,播放环境声以测试外部麦克风采集的第一测试信号的第一信号强度,以及误差麦克风采集的第二测试信号的第二信号强度。
S802,将第一信号强度与第二信号强度的差值的绝对值,作为目标滤波器的目标滤波器参数。
测试人员在佩戴耳机后,外部麦克风采集的第一测试信号的第一信号强度为S1,以及误差麦克风采集的第二测试信号的第二信号强度为S2,则目标滤波器的目标滤波器参数=|S1-S2|,此时,该目标滤波器参数可以为一个衰减参数。
因此,用户后续在佩戴耳机说话时,目标滤波器可以计算外部麦克风采集的外界声音信号与该目标滤波器参数的差值,从而得到环境声音衰减信号和语音衰减信号,使得最终经过第一音频处理单元处理得到的信号仅包括闭塞信号,以防止反馈滤波器对外界声音信号进行额外衰减。
如图9所示,分别示出了测试得到的第一测试信号与第二测试信号的示意图。其横坐标表示第一测试信号与第二测试信号的频率,单位为Hz,纵坐标表示第一测试信号与第二测试信号的信号强度,单位为dB(分贝),两者在纵轴方向上的差距就可以理解为目标滤波器的目标滤波器参数。
在另一些实施例中,测试人员在佩戴耳机后,外部麦克风采集的第一测试信号的第一信号强度为S1,以及误差麦克风采集的第二测试信号的第二信号强度为S2,也可以将第二信号强度与第一信号强度的比值确定为目标滤波器的目标滤波器参数,即目标滤波器的目标滤波器参数=S2/S1,此时,该目标滤波器参数可以为一个比例系数,该比例系数为大于0且小于1的正数。
因此,用户后续在佩戴耳机说话时,目标滤波器可以计算外部麦克风采集的外界声音信号与该目标滤波器参数的乘积,从而得到环境声音衰减信号和语音衰减信号,使得最终经过第一音频处理单元处理得到的信号仅包括闭塞信号,以防止反馈滤波器对外界声音信号进行额外衰减。
示例性的,图10为本申请实施例提供的第二种耳机的结构示意图。参照图10所示,耳机包括参考麦克风、通话麦克风、误差麦克风、音频分析单元、第一前馈滤波器、第二前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元、第三音频处理单元和扬声器。
图10所示的耳机与图5所示的耳机的区别在于:图5所示的耳机中仅设置一个外部麦克风和一个前馈滤波器,而图10所示的耳机中设置有两个外部麦克风和两个前馈滤波器,这两个外部麦克风分别为参考麦克风和通话麦克风,这两个前馈滤波器分别为第一前馈滤波器和第二前馈滤波器;此外,图10所示的耳机中还增加了音频分析单元和第三音频处理单元。
其中,参考麦克风和通话麦克风均与音频分析单元连接,音频分析单元还分别与第一前馈滤波器、第二前馈滤波器和第三音频处理单元连接,第三音频单元与目标滤波器连接,误差麦克风和目标滤波器均与第一音频处理单元连接,第一音频处理单元还与反馈滤波器连接,而反馈滤波器、第一前馈滤波器和第二前馈滤波器均与第二音频处理单元连接,第二音频处理单元还与扬声器连接。
通过参考麦克风和通话麦克风共同采集外界声音信号。参考麦克风采集的第一外界声音信号包括外界环境声音信号和用户发出的语音信号,通话麦克风采集的第二外界声音信号也包括外界环境声音信号和用户发出的语音信号。但是,由于用户在正常佩戴耳机时,通话麦克风与用户的嘴部之间的距离,小于参考麦克风与用户的嘴部之间的距离,因此,第一外界声音信号和第二外界声音信号可能有所不同。例如,通话麦克风采集的第二外界声音信号中包括的语音信号,会多于参考麦克风采集的第一外界声音信号中包括的语音信号。
音频分析单元用于对参考麦克风采集的第一外界声音信号,与通话麦克风采集的第二外界声音信号进行分离,得到第一外界环境声音信号和用户发出的第一语音信号。
第一前馈滤波器可以用于补偿因耳机佩戴导致的外界环境声音信号的损失。音频分析单元在分离得到第一外界环境声音信号之后,该第一外界环境声音信号经过第一前馈滤波器进行处理,可得到待补偿环境信号。该待补偿环境信号结合耳机与耳道之间的缝隙泄露到耳道内的外界环境声音信号(即被动衰减后的环境声音信号),就能够实现对第一外界环境声音信号的还原。
第二前馈滤波器可以用于补偿因耳机佩戴导致的用户发出的语音信号的损失。音频分析单元在分离得到用户发出的第一语音信号之后,该第一语音信号经过第二前馈滤波器进行处理,可得到待补偿语音信号。该待补偿语音信号结合耳机与耳道之间的缝隙泄露到耳道内的语音信号(即被动衰减后的语音信号),就能够实现对用户发出的第一语音信号的还原。
误差麦克风用于采集耳内声音信号。在用户说话的场景下,耳内声音信号包括第二外界环境声音信号、第二语音信号和闭塞信号。
第三音频处理单元用于将音频分析单元分离得到第一外界环境声音信号和用户发出的第一语音信号进行混音处理,得到外界声音信号。该外界声音信号包括第一外界环境声音信号和用户发出的第一语音信号。
目标滤波器用于对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号。
第一音频处理单元用于根据目标滤波器处理得到的环境声音衰减信号和语音衰减信号,将误差麦克风采集的耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
反馈滤波器用于对闭塞信号进行处理,得到反相噪声信号。该反相噪声信号是与闭塞信号的幅值相近且相位相反的信号。
第二音频处理单元用于对待补偿环境信号、待补偿语音信号和反相噪声信号进行混音处理,得到混音音频信号。该混音音频信号包括待补偿语音信号、待补偿语音信号和反相噪声信号。
扬声器用于播放该混音音频信号。
由于扬声器播放的混音音频信号包括待补偿语音信号、待补偿语音信号和反相噪声信号。待补偿语音信号结合耳机与耳道之间的缝隙泄露到耳道内的环境声音信号,实现对第一外界环境声音信号的还原,待补偿语音信号结合耳机与耳道之间的缝隙泄露到耳道内的语音信号,实现对用户发出的第一语音信号的还原,从而实现对外界声音信号的还原;而该反相噪声信号可以减弱或抵消耳道内因闭塞效应带来的低频抬升信号,以抑制佩戴耳机说话时带来的闭塞效应。因此,本申请实施例的耳机可以在抑制闭塞效应的同时,提高对第一外界环境声音信号和用户发出的第一语音信号的还原度。
可以理解的是,图10所示的耳机仅为本申请实施例提供的一种示例。在本申请的具体实现中,耳机可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。需要说明的是,在一种可选的情况中,耳机的上述各个部件也可以耦合在一起设置。
基于图10所示的耳机的结构示意图,下面描述本申请实施例提供的声音信号的处理方法。图11为本申请实施例提供的第二种声音信号的处理方法的流程示意图,该方法可以应用于图10所示的耳机中,且该耳机处于被用户佩戴的状态,该方法具体可以包括如下步骤:
S1101,参考麦克风采集第一外界声音信号。
S1102,通话麦克风采集第二外界声音信号。
在耳机中设置有参考麦克风和通话麦克风,参考麦克风和通话麦克风均用于采集外界声音信号,将参考麦克风采集的外界声音信号称为第一外界声音信号,将通话麦克风采集的外界声音信号称为第二外界声音信号。
S1103,音频分析单元根据第一外界声音信号和第二外界声音信号,拆分出第一外界环境声音信号和第一语音信号。
由于第一外界声音信号和第二外界声音信号中的外界环境声音信号不同,第一外界声音信号和第二外界声音信号中的用户发出的语音信号也不同,因此,音频分析单 元可以对第一外界声音信号和第二外界声音信号进行分析,从中拆分出第一外界环境声音信号和第一语音信号。
S1104,第一前馈滤波器对第一外界环境声音信号进行处理,得到待补偿环境信号。
在一些实施例中,在音频分析单元与第一前馈滤波器之间可设置有第三模数转换单元(未示出),第三模数转换单元的输入端与音频分析单元连接,第三模数转换单元的输出端与第一前馈滤波器连接。
由于参考麦克风采集的第一外界声音信号和通话麦克风采集的第二外界声音信号均为模拟信号,因此,音频分析单元根据第一外界声音信号和第二外界声音信号,拆分出的第一外界环境声音信号也为模拟信号。
音频分析单元在拆分得到第一外界环境声音信号之后,将第一外界环境声音信号传送给第三模数转换单元,第三模数转换单元对第一外界环境声音信号进行模数转换,将模拟信号转换为数字信号,并将模数转换后的第一外界环境声音信号传送给第一前馈滤波器进行处理。
在第一前馈滤波器中预先设置有环境音滤波器参数,第一前馈滤波器基于设置的环境音滤波器参数,对模数转换后的第一外界环境声音信号进行滤波处理,得到待补偿环境信号,并将待补偿环境信号传送给第二音频处理单元。
S1105,第二前馈滤波器对第一语音信号进行处理,得到待补偿语音信号。
在一些实施例中,在音频分析单元与第二前馈滤波器之间可设置有第四模数转换单元(未示出),第四模数转换单元的输入端与音频分析单元连接,第四模数转换单元的输出端与第二前馈滤波器连接。
由于参考麦克风采集的第一外界声音信号和通话麦克风采集的第二外界声音信号均为模拟信号,因此,音频分析单元根据第一外界声音信号和第二外界声音信号,拆分出的第一语音信号也为模拟信号。
音频分析单元在拆分得到第一语音信号之后,将第一语音信号传送给第四模数转换单元,第四模数转换单元对第一语音信号进行模数转换,将模拟信号转换为数字信号,并将模数转换后的第一语音信号传送给第二前馈滤波器进行处理。
在第二前馈滤波器中预先设置有语音滤波器参数,第二前馈滤波器基于设置的语音滤波器参数,对模数转换后的第一语音信号进行滤波处理,得到待补偿语音信号,并将待补偿语音信号传送给第二音频处理单元。
S1106,第三音频处理单元对第一外界环境声音信号和第一语音信号进行混音处理,得到外界声音信号。
在一些实施例中,第三模数转换单元和第四模数转换单元的输出端还可以与第三音频处理单元连接,第三模数转换单元可以将模数转换后的第一外界环境声音信号传送给第三音频处理单元,第四模数转换单元可以将模数转换后的第一语音信号传送给第三音频处理单元。
第三音频处理单元可以将模数转换后的第一外界环境声音信号和模数转换后的第一语音信号进行混音处理,得到外界声音信号,并将外界声音信号传送给目标滤波器进行处理。该外界声音信号包括第一外界环境声音信号和用户发出的第一语音信号。
S1107,目标滤波器用于对外界声音信号进行处理,得到环境声音衰减信号和语音 衰减信号。
S1108,误差麦克风采集耳内声音信号。
S1109,第一音频处理单元将耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
S1110,反馈滤波器对闭塞信号进行处理,得到反相噪声信号。
需要说明的是,S1107至S1110的原理与上述S603至S606的原理类似,为避免重复,在此不再赘述。
S1111,第二音频处理单元对待补偿环境信号、待补偿语音信号和反相噪声信号进行混音处理,得到混音音频信号。
第二音频处理单元在接收到第一前馈滤波器传送的待补偿环境信号,第二前馈滤波器传送的待补偿语音信号,以及反馈滤波器传送的反相噪声信号之后,将待补偿环境信号、待补偿语音信号和反相噪声信号进行混音处理,得到混音音频信号。该混音音频信号包括待补偿环境信号、待补偿语音信号和反相噪声信号。
S1112,扬声器播放混音音频信号。
需要说明的是,S1112的原理与上述S608的原理类似,为避免重复,在此不再赘述。
因此,扬声器在播放混音音频信号时,可以在实现对闭塞信号进行降噪的同时(即抑制闭塞效应),提高对第一外界环境声音信号和用户发出的第一语音信号的还原度。
在一种可能的场景中,不同用户在佩戴耳机说话时的发声强度可能有所不同,同一用户多次佩戴耳机时的佩戴位置可能存在差异,以及同一用户多次佩戴耳机时的发声强度可能有所不同等情况,使得因用户佩戴耳机说话导致的耳内声音信号的低频分量的抬升程度有所不同,即因闭塞效应带来的闭塞信号的强度不同。
参照图12所示,图12为本申请实施例提供的用户佩戴说话时的语音信号的音量不同,而导致的耳内声音信号的低频抬升与高频衰减的示意图。其中,横坐标表示耳内声音信号的频率,单位为Hz,纵坐标表示耳内声音信号与外界声音信号之间的强度差异,单位为dB(分贝);且沿着箭头所示的方向,分别表示不同音量对应的低频分量的抬升强度,沿着箭头所示的方向,各条线段对应的音量依次提升。
例如,第一线段121对应的音量大于第二线段122对应的音量,第二线段122对应的音量大于第三线段123对应的音量。可以看出,第一线段121对应的低频分量的抬升强度在20dB左右,第二线段122对应的低频分量的抬升强度在15dB左右,第三线段123对应的低频分量的抬升强度在12dB左右。也就是说,第一线段121对应的低频分量的抬升强度大于第二线段122对应的低频分量的抬升强度,第二线段122对应的低频分量的抬升强度大于第三线段123对应的低频分量的抬升强度。
可以看出,由于闭塞效应的存在,会使得耳内声音信号的低频分量发生抬升;并且,当用户说话时的音量不同时,因闭塞效应带来的低频分量的抬升程度不同,且音量与低频分量的抬升程度呈正相关。即音量越强,低频分量的抬升程度越高;音量越弱,低频分量的抬升程度越低。
若反馈滤波器采用一固定的反馈滤波器参数来对闭塞信号进行处理,得到反相噪声信号,以抑制闭塞效应时,当用户发出的第一语音信号的音量所产生的闭塞信号的 强度,小于反馈滤波器参数能够实现去闭塞效果的闭塞信号的强度时,会出现过度去闭塞的情况,从而导致耳道内最终听到的语音信号的低频分量存在损失;而当用户发出的第一语音信号的音量所产生的闭塞信号的强度,大于反馈滤波器参数能够实现去闭塞效果的闭塞信号的强度时,会出现去闭塞不足的情况,从而导致耳道内最终听到的语音信号的低频分量过多。
因此,本申请实施例还可以根据用户佩戴耳机时说话的音量,自适应调整反馈滤波器的反馈滤波器参数,即调整反馈滤波器的去闭塞效果,使得用户佩戴耳机时以不同的音量说话时,可以提高去闭塞效果的一致性,从而使得耳道内最终听到的外界环境声音信号和用户发出的语音信号的透传效果得到提高。其具体实现方式可参照下面的描述。
示例性的,图13为本申请实施例提供的第三种耳机的结构示意图。参照图13所示,耳机包括外部麦克风、误差麦克风、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元、振动传感器、第一控制单元和扬声器。
图13所示的耳机与图5所示的耳机的区别在于:图13所示的耳机在图5所示的耳机的基础上,增加了振动传感器和第一控制单元。
其中,外部麦克风分别与前馈滤波器、目标滤波器和第一控制单元连接,误差麦克风分分别与第一音频处理单元和第一控制单元连接,目标滤波器与第一音频处理单元连接,第一音频处理单元还与反馈滤波器连接。振动传感器与第一控制单元连接,第一控制单元与反馈滤波器连接。而反馈滤波器和前馈滤波器均与第二音频处理单元连接,第二音频处理单元还与扬声器连接。
外部麦克风可以为参考麦克风或通话麦克风,其用于采集外界声音信号。误差麦克风用于采集耳内声音信号。振动传感器用于采集用户佩戴耳机说话时带来的振动信号。
第一控制单元用于根据振动传感器采集的振动信号、外部麦克风采集的外界声音信号以及误差麦克风采集的耳内声音信号,确定用户佩戴耳机说话时的目标音量,即耳帽与耳道耦合产生的振动强度。并且,第一控制单元可根据预先存储的音量与反馈滤波器的反馈滤波器参数之间的关系对照表,从中查找到与目标音量匹配的反馈滤波器参数,并将反馈滤波器参数传送给反馈滤波器,以使反馈滤波器根据第一控制单元传送的反馈滤波器参数,对第一音频处理单元传送的对闭塞信号进行处理,得到反相噪声信号。
需要说明的是,前馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元和扬声器的具体描述,可以参照图5所示的耳机对应的描述,为避免重复,在此不再赘述。
可以理解的是,图13所示的耳机仅为本申请实施例提供的一种示例。在本申请的具体实现中,耳机可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。需要说明的是,在一种可选的情况中,耳机的上述各个部件也可以耦合在一起设置。
基于图13所示的耳机的结构示意图,下面描述本申请实施例提供的声音信号的处理方法。图14为本申请实施例提供的第三种声音信号的处理方法的流程示意图,该方 法可以应用于图13所示的耳机中,且该耳机处于被用户佩戴的状态,该方法具体可以包括如下步骤:
S1401,外部麦克风采集外界声音信号。
S1402,前馈滤波器对外界声音信号进行处理,得到待补偿声音信号。
S1403,目标滤波器对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号。
S1404,误差麦克风采集耳内声音信号。
S1405,第一音频处理单元将耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
需要说明的是,S1401至S1405的原理与上述S601至S605的原理类似,为避免重复,在此不再赘述。
S1406,振动传感器采集振动信号。
用户在佩戴耳机说话时会引起振动,因此,通过振动传感器采集用户佩戴耳机说话时带来的振动信号,即采集用户佩戴耳机发声时的振动信号,该振动信号与用户说话时的音量相关。
S1407,第一控制单元根据振动信号、外界声音信号和耳内声音信号,确定目标音量,并根据目标音量查找反馈滤波器参数。
第一控制单元可接收振动传感器传送的振动信号,外部麦克风传送的外界声音信号以及误差麦克风传送的耳内声音信号。外界声音信号包括用户说话时的第一语音信号,因此,可根据外部麦克风采集的外界声音信号确定用户说话时的音量;而误差麦克风采集的耳内声音信号包括第二语音信号,该第二语音信号也可以在一定程度上体现用户说话时的第一语音信号,即当第一语音信号的强度越强时,第二语音信号的强度也越强,因此,也可根据误差麦克风采集的耳内声音信号确定用户说话时的音量。
在一些实施例中,当用户说话时的音量越大时,振动传感器采集的振动信号的振幅越大,因此,在第一控制单元中预先存储有振动信号的振幅与音量的关系对照表,当第一控制单元可接收振动传感器传送的振动信号之后,获取振动信号的振幅,并从振幅与音量的关系对照表中查找对应的音量,将查找到的音量称为第一音量。
并且,当用户说话时的音量越大时,外部麦克风采集的外界声音信号以及误差麦克风采集的耳内声音信号的强度也越大,因此,第一控制单元可以根据外界声音信号确定出用户说话时的第二音量,以及根据耳内声音信号确定出用户说话时的第三音量。
第一控制单元根据第一音量、第二音量和第三音量,确定出用户说话时的目标音量。该目标音量可以为第一音量、第二音量和第三音量的加权平均值,第一音量、第二音量和第三音量对应的权重可以相等,也可以不相等。
当然,在一些实施例中,也可以根据振动信号、外界声音信号和耳内声音信号中的任意一者或任意两者,来确定用户说话时的目标音量。
一种情况,可通过外部麦克风采集的外界声音信号和振动传感器采集的振动信号,确定用户说话时的目标音量。为了提高目标音量的准确度,可以采用通话麦克风作为外部麦克风。第一控制单元根据振动信号和外界声音信号,确定用户佩戴耳机说话的目标音量。这种情况下,误差麦克风可以与第一控制单元不连接。
另一种情况,可仅通过误差麦克风采集的耳内声音信号,确定用户说话时的目标音量。若用户处于风噪场景下,如用户佩戴耳机骑车或者跑步等风噪环境,外部麦克风受到风噪的影响较大,导致难以从外部麦克风采集的外界声音信号中确定出用户说话时的音量,而内部麦克风受到风噪的影响较小,因此,可通过内部麦克风采集的耳内声音信号,确定用户说话时的目标音量。这种场景下,可以无需在耳机中设置振动传感器,且外部麦克风可以与第一控制单元不连接。
再一种场景下,还可以仅通过外部麦克风采集的外界声音信号,确定用户说话时的目标音量。在用户处于正常环境下,如未处于风噪场景下或者风速小于预设风速的场景下,外部麦克风受到的干扰较小,因此,可通过外部麦克风采集的外界声音信号,确定用户说话时的目标音量。这种场景下,可以无需在耳机中设置振动传感器,且误差麦克风可以与第一控制单元不连接。
第一控制单元在确定用户说话时的目标音量之后,第一控制单元可根据预先存储的音量与反馈滤波器的反馈滤波器参数之间的关系对照表,从中查找到与目标音量匹配的反馈滤波器参数,并将反馈滤波器参数传送给反馈滤波器。
其中,在音量与反馈滤波器的反馈滤波器参数之间的关系对照表中,音量与反馈滤波器参数呈正相关。当音量越大时,反馈滤波器参数越大,当音量越小时,反馈滤波器参数越小。
由于用户说话时的音量与闭塞效应带来的低频分量的抬升程度呈正相关。因此,当确定目标音量越大时,相应的,闭塞效应带来的闭塞信号的强度越大,则可以提高反馈滤波器的反馈滤波器参数,更好地抑制闭塞效应,以改善去闭塞不足导致的耳道内最终听到的语音信号的低频分量过多的现象。而当确定目标音量越小时,相应的,闭塞效应带来的闭塞信号的强度越小,则可以降低反馈滤波器的反馈滤波器参数,更好地抑制闭塞效应,以改善过度去闭塞的现象。
S1408,反馈滤波器基于反馈滤波器参数对闭塞信号进行处理,得到反相噪声信号。
反馈滤波器在接收到第一控制单元传送的反馈滤波器参数之后,根据传送的反馈滤波器参数对闭塞信号进行处理,得到反相噪声信号。该反相噪声信号与闭塞信号的幅值相近且相位相反。
S1409,第二音频处理单元对待补偿声音信号和反相噪声信号进行混音处理,得到混音音频信号。
S1410,扬声器播放混音音频信号。
需要说明的是,S1409和S1410的原理与上述S607和S608的原理类似,为避免重复,在此不再赘述。
可以看出,图13和图14对应的声音信号的处理方式,可以适用于用户佩戴耳机时以不同的音量说话时的去闭塞场景,使得用户佩戴耳机时以不同的音量说话时,提高去闭塞效果的一致性。
这种场景下,可以无需额外调整前馈滤波器的前馈滤波器参数,也无需额外调整目标滤波器的目标滤波器参数,就可以实现对第一外界环境声音信号和用户发出的第一语音信号进行还原,即实现第一外界环境声音信号和用户发出的第一语音信号透传至用户的耳道内。
当然,在另一些实施例中,第一控制单元也可以根据外部麦克风采集的外界声音信号以及误差麦克风采集的耳内声音信号,确定外界声音信号中的低频分量的第一强度,以及耳内声音信号中的低频分量的第二强度。
若第一强度与第二强度的差值的绝对值大于强度阈值,则确定闭塞效应带来的低频分量的抬升较多,即闭塞信号的强度较大,则第一控制单元可以选择较高的反馈滤波器参数,将选取的反馈滤波器参数传送给反馈滤波器,以对闭塞信号进行调整。若第一强度与第二强度的差值的绝对值小于或等于强度阈值,则确定闭塞效应带来的低频分量的抬升较少,即闭塞信号的强度较小,则第一控制单元可以选择较低的反馈滤波器参数,将选取的反馈滤波器参数传送给反馈滤波器,以对闭塞信号进行调整。
具体的,在耳机中预先设置有强度差值与反馈滤波器参数的关系对照表,强度差值指的是第三强度与强度阈值的差值,第三强度为第一强度与第二强度的差值的绝对值。第一控制单元可计算第一强度与第二强度的差值的绝对值,得到第三强度;接着,第一控制单元计算第三强度与强度阈值的差值,得到强度差值;然后,根据计算得到的强度差值,从强度差值与反馈滤波器参数的关系对照表中,查找对应的反馈滤波器参数。
其中,在强度差值与反馈滤波器参数的关系对照表中,强度差值与反馈滤波器参数呈正相关。当强度差值越大时,反馈滤波器参数越大;当强度差值越小时,反馈滤波器参数越小。
在这种场景下,耳机中可以不设置振动传感器,第一控制单元直接根据外界声音信号和耳内声音信号,查找对应的反馈滤波器参数。
在实际使用过程中,用户也可能希望保留外界声音信号中有用的信息,并去除不希望听到的噪声信号,因此,本申请实施例也可以根据实际使用情况,在调整反馈滤波器的反馈滤波器参数的同时,也可以调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数。其具体实现方式可参照下面的描述。
示例性的,图15为本申请实施例提供的第四种耳机的结构示意图。参照图15所示,耳机包括:参考麦克风、通话麦克风、误差麦克风、音频分析单元、第一前馈滤波器、第二前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元、第三音频处理单元、振动传感器、第一控制单元和扬声器。
图15所示的耳机与图5所示的耳机的区别在于:图5所示的耳机中仅设置一个外部麦克风和一个前馈滤波器,而图15所示的耳机中设置有两个外部麦克风和两个前馈滤波器,这两个外部麦克风分别为参考麦克风和通话麦克风,这两个前馈滤波器分别为第一前馈滤波器和第二前馈滤波器;此外,图15所示的耳机中还增加了音频分析单元、第三音频处理单元、振动传感器和第一控制单元。
其中,参考麦克风和通话麦克风均与音频分析单元连接,音频分析单元还分别与第一前馈滤波器、第二前馈滤波器、第三音频处理单元和第一控制单元连接,第三音频单元与目标滤波器连接,误差麦克风分别与第一音频处理单元和第一控制单元连接,目标滤波器与第一音频处理单元连接,第一音频处理单元还与反馈滤波器连接。振动传感器与第一控制单元连接,第一控制单元分别与反馈滤波器、第一前馈滤波器和第二前馈滤波器连接。反馈滤波器、第一前馈滤波器和第二前馈滤波器均与第二音频处 理单元连接,第二音频处理单元还与扬声器连接。
关于参考麦克风、通话麦克风、音频分析单元、第一前馈滤波器、第二前馈滤波器、误差麦克风、第三音频处理单元、目标滤波器、第一音频处理单元、第二音频处理单元和扬声器的具体描述,可以参照图10所示的耳机对应的描述,为避免重复,在此不再赘述。
此外,振动传感器用于采集用户佩戴耳机说话时带来的振动信号。第一控制单元用于根据振动传感器采集的振动信号、音频分析单元拆分得到的第一外界环境声音信号和用户发出的第一语音信号,确定当前所处的场景信息,根据场景信息调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数。
可以理解的是,图15所示的耳机仅为本申请实施例提供的一种示例。在本申请的具体实现中,耳机可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。需要说明的是,在一种可选的情况中,耳机的上述各个部件也可以耦合在一起设置。
基于图15所示的耳机的结构示意图,下面描述本申请实施例提供的声音信号的处理方法。图16本申请实施例提供的第四种声音信号的处理方法的流程示意图,该方法可以应用于图10所示的耳机中,且该耳机处于被用户佩戴的状态,该方法具体可以包括如下步骤:
S1601,参考麦克风采集第一外界声音信号。
S1602,通话麦克风采集第二外界声音信号。
S1603,音频分析单元根据第一外界声音信号和第二外界声音信号,拆分出第一外界环境声音信号和第一语音信号。
需要说明的是,S1601至S1603的原理与上述S1101至S1103的原理类似,为避免重复,在此不再赘述。
S1604,第三音频处理单元对第一外界环境声音信号和第一语音信号进行混音处理,得到外界声音信号。
S1605,目标滤波器用于对外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号。
S1606,误差麦克风采集耳内声音信号。
S1607,第一音频处理单元将耳内声音信号中的第二外界环境声音信号和第二语音信号进行去除,得到闭塞信号。
需要说明的是,S1604的原理与上述S1106的原理类似,S1605至S1607的原理与上述S603至S605的原理类似,为避免重复,在此不再赘述。
S1608,振动传感器采集振动信号。
S1609,第一控制单元根据第一外界环境声音信号和第一语音信号,确定第一前馈滤波器的环境音滤波器参数。
S1610,第一前馈滤波器基于确定的环境音滤波器参数,对第一外界环境声音信号进行处理,得到待补偿环境信号。
第一控制单元可接收到音频分析单元拆分出第一外界环境声音信号和第一语音信号,并获取第一外界环境声音信号的信号强度以及第一语音信号的信号强度。当第一 外界环境声音信号的信号强度与第一语音信号的信号强度的差值小于第一设定阈值时,确定用户处于相对安静的外界环境下。
在这种场景下,第一控制单元可降低第一前馈滤波器的环境音滤波器参数,使得第一前馈滤波器在根据确定的环境音滤波器参数,对第一外界环境声音信号进行处理得到待补偿环境信号,使得耳道内最终听到的环境声音信号减小,从而降低由于电路以及麦克风硬件底噪带来的负面听感。
S1611,第一控制单元根据第一外界环境声音信号和第一语音信号,确定第二前馈滤波器的语音滤波器参数。
S1612,第二前馈滤波器基于确定的语音滤波器参数,对第一语音信号进行处理,得到待补偿语音信号。
相应的,当第一外界环境声音信号的信号强度与第一语音信号的信号强度的差值大于第二设定阈值时,确定用户处于嘈杂的外界环境下。第二设定阈值可以大于或等于第一设定阈值。
在这种场景下,第一控制单元可提高第二前馈滤波器的语音滤波器参数,使得第二前馈滤波器基于确定的语音滤波器参数,对第一语音信号进行处理得到待补偿语音信号。该待补偿语音信号结合耳机与耳道之间的缝隙泄露到耳道内的语音信号,使得耳道内最终的语音信号大于外界环境中的第一语音信号,从而使得耳道内最终听到的语音信号增大,提高用户在高噪声环境下可以听清自己发出的声音。
S1613,第一控制单元根据振动信号、外界声音信号和耳内声音信号,确定目标音量,并根据目标音量查找反馈滤波器的反馈滤波器参数。
S1614,反馈滤波器基于确定的反馈滤波器参数对闭塞信号进行处理,得到反相噪声信号。
需要说明的是,S1613和S1614的原理与上述S1407和S1408的原理类似,为避免重复,在此不再赘述。
S1615,第二音频处理单元对待补偿环境信号、待补偿语音信号和反相噪声信号进行混音处理,得到混音音频信号。
S1616,扬声器播放混音音频信号
可以看出,图15和图16对应的声音信号的处理方式,可以适用于用户佩戴耳机时以不同的音量说话时的去闭塞场景,使得用户佩戴耳机时以不同的音量说话时,提高去闭塞效果的一致性。并且,也可以适用于在处于不同的外界环境下,合理调整第一前馈滤波器的环境音滤波器参数和/或第二前馈滤波器的语音滤波器参数,以满足不同的场景需求。
以上介绍了通过外部麦克风、内部麦克风以及振动传感器中的一种或多种器件,来调整第一前馈滤波器的环境音滤波器参数、第二前馈滤波器的语音滤波器参数以及反馈滤波器的反馈滤波器参数。当然,可以采用其它方式来设置第一前馈滤波器的环境音滤波器参数、第二前馈滤波器的语音滤波器参数以及反馈滤波器的反馈滤波器参数。
在一种可能的实现方式中,参见图17所示,图17为本申请实施例提供的一种示例性的终端设备的控制界面。在一些实施例中,该控制界面可以认为是面向用户的输 入界面,该输入界面上提供多种功能的控件,以使得用户通过控制相关控件实现对耳机的控制。
图17中的(a)所示的界面为终端设备上显示的第一界面170a,在第一界面170a上显示有两个模式选择控件,其分别为自动模式控件和自定义模式控件,用户可以在第一界面170a上进行相应的操作,以采用不同的方式控制耳机中的滤波器参数的确定方式。
当用户对第一界面170a上的自定义模式控件输入第一操作时,该第一操作可以是用户对第一界面170a上的自定义模式控件的选中参数,如单击操作、双击操作、长按操作等。终端设备响应于第一操作,跳转到如图17中的(b)所示的界面。
图17中的(b)所示的界面为终端设备上显示的第二界面170b,在第二界面170b上显示有环境音滤波器参数设置选项、语音滤波器参数设置选项以及反馈滤波器参数设置选项。当用户对第二界面170b上的反馈滤波器参数设置选项输入第二操作时,终端设备响应于第一操作,跳转到如图17中的(c)所示的界面。
图17中的(c)所示的界面为终端设备上显示的第三界面170c,在第三界面170c上显示有档位轮盘,该档位轮盘包括多个档位,如档位1至档位8,每个档位对应一个反馈滤波器参数。其中,档位调节按钮171指示的档位,且终端设备中存储有每个档位对应的反馈滤波器参数,因此,终端设备根据用户采用档位调节按钮171选择的档位,查找对应的反馈滤波器参数,并将该反馈滤波器参数通过蓝牙等无线链路发送给耳机。
耳机可设置蓝牙等无线通信模块,该无线通信模块还可以与耳机中的第一控制单元连接,耳机中的无线通信模块接收终端设备发送的反馈滤波器参数,并将该反馈滤波器参数传送给第一控制单元,第一控制单元再将其传送给反馈滤波器,使得反馈滤波器基于该反馈滤波器参数对闭塞信号进行处理。
当然,在耳机中也可设置每个档位对应的反馈滤波器参数,用户采用档位调节按钮171选择的档位之后,终端设备将档位信息通过无线链路发送给耳机。耳机中的无线通信模块接收终端设备发送的档位信息,并根据该档位信息查找对应的反馈滤波器参数,并将查找到的反馈滤波器参数传送给反馈滤波器,使得反馈滤波器基于该反馈滤波器参数对闭塞信号进行处理。
需要说明的是,当用户选中第二界面170b上的环境音滤波器参数设置选项或语音滤波器参数设置选项时,终端设备上显示的界面与图17中的(c)所示的第三界面170c类似;相应的,也可采用类似的操作方式,选择环境音滤波器参数或语音滤波器参数。
而当用户对第一界面170a上的自动模式控件输入第三操作时,使得终端设备进入自动检测模式,终端设备自动检测用户所处的外界环境,如处于嘈杂的外界环境、处于相对安静的外界环境等,并根据检测得到的外界环境,确定环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者。当终端设备确定了相应的滤波器参数之后,可通过无线链路将其发送给耳机。
可以理解的是,当耳机中仅设置有一个外部麦克风以及其对应的前馈滤波器时,第二界面170b中可以仅显示前馈滤波器参数设置选项和反馈滤波器参数设置选项。
需要说明的是,上述图17所示的实施例仅用于解释本申请的方案而非限定。在实 际应用中,终端设备上的控制界面还可以包括更多或更少的控件/元素/符号/功能/文字/图案/颜色,或者控制界面上的控件/元素/符号/功能/文字/图案/颜色还可以呈现其它形式的变形。例如,各个滤波器参数对应的档位,还可以设计为供用户触摸控制的调节条的形式,本申请实施例对此不作限定。
在一种可能的场景中,当用户处于风噪场景下,如用户佩戴耳机骑车或者跑步等可能存在的风噪环境,风速会使得通过耳机传入耳道内的声音信号受到噪声的影响。用户在处于风噪场景下佩戴耳机时,可能希望依旧提高对外界环境声音的还原度以及实现对风噪的抑制。风噪指的是外界环境中有风时而产生的呼呼声,其影响用户正常使用耳机。
参照图18所示,图18为本申请实施例提供的用户在风噪场景下佩戴耳机后,风速影响耳膜参考点的频响噪声的示意图。其中,横坐标表示外界环境噪声的频率,单位为Hz;纵坐标为耳膜参考点的频响值,单位为dB。图中各条线段分别表示不同风速对应的耳膜参考点的频响噪声,且沿着箭头所示的方向,各条线段对应的风速依次提升。
可以看出,在用户佩戴耳机时,耳膜参考点的频响值会受到风速的影响,且随着风速的增加,耳膜参考点的频响值对应的带宽也会增加。
参照图19所示,图19为本申请实施例提供的风噪场景下与无风噪场景下,耳膜参考点的频响噪声的示意图。其中,第一外界环境声对应的曲线指的是:未处于风噪场景下耳膜参考点的频响值与频率对应的关系曲线,第二外界环境声对应的曲线指的是:处于风噪场景下耳膜参考点的频响值与频率对应的关系曲线。
可以看出,当用户佩戴耳机处于风噪环境下时,由于风噪信号的存在,耳机中的外部麦克风会接收到额外过量的低频噪声,类似有风时产生的呼呼声。
若用户佩戴耳机处于风噪场景下,且目标滤波器依旧按照预先测试时的目标滤波器参数对外界声音信号进行衰减处理,则会使得喇叭播放的音频信号中的低频分量,高于稳定环境下喇叭播放的音频信号中的低频分量,从而导致风噪场景下耳道内最终听到的风噪声较高。
在一些相关技术中,具有透传功能的耳机,一般在处于风噪场景下会关闭外部麦克风的功能,但是这种方式在抑制风噪的同时难以保持耳机的透传功能,效果不佳。
因此,本申请实施例还可以通过对目标滤波器的目标滤波器参数进行调整,以降低风噪场景下耳道内最终听到的风噪声。其具体实现方式可参照下面的描述。
示例性的,图20为本申请实施例提供的第五种耳机的结构示意图。参照图20所示,耳机包括参考麦克风、通话麦克风、误差麦克风、风噪分析单元、第一前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元、第二控制单元和扬声器。
图20所示的耳机与图5所示的耳机的区别在于:图5所示的耳机中仅设置一个外部麦克风,而图20所示的耳机中设置有两个外部麦克风,这两个外部麦克风分别为参考麦克风和通话麦克风;此外,图20所示的耳机中还增加了风噪分析单元和第二控制单元。
其中,参考麦克风和通话麦克风均与风噪分析单元连接,风噪分析单元还分别与第一前馈滤波器、第二控制单元和目标滤波器连接,第二控制单元还与目标滤波器连接;误差麦克风和目标滤波器均与第一音频处理单元连接,第一音频处理单元还与反馈滤波器连接;而反馈滤波器和第一前馈滤波器均与第二音频处理单元连接,第二音频处理单元还与扬声器连接。
参考麦克风采集第一外界声音信号,通话麦克风采集第二外界声音信号。风噪分析单元用于计算第一外界声音信号与第二外界声音信号的相关度,以分析外界环境风的强度。
第二控制单元用于根据风噪分析单元计算得到的外界环境风的强度,调整目标滤波器的目标滤波器参数。在外界环境风的强度较高时,降低目标滤波器的目标滤波器参数,使得目标滤波器在对外界声音信号中的第一外界环境声音信号进行处理时,对第一外界环境声音信号去除得较少,从而第一音频处理单元处理后的信号包括闭塞信号和一部分环境噪声信号;反馈滤波器在对第一音频处理单元传送的信号进行处理时,可以去除这部分环境噪声信号,从而降低风噪场景下耳道内最终听到的风噪声。
需要说明的是,由于在风噪场景下,用户一般不会进行说话,因此,图20所示的耳机中未示出第二前馈滤波器。当然,在实际产品中,耳机中也可以设置第二前馈滤波器,以及用于区分外界环境声音信号和用户发出的语音信号的音频分析单元等。
可以理解的是,图20所示的耳机仅为本申请实施例提供的一种示例。在本申请的具体实现中,耳机可具有比示出的部件更多或更少的部件,可以组合两个或更多个部件,或者可具有部件的不同配置实现。需要说明的是,在一种可选的情况中,耳机的上述各个部件也可以耦合在一起设置。
基于图20所示的耳机的结构示意图,下面描述本申请实施例提供的声音信号的处理方法。图21为本申请实施例提供的第五种声音信号的处理方法的流程示意图,该方法可以应用于图20所示的耳机中,该耳机处于被用户佩戴的状态,此时,用户处于风噪场景下,且用户未发出语音信号,该方法具体可以包括如下步骤:
S2101,参考麦克风采集第一外界声音信号。
S2102,通话麦克风采集第二外界声音信号。
S2103,风噪分析单元根据第一外界声音信号与第二外界声音信号,计算外界环境风的强度。
当用户佩戴耳机处于风噪场景下,且用户未发出语音信号时,第一外界声音信号和第二外界声音信号均仅包括外界环境声音信号。
用户在正常佩戴耳机时,由于参考麦克风和通话麦克风所处的位置不同,当用户所处的外界环境中的外界环境风的强度越大时,参考麦克风采集的第一外界声音信号和通话麦克风采集的第二外界声音信号之间的相关度越弱;而当用户所处的外界环境中的外界环境风的强度越小时,参考麦克风采集的第一外界声音信号和通话麦克风采集的第二外界声音信号之间的相关度越强。即第一外界声音信号和第二外界声音信号的相关度,与外界环境中的外界环境风的强度呈负相关。
风噪分析单元计算第一外界声音信号与第二外界声音信号的相关度,以分析外界环境风的强度,并将确定的外界环境风的强度传送给第二控制单元。
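风噪分析单元"以两路外部麦克风信号的相关度估计外界环境风的强度"的思路,可以用如下简化草图示意:此处用归一化相关度的绝对值作为相关度度量,并以 1 减去相关度作为风强度的近似。这一映射方式为示意性假设,实际实现中也可以分帧、分频带计算。

```python
import numpy as np

def wind_strength(ref_sig: np.ndarray, talk_sig: np.ndarray) -> float:
    """根据参考麦克风与通话麦克风信号的相关度估计外界环境风的强度(0~1)。
    相关度与风强度呈负相关:相关度越弱,估计的风强度越高。"""
    ref = ref_sig - np.mean(ref_sig)
    talk = talk_sig - np.mean(talk_sig)
    denom = np.linalg.norm(ref) * np.linalg.norm(talk) + 1e-12
    corr = abs(float(np.dot(ref, talk))) / denom  # 归一化相关度,约在 0~1 之间
    return float(np.clip(1.0 - corr, 0.0, 1.0))
```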
S2104,第二控制单元根据外界环境风的强度,调整目标滤波器的目标滤波器参数。
第二控制单元根据风噪分析单元计算得到的外界环境风的强度,调整目标滤波器的目标滤波器参数。在外界环境风的强度较高时,降低目标滤波器的目标滤波器参数,即外界环境风的强度与目标滤波器的目标滤波器参数呈负相关。
在一种可能的实现方式中,耳机中预先设置有环境风的强度与目标滤波器参数的关系对照表,第二控制单元在确定了外界环境风的强度之后,从该对照表中查找对应的目标滤波器参数。
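该对照表的查找过程可以用如下简化草图示意;表中的风强度区间划分与目标滤波器参数取值均为假设,仅体现"风强度越高,目标滤波器参数越小"的负相关关系。

```python
# 假设的"环境风强度区间 -> 目标滤波器参数"对照表(风强度取 0~1,参数数值仅作示意)
WIND_TO_TARGET_PARAM = [
    (0.2, 1.0),   # 基本无风:保持预先测试得到的目标滤波器参数
    (0.5, 0.7),
    (0.8, 0.4),
    (1.01, 0.2),  # 强风:明显降低目标滤波器参数
]

def lookup_target_param(strength: float) -> float:
    """根据外界环境风的强度,从对照表中查找对应的目标滤波器参数。"""
    for upper, param in WIND_TO_TARGET_PARAM:
        if strength < upper:
            return param
    return WIND_TO_TARGET_PARAM[-1][1]
```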
S2105,目标滤波器对外界声音信号进行处理,得到环境声音衰减信号。
目标滤波器在接收到第二控制单元传送的目标滤波器参数后,基于该目标滤波器参数对外界声音信号进行处理,得到环境声音衰减信号。
可以理解的是,当目标滤波器参数越小时,目标滤波器对外界声音信号处理后得到的环境声音衰减信号,相对于外部麦克风采集的外界环境声音信号去除得越少;当目标滤波器参数越大时,目标滤波器对外界声音信号处理后得到的环境声音衰减信号,相对于外部麦克风采集的外界环境声音信号去除得越多。
S2106,误差麦克风采集耳内声音信号。
S2107,第一音频处理单元根据环境声音衰减信号对耳内声音信号中的部分信号进行去除,得到闭塞信号和环境噪声信号。
由于目标滤波器对外界声音信号处理后得到的环境声音衰减信号较少,因此,第一音频处理单元根据环境声音衰减信号对耳内声音信号中的部分信号进行去除之后,剩余的信号除了包括闭塞信号之外,还包括一部分环境噪声信号。
当目标滤波器处理得到的环境声音衰减信号越少时,第一音频处理单元处理后得到的环境噪声信号越多;当目标滤波器处理得到的环境声音衰减信号越多时,第一音频处理单元处理后得到的环境噪声信号越少。
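S2105 至 S2107 中目标滤波器参数、环境声音衰减信号与第一音频处理单元输出之间的数量关系,可以用如下简化草图示意。其中将目标滤波器简化为一个标量增益,仅用于说明"参数越小、衰减信号越少、残留环境噪声越多"的关系,并非实际滤波器实现。

```python
import numpy as np

def target_filter(ext_sound: np.ndarray, target_param: float) -> np.ndarray:
    """目标滤波器的简化示意:以标量增益近似,参数越小,得到的环境声音衰减信号越少。"""
    return target_param * ext_sound

def first_audio_unit(in_ear: np.ndarray, env_atten: np.ndarray) -> np.ndarray:
    """第一音频处理单元的简化示意:用环境声音衰减信号去除耳内声音信号中的对应成分,
    剩余信号包含闭塞信号,以及在目标滤波器参数较小时残留的一部分环境噪声信号。"""
    return in_ear - env_atten
```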
S2108,反馈滤波器对闭塞信号和环境噪声信号进行处理,得到反相噪声信号。
反馈滤波器对闭塞信号和环境噪声信号处理后得到的反相噪声信号,与混合信号(闭塞信号和环境噪声信号的混合信号)的幅值相近且相位相反。
因此,后续在采用扬声器播放该反相噪声信号时,就可以去除这部分环境噪声信号,从而降低风噪场景下耳道内最终听到的风噪声。
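反馈滤波器"得到与混合信号幅值相近且相位相反的反相噪声信号"这一步,最简单的示意是对滤波后的混合信号取反。下面的草图用一阶低通滤波加取反来近似(闭塞效应主要集中在低频),滤波器结构与系数 alpha 均为示意性假设,并非本申请限定的反馈滤波器设计。

```python
import numpy as np

def anti_phase_noise(mixed: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """对闭塞信号与环境噪声信号的混合信号做一阶低通滤波后取反,
    得到幅值相近、相位相反的反相噪声信号(alpha 为假设的平滑系数)。"""
    out = np.zeros_like(mixed, dtype=float)
    acc = 0.0
    for i, x in enumerate(mixed):
        acc = alpha * acc + (1.0 - alpha) * float(x)
        out[i] = -acc
    return out
```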
S2109,第一前馈滤波器对外界声音信号进行处理,得到待补偿环境信号。
其中,该外界声音信号可以仅包括参考麦克风和通话麦克风采集的外界环境声音信号。
S2110,第二音频处理单元将待补偿环境信号和反相噪声信号进行混音处理,得到混音音频信号。
S2111,扬声器播放混音音频信号。
当用户佩戴耳机处于风噪场景下,在不降低目标滤波器的目标滤波器参数、且不改变前馈滤波器的前馈滤波器参数的情况下,前馈滤波器处理后得到的待补偿环境信号中,可能包括因风噪导致的额外的低频噪声。因此,本申请实施例在不改变前馈滤波器的前馈滤波器参数的情况下,可以通过降低目标滤波器的目标滤波器参数,降低风噪场景下耳道内最终听到的风噪声。
在一种可能的实现方式中,本申请实施例的耳机可以适用于以下两种场景:一种场景是,用户佩戴耳机说话时,可以在抑制闭塞效应的同时,提高对第一外界环境声音信号和用户发出的第一语音信号的还原度;另一种场景是,用户佩戴耳机处于风噪场景下时,降低耳道内最终听到的风噪声。
因此,耳机中的具体硬件结构可如图22所示。图22为本申请实施例提供的第六种耳机的结构示意图,参照图22所示,该耳机包括参考麦克风、通话麦克风、误差麦克风、音频分析单元、第一前馈滤波器、第二前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元、第二音频处理单元、第三音频处理单元、扬声器、风噪分析单元和第二控制单元。
图22所示的耳机结构示意图,可以理解为是将图10所示的耳机和图20所示的耳机组合在一起得到的结构,图10和图20中相同的硬件结构可共用,如目标滤波器、参考麦克风、误差麦克风等硬件结构是共用的。
关于图22所示的耳机中的各硬件结构的具体功能,可参照图20所示的耳机以及图10所示的耳机的具体描述,为避免重复,在此不再赘述。
本申请实施例是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理单元以产生一个机器,使得通过计算机或其它可编程数据处理设备的处理单元执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
以上的具体实施方式,对本申请的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上仅为本申请的具体实施方式而已,并不用于限定本申请的保护范围,凡在本申请的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本申请的保护范围之内。

Claims (24)

  1. 一种耳机设备,其特征在于,包括:外部麦克风、误差麦克风、扬声器、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元和第二音频处理单元;
    所述外部麦克风,用于采集外界声音信号;所述外界声音信号包括第一外界环境声音信号和第一语音信号;
    所述误差麦克风,用于采集耳内声音信号;所述耳内声音信号包括第二外界环境声音信号、第二语音信号和闭塞信号,所述第二外界环境声音信号的信号强度低于所述第一外界环境声音信号的信号强度,所述第二语音信号的信号强度低于所述第一语音信号的信号强度;
    所述前馈滤波器,用于对所述外界声音信号进行处理,得到待补偿声音信号;
    所述目标滤波器,用于对所述外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号;
    所述第一音频处理单元,用于根据所述环境声音衰减信号和所述语音衰减信号,对所述耳内声音信号中的所述第二外界环境声音信号和所述第二语音信号进行去除,得到所述闭塞信号;
    所述反馈滤波器,用于对所述闭塞信号进行处理,得到反相噪声信号;
    所述第二音频处理单元,用于对所述待补偿声音信号和所述反相噪声信号进行混音处理,得到混音音频信号;
    所述扬声器,用于播放所述混音音频信号。
  2. 根据权利要求1所述的耳机设备,其特征在于,所述耳机设备还包括振动传感器和第一控制单元;
    所述振动传感器,用于采集用户发声时的振动信号;
    所述第一控制单元,用于根据所述振动信号、所述外界声音信号和所述耳内声音信号中的一者或多者,确定用户发声时的目标音量;以及根据所述目标音量获取对应的反馈滤波器参数;
    所述反馈滤波器,具体用于根据所述第一控制单元确定的所述反馈滤波器参数,对所述闭塞信号进行处理,得到反相噪声信号。
  3. 根据权利要求2所述的耳机设备,其特征在于,所述第一控制单元具体用于:
    根据所述振动信号的振幅确定第一音量;
    根据所述外界声音信号的信号强度,确定第二音量;
    根据所述耳内声音信号的信号强度,确定第三音量;
    根据所述第一音量、所述第二音量和所述第三音量,确定用户发声时的目标音量。
  4. 根据权利要求3所述的耳机设备,其特征在于,所述第一控制单元,具体用于计算所述第一音量、所述第二音量和所述第三音量的加权平均值,得到所述目标音量。
  5. 根据权利要求1所述的耳机设备,其特征在于,所述耳机设备还包括第一控制单元,所述第一控制单元用于:
    获取所述外界声音信号中的低频分量的第一强度,以及所述耳内声音信号中的低频分量的第二强度;
    根据所述第一强度、所述第二强度和强度阈值,获取对应的反馈滤波器参数;
    所述反馈滤波器,具体用于根据所述第一控制单元确定的所述反馈滤波器参数,对所述闭塞信号进行处理,得到反相噪声信号。
  6. 根据权利要求5所述的耳机设备,其特征在于,所述第一控制单元具体用于:
    计算所述第一强度与所述第二强度的差值的绝对值,得到第三强度;
    计算所述第三强度与所述强度阈值的差值,得到强度差值;
    根据所述强度差值获取对应的反馈滤波器参数。
  7. 根据权利要求1所述的耳机设备,其特征在于,所述耳机设备还包括音频分析单元和第三音频处理单元,所述外部麦克风包括参考麦克风和通话麦克风,所述前馈滤波器包括第一前馈滤波器和第二前馈滤波器;
    所述参考麦克风,用于采集第一外界声音信号;
    所述通话麦克风,用于采集第二外界声音信号;
    所述音频分析单元,用于对所述第一外界声音信号和所述第二外界声音信号进行处理,得到所述第一外界环境声音信号和所述第一语音信号;
    所述第一前馈滤波器,用于对所述第一外界环境声音信号进行处理,得到待补偿环境信号;
    所述第二前馈滤波器,用于对所述第一语音信号进行处理,得到待补偿语音信号;所述待补偿声音信号包括所述待补偿环境信号和所述待补偿语音信号;
    所述第三音频处理单元,用于对所述第一外界环境声音信号和所述第一语音信号进行混音处理,得到所述外界声音信号。
  8. 根据权利要求7所述的耳机设备,其特征在于,所述耳机设备还包括第一控制单元;
    所述第一控制单元,用于获取所述第一外界环境声音信号的信号强度以及所述第一语音信号的信号强度,并根据所述第一外界环境声音信号的信号强度以及所述第一语音信号的信号强度,调整所述第一前馈滤波器的环境音滤波器参数和/或所述第二前馈滤波器的语音滤波器参数;
    所述第一前馈滤波器,具体用于根据所述第一控制单元确定的所述环境音滤波器参数,对所述第一外界环境声音信号进行处理,得到待补偿环境信号;
    所述第二前馈滤波器,具体用于根据所述第一控制单元确定的所述语音滤波器参数,对所述第一语音信号进行处理,得到待补偿语音信号。
  9. 根据权利要求8所述的耳机设备,其特征在于,所述第一控制单元,具体用于当所述第一外界环境声音信号的信号强度与所述第一语音信号的信号强度的差值小于第一设定阈值时,降低所述第一前馈滤波器的环境音滤波器参数;以及当所述第一外界环境声音信号的信号强度与所述第一语音信号的信号强度的差值大于第二设定阈值时,提高所述第二前馈滤波器的语音滤波器参数。
  10. 根据权利要求7所述的耳机设备,其特征在于,所述耳机设备还包括无线通信模块和第一控制单元;
    所述无线通信模块,用于接收终端设备发送的滤波器参数;所述滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者;
    所述第一控制单元,用于接收所述无线通信模块传送的所述滤波器参数。
  11. 根据权利要求7所述的耳机设备,其特征在于,所述耳机设备还包括无线通信模块和第一控制单元;
    所述无线通信模块,用于接收终端设备发送的档位信息;
    所述第一控制单元,用于根据所述档位信息获取对应的滤波器参数;所述滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者。
  12. 根据权利要求7所述的耳机设备,其特征在于,所述耳机设备还包括风噪分析单元和第二控制单元;
    所述风噪分析单元,用于计算所述第一外界声音信号与所述第二外界声音信号的相关度,以确定外界环境风的强度;
    所述第二控制单元,用于根据所述外界环境风的强度,确定所述目标滤波器的目标滤波器参数;
    所述目标滤波器,还用于根据所述第二控制单元确定的所述目标滤波器参数,对所述外界声音信号进行处理,得到环境声音衰减信号;所述外界声音信号包括所述第一外界声音信号和所述第二外界声音信号;
    所述第一音频处理单元,还用于根据所述环境声音衰减信号对所述耳内声音信号中的部分信号进行去除,得到所述闭塞信号和环境噪声信号;
    所述反馈滤波器,还用于对所述闭塞信号和所述环境噪声信号进行处理,得到反相噪声信号。
  13. 一种声音信号的处理方法,其特征在于,应用于耳机设备,所述耳机设备包括外部麦克风、误差麦克风、扬声器、前馈滤波器、反馈滤波器、目标滤波器、第一音频处理单元和第二音频处理单元,所述方法包括:
    所述外部麦克风采集外界声音信号;所述外界声音信号包括第一外界环境声音信号和第一语音信号;
    所述误差麦克风采集耳内声音信号;所述耳内声音信号包括第二外界环境声音信号、第二语音信号和闭塞信号,所述第二外界环境声音信号的信号强度低于所述第一外界环境声音信号的信号强度,所述第二语音信号的信号强度低于所述第一语音信号的信号强度;
    所述前馈滤波器对所述外界声音信号进行处理,得到待补偿声音信号;
    所述目标滤波器对所述外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号;
    所述第一音频处理单元根据所述环境声音衰减信号和所述语音衰减信号,对所述耳内声音信号中的所述第二外界环境声音信号和所述第二语音信号进行去除,得到所述闭塞信号;
    所述反馈滤波器对所述闭塞信号进行处理,得到反相噪声信号;
    所述第二音频处理单元对所述待补偿声音信号和所述反相噪声信号进行混音处理,得到混音音频信号;
    所述扬声器播放所述混音音频信号。
  14. 根据权利要求13所述的方法,其特征在于,所述耳机设备还包括振动传感器和第一控制单元;在所述反馈滤波器对所述闭塞信号进行处理,得到反相噪声信号之前,还包括:
    所述振动传感器采集用户发声时的振动信号;
    所述第一控制单元根据所述振动信号、所述外界声音信号和所述耳内声音信号中的一者或多者,确定用户发声时的目标音量;
    所述第一控制单元根据所述目标音量获取对应的反馈滤波器参数;
    所述反馈滤波器对所述闭塞信号进行处理,得到反相噪声信号,包括:
    所述反馈滤波器根据所述第一控制单元确定的所述反馈滤波器参数,对所述闭塞信号进行处理,得到反相噪声信号。
  15. 根据权利要求14所述的方法,其特征在于,所述第一控制单元根据所述振动信号、所述外界声音信号和所述耳内声音信号中的一者或多者,确定用户发声时的目标音量,包括:
    所述第一控制单元根据所述振动信号的振幅确定第一音量;
    所述第一控制单元根据所述外界声音信号的信号强度,确定第二音量;
    所述第一控制单元根据所述耳内声音信号的信号强度,确定第三音量;
    所述第一控制单元根据所述第一音量、所述第二音量和所述第三音量,确定用户发声时的目标音量。
  16. 根据权利要求15所述的方法,其特征在于,所述第一控制单元根据所述第一音量、所述第二音量和所述第三音量,确定用户发声时的目标音量,包括:
    所述第一控制单元计算所述第一音量、所述第二音量和所述第三音量的加权平均值,得到所述目标音量。
  17. 根据权利要求13所述的方法,其特征在于,所述耳机设备还包括第一控制单元;在所述反馈滤波器对所述闭塞信号进行处理,得到反相噪声信号之前,还包括:
    所述第一控制单元获取所述外界声音信号中的低频分量的第一强度,以及所述耳内声音信号中的低频分量的第二强度;
    所述第一控制单元根据所述第一强度、所述第二强度和强度阈值,获取对应的反馈滤波器参数;
    所述反馈滤波器对所述闭塞信号进行处理,得到反相噪声信号,包括:
    所述反馈滤波器根据所述第一控制单元确定的所述反馈滤波器参数,对所述闭塞信号进行处理,得到反相噪声信号。
  18. 根据权利要求17所述的方法,其特征在于,所述第一控制单元根据所述第一强度、所述第二强度和强度阈值,获取对应的反馈滤波器参数,包括:
    所述第一控制单元计算所述第一强度与所述第二强度的差值的绝对值,得到第三强度;
    所述第一控制单元计算所述第三强度与所述强度阈值的差值,得到强度差值;
    所述第一控制单元根据所述强度差值获取对应的反馈滤波器参数。
  19. 根据权利要求13所述的方法,其特征在于,所述耳机设备还包括音频分析单元和第三音频处理单元,所述外部麦克风包括参考麦克风和通话麦克风,所述前馈滤波器包括第一前馈滤波器和第二前馈滤波器;
    所述外部麦克风采集外界声音信号,包括:
    通过所述参考麦克风采集第一外界声音信号,以及通过所述通话麦克风采集第二外界声音信号;
    所述前馈滤波器对所述外界声音信号进行处理,得到待补偿声音信号,包括:
    所述音频分析单元对所述第一外界声音信号和所述第二外界声音信号进行处理,得到所述第一外界环境声音信号和所述第一语音信号;
    所述第一前馈滤波器对所述第一外界环境声音信号进行处理,得到待补偿环境信号;
    所述第二前馈滤波器对所述第一语音信号进行处理,得到待补偿语音信号;所述待补偿声音信号包括所述待补偿环境信号和所述待补偿语音信号;
    在所述目标滤波器对所述外界声音信号进行处理,得到环境声音衰减信号和语音衰减信号之前,还包括:
    所述第三音频处理单元对所述第一外界环境声音信号和所述第一语音信号进行混音处理,得到所述外界声音信号。
  20. 根据权利要求19所述的方法,其特征在于,所述耳机设备还包括第一控制单元;在所述第一前馈滤波器对所述第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:
    所述第一控制单元获取所述第一外界环境声音信号的信号强度以及所述第一语音信号的信号强度;
    所述第一控制单元根据所述第一外界环境声音信号的信号强度以及所述第一语音信号的信号强度,调整所述第一前馈滤波器的环境音滤波器参数和/或所述第二前馈滤波器的语音滤波器参数;
    所述第一前馈滤波器对所述第一外界环境声音信号进行处理,得到待补偿环境信号,包括:
    所述第一前馈滤波器根据所述第一控制单元确定的所述环境音滤波器参数,对所述第一外界环境声音信号进行处理,得到待补偿环境信号;
    所述第二前馈滤波器对所述第一语音信号进行处理,得到待补偿语音信号,包括:
    所述第二前馈滤波器根据所述第一控制单元确定的所述语音滤波器参数,对所述第一语音信号进行处理,得到待补偿语音信号。
  21. 根据权利要求20所述的方法,其特征在于,所述第一控制单元根据所述第一外界环境声音信号的信号强度以及所述第一语音信号的信号强度,调整所述第一前馈滤波器的环境音滤波器参数和/或所述第二前馈滤波器的语音滤波器参数,包括:
    当所述第一外界环境声音信号的信号强度与所述第一语音信号的信号强度的差值小于第一设定阈值时,所述第一控制单元降低所述第一前馈滤波器的环境音滤波器参数;
    当所述第一外界环境声音信号的信号强度与所述第一语音信号的信号强度的差值大于第二设定阈值时,所述第一控制单元提高所述第二前馈滤波器的语音滤波器参数。
  22. 根据权利要求19所述的方法,其特征在于,所述耳机设备还包括无线通信模块和第一控制单元;在所述第一前馈滤波器对所述第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:
    所述无线通信模块接收终端设备发送的滤波器参数;所述滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者;
    所述第一控制单元接收所述无线通信模块传送的所述滤波器参数。
  23. 根据权利要求19所述的方法,其特征在于,所述耳机设备还包括无线通信模块和第一控制单元;在所述第一前馈滤波器对所述第一外界环境声音信号进行处理,得到待补偿环境信号之前,还包括:
    所述无线通信模块接收终端设备发送的档位信息;
    所述第一控制单元根据所述档位信息获取对应的滤波器参数;所述滤波器参数包括环境音滤波器参数、语音滤波器参数以及反馈滤波器参数中的一者或多者。
  24. 根据权利要求19所述的方法,其特征在于,所述耳机设备还包括风噪分析单元和第二控制单元;所述方法还包括:
    所述风噪分析单元计算所述第一外界声音信号与所述第二外界声音信号的相关度,以确定外界环境风的强度;
    所述第二控制单元根据所述外界环境风的强度,确定所述目标滤波器的目标滤波器参数;
    所述目标滤波器根据所述第二控制单元确定的所述目标滤波器参数,对所述外界声音信号进行处理,得到环境声音衰减信号;所述外界声音信号包括所述第一外界声音信号和所述第二外界声音信号;
    所述第一音频处理单元根据所述环境声音衰减信号对所述耳内声音信号中的部分信号进行去除,得到所述闭塞信号和环境噪声信号;
    所述反馈滤波器对所述闭塞信号和所述环境噪声信号进行处理,得到反相噪声信号。
PCT/CN2023/071087 2022-02-28 2023-01-06 声音信号的处理方法及耳机设备 WO2023160275A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/562,609 US20240251197A1 (en) 2022-02-28 2023-01-06 Sound Signal Processing Method and Headset Device
EP23758900.7A EP4322553A4 (en) 2022-02-28 2023-01-06 PROCESSING METHOD FOR AUDIO SIGNALS AND HEADPHONE DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210193354.7A CN116709116A (zh) 2022-02-28 2022-02-28 声音信号的处理方法及耳机设备
CN202210193354.7 2022-02-28

Publications (2)

Publication Number Publication Date
WO2023160275A1 WO2023160275A1 (zh) 2023-08-31
WO2023160275A9 true WO2023160275A9 (zh) 2024-01-18

Family

ID=87764672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/071087 WO2023160275A1 (zh) 2022-02-28 2023-01-06 声音信号的处理方法及耳机设备

Country Status (4)

Country Link
US (1) US20240251197A1 (zh)
EP (1) EP4322553A4 (zh)
CN (1) CN116709116A (zh)
WO (1) WO2023160275A1 (zh)


Also Published As

Publication number Publication date
EP4322553A1 (en) 2024-02-14
WO2023160275A1 (zh) 2023-08-31
CN116709116A (zh) 2023-09-05
EP4322553A4 (en) 2024-10-09
US20240251197A1 (en) 2024-07-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23758900

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023758900

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2023758900

Country of ref document: EP

Effective date: 20231106

WWE Wipo information: entry into national phase

Ref document number: 18562609

Country of ref document: US