EP4297428A1 - TWS earphone and play method and apparatus for a TWS earphone - Google Patents

TWS earphone and play method and apparatus for a TWS earphone

Info

Publication number
EP4297428A1
Authority
EP
European Patent Office
Prior art keywords
signal
filter
speaker
frequency bands
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22794455.0A
Other languages
English (en)
French (fr)
Other versions
EP4297428A4 (de)
Inventor
Wei Xiong
Cunshou QIU
Yi YUN
Chao Xu
Qin Guo
Yan Li
Lisheng TIAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP4297428A1
Publication of EP4297428A4

Classifications

    • H04R 1/1083 Reduction of ambient noise (Earpieces; Earphones)
    • G10K 11/1785 Active noise control by electro-acoustically regenerating the original acoustic waves in anti-phase: methods, e.g. algorithms; devices
    • G10K 11/17881 General ANC system configurations using both a reference signal and an error signal, the reference signal being an acoustic signal, e.g. recorded with a microphone
    • G10K 11/17885 General ANC system configurations additionally using a desired external signal, e.g. pass-through audio such as music or speech
    • H04R 23/02 Transducers using more than one principle simultaneously
    • H04R 3/12 Circuits for distributing signals to two or more loudspeakers
    • H04R 3/14 Cross-over networks
    • G10K 2210/1081 Earphones, e.g. for telephones, ear protectors or headsets
    • H04R 1/1016 Earpieces of the intra-aural type
    • H04R 1/24 Structural combinations of separate transducers or of two parts of the same transducer and responsive respectively to two or more frequency ranges
    • H04R 2430/03 Synergistic effects of band splitting and sub-band processing
    • H04R 2460/01 Hearing devices using active noise cancellation
    • H04R 5/033 Headphones for stereophonic communication

Definitions

  • This application relates to audio processing technologies, and in particular, to a true wireless stereo (TWS) earphone, and a play method and apparatus of a TWS earphone.
  • A common requirement is to implement a stable active noise cancellation (ANC, also called active noise control) or hear through (HT) function while providing high-quality music or wideband high-definition calls.
  • Although TWS earphones can implement the ANC function, they may also cancel some of the music or call sound, affecting sound quality and call clarity.
  • This application provides a TWS earphone, and a play method and apparatus of a TWS earphone, which provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • this application provides a TWS earphone, including an audio signal processing path, a frequency divider, and at least two speakers.
  • An output end of the audio signal processing path is connected to an input end of the frequency divider, and an output end of the frequency divider is connected to the at least two speakers.
  • the audio signal processing path is configured to output a speaker drive signal after noise cancellation or hear through processing is performed on an audio source.
  • the audio source is original music or call voice, or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice.
  • the frequency divider is configured to divide the speaker drive signal into sub-audio signals of at least two frequency bands.
  • the at least two frequency bands correspond to main operating frequency bands of the at least two speakers. Adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap.
  • the at least two speakers are configured to play the corresponding sub-audio signals.
  • a frequency band of the processed speaker drive signal corresponds to a frequency band of the audio source, and may include low, middle, and high frequency bands. However, because a main operating frequency band of a single speaker may cover only some of the low, middle, and high frequency bands, the single speaker cannot provide high sound quality in all the frequency bands.
  • a parameter of the frequency divider is set to control the frequency divider to perform frequency division on the speaker drive signal in a preset manner.
  • the frequency divider may be configured to perform frequency division on the speaker drive signal based on the main operating frequency bands of the at least two speakers to obtain the sub-audio signals of the at least two frequency bands respectively corresponding to the main operating frequency bands of the at least two speakers. Then, each speaker plays a sub-audio signal in a corresponding frequency band, so that the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker.
  • a parameter of the frequency divider is set to control the frequency divider to perform frequency division on the speaker drive signal in a preset manner.
  • At least two speakers are disposed in the TWS earphone of this application, and the main operating frequency bands of the at least two speakers are not exactly the same.
  • the frequency divider may divide the speaker drive signal into sub-audio signals of at least two frequency bands. Adjacent frequency bands in the at least two frequency bands partially overlap or do not overlap. In this case, each sub-audio signal is transmitted to a speaker with a matching frequency band.
  • the matching frequency band may mean that a main operating frequency band of the speaker covers a frequency band of the sub-audio signal transmitted to the speaker. In this way, the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker, which can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
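The frequency division described above can be sketched with a textbook two-way crossover. The sketch below is illustrative only, not the patented frequency divider: it uses RBJ audio-EQ-cookbook biquad low-pass and high-pass filters, an assumed 48 kHz sample rate, the 8.5 kHz crossover point used in the embodiments, and a 200 Hz test tone standing in for the speaker drive signal.

```python
import math

def biquad_coeffs(kind, fc, fs, q=0.7071):
    """RBJ audio-EQ-cookbook coefficients for a 2nd-order low/high-pass."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    if kind == "lowpass":
        b = [(1 - cw) / 2, 1 - cw, (1 - cw) / 2]
    else:  # "highpass"
        b = [(1 + cw) / 2, -(1 + cw), (1 + cw) / 2]
    a0 = 1 + alpha
    return [x / a0 for x in b], [1.0, -2 * cw / a0, (1 - alpha) / a0]

def biquad(x, b, a):
    """Direct Form I filtering of the sequence x."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

fs, fc = 48_000, 8_500.0  # assumed sample rate and the 8.5 kHz crossover point
lp_b, lp_a = biquad_coeffs("lowpass", fc, fs)
hp_b, hp_a = biquad_coeffs("highpass", fc, fs)

# 200 Hz test tone standing in for the speaker drive signal.
drive = [math.sin(2 * math.pi * 200 * n / fs) for n in range(4_800)]
low_band = biquad(drive, lp_b, lp_a)    # would be routed to the low-band speaker
high_band = biquad(drive, hp_b, hp_a)   # would be routed to the high-band speaker
```

The 200 Hz tone passes through the low band nearly unchanged and is strongly attenuated in the high band. Note that summing complementary Butterworth bands is not exactly flat; practical crossovers often use Linkwitz-Riley alignments for that reason.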
  • the audio signal processing path includes: a secondary path SP filter, configured to prevent cancellation of sound of the audio source by the noise cancellation or hear through processing when the noise cancellation or hear through processing and the audio source are concurrent.
  • the audio signal processing path further includes a feedback FB microphone and a feedback filter.
  • the FB microphone is configured to pick up an ear canal signal.
  • the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice.
  • the SP filter is configured to input the audio source, process the audio source, and transmit, to the feedback filter, a signal obtained by superimposing an output signal on the ear canal signal.
  • the feedback filter is configured to generate a signal for the noise cancellation or hear through processing.
  • the signal for the noise cancellation or hear through processing is one superimposed signal of the generated speaker drive signal.
  • the feedback FB microphone, the feedback filter, and the SP filter are disposed in a coder-decoder CODEC.
  • the SP filter is configured to input the audio source and process the audio source.
  • An output signal is one superimposed signal of the speaker drive signal.
  • the SP filter is disposed in a digital signal processing DSP chip.
  • the SP filter (including a fixed SP filter or an adaptive SP filter) is determined by using the foregoing method, so that during music playing or calls, not only a noise cancellation or hear through function can be implemented, but also music or call voice can be prevented from being canceled by using a hear through or noise cancellation technology.
  • the TWS earphone further includes a first digital-to-analog converter DAC.
  • An input end of the first DAC is connected to the output end of the audio signal processing path, and an output end of the first DAC is connected to the input end of the frequency divider.
  • the first DAC is configured to convert the speaker drive signal from a digital format to an analog format.
  • the frequency divider is an analog frequency divider.
  • the speaker drive signal in the digital format may be first converted into a speaker drive signal in the analog format by the first DAC, and then the speaker drive signal in the analog format is divided into sub-audio signals of at least two frequency bands by the analog frequency divider.
  • the TWS earphone further includes at least two second DACs. Input ends of the at least two second DACs are all connected to the output end of the frequency divider, and output ends of the at least two second DACs are respectively connected to the at least two speakers.
  • Each second DAC is configured to convert one of the sub-audio signals of the at least two frequency bands from a digital format to an analog format.
  • the frequency divider is a digital frequency divider.
  • the main operating frequency bands of the at least two speakers are not exactly the same.
  • the at least two speakers include a moving-coil speaker and a moving-iron speaker.
  • the TWS earphone in this embodiment is provided with two speakers: a moving-coil speaker and a moving-iron speaker.
  • a main operating frequency band of the moving-coil speaker is less than 8.5 kHz, and a main operating frequency band of the moving-iron speaker is more than 8.5 kHz.
  • a frequency divider may be provided to divide a speaker drive signal in an analog format into a sub-audio signal of less than 8.5 kHz and a sub-audio signal of more than 8.5 kHz.
  • the moving-coil speaker plays the sub-audio signal of less than 8.5 kHz to maintain an optimal frequency response
  • the moving-iron speaker plays the sub-audio signal of more than 8.5 kHz to maintain an optimal frequency response. Therefore, the TWS earphone can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • the at least two speakers include a moving-coil speaker, a moving-iron speaker, a micro-electro-mechanical system MEMS speaker, and a planar vibrating diaphragm.
  • the TWS earphone in this embodiment is provided with four speakers: a moving-coil speaker, a moving-iron speaker, a MEMS speaker, and a planar vibrating diaphragm.
  • a main operating frequency band of the moving-coil speaker is less than 8.5 kHz, and a main operating frequency band of the moving-iron speaker is more than 8.5 kHz.
  • A main operating frequency band of the MEMS speaker depends on the application form.
  • the main operating frequency band for an in-ear headphone is a full frequency band.
  • for an over-ear headphone, the response below 7 kHz is weak, so the main operating frequency band is the high-frequency range of more than 7 kHz.
  • a main operating frequency band of the planar vibrating diaphragm is 10 kHz to 20 kHz.
  • a frequency divider may be provided to divide a speaker drive signal in an analog format into four sub-frequency bands.
  • the moving-coil speaker plays a sub-audio signal of less than 8.5 kHz to maintain an optimal frequency response
  • the moving-iron speaker plays a sub-audio signal of more than 8.5 kHz to maintain an optimal frequency response
  • the MEMS speaker plays a sub-audio signal of more than 7 kHz to maintain an optimal frequency response
  • the planar vibrating diaphragm plays a sub-audio signal of more than 10 kHz to maintain an optimal frequency response. Therefore, the TWS earphone can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
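The speaker-to-band mapping in this four-speaker example can be captured in a small lookup. The band edges below are the ones stated above (the MEMS entry assumes the over-ear form, and the upper limits of the moving-iron and MEMS bands are assumptions); the routing function is a hypothetical illustration, not part of the claims.

```python
# Hypothetical main operating bands (Hz) from the four-speaker example above.
SPEAKER_BANDS = {
    "moving-coil": (0, 8_500),        # low/mid frequencies
    "moving-iron": (8_500, 24_000),   # above 8.5 kHz (upper limit assumed)
    "MEMS": (7_000, 24_000),          # over-ear form: above 7 kHz (upper limit assumed)
    "planar diaphragm": (10_000, 20_000),
}

def speakers_for(freq_hz):
    """Return the speakers whose main operating band covers freq_hz."""
    return sorted(name for name, (lo, hi) in SPEAKER_BANDS.items()
                  if lo <= freq_hz < hi)
```

As the overlapping bands show, a single frequency above 10 kHz may be reproduced by several speakers at once, which is consistent with the claim that adjacent frequency bands may partially overlap.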
  • this application provides a play method of a TWS earphone.
  • the method is applied to the TWS earphone according to the first aspect.
  • the method includes: obtaining an audio source, where the audio source is original music or call voice, or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice; performing noise cancellation or hear through processing on the audio source to obtain a speaker drive signal; dividing the speaker drive signal into sub-audio signals of at least two frequency bands, where adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap; and playing the sub-audio signals of the at least two frequency bands respectively through at least two speakers.
  • the performing noise cancellation or hear through processing on the audio source to obtain a speaker drive signal includes: obtaining a fixed secondary path SP filter by using a coder-decoder CODEC; processing the audio source by using the fixed SP filter to obtain a filtered signal; and performing the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the obtaining a fixed secondary path SP filter by using a coder-decoder CODEC includes: obtaining an estimated SP filter based on a preset speaker drive signal and an ear canal signal picked up by a feedback FB microphone, where the ear canal signal includes a residual noise signal inside an ear canal; and determining the estimated SP filter as the fixed SP filter when a difference signal between a signal obtained through the estimated SP filter and the ear canal signal is within a specified range.
  • the method further includes: obtaining a parameter of a cascaded second-order filter based on a target frequency response of the estimated SP filter and a preset frequency division requirement when the difference signal between the signal obtained through the estimated SP filter and the ear canal signal is within the specified range; and obtaining an SP cascaded second-order filter based on the parameter of the cascaded second-order filter, and using the SP cascaded second-order filter as the fixed SP filter.
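A "cascaded second-order filter" of the kind mentioned above is normally realized as second-order sections (biquads) run in series. A minimal sketch follows, with invented example coefficients rather than an actual fitted SP response:

```python
def sos_filter(x, sections):
    """Apply cascaded second-order sections; each is (b0, b1, b2, a1, a2), a0 = 1."""
    for b0, b1, b2, a1, a2 in sections:
        y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
        for xn in x:
            yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            y.append(yn)
            x2, x1, y2, y1 = x1, xn, y1, yn
        x = y  # output of one section feeds the next
    return x

# Invented example sections (NOT a fitted SP response): a unity-DC-gain
# low-pass biquad followed by a flat 0.9 gain stage.
sections = [
    (0.2929, 0.5858, 0.2929, 0.0, 0.1716),
    (0.9, 0.0, 0.0, 0.0, 0.0),
]
impulse = [1.0] + [0.0] * 63
h = sos_filter(impulse, sections)   # impulse response of the cascade
```

In practice the section parameters would be fitted so that the cascade's frequency response matches the target frequency response of the estimated SP filter under the preset frequency division requirement.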
  • the performing noise cancellation or hear through processing on the audio source to obtain a speaker drive signal includes: obtaining an adaptive SP filter by using a digital signal processing DSP chip; processing the audio source by using the adaptive SP filter to obtain a filtered signal; and performing the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the obtaining an adaptive SP filter by using a digital signal processing DSP chip includes: obtaining a real-time noise signal; obtaining an estimated SP filter based on the audio source and the real-time noise signal; and determining the estimated SP filter as the adaptive SP filter when a difference signal between a signal obtained through the estimated SP filter and the real-time noise signal is within a specified range.
  • the obtaining a real-time noise signal includes: obtaining an external signal picked up by a feedforward FF microphone and an ear canal signal picked up by a feedback FB microphone, where the external signal includes an external noise signal and the music or call voice, and the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice; obtaining a voice signal picked up by a main microphone; and subtracting the external signal and the ear canal signal from the voice signal to obtain the real-time noise signal.
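The adaptive estimation loop described above (estimate the SP filter, compare its output against the reference, accept it when the difference signal is within a specified range) is commonly implemented with an LMS-family adaptive filter. The following is a hedged NLMS system-identification sketch; the "true" secondary-path coefficients are invented for illustration, and the simulated feedback-microphone signal is simply the drive signal convolved with that path:

```python
import random

def nlms_identify(x, d, taps=8, mu=0.5, eps=1e-8):
    """NLMS system identification: adapt w so that (w * x) tracks d."""
    w = [0.0] * taps
    buf = [0.0] * taps            # most-recent-first input buffer
    err = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y                # difference signal: mic signal minus model output
        norm = sum(bi * bi for bi in buf) + eps
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
        err.append(e)
    return w, err

random.seed(0)
true_sp = [0.6, -0.3, 0.15, 0.05]  # invented secondary-path impulse response
x = [random.uniform(-1.0, 1.0) for _ in range(4_000)]  # probe/drive signal
# Simulated FB-microphone signal: the drive signal through the true path.
d = [sum(h * x[n - k] for k, h in enumerate(true_sp) if n - k >= 0)
     for n in range(len(x))]
w, err = nlms_identify(x, d)
```

Once the tail of `err` stays within the specified range, the weight vector `w` is accepted as the estimated SP filter, mirroring the acceptance criterion described above.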
  • main operating frequency bands of the at least two speakers are not exactly the same.
  • the at least two speakers include a moving-coil speaker and a moving-iron speaker.
  • the at least two speakers include a moving-coil speaker, a moving-iron speaker, a micro-electro-mechanical system MEMS speaker, and a planar vibrating diaphragm.
  • this application provides a play apparatus of a TWS earphone.
  • the apparatus is used in the TWS earphone in the first aspect.
  • the apparatus includes: an obtaining module, configured to obtain an audio source, where the audio source is original music or call voice, or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice; a processing module, configured to perform noise cancellation or hear through processing on the audio source to obtain a speaker drive signal; a frequency division module, configured to divide the speaker drive signal into sub-audio signals of at least two frequency bands, where adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap; and a play module, configured to play the sub-audio signals of the at least two frequency bands respectively through at least two speakers.
  • the processing module is specifically configured to: obtain a fixed secondary path SP filter by using a coder-decoder CODEC; process the audio source by using the fixed SP filter to obtain a filtered signal; and perform the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the processing module is specifically configured to: obtain an estimated SP filter based on a preset speaker drive signal and an ear canal signal picked up by a feedback FB microphone, where the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice; and determine the estimated SP filter as the fixed SP filter when a difference signal between a signal obtained through the estimated SP filter and the ear canal signal is within a specified range.
  • the processing module is further configured to: obtain a parameter of a cascaded second-order filter based on a target frequency response of the estimated SP filter and a preset frequency division requirement when the difference signal between the signal obtained through the estimated SP filter and the ear canal signal is within the specified range; and obtain an SP cascaded second-order filter based on the parameter of the cascaded second-order filter, and use the SP cascaded second-order filter as the fixed SP filter.
  • the processing module is specifically configured to: obtain an adaptive SP filter by using a digital signal processing DSP chip; process the audio source by using the adaptive SP filter to obtain a filtered signal; and perform the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the processing module is specifically configured to: obtain a real-time noise signal; obtain an estimated SP filter based on the audio source and the real-time noise signal; and determine the estimated SP filter as the adaptive SP filter when a difference signal between a signal obtained through the estimated SP filter and the real-time noise signal is within a specified range.
  • the processing module is specifically configured to: obtain an external signal picked up by a feedforward FF microphone and an ear canal signal picked up by a feedback FB microphone, where the external signal includes an external noise signal and the music or call voice, and the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice; obtain a voice signal picked up by a main microphone; subtract the external signal and the ear canal signal from the voice signal to obtain a signal difference; and obtain the estimated SP filter based on the audio source and the signal difference.
  • main operating frequency bands of the at least two speakers are not exactly the same.
  • the at least two speakers include a moving-coil speaker and a moving-iron speaker.
  • the at least two speakers include a moving-coil speaker, a moving-iron speaker, a micro-electro-mechanical system MEMS speaker, and a planar vibrating diaphragm.
  • this application provides a computer-readable storage medium, including a computer program.
  • When the computer program is executed by a computer, the computer is enabled to perform the method according to the second aspect.
  • this application provides a computer program.
  • When the computer program is executed by a computer, the method according to the second aspect is performed.
  • “At least one (item)” refers to one or more, and “a plurality of” refers to two or more.
  • the term “and/or” is used for describing an association relationship between associated objects, and indicates that three relationships may exist. For example, “A and/or B” may indicate the following three cases: Only A exists, only B exists, and both A and B exist, where A and B may be singular or plural.
  • the character “/” usually indicates an "or” relationship between associated objects.
  • “At least one of the following” or a similar expression indicates any combination of the following items, including any combination of one or more of the items.
  • For example, “at least one of a, b, or c” may indicate a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may be singular or plural.
  • FIG. 1 is a schematic diagram of an example structure of a TWS earphone in the related technology.
  • the TWS earphone includes three types of microphones: a main microphone, a feedforward (FF) microphone, and a feedback (FB) microphone.
  • the main microphone is configured to pick up a voice in a call.
  • the FF microphone is configured to pick up an external noise signal.
  • the FB microphone is configured to pick up a residual noise signal inside an ear canal.
  • the TWS earphone further includes a moving-coil speaker.
  • the moving-coil speaker is configured to play processed music or call voice.
  • FIG. 2a is a schematic diagram of an example structure of a TWS earphone in the related technology.
  • the main microphone and the FF microphone are respectively connected to an input end of a voice enhancement filter, and an output end of the voice enhancement filter and an audio source (including music and call voice) are superimposed by a superimposer 1 to obtain a downlink signal.
  • the downlink signal is transmitted to an input end of a superimposer 2.
  • the downlink signal is further transmitted to an input end of a secondary path (secondary path, SP) filter.
  • the FF microphone is further connected to an input end of a feedforward filter, and an output end of the feedforward filter is connected to another input end of the superimposer 2.
  • the FB microphone is connected to an input end of a superimposer 3, an output end of the SP filter is connected to another input end of the superimposer 3, an output end of the superimposer 3 is connected to an input end of a feedback filter, and an output end of the feedback filter is connected to a third input end of the superimposer 2.
  • An output end of the superimposer 2 is connected to an input end of a digital-to-analog converter (DAC), and an output end of the DAC is connected to the moving-coil speaker.
  • An active noise cancellation & hear through & augmented hearing (ANC&HT&AH, AHA) joint controller is connected to the voice enhancement filter, the feedforward filter, the feedback filter, and the SP filter respectively.
  • To ensure normal and stable operation of the TWS earphone, some abnormal cases need to be handled by the TWS earphone, so an AHA joint controller is required.
  • the AHA joint controller analyzes a plurality of signals, determines a current state of ANC, HT, or augmented hearing (AH), determines whether an abnormal case such as squealing or clipping occurs, performs the corresponding processing, and implements system control by controlling parameter values of the foregoing filters.
  • the voice enhancement filter, the audio source, and the AHA joint controller are disposed in a digital signal processing (DSP) chip, and the feedforward filter, the feedback filter, the SP filter, and the DAC are disposed in a coder-decoder (CODEC).
  • FIG. 2b is a schematic diagram of an example structure of a TWS earphone in the related technology. As shown in FIG. 2b , a difference from the structure shown in FIG. 2a lies in that the voice enhancement filter is moved from the DSP chip to the CODEC.
  • the TWS earphone may implement the following functions:
  • the FF microphone picks up an external noise signal, and the feedforward filter generates a feedforward noise cancellation signal.
  • the FB microphone picks up a residual noise signal inside an ear canal, and the feedback filter generates a feedback noise cancellation signal.
  • the feedforward noise cancellation signal, the feedback noise cancellation signal, and the downlink signal are superimposed to form a final speaker drive signal.
  • the DAC performs digital-to-analog conversion to generate an analog speaker drive signal.
  • the analog speaker drive signal is played in anti-phase by the moving-coil speaker and is superimposed with the noise signal in the air, so that the two cancel each other out, to obtain an analog audio signal with the noise canceled. In this way, low-frequency noise within a specific frequency band range can be canceled, thereby implementing noise cancellation.
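The superposition and acoustic cancellation described above can be sketched as follows. This is an illustrative Python toy, not the application's implementation; the idealized anti-phase cancellation signal and all variable names are assumptions for illustration only.

```python
import numpy as np

def anc_drive_signal(ff_cancel, fb_cancel, downlink):
    """Superimposer 2: sum the feedforward noise cancellation signal,
    the feedback noise cancellation signal, and the downlink signal to
    form the digital speaker drive signal."""
    return ff_cancel + fb_cancel + downlink

# Toy case: a pure-tone ambient noise and an ideal feedforward
# cancellation signal (a perfect anti-phase copy of the noise).
n = np.arange(256)
noise = 0.5 * np.sin(2 * np.pi * 0.01 * n)   # ambient noise at the ear
ff_cancel = -noise                            # ideal feedforward output
fb_cancel = np.zeros_like(noise)              # no residual noise here
downlink = np.zeros_like(noise)               # no music playing

drive = anc_drive_signal(ff_cancel, fb_cancel, downlink)
# Acoustic superposition at the eardrum: ambient noise + played drive.
residual = noise + drive
print(float(np.max(np.abs(residual))))        # zero in this idealized case
```

In practice the feedforward and feedback filters only approximate the anti-phase signal over the low-frequency band, so the residual is reduced rather than eliminated.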
  • the FF microphone picks up an external noise signal, and the feedforward filter generates a feedforward compensation signal.
  • the FB microphone picks up a sound signal inside an ear canal, and the feedback filter generates a feedback suppression signal.
  • the feedforward compensation signal, the feedback suppression signal, and the downlink signal are superimposed to form a final speaker drive signal.
  • the DAC performs digital-to-analog conversion to generate an analog speaker drive signal.
  • the analog speaker drive signal is played by the moving-coil speaker, which can reduce or suppress low-frequency noise and compensate for high-frequency signals, so that an analog audio signal with acoustic compensation implemented can be obtained.
  • a "block” effect of active phonating and listening which means that a wearer hears his/her own voice
  • a "stethoscope” effect of vibration sound from a body part for example, sound of walking, chewing, or scratching with an earphone worn
  • high-frequency sound of passive listening for example, sound or music in an environment
  • a signal obtained through superimposition may include the feedforward compensation signal, the feedback suppression signal, and the downlink signal, or may include one or two of the feedforward compensation signal, the feedback suppression signal, and the downlink signal.
  • the signal obtained through superimposition may include the foregoing three signals.
  • the signal obtained through superimposition may include the feedforward compensation signal and the feedback suppression signal, but does not include the downlink signal.
  • a voice enhancement function When a voice enhancement function is implemented on a DSP side, signals from the FF microphone and the main microphone are transmitted to the voice enhancement filter for processing, to obtain a signal with ambient noise suppressed and voice reserved. Then, the signal is downlinked to the CODEC, and is superimposed with signals output by the feedforward filter and the feedback filter to obtain a speaker drive signal. After digital-to-analog conversion by the DAC, an analog speaker drive signal is output to be played by the moving-coil speaker. Sound played by the moving-coil speaker will be picked up by the FB microphone.
  • an estimated signal played by the moving-coil speaker may be subtracted from a signal picked up by the FB microphone, so that the signal is not discarded by the feedback filter through noise cancellation processing, thereby implementing concurrence of voice enhancement and music/call, and further implementing an ANC/HT function on this basis.
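The playback-estimate subtraction described above resembles echo cancellation and can be sketched as follows. This is an illustrative Python toy; the secondary-path impulse response and all signal names are assumptions, not the application's actual filter.

```python
import numpy as np

def remove_playback(fb_mic, drive, sp_ir):
    """Superimposer 3: subtract an estimate of the speaker playback
    (drive signal filtered by the SP filter impulse response) from the
    FB-microphone pickup, so the feedback filter does not treat the
    music/call audio as noise and cancel it."""
    playback_est = np.convolve(drive, sp_ir)[:len(fb_mic)]
    return fb_mic - playback_est

rng = np.random.default_rng(0)
drive = rng.standard_normal(512)              # downlink music / call voice
sp_ir = np.array([0.6, 0.3, 0.1])             # toy secondary-path IR
residual_noise = 0.05 * rng.standard_normal(512)
fb_mic = np.convolve(drive, sp_ir)[:512] + residual_noise

cleaned = remove_playback(fb_mic, drive, sp_ir)
print(np.allclose(cleaned, residual_noise))   # True: only noise remains
```

With a perfect SP estimate the music component is removed exactly and only the residual noise reaches the feedback filter; a mismatched estimate leaves some music in the feedback path.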
  • voice enhancement implemented in the DSP has ample computing resources, and has a stable noise cancellation effect and a stable playing effect after processing, but a long delay, which sounds reverberant.
  • voice enhancement implemented in the CODEC has a relatively fixed and simple algorithm, a short delay, and little reverberation, but a limited noise cancellation effect.
  • This application provides a speaker structure of a TWS earphone, to resolve the foregoing technical problem.
  • FIG. 3 is a schematic diagram of an example structure of a TWS earphone according to this application.
  • the TWS earphone 30 includes three types of microphones: a main microphone 31, an FF microphone 32, and an FB microphone 33.
  • the main microphone 31 is configured to pick up a voice in a call.
  • the FF microphone 32 is configured to pick up an external noise signal.
  • the FB microphone 33 is configured to pick up a residual noise signal inside an ear canal.
  • the TWS earphone 30 further includes a moving-coil speaker 34a and a moving-iron speaker 34b.
  • A main operating frequency band of the moving-coil speaker 34a is less than 8.5 kHz
  • a main operating frequency band of the moving-iron speaker 34b is more than 8.5 kHz.
  • a quantity of speakers is not specifically limited in this application, provided that there are at least two speakers.
  • the at least two speakers may have different main operating frequency bands, or some speakers may have a same main operating frequency band and the other speakers may have different main operating frequency bands from the main operating frequency bands of the foregoing some speakers.
  • the at least two speakers may include a moving-coil speaker, a moving-iron speaker, a micro-electro-mechanical system (micro-electro-mechanical system, MEMS) speaker, and a planar vibrating diaphragm.
  • MEMS micro-electro-mechanical system
  • a main operating frequency band of the moving-coil speaker is less than 8.5 kHz, and a main operating frequency band of the moving-iron speaker is more than 8.5 kHz.
  • A main operating frequency band of the MEMS speaker depends on an application form.
  • the main operating frequency band for an in-ear headphone is a full frequency band.
  • for an over-ear headphone, the response below 7 kHz is weak, and thus the main operating frequency band is the high frequencies of more than 7 kHz.
  • a main operating frequency band of the planar vibrating diaphragm is 10 kHz to 20 kHz.
  • FIG. 4 is a schematic diagram of an example structure of a TWS earphone according to this application.
  • the TWS earphone 40 includes: an audio signal processing path 41, a frequency divider 42, and at least two speakers 43.
  • An output end of the audio signal processing path 41 is connected to an input end of the frequency divider 42, and an output end of the frequency divider 42 is connected to the at least two speakers 43.
  • the audio signal processing path 41 is configured to output a speaker drive signal after noise cancellation or hear through processing is performed on an audio source.
  • the audio source is original music or call voice, or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice.
  • the frequency divider 42 is configured to divide the speaker drive signal into sub-audio signals of at least two frequency bands.
  • the at least two frequency bands correspond to main operating frequency bands of the at least two speakers 43. Adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap.
  • the at least two speakers 43 are configured to play the corresponding sub-audio signals.
  • a frequency band of the processed speaker drive signal corresponds to a frequency band of the audio source, and may include low, middle, and high frequency bands. However, because a main operating frequency band of a single speaker may cover only some of the low, middle, and high frequency bands, the single speaker cannot provide high sound quality in all the frequency bands.
  • the frequency divider 42 of this application may be configured to perform frequency division on the speaker drive signal based on the main operating frequency bands of the at least two speakers to obtain the sub-audio signals of the at least two frequency bands respectively corresponding to the main operating frequency bands of the at least two speakers. Then, each speaker plays a sub-audio signal in a corresponding frequency band, so that the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker.
  • a parameter of the frequency divider 42 may be set to control the frequency divider 42 to perform frequency division on the speaker drive signal in a preset manner.
  • the audio signal processing path 41 may be provided with the structure shown in FIG. 2a or FIG. 2b .
  • the SP filter is disposed in the coder-decoder (coder-decoder, CODEC), and a fixed SP filter is obtained by using the CODEC.
  • the CODEC may obtain an estimated SP filter based on a preset speaker drive signal and an ear canal signal picked up by the FB microphone, where the ear canal signal includes a residual noise signal inside an ear canal; and determine the estimated SP filter as the fixed SP filter when a difference signal between a signal obtained through the estimated SP filter and the ear canal signal is within a specified range.
  • FIG. 5a is an example flowchart of obtaining a fixed SP filter according to this application.
  • a speaker drive signal x[n] is preset, and is transmitted to at least two speakers after passing through a frequency divider, to emit sound.
  • the sound is transmitted to an FB microphone, and is picked up by the FB microphone.
  • the sound is converted into a digital signal y[n].
  • a transfer function of an SP filter may be assumed to be a high-order FIR filter. Iterative modeling may be performed by using the least mean square (LMS) algorithm. A transfer function S(z) of a real SP filter is unknown, but all information of the transfer function S(z) is included in the speaker drive signal x[n] and the digital signal y[n] picked up by the FB microphone. Therefore, a high-order FIR filter ⁇ (z) may be used as an estimated SP filter to simulate S(z). x[n] is inputted to ⁇ (z) to obtain ⁇ [n], and a difference is obtained between y[n] and ⁇ [n] to obtain an error signal e[n].
  • LMS least mean square
  • the algorithm converges to a satisfactory state, and in this case, the high-order FIR filter ⁇ (z) is approximately equal to the real S(z), so that the high-order FIR filter ⁇ (z) may be determined as a transfer function of a fixed SP filter.
  • iterative modeling may be performed by using the least mean square (least mean square, LMS) algorithm.
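The LMS iterative modeling of the estimated SP filter Ŝ(z) described above can be sketched as follows. This is a minimal Python illustration assuming a known toy S(z); the filter order, step size, epoch count, and signal lengths are arbitrary example choices, not values from this application.

```python
import numpy as np

def lms_identify(x, y, order=8, mu=0.01, epochs=5):
    """Estimate the secondary path S(z) with an FIR filter S_hat(z)
    via LMS: the coefficient vector w is adapted so that filtering the
    drive signal x[n] with S_hat tracks the FB-microphone signal y[n]."""
    w = np.zeros(order)
    for _ in range(epochs):
        for n in range(order - 1, len(x)):
            x_vec = x[n - order + 1:n + 1][::-1]  # x[n], x[n-1], ...
            e = y[n] - w @ x_vec                  # error e[n] = y[n] - y_hat[n]
            w += mu * e * x_vec                   # LMS coefficient update
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)                  # preset drive signal x[n]
s_true = np.array([0.5, 0.25, 0.125, 0.0625])  # toy "real" S(z), unknown in practice
y = np.convolve(x, s_true)[:len(x)]            # FB-microphone pickup y[n]

w = lms_identify(x, y)
print(np.round(w[:4], 3))
```

When the error signal e[n] stays small, the algorithm has converged and the FIR coefficients w approximate the true impulse response, matching the "satisfactory state" criterion above.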
  • FIG. 5b is an example flowchart of obtaining a fixed SP filter according to this application. As shown in FIG.
  • a parameter of a cascaded second-order filter is obtained based on a target frequency response of the estimated SP filter and a preset frequency division requirement when the difference signal between the signal obtained through the estimated SP filter and the ear canal signal is within the specified range; and an SP cascaded second-order filter is obtained based on the parameter of the cascaded second-order filter, and the SP cascaded second-order filter is used as the fixed SP filter.
  • B[k] and A[k] are respectively the complex frequency responses of the coefficients b and a of the IIR filters, and are calculated by using the same method as formula (1).
  • a group of IIR filter parameters under least mean square can be obtained by using an optimization algorithm.
  • An IIR filter obtained based on the group of IIR filter parameters can be used as a final fixed SP filter and implemented in hardware in the CODEC.
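The least-mean-square fit of a cascaded second-order (biquad) filter to the target frequency response can be illustrated as follows. This Python sketch uses SciPy; the target FIR coefficients and the example cascades are hypothetical, and a real implementation would hand the cost function to an optimizer rather than evaluate it at fixed points.

```python
import numpy as np
from scipy.signal import freqz, sosfreqz

# Hypothetical target: frequency response of the estimated FIR SP filter.
fir_target = np.array([0.5, 0.25, 0.125, 0.0625])
_, H_target = freqz(fir_target, worN=256)

def sos_cost(sos):
    """Least-mean-square cost between the response of a cascaded
    second-order (biquad) filter and the target response; minimizing
    this over the biquad coefficients yields the fixed SP filter."""
    _, H = sosfreqz(sos, worN=256)
    return float(np.mean(np.abs(H - H_target) ** 2))

# Each row is one biquad section: b0, b1, b2, a0, a1, a2.
sos_guess = np.array([[0.4, 0.2, 0.0, 1.0, 0.0, 0.0],
                      [1.0, 0.1, 0.0, 1.0, 0.0, 0.0]])
# This cascade factors the target FIR exactly, so its cost is ~0:
# (0.5 + 0.25 z^-1)(1 + 0.25 z^-2) = 0.5 + 0.25 z^-1 + 0.125 z^-2 + 0.0625 z^-3
sos_exact = np.array([[0.5, 0.25, 0.0, 1.0, 0.0, 0.0],
                      [1.0, 0.0, 0.25, 1.0, 0.0, 0.0]])
print(sos_cost(sos_guess) > sos_cost(sos_exact))
```

The second-order-section form is what gets hardwired into the CODEC, since cascaded biquads are cheap and numerically robust compared with a single high-order IIR.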
  • the audio signal processing path 41 may be provided with a structure shown in FIG. 6 .
  • the SP filter is disposed in the digital signal processing (digital signal processing, DSP) chip, and an adaptive SP filter is obtained by using the DSP chip.
  • the DSP chip may obtain a real-time noise signal, obtain an estimated SP filter based on an audio source (where the audio source is original music or call voice; or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice) and the real-time noise signal, and determine the estimated SP filter as the adaptive SP filter when a difference signal between a signal obtained through the estimated SP filter and the real-time noise signal is within a specified range.
  • an audio source where the audio source is original music or call voice; or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice
  • the obtaining a real-time noise signal may be obtaining an external signal (where the external signal includes an external noise signal and the music or call voice) picked up by the FF microphone and an ear canal signal (where the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice) picked up by the FB microphone, obtaining a voice signal picked up by the main microphone, and subtracting the external signal and the ear canal signal from the voice signal to obtain the real-time noise signal.
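The real-time noise computation described above (subtracting the external signal and the ear canal signal from the voice signal) can be sketched as follows. This is an illustrative Python toy with a simple additive signal model; all signal names and the decomposition are assumptions for illustration.

```python
import numpy as np

def real_time_noise(voice_sig, external_sig, ear_canal_sig):
    """Subtract the FF-microphone external signal and the
    FB-microphone ear canal signal from the main-microphone voice
    signal to obtain the real-time noise signal for adaptive SP
    modeling."""
    return voice_sig - external_sig - ear_canal_sig

# Toy decomposition: model the main-microphone pickup as the sum of
# the external and ear canal components plus the noise of interest.
rng = np.random.default_rng(3)
external = rng.standard_normal(256)
ear_canal = rng.standard_normal(256)
noise = 0.1 * rng.standard_normal(256)
voice = external + ear_canal + noise

est = real_time_noise(voice, external, ear_canal)
print(np.allclose(est, noise))   # True under this additive toy model
```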
  • a high-order FIR filter ⁇ (z) corresponding to a minimum error signal e[n] may also be obtained by using the process shown in FIG. 5a .
  • Because the audio source is real-time, the noise cancellation or hear through function of the TWS earphone is concurrent with the music or call voice.
  • a signal picked up by the FB microphone cannot be directly used for estimation of the SP filter, and modeling analysis needs to be performed after an impact of another signal is removed, that is, the real-time noise signal needs to be obtained. Therefore, the SP filter obtained in this way may be adaptive to real-time conditions of the audio source and the noise signal instead of being fixed.
  • the SP filter (including a fixed SP filter or an adaptive SP filter) is determined by using the foregoing method, so that during music playing or calls, not only a noise cancellation or hear through function can be implemented, but also music or call voice can be prevented from being canceled by using a hear through or noise cancellation technology.
  • the TWS earphone includes two speakers: a moving-coil speaker and a moving-iron speaker.
  • a dashed line represents a frequency response curve of the moving-coil speaker.
  • a single-point line represents a frequency response curve of the moving-iron speaker.
  • a solid line represents a frequency division line.
  • FIG. 7a is a schematic diagram of an example of signal frequency division according to this application.
  • the frequency divider 42 is configured to attenuate a non-main operating frequency band of the moving-coil speaker, reserve power of the moving-coil speaker in a main operating frequency band, attenuate a non-main operating frequency band of the moving-iron speaker, reserve power of the moving-iron speaker in a main operating frequency band, and perform frequency division at a cross frequency in respective attenuated frequency bands of the moving-coil speaker and the moving-iron speaker, to obtain sub-audio signals in two frequency bands.
  • FIG. 7b is a schematic diagram of an example of signal frequency division according to this application.
  • the frequency divider 42 is configured to attenuate a non-main operating frequency band of the moving-iron speaker, reserve power of the moving-iron speaker in a main operating frequency band, and perform frequency division at a frequency in an attenuated frequency band of the moving-iron speaker, to obtain sub-audio signals in two frequency bands.
  • FIG. 7c is a schematic diagram of an example of signal frequency division according to this application.
  • the frequency divider 42 is configured to perform two-segment attenuation on a non-main operating frequency band of the moving-coil speaker, reserve power of the moving-coil speaker in a main operating frequency band, perform two-segment attenuation on a non-main operating frequency band of the moving-iron speaker, reserve power of the moving-iron speaker in a main operating frequency band, perform frequency division at a turning frequency of a first attenuated frequency band and a second attenuated frequency band of the moving-iron speaker, and perform frequency division at a turning frequency of a first attenuated frequency band and a second attenuated frequency band of the moving-coil speaker, to obtain sub-audio signals in three frequency bands.
  • the TWS earphone includes three speakers: a moving-coil speaker, a moving-iron speaker, and a MEMS speaker.
  • a dashed line represents a frequency response curve of the moving-coil speaker.
  • a single-point line represents a frequency response curve of the moving-iron speaker.
  • a double-point line represents a frequency response curve of the MEMS speaker.
  • a solid line represents a frequency division line.
  • FIG. 7d is a schematic diagram of an example of signal frequency division according to this application.
  • the frequency divider 42 is configured to attenuate a non-main operating frequency band of the MEMS speaker, reserve power of the MEMS speaker in a main operating frequency band, and perform frequency division at a cross frequency in respective attenuated frequency bands of the moving-iron speaker and the MEMS speaker, to obtain sub-audio signals in four frequency bands.
  • FIG. 7a to FIG. 7d are examples in which the frequency divider 42 performs frequency division on a speaker drive signal.
  • a specific frequency division manner of the frequency divider 42 is not limited in this application.
  • At least two speakers are disposed in the TWS earphone of this application, and main operating frequency bands of the at least two speakers are not exactly the same.
  • the frequency divider may divide the speaker drive signal into sub-audio signals of at least two frequency bands. Adjacent frequency bands in the at least two frequency bands partially overlap or do not overlap. In this case, each sub-audio signal is transmitted to a speaker with a matching frequency band.
  • the matching frequency band may mean that a main operating frequency band of the speaker covers a frequency band of the sub-audio signal transmitted to the speaker. In this way, the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker, which can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
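The frequency division into speaker-matched bands can be sketched as follows. This is an illustrative Python crossover using SciPy Butterworth filters; the 48 kHz sample rate, the filter order, and the 8.5 kHz crossover frequency are example choices based on the speaker bands mentioned above, not a prescribed design.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000   # assumed sample rate (Hz)
fc = 8_500    # crossover between the moving-coil and moving-iron bands

# Complementary low-pass / high-pass sections around the crossover.
sos_lp = butter(4, fc, btype="lowpass", fs=fs, output="sos")
sos_hp = butter(4, fc, btype="highpass", fs=fs, output="sos")

def frequency_divide(drive):
    """Split the speaker drive signal into a sub-audio signal for the
    moving-coil speaker (below 8.5 kHz) and one for the moving-iron
    speaker (above 8.5 kHz)."""
    return sosfilt(sos_lp, drive), sosfilt(sos_hp, drive)

rng = np.random.default_rng(2)
drive = rng.standard_normal(4800)             # white-noise test drive
low_band, high_band = frequency_divide(drive)
print(low_band.shape, high_band.shape)
```

Because the filter skirts roll off gradually, the two bands partially overlap near 8.5 kHz, matching the case above in which adjacent frequency bands partially overlap.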
  • FIG. 8a is a schematic diagram of an example structure of a TWS earphone according to this application.
  • the TWS earphone 40 further includes: a first DAC 44.
  • An input end of the first DAC 44 is connected to the output end of the audio signal processing path 41, and an output end of the first DAC 44 is connected to the input end of the frequency divider 42.
  • the first DAC 44 is configured to convert the speaker drive signal from a digital format to an analog format.
  • the frequency divider 42 is an analog frequency divider.
  • the speaker drive signal in the digital format may be first converted into a speaker drive signal in the analog format by the first DAC, and then the speaker drive signal in the analog format is divided into sub-audio signals of at least two frequency bands by the analog frequency divider.
  • FIG. 8b is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 8b , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 8a .
  • a main microphone 601 and an FF microphone 602 are respectively connected to an input end of a voice enhancement filter 603, and an output end of the voice enhancement filter 603 and an audio source 604 (including music and call voice) are superimposed by a superimposer 1 to obtain a downlink signal.
  • the downlink signal is transmitted to an input end of a superimposer 2.
  • the downlink signal is further transmitted to an input end of an SP filter 605.
  • the FF microphone 602 is further connected to an input end of a feedforward filter 606, and an output end of the feedforward filter 606 is connected to another input end of the superimposer 2.
  • An FB microphone 607 is connected to an input end of a superimposer 3, an output end of the SP filter 605 is connected to another input end of the superimposer 3, an output end of the superimposer 3 is connected to an input end of a feedback filter 608, and an output end of the feedback filter 608 is connected to a third input end of the superimposer 2.
  • An output end of the superimposer 2 is connected to an input end of a digital-to-analog converter (DAC) 609, an output end of the DAC 609 is connected to an input end of an analog frequency divider 610, and an output end of the analog frequency divider 610 is connected to a moving-coil speaker 611a and a moving-iron speaker 611b.
  • An AHA joint controller 612 is connected to the voice enhancement filter 603, the feedforward filter 606, the feedback filter 608, and the SP filter 605 respectively.
  • the TWS earphone 60 in this embodiment is provided with two speakers: a moving-coil speaker 611a and a moving-iron speaker 611b.
  • a main operating frequency band of the moving-coil speaker 611a is less than 8.5 kHz
  • a main operating frequency band of the moving-iron speaker 611b is more than 8.5 kHz.
  • a frequency divider 42 may be provided to divide a speaker drive signal in an analog format into a sub-audio signal of less than 8.5 kHz and a sub-audio signal of more than 8.5 kHz.
  • the moving-coil speaker 611a plays the sub-audio signal of less than 8.5 kHz to maintain an optimal frequency response
  • the moving-iron speaker 611b plays the sub-audio signal of more than 8.5 kHz to maintain an optimal frequency response. Therefore, the TWS earphone 60 can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • FIG. 8c is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 8c , the structure of this embodiment is another more detailed implementation of the structure shown in FIG. 8a .
  • a difference from the structure shown in FIG. 8b lies in that the output end of the analog frequency divider 610 is connected to the moving-coil speaker 611a, the moving-iron speaker 611b, a MEMS speaker 611c, and a planar vibrating diaphragm 611d.
  • the TWS earphone 60 in this embodiment is provided with four speakers: a moving-coil speaker 611a, a moving-iron speaker 611b, a MEMS speaker 611c, and a planar vibrating diaphragm 611d.
  • a main operating frequency band of the moving-coil speaker 611a is less than 8.5 kHz, and a main operating frequency band of the moving-iron speaker 611b is more than 8.5 kHz.
  • A main operating frequency band of the MEMS speaker 611c depends on an application form.
  • the main operating frequency band for an in-ear headphone is a full frequency band.
  • for an over-ear headphone, the response below 7 kHz is weak, and thus the main operating frequency band is the high frequencies of more than 7 kHz.
  • a main operating frequency band of the planar vibrating diaphragm 611d is 10 kHz to 20 kHz.
  • a frequency divider 42 may be provided to divide a speaker drive signal in an analog format into four sub-frequency bands.
  • the moving-coil speaker 611a plays a sub-audio signal of less than 8.5 kHz to maintain an optimal frequency response
  • the moving-iron speaker 611b plays a sub-audio signal of more than 8.5 kHz to maintain an optimal frequency response
  • the MEMS speaker 611c plays a sub-audio signal of more than 7 kHz to maintain an optimal frequency response
  • the planar vibrating diaphragm 611d plays a sub-audio signal of more than 10 kHz to maintain an optimal frequency response. Therefore, the TWS earphone 60 can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • the voice enhancement filter 603, the audio source 604, and the AHA joint controller 612 are disposed in a digital signal processing (digital signal processing, DSP) chip, and the feedforward filter 606, the feedback filter 608, the SP filter 605, and the DAC 609 are disposed in a coder-decoder (coder-decoder, CODEC).
  • DSP digital signal processing
  • CODEC coder-decoder
  • FIG. 8d is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 8d , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 8a .
  • a difference from the structure shown in FIG. 8b lies in that the voice enhancement filter 603 is moved from the DSP chip to the CODEC.
  • FIG. 8e is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 8e , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 8a .
  • a difference from the structure shown in FIG. 8c lies in that the voice enhancement filter 603 is moved from the DSP chip to the CODEC.
  • FIG. 9a is a schematic diagram of an example structure of a TWS earphone according to this application.
  • the TWS earphone 40 further includes: at least two second DACs 45. Input ends of the at least two second DACs 45 are all connected to the output end of the frequency divider 42, and output ends of the at least two second DACs 45 are respectively connected to the at least two speakers 43.
  • Each second DAC 45 is configured to convert one of the sub-audio signals of the at least two frequency bands from a digital format to an analog format.
  • the frequency divider 42 is a digital frequency divider.
  • the speaker drive signal in the digital format may be first divided into sub-audio signals of at least two frequency bands by the digital frequency divider, and then the sub-audio signals in the digital format that are transmitted to the at least two second DACs are respectively converted into sub-audio signals in the analog format by the at least two second DACs.
  • FIG. 9b is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 9b , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 9a .
  • a main microphone 701 and an FF microphone 702 are respectively connected to an input end of a voice enhancement filter 703, and an output end of the voice enhancement filter 703 and an audio source 704 (including music and call voice) are superimposed by a superimposer 1 to obtain a downlink signal.
  • the downlink signal is transmitted to an input end of a superimposer 2.
  • the downlink signal is further transmitted to an input end of an SP filter 705.
  • the FF microphone 702 is further connected to an input end of a feedforward filter 706, and an output end of the feedforward filter 706 is connected to another input end of the superimposer 2.
  • An FB microphone 707 is connected to an input end of a superimposer 3, an output end of the SP filter 705 is connected to another input end of the superimposer 3, an output end of the superimposer 3 is connected to an input end of a feedback filter 708, and an output end of the feedback filter 708 is connected to a third input end of the superimposer 2.
  • An output end of the superimposer 2 is connected to an input end of a digital frequency divider 709, and an output end of the digital frequency divider 709 is connected to input ends of two DACs 710a and 710b.
  • An output end of the DAC 710a is connected to a moving-coil speaker 711a, and an output end of the DAC 710b is connected to a moving-iron speaker 711b.
  • An AHA joint controller 712 is connected to the voice enhancement filter 703, the feedforward filter 706, the feedback filter 708, and the SP filter 705 respectively.
  • FIG. 9c is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 9c , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 9a .
  • the TWS earphone 70 in this embodiment is provided with four speakers: a moving-coil speaker 711a, a moving-iron speaker 711b, a MEMS speaker 711c, and a planar vibrating diaphragm 711d.
  • a main operating frequency band of the moving-coil speaker 711a is less than 8.5 kHz, and a main operating frequency band of the moving-iron speaker 711b is more than 8.5 kHz.
  • A main operating frequency band of the MEMS speaker 711c depends on an application form.
  • the main operating frequency band for an in-ear headphone is a full frequency band.
  • for an over-ear headphone, the response below 7 kHz is weak, and thus the main operating frequency band is the high frequencies of more than 7 kHz.
  • a main operating frequency band of the planar vibrating diaphragm 711d is 10 kHz to 20 kHz.
  • a frequency divider 42 may be provided to divide a speaker drive signal in a digital format into four sub-frequency bands.
  • the moving-coil speaker 711a plays a sub-audio signal of less than 8.5 kHz to maintain an optimal frequency response
  • the moving-iron speaker 711b plays a sub-audio signal of more than 8.5 kHz to maintain an optimal frequency response
  • the MEMS speaker 711c plays a sub-audio signal of more than 7 kHz to maintain an optimal frequency response
  • the planar vibrating diaphragm 711d plays a sub-audio signal of more than 10 kHz to maintain an optimal frequency response. Therefore, the TWS earphone 70 can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • the voice enhancement filter 703, the audio source 704, and the AHA joint controller 712 are disposed in a DSP chip, and the feedforward filter 706, the feedback filter 708, the SP filter 705, the digital frequency divider 709, and the DACs 710a and 710b are disposed in a CODEC.
  • FIG. 9d is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 9d , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 9a .
  • a difference from the structure shown in FIG. 9b lies in that the voice enhancement filter 703 is moved from the DSP chip to the CODEC.
  • FIG. 9e is a schematic diagram of an example structure of a TWS earphone according to this application. As shown in FIG. 9e , the structure of this embodiment is a more detailed implementation of the structure shown in FIG. 9a .
  • a difference from the structure shown in FIG. 9c lies in that the voice enhancement filter 703 is moved from the DSP chip to the CODEC.
  • FIG. 10 is an example flowchart of a play method of a TWS earphone according to this application. As shown in FIG. 10, the method in this embodiment may be applied to the TWS earphone in the foregoing embodiments. The method may include the following steps:
  • Step 1001: Obtain an audio source.
  • the audio source is original music or call voice.
  • the audio source may be music, video sound, or the like that a user is listening to by using earphones, or may be call voice when a user is making a call by using earphones.
  • the audio source may come from a player of an electronic device.
  • the audio source includes a voice signal obtained through voice enhancement processing and original music or call voice.
  • the audio source may be further obtained by superimposing an external voice signal obtained through the voice enhancement processing.
  • the external voice signal obtained through the voice enhancement processing may be obtained by using the voice enhancement filter in the structure shown in FIG. 2a or FIG. 2b , and details are not described herein again.
  • Step 1002: Perform noise cancellation or hear through processing on the audio source to obtain a speaker drive signal.
  • a fixed secondary path SP filter may be obtained by using the CODEC, the audio source is processed by using the fixed SP filter to obtain a filtered signal, and the noise cancellation or hear through processing is performed on the filtered signal to obtain the speaker drive signal.
  • the CODEC may obtain an estimated SP filter based on a preset speaker drive signal and an ear canal signal picked up by a feedback FB microphone, where the ear canal signal includes a residual noise signal inside an ear canal; and determine the estimated SP filter as the fixed SP filter when a difference signal between a signal obtained through the estimated SP filter and the ear canal signal is within a specified range.
  • a parameter of a cascaded second-order filter is obtained based on a target frequency response of the estimated SP filter and a preset frequency division requirement when the difference signal between the signal obtained through the estimated SP filter and the ear canal signal is within the specified range; and an SP cascaded second-order filter is obtained based on the parameter of the cascaded second-order filter, and the SP cascaded second-order filter is used as the fixed SP filter.
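The cascaded second-order filter structure described above can be illustrated with the following sketch, which chains biquad sections in Direct Form I. The section coefficients here are arbitrary placeholders for illustration only, not parameters fitted to the target frequency response of an estimated SP filter.

```python
# Illustrative cascade of second-order (biquad) sections in Direct Form I.
# Coefficients below are placeholders, not fitted SP filter parameters.

class Biquad:
    """One second-order section:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""

    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2 = b0, b1, b2
        self.a1, self.a2 = a1, a2
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x1, self.x2 = x, self.x1
        self.y1, self.y2 = y, self.y1
        return y


class CascadedSPFilter:
    """Fixed SP filter realized as cascaded second-order sections."""

    def __init__(self, sections):
        self.sections = [Biquad(*coeffs) for coeffs in sections]

    def process(self, samples):
        out = []
        for x in samples:
            for section in self.sections:
                x = section.process(x)  # each section feeds the next
            out.append(x)
        return out


# Two placeholder sections whose gains multiply to unity (0.5 * 2.0),
# so an impulse passes through the cascade unchanged.
sp = CascadedSPFilter([(0.5, 0.0, 0.0, 0.0, 0.0),
                       (2.0, 0.0, 0.0, 0.0, 0.0)])
filtered = sp.process([1.0, 0.0, 0.0])
```

Realizing the SP filter as a cascade of second-order sections rather than one high-order filter keeps each stage numerically well-conditioned, which is why the frequency division requirement can be mapped onto per-section parameters.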
  • an adaptive SP filter may be obtained by using the DSP chip, the audio source is processed by using the adaptive SP filter to obtain a filtered signal, and the noise cancellation or hear through processing is performed on the filtered signal to obtain the speaker drive signal.
  • the DSP chip may obtain a real-time noise signal, obtain an estimated SP filter based on the audio source and the real-time noise signal; and determine the estimated SP filter as the adaptive SP filter when a difference signal between a signal obtained through the estimated SP filter and the real-time noise signal is within a specified range.
  • the DSP chip may first obtain an external signal picked up by the FF microphone and an ear canal signal picked up by the FB microphone, where the external signal includes an external noise signal and music or call voice, and the ear canal signal includes a residual noise signal inside an ear canal and music or call voice; then obtain a voice signal picked up by the main microphone; and finally subtract the external signal and the ear canal signal from the voice signal to obtain the real-time noise signal.
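The adaptive SP estimation described above can be sketched as a simple LMS-style loop: the estimated filter is driven by the audio source, its output is compared against the real-time noise signal, and the filter is accepted once the difference signal stays within a specified range. The tap count, step size, and tolerance below are illustrative assumptions, not values from this application.

```python
# LMS-style sketch of adaptive SP estimation. Tap count, step size (mu),
# and tolerance (tol) are illustrative assumptions.

def estimate_sp_filter(source, target, taps=4, mu=0.1, tol=1e-3, max_epochs=200):
    w = [0.0] * taps  # estimated SP filter coefficients
    for _ in range(max_epochs):
        max_err = 0.0
        for n in range(taps - 1, len(source)):
            # Most recent samples first: x[0] = source[n], x[1] = source[n-1], ...
            x = source[n - taps + 1:n + 1][::-1]
            y = sum(wi * xi for wi, xi in zip(w, x))  # signal through estimated SP
            e = target[n] - y                         # difference signal
            max_err = max(max_err, abs(e))
            for i in range(taps):
                w[i] += mu * e * x[i]                 # LMS coefficient update
        if max_err < tol:  # difference signal within the specified range:
            break          # take this as the adaptive SP filter
    return w


# Toy check: the "secondary path" is a pure one-sample delay, so the
# estimated filter should converge to roughly [0, 1, 0, 0].
src = [1.0, 0.0, 0.0] * 40
noise = [0.0] + src[:-1]  # source delayed by one sample
w = estimate_sp_filter(src, noise)
```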
  • Step 1003: Divide the speaker drive signal into sub-audio signals of at least two frequency bands.
  • a frequency band of the processed speaker drive signal corresponds to a frequency band of the audio source, and may include low, middle, and high frequency bands. However, because a main operating frequency band of a single speaker may cover only some of the low, middle, and high frequency bands, the single speaker cannot provide high sound quality in all the frequency bands.
  • the frequency divider of this application may be configured to perform frequency division on the speaker drive signal based on the main operating frequency bands of the at least two speakers to obtain the sub-audio signals of the at least two frequency bands respectively corresponding to the main operating frequency bands of the at least two speakers. Then, each speaker plays a sub-audio signal in a corresponding frequency band, so that the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker. Adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap. For example, the frequency divider divides the speaker drive signal into two frequency bands: a high frequency band and a low frequency band, which are completely separate without overlapping, or partially overlap.
  • the frequency divider divides the speaker drive signal into three frequency bands: a high frequency band, a middle frequency band, and a low frequency band, where the high frequency band and the middle frequency band are completely separate without overlapping, and the middle frequency band and the low frequency band partially overlap; or the high frequency band, the middle frequency band, and the low frequency band are completely separate without overlapping; or the high frequency band and the middle frequency band partially overlap, and the middle frequency band and the low frequency band are completely separate without overlapping.
  • a parameter of the frequency divider 42 may be set to control the frequency divider 42 to perform frequency division on the speaker drive signal in a preset manner.
  • For details about frequency division, refer to FIG. 6a to FIG. 6d; details are not described herein again.
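As a minimal illustration of the frequency division described above, the sketch below splits a speaker drive signal into two complementary bands: a one-pole low-pass filter extracts the low band, and the residual forms the high band, so the two sub-audio signals sum back to the original drive signal. The filter order and the smoothing coefficient alpha are assumptions; a real frequency divider would use the crossover designs referenced in FIG. 6a to FIG. 6d.

```python
# Two-band frequency divider sketch: low band via a one-pole low-pass,
# high band as the complementary residual. Alpha is an assumed coefficient.

def split_two_bands(drive_signal, alpha=0.2):
    """Return (low_band, high_band) sub-audio signals."""
    low, high = [], []
    y = 0.0
    for x in drive_signal:
        y += alpha * (x - y)  # one-pole low-pass: follows slow variation
        low.append(y)         # routed to the low-frequency speaker
        high.append(x - y)    # residual, routed to the high-frequency speaker
    return low, high


# A constant drive signal: the low band rises toward it while the
# high band carries the transient.
low, high = split_two_bands([1.0, 1.0, 1.0, 1.0])
```

Because the high band is formed as the residual, each sample of low + high reconstructs the drive signal, so no energy is lost between the two speakers even where the bands overlap.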
  • Step 1004: Play the sub-audio signals of the at least two frequency bands respectively through the at least two speakers.
  • the at least two speakers are disposed in the TWS earphone of this application, and the main operating frequency bands of the at least two speakers are not exactly the same.
  • the frequency divider may divide the speaker drive signal into sub-audio signals of at least two frequency bands. Adjacent frequency bands in the at least two frequency bands partially overlap or do not overlap. In this case, each sub-audio signal is transmitted to a speaker with a matching frequency band.
  • the matching frequency band may mean that a main operating frequency band of the speaker covers a frequency band of the sub-audio signal transmitted to the speaker. In this way, the speaker maintains an optimal frequency response when playing the sub-audio signal transmitted to the speaker, which can provide a high-quality audio source in various frequency bands, and can also support ultra-wideband audio calls.
  • FIG. 11 is a diagram of an example structure of a play apparatus of a TWS earphone according to this application. As shown in FIG. 11 , the apparatus 1100 in this embodiment may be used in the TWS earphone in the foregoing embodiments.
  • the apparatus 1100 includes: an obtaining module 1101, a processing module 1102, a frequency division module 1103, and a play module 1104.
  • the obtaining module 1101 is configured to obtain an audio source, where the audio source is original music or call voice, or the audio source includes a voice signal obtained through voice enhancement processing and the original music or call voice.
  • the processing module 1102 is configured to perform noise cancellation or hear through processing on the audio source to obtain a speaker drive signal.
  • the frequency division module 1103 is configured to divide the speaker drive signal into sub-audio signals of at least two frequency bands, where adjacent frequency bands in the at least two frequency bands partially overlap, or adjacent frequency bands in the at least two frequency bands do not overlap.
  • the play module 1104 is configured to play the sub-audio signals of the at least two frequency bands respectively through at least two speakers.
  • the processing module 1102 is specifically configured to: obtain a fixed secondary path SP filter by using a coder-decoder CODEC; process the audio source by using the fixed SP filter to obtain a filtered signal; and perform the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the processing module 1102 is specifically configured to: obtain an estimated SP filter based on a preset speaker drive signal and an ear canal signal picked up by a feedback FB microphone, where the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice; and determine the estimated SP filter as the fixed SP filter when a difference signal between a signal obtained through the estimated SP filter and the ear canal signal is within a specified range.
  • the processing module 1102 is further configured to: obtain a parameter of a cascaded second-order filter based on a target frequency response of the estimated SP filter and a preset frequency division requirement when the difference signal between the signal obtained through the estimated SP filter and the ear canal signal is within the specified range; and obtain an SP cascaded second-order filter based on the parameter of the cascaded second-order filter, and use the SP cascaded second-order filter as the fixed SP filter.
  • the processing module 1102 is specifically configured to: obtain an adaptive SP filter by using a digital signal processing DSP chip; process the audio source by using the adaptive SP filter to obtain a filtered signal; and perform the noise cancellation or hear through processing on the filtered signal to obtain the speaker drive signal.
  • the processing module 1102 is specifically configured to: obtain a real-time noise signal; obtain an estimated SP filter based on the audio source and the real-time noise signal; and determine the estimated SP filter as the adaptive SP filter when a difference signal between a signal obtained through the estimated SP filter and the real-time noise signal is within a specified range.
  • the processing module 1102 is specifically configured to: obtain an external signal picked up by a feedforward FF microphone and an ear canal signal picked up by a feedback FB microphone, where the external signal includes an external noise signal and the music or call voice, and the ear canal signal includes a residual noise signal inside an ear canal and the music or call voice; obtain a voice signal picked up by a main microphone; subtract the external signal and the ear canal signal from the voice signal to obtain a signal difference; and obtain the estimated SP filter based on the audio source and the signal difference.
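The signal-difference step above amounts to a sample-wise subtraction of the FF-microphone external signal and the FB-microphone ear canal signal from the main-microphone voice signal. A minimal sketch, assuming the three signals are time-aligned and of equal length:

```python
# Sample-wise extraction of the signal difference described above.
# Time-aligned, equal-length signal lists are assumed for simplicity;
# a real implementation would also compensate for path delays.

def signal_difference(voice, external, ear_canal):
    return [v - e - c for v, e, c in zip(voice, external, ear_canal)]


diff = signal_difference([1.0, 0.5], [0.3, 0.1], [0.2, 0.1])
```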
  • main operating frequency bands of the at least two speakers are not exactly the same.
  • the at least two speakers include a moving-coil speaker and a moving-iron speaker.
  • the at least two speakers include a moving-coil speaker, a moving-iron speaker, a micro-electro-mechanical system MEMS speaker, and a planar vibrating diaphragm speaker.
  • the apparatus in this embodiment may be configured to perform the technical solution of the method embodiment shown in FIG. 10 .
  • the implementation principles and technical effects are similar, and are not described herein again.
  • the steps in the foregoing method embodiments may be implemented by using a hardware integrated logic circuit in a processor, or by using instructions in a form of software.
  • the processor may be a general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in embodiments of this application may be directly presented as being performed and completed by a hardware encoding processor, or performed and completed by a combination of hardware and a software module in an encoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in a memory, and the processor reads information in the memory and completes the steps in the foregoing method in combination with hardware of the processor.
  • the memory in the foregoing embodiments may be a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory.
  • the non-volatile memory may be a read-only memory (read-only memory, ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (random access memory, RAM) and is used as an external cache.
  • RAMs in many forms may be used, such as a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and a direct rambus random access memory (direct rambus RAM, DR RAM).
  • the disclosed system, apparatus, and method may be implemented in another manner.
  • the described apparatus embodiment is merely an example.
  • division into the units is merely logical function division and may be other division during actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or another form.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions in this application essentially, or the part contributing to the conventional technology, or some of the technical solutions may be implemented in a form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for instructing a computer device (a personal computer, a server, a network device, or the like) to perform all or some of the steps of the method in embodiments of this application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Otolaryngology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
EP22794455.0A 2021-04-28 2022-03-28 TWS earphone and play method and apparatus for a TWS earphone Pending EP4297428A4 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110467311.9A CN115250397A (zh) 2021-04-28 TWS earphone and play method and apparatus for TWS earphone
PCT/CN2022/083464 WO2022227982A1 (zh) 2022-03-28 TWS earphone and play method and apparatus for TWS earphone

Publications (2)

Publication Number Publication Date
EP4297428A1 true EP4297428A1 (de) 2023-12-27
EP4297428A4 EP4297428A4 (de) 2024-08-21

Family

ID=83697287

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22794455.0A Pending EP4297428A4 (de) TWS earphone and play method and apparatus for a TWS earphone

Country Status (3)

Country Link
EP (1) EP4297428A4 (de)
CN (1) CN115250397A (de)
WO (1) WO2022227982A1 (de)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118679757A (zh) * 2022-11-01 2024-09-20 深圳市韶音科技有限公司 Audio processing device and method
CN116156385B (zh) * 2023-04-19 2023-07-07 深圳市汇顶科技股份有限公司 Filtering method, filtering apparatus, chip, and earphone

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2006114941A1 (ja) * 2005-04-25 2008-12-18 新潟精密株式会社 Clock generation circuit and audio system
US9253570B2 (en) * 2012-03-15 2016-02-02 Jerry Harvey Crossover based canalphone system
US9786261B2 (en) * 2014-12-15 2017-10-10 Honeywell International Inc. Active noise reduction earcup with speaker array
CN208940178U (zh) * 2018-11-06 2019-06-04 广东思派康电子科技有限公司 Dual-microphone TWS earphone
CN208850008U (zh) * 2018-11-16 2019-05-10 深圳市星科启电子商务有限公司 Multi-vibration-unit TWS earphone with built-in frequency division circuit
CN110418233A (zh) * 2019-07-26 2019-11-05 歌尔股份有限公司 Earphone noise reduction method and apparatus, earphone, and readable storage medium
CN110401901A (zh) * 2019-08-22 2019-11-01 杭州聚声科技有限公司 Parametric array sound generation system

Also Published As

Publication number Publication date
CN115250397A (zh) 2022-10-28
WO2022227982A1 (zh) 2022-11-03
EP4297428A4 (de) 2024-08-21

Similar Documents

Publication Publication Date Title
JP7008806B2 (ja) Parallel active noise reduction (ANR) and hear-through signal paths for acoustic devices
EP3189672B1 (de) Control of surround sound loudness
CN106878895B (zh) Hearing device comprising an improved feedback cancellation system
US9807503B1 (en) Systems and methods for use of adaptive secondary path estimate to control equalization in an audio device
CN105074814B (zh) Low-latency multi-driver adaptive noise cancellation (ANC) system for a personal audio device
EP4297428A1 (de) TWS earphone and play method and apparatus for a TWS earphone
JP6964581B2 (ja) Feedback adaptive noise cancellation (ANC) controller and method having a feedback response partially provided by a fixed-response filter
US9437182B2 (en) Active noise reduction method using perceptual masking
US20130129105A1 (en) Audio headset with active noise control of the non-adaptive type for listening to an audio music source and/or for "hands-free" telephony functions
CN113068091B (zh) Earphone that cancels acoustic noise from a haptic vibration driver
JP2016218456A (ja) Noise-canceling audio reproduction
GB2459758A (en) Noise reduction filter circuit with optimal filter property selecting unit
WO2021017136A1 (zh) Earphone noise reduction method and apparatus, earphone, and readable storage medium
CN114787911A (zh) Noise cancellation system and signal processing method for an ear-worn playback device
CN114080638A (zh) Gain adjustment in an ANR system with multiple feedforward microphones
JP6197930B2 (ja) Ear-canal-mounted sound pickup device, signal processing device, and sound pickup method
JP2010157852A (ja) Acoustic correction device, acoustic measurement device, acoustic reproduction device, acoustic correction method, and acoustic measurement method
CN106358108A (zh) Compensation filter fitting system, acoustic compensation system, and method
EP3977443B1 (de) Multi-purpose microphone in acoustic devices
CN111629313B (zh) Hearing device comprising a loop gain limiter
KR100952400B1 (ko) Method for removing unwanted loudspeaker signals
US11335315B2 (en) Wearable electronic device with low frequency noise reduction
EP3486896B1 (de) Noise cancellation system and signal processing method
JP6100562B2 (ja) Hearing aid and occlusion sound suppression device
JP5228647B2 (ja) Noise canceling system, noise canceling signal forming method, and noise canceling signal forming program

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230920

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20240722

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 1/24 20060101ALN20240716BHEP

Ipc: H04R 5/033 20060101ALN20240716BHEP

Ipc: G10K 11/178 20060101ALI20240716BHEP

Ipc: H04R 3/14 20060101ALI20240716BHEP

Ipc: H04R 1/10 20060101AFI20240716BHEP